From jsabater at gmail.com Sat May 4 10:08:59 2024 From: jsabater at gmail.com (Jaume Sabater) Date: Sat, 4 May 2024 12:08:59 +0200 Subject: Feature request: health checks and active Message-ID: Dear Maxim, First of all, thanks for taking the step forward to create `freenginx`. I wish you all the best. Following up on this thread from March 2024 [1], I'd like to request/suggest the active health checks [2] [3] feature built into `freenginx`: 1. Special requests are regularly sent to application endpoints to make sure they are responding correctly. 2. They enable continuous monitoring of the health of specific application components and processes. 3. They enable fine grained control of the traffic to improve deploy processes. Thanks and keep up the good work. [1] http://freenginx.org/pipermail/nginx/2024-March/000061.html [2] https://www.nginx.com/blog/active-or-passive-health-checks-which-is-right-for-you/ [3] https://github.com/zhouchangxun/ngx_healthcheck_module -- Jaume Sabater "Ubi sapientas ibi libertas" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat May 4 17:10:31 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 May 2024 20:10:31 +0300 Subject: Feature request: health checks and active In-Reply-To: References: Message-ID: Hello! On Sat, May 04, 2024 at 12:08:59PM +0200, Jaume Sabater wrote: > Dear Maxim, > > First of all, thanks for taking the step forward to create `freenginx`. I > wish you all the best. > > Following up on this thread from March 2024 [1], I'd like to > request/suggest the active health checks [2] [3] feature built into > `freenginx`: > > 1. Special requests are regularly sent to application endpoints to make > sure they are responding correctly. > 2. They enable continuous monitoring of the health of specific application > components and processes. > 3. They enable fine grained control of the traffic to improve deploy > processes. 
> > Thanks and keep up the good work. Thanks for the suggestion. I'm personally not a big fan of active health checks, and rather think that checking actual upstream responses is a much better and cleaner way to monitor upstream server health. And the actual implementations I've seen so far, including the one available in NGINX Plus, are quite far from reaching the code quality and usability I would expect in [free]nginx. Still, it is understood that in some use cases active checks can be useful. I'll take a closer look to see how things can be improved here. -- Maxim Dounin http://mdounin.ru/ From srebecchi at kameleoon.com Sat May 4 17:31:43 2024 From: srebecchi at kameleoon.com (Sébastien Rebecchi) Date: Sat, 4 May 2024 19:31:43 +0200 Subject: Status code 0 Message-ID: Hello What does it mean when nginx returns an HTTP status code 0? We see that because we monitor nginx response status codes, and nginx is used as a front for our apps. The apps themselves cannot return 0, only 200 or 500. So it seems the issue here comes from nginx itself, which cannot process the connection under high peaks of load, but why 0, is that expected? Thanks From osa at freebsd.org.ru Sat May 4 17:42:39 2024 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sat, 4 May 2024 20:42:39 +0300 Subject: Status code 0 In-Reply-To: References: Message-ID: Hi Sébastien, thanks for the report. On Sat, May 04, 2024 at 07:31:43PM +0200, Sébastien Rebecchi wrote: > > What does it mean when nginx returns an HTTP status code 0? > > We see that because we monitor nginx response status codes, and nginx is used as > a front for our apps. The apps themselves cannot return 0, only 200 or 500. > So it seems the issue here comes from nginx itself, which cannot process the > connection under high peaks of load, but why 0, is that expected? 
Could you please provide more details about the case: - configuration file - a test case that reproduces the issue - additional details Thank you. -- Sergey A. Osokin From mdounin at mdounin.ru Sat May 4 18:47:14 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 4 May 2024 21:47:14 +0300 Subject: Status code 0 In-Reply-To: References: Message-ID: Hello! On Sat, May 04, 2024 at 07:31:43PM +0200, Sébastien Rebecchi wrote: > Hello > > What does it mean when nginx returns an HTTP status code 0? > > We see that because we monitor nginx response status codes, and nginx is used as > a front for our apps. The apps themselves cannot return 0, only 200 or 500. > So it seems the issue here comes from nginx itself, which cannot process the > connection under high peaks of load, but why 0, is that expected? Status code 0 as seen in nginx http access logs means that nginx wasn't able to generate any reasonable status code, even some generic failure like 500 (Internal Server Error), yet the request was closed. This usually happens due to some fatal issues, like unexpected conditions (which might indicate a bug somewhere), unexpected errors, or memory allocation errors if it wasn't possible to return 500. In most cases there should be additional details in the error log explaining the reasons. If there aren't any, or the reasons aren't clear, it might be a good idea to dig further. -- Maxim Dounin http://mdounin.ru/ From srebecchi at kameleoon.com Mon May 6 09:33:22 2024 From: srebecchi at kameleoon.com (Sébastien Rebecchi) Date: Mon, 6 May 2024 11:33:22 +0200 Subject: Status code 0 In-Reply-To: References: Message-ID: Hello! There is nothing regarding this issue in the nginx logs. Now I think the issue is not with nginx itself, but in front of nginx, with Linux itself. We monitor using curl, and it seems that curl can print status code 0 when it cannot establish a connection with the server. I think the Linux kernel is not configured properly to handle our connection peaks. 
Looking at net.core.somaxconn, we have the default of 128. More generally, I will deep dive into this and possibly other kernel settings to see if optimizing them will solve the problem. For information, on each of our servers we have an average of around 80K simultaneous TCP connections globally and around 35K in state ESTABLISHED (according to netstat/ss -nta), and more during high peaks. I saw this doc section, which is a good starting point: https://www.nginx.com/blog/tuning-nginx/#Tuning-Your-Linux-Configuration If you have any advice on other settings to increase, this would be very appreciated. I will keep you in touch about my investigations, to confirm at least that there is no bug on nginx side, which I am quite confident about now. Thank you very much for your help! On Sat, May 4, 2024 at 8:47 PM, Maxim Dounin wrote: > Hello! > > On Sat, May 04, 2024 at 07:31:43PM +0200, Sébastien Rebecchi wrote: > > > Hello > > > > What does it mean when nginx returns an HTTP status code 0? > > > > We see that because we monitor nginx response status codes, and nginx is used as > > a front for our apps. The apps themselves cannot return 0, only 200 or > 500. > > So it seems the issue here comes from nginx itself, which cannot process the > > connection under high peaks of load, but why 0, is that expected? > > Status code 0 as seen in nginx http access logs means that nginx > wasn't able to generate any reasonable status code, even some > generic failure like 500 (Internal Server Error), yet the request > was closed. > > This usually happens due to some fatal issues, like unexpected > conditions (which might indicate a bug somewhere), unexpected > errors, or memory allocation errors if it wasn't possible to > return 500. > > In most cases there should be additional details in the error log > explaining the reasons. If there aren't any, or the reasons aren't > clear, it might be a good idea to dig further. 
> > -- > Maxim Dounin > http://mdounin.ru/ > -- > nginx mailing list > nginx at freenginx.org > https://freenginx.org/mailman/listinfo/nginx > From chris at cretaforce.gr Thu May 9 17:11:26 2024 From: chris at cretaforce.gr (Christos Chatzaras) Date: Thu, 9 May 2024 20:11:26 +0300 Subject: Bypass cache if PHPSESSID exists Message-ID: <91C700C5-CF21-424B-923F-3DABA4C96961@cretaforce.gr> Hello, I want to bypass cache if PHPSESSID exists. I have this configuration: http { fastcgi_cache_path /tmpfs/cache levels=1:2 keys_zone=fastcgicache:10m inactive=10m max_size=1024m; fastcgi_cache_key $device_type$scheme$request_method$host$request_uri; fastcgi_cache_min_uses 1; fastcgi_cache fastcgicache; fastcgi_cache_valid 200 301 10s; fastcgi_cache_valid 302 1m; fastcgi_cache_valid 404 5m; fastcgi_cache_lock on; fastcgi_cache_lock_timeout 8000; fastcgi_pass_header Set-Cookie; fastcgi_pass_header Cookie; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; fastcgi_no_cache $no_cache; fastcgi_cache_bypass $no_cache; } server { location ~ [^/]\.php(/|$) { set $no_cache ""; if ($request_method = POST) { set $no_cache "1"; } if ($http_cookie ~* "_mcnc|PHPSESSID") { set $no_cache "1"; } if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=31536000; Path=/"; } } } When I repeatedly run curl, the content is fetched from the cache, and the Set-Cookie header always contains "PHPSESSID=604e406c1c7a6ae061bf6ce3806d5eee", leading to session leakage: curl -I https://example.com HTTP/1.1 200 OK Server: nginx Date: Thu, 09 May 2024 16:37:15 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Vary: Accept-Encoding Set-Cookie: PHPSESSID=604e406c1c7a6ae061bf6ce3806d5eee; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate Pragma: no-cache X-Cache: HIT Any idea what's wrong with my configuration? 
Kind regards, Christos Chatzaras From mdounin at mdounin.ru Thu May 9 20:42:19 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 May 2024 23:42:19 +0300 Subject: Bypass cache if PHPSESSID exists In-Reply-To: <91C700C5-CF21-424B-923F-3DABA4C96961@cretaforce.gr> References: <91C700C5-CF21-424B-923F-3DABA4C96961@cretaforce.gr> Message-ID: Hello! On Thu, May 09, 2024 at 08:11:26PM +0300, Christos Chatzaras wrote: > Hello, > > I want to bypass cache if PHPSESSID exists. > > I have this configuration: > > http { > fastcgi_cache_path /tmpfs/cache levels=1:2 keys_zone=fastcgicache:10m inactive=10m max_size=1024m; > fastcgi_cache_key $device_type$scheme$request_method$host$request_uri; > fastcgi_cache_min_uses 1; > fastcgi_cache fastcgicache; > fastcgi_cache_valid 200 301 10s; > fastcgi_cache_valid 302 1m; > fastcgi_cache_valid 404 5m; > fastcgi_cache_lock on; > fastcgi_cache_lock_timeout 8000; > fastcgi_pass_header Set-Cookie; > fastcgi_pass_header Cookie; > fastcgi_ignore_headers Cache-Control Expires Set-Cookie; Note that you ignore Set-Cookie here, so responses with the Set-Cookie response headers from the upstream server are expected to be cached. > fastcgi_no_cache $no_cache; > fastcgi_cache_bypass $no_cache; [...] > if ($http_cookie ~* "_mcnc|PHPSESSID") { > set $no_cache "1"; > } And the $no_cache variable is set based on the Cookie request header, not the upstream server response headers. [...] 
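For reference, by default the fastcgi cache does not store a response that carries a Set-Cookie header at all; it is the fastcgi_ignore_headers directive that overrides this. A minimal illustrative sketch relying on that default (the upstream socket path and the cache key are examples, not taken from your configuration):

```nginx
http {
    fastcgi_cache_path  /tmpfs/cache  levels=1:2  keys_zone=fastcgicache:10m
                        inactive=10m  max_size=1024m;

    server {
        location ~ [^/]\.php(/|$) {
            fastcgi_cache        fastcgicache;
            fastcgi_cache_key    $scheme$request_method$host$request_uri;
            fastcgi_cache_valid  200 301 10s;

            # No "fastcgi_ignore_headers ... Set-Cookie;" here: a response
            # that sets PHPSESSID is simply not stored in the cache, so it
            # cannot be replayed to other clients.
            fastcgi_pass  unix:/run/php-fpm.sock;  # example upstream
        }
    }
}
```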
> When I repeatedly run curl, the content is fetched from the > cache, and the Set-Cookie header always contains > "PHPSESSID=604e406c1c7a6ae061bf6ce3806d5eee", leading to session > leakage: > > curl -I https://example.com > HTTP/1.1 200 OK > Server: nginx > Date: Thu, 09 May 2024 16:37:15 GMT > Content-Type: text/html; charset=UTF-8 > Connection: keep-alive > Vary: Accept-Encoding > Set-Cookie: PHPSESSID=604e406c1c7a6ae061bf6ce3806d5eee; path=/ > Expires: Thu, 19 Nov 1981 08:52:00 GMT > Cache-Control: no-store, no-cache, must-revalidate > Pragma: no-cache > X-Cache: HIT > > Any idea what's wrong with my configuration? Your configuration explicitly permits caching of such responses due to the "fastcgi_ignore_headers" directive you use. Consider removing it. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue May 14 15:03:19 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 May 2024 18:03:19 +0300 Subject: freenginx-1.27.0 Message-ID: Changes with freenginx 1.27.0 14 May 2024 *) Feature: updated descriptions of HTTP status codes. Thanks to Michiel W. Beijen. *) Change: now, if an error occurs during reading a request body, the request body is automatically discarded, and for complex error processing, such as proxying, it is no longer needed to explicitly disable passing of the request body to the proxied server. *) Change: the logging level of the "SSL alert number N" and "invalid alert" SSL errors has been lowered from "crit" to "info". *) Change: now freenginx always returns an error if a header name is not followed by a colon. Thanks to Maksim Yevmenkin. *) Feature: the "off" parameter of the "pid" directive. *) Feature: now during reconfiguration no attempt to recreate the PID file is made if the name in the "pid" directive was changed, but points to the same file via symlinks. *) Workaround: "PID file ... not readable (yet?) after start" and "Failed to parse PID from file..." errors might appear when starting with systemd. 
*) Bugfix: no error was written to the error log when a timeout occurred during reading a request body. *) Bugfix: redirecting errors with code 413 with the "error_page" directive worked incorrectly when using HTTP/2 and HTTP/3. *) Bugfix: freenginx could not be built on NetBSD 10.0. *) Bugfix: in HTTP/3. -- Maxim Dounin http://freenginx.org/ From srebecchi at kameleoon.com Fri May 17 08:40:10 2024 From: srebecchi at kameleoon.com (Sébastien Rebecchi) Date: Fri, 17 May 2024 10:40:10 +0200 Subject: Status code 0 In-Reply-To: References: Message-ID: Hello Just to close that conversation: it seems this was an error by our devops in charge of alerting, who was using curl incorrectly. Best regards, Sébastien On Mon, May 6, 2024 at 11:33 AM, Sébastien Rebecchi wrote: > Hello! > > There is nothing regarding this issue in the nginx logs. > > Now I think the issue is not with nginx itself, but in front of nginx, > with Linux itself. > We monitor using curl, and it seems that curl can print status code 0 when > it cannot establish a connection with the server. > I think the Linux kernel is not configured properly to handle our > connection peaks. Looking at net.core.somaxconn, we have the default of > 128. More generally, I will deep dive into this and possibly other kernel > settings to see if optimizing them will solve the problem. For information, > on each of our servers we have an average of around 80K simultaneous TCP > connections globally and around 35K in state ESTABLISHED (according to > netstat/ss -nta), and more during high peaks. > I saw this doc section, which is a good starting point: > https://www.nginx.com/blog/tuning-nginx/#Tuning-Your-Linux-Configuration > If you have any advice on other settings to increase, this would be very > appreciated. > > I will keep you in touch about my investigations, to confirm at least that > there is no bug on nginx side, which I am quite confident about now. > > Thank you very much for your help! > > On Sat, May 4, 2024 at 8:47 PM, Maxim Dounin wrote: > >> Hello! >> >> On Sat, May 04, 2024 at 07:31:43PM +0200, Sébastien Rebecchi wrote: >> >> > Hello >> > >> > What does it mean when nginx returns an HTTP status code 0? >> > >> > We see that because we monitor nginx response status codes, and nginx is used >> > as a front for our apps. The apps themselves cannot return 0, only 200 or >> > 500. So it seems the issue here comes from nginx itself, which cannot process >> > the connection under high peaks of load, but why 0, is that expected? >> >> Status code 0 as seen in nginx http access logs means that nginx >> wasn't able to generate any reasonable status code, even some >> generic failure like 500 (Internal Server Error), yet the request >> was closed. >> >> This usually happens due to some fatal issues, like unexpected >> conditions (which might indicate a bug somewhere), unexpected >> errors, or memory allocation errors if it wasn't possible to >> return 500. >> >> In most cases there should be additional details in the error log >> explaining the reasons. If there aren't any, or the reasons aren't >> clear, it might be a good idea to dig further. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> -- >> nginx mailing list >> nginx at freenginx.org >> https://freenginx.org/mailman/listinfo/nginx >> > From cello86 at gmail.com Thu May 23 08:49:13 2024 From: cello86 at gmail.com (Marcello Lorenzi) Date: Thu, 23 May 2024 10:49:13 +0200 Subject: nginx ngx_http_limit_req_module and PROXY PROTOCOL Message-ID: Hi All, we have configured a reverse proxy behind an haproxy load balancer and we used PROXY PROTOCOL to forward the real IP to the backends. All worked fine, but when we enabled the ngx_http_limit_req_module and based our limit_req_zone rule on $binary_remote_addr, we noticed that all requests received from the haproxy server were blocked. 
Do we have to use the $proxy_remote_addr variable to avoid this issue? We tried to implement the variable but the block didn't work. Thanks, Marcello From kirill at korins.ky Thu May 23 09:52:13 2024 From: kirill at korins.ky (Kirill A. Korinsky) Date: Thu, 23 May 2024 10:52:13 +0100 Subject: [PATCH] Add $upstream_cache_key Message-ID: Greetings, Here is a patch that exposes the constructed cache key as $upstream_cache_key variable. Sometimes it's quite useful when debugging complicated setups and fighting some typos. index 2ce9f2114..561108681 100644 --- src/http/ngx_http_upstream.c +++ src/http/ngx_http_upstream.c @@ -23,6 +23,8 @@ static ngx_int_t ngx_http_upstream_cache_check_range(ngx_http_request_t *r, ngx_http_upstream_t *u); static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_upstream_cache_key(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_upstream_cache_etag(ngx_http_request_t *r, @@ -414,6 +416,10 @@ static ngx_http_variable_t ngx_http_upstream_vars[] = { ngx_http_upstream_cache_status, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_cache_key"), NULL, + ngx_http_upstream_cache_key, 0, + NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_cache_last_modified"), NULL, ngx_http_upstream_cache_last_modified, 0, NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, @@ -5990,6 +5996,47 @@ ngx_http_upstream_cache_status(ngx_http_request_t *r, } + +static ngx_int_t +ngx_http_upstream_cache_key(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + size_t len; + ngx_uint_t i; + ngx_str_t *key; + ngx_http_cache_t *c; + + if (r->cache == NULL || r->cache->keys.nelts == 0) { + v->not_found = 
1; + return NGX_OK; + } + + c = r->cache; + + key = c->keys.elts; + len = 0; + for (i = 0; i < c->keys.nelts; i++) { + len += key[i].len; + } + + v->data = ngx_pcalloc(r->pool, len); + if (v->data == 0) { + return NGX_ERROR; + } + + v->len = 0; + for (i = 0; i < c->keys.nelts; i++) { + memcpy(v->data + v->len, key[i].data, key[i].len); + v->len += key[i].len; + } + + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + + return NGX_OK; +} + + static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) -- wbr, Kirill From mdounin at mdounin.ru Thu May 23 12:25:16 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 May 2024 15:25:16 +0300 Subject: nginx ngx_http_limit_req_module and PROXY PROTOCOL In-Reply-To: References: Message-ID: Hello! On Thu, May 23, 2024 at 10:49:13AM +0200, Marcello Lorenzi wrote: > Hi All, > we have configured a reverse proxy behind an haproxy load balancer and we > used PROXY PROTOCOL to forward the real IP to the backends. All worked fine, > but when we enabled the ngx_http_limit_req_module and based our > limit_req_zone rule on $binary_remote_addr, we noticed that all requests > received from the haproxy server were blocked. > > Do we have to use the $proxy_remote_addr variable to avoid this issue? We > tried to implement the variable but the block didn't work. If you are running limit_req behind a load balancer, there are two basic options: 1. Configure set_real_ip_from/real_ip_header (http://freenginx.org/r/set_real_ip_from), so the client address as seen by [free]nginx will be set to the one obtained from the load balancer, including the $remote_addr and $binary_remote_addr variables. 2. Use an appropriate variable with the client address, such as $proxy_protocol_addr (http://freenginx.org/r/$proxy_protocol_addr), directly in the limit_req_zone configuration. Both variants should work fine as long as configured correctly. 
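As an illustrative sketch of both options (the 10.0.0.5 haproxy address, zone name, and rate are hypothetical, not taken from your setup):

```nginx
# Option 1: restore the client address from the PROXY protocol header,
# so that $binary_remote_addr refers to the real client, not to haproxy.
http {
    limit_req_zone  $binary_remote_addr  zone=per_client:10m  rate=10r/s;

    server {
        listen  80  proxy_protocol;

        set_real_ip_from  10.0.0.5;       # hypothetical haproxy address
        real_ip_header    proxy_protocol;

        location / {
            limit_req  zone=per_client  burst=20;
        }
    }
}

# Option 2: leave $remote_addr untouched and key the zone directly
# on the address passed in the PROXY protocol header.
http {
    limit_req_zone  $proxy_protocol_addr  zone=per_client:10m  rate=10r/s;
}
```

In both cases haproxy has to actually send the PROXY protocol header (send-proxy on its server lines), and the listen directive needs the proxy_protocol parameter.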
Note though that limit_req by default delays excessive requests, and to get an error you'll have to use a client which is able to do multiple parallel requests. Testing "limit_req ... nodelay;" could be easier. If you are having trouble configuring things, consider sharing your configuration. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu May 23 16:59:17 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 May 2024 19:59:17 +0300 Subject: [PATCH] Add $upstream_cache_key In-Reply-To: References: Message-ID: Hello! On Thu, May 23, 2024 at 10:52:13AM +0100, Kirill A. Korinsky wrote: > Greetings, > > Here is a patch that exposes the constructed cache key as > $upstream_cache_key variable. > > Sometimes it's quite useful when debugging complicated setups and fighting > some typos. Thanks for the patch. I have no objections in general, see below for some minor comments. > > index 2ce9f2114..561108681 100644 > --- src/http/ngx_http_upstream.c > +++ src/http/ngx_http_upstream.c > @@ -23,6 +23,8 @@ static ngx_int_t ngx_http_upstream_cache_check_range(ngx_http_request_t *r, > ngx_http_upstream_t *u); > static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data); > +static ngx_int_t ngx_http_upstream_cache_key(ngx_http_request_t *r, > + ngx_http_variable_value_t *v, uintptr_t data); > static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data); > static ngx_int_t ngx_http_upstream_cache_etag(ngx_http_request_t *r, > @@ -414,6 +416,10 @@ static ngx_http_variable_t ngx_http_upstream_vars[] = { > ngx_http_upstream_cache_status, 0, > NGX_HTTP_VAR_NOCACHEABLE, 0 }, > > + { ngx_string("upstream_cache_key"), NULL, > + ngx_http_upstream_cache_key, 0, > + NGX_HTTP_VAR_NOCACHEABLE, 0 }, > + > { ngx_string("upstream_cache_last_modified"), NULL, > ngx_http_upstream_cache_last_modified, 0, > 
NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, > @@ -5990,6 +5996,47 @@ ngx_http_upstream_cache_status(ngx_http_request_t *r, > } > > > +static ngx_int_t > +ngx_http_upstream_cache_key(ngx_http_request_t *r, > + ngx_http_variable_value_t *v, uintptr_t data) > +{ > + size_t len; > + ngx_uint_t i; > + ngx_str_t *key; Nitpicking: wrong variables order, should be sorted by type length. > + ngx_http_cache_t *c; > + > + if (r->cache == NULL || r->cache->keys.nelts == 0) { > + v->not_found = 1; > + return NGX_OK; > + } > + > + c = r->cache; > + > + key = c->keys.elts; > + len = 0; > + for (i = 0; i < c->keys.nelts; i++) { > + len += key[i].len; > + } > + > + v->data = ngx_pcalloc(r->pool, len); There is no need to use ngx_pcalloc() here: the memory is anyway to be written to, so just normal ngx_palloc(), without "c", would be enough. Further, given the result is a string, no alignment is needed, so ngx_pnalloc() would be appropriate. > + if (v->data == 0) { In [free]nginx code, NULL is used in pointer contexts. > + return NGX_ERROR; > + } > + > + v->len = 0; > + for (i = 0; i < c->keys.nelts; i++) { > + memcpy(v->data + v->len, key[i].data, key[i].len); > + v->len += key[i].len; Usual approach for such loops is doing something like p = ngx_cpymem(p, key[i].data, key[i].len); This avoids an extra addition per iteration, and generally easier to read. (Also, using memcpy() directly is discouraged, it should be either ngx_memcpy() or ngx_cpymem().) > + } > + > + v->valid = 1; > + v->no_cacheable = 0; > + v->not_found = 0; > + > + return NGX_OK; > +} > + > + > static ngx_int_t > ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data) > Patch with the above comments incorporated, please take a look: # HG changeset patch # User Kirill A. 
Korinsky # Date 1716479312 -10800 # Thu May 23 18:48:32 2024 +0300 # Node ID bd920ccd6f1aa43fcb40507ba463e44548ca4676 # Parent 46ecad404a296042c0088e699f275a92758e5ab9 Upstream: $upstream_cache_key variable. diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -23,6 +23,8 @@ static ngx_int_t ngx_http_upstream_cache ngx_http_upstream_t *u); static ngx_int_t ngx_http_upstream_cache_status(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_upstream_cache_key(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_upstream_cache_etag(ngx_http_request_t *r, @@ -414,6 +416,10 @@ static ngx_http_variable_t ngx_http_ups ngx_http_upstream_cache_status, 0, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_cache_key"), NULL, + ngx_http_upstream_cache_key, 0, + NGX_HTTP_VAR_NOCACHEABLE, 0 }, + { ngx_string("upstream_cache_last_modified"), NULL, ngx_http_upstream_cache_last_modified, 0, NGX_HTTP_VAR_NOCACHEABLE|NGX_HTTP_VAR_NOHASH, 0 }, @@ -6004,6 +6010,49 @@ ngx_http_upstream_cache_status(ngx_http_ static ngx_int_t +ngx_http_upstream_cache_key(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + u_char *p; + size_t len; + ngx_str_t *key; + ngx_uint_t i; + ngx_http_cache_t *c; + + if (r->cache == NULL || r->cache->keys.nelts == 0) { + v->not_found = 1; + return NGX_OK; + } + + c = r->cache; + + len = 0; + key = c->keys.elts; + + for (i = 0; i < c->keys.nelts; i++) { + len += key[i].len; + } + + p = ngx_pnalloc(r->pool, len); + if (p == NULL) { + return NGX_ERROR; + } + + v->len = len; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = p; + + for (i = 0; i < c->keys.nelts; i++) { + p = ngx_cpymem(p, key[i].data, key[i].len); + } + 
+ return NGX_OK; +} + + +static ngx_int_t ngx_http_upstream_cache_last_modified(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { -- Maxim Dounin http://mdounin.ru/ From kirill at korins.ky Thu May 23 17:08:32 2024 From: kirill at korins.ky (Kirill A. Korinsky) Date: Thu, 23 May 2024 18:08:32 +0100 Subject: [PATCH] Add $upstream_cache_key In-Reply-To: References: Message-ID: Greetings, On Thu, 23 May 2024 17:59:17 +0100, Maxim Dounin wrote: > > Patch with the above comments incorporated, please take a look: > > # HG changeset patch > # User Kirill A. Korinsky > # Date 1716479312 -10800 > # Thu May 23 18:48:32 2024 +0300 > # Node ID bd920ccd6f1aa43fcb40507ba463e44548ca4676 > # Parent 46ecad404a296042c0088e699f275a92758e5ab9 > Upstream: $upstream_cache_key variable. > I'm absolutely fine with the new version, thanks! Do you mind if I re-send it to nginx with your changes? -- wbr, Kirill From mdounin at mdounin.ru Fri May 24 01:05:06 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 May 2024 04:05:06 +0300 Subject: [PATCH] Add $upstream_cache_key In-Reply-To: References: Message-ID: Hello! On Thu, May 23, 2024 at 06:08:32PM +0100, Kirill A. Korinsky wrote: > Greetings, > > On Thu, 23 May 2024 17:59:17 +0100, > Maxim Dounin wrote: > > > > Patch with the above comments incorporated, please take a look: > > > > # HG changeset patch > > # User Kirill A. Korinsky > > # Date 1716479312 -10800 > > # Thu May 23 18:48:32 2024 +0300 > > # Node ID bd920ccd6f1aa43fcb40507ba463e44548ca4676 > > # Parent 46ecad404a296042c0088e699f275a92758e5ab9 > > Upstream: $upstream_cache_key variable. > > > > I'm absolutely fine with the new version, thanks! Committed. Please check docs patch below. > Do you mind if I re-send it to nginx with your changes? I don't think my changes are copyrightable, but even if they are, all the code is under the BSD license, and hence can be merged into F5 NGINX. 
Patch to the documentation: # HG changeset patch # User Maxim Dounin # Date 1716511750 -10800 # Fri May 24 03:49:10 2024 +0300 # Node ID 0bbf14c9fd6692c95dcca75bd02e979b2f50f45b # Parent 3b5594157fab8ce6c65872b136aa9c599bc005b8 Documented $upstream_cache_key. diff --git a/xml/en/docs/http/ngx_http_upstream_module.xml b/xml/en/docs/http/ngx_http_upstream_module.xml --- a/xml/en/docs/http/ngx_http_upstream_module.xml +++ b/xml/en/docs/http/ngx_http_upstream_module.xml @@ -10,7 +10,7 @@ + rev="90">
@@ -574,6 +574,12 @@ are separated by commas and colons like $upstream_addr variable. +$upstream_cache_key + + +the cache key being used (1.27.1). + + $upstream_cache_status diff --git a/xml/ru/docs/http/ngx_http_upstream_module.xml b/xml/ru/docs/http/ngx_http_upstream_module.xml --- a/xml/ru/docs/http/ngx_http_upstream_module.xml +++ b/xml/ru/docs/http/ngx_http_upstream_module.xml @@ -10,7 +10,7 @@ + rev="90">
@@ -581,6 +581,12 @@ server { $upstream_addr. +$upstream_cache_key + + +используемый ключ кэширования (1.27.1). + + $upstream_cache_status -- Maxim Dounin http://mdounin.ru/