From donotspam at fastmail.fm Fri Mar 1 02:20:44 2024
From: donotspam at fastmail.fm (Gareth Evans)
Date: Fri, 01 Mar 2024 02:20:44 +0000
Subject: Packaging
Message-ID: <9e65a402-e3a3-476c-8a32-c258287dd42e@app.fastmail.com>

Hello,

Do you expect to be able to provide packages for the major formats,
Deb/RPM?

Thanks,
Gareth

From mdounin at mdounin.ru Fri Mar 1 02:39:40 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 1 Mar 2024 05:39:40 +0300
Subject: Packaging
In-Reply-To: <9e65a402-e3a3-476c-8a32-c258287dd42e@app.fastmail.com>
References: <9e65a402-e3a3-476c-8a32-c258287dd42e@app.fastmail.com>
Message-ID: 

Hello!

On Fri, Mar 01, 2024 at 02:20:44AM +0000, Gareth Evans wrote:

> Do you expect to be able to provide packages for the major formats,
> Deb/RPM?

As of now there are no plans to provide binary packages. Rather, I
would expect OS distributions to provide them. Thomas Ward is
currently working on Debian and Ubuntu, see
https://freenginx.org/pipermail/nginx-devel/2024-February/000049.html.

-- 
Maxim Dounin
http://mdounin.ru/

From teward at thomas-ward.net Fri Mar 1 02:47:40 2024
From: teward at thomas-ward.net (Thomas Ward)
Date: Fri, 1 Mar 2024 02:47:40 +0000
Subject: Packaging
In-Reply-To: 
References: <9e65a402-e3a3-476c-8a32-c258287dd42e@app.fastmail.com>
Message-ID: 

Debian-legal has already taken an interest in this, and the consensus
is to steer clear of your freenginx fork due to copyright and
trademark infringement risks. Because of that edict, I am unable to
get this into Debian.

I have already reached out to Canonical Legal for an assessment of
the concerns with regard to Ubuntu. If they rule similarly, then we
face a similar hurdle, which will prohibit us from using the Debian
or Ubuntu repositories to distribute freenginx.

Thomas

-----Original Message-----
From: nginx On Behalf Of Maxim Dounin
Sent: Thursday, February 29, 2024 21:40
To: free nginx mailing list
Subject: Re: Packaging

[quoted message snipped]

From mdounin at mdounin.ru Fri Mar 1 03:42:54 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 1 Mar 2024 06:42:54 +0300
Subject: Packaging
In-Reply-To: 
References: <9e65a402-e3a3-476c-8a32-c258287dd42e@app.fastmail.com>
Message-ID: 

Hello!

On Fri, Mar 01, 2024 at 02:47:40AM +0000, Thomas Ward via nginx wrote:

> [quoted text snipped]

Thanks for the update.

Providing at least source packages might be the way to go then.
-- 
Maxim Dounin
http://mdounin.ru/

From highclass99 at gmail.com Mon Mar 4 16:37:49 2024
From: highclass99 at gmail.com (highclass99)
Date: Tue, 5 Mar 2024 01:37:49 +0900
Subject: Some suggestions for freenginx project
Message-ID: 

Hello,

Some suggestions for the freenginx project.

I suggest the following modules be included in the default build of
freenginx. I am nearly sure the traffic stats features would be a
"wow" factor for many who do not know about these modules, compared
to nginx.

1. These modules would add many commonly used traffic stats features
from nginx plus:

https://github.com/vozlt/nginx-module-vts
https://github.com/vozlt/nginx-module-sts
https://github.com/vozlt/nginx-module-stream-sts

2. This would be a better version of the tengine sysguard module,
compatible with nginx:

https://github.com/vozlt/nginx-module-sysguard

I suggest the following be completely built into freenginx:

3. This would add an important feature similar to commercial nginx,
allowing DNS resolution for upstreams etc.:

https://github.com/eriede/nginx-upstream-dynamic-servers

4. Add a feature to limit and prevent excessive logging for nginx
rate limiting. The rate limiting feature in nginx has a major
problem in that you can only log all rate limits or none. I have
seen rate limiting of all requests cause so much IO due to DDoS
attacks that nginx stops responding, mainly because of the logging
IO.

5. Add a feature so that when rate limiting occurs, the IP is
blocked or redirected for a certain amount of time.

6. Add a feature for dynamic upstream support.

Thank you. Have a nice day.
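[Regarding point 3: stock nginx already has a partial workaround worth
noting here. When proxy_pass is given a variable, the hostname is
re-resolved through the configured resolver at the "valid" interval,
instead of only once at configuration load. A minimal sketch, inside
an http block; the resolver address and backend name are illustrative
assumptions, and note that upstream{} groups are bypassed this way:

resolver 127.0.0.53 valid=30s;   # assumption: a local caching resolver

server {
    listen 80;

    location / {
        # with a variable in proxy_pass, "backend.example.com" is
        # resolved at request time via the resolver above, and
        # re-resolved once the "valid" time expires
        set $backend "http://backend.example.com:8080";
        proxy_pass $backend;
    }
}
]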
From mdounin at mdounin.ru Tue Mar 5 16:28:30 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 5 Mar 2024 19:28:30 +0300
Subject: Some suggestions for freenginx project
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Mar 05, 2024 at 01:37:49AM +0900, highclass99 wrote:

> [suggestions quoted in full, snipped]

Thanks for the suggestions, appreciated.

I'm somewhat sceptical about the idea of importing 3rd party modules,
though the suggested modules are certainly a good indicator of
various areas to improve.

In particular, the stub_status module is somewhat outdated and needs
improvements, much like the degradation module, which has been
undocumented and mostly non-functional for a long time now.

As for the logging improvements mentioned in (4), I've recently
submitted a patch series which introduces logging moderation when
logging errors during syslog logging:

https://freenginx.org/pipermail/nginx-devel/2024-March/000069.html

Introducing logging moderation in various other areas certainly might
make sense as well. Further, it might be a good idea to add something
more general. Right now I'm thinking about something like "bump the
logging level if more than N messages per second are seen" or similar
(a leaky bucket for each logging level?), so it would adapt
dynamically to various logging rates.

-- 
Maxim Dounin
http://mdounin.ru/
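[Until such moderation lands, one stopgap for the logging IO problem
in point (4) is the existing limit_req_log_level directive combined
with the error_log threshold: rejections can be logged at a severity
the error log does not record. A sketch, inside an http block; the
zone name, sizes and rates are illustrative:

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    # log only "error" and above: rejections logged at "warn"
    # below are then not written at all, capping the logging IO
    # under a flood
    error_log /var/log/nginx/error.log error;

    location / {
        limit_req zone=perip burst=20 nodelay;
        limit_req_log_level warn;
    }
}
]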
From admin at rzmahdi.ir Mon Mar 11 10:34:34 2024
From: admin at rzmahdi.ir (Reza Mahdi)
Date: Mon, 11 Mar 2024 14:04:34 +0330
Subject: Providing TODO list
Message-ID: 

Hi.

It seems that the major development lately has been carried out by
Mr. Dounin alone, and the mailing list is rather quiet. I think
having some sort of TODO list could help motivate developers to
contribute.

-- 
Reza Mahdi

From mdounin at mdounin.ru Tue Mar 12 14:04:10 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 12 Mar 2024 17:04:10 +0300
Subject: Providing TODO list
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Mon, Mar 11, 2024 at 02:04:34PM +0330, Reza Mahdi wrote:

> It seems that the major development lately has been carried out by
> Mr. Dounin alone, and the mailing list is rather quiet. I think
> having some sort of TODO list could help motivate developers to
> contribute.

Thank you for your suggestion.

My personal TODO list is hardly suitable for publishing, but
something certainly should be done here. In particular, an issue and
feature request tracker is probably needed. Right now I'm
considering Github's one, as it will be available anyway with the
Github mirror of the repository.

-- 
Maxim Dounin
http://mdounin.ru/

From markbsdmail2023 at gmail.com Tue Mar 12 18:25:41 2024
From: markbsdmail2023 at gmail.com (Mark)
Date: Tue, 12 Mar 2024 21:25:41 +0300
Subject: freenginx with owncast?
Message-ID: 

Hello there.

I'm using freenginx under FreeBSD, as a reverse proxy with SSL/TLS
termination, for my Owncast streaming (HLS / m3u8) server (also on
the same machine, running on 127.0.0.1:8080).

The stream will be live 24/7, and there will (might?) be thousands
of different TV models watching it.

So my SSL/443 configuration is like:

server {
    server_name mystream.mysite.com;
    ....

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://127.0.0.1:8080;
    }
}

I'd like to ask:

1- As "proxy_buffering" is on by default in freenginx, should I turn
it off?

2- Does "proxy_ignore_client_abort" matter in my case? If so, should
it be on or off?

3- How about the "tcp_nopush", "tcp_nodelay" and "sendfile"
directives?

4- I see some sample configurations having "add_header Cache-Control
no-cache;" AND "add_header Access-Control-Allow-Origin *;". Do I
really need both?

These were the questions I wondered about and wanted to settle
before I move the whole setup to production.

Many thanks,
Best.

Mark

From steffen at schwebel.online Tue Mar 12 17:41:08 2024
From: steffen at schwebel.online (Steffen Schwebel)
Date: Tue, 12 Mar 2024 18:41:08 +0100
Subject: Providing TODO list
In-Reply-To: 
References: 
Message-ID: 

GitHub gets a thumbs up from me, a random Internet person.

On March 12, 2024 3:04:10 PM GMT+01:00, Maxim Dounin wrote:

> [quoted text snipped]

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

From mdounin at mdounin.ru Wed Mar 13 00:03:44 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 13 Mar 2024 03:03:44 +0300
Subject: freenginx with owncast?
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Mar 12, 2024 at 09:25:41PM +0300, Mark wrote:

> [configuration and questions quoted in full, snipped]

I'm not familiar with Owncast, but here are some basic hints from the
freenginx point of view:

> 1- As "proxy_buffering" is on by default in freenginx, should I
> turn it off?

In general, there is no need to turn off proxy_buffering unless you
are using streaming within individual HTTP responses. This is not
the case for HLS-based streaming, which instead uses an m3u8 playlist
and individual files for each video segment.

Further, switching off proxy_buffering means you won't be able to use
a cache, which is sometimes a good idea. If disk-based buffering is
a concern, using "proxy_max_temp_file_size 0;" might be a better
approach to consider, see
http://freenginx.org/r/proxy_max_temp_file_size for details.

> 2- Does "proxy_ignore_client_abort" matter in my case? If so,
> should it be on or off?

This depends on the Owncast behaviour. In most cases the default
(off) is fine. However, you may want to turn it on if your backend
server does not recognize connections being closed, so that various
limits in freenginx, such as limit_conn, better match the resources
used by the upstream server.
> 3- How about "tcp_nopush", "tcp_nodelay" and "sendfile" directives? The "tcp_nodelay" directive is on by default, and it is a good idea to keep it enabled as long as you are using SSL. The "tcp_nopush" directive makes sense if you are using sendfile on FreeBSD (or Linux). The "sendfile" directive enables using sendfile, which is one of the most effective ways to serve files on FreeBSD. In your setup sendfile is not likely to be used, except may be for disk-buffered upstream server responses. OTOH, keeping it enabled won't hurt, and will be beneficial if sendfile will be actually used. > 4- I see some sample configurations having; add_header Cache-Control > no-cache; AND add_header Access-Control-Allow-Origin *; > Do I really need both? Adding Cache-Control shouldn't be needed unless Owncast does something wrong with cache control. Unless there are reasons, I would rather avoid it. Adding Access-Control-Allow-Origin might be needed if you are using Owncast from other domains. Whether you need it depends on the intended usage. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From jmax at cock.li Sun Mar 17 04:52:28 2024 From: jmax at cock.li (GNAA Jmax) Date: Sun, 17 Mar 2024 04:52:28 +0000 Subject: nginx-niggers and nginx-lgbt projects. Message-ID: <7753f3614978a2970b7bbe77529a94ff@cock.li> Dear Brothers and Sisters: I am interested in starting some nginx projects. As a homosexual, nginx-using, black, I am surprised at the low numbers of black and/or LGBT members of the nginx community. I believe that starting nginx-niggers, and nginx-gay or nginx-lgbt projects would help to increase participation of the respective parties in the nginx community. The first step in achieving this goal is to start mailing lists, where fellow nginx-using niggers and gays can communicate. I'm sure if such great niggers as Doctor Martin Luther King Jr. or Malcolm X were alive today, they would be NGINX advocates! Heralds of free speech and free software! Please respond with haste, not hate! Jonathan Maxwell, Head of Free Speech at Gay Nigger Advocates of America, a division of SUKI (TM) From reflux4448 at outlook.com Sun Mar 17 05:54:56 2024 From: reflux4448 at outlook.com (F Reflux) Date: Sun, 17 Mar 2024 05:54:56 +0000 Subject: nginx-niggers and nginx-lgbt projects. In-Reply-To: <7753f3614978a2970b7bbe77529a94ff@cock.li> References: <7753f3614978a2970b7bbe77529a94ff@cock.li> Message-ID: https://lists.debian.org/debian-project/2006/06/msg00174.html As a side note, since GNOME is already offering the Google Summer of Code's $9000 to female developers, perhaps Debian could offer its money to nigger developers? On Thu, 15 Jun 2006 12:38:41 +0200, Wouter Verhelst wrote: For those of you not familiar with the GNAA: please do not reply to this message or in any way assume that he is serious. The GNAA is a well-known trolling organisation. See -------------- next part -------------- An HTML attachment was scrubbed... URL: From srebecchi at kameleoon.com Mon Mar 18 20:08:18 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Mon, 18 Mar 2024 21:08:18 +0100 Subject: Number of keepalive connections to an upstream Message-ID: Hello, What is the good rule of thumbs for setting the number of keepalive connections to an upstream group? 1. https://www.nginx.com/blog/performance-tuning-tips-tricks/ in this blog, the writer seems to recommend a constant value of 128, no real explanation why it would fit whatever the number of servers in the upstream 2. 
From srebecchi at kameleoon.com Mon Mar 18 20:08:18 2024
From: srebecchi at kameleoon.com (Sébastien Rebecchi)
Date: Mon, 18 Mar 2024 21:08:18 +0100
Subject: Number of keepalive connections to an upstream
Message-ID: 

Hello,

What is a good rule of thumb for setting the number of keepalive
connections to an upstream group?

1. https://www.nginx.com/blog/performance-tuning-tips-tricks/
In this blog, the writer seems to recommend a constant value of 128,
with no real explanation of why it would fit whatever the number of
servers in the upstream.

2. https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
The upstream module doc seems to recommend a rule like 16 times the
number of servers in the upstream, as we have two examples with
respectively "keepalive 32" for 2 upstream servers and "keepalive 16"
for 1 upstream server.

3. https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives
In this blog, the writer recommends a rule of 2 times the number of
servers in the upstream.

I used to follow the rule of item 3 as it comes with a somewhat good
explanation, but it does not seem to be widely accepted.

What could explain such a divergence between several sources? What
would you recommend?

Regards,

Sébastien.

From mdounin at mdounin.ru Mon Mar 18 22:50:51 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 Mar 2024 01:50:51 +0300
Subject: Number of keepalive connections to an upstream
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Mon, Mar 18, 2024 at 09:08:18PM +0100, Sébastien Rebecchi wrote:

> [quoted text snipped]

There is no strict rule, and the actual numbers to use depend heavily
on the particular configuration, the expected load, the number of
upstream servers, and the maximum number of connections each of the
upstream servers can maintain.

The most notable limitation to keep in mind is that when you are
configuring keepalive connections with upstream servers using a
process-per-connection model, it might be important to keep the total
number of cached connections below the maximum number of keepalive
connections the upstream server can maintain, or you might end up
with a non-responsive upstream server.

For example, consider an upstream server, such as Apache with the
prefork MPM, which is configured to run 256 worker processes
(MaxRequestWorkers 256, which is the default and will typically
require more than 8G of memory).

As long as nginx is configured with "keepalive 32;" and 16 worker
processes, this can easily consume all the Apache workers for
keepalive connections, and you'll end up with occasional slow
upstream responses simply because all the upstream worker processes
are occupied by cached connections from other nginx worker processes,
even without any real load.

That is, it is a good idea to keep (worker_processes * keepalive)
below the maximum number of connections the upstream server can
maintain.
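[As a back-of-the-envelope check of that rule, using the Apache
figures above: the keepalive cache is kept per worker process, so 16
nginx workers with "keepalive 12;" cache at most 16 * 12 = 192 idle
connections, safely below MaxRequestWorkers 256. A sketch; the
address and all numbers are illustrative:

worker_processes 16;

http {
    upstream backend {
        server 192.0.2.10:80;   # the hypothetical Apache above

        # per-worker cache: 16 workers * 12 = 192 idle
        # connections at most, below the 256 Apache workers
        keepalive 12;
    }
}
]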
Note well that the keepalive cache size is configured for all
servers, but it might end up keeping connections to just one upstream
server. As such, it might be important to choose the keepalive
connections cache size based on the capacity of each upstream server
individually, and not on their total capacity.

On the other hand, configuring a cache which cannot hold at least 1
connection to each upstream server hardly makes sense, as it can
easily end up with cached connections being closed after each request
and never reused.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/

From srebecchi at kameleoon.com Tue Mar 19 07:47:57 2024
From: srebecchi at kameleoon.com (Sébastien Rebecchi)
Date: Tue, 19 Mar 2024 08:47:57 +0100
Subject: Number of keepalive connections to an upstream
In-Reply-To: 
References: 
Message-ID: 

Hi Maxim,

Thank you for that long explanation. It is quite clear and helps me
a lot.

In my case the upstream servers are Vert.x servers, using an event
loop pattern, so I won't have the problem induced by
thread-per-request servers like the Apache setup you mentioned.

I understood that all keepalive connections could occasionally end up
being assigned to the same server of the group, with no guarantee of
an even distribution. In that case I think I can increase the
keepalive number a lot. I have 12 Vert.x servers in the group and
implemented the *2 rule, so it means at most 24 keepalive connections
maintained to the same server, which is quite low.

Thanks again, have a great day.

Sébastien.

Le lun. 18 mars 2024, 23:50, Maxim Dounin a écrit :

> [quoted text snipped]
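[An increased setting for the setup described here could look like
the following sketch. The addresses are placeholders, and the value
120 (10 per backend on average, per nginx worker process) is only an
illustration, to be checked against the Vert.x servers' own
connection limits:

upstream vertx_pool {
    server 192.0.2.1:8080;
    server 192.0.2.2:8080;
    # ... the remaining 10 Vert.x servers ...

    # up to 120 idle connections cached per nginx worker
    keepalive 120;
}
]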
From markbsdmail2023 at gmail.com Fri Mar 22 06:29:31 2024
From: markbsdmail2023 at gmail.com (Mark)
Date: Fri, 22 Mar 2024 09:29:31 +0300
Subject: freenginx with owncast?
In-Reply-To: 
References: 
Message-ID: 

On Wed, Mar 13, 2024 at 3:03 AM Maxim Dounin wrote:

> [quoted text snipped]

Thank you VERY much for all those great suggestions, Maxim.
Indeed, it helped a lot!

Regards,
Mark.

From srebecchi at kameleoon.com Mon Mar 25 12:31:26 2024
From: srebecchi at kameleoon.com (Sébastien Rebecchi)
Date: Mon, 25 Mar 2024 13:31:26 +0100
Subject: Nginx prematurely closing connections when reloaded
Message-ID: 

Hello,

I have an issue with nginx prematurely closing connections when a
reload is performed.

I have some nginx servers configured to proxy_pass requests to an
upstream group. This group is itself composed of several servers
which are nginx themselves, and it is configured to use keepalive
connections.

When I trigger a reload (-s reload) on the nginx of one of the
servers which is a target of the upstream, I see in the error logs of
all servers in front that the connection was reset by the nginx which
was reloaded.
Here is the configuration of the upstream group (IPs are hidden,
replaced by IP_X):

--- BEGIN ---

upstream data_api {
    random;

    server IP_1:80 max_fails=3 fail_timeout=30s;
    server IP_2:80 max_fails=3 fail_timeout=30s;
    server IP_3:80 max_fails=3 fail_timeout=30s;
    server IP_4:80 max_fails=3 fail_timeout=30s;
    server IP_5:80 max_fails=3 fail_timeout=30s;
    server IP_6:80 max_fails=3 fail_timeout=30s;
    server IP_7:80 max_fails=3 fail_timeout=30s;
    server IP_8:80 max_fails=3 fail_timeout=30s;
    server IP_9:80 max_fails=3 fail_timeout=30s;
    server IP_10:80 max_fails=3 fail_timeout=30s;

    keepalive 20;
}

--- END ---

Here is the configuration of the location using this upstream:

--- BEGIN ---

location / {
    proxy_pass http://data_api;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $real_ip;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 2s;
    proxy_send_timeout 6s;
    proxy_read_timeout 10s;
    proxy_next_upstream error timeout http_502 http_504;
}

--- END ---

And here is the kind of error message I get when I reload the nginx
of "IP_1":

--- BEGIN ---

2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed
(104: Connection reset by peer) while reading response header from
upstream, client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request:
"POST /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream:
"http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN",
referrer: "REFERRER_HIDDEN"

--- END ---

I thought -s reload was doing a graceful shutdown of connections. Is
it due to the fact that nginx cannot handle that when using keepalive
connections? Is it a bug?

I am using nginx 1.24.0 everywhere, no particular

Thank you for any help.

Sébastien

From mdounin at mdounin.ru Mon Mar 25 15:20:09 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Mar 2024 18:20:09 +0300
Subject: Nginx prematurely closing connections when reloaded
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Mon, Mar 25, 2024 at 01:31:26PM +0100, Sébastien Rebecchi wrote:

> I have an issue with nginx prematurely closing connections when a
> reload is performed.
>
> I have some nginx servers configured to proxy_pass requests to an
> upstream group. This group is itself composed of several servers
> which are nginx themselves, and it is configured to use keepalive
> connections.
>
> When I trigger a reload (-s reload) on the nginx of one of the
> servers which is a target of the upstream, I see in the error logs
> of all servers in front that the connection was reset by the nginx
> which was reloaded.

[...]

> And here is the kind of error message I get when I reload the nginx
> of "IP_1":
>
> --- BEGIN ---
>
> 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed
> (104: Connection reset by peer) while reading response header from
> upstream, client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request:
> "POST /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream:
> "http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN",
> referrer: "REFERRER_HIDDEN"
>
> --- END ---
>
> I thought -s reload was doing a graceful shutdown of connections.
> Is it due to the fact that nginx cannot handle that when using
> keepalive connections? Is it a bug?
>
> I am using nginx 1.24.0 everywhere, no particular

This looks like a well-known race condition when closing HTTP
connections.
In RFC 2616, it is documented as follows
(https://datatracker.ietf.org/doc/html/rfc2616#section-8.1.4):

   A client, server, or proxy MAY close the transport connection at
   any time. For example, a client might have started to send a new
   request at the same time that the server has decided to close the
   "idle" connection. From the server's point of view, the
   connection is being closed while it was idle, but from the
   client's point of view, a request is in progress.

   This means that clients, servers, and proxies MUST be able to
   recover from asynchronous close events. Client software SHOULD
   reopen the transport connection and retransmit the aborted
   sequence of requests without user interaction so long as the
   request sequence is idempotent (see section 9.1.2).
   Non-idempotent methods or sequences MUST NOT be automatically
   retried, although user agents MAY offer a human operator the
   choice of retrying the request(s). Confirmation by user-agent
   software with semantic understanding of the application MAY
   substitute for user confirmation. The automatic retry SHOULD NOT
   be repeated if the second sequence of requests fails.

That is, when you shut down your backend server, it closes the
keepalive connection, which is expected to be perfectly safe from the
server's point of view. But if at the same time a request is being
sent on this connection by the client (the frontend nginx server in
your case), this might result in an error.

Note that the race is generally unavoidable and such errors can
happen at any time, during any connection close by the server.
Closing multiple keepalive connections during shutdown makes such
errors more likely though, since connections are closed right away,
and not after the keepalive timeout expires. Further, since in your
case there are just a few loaded keepalive connections, this also
makes errors during shutdown more likely.

The typical solution is to retry such requests, as RFC 2616
recommends. In particular, nginx does so based on the
"proxy_next_upstream" setting. Note that to retry POST requests you
will need "proxy_next_upstream ... non_idempotent;" (which implies
that non-idempotent requests will be retried on errors, and might not
be the desired behaviour).

Another possible approach is to try to minimize the race window by
waiting some time after the shutdown before closing keepalive
connections. There were several attempts in the past to implement
this; the last one can be found here:

https://mailman.nginx.org/pipermail/nginx-devel/2024-January/YSJATQMPXDIBETCDS46OTKUZNOJK6Q22.html

While there are some questions about the particular patch, something
like this should probably be implemented.

This is on my TODO list, so a proper solution should eventually be
available out of the box in upcoming freenginx releases.

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/

From srebecchi at kameleoon.com Mon Mar 25 15:40:29 2024
From: srebecchi at kameleoon.com (Sébastien Rebecchi)
Date: Mon, 25 Mar 2024 16:40:29 +0100
Subject: Nginx prematurely closing connections when reloaded
In-Reply-To: 
References: 
Message-ID: 

Thank you Maxim for that comprehensive explanation.

I will think about non_idempotent then, and wait for an eventual
release of freenginx that natively solves that issue :)

Have a great day,

Sébastien.

Le lun. 25 mars 2024 à 16:20, Maxim Dounin a écrit :

> [quoted text snipped]
non_idempotent;" (which > implies that non-idempotent requests will be retried on errors, > and might not be the desired behaviour). > > Another possible approach is to try to minimize the race window by > waiting some time after the shutdown before closing keepalive > connections. There were several attempts in the past to implement > this, the last one can be found here: > > > https://mailman.nginx.org/pipermail/nginx-devel/2024-January/YSJATQMPXDIBETCDS46OTKUZNOJK6Q22.html > > While there are some questions to the particular patch, something > like this should probably be implemented. > > This is my TODO list, so a proper solution should be eventually > available out of the box in upcoming freenginx releases. > > Hope this helps. > > -- > Maxim Dounin > http://mdounin.ru/ > -- > nginx mailing list > nginx at freenginx.org > https://freenginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From srebecchi at kameleoon.com Tue Mar 26 14:03:19 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Tue, 26 Mar 2024 15:03:19 +0100 Subject: Nginx prematurely closing connections when reloaded In-Reply-To: References: Message-ID: Hi Maxim, I finally decided to activate retries of non idempotent requests, cause I already manage data deduplication. Now I have: proxy_next_upstream error timeout invalid_header http_502 http_504 non_idempotent; Nevertheless I still see the same error messages when I trigger a reload of an nginx in upstream. Does it mean the problem is that same, or just that nginx (the ones in front) still displays such error messages for information, but has effectively retried the request on another server in upstream? Kind regards, S?bastien. Le lun. 25 mars 2024 ? 16:40, S?bastien Rebecchi a ?crit : > Thank you Maxim for that comprehensive explanation. > I will think about non_idempotent then, and wait for an eventual release > of freenginx that natively solves that issue :) > Have a great day > S?bastien. > > Le lun. 25 mars 2024 ? 16:20, Maxim Dounin a ?crit : > >> Hello! >> >> On Mon, Mar 25, 2024 at 01:31:26PM +0100, S?bastien Rebecchi wrote: >> >> > I have an issue with nginx closing prematurely connections when reload >> is >> > performed. >> > >> > I have some nginx servers configured to proxy_pass requests to an >> upstream >> > group. This group itself is composed of several servers which are nginx >> > themselves, and is configured to use keepalive connections. >> > >> > When I trigger a reload (-s reload) on an nginx of one of the servers >> which >> > is target of the upstream, I see in error logs of all servers in front >> that >> > connection was reset by the nginx which was reloaded. >> >> [...] >> >> > And here the kind of error messages I get when I reload nginx of "IP_1": >> > >> > --- BEGIN --- >> > >> > 2024/03/25 11:24:25 [error] 3758170#0: *1795895162 recv() failed (104: >> > Connection reset by peer) while reading response header from upstream, >> > client: CLIENT_IP_HIDDEN, server: SERVER_HIDDEN, request: "POST >> > /REQUEST_LOCATION_HIDDEN HTTP/2.0", upstream: " >> > http://IP_1:80/REQUEST_LOCATION_HIDDEN", host: "HOST_HIDDEN", referrer: >> > "REFERRER_HIDDEN" >> > >> > --- END --- >> > >> > >> > I thought -s reload was doing graceful shutdown of connections. Is it >> due >> > to the fact that nginx can not handle that when using keepalive >> > connections? Is it a bug? 
From mdounin at mdounin.ru Tue Mar 26 16:33:08 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 26 Mar 2024 19:33:08 +0300
Subject: Nginx prematurely closing connections when reloaded
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Tue, Mar 26, 2024 at 03:03:19PM +0100, Sébastien Rebecchi wrote:

> [...]
>
> Does it mean the problem is the same, or just that nginx (the ones
> in front) still logs such error messages for information, but has
> effectively retried the request on another server in the upstream?

The latter. As the errors are still here, such error messages are
expected to appear in the logs. The difference is that the
corresponding POST requests will be automatically retried on other
upstream servers, so clients will get proper upstream responses
instead of 502. Looking into the corresponding access log entries
should show the difference, especially if $upstream_status is logged
(http://freenginx.org/r/$upstream_status).

-- 
Maxim Dounin
http://mdounin.ru/
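[For reference, a minimal access-log sketch exposing the variable
Maxim mentions; the format name and field selection here are
illustrative, not from the thread. On a retried request,
$upstream_status records one value per attempt, comma-separated,
e.g. "502, 200":

log_format upstream_log '$remote_addr "$request" $status '
                        'upstream=$upstream_addr '
                        'upstream_status=$upstream_status';

access_log /var/log/nginx/access.log upstream_log;
]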
From srebecchi at kameleoon.com Tue Mar 26 16:40:36 2024
From: srebecchi at kameleoon.com (Sébastien Rebecchi)
Date: Tue, 26 Mar 2024 17:40:36 +0100
Subject: Nginx prematurely closing connections when reloaded
In-Reply-To: 
References: 
Message-ID: 

Thank you so much Maxim!

Le mar. 26 mars 2024 à 17:33, Maxim Dounin a écrit :

> [quoted text snipped]

From amarjeetxc at gmail.com Wed Mar 27 11:21:30 2024
From: amarjeetxc at gmail.com (Amarjeet Singh)
Date: Wed, 27 Mar 2024 16:51:30 +0530
Subject: Require commercial support for Nginx
Message-ID: 

Hi All,

I am reaching out to solicit your expertise in resolving a critical
performance issue we are currently facing with our Nginx servers. As
the backbone of our web serving and reverse proxying for over 700
applications, maintaining optimal performance is imperative for our
operations.

Recently, we've encountered a significant challenge: when our
concurrent user count exceeds 5,000, response times from Nginx
increase dramatically, affecting even the loading times of local
pages. This issue is impacting our service delivery and user
experience, and we are keen to address it as swiftly and efficiently
as possible.

Given the complexity of our setup and the potential for various
underlying causes, ranging from configuration missteps to more
systemic limitations, we believe that professional consultation and
intervention are necessary.

We are seeking a consultant who can offer:

1. An in-depth analysis of our current Nginx configuration and
infrastructure to identify bottlenecks or misconfigurations.
2. Strategic advice on infrastructure optimization or scaling to
handle high traffic volumes more effectively.
3. Hands-on support in implementing recommended changes to our Nginx
setup and associated systems.

Given the urgency and importance of this issue, we are prepared to
compensate for your expertise on an hourly basis, with terms to be
discussed based on the scope of work identified.

If this is within your capacity and interest, could we schedule a
call to discuss our situation in more detail? Your insights and
expertise would be invaluable to us as we strive to resolve these
performance challenges.

Thank you for considering our request. We look forward to the
possibility of working together.

Warm regards,
Amarjeet Singh
Senior Software Engineer
+919592338879
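[For readers hitting similar symptoms, the usual first checks before
deeper tuning are the global concurrency limits. A generic sketch,
not a diagnosis of this particular setup, and all numbers are
illustrative:

worker_processes auto;

# the per-process open file limit must cover worker_connections
# plus upstream and file descriptors
worker_rlimit_nofile 65535;

events {
    worker_connections 8192;
}
]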