From mdounin at mdounin.ru  Tue Sep  3 10:22:18 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 3 Sep 2024 13:22:18 +0300
Subject: freenginx-1.27.4
Message-ID:

Changes with freenginx 1.27.4                                03 Sep 2024

    *) Feature: the $ssl_client_fingerprint_sha256 variable.

    *) Feature: the "Auth-SSL-Fingerprint-SHA256" header line is now
       passed to the mail proxy authentication server.

    *) Change: the MIME type for the "js" extension has been changed to
       "text/javascript", the "mjs" extension now uses the
       "text/javascript" MIME type, and the "md" and "markdown"
       extensions now use the "text/markdown" MIME type; the default
       value of the "charset_types" directive now includes
       "text/javascript" and "text/markdown".

    *) Bugfix: a segmentation fault might occur in a worker process if
       the ngx_http_mp4_module was used; the bug had appeared in 1.5.13.

    *) Bugfix: a segmentation fault might occur in a worker process
       when handling requests with the "Expect: 100-continue" request
       header line; the bug had appeared in 1.27.0.


-- 
Maxim Dounin
http://freenginx.org/
From john.moore at helpwerx.ca  Wed Sep  4 21:05:03 2024
From: john.moore at helpwerx.ca (John Moore)
Date: Wed, 4 Sep 2024 14:05:03 -0700
Subject: Moderated?
Message-ID:

Is this list moderated?

-- 
John Moore
Web: www.helpwerx.ca
Email: john.moore at helpwerx.ca


From mdounin at mdounin.ru  Wed Sep  4 21:45:46 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 5 Sep 2024 00:45:46 +0300
Subject: Moderated?
In-Reply-To:
References:
Message-ID:

Hello!

On Wed, Sep 04, 2024 at 02:05:03PM -0700, John Moore wrote:

> Is this list moderated?

It's a public list where subscribers can post directly.  Offtopic
posts are not welcome, and actions are taken to prevent offtopic
posters from doing so again.  Pre-moderation might become an option
if offtopic posts become an issue.

-- 
Maxim Dounin
http://mdounin.ru/


From ltning-nginx at anduin.net  Sat Sep 21 13:14:08 2024
From: ltning-nginx at anduin.net (Eirik Øverby)
Date: Sat, 21 Sep 2024 15:14:08 +0200
Subject: Possible issue with LRU and shared memory zones?
Message-ID: <6b1dcf35-ce41-4ccc-9357-16baa41064a3@anduin.net>

Hi!

We've used nginx on our FreeBSD systems for what feels like forever,
and love it. Over the last few years we've been hit by pretty massive
DDoS attacks and have been employing various tricks in nginx to fend
them off. One of them is, of course, rate limiting.

Given a config like

    limit_req_zone $request zone=unique_request_5:100m rate=5r/s;

and then

    limit_req zone=unique_request_5 burst=50 nodelay;

we're getting messages like this:

    could not allocate node in limit_req zone "unique_request_5"

We see this on an idle node that only gets very sporadic requests.
However, this is preceded by a DDoS attack several hours earlier, which
consisted of requests hitting this exact location block with short
requests like

    POST /foo/bar?token=DEADBEEF

When, after a few million requests like this in a short timespan, a
"normal" request comes in - *much* longer than the DDoS requests - e.g.
    POST /foo/bar?token=DEADBEEF&moredata=foo&evenmoredata=bar

this is immediately REJECTED by the rate limiter, and we get the
aforementioned error in the log.

The current theory, supported by consulting with FreeBSD developers far
more educated and experienced than myself, is that something is going
wrong with the LRU allocator: since nearly all of the shared memory
zone was filled with short requests, freeing up one (or even two) of
them will not be sufficient for these new requests. Only an nginx
restart clears this up.

Is there anything we can do to avoid this? I know the API for clearing
and monitoring the shared memory zones has until now only been
available in nginx plus - but we are on a strictly FOSS-only diet, so
using anything like that is obviously out of the question.

Thanks, and take care,
Eirik Øverby


From mdounin at mdounin.ru  Sat Sep 21 17:13:46 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 21 Sep 2024 20:13:46 +0300
Subject: Possible issue with LRU and shared memory zones?
In-Reply-To: <6b1dcf35-ce41-4ccc-9357-16baa41064a3@anduin.net>
References: <6b1dcf35-ce41-4ccc-9357-16baa41064a3@anduin.net>
Message-ID:

Hello!

On Sat, Sep 21, 2024 at 03:14:08PM +0200, Eirik Øverby via nginx wrote:

> Hi!
>
> We've used nginx on our FreeBSD systems for what feels like forever,
> and love it. Over the last few years we've been hit by pretty massive
> DDoS attacks and have been employing various tricks in nginx to fend
> them off. One of them is, of course, rate limiting.
>
> Given a config like
>
>     limit_req_zone $request zone=unique_request_5:100m rate=5r/s;
>
> and then
>
>     limit_req zone=unique_request_5 burst=50 nodelay;
>
> we're getting messages like this:
>
>     could not allocate node in limit_req zone "unique_request_5"
>
> We see this on an idle node that only gets very sporadic requests.
> However, this is preceded by a DDoS attack several hours earlier,
> which consisted of requests hitting this exact location block with
> short requests like
>
>     POST /foo/bar?token=DEADBEEF
>
> When, after a few million requests like this in a short timespan, a
> "normal" request comes in - *much* longer than the DDoS requests -
> e.g.
>
>     POST /foo/bar?token=DEADBEEF&moredata=foo&evenmoredata=bar
>
> this is immediately REJECTED by the rate limiter, and we get the
> aforementioned error in the log.
>
> The current theory, supported by consulting with FreeBSD developers
> far more educated and experienced than myself, is that something is
> going wrong with the LRU allocator: since nearly all of the shared
> memory zone was filled with short requests, freeing up one (or even
> two) of them will not be sufficient for these new requests. Only an
> nginx restart clears this up.
>
> Is there anything we can do to avoid this? I know the API for
> clearing and monitoring the shared memory zones has until now only
> been available in nginx plus - but we are on a strictly FOSS-only
> diet, so using anything like that is obviously out of the question.

I think your interpretation is (mostly) correct, and the issue here is
that all shared memory zone pages are occupied by small slab
allocations. As such, the slab allocator cannot fulfill the request
for a larger allocation, and trying to free some limit_req nodes
doesn't fix this, at least not immediately, since each page contains
multiple nodes.

This is especially likely to be seen if $request is indeed very large
(larger than 2k, assuming a 4k page size), so that the slab allocator
cannot fulfill it from the existing slabs and falls back to allocating
full pages.

Eventually this should fix itself - each request frees up to 5
limit_req nodes (usually just 2 expired nodes, but might clear more if
the first allocation attempt fails).
This might take a while though, since clearing even one page might
require a lot of limit_req nodes freed: one page contains 64 of the
64-byte nodes, but since nodes are cleared in LRU order, freeing 64
nodes might not be enough.

In the worst case this will require something like 63 * (number of
pages) nodes freed. For a 100m shared zone this gives 1612800 nodes,
and hence about 800k requests. This probably explains why this is seen
as "only a restart clears things up".

This probably can be somewhat improved by adjusting the number of
nodes limit_req normally clears - but this shouldn't be too many
either, as that can open an additional DoS vector, and hence it cannot
guarantee an allocation anyway. Something like "up to 16 normally, up
to 128 in case of an allocation failure" might be a way to go though.

Another solution might be to improve the configuration to ensure that
all limit_req nodes require an equal or close amount of memory - this
is usually true with $binary_remote_addr being used for limit_req, but
certainly not for $request. A trivial fix that comes to mind is to use
some hash, such as MD5, and limit the hash instead. This will ensure a
fixed size of the limit_req allocation, and will completely eliminate
the problem.

With standard modules, this can be done with embedded Perl, such as:

    perl_set $request_md5 'sub {
        use Digest::MD5 qw(md5);
        my $r = shift;
        return md5($r->variable("request"));
    }';

(Note though that Perl might not be the best solution for DoS
protection, as it implies noticeable overhead.)

With 3rd party modules, set_misc probably would be most appropriate,
such as with "set_md5 $request_md5 $request;".

Hope this helps.

-- 
Maxim Dounin
http://mdounin.ru/


From ltning-nginx at anduin.net  Sat Sep 21 17:33:10 2024
From: ltning-nginx at anduin.net (Eirik Øverby)
Date: Sat, 21 Sep 2024 19:33:10 +0200
Subject: Possible issue with LRU and shared memory zones?
In-Reply-To:
References: <6b1dcf35-ce41-4ccc-9357-16baa41064a3@anduin.net>
Message-ID:

Hi!
TL;DR: Did almost what you suggested. Thank you!
A bit more detail below..

On 21.09.2024 19:13, Maxim Dounin wrote:
> Hello!
>
> On Sat, Sep 21, 2024 at 03:14:08PM +0200, Eirik Øverby via nginx wrote:
>
>> Hi!
>>
>> We've used nginx on our FreeBSD systems for what feels like forever,
>> and love it. Over the last few years we've been hit by pretty
>> massive DDoS attacks and have been employing various tricks in nginx
>> to fend them off. One of them is, of course, rate limiting.
>>
>> Given a config like
>>
>>     limit_req_zone $request zone=unique_request_5:100m rate=5r/s;
>>
>> and then
>>
>>     limit_req zone=unique_request_5 burst=50 nodelay;
>>
>> we're getting messages like this:
>>
>>     could not allocate node in limit_req zone "unique_request_5"
>>
>> We see this on an idle node that only gets very sporadic requests.
>> However, this is preceded by a DDoS attack several hours earlier,
>> which consisted of requests hitting this exact location block with
>> short requests like
>>
>>     POST /foo/bar?token=DEADBEEF
>>
>> When, after a few million requests like this in a short timespan, a
>> "normal" request comes in - *much* longer than the DDoS requests -
>> e.g.
>>
>>     POST /foo/bar?token=DEADBEEF&moredata=foo&evenmoredata=bar
>>
>> this is immediately REJECTED by the rate limiter, and we get the
>> aforementioned error in the log.
>>
>> The current theory, supported by consulting with FreeBSD developers
>> far more educated and experienced than myself, is that something is
>> going wrong with the LRU allocator: since nearly all of the shared
>> memory zone was filled with short requests, freeing up one (or even
>> two) of them will not be sufficient for these new requests. Only an
>> nginx restart clears this up.
>>
>> Is there anything we can do to avoid this? I know the API for
>> clearing and monitoring the shared memory zones has until now only
>> been available in nginx plus - but we are on a strictly FOSS-only
>> diet, so using anything like that is obviously out of the question.
>
> I think your interpretation is (mostly) correct, and the issue here
> is that all shared memory zone pages are occupied by small slab
> allocations. As such, the slab allocator cannot fulfill the request
> for a larger allocation, and trying to free some limit_req nodes
> doesn't fix this, at least not immediately, since each page contains
> multiple nodes.
>
> This is especially likely to be seen if $request is indeed very
> large (larger than 2k, assuming a 4k page size), so that the slab
> allocator cannot fulfill it from the existing slabs and falls back
> to allocating full pages.
>
> Eventually this should fix itself - each request frees up to 5
> limit_req nodes (usually just 2 expired nodes, but might clear more
> if the first allocation attempt fails). This might take a while
> though, since clearing even one page might require a lot of
> limit_req nodes freed: one page contains 64 of the 64-byte nodes,
> but since nodes are cleared in LRU order, freeing 64 nodes might not
> be enough.
>
> In the worst case this will require something like 63 * (number of
> pages) nodes freed. For a 100m shared zone this gives 1612800 nodes,
> and hence about 800k requests. This probably explains why this is
> seen as "only a restart clears things up".
>
> This probably can be somewhat improved by adjusting the number of
> nodes limit_req normally clears - but this shouldn't be too many
> either, as that can open an additional DoS vector, and hence it
> cannot guarantee an allocation anyway. Something like "up to 16
> normally, up to 128 in case of an allocation failure" might be a way
> to go though.
>
> Another solution might be to improve the configuration to ensure
> that all limit_req nodes require an equal or close amount of memory
> - this is usually true with $binary_remote_addr being used for
> limit_req, but certainly not for $request. A trivial fix that comes
> to mind is to use some hash, such as MD5, and limit the hash
> instead.
> This will ensure a fixed size of the limit_req allocation, and will
> completely eliminate the problem.
>
> With standard modules, this can be done with embedded Perl, such as:
>
>     perl_set $request_md5 'sub {
>         use Digest::MD5 qw(md5);
>         my $r = shift;
>         return md5($r->variable("request"));
>     }';
>
> (Note though that Perl might not be the best solution for DoS
> protection, as it implies noticeable overhead.)
>
> With 3rd party modules, set_misc probably would be most appropriate,
> such as with "set_md5 $request_md5 $request;".

Just before getting your email, I added this:

    set_by_lua_block $request_md5 { return ngx.md5_bin(ngx.var.request) }

since we're already using Lua. If you think set_md5 is faster, then
I'll switch to that.

> Hope this helps.

It really did. Thank you very much!

/Eirik


From mdounin at mdounin.ru  Sat Sep 21 22:37:40 2024
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 22 Sep 2024 01:37:40 +0300
Subject: Possible issue with LRU and shared memory zones?
In-Reply-To:
References: <6b1dcf35-ce41-4ccc-9357-16baa41064a3@anduin.net>
Message-ID:

Hello!

On Sat, Sep 21, 2024 at 07:33:10PM +0200, Eirik Øverby via nginx wrote:

> TL;DR: Did almost what you suggested. Thank you!
> A bit more detail below..

[...]

> > Another solution might be to improve the configuration to ensure
> > that all limit_req nodes require an equal or close amount of
> > memory - this is usually true with $binary_remote_addr being used
> > for limit_req, but certainly not for $request. A trivial fix that
> > comes to mind is to use some hash, such as MD5, and limit the hash
> > instead. This will ensure a fixed size of the limit_req
> > allocation, and will completely eliminate the problem.
> >
> > With standard modules, this can be done with embedded Perl, such
> > as:
> >
> >     perl_set $request_md5 'sub {
> >         use Digest::MD5 qw(md5);
> >         my $r = shift;
> >         return md5($r->variable("request"));
> >     }';
> >
> > (Note though that Perl might not be the best solution for DoS
> > protection, as it implies noticeable overhead.)
> >
> > With 3rd party modules, set_misc probably would be most
> > appropriate, such as with "set_md5 $request_md5 $request;".
>
> Just before getting your email, I added this:
>
>     set_by_lua_block $request_md5 { return ngx.md5_bin(ngx.var.request) }
>
> since we're already using Lua.
> If you think set_md5 is faster, then I'll switch to that.

While set_md5 is probably slightly faster, I don't think there is a
significant difference: as long as you are already using the Lua
module anyway, there should be little to no difference in practice.

-- 
Maxim Dounin
http://mdounin.ru/
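
The worst-case arithmetic in Maxim's explanation, and the effect of
hashing the limit_req key, can be checked with a short Python sketch.
The 100m zone, 4 KiB page, and 64-byte node sizes are the figures
assumed in the thread; the sample requests are the ones from the
original report:

```python
import hashlib

# Worst-case number of limit_req node frees needed before a full page
# comes back, per the estimate above: each 4 KiB page holds 64 nodes of
# 64 bytes, and LRU-order freeing may leave 63 stragglers on every page.
ZONE_SIZE = 100 * 1024 * 1024          # 100m shared zone
PAGE_SIZE = 4096                       # assumed page size
NODES_PER_PAGE = PAGE_SIZE // 64       # 64 nodes per page

pages = ZONE_SIZE // PAGE_SIZE                   # 25600 pages
worst_case_frees = (NODES_PER_PAGE - 1) * pages  # 63 * 25600 = 1612800
requests_needed = worst_case_frees // 2          # ~2 expired nodes freed
                                                 # per request -> 806400

print(pages, worst_case_frees, requests_needed)  # 25600 1612800 806400

# Hashing the key gives every limit_req node the same allocation size,
# no matter how long $request is:
short = hashlib.md5(b"POST /foo/bar?token=DEADBEEF").digest()
long_ = hashlib.md5(
    b"POST /foo/bar?token=DEADBEEF&moredata=foo&evenmoredata=bar"
).digest()
print(len(short), len(long_))                    # 16 16
```

This matches the "about 800k requests" figure quoted above, and shows
why an MD5-hashed key (via perl_set, set_md5, or set_by_lua_block)
sidesteps the variable-size allocation problem entirely.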