Too many open files at 1000 req/sec

Maxim Dounin mdounin at mdounin.ru
Sun Aug 10 10:34:55 UTC 2025


Hello!

On Sat, Aug 09, 2025 at 09:57:40AM -0700, Zsolt Ero wrote:

> I'm seeking advice on the most robust way to configure Nginx for a specific
> scenario that led to a caching issue.
> 
> I run a free vector tile map service (https://openfreemap.org/). The
> server's primary job is to serve a massive number of small (~70 kB),
> pre-gzipped PBF files.
> 
> To optimize for ocean areas, tiles that don't exist on disk should be
> served as a 200 OK with an empty body. These are then rendered as empty
> space on the map.
> 
> Recently, the server experienced an extremely high load: 100k req/sec on
> Cloudflare, and 1k req/sec on my two Hetzner servers. During this peak,
> Nginx started serving some *existing* tiles as empty bodies. Because these
> responses included cache-friendly headers (expires 10y), the CDN cached the
> incorrect empty responses, effectively making parts of the map disappear
> until a manual cache purge was performed.
> 
> My goal is to prevent this from happening again. A temporary server
> overload should result in a server error (e.g., 5xx), not incorrect content
> that gets permanently cached.

[...]

> Full generated config is uploaded here:
> https://github.com/hyperknot/openfreemap/blob/main/docs/assets/nginx.conf
> Questions
> 
> 1. I think multi_accept + open_file_cache > worker_rlimit_nofile is causing
> the whole trouble by not distributing the requests across workers, and then
> reaching the limit. Can you confirm if this is the correct take?

The root cause is definitely open_file_cache being configured 
with a maximum number of cached files higher than the open files 
resource limit allows.

Using multi_accept makes this easier to hit by making request 
distribution between worker processes worse than it could be.

Overall, I would recommend the following (see the sketch after 
the list):

- Remove multi_accept; it's not needed unless you have very high 
connection rates (and with small connection rates it'll waste 
resources).  Even assuming 1k r/s translates to 1k connections 
per second, using multi_accept is unlikely to be beneficial.

- Remove open_file_cache.  It is only beneficial if opening files 
requires significant resources, and this is unlikely for local 
files on Unix systems.  On the other hand, it is very likely to 
introduce various issues, either by itself due to bugs (e.g., I've 
recently fixed several open_file_cache bugs related to caching 
files with directio enabled), or by exposing and magnifying other 
issues, such as non-atomic file updates or misconfigurations like 
this one.
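
Roughly, the relevant parts might end up looking like this (the 
worker_connections value is just carried over from your config; 
adjust as appropriate):

    events {
        worker_connections  40000;  # carried over from your config
        # no "multi_accept on;" - the default ("off") is fine here
    }

    # in http{}: the open_file_cache, open_file_cache_valid,
    # open_file_cache_min_uses and open_file_cache_errors lines
    # are simply removed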

> 2. How should I handle the "missing file should be empty response, server
> error should be 5xx" scenario? I've asked 5 LLMs and each gave different
> answers, which I'm including below. I'd like to ask your expert opinion,
> and not trust LLMs in this regard.
> 
> *o3*
> 
> error_page 404 = @empty_tile;

That's what I would recommend as well.

You may also want to use "log_not_found off;" to avoid excessive 
logging.
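
For illustration, a minimal sketch along these lines (the 
location names are assumptions based on your description, not 
taken from your actual config):

    location /tiles/ {
        error_page     404 = @empty_tile;
        log_not_found  off;
    }

    location @empty_tile {
        # missing tile: 200 with an empty body; add Content-Type
        # and caching headers as appropriate for your CDN
        return 200 "";
    }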

Also, it may make sense to actually rethink how empty tiles are 
stored.  With the "no file means empty tile" approach you are 
still risking the same issue even with "error_page 404" if files 
are somehow lost - due to disk issues, incomplete 
synchronization, or whatever.

[...]

> 3. *open_file_cache Tuning:* My current open_file_cache settings are
> clearly too aggressive and caused the problem. For a workload of millions
> of tiny, static files, what would be considered a good configuration for max,
> inactive, and min_uses?

I don't think that open_file_cache would be beneficial for your 
use case.  Rather, it may make sense to tune the OS namei(9) 
cache (the dentry cache on Linux; I'm not sure there are any 
settings other than vm.vfs_cache_pressure) to the number of 
files.  On the other hand, given the 1k r/s request rate, most 
systems should be good enough without any tuning.
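
If you do want to nudge the kernel toward keeping dentries and 
inodes cached, vm.vfs_cache_pressure is the usual knob (the 
value below is just an example, not a recommendation from this 
thread):

    # e.g. in /etc/sysctl.d/99-vfs-cache.conf
    # values below 100 (the default) make the kernel prefer
    # keeping dentry/inode caches over reclaiming them
    vm.vfs_cache_pressure = 50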

> 4. *open_file_cache_errors:* Should this be on or off? My intent for having
> it on was to cache the "not found" status for ocean tiles to reduce disk
> checks. I want to cache file-not-found scenarios, but not server errors.
> What is the correct usage in this context?

The "open_file_cache_errors" directive currently caches all file 
system errors, and doesn't make any distinction between what 
exactly gone wrong - either the file or directory cannot be found, 
or there is a permissions error, or something else.  If you want 
to make sure that no unexpected errors will be cached, consider 
keeping it off.
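
That is, if you end up keeping open_file_cache at all (see 
below), the safer variant is to leave error caching off; the 
numbers here are placeholders, not recommendations:

    open_file_cache         max=10000 inactive=30s;
    open_file_cache_errors  off;  # don't cache any file-open errors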

On the other hand, it may make sense to explicitly exclude 
EMFILE, ENFILE, and maybe ENOMEM from caching.  I'll take a 
look.

Note, though, that as suggested above, my recommendation would 
be to avoid using "open_file_cache" at all.

> 5. *Limits:* What values would you recommend for values like
> worker_rlimit_nofile and worker_connections? Should I raise LimitNOFILESoft?

In general, "worker_connections" should be set depending on the 
expected load (and "worker_processes").  Total number of 
connections nginx will be able to handle is worker_processes * 
worker_connections.

Given you use "worker_connections 40000;" and at least 5 worker 
processes, your server is already able to handle more than 200k 
connections, and it is likely more than enough.  Looking into 
stub_status numbers (and/or system connections stats) might give 
you an idea if you needed more connections.  Note that using 
many worker connection might require OS tuning (but it looks like 
you've already set fs.file-max to an arbitrary high value).
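
If you want to watch those numbers, a stub_status endpoint is 
enough (this needs the stub_status module; the location name and 
access restrictions are just an example):

    location = /nginx_status {
        stub_status;         # active/reading/writing/waiting counters
        access_log  off;
        allow       127.0.0.1;
        deny        all;
    }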

And the RLIMIT_NOFILE limit should be set to a value sufficient 
for your worker processes.  It doesn't matter how you set it, 
either in the system (such as with LimitNOFILESoft in systemd) 
or with worker_rlimit_nofile in nginx itself (assuming it's 
under the hard limit set in the system).

The basic idea is that worker processes shouldn't hit the 
RLIMIT_NOFILE limit, but should hit the worker_connections limit 
instead.  This way workers will be able to reuse least recently 
used connections to free some resources, and will be able to 
actively avoid accepting new connections so that other worker 
processes can do so.

Given each connection uses at least one file for the socket, and 
can use many (for the file it returns, for upstream connections, 
for various temporary files, subrequests, streams in HTTP/2, and 
so on), it is usually a good idea to keep RLIMIT_NOFILE several 
times higher than worker_connections.

Since you have HTTP/2 enabled with the default 
max_concurrent_streams (128), and no proxying or subrequests, a 
reasonable limit would be worker_connections * (128 + 1) or so - 
that's about 5 million open files per worker (or you could 
consider reducing max_concurrent_streams, or worker_connections, 
or both).  And of course you'll have to add some headroom for 
various files not related to connections, such as logs and 
open_file_cache if you decide to keep it.
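
Putting the numbers together, a sketch of how these directives 
might relate (all values here are illustrative, derived from the 
40000 connections and 128 streams figures above):

    worker_processes       auto;

    # per worker: 40000 connections * (128 streams + 1 socket)
    # = 5160000 files, plus some headroom for logs and the like;
    # on Linux, fs.nr_open / fs.file-max must be high enough for
    # such a limit to be usable
    worker_rlimit_nofile   5200000;

    events {
        worker_connections  40000;
    }

    http {
        http2_max_concurrent_streams  128;  # the default; lowering
                                            # it reduces the needed
                                            # limit
    }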

> Finally, since this is the freenginx list: does freenginx offer anything
> over stock nginx which would help me in this use case? Even just a
> monitoring page with FD values would help.

I don't think there is a significant difference in this 
particular use case.  While freenginx provides various fixes and 
improvements, including fixes in open_file_cache, they won't 
make a difference here - the root cause of the issue you've hit 
is a fragile configuration combined with resource limits that 
are too low.

-- 
Maxim Dounin
http://mdounin.ru/

