Module ngx_http_upstream_module
The ngx_http_upstream_module
module
is used to define groups of servers that can be referenced
by the proxy_pass,
fastcgi_pass,
uwsgi_pass,
scgi_pass,
memcached_pass, and
grpc_pass directives.
Example Configuration
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
Directives
Syntax:  upstream name { ... }
Default: —
Context: http
Defines a group of servers. Servers can listen on different ports. In addition, servers listening on TCP and UNIX-domain sockets can be mixed.
Example:
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080       max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;

    server backup1.example.com  backup;
}
By default, requests are distributed between the servers using a
weighted round-robin balancing method.
In the above example, every 7 requests will be distributed as follows:
5 requests go to backend1.example.com
and one request to each of the second and third servers.
If an error occurs during communication with a server, the request will
be passed to the next server, and so on until all of the functioning
servers have been tried.
If a successful response could not be obtained from any of the servers,
the client will receive the result of the communication with the last server.
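How these retries are tuned is up to the proxying configuration; as a sketch, the proxy_next_upstream and proxy_next_upstream_tries directives of ngx_http_proxy_module control which outcomes count as a failed attempt and how many servers are tried (the values below are illustrative, not recommendations):

server {
    location / {
        proxy_pass http://backend;

        # treat connection errors, timeouts, and 502/503 responses
        # as failed attempts that trigger the next server
        proxy_next_upstream error timeout http_502 http_503;

        # stop after trying at most 3 servers
        proxy_next_upstream_tries 3;
    }
}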
Syntax:  server address [parameters];
Default: —
Context: upstream
Defines the address
and other parameters
of a server.
The address can be specified as a domain name or IP address,
with an optional port, or as a UNIX-domain socket path
specified after the “unix:” prefix.
If a port is not specified, port 80 is used.
A domain name that resolves to several IP addresses defines
multiple servers at once.
The following parameters can be defined:

- weight=number
  sets the weight of the server, by default, 1.

- max_conns=number
  limits the maximum number of simultaneous active connections to the
  proxied server (1.11.5).
  Default value is zero, meaning there is no limit.
  If the server group does not reside in the shared memory,
  the limitation works per each worker process.
  If idle keepalive connections, multiple workers, and the shared memory
  are enabled, the total number of active and idle connections to the
  proxied server may exceed the max_conns value.

- max_fails=number
  sets the number of unsuccessful attempts to communicate with the server
  that should happen in the duration set by the fail_timeout parameter
  to consider the server unavailable for a duration also set by the
  fail_timeout parameter.
  By default, the number of unsuccessful attempts is set to 1.
  The zero value disables the accounting of attempts.
  What is considered an unsuccessful attempt is defined by the
  proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream,
  scgi_next_upstream, memcached_next_upstream, and
  grpc_next_upstream directives.

- fail_timeout=time
  sets
  - the time during which the specified number of unsuccessful attempts
    to communicate with the server should happen to consider the server
    unavailable;
  - and the period of time the server will be considered unavailable.

- backup
  marks the server as a backup server.
  It will be passed requests when the primary servers are unavailable.
  The parameter cannot be used along with the hash, ip_hash, and random
  load balancing methods.

- down
  marks the server as permanently unavailable.

If there is only a single server in a group, the max_fails and
fail_timeout parameters are ignored, and such a server will never be
considered unavailable.
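For illustration, a minimal sketch combining several of these parameters (hostnames and values are placeholders, not recommendations):

upstream backend {
    # primary servers with per-server tuning
    server backend1.example.com weight=5 max_conns=100;
    server backend2.example.com max_fails=3 fail_timeout=30s;

    # used only when the primary servers are unavailable
    server backup1.example.com  backup;

    # temporarily taken out of rotation
    server backend3.example.com down;
}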
Syntax:  zone name [size];
Default: —
Context: upstream
This directive appeared in version 1.9.0.
Defines the name
and size
of the shared
memory zone that keeps the group’s configuration and run-time state that are
shared between worker processes.
Several groups may share the same zone.
In this case, it is enough to specify the size
only once.
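For example, a sketch of a group placed in shared memory (the zone name and size here are illustrative):

upstream backend {
    # configuration and run-time state shared between worker processes
    zone upstream_backend 64k;

    server backend1.example.com;
    server backend2.example.com;
}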
Syntax:  hash key [consistent];
Default: —
Context: upstream
This directive appeared in version 1.7.2.
Specifies a load balancing method for a server group
where the client-server mapping is based on the hashed key
value.
The key
can contain text, variables, and their combinations.
Note that adding or removing a server from the group
may result in remapping most of the keys to different servers.
The method is compatible with the
Cache::Memcached
Perl library.
If the consistent
parameter is specified,
the ketama
consistent hashing method will be used instead.
The method ensures that only a few keys
will be remapped to different servers
when a server is added to or removed from the group.
This helps to achieve a higher cache hit ratio for caching servers.
The method is compatible with the
Cache::Memcached::Fast
Perl library with the ketama_points
parameter set to 160.
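For example, a sketch that keys the hash on the request URI; $request_uri is just one common choice of key:

upstream backend {
    # map each URI to a server via ketama consistent hashing
    hash $request_uri consistent;

    server backend1.example.com;
    server backend2.example.com;
}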
Syntax:  ip_hash;
Default: —
Context: upstream
Specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server except when this server is unavailable. In the latter case client requests will be passed to another server. Most probably, it will always be the same server as well.
IPv6 addresses are supported starting from versions 1.3.2 and 1.2.2.
If one of the servers needs to be temporarily removed, it should
be marked with the down
parameter in
order to preserve the current hashing of client IP addresses.
Example:
upstream backend {
    ip_hash;

    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}
Until versions 1.3.1 and 1.2.2, it was not possible to specify a weight for
servers using the ip_hash
load balancing method.
Syntax:  keepalive connections;
Default: —
Context: upstream
This directive appeared in version 1.1.4.
Activates the cache for connections to upstream servers.
The connections
parameter sets the maximum number of
idle keepalive connections to upstream servers that are preserved in
the cache of each worker process.
When this number is exceeded, the least recently used connections
are closed.
It should be particularly noted that the keepalive directive does not limit
the total number of connections to upstream servers
that an nginx worker process can open.
The connections parameter should be set to a number small enough
to let upstream servers process new incoming connections as well.
When using load balancing methods other than the default
round-robin method, it is necessary to activate them before
the keepalive
directive.
Example configuration of memcached upstream with keepalive connections:
upstream memcached_backend {
    server 127.0.0.1:11211;
    server 10.0.0.2:11211;

    keepalive 32;
}

server {
    ...

    location /memcached/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}
For HTTP, the proxy_http_version
directive should be set to “1.1”
and the “Connection” header field should be cleared:
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
Alternatively, HTTP/1.0 persistent connections can be used by passing the “Connection: Keep-Alive” header field to an upstream server, though this method is not recommended.
For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work:
upstream fastcgi_backend {
    server 127.0.0.1:9000;

    keepalive 8;
}

server {
    ...

    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}
SCGI and uwsgi protocols do not have a notion of keepalive connections.
Syntax:  keepalive_requests number;
Default: keepalive_requests 1000;
Context: upstream
This directive appeared in version 1.15.3.
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed.
Closing connections periodically is necessary to free per-connection memory allocations. Therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.
Prior to version 1.19.10, the default value was 100.
Syntax:  keepalive_time time;
Default: keepalive_time 1h;
Context: upstream
This directive appeared in version 1.19.10.
Limits the maximum time during which requests can be processed through one keepalive connection. After this time is reached, the connection is closed following the subsequent request processing.
Syntax:  keepalive_timeout timeout;
Default: keepalive_timeout 60s;
Context: upstream
This directive appeared in version 1.15.3.
Sets a timeout during which an idle keepalive connection to an upstream server will stay open.
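Taken together, a sketch that tunes the connection cache with the keepalive_requests, keepalive_time, and keepalive_timeout directives described above (the values are illustrative, not recommendations):

upstream http_backend {
    server 127.0.0.1:8080;

    # up to 16 idle connections cached per worker process
    keepalive 16;

    # recycle a connection after 500 requests ...
    keepalive_requests 500;

    # ... or after it has been in use for an hour
    keepalive_time 1h;

    # close idle cached connections after 60 seconds
    keepalive_timeout 60s;
}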
Syntax:  least_conn;
Default: —
Context: upstream
This directive appeared in versions 1.3.1 and 1.2.2.
Specifies that a group should use a load balancing method where a request is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.
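For example, a minimal sketch enabling this method (hostnames are placeholders):

upstream backend {
    least_conn;

    server backend1.example.com;
    server backend2.example.com;
}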
Syntax:  random [two [method]];
Default: —
Context: upstream
This directive appeared in version 1.15.1.
Specifies that a group should use a load balancing method where a request is passed to a randomly selected server, taking into account weights of servers.
The optional two
parameter
instructs nginx to randomly select
two
servers and then choose a server
using the specified method.
The default method is least_conn
which passes a request to a server
with the least number of active connections.
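For example, a sketch using the “power of two choices” variant (hostnames are placeholders):

upstream backend {
    # pick two servers at random, then the one with fewer active connections
    random two least_conn;

    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}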
Embedded Variables
The ngx_http_upstream_module
module
supports the following embedded variables:
$upstream_addr
  keeps the IP address and port,
  or the path to the UNIX-domain socket of the upstream server.
  If several servers were contacted during request processing,
  their addresses are separated by commas, e.g.
  “192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock”.
  If an internal redirect from one server group to another happens,
  initiated by “X-Accel-Redirect” or error_page,
  then the server addresses from different groups are separated by colons, e.g.
  “192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80”.
  If a server cannot be selected,
  the variable keeps the name of the server group.

$upstream_bytes_received
  number of bytes received from an upstream server (1.11.4).
  Values from several connections are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_bytes_sent
  number of bytes sent to an upstream server (1.15.8).
  Values from several connections are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_cache_age
  age of the cache item (1.27.3).

$upstream_cache_key
  the cache key being used (1.27.1).

$upstream_cache_status
  keeps the status of accessing a response cache (0.8.3).
  The status can be either “MISS”, “BYPASS”, “EXPIRED”, “STALE”,
  “UPDATING”, “REVALIDATED”, or “HIT”.

$upstream_connect_time
  keeps time spent on establishing a connection with the upstream server (1.9.1);
  the time is kept in seconds with millisecond resolution.
  In case of SSL, includes time spent on handshake.
  Times of several connections are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_cookie_name
  cookie with the specified name sent by the upstream server
  in the “Set-Cookie” response header field (1.7.1).
  Only the cookies from the response of the last server are saved.

$upstream_header_time
  keeps time spent on receiving the response header from the upstream server (1.7.10);
  the time is kept in seconds with millisecond resolution.
  Times of several responses are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_http_name
  keep server response header fields.
  For example, the “Server” response header field
  is available through the $upstream_http_server variable.
  The rules of converting header field names to variable names are the same
  as for the variables that start with the “$http_” prefix.
  Only the header fields from the response of the last server are saved.

$upstream_response_length
  keeps the length of the response obtained from the upstream server (0.7.27);
  the length is kept in bytes.
  Lengths of several responses are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_response_time
  keeps time spent on receiving the response from the upstream server;
  the time is kept in seconds with millisecond resolution.
  Times of several responses are separated by commas and colons
  like addresses in the $upstream_addr variable.

$upstream_status
  keeps status code of the response obtained from the upstream server.
  Status codes of several responses are separated by commas and colons
  like addresses in the $upstream_addr variable.
  If a server cannot be selected,
  the variable keeps the 502 (Bad Gateway) status code.

$upstream_trailer_name
  keeps fields from the end of the response obtained from the upstream server (1.13.10).
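As a usage sketch, these variables are often written to the access log with the log_format and access_log directives of ngx_http_log_module; the format name and the selection of fields below are illustrative:

http {
    # record which upstream served each request and how long it took
    log_format upstream_log '$remote_addr "$request" '
                            'upstream=$upstream_addr status=$upstream_status '
                            'connect=$upstream_connect_time '
                            'header=$upstream_header_time '
                            'response=$upstream_response_time';

    server {
        ...
        access_log /var/log/nginx/upstream.log upstream_log;
    }
}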