Serving Static Content
Root Directory and Index Files
If a request ends with a slash, NGINX treats it as a request for a directory and tries to find an index file in the directory. The index directive defines the index file's name (the default value is index.html). You can list more than one filename in the index directive. NGINX searches for the files in the specified order and returns the first one it finds.
```nginx
location / {
    root  /data;
    index index.html index.php;
}

location ~ \.php$ {
    fastcgi_pass localhost:8000;
    # ...
}
```
Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is mapped to the second location. As a result, the request is proxied.
Trying Several Options
```nginx
server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}
```
In this case, if the file corresponding to the original URI doesn't exist, NGINX makes an internal redirect to the URI specified by the last parameter, returning /www/data/images/default.gif.

The last parameter can also be a status code (directly preceded by the equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters to the try_files directive resolves to an existing file or directory.
```nginx
location / {
    try_files $uri $uri/ $uri.html =404;
}
```
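The named-location option mentioned above can be sketched as follows. The @image_fallback name is illustrative, and localhost:8000 mirrors the backend used in the other examples here:

```nginx
location /images/ {
    # Serve the file if it exists; otherwise hand off to the named location
    try_files $uri @image_fallback;
}

location @image_fallback {
    # Requests for missing images are proxied to a backend instead
    proxy_pass http://localhost:8000;
}
```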
Optimizing Performance for Serving Content
Enabling sendfile
By default, NGINX handles file transmission itself, copying the file into an internal buffer before sending it. Enabling the sendfile directive eliminates this copy step and allows data to be copied directly from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying the worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, to 1 MB):
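A sketch of such a configuration (the /mp3 location is illustrative):

```nginx
location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    # ...
}
```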
Optimizing the Backlog Queue
Displaying the Listen Queue
To display the current listen queue, run this command (netstat -L is available on FreeBSD and other BSD systems; on Linux, ss -ltn shows the comparable Recv-Q and Send-Q columns for listening sockets):

```shell
netstat -Lan
```
In the following output, the number of unaccepted connections (192) on port 80 exceeds the limit of 128. This is quite common when a web site experiences heavy traffic.
```
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen           Local Address
0/0/128          *.12345
192/0/128        *.80
0/0/128          *.8080
```
To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both your operating system and the NGINX configuration.
1. Tuning the Operating System (ref: https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/#tuning-the-operating-system)
2. Tuning NGINX
If you set the somaxconn kernel parameter to a value greater than 512, change the backlog parameter of the NGINX listen directive to match:
```nginx
server {
    listen 80 backlog=4096;
    # ...
}
```
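On Linux, the somaxconn kernel parameter mentioned above can be inspected and raised with sysctl. A sketch, where 4096 is an illustrative value:

```shell
# Check the current per-socket accept-queue limit
sysctl net.core.somaxconn

# Raise it for the running kernel (requires root)
sudo sysctl -w net.core.somaxconn=4096

# Persist the setting across reboots
echo "net.core.somaxconn = 4096" | sudo tee -a /etc/sysctl.conf
```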
NGINX Reverse Proxy
Passing a Request to a Proxied Server
```nginx
location /some/path/ {
    proxy_pass http://www.example.com/link/;
}
```
If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here a request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of the URI to be replaced, the full request URI is passed (possibly modified).
Passing Request Headers
By default, NGINX redefines two header fields in proxied requests, Host and Connection, and eliminates header fields whose values are empty strings. To change these settings, as well as to modify other header fields, use the proxy_set_header directive.
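For example, to pass the client's original Host header and IP address to the proxied server (a common sketch; localhost:8000 mirrors the other examples here):

```nginx
location /some/path/ {
    # Forward the Host header from the client request
    proxy_set_header Host $host;
    # Pass the client's address in a custom header
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}
```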
To prevent a header field from being passed to the proxied server, set it to an empty string as follows:
```nginx
location /some/path/ {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://localhost:8000;
}
```
Configuring Buffers
By default, NGINX buffers responses from proxied servers. A response is stored in internal buffers and is not sent to the client until the whole response is received. Buffering helps to optimize performance with slow clients, which could otherwise waste proxied-server time if responses were passed from NGINX to the client synchronously. With buffering enabled, NGINX lets the proxied server finish processing responses quickly, while NGINX stores them for as long as the clients need to download them.
```nginx
location /some/path/ {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:8000;
}
```
proxy_buffers number size
Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page, either 4K or 8K depending on the platform.

proxy_buffer_size size
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, this buffer is also equal to one memory page (4K or 8K depending on the platform), but it can be made smaller.
If buffering is disabled, the response is sent to the client synchronously, as NGINX receives it from the proxied server. This behavior may be desirable for fast interactive clients that need to start receiving the response as soon as possible.

To disable buffering in a specific location, place the proxy_buffering directive in the location with the off parameter. In this case NGINX uses only the buffer configured by proxy_buffer_size to store the current part of a response.
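A minimal sketch of disabling buffering (localhost:8000 mirrors the earlier examples):

```nginx
location /some/path/ {
    # Stream the response to the client as it arrives from the backend
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}
```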
Choosing an Outgoing IP Address
If your proxy server has several network interfaces, sometimes you might need to choose a particular source IP address for connecting to a proxied server or an upstream. This may be useful if a proxied server behind NGINX is configured to accept connections from particular IP networks or IP address ranges.
To specify the source address, use the proxy_bind directive:

```nginx
location /app1/ {
    proxy_bind 127.0.0.1;
    proxy_pass http://example.com/app1/;
}
```