How to cache your website using NGINX Reverse Proxy and Proxy-Cache in CentOS 7


A couple of days ago I posted a tutorial about caching a web site using Nginx and FastCGI-Cache in CentOS 7, which is a great alternative to the reverse-proxy cache technique when dealing with PHP-based dynamic pages. Today I would like to complete the picture by talking about Nginx's HTTP reverse-proxying capabilities, which allow the web server to pass requests off to backend HTTP servers for further processing.

As a matter of fact, Nginx is often set up as a reverse proxy - just like dedicated applications such as Squid and Varnish - to help scale out an infrastructure or to ease the job of one or more upstream web servers that are not suited to handle large client loads. Nginx is an excellent product for these kinds of scenarios, as it has great built-in load balancing, buffering and caching capabilities.

IMPORTANT: if you are using Nginx as a reverse proxy for a WordPress-enabled web site (hosted by Apache, Nginx or IIS), you might also want to check out this other post for some additional settings required to fine-tune that specific scenario.

What's a Reverse Proxy?

Those who have only used a single web server for simple tasks - such as a Joomla, WordPress or phpBB based web site - might be wondering what a reverse proxy actually is. As its name implies, a reverse proxy does the exact opposite of what a forward proxy - often simply called "proxy" - actually does: while a forward proxy acts on behalf of clients or requesting hosts, a reverse proxy acts on behalf of servers. To put it in other words, if a forward proxy can be pictured as a centralized bucket that optimizes the requests made by one or more connected clients, a reverse proxy is a centralized bucket that optimizes the responses of one or more connected upstream servers (or workers) - and serves them to the requesting clients.


That's great to hear, but... why would we need to proxy an HTTP request?

As a matter of fact, there are multiple reasons for doing that: for example scaling, intended as the ability to scale out our infrastructure. Nginx is built to handle many concurrent connections at the same time, which makes it ideal for being the clients' endpoint. Whenever it receives the requests, it can pass them to any number of backend servers to handle the bulk of the work, spreading the load across our whole infrastructure. This design also provides us with flexibility, as we can easily add or remove backend servers as needed for maintenance, increased bandwidth or request loads, tests and so on.

Another good reason to proxy HTTP requests is stability, which is also a field where Nginx shines: conversely, there are a lot of web servers - for example, those packaged with most development frameworks or web-based applications - that are not as robust. Putting Nginx in front of these servers will also lead to an increased level of security, since most web-based malicious attacks are designed to cause stability issues and exploit them to harm the host.

Last but not least comes performance: since a reverse proxy can conditionally cache some responses for multiple clients at the web-server level, it will actually prevent most server-side processors and pre-processors such as PHP, ASP.NET, Java, Python and so on from having to perform the same work multiple times, thus reducing their required overhead - with great benefits for CPU and memory.

How does it work?

When Nginx acts as a proxy, it processes each request by checking its cache first; if the response is not there, it passes the request to the upstream server(s) for the actual processing, gets the result, conditionally caches it and then relays it to the client. The next time it receives that same request, assuming it could be cached, it will just take the response from the cache and serve it without passing anything to the upstream server(s).

This, in a nutshell, is what a reverse-proxy actually does.

It's worth noting that the upstream servers can be local applications, remote machines or even other virtual servers defined within the same Nginx instance: they can be standard HTTP(S) servers such as Apache and IIS or even FastCGI, SCGI, uwsgi and Memcached services. In this post we'll assume that we want to cache a standard HTTP web server: for other scenarios, check out the Nginx FastCGI-Cache article mentioned earlier.

Installing the prerequisites

Now that we're done with the introductory part, let's install Nginx.

Let's start with typing the following commands from a terminal:
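Here's a minimal sketch of those commands, assuming that Nginx gets installed from the EPEL repository (the usual source for the nginx package on CentOS 7):

    # add the EPEL repository, then install Nginx
    sudo yum install epel-release
    sudo yum install nginx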

Answer "Y" to all the questions until the terminal says that the installation is complete.

Once done, we could start the Nginx service and also have it start automatically on each startup with the following lines:
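A reasonable sketch of those lines, using the systemd tools shipped with CentOS 7:

    # start Nginx now and enable it at every boot
    sudo systemctl start nginx
    sudo systemctl enable nginx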

However, activating it now wouldn't be a wise choice, as Nginx would start with its default /etc/nginx/nginx.conf configuration file, which is most likely not suited to our desired scenario - and lacks any proxying features. That's why, before doing that, we need to perform some changes to that configuration file's settings (see below).

Basic Proxy implementation

Let's start with a sample nginx.conf configuration file that could be used to implement a simple, straightforward proxy without any caching features:
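The original file isn't shown here, so what follows is a minimal sketch: the example.com server name is a placeholder, and the Apache origin is assumed to listen on TCP port 82 of the same machine (as explained below):

    http {
        server {
            listen 80;
            server_name example.com;

            location / {
                # hand every request over to the Apache origin server
                proxy_pass http://127.0.0.1:82;

                # forward the original host and client address to the origin
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
    }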

As we can see, we're using the origin-edge pattern: to put that concept in very concise words, Nginx works here as an edge server that will (reverse) proxy an Apache web server, which is called the origin. If the origin server is installed on the same machine, we need to configure it to listen on a non-standard TCP port for HTTP traffic - such as TCP 82 - because TCP 80 and/or TCP 443 will be used by the edge server. It goes without saying that, if the origin server is installed on a different machine, we just need to write the proper hostname (or IP address) - assuming that we can reach it via the Internet, a VPN or anything else.

Load-Balancing

In case we have multiple servers, we can configure multiple origin servers in the following way:
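A sketch with three placeholder hostnames, grouped in an upstream block that the location can then reference by name:

    upstream origin-servers {
        # no balancing directive here means round-robin, the default
        server origin1.example.com:82;
        server origin2.example.com:82;
        server origin3.example.com:82;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://origin-servers;
        }
    }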

In the above example we've set up a simple round-robin selection process, meaning that each request will be routed to a different host in turn.

Balancing Algorithms

If we want a more complex load-balancing behavior, we can pick a custom algorithm using one of the following directives:

  • least_conn: Specifies that new connections should always be given to the backend that has the least number of active connections: this can be especially useful in situations where connections to the backend may persist for some time.
  • ip_hash: This balancing algorithm distributes requests to different servers based on the client's IP address: the first three octets are used as a key to decide on the server to handle the request. The result is that clients tend to be served by the same server each time, which can assist in session consistency.
  • hash: This balancing algorithm is mainly used with memcached proxying: the servers are divided based on the value of an arbitrarily provided hash key (text, variables, or a combination of them). This is the only balancing method that requires the user to provide data, which is the key that should be used for the hash.

For example, if we configure our upstream block in the following way:
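A sketch reusing the upstream block defined above:

    upstream origin-servers {
        least_conn;
        server origin1.example.com:82;
        server origin2.example.com:82;
        server origin3.example.com:82;
    }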

The origin server will be selected based on which one has the least connections.

If we do that instead:
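Again a sketch, based on the same placeholder hostnames:

    upstream origin-servers {
        ip_hash;
        server origin1.example.com:82;
        server origin2.example.com:82;
        server origin3.example.com:82;
    }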

Most of the requests originating from the same user will be routed to the same origin server: this could be a great way to implement "sticky sessions" for some simple scenarios.

Server Weight

When we load-balance our traffic among multiple origin servers, Nginx gives the same weight to each one of them by default. This means that we assume that each server can - and should - handle the same amount of requests, regardless of the chosen algorithm (see above).

If we're not happy with that, we can change the weight of each server in the following way:
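A sketch consistent with the example described below, where origin1.example.com gets a double weight:

    upstream origin-servers {
        server origin1.example.com:82 weight=2;
        server origin2.example.com:82;
        server origin3.example.com:82;
    }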

In the above example, origin1.example.com will receive twice as much traffic as each of the other two servers: the default weight (when not set) is always equal to 1.

Configure Buffering

Let's now introduce the first of the two advanced topics regarding Proxy-Cache: the Nginx buffering feature. In a nutshell, it's a tool to reduce the lag caused by the double connection required by the reverse-proxy pattern:

  • The connection from the client to the Nginx proxy.
  • The connection from the Nginx proxy to the backend server.

Without buffers, data sent from the proxied server immediately begins to be transmitted to the client: conversely, Nginx gives the administrator the chance to optimize both connections by properly configuring a wide set of buffers. If they are enabled, the Nginx proxy will temporarily store the backend's response and then feed this data to the client. If the client is slow, this behavior will allow the Nginx server to close the connection to the backend sooner and free the related process (and system resources).

Nginx defaults to a buffering design since clients tend to have vastly different connection speeds: the buffering default behavior can be adjusted with the following directives:

  • proxy_buffering: This directive controls whether buffering for this context and child contexts is enabled. By default, this is "on".
  • proxy_buffers: This directive controls the number (first argument) and size (second argument) of buffers for proxied responses. The default is to configure 8 buffers of a size equal to one memory page (either 4k or 8k). Increasing the number of buffers can allow you to buffer more information.
  • proxy_buffer_size: The initial portion of the response from a backend server, which contains headers, is buffered separately from the rest of the response. This directive sets the size of the buffer for this portion of the response. By default, this will be the same size as proxy_buffers, but since this is used for header information, this can usually be set to a lower value.
  • proxy_busy_buffers_size: This directive sets the maximum size of buffers that can be marked "client-ready" and thus busy. While a client can only read the data from one buffer at a time, buffers are placed in a queue to send to the client in bunches. This directive controls the size of the buffer space allowed to be in this state.
  • proxy_max_temp_file_size: This is the maximum size, per request, for a temporary file on disk. These are created when the upstream response is too large to fit into a buffer.
  • proxy_temp_file_write_size: This is the amount of data Nginx will write to the temporary file at one time when the proxied server's response is too large for the configured buffers.
  • proxy_temp_path: This is the path to the area on disk where Nginx should store any temporary files when the response from the upstream server cannot fit into the configured buffers.

The above directives can be set in the http, server, or location contexts. It is important to keep in mind that the sizing directives are configured per request, so increasing them beyond your needs can affect performance when there are many client requests: the most useful ones to adjust are often proxy_buffers and proxy_buffer_size, as the others already have sensible default values that work well in most scenarios.
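As a reference, here's how these directives might be laid out in the http context; the values below are merely illustrative placeholders, not recommendations:

    http {
        proxy_buffering on;
        proxy_buffer_size 1k;
        proxy_buffers 24 4k;
        proxy_busy_buffers_size 8k;
        proxy_max_temp_file_size 2048m;
        proxy_temp_file_write_size 32k;
        proxy_temp_path /var/cache/nginx/tmp;
    }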

Configure Caching

Although the buffering feature can be great to free up the origin server(s)' resources, Nginx has an even better tool in its shed: we're talking about caching, which is often the main reason for a system administrator to set up a reverse proxy to begin with. If you stumbled upon this post, it probably means that this is the topic you were looking for; for those who aren't familiar with it, let's try to summarize what proxy-caching is about: in a nutshell, it's a Nginx feature that can be used to conditionally cache content from backend servers, thus eliminating the need to connect to the upstream at all for many requests.

The main directive required to set up the proxy-cache is proxy_cache_path: this will create an area where data returned from the proxied servers can be stored and kept for future usage. The proxy_cache_path directive must be set in the http context of the nginx.conf file.

Here's a basic example of proxy-caching:
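The original snippet isn't shown here, so here's a sketch consistent with the rest of this post (the edge-cache zone name, the /var/cache/nginx path and the sizes are assumptions):

    http {
        # reserve 10 MB of shared memory for cache keys
        # and up to 1 GB of disk space for cached responses
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge-cache:10m max_size=1g inactive=60m;

        # cache OK responses and temporary redirects for 10 minutes,
        # permanent redirects for 1 hour and everything else for 1 minute
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 301 1h;
        proxy_cache_valid any 1m;
    }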

As we can see, we will cache all the pages returning HTTP 200 (status OK) or 302 (temporary redirect) for 10 minutes, those returning HTTP 301 (permanent redirect) for an entire hour, and anything else (such as 404, error 500 and so on) for one minute.

If we add the above lines to our nginx.conf file, we can use the edge-cache zone in any location section in the following way:
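A minimal sketch; the add_header line is an optional helper that exposes the cache status (HIT, MISS and so on) in a response header, which makes testing easier:

    location / {
        proxy_cache edge-cache;
        proxy_pass http://origin-servers;
        add_header X-Proxy-Cache $upstream_cache_status;
    }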

That's basically it.

Advanced Proxy-Cache Implementation

Time to put everything together: here's a fully-featured nginx.conf file implementing the Proxy, Load-Balancing, Buffering and Proxy-Cache features:
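Since the original file isn't shown here, the following is a sketch assembled from the fragments above: hostnames, paths and sizes are placeholders to adapt to your own environment:

    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # load-balanced pool of origin servers (placeholder hostnames)
        upstream origin-servers {
            least_conn;
            server origin1.example.com:82 weight=2;
            server origin2.example.com:82;
            server origin3.example.com:82;
        }

        # buffering (illustrative values)
        proxy_buffering on;
        proxy_buffer_size 1k;
        proxy_buffers 24 4k;
        proxy_busy_buffers_size 8k;
        proxy_temp_path /var/cache/nginx/tmp;

        # proxy-cache zone and retention rules
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge-cache:10m max_size=1g inactive=60m;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 301 1h;
        proxy_cache_valid any 1m;

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_cache edge-cache;
                proxy_pass http://origin-servers;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                add_header X-Proxy-Cache $upstream_cache_status;
            }
        }
    }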

Again, in case you're reverse-proxying a WordPress web site, don't forget to also read this post.

Setting Permissions

Before firing up the engine and starting Nginx, we need to ensure that the Linux user we're using for the web-server related tasks (apache, nginx, www-data or any other), together with its relevant group, will be able to access the required www and Nginx cache folders. Here are the folders that you need to check (see the sketch after this list for a quick way to set the ownership):

  • Your website(s) path, such as /var/www/example.com
  • The Nginx cache folder: /var/cache/nginx
  • The Nginx cache temporary folder: /var/cache/nginx/tmp
  • The Nginx temporary folders: /var/lib/nginx and /var/lib/nginx/tmp (IMPORTANT: you need to set permissions on both of them - see this ServerFault thread).
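A quick sketch for setting the ownership, assuming the nginx user and group and the paths listed above (replace /var/www/example.com with your actual website path):

    # website root
    sudo chown -R nginx:nginx /var/www/example.com

    # Nginx cache and temporary folders
    sudo chown -R nginx:nginx /var/cache/nginx /var/cache/nginx/tmp
    sudo chown -R nginx:nginx /var/lib/nginx /var/lib/nginx/tmp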

That's it for now: if something goes wrong with your configuration, feel free to contact us and we'll be happy to help you figure out what you did wrong. If you're mostly using PHP-based web sites such as WordPress, Joomla, phpBB and so on, we strongly suggest reading this other article about Nginx FastCGI caching, which is often a faster and more reliable alternative for those scenarios.

Opening the Firewall Port(s)

The default CentOS firewall rules do not allow inbound HTTP / HTTPS traffic, hence it's necessary to open up some TCP ports for a web server such as Nginx to accept connections from the outside. How it can be done depends on the firewall that our CentOS machine is actually using: firewalld or iptables.

Firewalld

These are the shell commands to open up Firewalld (assuming that the public zone has been assigned to the WAN network interface):
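A sketch of those commands, assuming the firewalld service is running:

    sudo firewall-cmd --zone=public --permanent --add-service=http
    sudo firewall-cmd --zone=public --permanent --add-service=https
    sudo firewall-cmd --reload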

Iptables

These are the rules to set for Iptables (assuming that we want to accept traffic coming from the eth0 network interface):
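A sketch of those rules; the save command assumes the iptables-services package is installed, otherwise the rules will be lost on reboot:

    sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
    sudo iptables -A INPUT -i eth0 -p tcp --dport 443 -j ACCEPT
    sudo service iptables save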

Further references

This post is part of a series of articles, tutorials and guides on the NGINX web server & reverse proxy. To read the other posts, click here!

 

About Ryan

IT Project Manager, Web Interface Architect and Lead Developer for many high-traffic web sites & services hosted in Italy and Europe. Since 2010 he's also been a lead designer for many apps and games for Android, iOS and Windows Phone mobile devices for a number of Italian companies. Microsoft MVP for Development Technologies since 2018.

