Using nginx’s fastcgi_cache
With this technique, we can essentially skip the CMS entirely for front-end page requests.
Nginx has a built-in way to store the result of a PHP call, so the next time it’s needed it can serve the stored response straight from its cache, rather than have PHP do the work again. This is a bit like Craft’s cache tag, only even more efficient.
Things to be aware of
Nginx doesn’t provide a way to clear its cache when something in the CMS changes.
That ability is kept for the commercial Nginx Plus product. However, there are two options available to those of us not wanting to pay $1,350 per year for this feature.
Option one is to manually delete the cache when we change something. As the cache is stored as files in a location you specify, you can use SSH or SFTP to delete those files when you make a change in the CMS. That works, but it’s a bit clunky, so you could write a little script that listens on a particular URL and executes a bash script to do that for you.
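For example, here’s a minimal sketch of the clean-up part (the script name is hypothetical, and it assumes the cache directory we create later in this post; the bit that listens on a URL and triggers it is left up to you):

#!/usr/bin/env bash
# clear-fastcgi-cache.sh (hypothetical): wipe nginx's fastcgi cache files.
# Assumes the cache directory created below (/etc/nginx-cache) and a user
# with write access to it (e.g. run via sudo).
rm -rf /etc/nginx-cache/*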
Option two is to not worry about it. Instead, set the cache period to something short but useful, like half an hour. That means when you make a change in your CMS, it might take up to half an hour to be reflected on the front end of your site. No big deal for my use case, and likely not for most people’s blogs either.
Secret option number three is to use a third-party nginx module to manage cache invalidation. I’ve chosen not to do this: I’m wary of third-party modules, especially ones with little documentation, and given my lack of knowledge in this area I’d rather not go down that route yet.
I’ll be going with Option 2 – let my cache age out over a short period of time.
Setting up fastcgi_cache
The first thing we need to do is decide where we’re going to store the cache in the filesystem. That folder also needs to be owned by whichever user is running nginx – typically that’s www-data. Create a folder wherever you want it, for example:
mkdir /etc/nginx-cache
chown www-data /etc/nginx-cache
Now we need to define a cache key-zone in nginx. This is done inside the http { ... } block, because it needs to be accessible by any of the servers that may be defined later inside server { ... } blocks. Open your /etc/nginx/nginx.conf file and, inside the http { ... } block, add the following:
Setting up a cache key-zone called ‘phpcache’
# Cache files go in /etc/nginx-cache (in a two-level directory structure); the 'phpcache'
# key-zone gets 100MB of shared memory for keys, and entries not requested for 60 minutes are dropped
fastcgi_cache_path /etc/nginx-cache levels=1:2 keys_zone=phpcache:100m inactive=60m;
# The cache key: what makes one cached response distinct from another
fastcgi_cache_key "$scheme$request_method$host$request_uri";
Now all we need to do is configure the domain we’re interested in to use it. You should have an entry in your /etc/nginx/sites-available/ folder which defines your website, such as mysite.conf. Open that and, inside the server { ... } block, add:
set $no_cache 0;
# Don't cache the CMS admin area
location /admin {
    set $no_cache 1;
}
Next, you need to modify the block you have for handling PHP files so it looks like this:
location ~ [^/]\.php(/|$) {
    fastcgi_cache phpcache; # The name of the cache key-zone to use
    fastcgi_cache_valid 200 30m; # What to cache: 'code 200' responses, for half an hour
    fastcgi_cache_methods GET HEAD; # What to cache: only GET and HEAD requests (not POST)
    add_header X-Fastcgi-Cache $upstream_cache_status; # Lets us see whether the cache was HIT, MISS, or BYPASSED in a browser's Inspector panel
    fastcgi_cache_bypass $no_cache; # Don't pull from the cache if true
    fastcgi_no_cache $no_cache; # Don't save to the cache if true

    # the rest of your existing stuff to handle PHP files here
}
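For reference, a complete block might end up looking something like the sketch below; the fastcgi_pass socket path, the fastcgi_params include, and the SCRIPT_FILENAME line are assumptions based on a typical PHP-FPM setup, so keep whatever your existing PHP handler block already uses:

location ~ [^/]\.php(/|$) {
    fastcgi_cache phpcache;
    fastcgi_cache_valid 200 30m;
    fastcgi_cache_methods GET HEAD;
    add_header X-Fastcgi-Cache $upstream_cache_status;
    fastcgi_cache_bypass $no_cache;
    fastcgi_no_cache $no_cache;

    # Typical PHP-FPM handling (assumed paths; keep your own)
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php/php-fpm.sock;
}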
That’s it, done. You just need to reload the configuration in nginx (on Debian that’s a case of running /etc/init.d/nginx reload).
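Once nginx has reloaded, you can check the cache is working from the command line by looking at the X-Fastcgi-Cache header we added earlier (example.com stands in for your own domain):

curl -sI https://example.com/ | grep -i x-fastcgi-cache
# First request:  X-Fastcgi-Cache: MISS  (PHP generated the page and it was stored)
# Second request: X-Fastcgi-Cache: HIT   (the page came straight from the cache)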
My TTFB is now down in the 0.04 second range on any page which has been cached. That’s pretty much instant.
You can learn a lot more about what the various options and parts do in the official documentation, but this should be enough to get things working for you.
Source: Speed up website response… | Matt Wilcox, Web Developer & Tinkerer