Make WordPress Scale, on a Budget


If you create great content, your WordPress site is going to get a lot of traffic. That’s a good thing! One of our clients has done just that, but his success brought a couple of problems: he has become popular in general, bringing in over 10,000 visitors on busy days, many of whom browse around the site. And worse, he has also become popular on Twitter.

This means that when he tweets about an update, he and his many followers create huge spikes in traffic. But there’s an issue of cost: the site, Sniff Petrol, carries no advertising and is essentially a spare-time project for its owner, which means there aren’t thousands to spend on its hosting. We needed to manage these spikes well but keep the costs down. A bigger server, as offered by the hosting company, was not the answer. It was time to geek out.

Experiments

Running experiments is the only way to test what will improve your site’s performance. Below are our admittedly rather technical findings. We hope you find them useful.

sniffpetrol.com is a WordPress-based motoring and motorsport satire site. It is currently hosted on a Linode VPS (Virtual Private Server) [affiliate link] with 4 CPU cores running at 2.27GHz and 1GB of RAM. A LAMP (Linux, Apache, MySQL and PHP) stack is used to serve the site.

This article outlines the problems we encountered when the site experienced a sudden spike in traffic, and the methods we employed to make it more responsive under heavy load without resorting to a more expensive server. A brief guide to how we implemented our solution is given below, along with the changes made to the Apache, PHP and MySQL configuration settings.

The Problem

When using the default configurations for Apache, PHP and MySQL, with no server-side caching, we found during load testing that load times increased sharply as the number of concurrent users passed the 250 mark. System load also reached such high levels that the server became completely unresponsive (sometimes to the point of needing a manual reboot) due to excessive disk “thrashing”, caused by the system rapidly swapping Apache processes between RAM and disk in an attempt to free up more RAM to serve additional clients.

The Solution

The PHP, Apache and MySQL configurations on the server were changed from the defaults and the APC (Alternative PHP Cache) caching module was installed. In order to make best use of the APC caching module, the W3 Total Cache WordPress plug-in (version 0.9.1.3) was installed on the site. A brief guide to installing APC and W3-Total-Cache and getting them to work together can be found in the next section.

So, why did we decide to use the APC caching PHP module, or any other method of server-side caching for that matter? The short answer is: Efficiency.

APC allows us to cache dynamically generated content. This cached content can then be sent to the client when a request for it is received, instead of wasting more server resources to regenerate it when nothing has changed. This considerably reduces the load on the server.
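
To make the idea concrete, here’s a minimal sketch of APC’s user-cache API. W3 Total Cache does all of this for you behind the scenes; the snippet (and the build_front_page() function in it) is purely illustrative:

<?php
// Look for a previously cached copy of the page in APC's shared memory.
$html = apc_fetch('front_page_html');
if ($html === false) {
    // Cache miss: regenerate the page (build_front_page() is a hypothetical
    // expensive call) and keep it for 300 seconds so later requests skip the work.
    $html = build_front_page();
    apc_store('front_page_html', $html, 300);
}
echo $html;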

We also made use of Amazon’s CloudFront CDN (Content Delivery Network) and S3 services to store and serve static content (theme files and images, for example) to further lighten the load on the server. Our main reasons for choosing Amazon’s CDN solution were the pay-as-you-go pricing structure and the low storage/data transfer costs. A table detailing the costs can be found here.

The W3 Total Cache plug-in allows you to configure the site to use Amazon’s S3 and CloudFront services as a CDN from the WordPress dashboard. It uploads the theme files and other static content, and rewrites the URLs of uploaded files automatically. Overall, we were very impressed by how intuitive the whole setup process was. One online guide we found useful when setting up the CDN can be found on the Freedom Target site.

Getting APC and W3-Total-Cache Up and Running

If you are using Ubuntu Server, installing the APC Caching module on your server is as simple as running the command below:

sudo apt-get install php-apc

You will then need to restart Apache when the installation process has finished. Ubuntu/Debian users can do this by issuing the following command:

sudo /etc/init.d/apache2 restart
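
The defaults will get you started, but the size of APC’s shared-memory segment is worth a look. As a sketch, assuming the Ubuntu package reads its settings from /etc/php5/conf.d/apc.ini (the exact path varies between releases):

apc.enabled = 1
; Size of the shared-memory segment holding the cache. Older APC builds
; expect a bare megabyte count here (e.g. 64) rather than the "64M" shorthand.
apc.shm_size = 64M
; Seconds an unused entry may occupy its slot before it can be reclaimed.
apc.ttl = 3600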

The installation and configuration of the W3-Total-Cache plug-in is a little more involved.

Before you install the plug-in, you will need to make sure that you have the following Apache server modules installed and enabled (a one-line command to enable them on Ubuntu/Debian follows the list):

  • expires
  • mime
  • deflate
  • headers
  • env
  • setenvif
  • rewrite
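
On Ubuntu/Debian, all seven can be enabled in one go (modules that are already enabled are simply reported as such), followed by an Apache restart:

sudo a2enmod expires mime deflate headers env setenvif rewrite
sudo /etc/init.d/apache2 restart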

It’s best to obtain the latest stable version from the WordPress plug-in SVN repository and upload the files to your server manually, rather than using the installer integrated into WordPress.

The plug-in comes with quite comprehensive documentation in the form of a readme file. Other setup guides can also be found quite easily on the Web. One installation guide we found useful can be found here.

When you have everything installed and the W3-Total-Cache plug-in has been activated, you will need to configure it to use the APC caching module on the server. To do this, select General Settings from the Performance menu in the WordPress dashboard and, in the drop-down list next to each option (Page Cache, Minify, Database Cache and Object Cache), select ‘Opcode: Alternative PHP Cache (APC)’. Make sure that the Enable checkbox is ticked for each option, and then click the Save Changes button next to each one.
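
One thing worth checking if the page cache doesn’t seem to take effect: on activation, W3-Total-Cache should add the line below to wp-config.php itself, but restrictive file permissions can stop it doing so, in which case you can add it by hand:

define('WP_CACHE', true); // tells WordPress to load the plug-in's page cache early in each request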

Server Configuration Changes

The changes made to the configurations for each component of the LAMP stack are outlined below:

Apache

The following changes were made to the ‘Prefork MPM’, ‘Worker MPM’ and ‘Event MPM’ sections of the apache2.conf configuration file (only the section for the MPM your Apache was built with actually takes effect); a sketch of the resulting configuration follows the list:

  • The Timeout option was set to 150 seconds.
  • The KeepAliveTimeout option was set to 3 seconds to minimise the time each Apache process sits idle waiting for the client to send another request on a kept-alive connection.
  • The MaxClients option was set to 250 to allow for more concurrent users.
  • The MaxRequestsPerChild option was set to 400, both to limit the system resources any individual server process can consume and to allow resources (especially RAM) to be freed more quickly.
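
As a sketch, here’s how the relevant parts of a Debian-style apache2.conf might look with these values in place, shown for the prefork MPM (which the standard PHP module requires; the worker and event sections take the same MaxClients and MaxRequestsPerChild values). The StartServers and spare-server lines are the stock defaults rather than part of our tuning, and Timeout and KeepAliveTimeout live outside the MPM sections:

Timeout 150
KeepAlive On
KeepAliveTimeout 3

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          250
    MaxRequestsPerChild 400
</IfModule>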

MySQL

The relevant lines for the MySQL configuration file can be found below:

[mysqld]
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
myisam-recover          = BACKUP
query_cache_limit       = 1M
query_cache_size        = 16M

[isamchk]
key_buffer              = 16M
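
To confirm the query cache is earning its 16MB, MySQL’s status counters are worth a look; a Qcache_hits figure that is high relative to Com_select means the cache is doing useful work:

mysql> SHOW VARIABLES LIKE 'query_cache%';
mysql> SHOW STATUS LIKE 'Qcache%';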

PHP

To lessen the consumption of RAM by PHP scripts under heavy load, the memory_limit option in php.ini was changed to 64MB.
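
The corresponding line in php.ini:

; Maximum amount of memory a single script may allocate.
memory_limit = 64M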

Testing Method

The load testing service Load Impact was used to perform the load testing on the server.

For each test, we used a simulated load of 250-1000 simultaneous clients, with each ‘client’ spending an average of 20 seconds viewing a page. We started the test with an initial load of 250 clients and then ramped up the number of clients by 250 each time, up to the limit of 1000 simultaneous clients.
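
Load Impact handles the ramping and think-time simulation for you. For quick sanity checks between full runs, ApacheBench (ab, which ships with Apache) makes a rough stand-in, though it hammers a single URL with no think time, so its numbers aren’t directly comparable with the results below:

ab -n 10000 -c 250 -k http://sniffpetrol.com/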

Test Results

The user load time results using the amended Apache, MySQL and PHP configurations, but with no APC caching or CDN, are shown below.

[Graph: User Load Time (No APC caching or CDN enabled)]

Although the server did not become completely unresponsive, load times increased considerably after 250 clients, exceeding 10 seconds at approximately 350 clients. The bandwidth usage results for this test can be found in the graph below:

[Graph: Bandwidth Usage (No APC caching or CDN enabled)]

The maximum amount of bandwidth used in this test (approximately 33Mbps) was considerably less than the 100Mbps the server was capable of transferring. Taking both the user load time and bandwidth usage results into account, it was apparent that the server was not yet performing as efficiently as it should.

With the APC caching module used in conjunction with the W3-Total-Cache plug-in on the site, the reduction in load times was considerable: user load times at 1000 clients were approximately 25 times faster, as the graph below shows:

[Graph: User Load Time (Using W3-Total-Cache plug-in with APC caching)]

The bandwidth usage results for this test can be found in the graph below:

[Graph: Bandwidth Usage (Using W3-Total-Cache plug-in with APC caching)]

Although there is a considerable improvement in bandwidth usage up to 750 clients, at 1000 clients it drops back to around the same level (33Mbps) seen during the first test. This is possibly a consequence of the VPS sharing its network interface with other websites, and may even be due to a certain amount of bandwidth throttling at the host’s end.

Switching to Content Delivery Networks

When static content was served from the Amazon CDN and APC caching was enabled from within the W3-Total-Cache plug-in, we found that performance could be further improved:

[Graph: User Load Time (W3-Total-Cache plug-in and Amazon S3 CDN used)]

Although the improvement is not as dramatic as in the previous test, the increase in load times as the number of concurrent clients grows is much smoother than with APC caching alone. The bandwidth usage graph for this test can be found below; the data shown is the combined bandwidth usage of both the server and the CDN:

[Graph: Bandwidth Usage (W3-Total-Cache plug-in and Amazon S3 CDN used)]

Here, we found that bandwidth usage increased far more smoothly as the test progressed than in the test with APC caching and W3-Total-Cache alone. This is to be expected: the server no longer had to serve large static files, so fewer system resources were required to serve the same number of clients.

Conclusion

It’s easy to see that server-side caching and careful server configuration give excellent results. What using a content delivery network adds is that the delivery of content grows more consistently. One problem with many servers, and one which is rarely acknowledged, is the performance available from the network interface. Most won’t serve more than 100Mbps in theory, and about 70Mbps in practice. What can’t be seen in the charts is the momentary output peaks of over 130Mbps that we saw when using the content delivery network; the charts just show averages. As a consequence, it’s hard to show the improvement gained from using a CDN at the 1000-user level.

What we’d like to do in the future is test the server up to 5,000 concurrent users. That is serious traffic, and it also costs quite a bit of money to simulate. At the moment we know that the Sniff Petrol site can handle around 130,000+ page views per hour, but it may be able to handle a lot more, and we’d love to see how far it can be pushed. Would it be possible to serve up to a million pages in an hour without having to commission a massive server? Keep coming back, as we’ll be carrying out this test in the future.

As most of our clients use their own large-scale hosting (we work with newspapers and publishers a lot!), we’ve generally let them worry about hosting requirements. They usually do pretty well and have some impressive hardware. But recently we’ve started offering a managed WordPress hosting service to our clients, and have had to start learning about WordPress scaling ourselves. We love efficiency, and the idea of simply buying bigger boxes as a solution to performance problems appals us. Modern computers are incredibly powerful; they can do a lot, for very little money.
