The site was to be in a similar vein editorially to Buzzfeed / Ampp3d etc… with a focus purely on social traffic and interactive content like quizzes. SEO wasn’t on the agenda at all, which made a refreshing change.
We worked out what constituted the minimum viable product (MVP), which had a short turnaround time of 2 weeks, and set to work. After that we had a few days a week to make improvements, add features and fix bugs.
The following is a discussion of the approaches we took, some specific outcomes, and decisions made along the way.
Server setup
Typically we use our hosting partners in the Netherlands, Kumina.nl, to handle server setup, maintenance and backups. In this instance TMG gave us 2 Amazon EC2 instances, a UAT server (m3 small) and a production server (m3 medium).
The site was obviously going to be a collection of plugins plus a theme (aren’t they all?), so I wanted all of that managed through version control to make collaboration easier. I liked the way humanmade’s project starter template was laid out, with the WordPress install as a git submodule, the wp-config.php file set up to handle WordPress from any URL, and the content folder at the root level.
https://github.com/humanmade/hm-base
I knew it would be a pain to install everything we’d need on the servers manually, so I looked to the development environment as a basis for the production/UAT environments. We’ve been using Vagrant for local development for a while; personally I’ve been using VVV, but we also use PuPHPet to configure servers that match our own.
https://github.com/Varying-Vagrant-Vagrants/VVV
The VVV setup is pretty good out of the box and has a nice provisioning script that sets up the server, config files and sites. It made sense to me to make the development environment part of the project so that the scripts used to provision the vagrant machine could be used to provision staging and production in the same way.
Combining VVV into the project template was quite tricky and there are some things I could have done better, but the deadline for getting the MVP out was really tight. I modified the project template so that I could cd into the vagrant folder and run vagrant up to get the server provisioned and the site automatically set up. Maybe it’s a weird way to do it, but the project now not only defined the WordPress site (theme & plugins) but also its own development environment. The obvious advantage is that the software and setup would be the same whoever was working on it.
I split the VVV provisioning script into 2 parts: one that could be run on any server to install and configure all the needed software, and one with the vagrant/development-specific stuff like the code sniffer, unit tests, phpMyAdmin, memcache admin, and some code to generate SSH keys to be used for deployments later.
Git deployments were an attractive option for rolling out code and provisioning, as I was familiar with the process from working with Heroku and Rails. It wasn’t easy to set up from scratch though!
Most articles and tutorials suggest using a bare repository that points to the document root of the site, but this method doesn’t work with submodules. If you want to push changes to a remote repository and trigger git hooks, eg. to update submodules, you need the document root to be a fully fledged repo. The downside is that the .git folder is inside the document root, so you have to make sure outside access to it is locked down via the server config.
This article by Aaron Adams has the best way:
http://aaronadams.ca/post/37345904654/git-push-with-submodules
It also meant I could modify the git repo after the push to remove or move unnecessary files, like the vagrant and server config, out of the website document root.
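In essence the post-receive hook on the server checks the pushed branch out into the document root and pulls the submodules in. A simplified sketch, not the exact script (the cleanup step and paths here are illustrative):

#!/bin/bash
# .git/hooks/post-receive in the document root repository
# (the repo needs: git config receive.denyCurrentBranch ignore)

# the hook runs from inside .git with GIT_DIR set, so clear it and
# move up to the work tree before running normal git commands
unset GIT_DIR
cd ..

# force the pushed branch into the working copy
git checkout -f

# bring the WordPress core submodule (and any others) up to date
git submodule update --init --recursive

# illustrative: strip development-only files out of the public root
rm -rf vagrant puphpet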
Now that I had everything in git I could look at creating the remote repositories and pushing to them. I decided to create a few files to define the different environments: local.env, staging.env and production.env. These were just files I could source from a bash script to define variables such as the SSH user, hostname/IP, database details, website URL etc…
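To illustrate, staging.env contained something along these lines (the names and values here are made up, not the real config):

# staging.env - sourced by the deployment helper scripts
SSH_USER="deploy"
SSH_HOST="staging.example.com"
DOC_ROOT="/var/www/staging"
DB_NAME="wp_staging"
DB_USER="wp"
DB_PASS="not-the-real-password"
SITE_URL="http://staging.example.com"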
One important step I had to take before creating remotes was to set up SSH access from vagrant to the staging and production servers. The provisioning script copies the public/private keys you create to the virtual machine, so I just had to put my public key onto the production and staging servers, and the same for any colleagues working on the site.
VVV has a folder for scripts that get copied over to the virtual machine so I wrote a few helper scripts. The most important was the ‘create_remote’ script. Once I had SSH’d onto the vagrant box I could run this command:
create_remote <environment> <branch>
Where <environment> corresponded to an environment variable file, eg. staging.env, and <branch> was the git branch to use for the initial push. As part of creating the remote repository the script would set up a post-receive hook which runs the general provisioning script adapted from VVV. The nice thing about this is that when I push updates to the server the provisioner runs again, so software is updated, new software is installed and configured etc… Pretty sweet.
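Stripped right down, create_remote amounts to something like this sketch (the real script has more error handling, and installs the provisioning hook rather than the simple one shown earlier):

#!/bin/bash
# create_remote <environment> <branch>
ENVIRONMENT=$1
BRANCH=${2:-master}

# pull in SSH_USER, SSH_HOST, DOC_ROOT etc. for this environment
source "${ENVIRONMENT}.env"

# set up the document root as a full repo that accepts pushes
ssh "${SSH_USER}@${SSH_HOST}" "git init '${DOC_ROOT}' \
    && cd '${DOC_ROOT}' \
    && git config receive.denyCurrentBranch ignore"

# install the post-receive hook and make it executable
scp post-receive "${SSH_USER}@${SSH_HOST}:${DOC_ROOT}/.git/hooks/"
ssh "${SSH_USER}@${SSH_HOST}" "chmod +x '${DOC_ROOT}/.git/hooks/post-receive'"

# add the remote locally and do the initial push
git remote add "${ENVIRONMENT}" "ssh://${SSH_USER}@${SSH_HOST}${DOC_ROOT}"
git push "${ENVIRONMENT}" "${BRANCH}"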
I added Keychain to the list of software in provision.sh and added aliases for the ssh & git commands to .bashrc so that you only have to enter the SSH key passphrase once.
http://www.gilluminate.com/2013/04/04/ubuntu-ssh-agent-and-you/
I also wrote a script called migrate for pushing and pulling content, in a similar way to Heroku’s db:pull and db:push commands.
migrate pull <environment>
migrate push <environment>
Where pull would SSH onto the target environment, zip up the uploads folder and the database, use rsync to download them, then extract and import the database and run a search-replace using WP-CLI.
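The pull direction boils down to roughly this (a sketch; the real script also handles push and tidies up after itself, and the local URL here is illustrative):

#!/bin/bash
# migrate pull <environment>
ENVIRONMENT=$2
source "${ENVIRONMENT}.env"   # SSH_USER, SSH_HOST, DOC_ROOT, SITE_URL

# export the database and archive the uploads on the remote
ssh "${SSH_USER}@${SSH_HOST}" "cd '${DOC_ROOT}' \
    && wp db export /tmp/migrate.sql \
    && tar czf /tmp/uploads.tar.gz content/uploads"

# download both with rsync
rsync -avz "${SSH_USER}@${SSH_HOST}:/tmp/migrate.sql" /tmp/
rsync -avz "${SSH_USER}@${SSH_HOST}:/tmp/uploads.tar.gz" /tmp/

# extract the uploads, import the database and fix the URLs
tar xzf /tmp/uploads.tar.gz
wp db import /tmp/migrate.sql
wp search-replace "${SITE_URL}" "http://babb.dev"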
Theme
Any project ends up having spinoff products, typically plugins that can stand alone. The theme itself was fairly straightforward, on the whole just a standard blog with a few post formats. I’ll detail some of the extra plugins that were written during development.
Here’s a list of some modules (mini-plugins we embed in the theme):
- Add to homescreen – the smartphone add-to-home-screen JS library
- Author list – a simple template tag to list authors
- Google Content Experiments
- Gfycat oEmbed
- Instagram video oEmbed
- Vine oEmbed
- WhatsApp sharing button
- Scroll tracking – track events in Google Analytics when users scroll to different items on the page, by adding a data attribute
- ZeroClipboard (no longer used)
Performance
The site needed to be able to take potentially large traffic spikes due to the nature of viral traffic. We used W3 Total Cache along with memcache for page, object and database caching. The server still struggled a bit when we reached about 800 concurrent visitors due to serving images. We implemented Amazon CloudFront as a dumb origin-pull mirror, as TMG webops weren’t able to give us an IAM account with more permissions, which was fair enough. After the CDN was added, capacity improved drastically with no problems up to 2,500 concurrents. After we added Varnish the server handled 5,000+ concurrents without breaking a sweat, so we’re yet to find out what its true limits are.
Infinite scroll, lazy loading & History.js
We hit a problem later on when a post went viral, which turned out to be down to the CDN rewriting not working with infinite scroll. Rather than loading the whole page and extracting the relevant bit, any ajax requests to the page would just return the article HTML to keep things quick. W3 Total Cache doesn’t do anything with content it doesn’t recognise as XML, so I had to add an XML header, eg. <?xml charset="utf-8" ?>, to get it to rewrite the URLs to the CDN again.
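The fix amounted to something like this at the top of the ajax fragment output (a sketch; the detection and hook point in the real theme may differ):

// if this is the infinite scroll ajax request, prepend a header that
// W3 Total Cache recognises as XML so its CDN URL rewriting kicks in
if ( ! empty( $_SERVER['HTTP_X_REQUESTED_WITH'] )
	&& 'xmlhttprequest' === strtolower( $_SERVER['HTTP_X_REQUESTED_WITH'] ) ) {
	echo '<?xml charset="utf-8" ?>';
}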
Something that became a serious UX problem was lazy loading. Because there were a few JavaScript elements listening to scroll events, the site was really janky to scroll, especially on iOS. Typically you would debounce events triggered on scroll using the technique shown here, but the lazy load plugin we were using was triggering too frequently. While lazy loading may seem like it should improve the experience, if you have any other code listening to the scroll event you’ll run into janky scrolling, even if it’s debounced. The best approach is to have all of that scroll-dependent code in one place, so I got rid of lazy loading altogether.
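For what it’s worth, the consolidated handler ended up shaped roughly like this (the function names are illustrative):

// one debounced scroll listener for all the scroll-dependent code
var scrollTimer = null;
window.addEventListener( 'scroll', function () {
	if ( scrollTimer ) {
		clearTimeout( scrollTimer );
	}
	scrollTimer = setTimeout( function () {
		trackScrollEvents();    // GA events for data-attribute targets
		maybeLoadNextArticle(); // infinite scroll trigger
	}, 100 );
}, false );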
As part of infinite scrolling I implemented History.js, which plugs the holes in how different browsers implement the browser history API. As users scrolled down the page I wanted to make sure that if they refreshed they wouldn’t lose their place. After a few poor attempts at maintaining the scroll position I gave up on that and just opted to use the replaceState method, so that going back would take users to the place they came from rather than to a page forced into their history, which can become incredibly frustrating.
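With History.js in place that’s a one-liner as each article scrolls into view (the property names here are illustrative):

// replaceState rather than pushState: the back button then takes users
// where they came from instead of back up a stack of forced entries
History.replaceState( { id: article.id }, article.title, article.permalink );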
Varnish
While W3 Total Cache is very good when you have memcache available for the page cache etc… you’ll get much, much more out of a cheaper server using Varnish. I used this template as a starting point: https://www.varnish-cache.org/trac/wiki/VCLExampleTemplateWordpressPurge
This guide from DigitalOcean shows you how to install and configure Varnish.
The template had a few problems with logging in and out, where the VCL was still killing the cookies, and I had to change the ACL to allow localhost to purge the cache. W3 Total Cache can be made to purge Varnish servers so the site is always up to date. It’s a more powerful way to purge Varnish than the Varnish HTTP Purge plugin as it also purges archive pages and feeds.
There was a problem where Varnish would cache 403 error pages from Apache; adding the following to sub vcl_fetch fixed that:
# prevent varnish caching 40X responses except 404s
if (beresp.status >= 400 && beresp.status != 404) {
	return (hit_for_pass);
}
I added Varnish to the provisioning script, so rolling it out to production meant it was installed and configured automatically, resulting in less than 30 seconds of downtime.
Gotcha – if you do use Varnish with W3 Total Cache, turn off the W3TC page caching! Bad things can happen on a high traffic site: occasionally the CDN rewrite would fail due to a race condition between the 2 page caches.
Appcachify
We further eased the burden on the server with the Appcachify plugin I wrote. It does a few things:
- Generates an iframe linking to a generated page that references an appcache manifest
- Appends the file’s modified time as a version query string
- Scans the theme folder for cacheable assets
The end result is that common assets like header images, CSS, fonts and JavaScript can all be cached locally on the client, and the best thing about that is that pages can begin rendering immediately once the HTML is delivered. Loading, rendering and painting on desktop Google Chrome all happen in less than 300ms. Scripting of course takes a bit longer to run, but the site is typically usable and readable in less than half a second; even on mobile, once the appcache is primed, loading/rendering times are typically less than 1 second.
https://github.com/interconnectit/appcachify
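The generated manifest ends up looking roughly like this (the theme name and asset paths are illustrative):

CACHE MANIFEST
# v1404123456 - changes whenever an asset's modified time does

CACHE:
/wp-content/themes/babb/css/style.css?v=1404123456
/wp-content/themes/babb/js/app.js?v=1404123456
/wp-content/themes/babb/img/header.png?v=1404123456

NETWORK:
*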
Quizzes & Quizzlestick
I reviewed various quiz plugins to see if any would fit the brief for a few different types of quiz, but none were really suitable. Even WP Pro Quiz, the de facto favourite for WordPress, was both overblown and not hackable enough. There are very few filters in it to modify the output for things like images and the like. The UX of the admin also left a lot to be desired, with a big disconnect between the quizzes themselves and their questions.
Because quiz stats and the like weren’t a huge priority (these were just fun throwaway quizzes, eg. ‘Which World Cup pundit are you?’), in the end I opted to roll my own. After a weekend of hacking I had a jQuery plugin that used a lightweight custom template engine called Fumanchu (similar to Handlebars/Mustache) to output a quiz built from a JSON config object. It was highly configurable: the output was all controlled through adding and removing classnames, there were lots of event triggers, and it could be bent to suit the very specific UX requirements TMG had for the quizzes.
The WordPress backend plugin was completed (with a few bugs) 2 days later, and we were able to start building quizzes very quickly on a single page in the admin to generate the JSON configs.
The plugin can be used to make the following types of quiz:
- Quickfire – 1 question at a time
- Single answer – a list-type quiz which tells you correct/incorrect as soon as you choose
- Multiple answer – you can choose multiple answers and then submit them
- Which are you/which is it – each answer scores a different number of points and different results can be shown depending on the score
- Timed quizzes – any of the above but with a time limit
- Polls – on Babb these are implemented funnyordie.com style, but the templating system is incredibly flexible here
The front end Quizzlestick plugin is on github here:
https://github.com/sanchothefat/quizzlestick
It’ll be moved under the interconnect/it account soon. There are plans to make the quiz plugin into a premium plugin or online service like wufoo.com etc…
TMG Ads
The Telegraph have their own ad delivery network, which required a custom plugin to be written. It’s a very basic plugin that can be extended in future should it be necessary. It provides template tags to output the script tags, and special meta tags in the header to identify the content.
There was one specific challenge that cropped up with dynamically loaded ads: because we were using infinite scrolling, if the newly loaded HTML contained a document.write call it would wipe out the entire page.
I found and implemented an excellent library called postscribe.js to make these synchronous ad snippets work asynchronously, with the added benefit of making them non-blocking JavaScript, speeding up page rendering.
https://github.com/krux/postscribe
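Usage is as simple as handing Postscribe a target element and the offending snippet (the ad URL below is a placeholder):

// render a document.write-based ad into its slot after page load
postscribe( '#ad-slot', '<script src="http://ads.example.com/ad.js"><\/script>' );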
Native sharing buttons & SharedCount.com
I have 2 plugins that we use at interconnect/it. The first is a full Twitter API plugin with template tags for share & follow buttons (iframe & intent), a JS interface for the frontend with Google Analytics support, and a highly configurable timeline widget. The second is a Facebook plugin that covers all the social widgets, plus adds support for custom sharing and like buttons, as well as Google Analytics support.
Using those 2 plugins I put together a template include that outputs the sharing links.
We tried out a generic button for copying the article URL, with the thought that it opened up sharing to all the other platforms and mediums we hadn’t thought of. Generally you can use a JavaScript plugin, eg. ZeroClipboard, to copy to the clipboard, but it relies on Flash, which is an issue for iOS. I made this fall back to selecting the text in the input on iOS and showing the copy/paste interaction balloon, thanks to this answer on Stack Overflow. Surprisingly for us (because devs know best, obvs) in testing no one really interacted with it.
We changed to an email sharing button, which gets more usage than the tweet buttons. Based on advice TMG received from a contact at Buzzfeed we also implemented a WhatsApp sharing button. It does involve browser/device sniffing, but I’m honestly not going to lose sleep over that. It led me to implement a native iOS Twitter app sharing button as well, rather than bouncing users to Safari like every other site. The buttons don’t work if the app isn’t installed as they use the twitter:// URL protocol, but if you’re sharing to Twitter chances are you have the app. Sadly Facebook’s native app URLs on iOS are horribly documented and don’t support pre-filling a status update. Maybe that’s a good thing.
Later in the project, to help improve sharing interactions, we wanted to show share counts from the different social networks. Typically we’d implement AddThis for its excellent stats API; however, the custom designs for the share buttons weren’t really feasible using AddThis.
The SharedCount.com API, coupled with judicious caching using TLC Transients, was the answer. You get 50,000 requests per day for free but they can be maxed out surprisingly quickly. I wrote a plugin that collects share counts from a whole range of sources so we could display them next to the Facebook and Twitter buttons, and even show total shares across all networks.
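The fetching code followed the usual TLC Transients pattern, roughly like this (the function names and expiry are illustrative):

// serve cached counts, refreshing them in the background once stale
function babb_share_counts( $url ) {
	return tlc_transient( 'shares_' . md5( $url ) )
		->updates_with( 'babb_fetch_share_counts', array( $url ) )
		->expires_in( 15 * MINUTE_IN_SECONDS )
		->background_only()
		->get();
}

// one SharedCount API request per URL per cache period
function babb_fetch_share_counts( $url ) {
	$response = wp_remote_get( 'http://api.sharedcount.com/?url=' . urlencode( $url ) );
	if ( is_wp_error( $response ) ) {
		return array();
	}
	return json_decode( wp_remote_retrieve_body( $response ), true );
}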
I ran into a problem with TLC Transients when using memcache for transients. Memcached objects can be cached for up to 30 days; a higher number is treated as a timestamp, so the default value of 1 year meant the expiry was 1st Jan 1971 and nothing was being cached. The SharedCount API was getting hammered and maxing out requests as a result. I modified TLC Transients and made a pull request to change the default long-term cache to 30 days instead of a year, but as yet it hasn’t been merged.
https://github.com/markjaquith/WP-TLC-Transients
Analytics
Google Analytics (or any analytics) is a super useful tool for improving any website. We used the Analytics API in a few ways beyond the usual tracking code.
Most popular articles, most shared articles and related content are all driven by Google Analytics data, looking at other articles that were popular at roughly the same time as the current article and in the same or similar taxonomies.
We can easily create lists of posts from the analytics data based on what receives the most traffic in the last hour, day, week etc…
A/B testing using Google Content Experiments
Google Analytics also has a built-in system for conducting content experiments, either A/B tests or multivariate tests. Typically this is for comparing 2 landing pages and checking whether you get more pageviews or sales or bounces or whatever. I reviewed the WordPress.com VIP A/B testing tool written for cheezburger.com, but it was entirely geared towards working with Batcache. Because we were using W3 Total Cache it wasn’t really going to work for us.
Using the GA tools we already have, eg. a common interface for selecting a profile and site, I was able to write something similar to cheeztest in a few hours. It was essentially 2 function calls: add_ga_test(), which just needed to check either for a query string or some other criteria and return any value, and get_ga_test(), which returns that value or a default.
The plugin automatically included the content experiment code so as soon as the code for the test was in place we could create a content experiment and launch it. I’ll give an example to demonstrate:
We wanted to test some of the language on the archive pages that invited users to continue reading after an excerpt:
// add read more text variation - ?var=rm
add_ga_test( 'continue_reading', function( $default, $experiment ) {
	$var = isset( $_GET['var'] ) ? $_GET['var'] : false;
	$output = '';
	switch ( $var ) {
		case 'rm':
			$output = __( 'Read more!' );
			break;
		default:
			$output = $default;
			break;
	}
	return $output;
}, 'Read more text' );

// in the template
<div class="entry-content">
	<?php the_content( get_ga_test( 'continue_reading', __( 'MOAR? Tap here!' ) ) ); ?>
</div>
The above gets the return value of the test by the ‘continue_reading’ key and passes in a default value as its 2nd parameter. In Google Analytics we set up a test between babb.telegraph.co.uk and babb.telegraph.co.uk/?var=rm. Using the query string meant both variations were cached, so we’d get no overall performance hit apart from the quick redirect when the JavaScript kicked in.
The add_ga_test() function has a 3rd parameter which, if it matches the name of the content experiment in GA, will make the experiment object available in the callback function. This means we can check whether the experiment found a winner and override the default automatically, so if done properly the best variation will automatically be used and no further code changes are needed.
It turned out ‘MOAR? Tap here!’ was more effective so no change was needed after the experiment finished.
We tested the following to help inform decisions about development:
- If share counts on share buttons increased interaction with the buttons (social proof)
- If showing most popular posts on the home page either with or without images decreased bounce rate
- If various text changes, eg. for read more and most shared/most popular, led to more pageviews or a lower bounce rate
- If infinite scrolling on articles reduced bounce rate
Switching to the new Universal Analytics
Google are rolling out an updated analytics script. It allows for up to 20 custom variables (now called dimensions) and has better support for users with anti-tracking modes enabled in their browsers, as well as better demographic information.
The change meant having to set up the Yoast Google Analytics plugin differently, just using the plugin settings and generating the updated code snippet from functions.php instead. We plan to make a complete GA tool that lets you create the dimensions through the WordPress admin to make things much easier.
I had to modify the Twitter and Facebook plugins to support the new function calls for tracking social interactions, as well as the page and event tracking triggered by infinite scrolling.
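The new syntax is straightforward once the tracker is in place; the values below are illustrative:

// social interaction from a share button
ga( 'send', 'social', 'twitter', 'tweet', 'http://babb.telegraph.co.uk/some-post/' );

// virtual pageview and event when infinite scroll pulls in an article
ga( 'send', 'pageview', '/some-post/' );
ga( 'send', 'event', 'Infinite Scroll', 'article-load', '/some-post/' );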
XML sitemaps
There are quite a few plugins for generating search engine sitemaps; I chose Google XML Sitemaps. These are useful if you want to submit the site to Google News, which Alex at TMG did. It also has the added benefit that once you add your sitemap to Google Webmaster Tools you can see if you’re getting any 404s that you shouldn’t be.
I installed Safe Redirect Manager and was able to set up a few redirects to improve the UX for people coming from sites with the broken links. It also flags up problems if you have old links in content that hasn’t been updated.