The diagram above shows the basic structure of each Virtual Private Cloud (VPC) cluster that we use. We can host multiple WordPress Multisite networks on each, though some customers will need or want their own dedicated cluster. We use similar VPCs to host this blog and our Edublogs.org network (which has over 4 million sites!).
Let’s look at the VPC in some detail…
The first thing each visitor hits is a Content Delivery Network, or CDN. We are a CloudFlare hosting partner, so most of our customers use CloudFlare, which includes additional security benefits like a WAF (web application firewall) and DDoS protection. Others choose Amazon CloudFront, and others still enable one of the countless CDN services out there. The CDN serves images and static content from whichever data center is closest to a visitor, which limits the traffic that actually reaches the web servers and can speed up your page load times.
EC2 and Elastic Load Balancing
For the actual web servers, we use at least two EC2 C4 large instances running Linux with 8GB of memory each. Within each AWS region there are multiple “availability zones”, which are separate physical data centers, and we spread the instances across them. This builds in redundancy: should an outage or natural disaster affect one location, the other can take over.
Directing traffic to these EC2 instances is an Elastic Load Balancer that determines which EC2 virtual server should handle each page view or action from a visitor.
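At its simplest, the load balancer's job is to rotate incoming requests across the healthy backends. Here is a toy round-robin sketch (the instance names are made up, and this deliberately ignores the health checks, sticky sessions, and connection draining a real Elastic Load Balancer handles):

```python
from itertools import cycle

# Hypothetical backend pool: one EC2 instance per availability zone.
backends = ["ec2-us-east-1a", "ec2-us-east-1b"]
rotation = cycle(backends)

def route_request():
    """Pick the next backend in round-robin order."""
    return next(rotation)

# Four successive requests alternate between the two instances.
print([route_request() for _ in range(4)])
# → ['ec2-us-east-1a', 'ec2-us-east-1b', 'ec2-us-east-1a', 'ec2-us-east-1b']
```

Because the two backends sit in different availability zones, even this naive rotation means a single data-center outage only takes out half the pool.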
Docker containers keep different WordPress installations separate from each other across the instances.
For the database, which houses the content, comments, and user data, we use two RDS M4-Standard instances running MySQL. These are set up in a ‘master/standby’ arrangement, with a failover to the standby should something go wrong with the master.
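The master/standby idea boils down to: applications always connect through one logical endpoint, and that endpoint flips to the standby when the master fails. A toy sketch (RDS Multi-AZ does this automatically at the DNS level; the endpoint names here are invented):

```python
# Illustrative failover logic only — not how RDS is configured.
class FailoverEndpoint:
    def __init__(self, master, standby):
        self.master, self.standby = master, standby
        self.master_healthy = True

    def current(self):
        """Return the endpoint applications should connect to."""
        return self.master if self.master_healthy else self.standby

db = FailoverEndpoint("db-master.us-east-1a", "db-standby.us-east-1b")
print(db.current())          # master while healthy
db.master_healthy = False    # simulate a failure in the master's zone
print(db.current())          # traffic fails over to the standby
```

The key design point is that WordPress never needs to know which physical instance it is talking to; the failover is invisible to the application.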
S3 File Storage
Using S3 for user file uploads like images and files was our first experience with AWS – and it is something you can (and should) do even if you are hosting your site somewhere other than Amazon. S3 is fast, redundant, and downright cheap for storage and bandwidth.
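Offloading uploads mostly means rewriting media URLs so they point at S3 instead of your web server. A minimal sketch of that rewrite (the bucket name and domain are hypothetical; in WordPress this is typically done with filters or a dedicated offload plugin):

```python
S3_BASE = "https://my-bucket.s3.amazonaws.com"   # hypothetical bucket

def rewrite_upload_url(url, site_domain="example.com"):
    """Point wp-content/uploads URLs at S3 instead of the web server."""
    local_prefix = f"https://{site_domain}/wp-content/uploads/"
    if url.startswith(local_prefix):
        return S3_BASE + "/wp-content/uploads/" + url[len(local_prefix):]
    return url  # non-upload URLs pass through untouched

print(rewrite_upload_url("https://example.com/wp-content/uploads/2016/05/logo.png"))
# → https://my-bucket.s3.amazonaws.com/wp-content/uploads/2016/05/logo.png
```

Once media is served from S3, the web servers become effectively stateless for uploads, which is what makes adding and removing instances painless.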
Your codebase, including WordPress core, plugins, and themes needs a home. We’ve become partial to the relatively new Elastic File System (EFS) on AWS to handle this. We use Bitbucket.com for code management and version control, and an in-house deployment application to make updates across all of the sites that we host. You could also use Git or other code hosting and management services.
Adding the AWS ElastiCache service to the mix means that we can serve static HTML content to visitors without requiring any work from the database. Keep in mind that logged-in users usually aren’t served cached content. So if your entire site is private or a membership site, caching isn’t going to do much for you.
EC2 instances can send emails from WordPress too, like comment notifications or password resets. But if your site sends a lot of emails, especially if you are using something like Subscribe By Email, you are better off using Amazon Simple Email Service (SES), which is designed specifically to handle email. If nothing else, SES increases your odds of emails being delivered (and not being flagged as spam).
CloudWatch Alarms and Logs
Watching over the entire VPC like a hawk is CloudWatch. In addition to collecting logs and monitoring resources, CloudWatch alarms can automatically add (or remove) EC2 instances when load warrants it, so that you aren’t paying for virtual servers when they aren’t needed, and you can also scale to handle the highest traffic you can imagine.
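The scaling decision itself amounts to comparing a monitored metric against thresholds. A toy version of that logic (the thresholds here are made up for illustration; in practice CloudWatch alarms trigger Auto Scaling policies rather than custom code):

```python
def scaling_action(avg_cpu_percent, instance_count, minimum=2):
    """Decide whether to add or remove an EC2 instance.

    Illustrative thresholds only — real policies are tuned per workload.
    """
    if avg_cpu_percent > 70:
        return "add_instance"
    if avg_cpu_percent < 20 and instance_count > minimum:
        return "remove_instance"
    return "no_change"

print(scaling_action(85, 2))  # → add_instance
print(scaling_action(10, 3))  # → remove_instance
print(scaling_action(10, 2))  # → no_change (never drop below the 2-instance floor)
```

Note the floor of two instances: scaling in never sacrifices the cross-availability-zone redundancy described earlier.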
Beyond The Infrastructure
The servers are just one part of hosting high availability WordPress sites that scale. Sites can go offline for many reasons, including plugin/theme conflicts, user error, an outage at a 3rd party service you rely on, and more. This is why we have pretty strict procedures in place to help prevent any of these possibilities from ever happening.
Code Guidelines For Plugins and Themes
For any of the enterprise sites that we host, one of the big differences the average user will notice is that plugins and themes can’t be added directly from the WordPress dashboard.
Over the years, we’ve created a list of functions and code requirements that must be met for any plugin or theme that we host. For those used to being able to just add any and all plugins willy-nilly to their sites, this can sometimes be a point of contention.
But we’re after high performance and secure code. And not all plugins and themes are created equal. So our team of developers manually reviews every single theme and plugin that we host.
Here’s a list of what we look for – all plugins and themes that we support must:
- adhere to the WordPress Theme Guidelines and WordPress Coding Standards;
- not rely on 3rd party services (unless we can ensure they fail gracefully, or we approve otherwise for well-established services);
- not automatically upgrade or modify files;
- not change the timeout of wp_remote_* calls;
- never change wp_feed_cache_transient_lifetime (hook into the filter instead);
- not use SHOW TABLES; use SHOW TABLES LIKE ‘wp_xyz’ instead;
- not use DESC to describe a table; use DESCRIBE instead;
- not change WP_DEBUG, error_reporting, or display_errors;
- not remove default roles (remove_role);
- not flush rewrite rules ($wp_rewrite->flush_rules is not allowed);
- not flush the cache (wp_cache_flush is not allowed);
- not contain raw SQL queries; they must use WordPress built-in functions for fetching posts, pages, attachments, users, and their respective metadata;
- not create new tables or modify table schema;
- not use filesystem functions listed here;
- not store files in the server file system; plugins that accept file uploads must always use WordPress attachments.
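Some of these rules lend themselves to a quick automated scan before the manual review even starts. A rough sketch (the rule names and patterns are our own invention for illustration; a real review covers far more than string matching):

```python
import re

# A handful of the guidelines above, expressed as regex patterns.
DISALLOWED = {
    "bare SHOW TABLES": re.compile(r"SHOW\s+TABLES(?!\s+LIKE)", re.I),
    "wp_cache_flush": re.compile(r"\bwp_cache_flush\s*\("),
    "flush_rules": re.compile(r"->\s*flush_rules\s*\("),
    "remove_role": re.compile(r"\bremove_role\s*\("),
}

def scan_plugin_source(source):
    """Return the names of any disallowed patterns found in plugin code."""
    return [name for name, pattern in DISALLOWED.items() if pattern.search(source)]

bad = "<?php $wpdb->query('SHOW TABLES'); wp_cache_flush();"
print(scan_plugin_source(bad))  # → ['bare SHOW TABLES', 'wp_cache_flush']
```

A scan like this only flags candidates for a human to look at; it cannot tell you whether a query is actually safe, which is why the final call stays with a developer.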
You might be surprised at how many plugins and themes that we evaluate don’t pass these guidelines. Custom SQL queries are the most common problem that we see.
And each update of plugins and themes is checked to ensure nothing gets by.
Quality Assurance and Testing
We also turn off auto-updates of WordPress core, plugins, and themes. We want to thoroughly test updates before they go live. For most customers, we run a weekly ‘change management’ cycle where updates are pushed out to each region early on Tuesday mornings. This way, our customers know when to expect updates, and we can plan for our team to be around to monitor. There are never any surprises.
Before a change or update can make its way through the process, it must:
- Be manually tested and reviewed fully in local testing environments by at least two developers
- Pass any applicable automated and/or unit testing in multiple development environments
- Pass manual testing by QA/support team in multiple development environments
- Be deployed for a minimum of 72 hours to a small subset of live sites and to all customers’ development/test sites that willingly participate in our beta testing program
- Pass a final manual code and performance review by technical team leadership
Putting It Together – The Costs
When you combine the technical infrastructure of AWS with the strict practice of code management, you get sites where you can expect 99.99% uptime or higher, and that can handle any traffic volume that you can throw at them.
But everything comes with a price. Just how much are we looking at if you try to set up something like this yourself?
Let’s start with the AWS private cloud cluster. Here is a rundown of current prices for the US-Virginia region:
Two RDS M4 Large instances for the database – $126.00 each.
Two EC2 C4 Large instances for the web servers – $144.00 for the pair.
One ElastiCache M3 Large instance – $131.04
One Elastic Load Balancer instance w/ minimum 10GB data processed monthly – $18.08
One EFS file storage instance with 100GB – $30.00
This alone is $575.12 per month – and we have yet to pay for a single visitor, upload file storage, or even 1MB of bandwidth. You could easily add hundreds, if not thousands of dollars per month depending on your traffic.
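The line items above can be totted up to verify the monthly figure (reading the EC2 price as covering both instances, which is what makes the stated total consistent):

```python
# Monthly on-demand prices from the list above (US-Virginia region).
monthly_costs = {
    "RDS M4 Large x2": 2 * 126.00,
    "EC2 C4 Large x2": 144.00,       # assuming this figure covers both instances
    "ElastiCache M3 Large": 131.04,
    "Elastic Load Balancer": 18.08,
    "EFS 100GB storage": 30.00,
}

total = round(sum(monthly_costs.values()), 2)
print(f"${total:.2f} per month")  # → $575.12 per month
```

And again, that is the floor: bandwidth, S3 storage, and SES sending all bill on top of it as usage grows.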
We also have yet to factor in costs for the multiple developers and DevOps engineers you’d certainly need. Yikes!