A Simple Storage Plan for Load Balancing Drupal

Submitted by Perignon on Sun, 08/02/2015 - 02:55

Building a fault-tolerant website? Incorporating a load balancer with multiple web servers to maintain maximum uptime? Small business? Using Drupal?

If you answered yes across the board, then read on.

We approached this same issue a few years ago while deploying an extensive Drupal eCommerce website. The problem was how to use a load balancer - in our case Elastic Load Balancer - to ensure maximum uptime, distribute load, and allow upgrades without bringing down the site. The task, then, was to distribute Drupal across multiple servers, which required shared storage that all the web servers could use. But what should go in that storage? The entire Drupal code base? That seemed excessive once we realized that there are typically only two locations where files change while Drupal runs: the public and private file systems. This makes running Drupal on multiple servers easy.

The public and private file systems of Drupal can be set to a defined location. Putting both under a single folder gives you one location for all files that change, and that single location gives you a mount point to attach a network file system to your web servers, while the PHP code base stays on each web server's local storage since it does not change. For our problem, we chose Gluster File System since it was designed for cloud/virtual environments. We placed the public and private file systems under sites/all/files/ and made the mount point for our Gluster file system /path/to/website/sites/all/files.
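As a rough sketch of this setup, assuming a Gluster volume named `drupal-files` served by a peer called `gluster1`, and a Drupal 7 site managed with Drush (the volume, host, and bucket names here are illustrative, not from our actual deployment):

```shell
# Mount the replicated Gluster volume at Drupal's files directory.
# "gluster1" and "drupal-files" are example names; substitute your own.
sudo mount -t glusterfs gluster1:/drupal-files /path/to/website/sites/all/files

# To persist the mount across reboots, an /etc/fstab entry like this works:
# gluster1:/drupal-files /path/to/website/sites/all/files glusterfs defaults,_netdev 0 0

# Point Drupal 7's public and private file system paths at the shared mount
# (the same settings found at admin/config/media/file-system):
drush vset file_public_path sites/all/files/public
drush vset file_private_path sites/all/files/private
```

The `_netdev` option matters on boot: it tells the init system to wait for networking before attempting the mount, since Gluster is a network file system.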

This sharing of the files directory created a very simple storage plan for a load-balanced Drupal installation. Our Drupal code was stored on our web servers and managed by Git, while our user files, CSS/JS aggregation, and media uploads were stored on our replicated Gluster cluster. The land of virtual machines is a volatile one, so for data security we created cron jobs on our web servers that use S3cmd and Cronlock (more on Cronlock later) to copy the contents of the Gluster file share to S3.
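A sketch of such a backup job, assuming Cronlock is installed at /usr/local/bin/cronlock and using an illustrative bucket name (the schedule, paths, and bucket are assumptions, not our production values):

```shell
# /etc/cron.d/gluster-backup (illustrative): sync the shared files
# directory to S3 once an hour. Wrapping the command in cronlock takes a
# shared lock first, so only one web server in the pool runs the job even
# though the same crontab entry is deployed to every server.
0 * * * * root /usr/local/bin/cronlock s3cmd sync --delete-removed /path/to/website/sites/all/files/ s3://example-backup-bucket/drupal-files/
```

Note the trailing slash on the source path: with `s3cmd sync` it copies the directory's contents rather than nesting an extra `files/` level inside the bucket prefix.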