Day 89: Creating Highly Available Servers in AWS (Part 1)

Welcome to Day 89 of the “100 Days of DevOps with PowerShell”! For background on our goals in this series, see Announcing the “100 Days of DevOps with PowerShell” Series here at SCC.

Update: 2014-12-30 – the code was updated below to fix a couple of typos.

In previous posts we have looked at configuring network infrastructure in AWS as well as deploying and configuring instances.  In today’s post we are going to bring together a lot of what we’ve covered before and start a mini-series of posts on how to deploy highly available servers in AWS.  In our example we are going to use a load balancer to distribute requests between two web servers.  A load balancer in AWS can be deployed in EC2 Classic or in a VPC, and in our example we are going to use a VPC.

As a refresher (from Day 74), a VPC is specific to a region, but within a region you may have multiple availability zones, which are physically separate datacenters.  Subnets are created within availability zones, so you might create a VPC in the us-east-1 region with a single CIDR block, but create multiple subnets within the VPC across different availability zones.  Therefore, if an availability zone encounters an outage which affects a web server within one of its subnets, another web server in a different availability zone would be unaffected.  It is the job of the load balancer to monitor the health of its assigned instances, so if a web server experiences an outage, traffic will not be directed to it.

Configuring the environment for a load balancer

A requirement when configuring a load balancer is that any subnets assigned to it must reside in separate availability zones, as shown in our architecture below:


Also notice there are two load balancers shown in the diagram.  In our example we’ll only deploy one load balancer initially, but if AWS detects that the availability zone for that load balancer has become unavailable, or that the load balancer has come under increased load, a second load balancer will be deployed.  Therefore separate subnets for the load balancers and web servers need to be created in different availability zones.
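Before creating the subnets, it’s worth confirming which availability zones are available in your region.  As a minimal sketch (assuming the AWSPowerShell module is installed and your credentials and default region are already configured), you can list them like this:

```powershell
# List the availability zones in the current region so the load balancer
# and web server subnets can be placed in separate zones.
Get-EC2AvailabilityZone | Select-Object ZoneName, State
```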

Deploying the environment

Much of this setup has been covered in previous posts, but for completeness we’ll show the end-to-end deployment, and we’ll continue to use variables in the script as we make further enhancements to the environment.  In the script we are looking to do the following:

  • Deploy a VPC
  • Deploy subnets in different availability zones
  • Deploy the web servers
  • Create an internet gateway for the load balancer subnets
  • Create a security group for the load balancers
  • Deploy the load balancer
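The steps above (apart from the load balancer itself, which we deploy below) can be sketched as follows.  This is a minimal outline, not the original script: the CIDR ranges, subnet layout, security group name and AMI ID are illustrative placeholders, so substitute your own values.

```powershell
Set-DefaultAWSRegion -Region us-east-1

# Deploy a VPC (CIDR block is a placeholder)
$vpc = New-EC2Vpc -CidrBlock '10.0.0.0/16'

# Deploy subnets in different availability zones for the web servers
# and the load balancers
$webSubnet1 = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock '10.0.1.0/24' -AvailabilityZone 'us-east-1a'
$webSubnet2 = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock '10.0.2.0/24' -AvailabilityZone 'us-east-1b'
$elbSubnet1 = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock '10.0.3.0/24' -AvailabilityZone 'us-east-1a'
$elbSubnet2 = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock '10.0.4.0/24' -AvailabilityZone 'us-east-1b'

# Deploy the web servers (the AMI ID is a placeholder)
$web1 = (New-EC2Instance -ImageId 'ami-xxxxxxxx' -InstanceType 't2.micro' -SubnetId $webSubnet1.SubnetId).Instances[0]
$web2 = (New-EC2Instance -ImageId 'ami-xxxxxxxx' -InstanceType 't2.micro' -SubnetId $webSubnet2.SubnetId).Instances[0]

# Create an internet gateway and attach it to the VPC
$igw = New-EC2InternetGateway
Add-EC2InternetGateway -InternetGatewayId $igw.InternetGatewayId -VpcId $vpc.VpcId

# Create a security group for the load balancers allowing HTTP from the Internet
$elbSgId = New-EC2SecurityGroup -VpcId $vpc.VpcId -GroupName 'ELBSecurityGroup' -Description 'Allow HTTP to the load balancer'
$httpRule = New-Object Amazon.EC2.Model.IpPermission
$httpRule.IpProtocol = 'tcp'
$httpRule.FromPort   = 80
$httpRule.ToPort     = 80
$httpRule.IpRanges   = '0.0.0.0/0'
Grant-EC2SecurityGroupIngress -GroupId $elbSgId -IpPermission $httpRule
```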

The environment is now set up and we’re ready to deploy the load balancers.  To deploy the load balancers we are going to do the following:

  • Create a listener for the load balancer – in this case it will listen on port 80
  • Create the load balancer and assign the security groups (default and allowing HTTP traffic from the Internet) and the load balancer subnets
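These two steps can be sketched as shown below.  This assumes the subnet and security group variables (here named `$elbSubnet1`, `$elbSubnet2` and `$elbSgId` for illustration) were populated earlier in the environment script, and the load balancer name is a placeholder:

```powershell
# Create a listener for the load balancer on port 80
$listener = New-Object Amazon.ElasticLoadBalancing.Model.Listener
$listener.Protocol         = 'HTTP'
$listener.LoadBalancerPort = 80
$listener.InstancePort     = 80

# Create the load balancer, assigning the security group and the
# load balancer subnets (which reside in separate availability zones)
New-ELBLoadBalancer -LoadBalancerName 'WebELB' `
    -Listener $listener `
    -Subnet $elbSubnet1.SubnetId, $elbSubnet2.SubnetId `
    -SecurityGroup $elbSgId
```

Note that the listener maps the load balancer’s port 80 to port 80 on the instances; the instances themselves are attached in the next post.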



In this post we’ve laid the groundwork for highly available web servers by creating the VPC, subnets, instances and security groups for our environment.  Finally, we deployed the load balancer, assigning the security groups and its subnets.  However, we have not yet configured the load balancer, which will require assigning the instances as well as the health checks.  We’ll do that in the next post!

Previous Installments

To see the previous installments in this series, visit “100 Days of DevOps with PowerShell”.
