Day 60: Leveraging Azure Automation to Support DevOps (Part 1)

Welcome to Day 60 of the “100 Days of DevOps with PowerShell”! For background on our goals in this series, see Announcing the “100 Days of DevOps with PowerShell” Series here at SCC. If this is the first time you are viewing an article in the series and you are new to DevOps and PowerShell Desired State Configuration (DSC), I encourage you to read through the previous installments, particularly Day 1. I can’t believe we are already in the second half of our 100 days! As we progress, expect us to push further into new concepts, as well as to tie together our work from previous days.

Objective of this post…

While I love PowerShell DSC, I find it does not yet do everything I would like really well. The focus of this post is to share some genuinely useful tips on how to augment your DevOps efforts with Azure Automation to implement an efficient and sustainable DevOps practice within your organization. I am going to assume you already know that Azure Automation is a cloud-based (Azure-based) automation engine from Microsoft that, along with Service Management Automation (SMA), will ultimately replace System Center Orchestrator (SCO) as the tool of choice for IT process automation.

If you didn’t know this, I am not going to waste your time in this post on the level 100 concepts already covered by Microsoft. Go to “Get Started with Azure Automation”, read up on Azure Automation, sign up for the Azure Automation preview, and come back to Day 60 to pick up where you left off.

Once you have signed up, you will see a new Automation tab in your Microsoft Azure Portal.

[Screenshot: the new Automation tab in the Microsoft Azure Portal]

With that out of the way, here is what I will cover today:

  • DevOps Use Cases for Azure Automation
  • Creating a Recurring Schedule (Monitoring runbooks in Orchestrator-speak)
  • Increasing the Precision of the Schedule Options

The bottom line here is I want to give you some real food for thought around how to make use of Azure Automation and SMA in your DevOps approach in the real world.

DevOps Use Cases for Azure Automation

The short answer to the question “Where can Azure Automation augment DevOps?” is that Azure Automation can bridge gaps between PowerShell DSC and other existing tools in more scenarios than I can detail here. In my case, I almost immediately needed a runbook that would operate on a recurring schedule; in SCO, these were called monitoring runbooks. Some important use cases where I needed to bridge a gap in a process that otherwise required manual effort included:

  • A runbook to monitor for availability of new code releases (to augment my efforts in Day 20 and Day 30)
  • Monitoring for Azure VMs with external RDP endpoints (we limit RDP access to the site-to-site VPN only)
  • Automatically starting development and test VMs in the morning and shutting them down at night (to minimize our Azure spend)
  • Automating smoke, unit, and regression testing (covering some testing scenarios with Pester)

There are myriad other potential use cases, but that is a discussion for another day.

Creating a Recurring Schedule

Once you have created a runbook in Azure Automation and published it, you can schedule it to run at a specific time, or even on a recurring basis. As in Orchestrator, schedules in Azure Automation are reusable assets. When configuring new runbooks, you can create a new schedule or link the runbook to an existing schedule.

[Screenshot: the scheduling options for a published runbook]

If you click the Link to a new schedule option, you will be asked to provide a name and description for the schedule.

[Screenshot: providing a name and description for the new schedule]

On the next screen, you will be asked to configure the schedule. You will immediately notice your maximum precision is 1 hour; the Recur every (number of hours) box only accepts values from 1–99. On a positive note, you can pick the minute within the hour when the schedule runs. That minute setting opens up one of the two options we have for increasing the precision of our scheduling in Azure Automation to drive tasks more frequently than once an hour.

[Screenshot: configuring the schedule recurrence, with a maximum precision of 1 hour]

I immediately wanted to increase the precision of the scheduling in Azure Automation, specifically for my high availability web application scenario in Day 30. While PowerShell DSC can configure a .NET website and copy content from a central source (a central secure file share in the Day 30 example), the xWebAdministration module does not have an option to watch that source for the availability of updated source code versions. In an ideal world, I would fix this by implementing the release process in Microsoft Team Foundation Server (TFS). Sadly, at this time, I do not live in an ideal world, so TFS is not an option. And while I could write an extension to the existing DSC functionality today, I am not going to, as I believe it is not the most effective approach (or even the second best) for production use.

Increasing Schedule Precision

There are two ways to increase the schedule precision in Azure Automation: the by-the-book approach, and a custom approach I will call “Plan B”. Each has its place in your automation strategy.

By-the-book approach

The first option, mentioned briefly in the first paragraph of the scheduling section of the Azure Automation documentation, is to associate a runbook with multiple schedules (hourly schedules in this case), each set to run on a specific minute of the hour, with an easily interpreted naming convention (for example, Hourly-OnThe00 and Hourly-OnThe30), like so:

[Screenshot: a runbook linked to multiple hourly schedules named by minute]

The bad news is that this approach multiplies schedule assets. A runbook that needs to run every 30 minutes requires two hourly schedules (one for each half of the hour), and a runbook that needs to execute every 15 minutes requires four. Multiply that across every runbook you need to run on a sub-hourly interval, and you quickly end up with a whole bunch of schedules to create and maintain!

Another approach (Plan B)

The approach I am about to describe here is not universally superior or inferior to the by-the-book approach, but it may be more efficient in some situations, which I will call out for you specifically.

Here is a simple example of a Do-Until loop in PowerShell you can use to get comfortable with the concept; a sketch follows the list below. If this is new to you, try the sample out in your PowerShell ISE before continuing.

  • This script performs a unit of work repeatedly until the value of the variable “$i” equals 3.
  • The beginning value of “$i” is 0, and its value is increased by 1 on each iteration, meaning the script will loop 3 times before stopping.
  • Within the script, Start-Sleep is used to pause the script for 60 seconds, so the script will take approximately 3 minutes to complete.
  • The unit of work in this case is simply to echo the current value of “$i” along with the current date and time to the screen, so you have a clear picture of how the script works.
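
Here is a minimal sketch matching the description above (the counter name $i and the message text are simply illustrative):

    # Do-Until loop: repeat a unit of work until $i reaches 3
    $i = 0
    Do
    {
        # The unit of work: echo the counter value and the current date/time
        Write-Output "The value of i is $i at $(Get-Date)"

        # Pause for 60 seconds before the next iteration
        Start-Sleep -s 60

        # Increment the counter
        $i = $i + 1
    }
    Until ($i -ge 3)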

Yes, I know aliases and other shortcuts could shorten the code a bit, but I opted for simplicity so all may benefit, regardless of current PowerShell skill level. In a moment, I will explain how we can leverage this approach to improve scheduling precision more efficiently than multiple schedules in some cases.

If I wanted to execute every 5 minutes within the hour leveraging a single hourly schedule, I would simply set the value of Start-Sleep to -s 300 (a 5-minute pause) and the Until line to Until ($i -ge 11). This would ensure the job runs 11 times (i = 0 through i = 10) and then terminates before the schedule kicks off again at the top of the next hour, as shown below.
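
Here is the same sketch with only those two values changed:

    # The same loop, tuned to fire every 5 minutes within a single hourly schedule
    $i = 0
    Do
    {
        Write-Output "The value of i is $i at $(Get-Date)"
        Start-Sleep -s 300   # 5-minute pause between iterations
        $i = $i + 1
    }
    Until ($i -ge 11)        # 11 iterations (i = 0 through 10), roughly 55 minutes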

The other important takeaway here is that the unit of work could be replaced with any recurring unit of work, such as checking the date and time a file was last updated, which could be used to drive an automated process to push source code to the servers in a web farm (like the web farm described in Day 30 of our series). For example, we could seed the share containing the web application source code with a release details file (akin to a readme.txt) and use its LastWriteTime property to determine whether we have new source code to push to the web servers.

Here’s a quick sketch of how we might make the comparison (the share path and file name below are placeholders for your environment):
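
    # Hypothetical share path and file name: substitute your own
    $ReleaseFile = "\\FileServer\WebAppSource\ReleaseDetails.txt"

    # Timestamp recorded on the previous pass through the loop
    $LastKnownWrite = (Get-Item $ReleaseFile).LastWriteTime

    # ...later, on the next pass, read the timestamp again and compare...
    $CurrentWrite = (Get-Item $ReleaseFile).LastWriteTime

    If ($CurrentWrite -gt $LastKnownWrite)
    {
        Write-Output "New release detected ($CurrentWrite) - push updated source code here"
        $LastKnownWrite = $CurrentWrite
    }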

Where the “Plan B” approach may make sense

There are places where this approach offers a distinct advantage over multiple schedules. For example, if I wanted to make repeated calls to an API like SCOM, SCSM, or TFS, multiple schedules would mean instantiating a new connection to the system every X minutes as each schedule kicked in. With the alternate approach demonstrated here, we keep the connection open and simply check for new items of interest based on our requirements. This can prove a huge advantage at scale when the interval on which you need to check is very short (2 minutes or less), as in the sketch below.
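
To illustrate the pattern (Connect-SomeSystem and Get-SomeNewWorkItems are hypothetical placeholders for whatever connection and query commands your target system provides):

    # Connect once, outside the loop, rather than on every schedule kickoff
    $Connection = Connect-SomeSystem -Server "Server01"    # hypothetical cmdlet

    $i = 0
    Do
    {
        # Poll for new items of interest over the open connection
        Get-SomeNewWorkItems -Connection $Connection       # hypothetical cmdlet

        Start-Sleep -s 120   # check every 2 minutes
        $i = $i + 1
    }
    Until ($i -ge 29)        # 29 checks, then exit before the next hourly kickoff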

In my next post, we will put this logic to use to drive a release automation process from Azure Automation!

Conclusion

That’s all for this installment. I hope you are able to follow along as your schedule allows, building on your previous efforts. Be sure to check out my post next week (Day 65) in which I will build on this foundation to do some real DevOps-focused automation.

Previous Installments

To see the previous installments in this series, visit “100 Days of DevOps with PowerShell”.
