My 2nd Week with the PowerShell Deployment Toolkit (PDT)


This is part 2 of a series of posts on using the PowerShell Deployment Toolkit (PDT) to rebuild my home lab with all of the System Center 2012 SP1 roles. In part 1 I described my home lab setup and my plan to rebuild it and install all of the Microsoft System Center 2012 SP1 roles using PDT, built by Rob Willis and his team over on the Building Clouds blog.

My home server was rebuilt with a more modern Intel Core i5 quad-core processor with virtualization support, but I only purchased 16 GB of RAM during my first shopping trip. After several initial runs with PDT, I quickly found this to be inadequate: with 16 GB of RAM, I kept receiving script failures because SQL Server did not have enough startup memory. I have since upgraded to 32 GB. I also replaced the single SATA hard drive with two SATA 6.0 Gb/s drives striped in RAID 0.

In addition to the hardware upgrades, I have also configured PKI in my home lab domain to resolve the Service Provider Foundation (SPF) certificate validation error. The certificate template is a duplicate of the Workstation Authentication template with Server Authentication added. The additional properties of the template are:

  • General
    • Publish certificate to Active Directory
  • Extensions
    • Application Policies
      • Server Authentication
      • Client Authentication
  • Subject Name
    • Build from this Active Directory information
    • Subject name format:
      • Common name
    • Include this information in the alternate subject name:
      • DNS name
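Once the duplicated template is published, requesting the certificate on each role server can be scripted rather than done through the MMC. This is a minimal sketch, assuming the servers run Windows Server 2012 (which ships the PKI module's Get-Certificate cmdlet) and that the duplicated template was named PDTLabComputer — a placeholder name, so substitute your own:

```powershell
# Request a machine certificate from the duplicated template and place it
# in the local computer store. "PDTLabComputer" is an assumed template name.
# Subject and SAN are built from Active Directory by the template itself,
# so no subject parameters are needed here.
Get-Certificate -Template PDTLabComputer `
                -CertStoreLocation Cert:\LocalMachine\My
```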

In setting up certificates to resolve the validation issues with Service Provider Foundation, I also made it a best practice to create my VMs with names in all UPPER CASE. The validation within the installer script is case sensitive, so upper-case server names avoid the problem.

Wnbowman and David Oliver Elgh, two commenters in the TechNet gallery, suggested several modifications to speed up the VM builds with the VMCreator script, so I added those as well.

The first modification to the VMCreator script was to add <WindowsTimeZone> to the <Default> section of the variable.xml. Now VMs come up with the correct Windows time zone, which is one less thing to configure after deployment. I also added a section to specify the network interface name, which differs between Windows Server 2008 and Windows Server 2012. Within the <NetworkAdapter> section there is a place to add the correct interface identifier; it defaults to Ethernet, since the majority of the builds will be Windows Server 2012. For Windows Server 2008, my customization looks like this:

<VM Count="7">
  <OSDisk>
    <Parent>E:\VHD\WS2008R2SP1Ent64.vhdx</Parent>
    <Type>Differencing</Type>
  </OSDisk>
  <NetworkAdapter>
    <Identifier>Local Area Connection</Identifier>
  </NetworkAdapter>
</VM>

This shows the location of the <WindowsTimeZone> modification in the variable.xml:

<JoinDomain>
  <Domain>contoso.com</Domain>
  <Credentials>
    <Domain>contoso.com</Domain>
    <Password>p@ssW0rd</Password>
    <Username>SCInstaller</Username>
  </Credentials>
</JoinDomain>
<AdministratorPassword>p@ssW0rd</AdministratorPassword>
<WindowsTimeZone>Central Standard Time</WindowsTimeZone>
</Default>

My parent VHDX drives have also been increased in size from the original 40 GB to 80 GB, to resolve low disk space issues that also caused the installer script to fail on some roles.

In part 1 of this series, I experimented with different combinations of System Center roles on the individual VMs, hoping to reduce CPU and RAM requirements. After re-watching Rob Willis' MMS 2013 presentation on Channel 9, I used the Combinations section in the workflow.xml to come up with a modified sequence of roles on the servers; my original goal was to install the roles on the minimum number of servers possible to reduce memory use and CPU contention. PDT only allows certain combinations of roles, so you must consult the workflow.xml to see what is possible.
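To give a feel for what you are looking for when you open the file, a Combinations entry groups the role names that PDT will allow on the same server. The shape below is illustrative only — the element layout and role names are assumptions, so treat your own workflow.xml as the authority:

```xml
<!-- Illustrative sketch of a Combinations entry; element names and role
     names here are assumptions, not copied from the shipping workflow.xml. -->
<Combinations>
  <Combination>
    <Role>System Center 2012 SP1 Virtual Machine Manager</Role>
    <Role>System Center 2012 SP1 App Controller</Role>
  </Combination>
</Combinations>
```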

For my small home lab, with only one Hyper-V server, I was much more successful consolidating roles onto only 7 VMs and increasing their memory than I was spreading the roles across separate VMs.

My attempts were also more successful with the following security changes:

  • SCInstaller was added to the Domain Admins, SCAdmins and SQLAdmins groups.
  • The sc_dw and sc_svc service accounts were also added to the SQLAdmins group.
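These group changes can be scripted from any domain-joined machine with the ActiveDirectory module installed. A minimal sketch, assuming the accounts and groups already exist under these sAMAccountNames:

```powershell
Import-Module ActiveDirectory

# The installer account joins the admin groups used during deployment.
Add-ADGroupMember -Identity 'Domain Admins' -Members 'SCInstaller'
Add-ADGroupMember -Identity 'SCAdmins'      -Members 'SCInstaller'
Add-ADGroupMember -Identity 'SQLAdmins'     -Members 'SCInstaller'

# The service accounts need SQL admin rights as well.
Add-ADGroupMember -Identity 'SQLAdmins' -Members 'sc_dw', 'sc_svc'
```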

It did not matter whether I tried to install the roles on 7 VMs or 9 VMs when installing all the roles at one time: in either case, the PDT Installer script would randomly error out on different roles because either the MSSQLSERVER service or individual role services failed to start. In many cases I was able to restart the Installer script and it would complete all the role installs successfully, but in every case where the SharePoint Foundation/WebParts install failed, I was unable to restart the Installer and install it successfully. If SharePoint fails, you have to snap the VM back to its baseline snapshot before attempting a restart of the Installer.

For a small home lab server using PDT, my solution was to break the PDT run into 2 passes. In pass 1 I installed the VMM, App Controller, Orchestrator, Service Provider Foundation and Configuration Manager roles; in pass 2, the Operations Manager, Service Manager, Service Manager Data Warehouse and SharePoint WebParts roles.
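In practice this means keeping two variable.xml files whose <Roles> sections differ. The excerpt below is a sketch of what a pass-1 file might contain — the role names must match the names defined in workflow.xml exactly, and both these entries and the server assignments are illustrative, not copied from my actual files:

```xml
<!-- Hypothetical excerpt from "Variable - Pass1.xml". Role names must
     match workflow.xml exactly; the assignments below are illustrative. -->
<Roles>
  <Role Name="System Center 2012 SP1 Virtual Machine Manager" Server="SC11.contoso.com" />
  <Role Name="System Center 2012 SP1 App Controller" Server="SC11.contoso.com" />
  <Role Name="System Center 2012 SP1 Orchestrator" Server="SC12.contoso.com" />
  <Role Name="System Center 2012 SP1 Service Provider Foundation" Server="SC12.contoso.com" />
  <Role Name="System Center 2012 SP1 Configuration Manager" Server="SC13.contoso.com" />
</Roles>
```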

PDT Two Pass Procedure with 7 System Center role servers

Pass 1

  1. Copy Variable – Pass1.xml to Variable.xml
  2. Run VMCreator.ps1 to create the System Center role server VMs.
  3. Once all servers are built and domain joined, take a snapshot of all VMs.
  4. Shut down all VMs.
  5. Change the memory settings of SC11, SC12 and SC13 to 8192 MB startup and disable dynamic memory.
  6. Start SC11, SC12, SC13.
  7. Run Installer.ps1
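Steps 4 through 6 can also be scripted with the Hyper-V PowerShell module on the host. A sketch, assuming the VM names used in this series:

```powershell
# Pin the SQL-hosting VMs at 8 GB with dynamic memory off for pass 1.
# Set-VMMemory requires the VM to be powered off, hence the Stop-VM first.
foreach ($vm in 'SC11', 'SC12', 'SC13') {
    Stop-VM -Name $vm -Force
    Set-VMMemory -VMName $vm -DynamicMemoryEnabled $false -StartupBytes 8GB
    Start-VM -Name $vm
}
```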

With the increased memory, the run took only 1:08 (hours:minutes) on my small home server.

[Screenshot: 7-VM pass 1 completed successfully, 2013-06-11 20:05]

Pass 2

  1. Delete Variable.xml and copy Variable – Pass2.xml to Variable.xml.
  2. Shut down SC11, SC12 and SC13 and change their memory settings to 2048 MB startup, 1024 MB minimum and 4096 MB maximum.
  3. You may also want to snapshot the VMs.
  4. Change the memory settings of SC14, SC15 and SC16 to 4096 MB startup, 4096 MB minimum and 8192 MB maximum. SC17 should be 2048 MB startup, 2048 MB minimum and 4096 MB maximum.
  5. Start all the VMs – SC11, SC12, SC13, SC14, SC15, SC16 and SC17.
  6. After they have all come online, run Installer.ps1 again.
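The memory changes in steps 2 and 4 can likewise be scripted while the VMs are shut down (same assumed VM names as above):

```powershell
# Return the pass-1 servers to dynamic memory with modest limits.
foreach ($vm in 'SC11', 'SC12', 'SC13') {
    Set-VMMemory -VMName $vm -DynamicMemoryEnabled $true `
        -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 4GB
}

# Give the pass-2 servers more headroom for their installs.
foreach ($vm in 'SC14', 'SC15', 'SC16') {
    Set-VMMemory -VMName $vm -DynamicMemoryEnabled $true `
        -StartupBytes 4GB -MinimumBytes 4GB -MaximumBytes 8GB
}
Set-VMMemory -VMName 'SC17' -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 2GB -MaximumBytes 4GB
```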

Less processor context switching and more memory made for a faster pass 2, which took only 2:44 to complete!

[Screenshot: 7-VM pass 2 completed successfully, 2013-06-11 21:22]

I also did a 2-pass run using 9 System Center role server VMs; here are the results.

Pass 1 with 9 System Center role servers took 2 hours and 4 minutes to complete. The VMs were only given 4096 MB of memory with dynamic memory disabled, so the extra memory in the 7-VM scenario took an hour off the completion time. I will need to go back and do a 2-pass, 7-VM run with 4096 MB of memory to see, apples to apples, what impact installing roles on more servers versus fewer has on the completion time.

Pass 1

[Screenshot: 9-VM pass 1 completed successfully, 2013-06-10 22:34]

Pass 2 with 9 role servers took 40 minutes longer than pass 2 in the 7-role-server scenario, even though I gave the VMs the same memory settings of 4096 MB startup, 4096 MB minimum and 8192 MB maximum. Installing on more VMs at a time taxed the single quad-core processor much more, so pass 2 in this instance is a direct apples-to-apples comparison between installing System Center roles on 7 VMs versus 9 VMs.

Pass 2

[Screenshot: 9-VM pass 2 completed successfully, 2013-06-10 05:30]


Closing thoughts. PDT gets you much closer to a push-button, unified installation experience for System Center 2012 SP1, but it does not do all the work for you: it is still important to understand the prerequisites before you use it. It is entirely possible to use the PowerShell Deployment Toolkit in a home lab environment; what matters is how you sequence the installations.

Version 1005 of the workflow.xml has recently been released and allows you to install all the new Windows Azure Services 2012 roles, but we’ll save that for another post!

I have attached the PDT files here.

PDT2.4.1004.1

5 thoughts on “My 2nd Week with the PowerShell Deployment Toolkit (PDT)”

  1. Pingback: PowerShell Deployment Toolkit – Automate System Center installation - 4sysops

  2. Cameron Fuller

    Joe, this is excellent information! I’m working with PDT on a lab rebuild now and hitting similar failures which I expect may be due to the amount of memory on each virtual. Based upon your writeup I’m about to increase those and go try it again.

  3. Joe Thompson (post author)

    Glad you found the information helpful Cameron!

    The more memory (and disk) you can throw at the installation process the better! I have been disabling dynamic memory on the VMs by specifying the same value for min and max memory in the variable.xml, which also helps. I just recently replaced my striped pair of SATA disks on a 3 Gb/s channel with SSDs on the 6 Gb/s channel and cut almost 40 minutes off the installation process.

    Joe

  4. Lenar

    This is a very useful post, Joe! Thank you.

    Could you tell me, is it possible to install the whole System Center stack using only one SQL server with different instances? I tried to do this with PDT by modifying variable.xml, but didn’t have success.

  5. Joe Thompson (post author)

    Your best bet is to review the possible role combinations in the workflow.xml file. Most databases can be combined, the exceptions being Operations Manager and Service Manager.

    The possible combinations are listed in the <Roles> section of the workflow.xml.
