A little while back I stepped into a project with no automation. Releases were haphazard and required a lot of tweaking to install into each deployment environment. A bigger issue was the effort required to regenerate the .netTiers data/service layers whenever a column was added to a table. To be effective this stuff needs to be a push-button exercise. On this project there was so much deferred maintenance on the templates that it could take a day of manual tweaks to get the generated code compiling – tweaks applied to the generated code itself rather than to the templates that generated it. And you know how accurate and error-free manually tweaking generated code is, right? I'm rambling. The moral of the story is that automation salves a myriad of ills. It improves not only your ability to do your job; delivering installers that reliably install your application also vastly improves your relationship with the people responsible for looking after the servers your application runs on.
So this first post is to promote – and record for future reference – the sheer usefulness of PowerShell for getting your wares to where they need to be. Recently I've been refactoring a special application that keeps its data in XML files instead of, I don't know, an actual database? And for my first trick I need to separate one "manual" into two. Fortunately PowerShell understands XML in a very cool way. Time to look at a little code.
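Here's the general shape of the script – a minimal sketch, with the `manuals.xml` file name, the `<eisa />` element contents, the new manual ids and titles, and the `$APP_DATA` value all invented to stand in for the real application's details:

```powershell
# Per-environment: C:\...\App_Data in Test, E:\...\App_Data in Production.
# The value here is a placeholder.
$APP_DATA = "C:\inetpub\wwwroot\MyApp\App_Data"

# Build the path to the manuals XML file
$manualsPath = Join-Path $APP_DATA "manuals.xml"

# Always check the file exists before trying to use it
if (-not (Test-Path $manualsPath)) {
    Write-Error "Cannot find $manualsPath"
    exit 1
}

# [xml] converts the file contents into an XmlDocument we can manipulate
$doc = [xml](Get-Content $manualsPath)

# Find the <eisa /> element that currently holds both manuals
$eisa = $doc.SelectSingleNode("//eisa")

# Build the XML for the two replacement manuals (ids/titles are made up)
$manual1 = [xml]"<manual id='eisa-part1'><title>EISA Part 1</title></manual>"
$manual2 = [xml]"<manual id='eisa-part2'><title>EISA Part 2</title></manual>"

# Import the new manuals into the document and remove the one being replaced
$parent = $eisa.ParentNode
[void]$parent.AppendChild($doc.ImportNode($manual1.DocumentElement, $true))
[void]$parent.AppendChild($doc.ImportNode($manual2.DocumentElement, $true))
[void]$parent.RemoveChild($eisa)

$doc.Save($manualsPath)
```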
How great is that? Working with XML in PowerShell is easy. I'm not going to claim I'm writing great code here – just enough to get a one-off upgrade sorted. The script should be largely self-explanatory, but in case you're after a little more detail I'll go through the important bits. First I put together the path to the manuals XML file using the Join-Path cmdlet. The path is different in every location – in Test it's on the C:\ drive, in Production it's E:\ – so in each case I load $APP_DATA to point to the website's App_Data directory, which is where the XML files are stored.
Next, it's always good to check the file exists (Test-Path) before trying to use it. Then the meat course: load all that XML into something I can manipulate, using the [xml] type accelerator to convert the file contents into an XML DOM object. From there I look for the <eisa /> element, which contains the two manuals I need to separate, build the XML for the two new manuals, and finally import the new manuals into the document and remove the manual I'm replacing.
I also like to produce installers for each of the environments where I need to deploy an application. In my next post I'm going to document some MSBuild code to automate building an app for each configuration. Once those installers are built I use PowerShell to put them where the tech guys expect them to be, including creating the correct directory structure based on the application version number defined in Web.config:
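Something along these lines – a sketch in which the paths, the configuration names, the installer naming convention and the Web.config appSettings key are all placeholders for this project's real values:

```powershell
# Where the built installers land, where releases go, and the configurations
# that target each deployment environment (all placeholder values)
$package_dir    = "C:\Build\Installers"
$release_dir    = "\\fileserver\Releases"
$configurations = "Test", "Staging", "Production"

function get_version {
    # Pull the current version number out of Web.config,
    # again using the [xml] type accelerator
    $config = [xml](Get-Content (Join-Path $package_dir "Web.config"))
    $node = $config.configuration.appSettings.add |
        Where-Object { $_.key -eq "ApplicationVersion" }
    return $node.value
}

function create_new_version_folders ($version) {
    # Create a folder per environment under the new version number
    foreach ($configuration in $configurations) {
        $target = Join-Path $release_dir "$version\$configuration"
        if (-not (Test-Path $target)) {
            New-Item -ItemType Directory -Path $target | Out-Null
        }
    }
}

function copy_package ($version, $configuration) {
    # Each configuration produces its own installer, e.g. MyApp.Test.msi
    $installer = Join-Path $package_dir "MyApp.$configuration.msi"
    $target    = Join-Path $release_dir "$version\$configuration"
    Copy-Item $installer $target
}

$version = get_version
create_new_version_folders $version
foreach ($configuration in $configurations) {
    copy_package $version $configuration
}
```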
In the first couple of lines I declare where to find the packages, along with the list of configurations that target each of the different deployment environments. A small function retrieves the current version number from Web.config, again using the PowerShell [xml] facility. create_new_version_folders does what its name suggests and creates the directory structure where the installers will end up. And finally copy_package copies each installer to the correct location in the new directory structure.
For other projects I modify this script to copy, e.g., the SQL scripts used to update the database for a particular version of the application. That can include modifying the SQL to match up whatever linked servers a particular deployment environment uses.
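The linked-server tweak amounts to a per-environment find-and-replace over the script files. A sketch, with the server names, paths and hashtable entirely made up for illustration:

```powershell
# Map each deployment environment to its linked server name (invented values)
$linked_servers = @{
    "Test"       = "SQLTEST01"
    "Production" = "SQLPROD01"
}

foreach ($configuration in $linked_servers.Keys) {
    # Stage a copy of each SQL script per environment
    $target = "C:\Build\Sql\$configuration"
    if (-not (Test-Path $target)) {
        New-Item -ItemType Directory -Path $target | Out-Null
    }
    Get-ChildItem "C:\Build\Sql\*.sql" | ForEach-Object {
        $sql = Get-Content $_.FullName -Raw
        # Swap the development linked server for the environment's own
        $sql = $sql -replace "\[DEVSQL01\]", "[$($linked_servers[$configuration])]"
        Set-Content (Join-Path $target $_.Name) $sql
    }
}
```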
The amount of time this saves me is significant, and it makes my day just a little bit better each time. The converse is also true: whenever I find myself doing something manually more than once, I curse myself for not having spent the time to automate it the first time.