Wednesday, March 18, 2009

Software Craftsmanship

I'm a fan and subscriber of InfoQ. Lots of good stuff, a bit of dross, and the occasional jewel. The latest newsletter included news of the Software Craftsmanship Manifesto: A Call to Arms.

As aspiring Software Craftsmen we are raising the bar of professional software development by practicing it and helping others learn the craft. Through this work we have come to value:

  • Not only working software, but also well-crafted software

  • Not only responding to change, but also steadily adding value

  • Not only individuals and interactions, but also a community of professionals

  • Not only customer collaboration, but also productive partnerships

That is, in pursuit of the items on the left we have found the items on the right to be indispensable.

© 2009, the undersigned. This statement may be freely copied in any form, but only in its entirety through this notice.

History Lesson

Many years ago, somewhat fresh out of school, I figured I was doing OK as a coder. The work I was involved in was generally interesting and I felt I was doing a pretty good job. That all changed dramatically a short time later when I was given responsibility for designing and constructing from scratch the debit card PIN encryption and verification infrastructure for a New Zealand household-name financial services provider. This involved writing C++ code to interface to the hardware encryption module locked into a rack in the computer room and to communicate with PIN Pads and Encoders attached to computers in the branch network, plus a little GUI development to drive it all. These were the days before the emergence of Web Services and WCF; Windows NT was the target platform for both server and workstation, and a lot of the low-level plumbing we now take for granted simply didn't exist. Ultimately it all worked well, I was reasonably happy with the resulting code, and the system provided many years of reliable, low-maintenance service.

However, during the project I found myself in a deep hole with respect to a core aspect of the design. I was working for a vendor organisation at the time, and when I asked for some assistance the silence was unnerving. I suspect there weren't any people in the organisation with relevant experience, but in any case what I needed was more peer review than technical assistance. Fortunately I'm not shy about asking for help when I need it, and I finally got to sit down with another developer who pointed me towards the newly minted Component Object Model (COM).


I started reading and haven't really ever stopped. One of the early books I read was Scott Meyers' awesome Effective C++. Finding out how many things I had been doing incorrectly was painful but enlightening, and it really highlighted to me what a challenge it is to produce good quality C++.

Eric Meyer's Eric Meyer on CSS was another watershed moment for me. After an initial foray into writing code for the Web I finally began to understand how it could work. Let's face it: until recently, everything we've done on the Web has been a hack. We pretend the browser is just a dumb terminal and not a sophisticated development platform in its own right. With Web 2.0, AJAX, and REST we are beginning to see the browser platform being taken seriously, with good separation between client-side and server-side.

Jeffrey Friedl's excellent Mastering Regular Expressions is another great book, as are David Flanagan's JavaScript: The Definitive Guide, Fowler's Patterns of Enterprise Application Architecture, Andrew King's Speed Up Your Site, Uncle Bob's (& son's) Agile Principles, Patterns, and Practices in C#, and Joe Celko's SQL Programming Style: just a few of many great and often extremely readable tomes informing and affording better software craftsmanship.


Having personally witnessed the likes of copy-paste coding, superstitious code, and voodoo chicken coding, I've often found myself trying to raise the standard of the development efforts in which I've been involved. But I've also frequently found myself working without peer review. It is in that latter situation that the right book can be valuable, far more so than wading through pages of Google search results trying to identify the one nugget of information I'm looking for.

I've rambled on long enough. Read the article, sign the manifesto, join the LinkedIn group, and raise the bar.

Thursday, March 12, 2009



Right now I need to park my attempts at configuring a new Windows Server 2008 R2 Core web server. I've effectively managed to hose my configuration, I hope temporarily, in the process of applying a security template using secedit.exe. Not one of my best moments, particularly when I've had something similar occur previously. Unfortunately there just isn't a whole lot of information I can find about using secedit, but I'll keep all those details for another post.


Having used the Java Hibernate ORM tool previously, plus having an interest in LINQ and wanting to cut some actual code, I figured I would kill three birds with one stone using NHibernate.Linq. Of course once I started googling it became apparent that there are many interesting things occurring in the Alt.Net space, enough to get happily side-tracked for some time.

I eventually found my way back and followed these instructions to build NHibernate from the latest Subversion source. The mystery of the missing AssemblyInfo.cs files was quickly solved, although I did have to install NAnt and modify the NAnt build file to correctly generate the missing file for the NHibernate.ByteCode.Spring project. I think the entry had been removed to decouple NHibernate from Spring, but the work may not have been complete at the point that I retrieved the source. The source has a comprehensive test suite which, apart from highlighting issues using NHibernate with my PostgreSQL database, is also a great starting point for information about using NHibernate.


As mentioned above, I'm using Postgres, which happens to be installed on my MacBook Pro; I wanted to run the test suite against it from my Vista VM. Connecting to Postgres was actually pretty straightforward. I created a database user nhibernate with a long random password (using uuidgen) and modified the provided Postgres template, renaming it to hibernate.cfg.xml (see below). Apart from setting the appropriate values for the user credentials and server, the connection string in the template incorrectly specified "Initial Catalog"; this needed to be changed to "Database". The Postgres driver is provided by the Mono project, which also has good information about the connection string parameters. Two assemblies are required: the driver itself, Npgsql.dll, plus a supporting assembly. Once these things are in place, and assuming you already have the NUnit framework installed, the test suite runs quite happily against the Postgres database. A small percentage of tests fail, typically due to some feature difference between the target SQLServer database and Postgres; a good way to highlight those things that might bite later.

<?xml version="1.0" encoding="utf-8"?>
<!--
This template was written to work with NHibernate.Test.
Copy the template to your NHibernate.Test project folder, rename it to hibernate.cfg.xml, and change it
for your own use before compiling the tests in Visual Studio.
-->
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory name="NHibernate.Test">
    <property name="connection.driver_class">NHibernate.Driver.NpgsqlDriver</property>
    <property name="connection.connection_string">
      Server=mbp;Database=nhibernate;User ID=nhibernate;Password=D4F52CC8-5603-4C84-9147-3CC602EF359A;
    </property>
    <property name="dialect">NHibernate.Dialect.PostgreSQLDialect</property>
    <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
  </session-factory>
</hibernate-configuration>


NHibernate.Linq is part of the NH Contrib project and the source is available from Subversion. I replaced the NHibernate.dll with the one generated above. The test suite uses the NorthWind database, so no joy testing against Postgres. I have SQLExpress installed in my Vista VM, and with some jiggery pokery the tests will run successfully. First, allow the user Full control on the folder C:\SQL Server 2000 Sample Databases (or do this second, in the event that attempting to attach the NorthWind database results in an error 5 exception). After starting SQLExpress, if it isn't already running, it's necessary to attach the NorthWind database using the sp_attach_db stored procedure, and you might as well also create the Test database that is required by the test suite.
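For reference, the attach-and-create steps can be scripted rather than done interactively. This is a sketch only: the .MDF/.LDF file names and paths are assumptions based on the default sample-database folder mentioned above.

```powershell
# Sketch: attach NorthWind and create Test on SQLExpress via sqlcmd
# (file paths are assumed defaults; adjust to your install)
sqlcmd -S .\SQLExpress -E -Q "EXEC sp_attach_db @dbname = N'NorthWind', @filename1 = N'C:\SQL Server 2000 Sample Databases\NORTHWND.MDF', @filename2 = N'C:\SQL Server 2000 Sample Databases\NORTHWND.LDF'"
sqlcmd -S .\SQLExpress -E -Q "CREATE DATABASE Test"
```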

By default, access to SQLExpress is limited to the Shared Memory protocol (some info and much joy can be found at SQL Server 2005 SQLExpress error: ...provider: Named Pipes Provider, error 40 - Could not open connection to SQL Server; fortunately I was already familiar with the issue, others obviously weren't so lucky). The NHibernate.Linq test suite uses Named Pipes for connecting to the database and will choke if the protocol isn't enabled. Enabling Named Pipes is straightforward using SQL Server Configuration Manager and requires the service to be restarted.

The other trap to be aware of is the surface area configuration, which needed to be modified to add my user as an administrator and to ensure remote connections via Named Pipes are enabled. This is managed by the SQL Server 2005 Surface Area Configuration tool.

The app.config file for the test suite also needs to be modified to connect to SQLExpress:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="Test"
         connectionString="Data Source=.\SQLExpress;Initial Catalog=Test;Integrated Security=SSPI"/>
    <add name="Northwind"
         connectionString="Data Source=.\SQLExpress;Initial Catalog=NorthWind;Integrated Security=SSPI"/>
  </connectionStrings>
</configuration>


To actually run the tests, the NHibernate.Linq.Test project needs to be the Startup Project and the project properties modified to use NUnit for testing.

Finally the tests run successfully. 46 of the tests are ignored, which indicates there is some work yet to be completed, but at least it is obvious where that work is required.

Thursday, February 26, 2009

More PowerShell Remoting

I have spent some time on forums inquiring about the ability to transfer files between machines using PowerShell Remoting. I have an inkling I've been asking the wrong question. Until I have PowerShell Remoting configured correctly I'm not going to be able to do any useful experimentation.

Now, there are some breaking changes to PowerShell in v2 CTP3. For example, Invoke-Expression no longer has a ComputerName parameter; this is now the domain of Invoke-Command. It makes searching for useful information on Remoting a little more interesting. So let's have a look at the result of issuing a remote Get-Process:

PS C:\Windows\System32> invoke-command -computername w2k8r2 {get-process}
Invoke-Command : [w2k8r2] Connecting to remote server failed with the following error message : The WinRM client cannot process
the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then
HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use
winrm.cmd to configure TrustedHosts. You can get more information about that by running the following command: winrm help config.
At line:1 char:15
+ invoke-command <<<< -computername w2k8r2 {get-process}
+ CategoryInfo : OpenError: (:) [Invoke-Command], PSRemotingTransportException
+ FullyQualifiedErrorId : RemoteRunspaceStateBroken

Fair enough, neither the client nor the server is part of a domain. Adding the destination machine to the TrustedHosts configuration setting is generally accompanied by warnings not to use this option in a production environment. That leaves me the option of using HTTPS, and that requires a certificate. For my purposes a self-signed certificate is going to be perfectly adequate.
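For completeness, the TrustedHosts approach I'm avoiding would look something like this, run on the client:

```powershell
# Not recommended for production: trust the remote host explicitly for WinRM
winrm set winrm/config/client '@{TrustedHosts="w2k8r2"}'
```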

Server Activation

As an aside, I've not been able to activate my new server. Whenever I tried I would get error 0x80072EE7. The penny finally dropped: without a DNS server configured there is no way to resolve the IP address of the Microsoft site used for activation, doh. So let's get that out of the way. The product key for the server is available on the Windows Server 2008 R2 Beta download page:

C:\Users\dave>netsh interface ipv4 add dnsservers "Local Area Connection"
The service has not been started.
PS C:\Users\dave> set-service dnscache -startupType automatic
PS C:\Users\dave> start-service dnscache
PS C:\Users\dave> cscript c:\windows\system32\slmgr.vbs -ipk FH4Y6-XGKH9-V6W22-YB7YT-HQDFB
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

Installed product key FH4Y6-XGKH9-V6W22-YB7YT-HQDFB successfully.

PS C:\Users\dave> cscript c:\windows\system32\slmgr.vbs -ato
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

Activating Windows(R) Web Server, ServerWebCore edition (220cd791-4e60-4412-8fcf-d605f54d3ae0) ...
Product activated successfully.

PS C:\Users\dave> cscript c:\windows\system32\slmgr.vbs -xpr
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

Windows(R) Web Server, ServerWebCore edition:
The machine is permanently activated.


Sweet, now back to the task at hand. I'm using information from Brian Ehlert's IT Proctology to configure WinRM with a self-signed certificate. I don't have a copy of SelfSSL.exe, but I do have two copies of makecert.exe. Past experience tells me that not all makecert.exes are created equal, and as usual the first one I try fails. The version I'm using is C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\makecert.exe, which I suspect was installed with Visual Studio. Attempting to run makecert on my server doesn't work:

PS E:\> .\makecert.exe
Program 'makecert.exe' failed to execute: The subsystem needed to support the image type is not present
At line:1 char:15
+ .\makecert.exe <<<< .
At line:1 char:1
+ <<<< .\makecert.exe
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed


Obviously I'm missing something: WoW64 support for 32-bit applications is now an optional feature in Server Core and isn't installed by default. Since I'll also want the 32-bit .Net Frameworks, I figure I might as well include support for those at the same time. And a restart is required.

C:\Users\dave>start /w ocsetup ServerCore-WOW64
C:\Users\dave>Start /w ocsetup NetFx2-ServerCore-WOW64
C:\Users\dave>Start /w ocsetup NetFx3-ServerCore-WOW64
C:\Users\dave>shutdown /r /t 0

Using PowerShell to eject CD/DVD

Another short aside: the Mac Mini I'm using for my web server doesn't have an eject button. As I mentioned way back here, it is possible to eject media by holding down the left mouse button during a restart, but even for a non-server environment this isn't particularly practical. Googling turned up a lot of suggestions around scripting Windows Media Player, not an option for Server Core. I finally tracked down a workable solution (and useful explanation) on the vistax64 PowerShell forum. Note in the script below there is no ampersand in the verb Eject; this is contrary to the forum post and had me scratching my head for a while, as I could hear the drive being accessed but the DVD wasn't being ejected. Notice also the call to ReleaseComObject and Remove-Variable, a little bit of tidying up to ensure the Shell.Application COM object is disposed of and the associated object reference is deleted:

PS C:\Users\dave\Documents> $sa = new-object -com Shell.Application
PS C:\Users\dave\Documents> $sa.Namespace(17).ParseName("D:").InvokeVerb("Eject")
PS C:\Users\dave\Documents> [System.Runtime.Interopservices.Marshal]::ReleaseComObject($sa)
PS C:\Users\dave\Documents> Remove-Variable sa

Creating a self-signed certificate

So let's see about creating a certificate. I need to create one that can be used for Server Authentication, and because it's not from an existing Trusted Certificate Authority I'm also going to install it into the Trusted Root Certification Authorities store using the command-line utility certutil.exe.

PS C:\Users\dave> .\makecert.exe -r -pe -n "CN=W2k8r2,O=Zebra Crossing Ltd" -e 01/01/2036 -eku -ss my
-sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 W2k8r2.cer

PS C:\Users\dave> certutil -store "my" "W2k8r2"
================ Certificate 2 ================
Serial Number: 18a1515e9f87bc8c4047985ca34ef87b
Issuer: CN=W2k8r2, O=Zebra Crossing Ltd
NotBefore: 27/02/2009 1:10 p.m.
NotAfter: 1/01/2036 12:00 a.m.
Subject: CN=W2k8r2, O=Zebra Crossing Ltd
Signature matches Public Key
Root Certificate: Subject matches Issuer
Cert Hash(sha1): 3d bf 69 dc 88 c1 bd 68 bc b0 dc 7c af 4c 9f 44 4a fb 78 73
Key Container = 6755b6e3-4b68-4444-8478-f418c9bf58c8
Unique container name: ec9335511231310c09e44db4db78a25b_2515a01f-d083-4c9d-abf6-5ef325a4f5e5
Provider = Microsoft RSA SChannel Cryptographic Provider
Encryption test passed
CertUtil: -store command completed successfully.

PS C:\Users\dave\Documents> certutil -addstore root .\W2k8r2.cer
Signature matches Public Key
Certificate "CN=W2k8r2, O=Zebra Crossing Ltd" added to store.
CertUtil: -addstore command completed successfully.

PS C:\Users\dave\Documents> certutil -store root W2k8r2
================ Certificate 0 ================
Serial Number: 5e4aae197bd461994867979c58549cff
Issuer: CN=W2k8r2, O=Zebra Crossing Ltd
NotBefore: 27/02/2009 1:37 p.m.
NotAfter: 1/01/2036 12:00 a.m.
Subject: CN=W2k8r2, O=Zebra Crossing Ltd
Signature matches Public Key
Root Certificate: Subject matches Issuer
Cert Hash(sha1): da bd de a7 72 a8 6d 0d 1f d4 4b b5 37 d9 12 c6 49 a9 56 0d
No key provider information
Encryption test passed
CertUtil: -store command completed successfully.

Configuring WinRM

The certificate hash from the previous step is the thumbprint required for setting up a new WinRM listener. Note in the next step we're not using PowerShell, just the standard command prompt; trying to issue this same command in PowerShell gives Error: Invalid use of command line. Type "winrm -?" for help. I figure there must be some escaping needed to get it to work in PowerShell, something to look at another day.

C:\Users\dave>winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="W2k8r2";CertificateThumbprint="3dbf69dc88c1bd68bcb0dc7caf4c9f444afb7873"}
Address =
ResourceURI =
Selector: Address = *, Transport = HTTPS

Time to test the configuration on the server.

C:\Users\dave>winrs -r:https://W2k8r2 hostname

On the client, Invoke-Command can be used to make the same request. Of course the certificate on the server is not from a trusted Certificate Authority, and the result is not pretty:

PS C:\Users\dave> invoke-command -useSSl W2k8r2 {hostname}
Invoke-Command : [W2k8r2] Connecting to remote server failed with the following error message : The server certificate on
the destination computer (W2k8r2:443) has the following errors:
The SSL certificate is signed by an unknown certificate authority.
At line:1 char:15
+ invoke-command <<<< -useSSl W2k8r2 {hostname}
+ CategoryInfo : OpenError: (:) [Invoke-Command], PSRemotingTransportException
+ FullyQualifiedErrorId : RemoteRunspaceStateBroken

Fortunately there is an option to ignore the Certificate Authority check:

PS C:\Users\dave> $so = New-WSManSessionOption -SkipCACheck
PS C:\Users\dave> invoke-command -useSSl -SessionOption $so W2k8r2 {hostname}


Cool, now let's see if I can get a file transferred between the server and my Vista VM. I specify a -ReadCount argument of 0 to indicate the entire contents should be sent at one time (the default is 1 line, which for a binary file is extremely slow):

PS C:\Users\dave> invoke-command -useSSL -SessionOption $so W2k8r2 {get-content -encoding byte -ReadCount 0
"C:/Users/dave/Documents/makecert.exe"} | set-content -encoding byte .\Documents\makecert.exe

PS C:\Users\dave> .\Documents\makecert.exe
Error: Please either specify the outputCertificateFile or -ss option
Usage: MakeCert [ basic|extended options] [outputCertificateFile]
Basic Options
-sk <keyName> Subject's key container name; To be created if not present
[rest of output elided]

Hooray. A little tweaking and I should be able to reliably transfer files over the WinRM connection to any path on the server that I have the authority to access.

Not so fast

Not quite there yet; it turns out the only time I can successfully connect using PowerShell Remoting is when I'm also logged in to the server using Remote Desktop. <curses foiled="again"/>

Wednesday, February 25, 2009

Installing Windows Server 2008 Core part 4

Remote management of IIS 7 would be nice. I spent some time trying to figure out why the IIS Management Console on Vista didn't have the option to add a new connection to my new Win2K8 server; turns out I need Internet Information Services (IIS) 7.0 Manager. Installing and starting it displays a substantially different UI.

On the server, the Web Management Service needs to be started. Checking the firewall configuration on the server reveals there is already open access through the firewall for the service. I'm going to set the service's startup type to automatic, and I'm also going to restrict incoming traffic to the local subnet:

C:\> Netsh advfirewall firewall set rule name="Web Management Service (HTTP Traffic-In)" new remoteip=localsubnet

Now I can connect through the firewall, but on my first connection attempt I get a Server Certificate Alert because I don't have the server's self-signed certificate in my trusted certificate store. Viewing the certificate provides an option to install it into my local store, which is nice. Sadly, the remote server is still not accepting connections. So how do you enable remote management for IIS? Googling reveals any number of helpful pictures of where in the IIS Management Console to tick the box enabling remote management; unfortunately there isn't any obvious information about how to do this on Core. I'm having a wee stab in the dark using regedit to modify the entry HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement, changing the REG_DWORD value from 0 to 1. Restart the Web Management Service and I can now successfully add the server as a new connection. Unfortunately, when I open that connection there are only three options available to me: Feature Delegation, IIS Manager Permissions, and IIS Manager Users.
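For anyone else stabbing in the dark, the registry change and service restart can be scripted rather than done through regedit (WMSvc is the service name for the Web Management Service):

```powershell
# Enable IIS remote management on Server Core, then restart the Web Management Service
reg add "HKLM\SOFTWARE\Microsoft\WebManagement\Server" /v EnableRemoteManagement /t REG_DWORD /d 1 /f
net stop WMSvc
net start WMSvc
```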

After some toing and froing on The Official Microsoft IIS Site I managed to get it sorted. The Official word from Anil Ruia is:

This is an issue with how clr looks for managed assemblies - in vista SP1, Microsoft.Web.Management.IisClient, version= (and AspnetClient) are also installed to the inetsrv directory along with being in the GAC - now when connecting to win7, version 7.5 of these dlls are downloaded, but when trying to load them, CLR first finds the 7.0 versions in the inetsrv directory and complains that the version does not match. The fix in vista sp2 is just to make sure that the 7.0 versions are not installed to inetsrv directory.

I created a sub-directory C:\Windows\System32\inetsrv\Microsoft.Web and moved Microsoft.Web.*.dll into that directory. I did have to take ownership of those particular files and then allow Administrators full control. But, yes, it does work and I now have access to the complete list of features in IIS 7 on my remote server.
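The file shuffle described above can be sketched as follows; the takeown/icacls steps cover the ownership change I mentioned:

```powershell
# Move the 7.0 Microsoft.Web assemblies out of inetsrv so the CLR loads the downloaded 7.5 versions
cd C:\Windows\System32\inetsrv
takeown /f Microsoft.Web.*.dll
icacls Microsoft.Web.*.dll /grant Administrators:F
mkdir Microsoft.Web
move Microsoft.Web.*.dll Microsoft.Web\
```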

Monday, February 16, 2009

More PowerShell Adventures

The journey to administration of IIS 7 on Windows Web Server 2008 R2 Core has not been a happy time; hopefully perseverance will eventually pay some dividends. My objective right now is to install the IIS 7 PowerShell snap-in. I know where it is: I just need to download it to my server. Note my server doesn't currently have a DNS entry, which means providing a hostname is just not going to work; I need to replace it with the equivalent IP address. Google throws up a number of wget scripts; after trying various versions and failing to get anything to retrieve the file, I went back to basics and used a page from B#.Net Blog. This uses the .Net Framework class System.Net.WebClient to retrieve the file. The problem with most of the scripts I found is that error reporting is generally left to the default PowerShell exception report, which by and large is not particularly useful:

Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient request."
At line:1 char:21
+ $client.DownloadFile <<<< ($url, $file)
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DotNetMethodException

In order to get a little more detail I need to trap the exception and delve into it to find out what caused it. This is relatively straightforward using the trap mechanism. To begin with, let's display the methods available on the Exception object.

PS C:\Users\Administrator> trap { $error[0].Exception | get-member } $client.DownloadFile($url,$file)

TypeName: System.Management.Automation.MethodInvocationException

Name MemberType Definition
---- ---------- ----------
Equals Method System.Boolean Equals(Object obj)
GetBaseException Method System.Exception GetBaseException()
GetHashCode Method System.Int32 GetHashCode()
GetObjectData Method System.Void GetObjectData(SerializationInfo info, StreamingContext c...
GetType Method System.Type GetType()
ToString Method System.String ToString()
Data Property System.Collections.IDictionary Data {get;}
ErrorRecord Property System.Management.Automation.ErrorRecord ErrorRecord {get;}
HelpLink Property System.String HelpLink {get;set;}
InnerException Property System.Exception InnerException {get;}
Message Property System.String Message {get;}
Source Property System.String Source {get;set;}
StackTrace Property System.String StackTrace {get;}
TargetSite Property System.Reflection.MethodBase TargetSite {get;}
Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient re
At line:1 char:63
+ trap { $error[0].Exception | get-member } $client.DownloadFile <<<< ($url,$file)
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DotNetMethodException

The ToString method is reasonably useful but generates a fair amount of output, including a full stack trace. Time to see what it looks like:

PS C:\Users\Administrator> $client = new-object System.Net.WebClient
PS C:\Users\Administrator> $url = ""
PS C:\Users\Administrator> $file = Join-Path $home Downloads\iis7psprov_x64_rc1.msi
PS C:\Users\Administrator> $file
PS C:\Users\Administrator> trap { $error[0].Exception.ToString() } $client.DownloadFile($url,$file)
System.Management.Automation.MethodInvocationException: Exception calling "DownloadFile" with "2" a
rgument(s): "An exception occurred during a WebClient request." ---> System.Net.WebException: An ex
ception occurred during a WebClient request. ---> System.Configuration.ConfigurationErrorsException
: Error creating the Web Proxy specified in the '' configuration section. --
-> System.DllNotFoundException: Unable to load DLL 'rasapi32.dll': The specified module could not b
e found. (Exception from HRESULT: 0x8007007E)
at System.Net.UnsafeNclNativeMethods.RasHelper.RasEnumConnections(RASCONN[] lprasconn, UInt32& l
pcb, UInt32& lpcConnections)
[full stack trace elided]
at System.Management.Automation.StatementListNode.ExecuteStatement(ParseTreeNode statement, Arra
y input, Pipe outputPipe, ArrayList& resultList, ExecutionContext context)
Exception calling "DownloadFile" with "2" argument(s): "An exception occurred during a WebClient re
At line:1 char:61
+ trap { $error[0].Exception.ToString() } $client.DownloadFile <<<< ($url,$file)
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DotNetMethodException

As I said, a fair amount of output. But it does contain some useful information - Error creating the Web Proxy specified in the '' configuration section. ---> System.DllNotFoundException: Unable to load DLL 'rasapi32.dll'. I guess I should have read the release notes:

If you run applications that use managed code that communicates with the Internet with autoproxy
detection, the operation will fail with an unhandled exception that mentions an error creating the
Web proxy because Rasapi32.dll could not be found.

To correct this, open the Machine.config file (default location is
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG), locate the final closing </configuration>
tag, and append the following:

<> <defaultProxy> <proxy usesystemdefault="false" proxyaddress="<replace_with_proxy_address>"
bypassonlocal="true" /> </defaultProxy> </>

where <replace_with_proxy_address> is the address and port, if needed, of the proxy server
used by the client application to access the .NET application. For example, proxyaddress="<http://proxyserver:80>".

I'm not actually using a proxy server; what I want to do is avoid the system trying to load rasapi32.dll at all. What I need is some way to indicate that I don't wish to use the default proxy, whatever that is defined as, when I'm making my WebClient.DownloadFile() request. The correct sequence of calls is then:

PS C:\Users\Administrator> $client = new-object System.Net.WebClient
PS C:\Users\Administrator> $url = ""
PS C:\Users\Administrator> $file = Join-Path $home Downloads\iis7psprov_x64_rc1.msi
PS C:\Users\Administrator> [System.Net.GlobalProxySelection]::Select = [System.Net.GlobalProxySelection]::GetEmptyWebProxy()
PS C:\Users\Administrator> trap { $error[0].Exception.ToString() } $client.DownloadFile($url,$file)

This next step ends up just being stupid, annoying, insulting; you get the picture, not happy. The instructions for installing the IIS 7.0 PowerShell Snap-in effectively say "go get the installer and run the installer". So I've finally got the installer, I go ahead and run it, and the installation fails with the error IIS Version 7.0 or greater is required to install Microsoft Windows PowerShell provider for IIS 7.0. I'm running Windows Server 2008 R2; I've got IIS 7 installed. A forum post adds fuel to the fire, particularly this reply:

This is "by design". What you have in Win7 beta is practically the same code as RC of powershell snapin, which is for downlevel versions only.

Really sucks. If something like this is "by design" then it should be documented somewhere obvious; this is just a waste of time for anyone who is trying to get enthusiastic about using this stuff. The workaround is also documented in the thread (thanks 13xforever): change the minor version of IIS from 5 to 0 in HKLM:\SOFTWARE\Microsoft\InetStp (and back again after the installation is complete). Now I need to let PowerShell run scripts:

PS C:\Users\Administrator> Get-ExecutionPolicy
PS C:\Users\Administrator> Set-ExecutionPolicy RemoteSigned
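Going back a step, the version shuffle from the forum thread can be done in PowerShell along these lines. A sketch only: I'm assuming the value under InetStp is named MinorVersion, which is what it's called on my machine.

```powershell
# Pretend IIS is 7.0 for the duration of the snap-in install, then put it back
Set-ItemProperty HKLM:\SOFTWARE\Microsoft\InetStp -Name MinorVersion -Value 0
# ... run the IIS 7.0 PowerShell snap-in installer here ...
Set-ItemProperty HKLM:\SOFTWARE\Microsoft\InetStp -Name MinorVersion -Value 5
```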

And to make my life easier on Core I've created $home\Bin\IIS.cmd containing the Target: for the IIS PowerShell Management Console menu item shortcut installed with the snap-in. Unfortunately I can't post that information because Blogger thinks I'm doing something dodgy and removes the offending code and the rest of the post (I guess if you've read this far you might consider that a bonus). I've added the Bin folder to the PATH environment variable. To avoid adding the Administrator's Bin folder to the path for all users, it's first necessary to locate the appropriate registry key for the Administrator's environment variables; on my server it looks something like HKEY_USERS\S-1-5-21-1234567890-1234567890-1234567890-500\Environment. Then it's just a case of adding a new Expandable String Value called PATH and giving it a value of %PATH%;%UserProfile%\Bin. The command prompt needs to be restarted in order for the change to take effect.
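The per-user PATH change can also be made from the command line rather than regedit. A sketch: the SID placeholder stands in for the real Administrator SID mentioned above.

```powershell
# Add a per-user expandable PATH value; replace <Administrator-SID> with the actual SID
reg add "HKU\<Administrator-SID>\Environment" /v PATH /t REG_EXPAND_SZ /d "%PATH%;%UserProfile%\Bin"
```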

To cap off a fairly unproductive day, I've just been reminded that I shouldn't be using the Administrator account for administration. Absolutely correct and in fact the Administrator account should be renamed and disabled. Way back in Installing Windows Server 2008 Core part 2 I created another administrative account. Time to start using it. This means I need to move my Bin directory and apply the registry changes to a different user profile. Finally:


Sunday, February 15, 2009

PowerShell Adventures

So I want to be able to administer IIS 7 remotely using PowerShell. Installation on W2K8 R2 is easy via start /w ocsetup MicrosoftWindowsPowerShell, and there is a PowerShell Snap-in for IIS 7.0. In case you're wondering, the location of powershell.exe is added to the PATH during installation; you just need to restart the console for it to take effect (i.e. there is no need to hack the registry to add the location of powershell.exe to the PATH environment variable). A few common commands to get started:


Get-Help

Get help for a particular cmdlet, e.g. help get-process.

Get-Alias

Returns a list of aliases for PowerShell cmdlets.

Get-Command

Returns a list of PowerShell cmdlets.

Get-Process

Returns a list of all the running processes.

Get-Service | sort status,name

Get-Service | Where-Object {$_.status -like "running"}

Get a list of services - in the first case sorted by status and name, in the second filtered to include only running services.

Now it would be kind of cool to be able to use the new Windows PowerShell Integrated Scripting Environment to administer the server. It turns out that in some cases it just works. Remoting, however, requires the Windows PowerShell V2 Community Technology Preview 3 (CTP3) on your Vista box. Check out the help text related to remoting:

PS C:\> help about_remote_requirements


NOTE: Many cmdlets, including the Get-Service, Get-Process, Get-WMIObject,
Get-Eventlog and Get-Event cmdlets get objects from remote computers,
but they use .NET methods to retrieve the objects. They do not use the
Windows PowerShell remoting infrastructure. The requirements in this
document do not apply to these cmdlets.

Currently the WinRM service is stopped on the server. Looks like I need to do some configuration:


The remoting features of Windows PowerShell are supported by the WinRM service, which is
the Microsoft implementation of the WS-Management protocol. To use the remoting features,
you need to change the default configuration of WS-Management on the system.

Windows PowerShell provides a script to configure WS-Management. The script is located
in the Windows PowerShell installation directory ($pshome).

To run the configuration script:

1. Open Windows PowerShell with the "Run as Administrator" option.

2. At the command prompt, type:

& $pshome\configure-wsman.ps1

Sadly there is no $pshome\configure-wsman.ps1 on either my Vista box or my server. What to do? I tracked down Installation and Configuration for Windows Remote Management. Following the instructions and running PS C:\> winrm quickconfig on the server gives me this:

PS C:\> winrm quickconfig
WinRM already is set up to receive requests on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.
Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.

Make these changes [y/n]? y

WinRM has been updated for remote management.

Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.
Configured LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.

Cool, that all looks good. If only the same thing worked on Vista:

PS C:\> winrm quickconfig
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Set the WinRM service type to delayed auto start.
Start the WinRM service.
Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.

Oh dear. I did however manage to locate PowerShell Remoting on Windows 2008 R2 Server Core. This indicates there is a built-in function called Enable-PSRemoting. Running this in my Vista PowerShell reveals:

PS C:\> Enable-PSRemoting
Windows PowerShell remoting features are not enabled or not supported on this machine.
This may be because you do not have the correct version of WS-Management installed or this version of Windows does not support remoting currently.
For more information, type 'get-help about_remote_requirements'.
At line:13 char:37

But I know where to get the WinRM CTP. This is a Windows Update and requires a restart. Now things are looking up:

PS C:\Windows\System32> enable-psremoting
WinRM has been updated to receive requests.
WinRM service type changed successfully.
WinRM service started.
Configured LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.

WinRM has been updated for remote management.
Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.

But I'm not quite there yet:

PS C:\Windows\System32> Enter-PSSession -computerName myServer
Enter-PSSession : Connecting to remote server failed with the following error message : The WinRM client cannot process the reque
st. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS tran
sport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to configure
TrustedHosts. You can get more information about that by running the following command: winrm help config.
At line:1 char:16
+ Enter-PSSession <<<< -computerName myServer
+ CategoryInfo : InvalidArgument: (myServer:String) [Enter-PSSession], PSRemotingTransportException
+ FullyQualifiedErrorId : CreateRemoteRunspaceFailed,Microsoft.PowerShell.Commands.EnterPSSessionCommand
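The error message itself points at one workaround for a workgroup client: add the server to the client's TrustedHosts list. A hedged sketch (this relaxes the client's authentication checks, so only list machines you actually trust):

```
PS C:\Windows\System32> winrm set winrm/config/client @{TrustedHosts="myServer"}
```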

Checking my server listener configuration I have:

PS C:\Users\Administrator> winrm enumerate winrm/config/listener
Address = *
Transport = HTTP
Port = 80
Enabled = true
URLPrefix = wsman
ListeningOn =,, ::1, fe80::100:7f:fffe%6, fe80::5efe:, fe

So no joy. I need to remove the HTTP listener and add an HTTPS listener, but I'm going to save that exercise for another post - this one's starting to go feral. Right now I'm more interested in getting the IIS PowerShell snap-in installed so I can start managing IIS. If you've read this far only to discover there is no resolution, I apologise, and I hope you appreciate that I share your frustration.

Tuesday, February 10, 2009

Installing Windows Server 2008 Core part 3

IIS 7.0. Time to have a look at the various options. A suggested command line looks like the following, derived from Installing IIS 7.0 on Windows Server 2008 and Windows Server 2008 Packages.

C:\> start /w pkgmgr /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-HttpErrors;IIS-HttpRedirect;IIS-ASPNET;IIS-NetFxExtensibility;IIS-ISAPIExtensions;IIS-ISAPIFilter;IIS-HealthAndDiagnostics;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-HttpTracing;IIS-CustomLogging;IIS-ODBCLogging;IIS-Security;IIS-BasicAuthentication;IIS-WindowsAuthentication;IIS-DigestAuthentication;IIS-URLAuthorization;IIS-RequestFiltering;IIS-IPSecurity;IIS-Performance;IIS-HttpCompressionStatic;IIS-HttpCompressionDynamic;IIS-WebServerManagementTools;IIS-ManagementScriptingTools;IIS-ManagementService

Wading through each of the options should give some indication of what I need to install. Be warned that a single misplaced character will cause the install to fail, and tracking that sucker down can be a right pain. When the install has completed, the last error can be checked to ensure it was successful. An error code of 0 indicates success; an error code of -2146498548 indicates you might have accidentally typed an s on the end of IIS-ManagementService, which annoyingly happens to be the last item in the command:

C:\> echo %errorlevel%


The options in the command above, matched in order with their descriptions:

IIS-WebServerRole: The Web server that provides the Web application infrastructure for all versions of the Windows operating system.

IIS-WebServer: Installs the IIS Web server, which enables support for HTML Web sites, and the optional support for ASP.NET, classic ASP, and Web server extensions.

IIS-CommonHttpFeatures: Installs support for static Web server content, such as for HTML and image files, for custom errors, and for redirections.

IIS-StaticContent: Serves the HTM, the HTML, and the image files from a Web site.

IIS-DefaultDocument: Enables the specification of a default file, which is loaded if a user does not specify a file in a request URL.

IIS-HttpErrors: Installs the HTTP error files and enables the customization of the error messages.

IIS-HttpRedirect: Supports client request redirections.

IIS-ASPNET: Enables support for ASP.NET applications.

IIS-NetFxExtensibility: Enables a Web server to support the .NET Framework-managed module extensions.

IIS-ISAPIExtensions: ISAPI Extensions.

IIS-ISAPIFilter: Enables ISAPI filters to modify the behavior of a Web server.

IIS-HealthAndDiagnostics: Monitors and manages Web server, Web site, and Web application health.

IIS-HttpLogging: Enables the logging of Web site activity for a specified Web server.

IIS-LoggingLibraries: Installs the IIS logging tools and related scripts.

IIS-RequestMonitor: Monitors the health of Web servers, Web sites, and Web applications.

IIS-HttpTracing: Enables tracing support for ASP.NET applications and failed requests.

IIS-CustomLogging: Supports custom logging for Web servers, Web sites, and Web applications.

IIS-ODBCLogging: Supports logging to an ODBC-compliant database.

IIS-Security: Supports additional security protocols for Web servers, Web sites, Web applications, virtual directories, and files.

IIS-BasicAuthentication: Simple authentication that requires a valid Windows user name and password.

IIS-WindowsAuthentication: Authentication method that uses either NTLM or Kerberos to validate a user.

IIS-DigestAuthentication: Authentication method that sends a password hash value to the Windows domain controller.

IIS-URLAuthorization: Enables a client computer to access the URLs that exist within a Web application.

IIS-RequestFiltering: Configures the rules that block selected client requests.

IIS-IPSecurity: Enables or denies content access, based on an IP address or domain name.

IIS-Performance: Enables the compression of content before the information is returned to a client computer.

IIS-HttpCompressionStatic: Compresses static content before returning the information to a client computer.

IIS-HttpCompressionDynamic: Compresses dynamic content before returning the information to a client computer.

IIS-WebServerManagementTools: Parent feature for the management tools below.

IIS-ManagementScriptingTools: Enables the management of a local Web server by using the IIS configuration scripts.

IIS-ManagementService: Enables a Web server to be managed remotely from another computer, by using the Web Server Management Console.
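Before running the big pkgmgr command it can be useful to see which of these packages are already present. On 2008 Core the package lister is oclist; on R2 it's dism (hedged, from memory of the two toolsets):

```
C:\> oclist | findstr /i IIS
C:\> dism /online /get-features | findstr /i IIS
```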

Installing IIS adds the appropriate firewall rule, so the proof of the pudding can be had by opening my preferred browser and pointing it at the new server. Hooray, it's the IIS7 welcome page.

Monday, February 9, 2009

Installing Windows Server 2008 Core part 2

So, not exactly square one, but time to repeat a couple of steps: configuring an IP address, changing the computer name, and enabling Remote Desktop. And why does the command prompt disable QuickEdit Mode by default — would anyone complain if Microsoft took it upon themselves to set some reasonable defaults?

Having installed R2, getting PowerShell is easy:

C:\> start /w ocsetup MicrosoftWindowsPowerShell

And starting it:

C:\> windows\system32\windowspowershell\v1.0\powershell.exe

The TechNet blog indicates .Net 2.0 is an optional install on Core. It's required for PowerShell, and it's installed as a prerequisite along with PowerShell if it isn't already present. While I'm doing the .Net thing I might as well install 3.0 and 3.5 (note this runs from the command prompt, not PowerShell):

C:\> start /w ocsetup NetFx3-ServerCore

Set the date/time/timezone:

C:\> control timedate.cpl

And the international settings:

C:\> control intl.cpl

Allow the Event Viewer MMC snap-in to connect through the Windows Firewall. As an aside, I originally copied the following command from a TechNet webpage and got the error Group cannot be specified with other identification conditions when I ran it. It turns out the quotes used on the webpage were the issue; replacing the typographic quotes with standard double quotes fixed the problem:

C:\> Netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes

And the same for Services and Windows Firewall with Advanced Security. I would kind of like to know how to restrict the MMC connectivity to particular machines on the local network; something to add to my TODO list. Also note the group name Remote Service Management has Service in the singular; the sites I've been referencing, including TechNet, all have it as Services plural:

C:\> Netsh advfirewall firewall set rule group="Remote Service Management" new enable=yes

C:\> Netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes

After some digging I believe I can now at least restrict MMC connectivity to my local subnet. This obviously isn't a panacea but it does adhere to the principle of defence in depth. Note these commands can't be applied to a rule group but must instead be applied to each individual rule in the group:

C:\> Netsh advfirewall firewall set rule name="Remote Event Log Management (NP-In)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Event Log Management (RPC)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Event Log Management (RPC-EPMAP)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Service Management (NP-In)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Service Management (RPC)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Service Management (RPC-EPMAP)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Windows Firewall Remote Management (RPC)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Windows Firewall Remote Management (RPC-EPMAP)" new remoteip=localsubnet

C:\> Netsh advfirewall firewall set rule name="Remote Desktop (TCP-In)" new remoteip=localsubnet
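Since the per-rule repetition above is tedious and error-prone, it's a candidate for a small PowerShell loop once PowerShell is installed. A hedged sketch (the rule names must match exactly, and they may vary by locale):

```powershell
# Apply the localsubnet restriction to each named rule in turn.
"Remote Event Log Management (NP-In)",
"Remote Event Log Management (RPC)",
"Remote Event Log Management (RPC-EPMAP)",
"Remote Service Management (NP-In)",
"Remote Service Management (RPC)",
"Remote Service Management (RPC-EPMAP)",
"Windows Firewall Remote Management (RPC)",
"Windows Firewall Remote Management (RPC-EPMAP)",
"Remote Desktop (TCP-In)" |
    ForEach-Object { netsh advfirewall firewall set rule name="$_" new remoteip=localsubnet }
```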

Managing services remotely from my Vista VM using the Services snap-in doesn't provide an option to specify alternate credentials, so the connection fails with access denied. The convenient workaround is to add an administrative account to Win2K8 with the same username and password as my Vista account:

C:\> net user dave letmein /add

C:\> net localgroup Administrators /add dave

So now I have convenient access to the server's event log and a familiar interface for managing services. I might as well take the opportunity to disable the DHCP Client, DNS Client, and TCP/IP NetBIOS Helper services.

Monday, February 2, 2009

Installing Windows Server 2008 Core on Mac Mini

As a Mac-using Windows developer I have, for good measure, a small and extremely low-traffic OpenBSD web server running on an Apple Cube. It's been a fun project. The Cube happily runs headless on my shelf and, lacking a fan, is extremely quiet. I was intending to replace it with an Intel Mac Mini, which is slightly louder but still a nice form factor and has a bit more grunt. Now I find myself needing to upskill on Windows server products. I have in the past been responsible for maintaining and securing Internet-facing Windows 2003 servers. Time to transition to Windows Server 2008 Core Web.

An iso for W2K8 Web Server is available for download here. Leopard's Disk Utility can be used to burn the iso to a DVD. Microsoft allows the use of the software for trial purposes for a period of 60 days and provides instructions to extend that for a further 180 days. The attraction of the Web edition of W2K8 is the lowish price compared to their other server products. The limitation is the inability to run any Microsoft server products except IIS. For my purposes the limitation is not an issue.

So armed with the DVD, the next step is to partition the Mini using Boot Camp Assistant and boot from the install DVD. The install process is short and sweet, and fairly quickly the Windows login prompt is displayed. Unfortunately there's no indication what the initial username and password might be. Without referring to the documentation my first guess was that the initially entered values for username and password would be used as the identity for the administrator. No joy, no joy, fail. My second guess was correct, a username of Administrator and a blank password were accepted and a change password form displayed.

Having successfully signed on, the only UI provided with Core is a command prompt. Kind of daunting. First things first (apart from configuring the command prompt with useful defaults): how do I change the default computer name, configure an IP address, and access the Mini from my MBP using Microsoft Remote Desktop Connection Client for Mac?

Change the name of the computer:

C:\> netdom renamecomputer {current name} /NewName:{new name} (minus the curly braces).

Note a reboot is required before the new computer name will take effect:

C:\> shutdown /r /t 0

Setting the network configuration:

C:\>netsh interface ipv4 set address name="Local Area Connection" source=static address= mask= gateway=

Allow access via RDC:

C:\> Cscript %windir%\system32\SCRegEdit.wsf /ar 0

So far so good. Unfortunately it's not as simple as unplugging the display, keyboard, and mouse from the Mini. The Mini requires a video signal in order to boot successfully. A simple hack using a resistor is required, details are available from Blackfriars.

OK, at this point I've just discovered that ASP.Net is not supported on Server Core. That seems like a fairly major oversight, and a little googling reveals it will be supported in Windows Server 2008 R2, which is currently in beta. So time to go back and download the beta, which finally succeeded after 4 failed attempts I can only ascribe to gremlins.

Hurdles are coming thick and fast now. There's no eject button on a Mac Mini, so how to extract the disk currently resident in the drive? Sadly there is no obvious way to do this from a command prompt, and the many VB scripts that instantiate a media player and issue it an eject command aren't feasible on Windows 2008 Server Core. Fortunately there is a simple Mac mechanism that only requires the Mini to be restarted while holding down the left mouse button. Not exactly intuitive, but it achieves the desired result. Next, attempting to boot the Mini from the new disk displayed the prompt "Select CD-ROM Boot Type :", and no matter how hard I pressed either "1" or "2" the installation progressed no further. Without going into detail there is a solution. The first step is to download oscdimg.exe; this .exe is also available in the Windows Automated Installation Kit, which is a 992.2 MB download. Next, using instructions borrowed from Sergio Mcfly and assuming your Parallels Vista install maps D:\ to the DVD drive in your MBP, run the following command:

C:\> oscdimg.exe -n -m -bd:\boot\ d:\ c:\w2k8r2Web.iso

Burn the resulting iso to a new DVD and the Mini now boots from the install disk.

This time, with a clean install, I don't have to guess the Administrator username. Instead I'm immediately prompted to set the password for Administrator and I now have a command prompt and the desktop background informs me I'm running Windows Web Server 7 For testing purposes only. Build 7000.

Thursday, January 22, 2009

Upgrading to jQuery 1.3

Suddenly it's 2009 and jQuery 1.3 has just been released. Upgrading was largely a no-brainer, but there were a couple of hurdles. First, if you're using jQuery UI it's necessary to upgrade to version 1.6r5, i.e. a beta release. Hopefully a full release will be available shortly.

Having upgraded to jQuery UI 1.6r5, if you are using the tabs widget there is a bug in the code that is triggered when adding a new tab (ticket #3875). This has just been fixed in trunk. Basically when creating a tab the correct classes weren't being added and the contents of the newly created tab would be displayed on the page. Getting the latest ui.tabs.js from the subversion repository fixes the problem.

Up until now I've been using the jquery.delegate plugin to handle event delegation. I've now switched to using jQuery 1.3 live events. This required some minor surgery on the code. Where previously I had something like:

$("#someDiv").delegate('click change', ".hookAddReq", addRequirement);

I now have:

$(".hookAddReq", "#someDiv").live("click", addRequirement);

This bears a little more scrutiny. The class .hookAddReq is assigned to a <select/> element. I add the hook prefix to class names that I use specifically for attaching behaviour, and I try to attach behaviour using classes rather than wiring handlers to particular HTML elements. There are a couple of reasons for doing this. Firstly, the hook prefix tells me immediately that an element has behaviour attached (distinguishing the class from classes used purely for styling). Secondly, by using a hook class I can hopefully modify the structure of the HTML without breaking the behaviour.
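The convention is easy to check mechanically. A small plain-JavaScript sketch (the class names here are made up for illustration):

```javascript
// Classes prefixed with "hook" carry behaviour; everything else is styling-only.
function isBehaviourHook(className) {
    return className.indexOf("hook") === 0;
}

// Given an element's class attribute, pick out just the behaviour hooks.
var classes = "input-narrow hookAddReq rounded".split(" ");
var hooks = classes.filter(isBehaviourHook);
// hooks -> ["hookAddReq"]
```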

In the original code I was delegating both the click and change events. This came about because I was developing in Firefox and discovered during testing that the change event doesn't bubble in IE. I guess this is probably documented somewhere, but I was scratching my head for a while. Having moved to jQuery 1.3 and actually read the documentation, live events don't support the change event regardless. I still wanted to delegate those events, and I was able to reuse the code I had already written to handle the click events for IE (see below). It simply checks whether selectedIndex is non-zero, and resets it to zero to prevent more than one click event being processed.

addRequirement = function () {
   // Only act when a non-default option has been selected, then reset the
   // select so the same selection isn't processed twice.
   var selIdx = this.selectedIndex;
   if (selIdx) {
      this.selectedIndex = 0;
      var control = this.options[selIdx];
      $(view).trigger("addRequirementEvent", control);
   }
};

The other plugin that broke is the excellent jquery.transform that I use to convert XML from the server into HTML on the client. The version I was using still had the @ character in jQuery attribute selectors. This has been fixed in version 3.0.1 released on 20 January.
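The change in question is jQuery 1.3 dropping the old XPath-style @ prefix in attribute selectors ([@href] becomes [href]). If you have a lot of selectors to migrate, a crude helper along these lines (my own sketch, not part of either plugin) does the bulk of it:

```javascript
// Strip the legacy "@" that jQuery <= 1.2 tolerated in attribute selectors.
function migrateSelector(selector) {
    return selector.replace(/\[@/g, "[");
}

var migrated = migrateSelector("a[@href] input[@name=first]");
// migrated -> "a[href] input[name=first]"
```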

Finally, the jQuery UI skin needs to be updated. Get the latest from jQuery ThemeRoller.