Retention policy and crypto

I just deleted an old post because I re-read it and I was attempting my own crypto on config files instead of using DPAPI. I reserve the right to delete old posts if they turn out to be complete bollocks.

Just to show how you encrypt an app config file on a machine where you do not have IIS installed and cannot use the traditional aspnet_regiis command-line tool that your first googlebing will tell you all about – I give you the below piece of code.

Note that you need to encrypt the file on the target machine, as DPAPI is machine specific, so there will be a brief moment when the file is on disk in clear text – which is a basic flaw of the entire DPAPI concept – but at least you are not rolling your own crypto.

using System;
using System.Configuration;
using System.Diagnostics;
using System.IO;

static int Main(string[] args)
{
    if (args.Length != 1)
    {
        Console.Error.WriteLine("Wrong number of arguments.\r\n{0} <configfile_to_encrypt>", GetExeName());
        return 1;
    }

    return EncryptAppSettings(args[0]);
}

private static int EncryptAppSettings(string pathToFile)
{
    if (!File.Exists(pathToFile))
        return LogFatalError(string.Format("Executable {0} not found", pathToFile), 2);
    if (!File.Exists(pathToFile + ".config"))
        return LogFatalError(string.Format("Config file {0}.config not found", pathToFile), 3);

    var configuration = ConfigurationManager.OpenExeConfiguration(pathToFile);
    var appSettings = (AppSettingsSection)configuration.GetSection("appSettings");
    // Encrypt the section with the machine-specific DPAPI provider and force it to be written back
    appSettings.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    appSettings.SectionInformation.ForceSave = true;
    configuration.Save(ConfigurationSaveMode.Full);
    return 0;
}

private static int LogFatalError(string message, int exitCode)
{
    Console.Error.WriteLine("{0} failed: {1}", GetExeName(), message);
    return exitCode;
}

private static string GetExeName()
{
    return Process.GetCurrentProcess().ProcessName;
}
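
Assuming you build the above into a small console exe – say EncryptConfig.exe, a name I am making up here – you copy it to the target machine and run something like EncryptConfig.exe C:\Services\MyService.exe. Note that you point it at the executable whose .config you want protected, not at the config file itself, since OpenExeConfiguration expects the path to the exe.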


More about configs while developing

In my previous post I discussed what you can and should automate in terms of your development environment, and I speculated in one place about what one ought to do with the web.config file in Visual Studio. I have made a script that uses my local chef repo to generate the configuration files using chef-zero, and that works. It is a bit messy – but it works. This was where I was when I wrote the last blog post containing my speculation.

Getting further – to an actually workable solution – I have run into what seems like a brick wall. The complexity of using chef-zero calls for some refactoring, but the biggest problem is maintaining the templates. NuGet keeps editing binding redirects in web.config, and it would be nice if those changes could be applied to the erb files as well. Since this is how ASP.NET is designed to work, the source web.config combined with per-environment transforms is highly favoured because it has the lowest maintenance burden, and there are a number of plugins that support this workflow even for EXE projects and other project types.

My problem with those is that they violate a basic principle of keeping configuration away from the source repository. Also, I’m happy with chef and I don’t want to replace its templating engine with web.config transforms, so trying to adhere to the concept of running the same setup in development as in production is difficult unless I can find a way to dynamically update the erb files.



Sprint 0 for old school .NET devs

When you start work on a code base, either from scratch or as you approach it as a new dev, there are a few things you should ensure are in place before you get going. Like most other things, these are easier and more natural on more mature platforms than they tend to be on .NET, where the toy app tends to be king, but it is doable. Here I will list some technologies I use, and I may mention some competing ones for completeness. I can’t share most of the code because reasons, but if there is interest, I can put together samples showing specific techniques from the ones I have used.


The checklist of what you need to put in place if it doesn’t exist is the following:

  • Version control
  • Build server
  • Automated tests run as part of the build process
  • Automated packaging
  • Automated deployment
  • Automated monitoring
  • Automated setup of development environment

I know most of that reads as “a house must have a roof” but there were, and still are,  places where people go “We’re only a couple of guys, we can just leave the source on the NAS, it’s backed up”, so I’m just being clear here.

The last point may seem like overkill but, especially if you depend on the abomination that is third-party components, it is crucial for saving time and for onboarding new people more easily.

Version control

I’m going to shock you here and not require that you use Git. You might want to, because it’s becoming a skill everybody has, and GitHub is a beautiful place to keep your code, but if your workflow is centralised anyway, you can legitimately stay with Subversion as long as you take care of the basics, such as backing up the repository properly. Recent versions have less horrible diffing, so it is not that bad anymore. The most crucial aspect is that you use a version control system with a good, powerful command-line interface.

Build server

I’m going to suggest TeamCity here, because I know it well, but it has some real drawbacks. GitHub integrates with Travis CI, which is a modern CI system, and Jenkins has long been popular. The point is – make sure your build configuration is the same on the development machine as it is on the build server, make sure the build configuration is version controlled with the source, and ideally make sure you can run all of it from the command line. This way you know you are not about to break the build before you push your changes, and you know that you can recreate a previous version of the code without faff, because a contemporary build configuration can be found in source control alongside the source.

Automated tests run as part of the build process

The obvious bit here is to run your favourite command-line test runner on your build output to make sure all your tests run on the build server.

You can always write PowerShell scripts to make simple but high-value integration tests; it doesn’t all have to be Selenium / FitNesse, even though those are quite nice once you are over the initial hurdle – but yes, there is a cost.
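
To give a flavour of what such a simple but high-value test can look like, here is a sketch – not code from any of my projects: the endpoint URL is made up and I am assuming NUnit 3 and HttpClient – that your command-line test runner can execute against a freshly deployed test instance:

using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class SmokeTests
{
    // Hypothetical health endpoint on the test deployment – adjust to taste.
    private const string HealthUrl = "http://localhost:8080/health";

    [Test]
    public async Task Health_endpoint_responds_and_looks_sane()
    {
        using (var client = new HttpClient())
        {
            var response = await client.GetAsync(HealthUrl);
            Assert.That(response.IsSuccessStatusCode, "health endpoint returned a failure status code");

            var body = await response.Content.ReadAsStringAsync();
            Assert.That(body, Does.Contain("OK"), "health endpoint did not report OK");
        }
    }
}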

Automated deployment

Many ways to deploy exist: Octopus Deploy is popular together with TeamCity, but you should look at chef and puppet as well. They have tended to be hostile to Windows, but it is now possible to use them there, and they make sense. It is as if they are specifically designed for the purpose of deploying and maintaining software infrastructure.

Years ago I looked very briefly at puppet, but I have come to work with chef recently and it does what it is supposed to do. I have no factual reason to rate chef higher than puppet other than that I now know it.

For chef you can look at Test Kitchen to test your deployment scripts in transient VMs. It can use Azure VMs, VMware Workstation or Vagrant / VirtualBox, and it shortens the feedback cycle considerably.

Automated monitoring

You can use Pingdom, Monitis or Nagios, or just run a few curl/wget calls in a scheduled task and send an email if they don’t return the expected information. Either way, you need to be able to know when things aren’t working. Use the smallest possible thing you can get away with if your budget is constrained, but do use something.
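
As a minimal sketch of the scheduled-task approach – the URL, the marker text and the SMTP details below are placeholders I have made up – a tiny console app like this, run every few minutes, is better than nothing:

using System;
using System.Net.Http;
using System.Net.Mail;

class SiteCheck
{
    static int Main()
    {
        const string url = "https://example.com/health";   // hypothetical URL to watch
        const string expected = "OK";                       // hypothetical marker text

        try
        {
            using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) })
            {
                var body = client.GetStringAsync(url).Result;
                if (body.Contains(expected))
                    return 0; // all is well, stay quiet
            }
        }
        catch (Exception)
        {
            // fall through to the alert below
        }

        // Hypothetical SMTP relay and addresses – replace with whatever you have.
        using (var smtp = new SmtpClient("smtp.example.com"))
        {
            smtp.Send("monitor@example.com", "ops@example.com",
                      "Site check failed", "Did not get the expected content from " + url);
        }
        return 1;
    }
}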

Automated setup of the development environment

This may be sensitive, as developers tend to be particular about where their code lives and how their machine is organised, but having the developers able to just run a script and end up with all their development tools set up and ready to debug is a huge win.

Some things are going to be difficult. Installing Redgate SQL Source Control doesn’t seem to be scriptable, but other than that you can:

  • Install Visual Studio
  • Install plugins
  • Use chocolatey to install source control management
  • Get the sources locally

In some cases, as your system grows and you break bits out into separate microservices, you will need this scripting to make sure you can set up the entire system to be debugged locally. Ideally you would, as an additional means of verifying your deployment method, use chef-zero or the corresponding technology from Puppet to apply your normal deployment templates when configuring the system on the development machines as well. This is another situation where clever IDE tooling actively makes things difficult for you, but any work you put in to automate here will pay huge dividends.

But what does this all mean in practice?

In short, if you are going to keep working on Windows exclusively, at least learn PowerShell. It is not that horrible. The stance among Windows users has long been anti-scripting, and given the state of CMD.exe, for good reason. PowerShell, though, has a lot of features that make sense, even though the syntax can be confusing at first. Learning Ruby may be more viable from a cross-platform perspective, but PowerShell is evidently better suited for Windows, and a lot of useful cmdlets for enabling Windows features et cetera are only available in PowerShell.

Red mist

Sometimes you get so upset about something you need to blog about it, and ranting on Twitter, Facebook and Channel 9 comments just did not quite seem enough.

The insult, and the injury

I watched a Channel 9 talk about the “future of C# and VB” by Jay Schmeltzer from DEVintersection where he went over the future of the .NET stack, and started out by looking at the Stack Overflow statistics of programming languages that I have addressed before. He showed the graph of Most Popular Technologies with C# placed prominently fourth behind Java, SQL and JavaScript, then showed the Most Loved languages, where Microsoft have a runaway hit with F# in third place and C# way down in 10th place. Obviously, VB was right on top of the Most Dreaded list, but that you knew already. Jay expressed much pride in having C# all the way up at 10th among the most loved languages and then said “and we have Microsoft’s F# up here on 3rd, but that is of course more of a … well, C# is of course mainstream in a different way”. So in other words, not even a pretence that F# is a first class citizen in .NET.

Contrast with Apple. After almost single-handedly turning Objective-C into something that exists and is popular despite the crushing superiority in funding and mindshare of C++, Apple basically told everybody that (the F# clone) Swift is it from now on. They basically did a VB6 -> C# sort of story, telling everybody that they were welcome to keep using the old stuff but that they really should get on board with the modern technologies.

Contrast that with the above statement from top brass at Microsoft.

So basically, in this blog I am trying to collect ways in which a dark matter C# developer can just start using F# today, thanks to the extremely ambitious efforts of the friendly and hard-working F# community, and to show the extreme extent to which Microsoft isn’t bothering at all – ignoring the goldmine they are sitting on. I’m trying to find ways which make things easier for existing Microsoft-focused developers, so I will be collating the things you can use to go F# today for the crummy ASP.NET LOB apps that make up the bread and butter of the C# world.

The comeback

My goal is to provide ways in which you can go F# today and immediately write less code with fewer bugs that does the things your C# code does today. You will eventually discover cooler things, like package management using Paket (which supports NuGet package feeds and maintaining dependencies directly from GitHub or similar) and the very F# web frameworks out there, and using FAKE as a build system rather than MSBuild, which helps if you have complex builds where you would rather read F# code than mess with XML. You may perhaps find other ways to persist data that are more natural to use in F#, and you will have little problem learning them once you get over the hump, but just to make the barrier to entry extremely low, let’s keep things familiar and non-scary. The thing that would come for free if Microsoft had devoted more than a fraction of their means towards F# in Visual Studio is the templating that makes C# so easy to use when creating websites and web services. To achieve that ease of use we have to rely on the community, and they have, despite the odds, come up with a few competitive options over the years. I have compiled these on my F# for C# people page, which I hope to keep updated.

Passwords – for wannabe techies

EDIT: As you will notice, a lot of my links point to the work of security researcher Troy Hunt, and I should point out that I have no affiliation with him. I just happen to find articles that make sense, and they happen to be written by him or refer to his work. Anyway, here is another one, addressing the spirit of this blog post, with horrifying examples.

If you google for “ASP.NET login form” or similar, you will get some hits with really atrocious examples of how NOT to handle people’s credentials. Even if you are a beginner writing an app that nobody will use, you should never do user login in a shitty way. You will end up embarrassing yourself and, crucially, your users – who most likely are friends and family while you are at the beginner stage – when, inevitably, your database gets stolen.

How do you do it properly then, you ask?

Ideally – don’t. Let other people worry about getting hacked. Plenty of large corporations are willing to take the risk.

Either way though,  get a cert and start running HTTPS only.  HTTP is fast and nice for sites that allow anonymous access, such as blogs, but as soon as you are accepting sensitive data such as passwords you need to go with HTTPS.

If you are in the Microsoft sphere, just create a new web project in Visual Studio, register yourself as a developer with Google, Facebook, Microsoft or similar, create apps there, configure those app credentials in the Visual Studio app – and do make sure you store those app secrets outside of source control – and run that app. After minimal tweaking you should have something working where you can authenticate in your app using those authentication providers. After that you can, with varying amounts of effort, back-port this authentication to whichever website you were trying to add authentication to.

But I insist on risking spreading my users’ PII when I get hacked

OK, fair enough, on your head be it.

Separate your auth storage from your app data storage – at the very least by database connection user rights. When they hack your website using SQL injection, they shouldn’t be able to get at hashed passwords. That just isn’t acceptable. So, yes, don’t do Windows Authentication on your dev box; define SQL Database Users and SQL Server Logins, and make scripts that ensure they are created if they don’t exist. Again, remember to keep secrets away from the stuff you commit to version control, and use alternative means to store secrets for production. Azure will help you with these settings in the admin interface, for instance.

Tighten the storage a bit

Store the hashed password and the salt. Set your storage permissions such that the password hash cannot be retrieved from the database at all. The database login used to access the storage should not have any SELECT rights, only EXECUTE on stored procedures. That way you write one stored procedure that retrieves the salt for a login, so that the application can calculate a hash for the password supplied by the user trying to log in, and then another stored procedure that takes the username and that hash and compares it internally to the one stored in the database, returning to the caller whether or not the attempt was successful – without ever exposing the stored hash.
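
A sketch of what the application side of that could look like follows; the stored procedure names and parameters are made up for illustration, and the hashing helper is the PBKDF2 sketch shown a little further down:

using System.Data;
using System.Data.SqlClient;

static bool TryLogin(string connectionString, string username, string password)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // 1. Fetch the salt for this user (hypothetical stored procedure name).
        byte[] salt;
        using (var getSalt = new SqlCommand("dbo.GetSaltForUser", connection))
        {
            getSalt.CommandType = CommandType.StoredProcedure;
            getSalt.Parameters.AddWithValue("@username", username);
            salt = (byte[])getSalt.ExecuteScalar();
        }

        // 2. Hash the supplied password with that salt (see the PBKDF2 sketch below).
        var hash = PasswordHasher.HashPassword(password, salt);

        // 3. Let the database compare the hash internally and only report success or failure.
        using (var verify = new SqlCommand("dbo.VerifyPasswordHash", connection))
        {
            verify.CommandType = CommandType.StoredProcedure;
            verify.Parameters.AddWithValue("@username", username);
            verify.Parameters.AddWithValue("@hash", hash);
            return (bool)verify.ExecuteScalar();
        }
    }
}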

Note that if you tighten SELECT rights but don’t separate storage between auth and app, every single Entity Framework sample will crash and burn, as EF requires extra configuration to use stored procedures rather than direct CRUD.

Go for a stupidly complex hashing algorithm

Also – you are not going for speed when you are choosing a password hashing algorithm. MD5 and SHA1 are really nice for checksumming files, but they suck for passwords. You are looking for very slow and complex algorithms.

.NET comes with Microsoft.AspNet.Cryptography.KeyDerivation.Pbkdf2, but the best and most popular password hashing algorithm is bcrypt.
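
For the PBKDF2 route, a hedged sketch using the long-standing Rfc2898DeriveBytes class looks something like this – the salt size, hash size and iteration count are illustrative, not recommendations:

using System.Security.Cryptography;

static class PasswordHasher
{
    // Illustrative values only – tune the iteration count to your hardware.
    private const int SaltSize = 16;
    private const int HashSize = 32;
    private const int Iterations = 100000;

    // Creates a new random salt and the corresponding hash for a new password.
    public static void Hash(string password, out byte[] salt, out byte[] hash)
    {
        salt = new byte[SaltSize];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }
        hash = HashPassword(password, salt);
    }

    // Re-hashes a supplied password with a stored salt, e.g. when verifying a login attempt.
    public static byte[] HashPassword(string password, byte[] salt)
    {
        // Rfc2898DeriveBytes is PBKDF2 (HMAC-SHA1 by default in the full framework).
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            return pbkdf2.GetBytes(HashSize);
        }
    }
}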

The hashes generated are of fixed size, so just define your storage to be big enough to take the output of the algorithm, and then there is no need to limit the size of the password beyond the upload limits the web server imposes for DoS protection – and for a password, those limits are ludicrously high, so you don’t even have to mention them to the user. Also, don’t mess with copy/paste in the password box. You want to be password manager friendly.

Try and hack yourself

Use Zed Attack Proxy or similar to try and break into your site. It will tell you some of the things you need to change in terms of protecting against XSS and CSRF. The problem isn’t that your little site is going to be the target of the NSA or the Chinese government, what you need to worry about is the tons of automated scripts that prod and poke into every site everywhere and collect vulnerabilities with zero effort from the point of the attacker. If you can avoid being vulnerable to those most basic attacks, you can at least have some self-respect.

In conclusion

These are just the basics, and I am mostly writing this to discourage people from writing login forms as some kind of beginner exercise. Password reuse is still rampant, so if you make a dodgy login form you are most likely going to collect some userid / password combinations people really use at other sites, and those sites may very well be far more important than yours. Not treating that information seriously is extremely unprofessional and bad karma. Please do not build user authentication yourself if you don’t intend to make some kind of serious effort to protect people’s data. Instead use OAuth solutions and let people authenticate using Google, Facebook, Twitter, Microsoft or whichever auth provider you prefer. That way you will never see any passwords and won’t have anything that can be stolen, which is a much easier life to lead.


Passwords. For non-techies

Passwords are a – for the most part – necessary evil in our connected lives. There has to be a way to know that you are you in a transaction, and a piece of information that only you and your counterpart know about seems perfect for that.

You have indubitably been told many times that you should never reuse passwords and that you should get a password manager. Guess what – that is correct. When you get one, and it proceeds to chastise you about your instances of password reuse, and you use the magnificent automated features to reset passwords, you will notice that many sites not only have minimum requirements for length and complexity – they also have maximum limits. This is where alarm bells should ring in your mind.

Passwords shall only be stored as hashes (what people used to call checksums back in my day) because it is mathematically infeasible to deduce the original from the hash. This is not enough to be safe, of course, as evildoers quickly figured out that they could hash (using all major algorithms) enormous dictionaries and lists of popular passwords, and then had prefab hashes to compare to the ones they found in hacked databases. You need to hash several times over and apply salt to the hash and so on, in processes that are better described elsewhere, but the point is – it starts out with a checksum. A checksum is always of constant length, so it doesn’t matter if you submit the Holy Bible as a password: the hash will be the same size.

This means that the only reason people put a size limitation on the password is that they are storing it in clear text in the database, or otherwise processing it in clear text.

When you complain about it to customer services, they will say one of these things:

  1. We are storing passwords encrypted securely, in accordance with industry standards.
  2. You don’t understand these things – clearly you must be new to computers.
  3. We need to limit password length for security reasons.
  4. We take the security of your data very seriously.

All of that is bollocks. The only reason – other than storing the password in plain text (or storing it encrypted so that it can be easily displayed in the admin interface, which is still VERY BAD, see below for further ranting) – could possibly be a misguided attempt at usability, where the UI might limit the text field length to prevent the user from getting lost, but I would still say it is 99.999% likely that it is bollocks and they are showing your password to everybody. At least that is the safest assumption.

So they know my password –  what is the big deal?

OK, so why does that matter? Well, it probably means that tech support or customer service people can see your password, so you can’t use the one that is basically a bunch of rude words strung together, or it is going to be awkward on the phone.

Also, it means that if they have one mischievous intern or employee who copies some passwords from the system, they can really do harm to any site for which you STUPIDLY used the same password. And if somebody manages to steal one login to the administrative interface, they have your password. In a secure system, that level of data breach should take a little more effort.

I have read somewhere that by far the biggest problem with weak password security is password reuse, and you can end that today, for yourself, by using a password manager of some kind. Preferably a good one, but I’m no security person so I couldn’t tell you which one to go with. There is admittedly a strong eggs-in-one-basket aspect to this, but given the scale of the breaches we keep seeing I don’t see any other way forward.


There are plenty of password jokes out there, like the one where the user enters the password ‘docGrumpyHappySleepyBashfulSneezyDopeyAlbany’ because the requirement was seven characters and a capital, or the classic XKCD suggestion of correcthorsebatterystaple as an example of a secure password that is also easy to remember.

Since the early 2000s, Windows servers have been pestering users to use passwords with capitals, numbers and special characters and a minimum length of eight, and also to change passwords continuously.

A few things have happened since: graphics hardware and research. Graphics hardware has made it possible for somebody ambitious to buy (or steal, yes) a bunch of graphics cards and use their phenomenal parallel processors to brute-force compare password hashes (yes, even correctly stored ones) against hashes of dictionary words, popular passwords and common v@r1ati0ns on them. The rate at which these comparisons can be done is only slowed down slightly if you use a very complex hash, and even then the enormous compute power these people can muster makes short passwords trivial to reveal. Yes, that includes correcthorsebatterystaple. And to refer back to the case where reversible encryption is used to store passwords so that customer service/tech support can read them – yes, those encryptions are extremely easy to break with this kind of firepower. Essentially, there is no difference between an encrypted password and a plain text password in terms of security.

The best passwords are the auto-generated ones from a password manager that also lets you choose the length, which you might as well crank up as far as it’ll go – but of course, you’ll quickly find out that many shady websites won’t let you use a secure password.

Also, regarding changing passwords: the more esoteric the password complexity requirements are and the more often you make people change passwords, the worse their passwords get, because – although I’m sure this surprises security people – normal people have to work too; they can’t spend all their time fiddling with passwords. Of course, cheating by using a format that gets incremented as the system makes you change passwords isn’t exactly more secure – in fact, writing the password on a post-it may even be a better solution, unless of course you broadcast video from your office and people deface your Twitter because they see your password in the background. Of course, again, the answer is a password manager. And perhaps also to give people more time with the same password, but bring in more draconian complexity requirements.

But I thought we were all post-password now… I have an iPhone

Good for you. Remember that it is hard to reset your fingerprints. Consumer-grade fingerprint readers are far from fool-proof, and once somebody has a decent copy of your prints they have your things. We will see how things develop as biometric identification gains more traction, but security people are already concerned about a case where a woman was made to unlock her iPhone because she had TouchID – the coercion did not count as violating the right against self-incrimination the way forcing somebody to divulge a password would have. I’m sure further issues will come up. The general advice seems to be: use biometric data for identification, possibly, but rely on alternative means for authorisation.

Anyway – the point is, get a password manager if you don’t have one, and don’t forget to name and shame those that limit password length or – heaven forbid – email passwords in clear text. Do disable auto-fill, the feature where the password manager automatically enters your username and password on a login page as it loads. There are exploits that trick password managers into handing over usernames and passwords by luring the user to evil websites; these vulnerabilities get patched and new ones get discovered, so just disable auto-fill and you will be OK.

Troy Hunt on passwords and complexity, and how to build a good password reset feature.


Regarding the previous post about running Linux processes on Windows, I have noticed a few common questions and the corresponding answers at various places on the Internet.

The process and fork

The process that runs the Linux binary is a Pico process, which comes with a minimal ABI and has all syscalls filtered through a library OS. For the Linux subsystem they have added real fork() semantics to the Pico process, the absence of which has long been a popular gripe among those in the Linux community trying to port software to Windows. Although the normal case when creating a process is fork() followed by lots of cleanup to create the pristine child process you always wanted, in the 1% of cases where you actually want a verbatim copy of the current process, including open files, memory extents et cetera, doing so in Windows was very complicated and extremely slow. This is now a native feature of the Pico process.


The security incompatibility between the faux Linux universe and the Windows universe – or rather the fact that Linux is unaware of the Windows security settings and unable to affect them in any way – has been raised, and allegedly improvements are on their way.








After my previous post, Microsoft and Canonical reacted almost immediately – ahem – and at the //BUILD/ conference the Windows Subsystem for Linux and the ability to run the Ubuntu userspace under Windows were presented. If you have the latest fast ring insider build of Windows 10, you can just set your Windows installation into Developer mode, search for Windows Features in Cortana, activate Windows Subsystem for Linux (Beta), reboot and then type bash at the Windows command prompt. You will be asked to confirm, and then Ubuntu 14.04 will be downloaded, and off you go.

What is happening? A few things. Instead of the NT subsystem, NTDLL.dll, being loaded, a faux Linux kernel is presented to the process, allowing Linux userland binaries to be executed natively in the context of the current user. The abstraction is leaky at best right now, and as usual Microsoft aren’t even aiming to make this reimagining of the Linux kernel completely exact, so there will always be userland executables that will not work with this subsystem. Their goal is to make the most common dev things work out of the box, to make Windows less frustrating for developers to work on. You would be doing people a favour by trying this out and complaining loudly about everything that doesn’t work for you.

I just now did Win+R, typed bash and then vim. In Sysinternals Process Explorer I see init -> bash -> vim in the process hierarchy. It is completely integrated in Windows; only the filesystem you see from Linux has been organised by Canonical to make sense. Since the context of the WSL is per-user, the Linux filesystem is actually located in the user’s profile folder, while all drives on the system are mounted under /mnt/c, /mnt/d et cetera. Because of this you can access “Linux” files from Windows and Windows files from “Linux” in a way that sort of makes sense. I mean, I’ve seen far worse.

I must say I didn’t think Microsoft would take this route, so kudos. What I still feel the need to complain about is that the security situation is complicated, with the Windows user always being root in his own faux Linux universe. Server deployment could be magical if there were a way to have a global Linux context and have root access hidden behind elevation or sudo, but developing such a joint security model would be enormously complicated. Running X in Windows – that would be the next dream.

Desktop OS for developers

The results of the latest Stack Overflow Developer Survey just came out, showing – among other interesting things – that Windows is dying as a developer OS. Not being one to abandon ship any time soon, I’d still like to offer up some suggestions.


  • Make the command line deterministic
  • Copying files across the network cannot be a lottery
  • Stop rebooting UI frameworks
  • Make F# the flagship language

Back in the day, Microsoft – through VB and Visual C++ – overcame some of the hurdles of developing software for Windows, then effectively the only desktop OS in the enterprise. Developers, and their managers, rallied behind these products, and several million kilometres of code were written over a couple of decades.

The hurdles that were overcome were related to the boilerplate needed to register window classes, create a window and respond to the basic window messages required to show the window and have the program behave as a Windows user would expect. Nowhere in the VB6 samples was anybody discussing how to write tests or how, really, to write good code. In fact, sample code, simplified on purpose to showcase one feature at a time, would not contain any distractions such as test code.

When Classic ASP was created, a lot of this philosophy came across to the web, and Microsoft managed to create something as horrible as PHP, but with fewer features, telling a bunch of people that it’s OK to be a cowboy.

When the .NET Framework was created as a response to Java, a lot of VB6 and ASP programmers came across, and I think Microsoft started to see what they had created. Things like Patterns & Practices came out, and the certification programmes started taking software design and testing into consideration. Sadly, however, they tended to give poor advice that was only marginally better than what was out there in the wild.

Missed the boat on civilised software development

It was a shock to the system when the ALT.NET movement came out and started to bring in things that were completely mainstream in the Java community but almost esoteric in .NET: continuous integration, unit testing, TDD, DDD. Microsoft tried to keep up by creating TFS, which apart from source code versioning had ALM tools to manage bugs and features as well as a built-in build server, but it became clear to more and more developers that Microsoft really didn’t understand the whole thing about testing first or how lean software development needs to happen.

While Apple had used their iron fist to force people to dump Mac OS for the completely different, Unix-based operating system OS X (with large bits of NeXTSTEP brought across, like the API and Interface Builder), Microsoft were considering their enterprise customers and never made a clean break with GDI32. Longhorn was supposed to solve everything, making WPF native and super fast, obsoleting the old BitBlt malarkey and instead ushering in a brighter future.

As you are probably aware, this never happened. .NET code in the kernel was a horrible idea, and the OS division banned .NET from anything ever being shipped with Windows, salvaged whatever they could duct-tape together – and the result of that was Vista. Yes, .NET was banned from Windows and stayed banned up until PowerShell became mainstream a long, long time later. Now, with Universal Windows Apps, a potentially viable combination of C++ code and vector UI has finally been introduced, but since it is the fifth complete UI stack reboot since Longhorn folded, it is probably too little too late, and too many previously enthusiastic Silverlight or WPF people have already fallen by the wayside. Oh, and many of the new APIs are still really hard to write tests around, and it is easy to find yourself in a situation where you need to install Visual Studio and some SDK on a build server, because a dependency relies on the Registry or the GAC rather than on things that come with the source.


As Jeffrey Snover mentions in several talks, Windows wasn’t really designed with automation in mind. OLE Automation, possibly, but scripting? Nooo. Now, with more grown-up ways of developing software, automation becomes more critical. The Windows world has developed alternative ways of deploying software to end-user machines that work quite well, but for things like automated integration tests and build automation you should still be able to rely on scripting to set things up.

This is where Windows really lets the developer community down. Simple operations in Windows aren’t deterministic. For a large majority of things you call on the command line, you are the only one responsible for determining whether the command ran successfully. The program you called may very well have failed despite returning a 0 exit code. The execution might not really have finished despite the process having ended, so some files may still be locked. For a while, you never know. Oh, and mounting network drives is magic and often fails for no reason.
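
To make the point concrete, below is the kind of defensive babysitting you end up writing around a command-line call; the tool name and output file are made up purely for illustration:

using System.Diagnostics;
using System.IO;
using System.Threading;

static bool RunAndVerify()
{
    // Hypothetical tool and output path, purely for illustration.
    var startInfo = new ProcessStartInfo("sometool.exe", "/build out.zip") { UseShellExecute = false };
    using (var process = Process.Start(startInfo))
    {
        process.WaitForExit();

        // The exit code alone cannot be trusted, so also check for the artefact we expect...
        if (process.ExitCode != 0 || !File.Exists("out.zip"))
            return false;
    }

    // ...and retry opening it exclusively, because it may still be locked for a while.
    for (var attempt = 0; attempt < 10; attempt++)
    {
        try
        {
            using (File.Open("out.zip", FileMode.Open, FileAccess.Read, FileShare.None))
                return true;
        }
        catch (IOException)
        {
            Thread.Sleep(1000);
        }
    }
    return false;
}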

End result

Some people leave for Mac because everything just works – if you can live with bad security practices and the occasional long delay before you get things like Java updates. Some people leave for Linux because if you script everything, you don’t really mind all those times you have to reinstall because something like a change in screen resolution or a security update killed the OS to the point where you can’t log in anymore; you just throw away the partition and rerun the scripts. Also, from a developer standpoint, everything just works in terms of available tools and frameworks.

What to do about it

If Microsoft wants to keep making developer tools and frameworks, they need to start listening to the developers who engage whenever Microsoft open-sources things. Those developers most likely have valuable input into how things are used by serious users – beyond the tutorials.

Stop spending resources duplicating things that already exist for Windows or .NET, as that strikes precisely at the enthusiasts that Microsoft needs in order to stop haemorrhaging developers.

What is .NET Core – really? Stop rewriting the same things over and over. At least solve the problems the rewrite was supposed to address before adding fluff. Also – giving people the ability to work cross-platform means people will, so you are sabotaging yourselves while, admittedly, building some goodwill.

Most importantly – treat F# like Apple treats Swift. Something like: we don’t hate C#, there is a lot of legacy there, but F# is new, trendier and better. F# is far better than Swift and has been used in high-spec applications for nine years already. Still, after years of beta testing, Microsoft manages to release a JITer with broken tail call optimisation (a cornerstone of functional runtimes, as it lets you do recursion efficiently). That is simply UNACCEPTABLE, and I would have publicly shamed and then fired so many managers for letting that happen. Microsoft needs to take F# seriously – ensure it gets the best possible performance, tooling and templating. It is a golden opportunity to separate professional developers from the morons you find if you google “ASP.NET login form” or similar.

In other words – there are many simple things Microsoft could do to turn the tide, but I’m not sure they will manage, despite the huge strides taken of late. It is also evident that developers hold a grudge for ages.

Salvation through a bad resource

A colleague and I have been struggling for some time to deploy an IIS website correctly using chef. As you may be aware, chef is used to describe the desired state of the configuration of a system through the use of resources, which know how to bring parts of the system into the desired state – if they, for some reason, should not be – during a process called convergence.

Chef has a daemon (or Service, as it is called in the civilised world) that continually ensures that the system is configured in accordance with the desired state. If the desired state changes, the system is brought into line automagically.

As usual, what works nicely and neatly in Unix-like operating systems requires volumes of eloquent code literature, or pulp fiction rather, to implement on Windows, because things are different here.

IIS websites are configured with a file called web.config. When this file changes, the website restarts (the application pool does, to be specific). Since the chef Windows service runs chef-client at regular intervals, it is imperative that chef-client doesn’t falsely assume that the configuration needs to be overwritten every time it runs, as that would be quite disruptive to any would-be users of the application. Now, the autostart behaviour can be disabled, but that is not the way things should have to be.

A common approach on Windows is to disable the chef service and just run chef-client manually when you know you want to deploy things, but that just isn’t right either, and it takes a lot away from the basic features and the profound magic of chef. Anyway, this means we can’t keep tricking chef into believing that things have changed when they really haven’t, because that is disruptive and bad.

So, like I mentioned earlier – IIS websites are configured with a file called web.config. Since everybody who has ever encountered an IIS website is aware of that, there is no chance that an evildoer won’t know to look for the connection strings to the database in that very file. To mitigate this well-knownness, or at least force the evildoer to first leverage a privilege escalation vulnerability, there is a built-in feature that allows an administrator to encrypt the file so that the lowly peon account that the website executes as doesn’t have the right to read it. For obvious reasons this encryption is tied to the local machine, so you can’t just copy the file to a different machine where you happen to be admin and decrypt it there. This does, however, mean that you have to first template the file to a temporary location and then check whether the output of the chef template – the latest and greatest of website configuration – is actually any different from what was there before.
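
For reference, the encrypt/decrypt step itself can be scripted without aspnet_regiis; the following is a hedged C# sketch using System.Web.Configuration – the choice of the connectionStrings section and the paths are assumptions, and it is not necessarily how our chef recipe invokes it:

using System.Configuration;
using System.Web.Configuration;

static void SetProtection(string physicalSitePath, bool encrypt)
{
    // Map the folder that holds web.config to a virtual root so it can be opened outside IIS.
    var map = new WebConfigurationFileMap();
    map.VirtualDirectories.Add("/", new VirtualDirectoryMapping(physicalSitePath, true));
    var config = WebConfigurationManager.OpenMappedWebConfiguration(map, "/");

    // connectionStrings is the section the evildoer would go for first.
    var section = config.GetSection("connectionStrings");
    if (encrypt && !section.SectionInformation.IsProtected)
        section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    else if (!encrypt && section.SectionInformation.IsProtected)
        section.SectionInformation.UnprotectSection();

    config.Save(ConfigurationSaveMode.Modified);
}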

It took us ages to figure out that what we needed to do was to write our web.config template exactly as it looks once it has been decrypted by Windows, and then start our proceedings by decrypting the production web.config into a temporary location. We then set up the chef template resource to try to overwrite the temporary file with new values, and if there has been a change, use notifications to trigger a ruby_block that normally doesn’t execute, but that – when triggered by the template resource – both encrypts the updated config and copies it across to prod.


But wait… The temporary file has to be deleted. It contains highly sensitive information (I would like to flatter myself) and shouldn’t even have made it to disk in its clear-text form, and now it’s still there waiting to be read by the evildoer.

Using a ruby_block resource or a file resource to delete the temporary file causes chef to record this as a change, and a change that isn’t really a change is bad – or at least misleading in this case.

Enter colleague number 2: “just make a bad resource that doesn’t use converge_by”.

Of course! We write a resource that takes a path and deletes it using pure ruby code, but it “forgets” to tell chef that a file was deleted, so chef will update the configuration when it should but will gladly report 0 resources updated at the end of a run where nothing has changed. Beautiful!

DONE. Week-end. I’m off.