Desktop OS for developers

The results of the latest StackOverflow Developer Survey just came out, showing – among other interesting things – that Windows is dying as a developer OS. Not one to abandon ship any time soon, I’d still like to offer up some suggestions.


  • Make the command line deterministic.
  • Copying files across the network cannot be a lottery.
  • Stop rebooting UI frameworks.
  • Make F# the flagship language.

Back in the day, Microsoft, through VB and Visual C++, overcame some of the hurdles of developing software for Windows – then effectively the only desktop OS in the enterprise. Developers, and their managers, rallied behind these products and several million kilometres of code were written over a couple of decades.

The hurdles that were overcome were related to the boilerplate needed to register window classes, create a window and respond to the basic window messages required to show the window in Windows and have the program behave as expected vis-à-vis the expectations a Windows user might have. Nowhere in the VB6 samples was anybody discussing how to write tests or how, really, to write good code. In fact, sample code, simplified on purpose to showcase only one feature at a time, would not contain any distractions such as test code.

When Classic ASP was created, a lot of this philosophy came across to the web, and Microsoft managed to create something as horrible as PHP, but with fewer features, telling a bunch of people that it’s OK to be a cowboy.

When the .NET Framework was created as a response to Java, a lot of VB6 and Classic ASP programmers came across, and I think Microsoft started to see what they had created. Things like Patterns & Practices came out, and the certification programmes began taking software design and testing into consideration. Sadly, however, they tended to give poor advice that was only marginally better than what was out there in the wild.

Missed the boat on civilised software development

It was a shock to the system when the ALT.NET movement came out and started to bring in things that were completely mainstream in the Java community but almost esoteric in .NET: continuous integration, unit testing, TDD, DDD. Microsoft tried to keep up by creating TFS, which apart from source code versioning had ALM tools to manage bugs and features as well as a built-in build server, but it became clear to more and more developers that Microsoft really didn’t understand the whole thing about testing first or how lean software development needs to happen.

While Apple had used their iron fist to force people to dump Mac OS for the completely different, Unix-based operating system OS X (with large bits of NeXTSTEP brought across, like the API and Interface Builder), Microsoft were considering their enterprise customers and never made a clean break with Gdi32. Longhorn was supposed to solve everything: making WPF native and super fast, obsoleting the old BitBlt malarkey and ushering in a brighter future.

As you are probably aware, this never happened. .NET code in the kernel was a horrible idea, and the OS division banned .NET from anything shipped with Windows, salvaged whatever they could duct-tape together – and the result of that was Vista. Yes, .NET was banned from Windows and stayed banned until Powershell became mainstream a long, long time later. Now, with Universal Windows Apps, a potentially viable combo of C++ code and vector UI has finally been introduced, but since it is the fifth complete UI stack reboot since Longhorn folded, it is probably too little too late, and too many previously enthusiastic Silverlight or WPF people have already fallen by the wayside. Oh, and many of the new APIs are still really hard to write tests around, and it is easy to find yourself in a situation where you need to install Visual Studio and some SDK on a build server, because a dependency relies on the Registry or the GAC rather than on things that come with the source.


As Jeffrey Snover mentions in several talks, Windows wasn’t really designed with automation in mind. OLE Automation, possibly, but scripting? Nooo. Now, with more grown-up ways of developing software, automation becomes more critical. The Windows world has developed alternative ways of deploying software to end-user machines that work quite well, but for things like automated integration tests and build automation you should still be able to rely on scripting to set things up.

This is where Windows really lets the developer community down. Simple operations in Windows aren’t deterministic. For a large majority of things you call on the command line, you are the only one responsible for determining whether the command ran successfully. The program you called may very well have failed despite returning a 0 exit code. The execution might not have finished despite the process having ended, so some files may still be locked. For a while, you just don’t know. Oh, and mounting network drives is magic and often fails for no reason.
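To cope, you end up writing the verification yourself. Here is a minimal sketch of that defensive pattern, in Ruby for brevity – the helper name and shape are my own invention, not any real tool:

```ruby
# The defensive pattern Windows pushes you into, sketched in Ruby:
# run the command, then verify its observable effect yourself,
# because the exit code alone cannot be trusted.
def run_and_verify(*cmd, verify:)
  system(*cmd)                       # run the command; status lands in $?
  ok = $?.success? && verify.call    # demand both a 0 exit AND the real effect
  raise "#{cmd.join(' ')} failed (or lied about succeeding)" unless ok
  true
end
```

On a platform where exit codes mean something, the `verify` lambda is redundant; on Windows it is the only part you can rely on.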

End result

Some people leave for Mac because everything just works, if you can live with bad security practices and sometimes a long delay before you get things like Java updates. Some people leave for Linux because if you script everything, you don’t really mind all those times you have to reinstall because something like a change in screen resolution or a security update killed the OS to the point where you can’t log in anymore – you just throw away the partition and rerun the scripts. Also, from a developer standpoint, everything just works, in terms of available tools and frameworks.

What to do about it

If Microsoft wants to keep making developer tools and frameworks, they need to start listening to the developers who engage whenever Microsoft open-sources things. Those developers most likely have valuable input into how things are used by serious users – beyond the tutorials.

Stop spending resources duplicating things that already exist for Windows or .NET, as that strikes precisely at the enthusiasts Microsoft needs in order to stop haemorrhaging developers.

What is .NET Core – really? Stop rewriting the same things over and over. At least solve the problems the rewrite was supposed to address before adding fluff. Also – giving people the ability to work cross-platform means people will, so Microsoft is sabotaging itself while, admittedly, building some goodwill.

Most importantly – treat F# like Apple treats Swift. Something like: we don’t hate C# – there is a lot of legacy there – but F# is newer, trendier and better. F# is far better than Swift and has been used in high-spec applications for nine years already. Still, after years of beta testing, Microsoft manages to release a JITer with broken tail call optimisation (a cornerstone of functional runtimes, as it lets you do recursion effectively). That is simply UNACCEPTABLE, and I would have publicly shamed and then fired a lot of managers for letting that happen. Microsoft needs to take F# seriously – ensure it gets the best possible performance, tooling and templating. It is a golden opportunity to separate professional developers from the morons you find if you google “ login form” or similar.

In other words – there are many simple things Microsoft could do to turn the tide, but I’m not sure they will manage, despite the huge strides taken of late. It is also evident that developers hold a grudge for ages.

Salvation through a bad resource

A colleague and I have been struggling for some time to deploy an IIS website correctly using chef. As you may be aware, chef is used to describe the desired state of a system’s configuration through the use of resources, which know how to bring parts of the system into the desired state – if, for some reason, they should not be – during a process called convergence.

Chef has a daemon (or Service, as it is called in the civilised world) that continually ensures that the system is configured in accordance with the desired state. If the desired state changes, the system is brought into line automagically.
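The convergence idea can be sketched in a few lines of plain Ruby – a hypothetical illustration of the principle, not actual chef internals:

```ruby
# Minimal sketch of convergence (illustration only, not real chef
# internals): a resource only acts -- and only reports a change --
# when the actual state differs from the desired state.
def converge_file(path, desired_content)
  actual = File.exist?(path) ? File.read(path) : nil
  return false if actual == desired_content  # already converged: no-op

  File.write(path, desired_content)          # bring the system into line
  true                                       # report that a change was made
end
```

Run it twice with the same content and the second run is a no-op that reports nothing – exactly the property the web.config handling in this post is fighting to preserve.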

As usual, what works nicely and neatly in Unix-like operating systems requires volumes of eloquent code literature, or pulp fiction rather, to implement on Windows, because things are different here.

IIS websites are configured with a file called web.config. When this file changes, the website restarts (the application pool is recycled, to be specific). Since the chef windows service runs chef-client at regular intervals, it is imperative that chef-client doesn’t falsely assume the configuration needs to be overwritten every time it runs, as that would be quite disruptive to any would-be users of the application. Now, the restart-on-change behaviour can be disabled, but that is not the way things should have to be.

A common approach on Windows is to disable the chef service and to just run the chef client manually when you know you want to deploy things, but that just isn’t right either and it takes a lot away from the basic features and the profound magic of chef. Anyway, this means we can’t keep tricking chef into believing that things have changed when they really haven’t, because that is disruptive and bad.

So like I mentioned earlier – IIS websites are configured with a file called web.config. Since everybody that ever encountered an IIS website is aware of that, there is no chance that an evildoer won’t know to look for the connection strings to the database in that very file. To mitigate this well-knownness, or at least make the evil-doer first leverage a privilege escalation vulnerability,  there is a built-in feature that allows an administrator to encrypt the file so that the lowly peon account that the website is executing as doesn’t have the right to read it.  For obvious reasons this encryption is tied to the local machine, so you can’t just copy the file to a different machine where you happen to be admin to decrypt it. This does however mean that you have to first template the file to a temporary location and then check if the output of the chef template, the latest and greatest of website configuration, is actually any different from what was there before.

It took us ages to figure out that what we need to do is to write our web.config template exactly as it looks once it has been decrypted by Windows, then start our proceedings by decrypting the production web.config into a temporary location. We then set up the chef template resource to try and overwrite the temporary file with new values, and if there has been a change, use notifications to trigger a ruby_block that normally doesn’t execute, but when triggered by the template resource both encrypts the updated config and copies it across to prod.
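Sketched in plain Ruby rather than actual chef recipe code, with the machine-specific encryption replaced by stand-in functions (the names and helpers here are mine, for illustration only), the flow looks roughly like this:

```ruby
# Rough sketch of the flow. The encrypt/decrypt pair stands in for
# the real, machine-tied tooling; everything else mirrors the logic
# of the recipe: decrypt, compare, and only touch prod on a change.
def decrypt(ciphertext)
  ciphertext.sub('ENC:', '')  # stand-in only
end

def encrypt(plaintext)
  "ENC:#{plaintext}"          # stand-in only
end

def deploy_web_config(prod_path, rendered_template)
  tmp = "#{prod_path}.tmp"
  # 1. decrypt the production web.config into a temporary location
  File.write(tmp, decrypt(File.read(prod_path)))
  # 2. compare the freshly rendered template with what is deployed
  changed = File.read(tmp) != rendered_template
  if changed
    # 3. only on a real change: encrypt and copy across to prod
    File.write(prod_path, encrypt(rendered_template))
  end
  changed
ensure
  # 4. never leave the clear-text temporary file behind
  File.delete(tmp) if tmp && File.exist?(tmp)
end
```

When nothing has changed, nothing is written and nothing restarts; when something has, the encrypted file lands in prod in one step.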


But wait… The temporary file has to be deleted. It has highly sensitive information (I would like to flatter myself) and shouldn’t even have made it to disk in its clear-text form, and now it’s still there waiting to be read by the evildoer.

Using a ruby_block resource or a file resource to delete the temporary file causes chef to record this as a change, and a change that isn’t really a change is bad. Or at least misleading, in this case.

Enter colleague no. 2: “just make a bad resource that doesn’t use converge_by”.

Of course! We write a resource that takes a path and deletes it using pure ruby code, but it “forgets” to tell chef that a file was deleted, so chef will update the configuration when it should but will gladly report 0 resources updated at the end of a run where nothing has changed. Beautiful!
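In plain Ruby the idea reduces to something like this – a sketch only; in a real chef custom resource the point is to perform the deletion in the action without wrapping it in converge_by, so chef never counts it as an updated resource:

```ruby
# Sketch of the "bad resource" idea: do the work, but never report
# it. In real chef, the equivalent is deleting the file inside the
# action WITHOUT converge_by, so the run still says 0 resources
# updated when nothing meaningful has changed.
def silently_delete(path)
  File.delete(path) if File.exist?(path)
  nil  # deliberately nothing to report
end
```

The deletion happens every run, but since it is housekeeping rather than configuration, hiding it from the change report is the honest thing to do.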

DONE. Week-end. I’m off.


How to hire devs for a small project

Do you have a smallish bit of programming-type work you need done? Are you thinking of either hiring somebody or getting contractors in? There are several pitfalls you could run into, and not all of them are the fault of the people you hire. I am here to set your expectations.

Know what type of people you hire

By the way, I will use “hire” in this post to mean either employ or contract – you choose.

Historically, programming has been done by programmers, or software development engineers. These would throw code together and tell you it was finished – sometimes within the deadline and sometimes not. You would then have QA people write test specifications, test the software and produce a list of bugs that the programmers would have to fix. If you had a small budget you would forego the QA and try to test it yourself. This is not the way to do things.

If you insist on hiring an old-fashioned programmer, make sure you also hire an old-fashioned QA person, because otherwise the software you are making will never work. This project of yours is still probably doomed to go over budget.

A better way would be to hire a pair of modern, senior agile developers who have learned the skills involved in QA work and can design correctness into the system by clearly defining acceptance criteria and handling edge cases up front. You can advertise for an agile tester or something if no developer you find admits to being great at testing, but ideally you are looking for a pair of developers who are close to the complete package.

Yes, by the way, hire at least two developers. As The Mythical Man-Month says, you can’t double project performance by doubling the number of resources, but a pair of developers is so much more effective than a lone developer that it is definitely worth the upfront cost.

Spend time

You will need to be on hand to look at the software as it is being constructed and make decisions on what you want the product to do. Your developers may give you rough estimates (S/M/L or similar) of how much effort certain things take and suggest solutions that achieve your objectives in an effective way, but you need to be available or development will halt. This is not a fire-and-forget thing, unless you can fully authorise somebody to be product owner for you and place them next to the developers full time.

The thing you are doing – owning the product – is crucial to the success of your venture. I will post a link below to reading material recommended by Stephen Carratt, a product owner I work with on a daily basis, which helps you define the work in small, clearly defined chunks that your developers call user stories.

Work incrementally

Start out with small things and add features iteratively. Finish core functionality first and always add only the most important thing next. Don’t get me wrong – do have a roadmap of what you want the piece of software to do, so you know where you eventually want to end up, but avoid spending time on specifying details right away. Cheaper or better options will become apparent as you go along. You may be able to release working pre-releases to a subset of the intended audience to validate your assumptions and adapt accordingly.

Read up on working in a modern way

To help you create user stories that organise the work, Steve recommends User Story Mapping (Patton).

Read about Lean Software Development (M. Poppendieck) and Agile Testing (Crispin, Gregory) to know why the old way of making overly precise specifications (of either tests or code) ahead of time is wasteful, and how working closely with the development team is more effective in making sure the right thing is built.

Also, to understand why your developers keep banging on about delivery automation, look at Continuous Delivery (Humble, Farley), although there are fictionalised books that illustrate all these benefits, such as The Phoenix Project (Kim, Behr, Spafford).

But hey – when am I done? And what about maintenance?

Exactly! Why did you develop this product? To sell it? Is it selling? To help sell something else? Is it working? Hopefully you are making money, or at least building value. If you can make the product better in a cost-effective way – do so; if you can’t – stop. Purely budgetary concerns rule here, but also consider the startup costs of dismissing your first developers and then, shortly afterwards, getting two new ones in to evolve the product, who first need to spend time learning the codebase from scratch before they can work at full speed.

Tech stuff for non-technical people

This is not a deeply technical post. Even less technical than my other ones. I write this inspired by a conversation on Twitter where I got asked about a few fairly advanced pieces of technical software by a clever but not-that-deeply technical person. It turned out he needed access to a website to do his human messaging thing and what he was given was some technical mumbo jumbo about git and gerrit. Questions arose. Of course. I tried to answer some of the questions on Twitter but the 140 char limit was only one of the impediments I faced in trying to provide good advice. So, I’m writing this so I can link to it in the future if I need to answer similar questions.

I myself am an ageing programmer, so I know a bunch of stuff. Some that is true, some that was true and some that ought to be true. I am trying to not lie but there may be inaccuracies because I am lazy and because reality changes while blog posts remain the same because, again – lazy.

What is Git?

Short answer: A distributed version control system.

Longer answer: When you write code you fairly quickly get fed up with keeping track of files with code in them. The whole cool_code.latest24.txt naming scheme for keeping old stuff around while you make changes quickly goes out the window, and you realise there should be a better way. There is. With a version control system, another piece of software keeps track of all the changes you have made to the bunch of files that make up your website/app/program/set of programs, and lets you compare versions, find the change that introduced a nasty bug, etc. With Git you have all that information on your local computer and can work very quickly, and if you need to store your work somewhere else – on a server, or just with a friend who has a copy of the same bunch of files (a repository) – only the changes you have made get sent across to that remote repository.

There are competing systems to git that do almost the same thing (Mercurial is also distributed), and then a bunch of systems that instead are centralised and only let you see one version at a time, where all the comparison/search/etc. happens across the network. Subversion (SVN) and CVS are common ones. You also have TFS, Perforce and a few others.

What is a build server?

Short answer: a piece of software that watches over a repository and tries to build the source code and run automated tests when source code is modified.

Longer answer:

The most popular way for a developer to dismiss a user looking for help is to say “that works on my machine”. Legitimately, it is not always evident that you have forgotten to add a code file to your version control system. This causes problems for other people trying to use your changes, as the software is incomplete and probably not working as a result of the missing files. Having an automated way of checking that the repository is always in a state where you can get the latest version of the code in development, build it and expect it to work is very helpful. It may seem obvious to you, but it was not a given until fairly recently. Any decent group of developers will make sure the build server at least has a way of deploying the product, but ideally deployment should happen automatically. This does require that the developers are diligent with their automated tests and that the deployment automation is well designed, so that rare problems can be quickly rolled back.

What is Gerrit?

Short answer: I’ll let Wikipedia answer this one, as I have managed to avoid using it myself despite a few close calls. It seems to be a code review and issue tracking tool that can be used to manage and approve changes to software, and it integrates with the build server.

Long answer:

Before accepting changes to source code, maintainers of large systems will look at submitted changes first to see whether they are correct and in keeping with the programming style the rest of the system uses. Despite code being written “for the computer”, maybe the most important aspect of source code quality and style is that other contributors understand the code and can find their way around. To keep track of feature suggestions, bug reports and the submitted bits of code that pertain to them, software systems exist that can help out here too.

I don’t know, but I would think you could get most of that functionality with a paid GitHub subscription. Yes, gerrit is free, but unless you are technical and want to pet a server, you will have to hire somebody to set up and maintain the (virtual) machine where you are running gerrit.

Where do I put my website?

Short answer: I don’t know, the possibilities are endless today. Don’t pay a lot of money.

Longer answer: Amazon decided, a long time ago, to sell a lot of books. They figured that their IT infrastructure had to be easy to extend, as they were counting on having to buy a ton of servers every month, so their IT stuff had to just work and automatically use the new computers without a bunch of installing, configuration and the like. They also didn’t like that the most heavy-duty servers cost an arm and a leg, and wanted to combine a stack of cheaper computers and pool those resources to get the same horsepower at a lower cost. They built a bunch of very clever programs that provided a way to create virtual servers on demand, which in actual fact ran on these cheap “commodity hardware” machines, and they created a way to buy disk space as if it were electricity – you paid per stored megabyte rather than having to buy an entire hard drive, etc. This turned out to be very popular and they started selling their surplus capacity to end users, and all of a sudden you could rent computing capacity rather than making huge capital investments up front. The cloud era was here.

Sadly, there were already a bunch of companies out there that owned a bunch of servers – the big-iron expensive ones, even – and sold that computing capacity to end users: your neighbourhood ISP (Internet Service Provider) or web hotel.

Predictably, the ISPs aren’t doing too well. They niche themselves by providing “control panels” that make server administration slightly less complicated, but actual web hosting is near free with cloud providers, so for almost all your website needs, look no further than Amazon or Microsoft Azure, to name just two. Many of these providers can hook into your source control system and fetch a fresh copy of your website as soon as your build server says the tests have run and everything is OK. Kids today take that stuff for granted, but I still think it’s magical.

Don’t forget this important caveat: your friendly neighbourhood ISP may occasionally answer the phone. If you want to talk to a human, that is a lot easier than trying to get an MS or Amazon person on the phone who can actually help you.

Getting back into WPF

These last couple of weeks I have been working with a Windows desktop app based on WPF. I hadn’t been involved with that in quite some time so there was some trepidation before I got stuck in.

I have been pairing consistently throughout, and I believe it has been very helpful for both parties, as the union of our skill sets has been quite large and varied regardless of which colleague I was working with at the time. The app has interesting object life-cycles and some issues around object creation when viewed from the standpoint of what we need to do today, although it was well suited to the problems it solved at the time it was written.

Working closely with colleagues means we could make fairly informed decisions whilst refactoring, and the discussions have felt productive. I tend to feel that the code is significantly improved as we finish tasks, even though we have stayed close to the task at hand and avoided refactoring for its own sake.

Given the rebootcamp experience, I’m always looking to make smaller classes with encapsulated functionality, but I still have room for improvement. As I’m fortunate enough to have very skilled colleagues, it is always useful to discuss these things in the pair. It helps to have another pair of eyes there to figure out ways to proceed – getting things done whilst working to gradually improve the design. I haven’t felt disappointed with a piece of code I’ve helped write for quite a while.

The way we currently work is that we elaborate tasks with product and test people and write acceptance criteria together before we set out to implement the changes. This means we figure out, most of the time, how to use unit tests to prove our acceptance criteria, without having to write elaborate integration tests, keeping them fairly simple and trying to wrestle the test triangle the right way up.

All this feels like a lot of overhead for a bit of hacking, but we tend to do things only once now rather than having to go back and change something because QA or PM aren’t happy. There are some UI state changes that are difficult to test comprehensively, so we have been hit by unexpected things, and we did have to change the design slightly in some cases to make it more robust and testable, but that still felt under control compared with how hard UI bugs can be to track down.

Whatevs. This is where I am on my journey now. I feel like I’m learning more stuff and that I am developing. At my age that is no small thing.




Simple vs “Simple”

F# has two key features that make the code very compact: significant whitespace and forced compilation order.

Significant whitespace removes the need for curly braces or keywords such as begin/end. Forced compile order means your dependencies have to be declared in code files above yours in the project file. This gives your F# projects a natural reading order that transcends individual style.

Now there is a UserVoice suggestion that the enforced compile order be removed.

I think this is a good idea. I am against project files. As soon as you have three or more developers working in the same group of files, any one file used to maintain state is bound to become a source of merge conflicts and strife. Just look at C#.

I am sure your IDE could evaluate the dependency order of your files and present them in that order for you; heck, one could probably make a CLI tool to show that same information if the navigational benefits of the current order are what is holding you back. Let us break out of IDE-centric languages and allow programs to be defined in code rather than config.
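Such a tool is a plain topological sort. A hypothetical sketch in Ruby, using the standard library’s tsort (the file names are invented for illustration):

```ruby
require 'tsort'

# Hypothetical sketch of such a CLI tool: given which file depends
# on which, a topological sort recovers a valid compile (and
# reading) order, so no project file has to maintain it by hand.
class DependencyGraph
  include TSort

  def initialize(deps)
    @deps = deps  # { "File.fs" => ["Dependency.fs", ...] }
  end

  def tsort_each_node(&block)
    @deps.each_key(&block)
  end

  def tsort_each_child(node, &block)
    @deps.fetch(node, []).each(&block)
  end
end

deps = {
  'Program.fs' => ['Domain.fs', 'Infrastructure.fs'],
  'Infrastructure.fs' => ['Domain.fs'],
  'Domain.fs' => []
}
puts DependencyGraph.new(deps).tsort  # dependencies first, Program.fs last
```

tsort also detects cycles for free, which is exactly the error a compiler would otherwise report at build time.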


I have been saying a bunch of things, repeating what others say, mostly, but I had never actually internalised what they really meant. After a week with Fred George and Tom Scott I have seen the light in some way. I have seen proof of the efficacy of pair programming, I have seen the value of fast red-green-refactor cycles, and most importantly I have learnt just how much I don’t know.

This was an Object Bootcamp developed by Fred George and Deliberate, and it basically consisted of problem solving in pairs, going over various patterns and OO design in general, pointing out code smells to look out for and ways to refactor your way out of trouble. The course packed in as much as the team could take over the week and is highly recommended. Our finest OO developers in the team still learned new things, and the rest of us learned even more.

Where to go from here? I use this blog as a way to write down things I learn so I can reference them later. My fanbase tends to stick to my posts about NHibernate and ASP.NET MVC 3 or something from several years ago, so I need not worry about keeping things fresh and interesting for the readership. The general recommended reading list that came out of this week reads as follows:

So, GoF and Refactoring – no shockers, eh? We have them in our library and I’ve even read them, even though I first read some derivative book on design patterns back in the day – but obviously there are things that didn’t quite take the first time. I guess I was too young. Things make so much more sense now that you have a catalogue of past mistakes to cross-reference against various patterns.

The thing is, what I hadn’t internalised properly is how evil getters and setters are. I had some separation of concerns in terms of separating database classes from model classes, but still the classes weren’t good objects – they were basically just bags of data, and mediator classes held the business logic, messing with other classes’ data instead of proper objects churning cleanly.

Encapsulating information in the system is crucial. It is hard to do correctly, but by timeboxing the time from red to green you force yourself to build the next simplest clean thing before you continue. There is no time for gold plating, and boy, do you veer off and try something clever only to realise that you needed to stop and go back. Small changes. I have written this so many times before, but if you do it properly it really works. I have seen JB Rainsberger and Greg Young talk about this, and I have nodded and said sure. Testify! “That would be nice to get to do in practice” was my thinking. And then I added getters and setters to my classes. Or at least made them anemic by having a constructor with parameters and then getters, used by demigod classes. The time to make a change is yesterday, not tomorrow.

So, yes. Analysis Patterns is a hard read, said Fred. Well, then. It seems extremely interesting. I think Refactoring to Patterns will be the very next thing I read, but then I will need to take a stab at it.

I need to learn where patterns could get rid of code smells, increase encapsulation and reduce complexity.

There is a handy catalogue of refactorings that I already have a shortcut to in the Chrome toolbar. It gets a lot more clicks now. In general I will not make any grand statements here, but rather come back with a post showing results.

Powershell woes

What is the point of powershell?
Trying to orchestrate and automate distributed tests with powershell leads to things such as using remote powershell or psexec to start tests on VMs and using PSDrives to set up tests and collect logs.
So you would like some calls to be reliable. Starting WinRM on a remote box, setting up a PSDrive to get fresh executables onto VMs, or collecting logs needs to be reliable.
With a 10% success rate out of the box you wonder how people ever use this stuff, and the truth is of course that people don’t: they write reliability layers around it, with retries and exception handlers, just for the happy-path, nothing-important-has-failed scenario.
Had MS spent the extra five minutes making the built-in code less unreliable – perhaps only returning a failure once every means of recovering the situation had been exhausted – it would have spared every single powershell user from writing their own reliability layer just to get the basics working.
Case in point: New-PSSession. There is no way in the known universe that call should be allowed to fail. OK, if there are HTTPS certificate problems or network connectivity problems, say so – eloquently, as part of the return object – but other than that I deserve an apology. Problem is, 90% of the time New-PSSession will fail for no reason. No reason whatsoever.
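The reliability layer everyone ends up writing has the same generic shape everywhere. Sketched here in Ruby for brevity (the real thing would be a powershell function wrapping New-PSSession; the helper name is my own):

```ruby
# The generic shape of the reliability layer people end up writing
# around flaky calls like New-PSSession: retry with a delay, and
# only surface a failure once every attempt has been exhausted.
def with_retries(attempts: 5, delay: 0)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    if tries < attempts
      sleep delay
      retry
    end
    raise  # every means of recovery exhausted; now it may fail
  end
end
```

The point is that this wrapper should not have to exist: the callee is the one with the information needed to recover, not the caller.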
I mean, I know the test triangle and all that, but for the tip of the triangle I don’t need any help from MS in making system tests EVEN MORE EXPENSIVE to build and maintain.


Having grown up in a society evolved beyond the confines of 7-bit ASCII and lived through the nightmare of codepages as well as cursed the illiterate that have so little to say they can manage with 26 letters in their alphabet, I was pleased when I read Joel Spolsky’s tutorial on Unicode. It was a relief – finally somebody understood.

Years passed and I thought I knew now how to do things right. And then I had to do Windows C in anger and was lost in the jungle of wchar_t and TCHAR and didn’t know where to turn.

Finally I found this resource here:

And the strategies it outlines for dealing with UTF-8 on Windows are clear:

  • Define UNICODE and _UNICODE.
  • Don’t use wchar_t or TCHAR or any of the associated macros. Always assume std::string and char * are UTF-8. Call the wide Windows APIs and use Boost.Nowide or similar to widen the characters going in and narrow them coming out.
  • Never produce any text that isn’t UTF-8.

Do note, however, that the observations that Windows would not support true UTF-16 are incorrect, as this was fixed before Windows 7.


I am extending some tests we are running in powershell. They are used to validate the chef configs we are setting up, so that we know all the moving parts are running.

One thing I tried to google that wasn’t perfectly clear to a n00b like me was how to use static objects in powershell – specifically, how to take a byte array and interpret it as a UTF-8 string.

Apparently this is how you do it:

$enc = [System.Text.Encoding]::UTF8   # access the static UTF8 property with [TypeName]::Member
$c = $enc.GetString($result.Content)  # decode the byte array as a UTF-8 string

Where $result came out of a WebRequest. This means I can test that $c contains an expected value and fail the script – and thus the build – if the result is unexpected.