Category Archives: .NET

More F# – Giraffe

So finally the opportunity arose to do some real-world F# at work. Being involved in Enterprise Software Development, a “real world” coding assignment is more akin to Enterprise FizzBuzz than cool Data Science. A colleague had had earlier success using Giraffe, so I favoured that for this task, and this blog post charts my struggles. I needed to create an API to serve a specific type of file. The non-functional requirements were that I had to support an existing deployment pipeline for various environments, which currently relies on configuration using appSettings.json, log to Serilog, and additionally support OIDC to authenticate between services. Finally – of course – I needed to store data in old school SQL Server. Ideally, I would have liked to provide a Swagger page which I could point to when I want front-end peeps to RTFM, but that is not supported, so I’ll have to tell them to use the source.

Getting started

In order to get off the ground I installed VS Code with Ionide using the getting started guide. I already had .NET Core 2.0 installed, so I could just use dotnet new to create an F# Giraffe scaffold.

Configuration

In order to set up the correct Identity Server parameters I needed this web site to support configuration, and I already knew I wanted to use bog-standard ASP.NET Core configuration (including inheritance – yes, I need it; no, it’s not worth the effort to change the circumstances) rather than fancier stuff like YAML, the best config markup available. Yes, I could save the world by building a YAML config provider and updating all our configs, but today is not the day to start the couple of weeks it would take to land a change of that magnitude.

.NET Core 2.0 for n00bs

Oh, yes, I forgot to mention – this is my first outing on .NET Core 2.0, so there are a couple of changes to get used to in the hosting and app configuration pipeline. One of them is that I didn’t know where my configuration was supposed to live. Or rather, it can live where it used to live, in the Startup class, but the Startup class is now optional, as there are calls on the host builder that let you configure app configuration and logging without one. For my purposes that was a no-go – I needed to get hold of my configuration somehow – so I created a Startup class and a singleton to hold the complete configuration, initialised when the Startup class is constructed. With this I could now configure authentication. This isn’t the nicest way to do things, and I’m open to better ideas, but Google had nothing. It was as if I was the first one ever to attempt this, which is partially why I’m writing this down. If I’m wrong, hopefully somebody will give me snark about it so I can update with a corrected version.
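
Roughly, the shape of it was the sketch below. This is a reconstruction rather than the actual code – the Config module and its root field are names I made up for illustration:

open Microsoft.AspNetCore.Hosting
open Microsoft.Extensions.Configuration

// Hypothetical singleton holding the fully built configuration so handlers can reach it later.
module Config =
    let mutable root : IConfigurationRoot = null

// The Startup class exists mainly to build the configuration and stash it in the singleton.
type Startup(env : IHostingEnvironment) =
    do
        Config.root <-
            ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", false, true)
                .AddJsonFile(sprintf "appsettings.%s.json" env.EnvironmentName, true, true)
                .AddEnvironmentVariables()
                .Build()
    // ConfigureServices and Configure members follow as usual (omitted here).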

Authentication

There is a sample for how to use JwtBearerToken authentication, so I just used my new-found configuration skills to add a layer of usefulness to it. Essentially in the jwtBearerOptions function I get the config singleton from above and I use

  configMgr.GetSection("JwtBearerOptions").Bind(cfg);

to bind the configuration file settings to the object, and afterwards I set some defaults that I don’t want to be configurable. To my shock, when I ran Postman and sent a valid token to the claims endpoint from the Giraffe sample, I got me some claims. Incredibly, it worked.
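
For reference, the wiring looks roughly like the sketch below, assuming the Config.root singleton from the sketch in the previous section. The AddAuthentication/AddJwtBearer calls are the standard ASP.NET Core 2.0 ones; the specific defaults are just examples:

open Microsoft.AspNetCore.Authentication.JwtBearer
open Microsoft.Extensions.Configuration
open Microsoft.Extensions.DependencyInjection

let configureAuth (services : IServiceCollection) =
    services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(fun options ->
            // Bind whatever the environment-specific appSettings.json says...
            Config.root.GetSection("JwtBearerOptions").Bind(options)
            // ...then pin down the settings that must never vary between environments.
            options.RequireHttpsMetadata <- true
            options.SaveToken <- false)
    |> ignore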

Database access

I have, in C#, come to enjoy Dapper – a no-nonsense, you-write-the-queries-you-get-the-data sort of lightweight ORM that is simply enjoyable to use. I found an F# wrapper over Dapper that Just Worked. I asked my config singleton for the connection string and we were off and running. Most enjoyable. There were some gotchas involving querying with GUIDs, which I circumvented by typecasting, as in

WHERE CONVERT(varchar(60), FieldA) = @FieldA

I assume you have to cast the other way around in your selects to return uniqueidentifiers as well, but that’s not the worst thing in the world.

Also, passing multiple parameters in a map to the F# wrapper means you have to upcast the values to obj before you call the query method:

 Map [ "FieldA", fieldA :> obj; "FieldB", fieldB :> obj]
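
For the record, a wrapper like that can be very small. The sketch below is my own reconstruction rather than the library I actually used; it leans on Dapper accepting a dictionary-shaped parameter object, which an F# Map happens to be:

open System.Data.SqlClient
open Dapper

// Run a parameterised query and return the materialised results
// (Dapper buffers by default, so disposing the connection afterwards is fine).
let query<'T> (connectionString : string) (sql : string) (parameters : Map<string, obj>) : 'T seq =
    use connection = new SqlConnection(connectionString)
    connection.Query<'T>(sql, parameters)

// Usage, mirroring the snippets above (Document is a made-up record type):
// query<Document> connectionString
//     "SELECT * FROM Documents WHERE CONVERT(varchar(60), FieldA) = @FieldA"
//     (Map [ "FieldA", fieldA :> obj ])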

Serving files

I made a tiny hack to serve binaries:

// Plus the Giraffe opens that bring HttpHandler, HttpFunc and the task builder into scope
// (the module names vary a bit between Giraffe versions).
open System.IO
open Microsoft.AspNetCore.Http
open Microsoft.Extensions.Primitives

let stream (streamInstance : Stream) : HttpHandler =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        task {
            // Hardcoded MIME type for now – see below for a nicer shape.
            ctx.Response.Headers.["Content-Type"] <- StringValues("application/pdf")
            ctx.Response.Headers.["Content-Length"] <- StringValues(streamInstance.Length.ToString())
            // Copy the file straight into the response body and keep the pipeline going.
            do! streamInstance.CopyToAsync(ctx.Response.Body)
            return Some ctx
        }

Of course, a proper implementation would have some kind of record type, or at least a tuple, to provide the MIME type along with the stream rather than hardcoding it.
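
Something along these lines, reusing the opens from the snippet above – the record and its field names are just my own sketch:

type FileContent =
    { Content  : Stream
      MimeType : string }

let streamFile (file : FileContent) : HttpHandler =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        task {
            ctx.Response.Headers.["Content-Type"] <- StringValues(file.MimeType)
            ctx.Response.Headers.["Content-Length"] <- StringValues(file.Content.Length.ToString())
            do! file.Content.CopyToAsync(ctx.Response.Body)
            return Some ctx
        }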

Service Locator

I set up the IoC container in my unsuccessful attempt at getting Swashbuckle to document the API. I registered an operation filter I normally use to make Swagger ask for an authorisation header on API operations that require it, and that was a bit fiddly, but not very weird. I just made a module that gets called from the ConfigureServices method in the startup class.
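
For completeness, the registration module looked something like the sketch below. AuthHeaderOperationFilter is a stand-in name for the filter described above and the API title is made up; the AddSwaggerGen, SwaggerDoc and OperationFilter calls are the standard Swashbuckle.AspNetCore ones:

open Microsoft.Extensions.DependencyInjection
open Swashbuckle.AspNetCore.Swagger
open Swashbuckle.AspNetCore.SwaggerGen

// Stand-in for the operation filter that adds an Authorization header parameter
// to the operations that require it; the actual implementation is omitted.
type AuthHeaderOperationFilter() =
    interface IOperationFilter with
        member __.Apply(operation : Operation, context : OperationFilterContext) = ()

module Services =
    let addSwagger (services : IServiceCollection) =
        services.AddSwaggerGen(fun options ->
            options.SwaggerDoc("v1", Info(Title = "File API", Version = "v1"))
            options.OperationFilter<AuthHeaderOperationFilter>())
        |> ignore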

Conclusion

Yes, loads of classes and mutable state. This really is a mess from an F# perspective – not a lot of tail recursion and immutability. I put the blame on ASP.NET Core, and also on the fact that I suck at F#, although I’m trying. My hope is that once this is done I can revise and do better. No, I can’t show you teh codez, but suffice it to say, it looks a lot like the above sample code.

 

 

 

 


n Habits of a successful developer

I thought I was going to write one of those listicles just to get going as I have posted nothing for half a year. Recent developments have made me take stock and figure out what certain people have taught me while working with them and what I should try and learn from them going forward. I will not cover the basics of TDD, vim vs Emacs, tab vs spaces or any of that. I will assume you write good tests and create largely correct, well structured code that your coworkers can understand clearly.  I will just address things you can do right now – outside of the actual coding – to be the best you can be.

1. Go home at the end of the day

This isn’t a new lesson, but it is important to note again. Do not give the company any more of your time than you agreed when you signed on. You may love the company and the management, but you are not doing yourself any favours by spending too much time in the office. You may think you get more stuff done, but what I have learned is that you can get more done in 8 hours per day than most people do in 9 or 10, but it takes some hard work and focus. In the other points below I’ll try and identify some of the tricks these people use and see if I can perhaps pick up on those I’ve yet to implement myself.

Another reason I brought up this topic is that I have recently seen people fall into the trap of giving their life to the company. If you own equity and have a chance of a real upside, that may be a trade-off worth making, but if you are a lowly employee, even if you have options rather than stock, it is highly unlikely that the arrangement is going to pay back the time lost from seeing your family or even just resting to be fit-for-fight the next day. The company will never love you back – it cannot.

Even with equity, think about it – you wouldn’t cut any slack for an employee who had worked themselves to the bone and after a while started making serious mistakes. Missed a client meeting due to oversleeping after an all-nighter in the office? Started to create more bugs than they fix? Maybe started to act bitterly in the office when interacting with coworkers due to the asymmetric workload this individual had voluntarily taken upon him- or herself? The problems, and even the discord being sown in the workplace, must eventually be addressed despite the employee having put in enormous hours for the company. I’m saying there is a way to overwork yourself out of a job, which probably nobody in the company wants to go through, and as an employee it is bound to be a bitter experience.

So – given the potential productivity and longevity of people who stay within their normal hours, and the fact that they get to go home and chill with the family – this is the course of action I recommend.

2. Prefer early mornings to evenings for extracurricular activities

This is something I could be better at, but starting at 5am if you want to do something extracurricular, such as blogging or trying out a new language or technology, is extremely effective. It is exactly like going out for a run in that a) I haven’t done it very often, but often enough to know that b) the hard bit is just getting started, and c) it is fantastic when you do it and notice how much progress you are making.

3. Never leave a question unasked

When you hammer out details about a piece of code about to be written – never leave a question unaddressed. Sometimes people hang back and assume that all edge cases are handled and that there are no more loose ends.

  1. Do we have everybody here that we need to flesh out this story?
  2. What is the expected outcome of the feature?
  3. What is the expected input?
  4. How will we handle untrusted input?
  5. Security concerns?
  6. Usability?
  7. How do we present errors?
  8. Will the stakeholders present (yes, I was serious about 1) accept a solution that allows us to write less code?
  9. What is the lowest level at which we can automatically test the acceptance criteria?
  10. How do we deploy the feature?
  11. How do we monitor it?
  12. Do we need to produce any additional deliverable (yes, docs, manuals)?

The biggest way you can save time and get to go home on time is by not making mistakes, and one way to not make mistakes is to know that you are building the right thing the first time. I’m not advocating Big Design Up Front, just a Right-Sized Discussion Just In Time.

4. Be the one that takes notes at design meetings

After asking the right questions, volunteer to do the dirty work, such as updating ticketing systems, to make sure none of the information you are just about to use to write code is lost on the way. Documentation is often a waste, but this bit – the details about the acceptance criteria for the feature or bit of code you are about to write – is actually useful for a while, at least until the code is in production later today or tomorrow.

5. Maintain standards in terms of tooling and infrastructure

Keep your house in order. Don’t have your development machine behind on patches, behind on OS versions, on old development tools or in a state where you and/or a coworker cannot be immediately productive. I have struggled with this for a while, as I recently, out of stubbornness, tried to run a Linux desktop in anger. I thought 2016 would be the year of the Linux desktop, and in a lot of ways it was. Debian feels very natural for a Windows user. For chef, Javascript, even some PowerShell, I did fine and was productive. However, since most of my work is in C# on .NET, I had to employ all kinds of other ways of running Visual Studio – on VMs, on a laptop, or wherever – which was annoying to everybody. Thankfully, once I lost patience, and thanks to my efforts in scripting, I had a new Windows environment running directly on the metal set up in hours.

Do not let broken builds stay broken. If tests are flaky – address that, either by replacing the tests with more robust ones or by removing them completely – but do put yourself in a position where any build failure is probably legitimate and is resolved immediately.

Constantly challenge the automation – do you need it? Is it over-engineered? Does it cover as much as it needs? Is it flexible or is it brittle?

6. Be helpful

Be ready to talk to anybody in the company who has questions about what you do. Pretend like you have boundless energy (which, to be fair, you probably do since you now go home on time every day). That includes the parts of the organisation that for historical reasons don’t have much faith in development/engineering (yes, that happens everywhere). Be there, answer stupid questions, insinuating questions and honest questions with a smile or at least a reasonable facsimile. Try and note down any specific complaints and welcome your critics to sit in when you elaborate stories regarding their favourite topic in the future. If you are prepared to do some internal promotion you will be trusted in the rest of the company and liked by your colleagues, who probably avoid those people like the plague.

Don’t arm yourself with headphones and plug away, leaving your more junior colleagues stranded if they have any questions – the total productivity of the team isn’t helped by you being in the zone if at the same time three people are struggling with something that you could have spotted right away. If you really do need to be alone to solve something, book a meeting room or something – get out of the open-plan office/team room.

If a couple of people have questions about anything you are doing, offer to do a brown bag on it, send out invites and see what the traction is. If you create a culture of curiosity and willingness to learn, the company will make money and everybody will appreciate your efforts. I have long known that these things are useful and that people find them interesting, but it is only the brown bags or lunch & learn sessions that actually get scheduled and actually happen that are beneficial; the ones you ponder quietly to yourself but never actually set up are worthless. Take action.

7. Interact with peers outside of your company

Now, this seems to fly in the face of the Go home at the end of the day bit, but this benefits mostly you as a developer and only secondarily your employer. There are meetups and user groups in loads of places and you should find some and go there. This is an item where I really need to improve. Especially if you are involved in a technology that is quickly evolving, as the language Elixir has been over the last couple of years, swapping war stories to the extent your NDAs will allow can be quite useful. Any piece of new discovery that you can share can benefit the local community, and in turn you can have some of your queries addressed. It is a very good way to figure out how much of the advertised technology actually gets used, and in what way, and can thus help you correctly judge which new technologies are worth looking into to solve business problems at work.

Right, so for this listicle n appears to equal seven. Do you have any more traits of successful developers you have noticed that you would recommend, or just stuff that you do that is awesome and that we all are fools if we don’t emulate right now? Feel free to share.

Sprint 0 for old school .NET devs

When you start work on a code base, either from scratch or as you approach it as a new dev, there are a few things you should ensure are in place before you get going. Like most other things, these are easier and more natural on more mature platforms than they tend to be on .NET, where the toy app tends to be king, but it is doable. I will here list some technologies I use and mention some competing ones for completeness. I can’t share most of the code because reasons, but if there is interest, I can put together samples showing specific techniques of the ones I have used.

Introduction

The checklist of what you need to put in place if it doesn’t exist is the following:

  • Version control
  • Build server
  • Automated tests run as part of the build process
  • Automated packaging
  • Automated deployment
  • Automated monitoring
  • Automated setup of development environment

I know most of that reads as “a house must have a roof” but there were, and still are,  places where people go “We’re only a couple of guys, we can just leave the source on the NAS, it’s backed up”, so I’m just being clear here.

The last point may seem like overkill, but especially if you use the abomination that is 3rd party components, it is crucial for saving time and for onboarding new people more easily.

Version control

I’m going to shock you here and not require that you use Git. You might want to, because it’s becoming a skill everybody has, and GitHub is a beautiful place to keep your code, but if your workflow is centralised anyway, you can legitimately stay with Subversion as long as you take care of the basics, as in backing up the repository properly. Recent versions have less horrible diffing, so it is not that bad anymore. The most crucial aspect is that you do need a version control system with a good, powerful command-line interface.

Build server

I’m going to suggest TeamCity here, because I know it well, but it has some real drawbacks. GitHub integrates nicely with Travis CI, which is a modern CI system, and Jenkins has been popular. The point is – make sure your build configuration is the same on the development machine as it is on the build server, make sure the build configuration is version controlled with the source, and ideally make sure you can run all of it from the command line. This way you know you are not about to break the build before you push your changes, and you also know that you can recreate a previous version of the code without faff, because a contemporary build configuration can be found in source control alongside the source.

Automated tests run as part of the build process

The obvious bit here is to run your favourite command-line test runner on your build output to make sure all your tests run on the build server.

You can always write PowerShell scripts for simple but high-value integration tests; it doesn’t all have to be Selenium or FitNesse, even though those are quite nice once you are over the initial hurdle – but yes, there is a cost.

Automated deployment

Many ways to deploy exist. Octopus Deploy is popular to use with TeamCity, but you should look at chef and puppet as well. They tend to be hostile to Windows, but it is now possible to use them, and they make sense – it is as if they were specifically designed for the purpose of deploying and maintaining software infrastructure.

Years ago I looked very briefly at Puppet, but I have come to work with chef recently and it does what it is supposed to do. I have no factual reason to rate chef higher than puppet other than that I now know it.

For chef you can look at Test Kitchen to test your deployment scripts in transient VMs. It can use Azure VMs, VMware Workstation or Vagrant/VirtualBox, and it does shorten the feedback cycle considerably.

Automated monitoring

You can use Pingdom, Monitis or Nagios, or just run a few curl/wget calls in a scheduled task and send an email if they don’t return the expected information. Either way, you need to be able to know when things aren’t working. Use the smallest possible thing you can get away with if your budget is constrained, but do use something.
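
If you would rather stay in .NET land than script curl, even a small F# script run from a scheduled task does the job. A minimal sketch – the URLs and mail settings below are placeholders:

open System.Net.Http
open System.Net.Mail

let endpoints = [ "https://example.com/health"; "https://example.com/api/ping" ]

// Returns an error message for a failing endpoint, None when it responds as expected.
let check (url : string) =
    use client = new HttpClient()
    try
        let response = client.GetAsync(url).Result
        if response.IsSuccessStatusCode then None
        else Some (sprintf "%s returned %O" url response.StatusCode)
    with ex -> Some (sprintf "%s failed: %s" url ex.Message)

let failures = endpoints |> List.choose check

if not (List.isEmpty failures) then
    use smtp = new SmtpClient("smtp.example.com")
    smtp.Send("monitor@example.com", "ops@example.com", "Health check failed", String.concat "\n" failures)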

Automated setup of the development environment

This may be sensitive, as developers tend to be particular about where their code lives and how their machine is organised, but having developers be able to just run a script and end up with all their development tools set up and ready to debug is a big win.

Some things are going to be difficult. Installing Redgate SQL Source Control doesn’t seem to be scriptable, but other than that you can:

  • Install Visual Studio
  • Install plugins
  • Use Chocolatey to install source control management
  • Get the sources locally

In some cases, as your system grows and you break bits out into separate microservices, you will need this scripting to make sure you can set up the entire system to be debugged locally. Ideally you would, as an additional means of verifying your deployment method, use chef-zero or the corresponding technology from Puppet, so that your normal deployment templates also configure the system on the development machines. This is another situation where clever IDE tooling actively makes things difficult for you, but any work you put in to automate here will pay huge dividends.

But what does this all mean in practice?

In short, if you are going to keep working on Windows exclusively, at least learn PowerShell. It is not that horrible. The stance among Windows users has long been anti-scripting, and with the state of CMD.exe it has been for good reason. PowerShell, though, has a lot of features that make sense even though the syntax can be confusing at first. Learning Ruby may be more viable from a cross-platform perspective, but PowerShell is evidently more suited for Windows, and a lot of useful cmdlets for enabling Windows features et cetera are only available in PowerShell.

Red mist

Sometimes you get so upset about something you need to blog about it, and ranting on Twitter, Facebook and Channel 9 comments just did not quite seem enough.

The insult, and the injury

I watched a Channel 9 talk about the “future of C# and VB” by Jay Schmeltzer from DEVintersection, where he went over the future of the .NET stack and started out by looking over the Stack Overflow statistics of programming languages that I have addressed before. He showed the graph of Most Popular Technologies, with C# placed prominently fourth behind Java, SQL and JavaScript. Then Jay showed the Most Loved languages category, where Microsoft have a runaway hit with F# in third place, and C# way down in 10th place. Obviously, VB was right on top of the Most Feared, but that you knew already. Jay expressed much pride in having C# all the way up at 10th among most loved languages and then said “and we have Microsoft’s F# up here on 3rd, but that is of course more of a … well, C# is of course mainstream in a different way”. So in other words, not even a pretence that F# is a first-class citizen in .NET.

Contrast with Apple. Having almost single-handedly built Objective-C into something that exists and is popular despite the crushing superiority in funding and mindshare of C++, Apple basically told everybody that (the F# clone) Swift is it from now on. They basically did a VB6 -> C# sort of story, telling everybody that they were welcome to use the old stuff but that they really should get on board with the modern technologies.

Contrast that with the above statement from top brass at Microsoft.

So basically, in this blog I am trying to collect ways in which a dark matter C# developer can just start using F# today, thanks to the extremely ambitious efforts of the friendly and hard-working F# community, and to show the extreme extent to which Microsoft isn’t bothering at all – ignoring the goldmine they are sitting on. I’m trying to find things that make life easier for existing Microsoft-focused developers, so I will be collating what you can use to go F# today for the crummy ASP.NET LOB apps that make up the bread and butter of the C# world.

The comeback

My goal is to provide ways in which you can go F# today and immediately write less code, with fewer bugs, that does the things your C# code does today. You will eventually discover cooler things, like package management using Paket (which supports NuGet package feeds as well as dependencies pulled directly from GitHub or similar), the very F# web framework Suave.io, and using FAKE as a build system rather than MSBuild, which helps if you have complex builds and would rather read F# code than mess with XML. You may perhaps find other ways to persist data that are more natural to use in F#, and you will have little problem learning them once you get over the hump, but just to make the barrier to entry extremely low, let’s keep things familiar and non-scary. The thing that would have come for free if Microsoft had devoted more than a fraction of their means towards F# in Visual Studio is the templating that makes C# so easy to use when creating websites and web services. To achieve that ease of use we have to rely on the community, and they have, despite the odds, come up with a few competitive options over the years. I have compiled these on my F# for C# people page, which I hope to keep updated.
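
To give a flavour of how low the barrier really is, this is essentially the canonical Suave hello world from its documentation – not code from any of my projects:

open Suave
open Suave.Successful

// Starts a web server on Suave's default binding (127.0.0.1:8080),
// answering every request with the same greeting.
startWebServer defaultConfig (OK "Hello from F#")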

Passwords – for wannabe techies

EDIT: As you will notice a lot of my links point to the works of security researcher Troy Hunt, and I should point out that I have no affiliation with him. I just happen to find articles that make sense and they happen to be written by him or refer to his work. Anyway here is another one, addressing the spirit of this blog post, with horrifying examples.

If you google for “ASP.NET login form” or similar, you will get some hits with really atrocious examples of how to NOT handle peoples’ credentials. Even if you are a beginner writing an app that nobody will use, you should never do user login in a shitty way. You will end up embarrassing yourself and crucially your users, who most likely are friends and family when you are at the beginner stage, when – inevitably – your database gets stolen.

How do you do it properly then, you ask?

Ideally – don’t. Let other people worry about getting hacked. Plenty of large corporations are willing to take the risk.

Either way though,  get a cert and start running HTTPS only.  HTTP is fast and nice for sites that allow anonymous access, such as blogs, but as soon as you are accepting sensitive data such as passwords you need to go with HTTPS.

If you are in the Microsoft sphere, just create a new web project in Visual Studio, register yourself as a developer with Google, Facebook, Microsoft or similar, create apps there, and configure those app credentials in the Visual Studio app – and do make sure you store those app secrets outside of source control – and run that app. After minimal tweaking you should have something working where you can authenticate in your app using those authentication providers. After that you can, with varying effort required, back-port this authentication to whichever website you were trying to add authentication to.

But I insist on risking spreading my users’ PII when I get hacked

OK, fair enough, on your head be it.

Separate your auth storage from your app data storage – at least by database connection user rights. When they hack your website using SQL injection, they shouldn’t be able to get hashed passwords. That just isn’t acceptable. So, yes, don’t do Windows Authentication on your dev box; define SQL Database Users and SQL Server Logins and make scripts that ensure they are created if they don’t exist. Again, remember to keep secrets away from the stuff you commit to version control. Use alternative means to store secrets for production. Azure will help you with these settings in the admin interface, for instance.

Tighten the storage a bit

Store the hashed password and the salt. Set your storage permissions such that the password hash cannot be retrieved from the database at all. The database login used to access the storage should not have any SELECT rights, only EXECUTE on stored procedures. That way you write one stored procedure that retrieves the salt for a login, so that the application can calculate a hash for the password the user supplied when trying to log in, and another stored procedure that takes a username and that hash and compares it internally to the one stored in the database, returning to the caller whether or not the attempt was successful, without ever exposing the stored hash.

Note that if you tighten SELECT rights  but don’t separate storage between auth and app, every single Entity Framework sample will crash and burn as EF requires special administration to use stored procedures rather than direct CRUD.

Go for a stupidly complex hashing algorithm

Also – you are not going for speed when you are choosing a password hashing algorithm. MD5 and SHA-1 are really nice for checksumming files, but they suck for passwords. You are looking for very slow and complex algorithms.

.NET comes with Microsoft.AspNetCore.Cryptography.KeyDerivation.Pbkdf2, but the best and most popular password hashing algorithm is bcrypt.
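
The KeyDerivation API is straightforward to call from F# as well – a minimal sketch, where the iteration count and sizes are illustrative numbers rather than a recommendation:

open System.Security.Cryptography
open Microsoft.AspNetCore.Cryptography.KeyDerivation

// Generate a random 128-bit salt and derive a 256-bit hash with PBKDF2/HMAC-SHA256.
let hashPassword (password : string) =
    let salt = Array.zeroCreate<byte> 16
    use rng = RandomNumberGenerator.Create()
    rng.GetBytes(salt)
    let hash = KeyDerivation.Pbkdf2(password, salt, KeyDerivationPrf.HMACSHA256, 100000, 32)
    salt, hash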

The hashes generated are of fixed size, so just define your storage to be big enough to take the output of the algorithm. There is then no need to limit the size of the password beyond the request size limits the web server imposes for DoS protection – and for a password, those limits are ludicrously high, so you don’t even have to mention them to the user. Also, don’t mess with copy/paste in the password box; you want to be password-manager friendly.

Try and hack yourself

Use Zed Attack Proxy or similar to try and break into your site. It will tell you some of the things you need to change in terms of protecting against XSS and CSRF. The problem isn’t that your little site is going to be the target of the NSA or the Chinese government; what you need to worry about is the tons of automated scripts that prod and poke at every site everywhere and collect vulnerabilities with zero effort on the part of the attacker. If you can avoid being vulnerable to those most basic attacks, you can at least have some self-respect.

In conclusion

These are just the basics, and I am mostly writing this to discourage people from writing login forms as some kind of beginner exercise. Password reuse is still rampant, so if you make a dodgy login form you are most likely going to collect some userid/password combinations people really use at other sites, and those sites may very well be way more important than yours. Not treating that information seriously is extremely unprofessional and bad karma. Please do not build user authentication yourself if you don’t intend to make a serious effort to protect people’s data. Instead use OAuth solutions and let people authenticate using Google, Facebook, Twitter, Microsoft or whichever auth provider you prefer. That way you will never see any passwords and won’t have anything that can be stolen, which is a much easier life to lead.

 

Desktop OS for developers

The results of the latest StackOverflow Developer Survey just came out, showing – among other interesting things – that Windows is dying as a developer OS. Not one to abandon ship any time soon, I’d still like to offer up some suggestions.

TL;DR

  • Make the command line deterministic
  • Copying files across the network cannot be a lottery
  • Stop rebooting UI frameworks
  • Make F# the flagship language

Back in the day, Microsoft, through VB and Visual C++, overcame some of the hurdles of developing software for Windows – then effectively the only desktop OS in the enterprise. Developers, and their managers, rallied behind these products and several million kilometres of code were written over a couple of decades.

The hurdles that were overcome were related to the boilerplate needed to register window classes, create a window and respond to the basic window messages required to show the window and have the program behave as a Windows user might expect. Nowhere in VB6 samples was anybody discussing how to write tests or how, really, to write good code. In fact, sample code, simplified on purpose to only showcase one feature at a time, would not contain any distractions such as test code.

When Classic ASP was created, a lot of this philosophy came across to the web, and Microsoft managed to create something as horrible as PHP, but with fewer features, telling a bunch of people that it’s OK to be a cowboy.

When the .NET Framework was created as a response to Java, a lot of VB6 and ASP programmers came across, and I think Microsoft started to see what they had created. Things like Patterns & Practices came out, and the certification programmes were taking software design and testing into consideration. Sadly, however, they tended to give poor advice that was only marginally better than what was out there in the wild.

Missed the boat on civilised software development

It was a shock to the system when the ALT.NET movement came out and started to bring in things that were completely mainstream in the Java community but almost esoteric in .NET: continuous integration, unit testing, TDD, DDD. Microsoft tried to keep up by creating TFS, which apart from source code versioning had ALM tools to manage bugs and features as well as a built-in build server, but it became clear to more and more developers that Microsoft really didn’t understand the whole thing about testing first or how lean software development needs to happen.

While Apple had used their iron fist to force people to dump Mac OS for the completely different, Unix-based operating system OS X (with large bits of NeXTSTEP brought across, like the API and Interface Builder), Microsoft were considering their enterprise customers and never made a clean break with Gdi32. Longhorn was supposed to solve everything, making WPF native and super fast, obsoleting the old BitBlt malarkey and instead ushering in a brighter future.

As you are probably aware, this never happened. .NET code in the kernel was a horrible idea, and the OS division banned .NET from anything ever being shipped with Windows, salvaged whatever they could duct-tape together – and the result of that was Vista. Yes, .NET was banned from Windows and stayed banned up until PowerShell became mainstream a long, long time later. Now, with Universal Windows Apps, a potentially viable combo of C++ code and vector UI has finally been introduced, but since it is the fifth complete UI stack reboot since Longhorn folded, it is probably too little too late, and too many previously enthusiastic Silverlight or WPF people have already fallen by the wayside. Oh, and many of the new APIs are still really hard to write tests around, and it is easy to find yourself in a situation where you need to install Visual Studio and some SDK on a build server, because the dependency relies on the Registry or the GAC rather than things that come with the source.

Automation

As Jeffrey Snover mentions in several talks, Windows wasn’t really designed with automation in mind. OLE Automation, possibly, but scripting? Nooo. Now, with more grown-up ways of developing software, automation becomes more critical. The Windows world has developed alternate ways of deploying software to end-user machines that work quite well, but for things like automated integration tests and build automation you should still be able to rely on scripting to set things up.

This is where Windows really lets the developer community down. Simple operations in Windows aren’t deterministic. For a large majority of things you call on the command line, you are the only one responsible for determining whether the command ran successfully. The program you called from the command line may very well have failed despite returning a 0 exit code. The execution just might not have finished despite the process having ended, so some files may still be locked. For a while, you never know. Oh, and mounting network drives is magic and often fails for no reason.

End result

Some people leave for Mac because everything just works, if you can live with bad security practices and sometimes a long delay before you get things like Java updates. Some people leave for Linux because if you script everything, you don’t really mind all those times you have to reinstall because something like a change in screen resolution or a security update killed the OS to the point where you can’t log in anymore – you just throw away the partition and rerun the scripts. Also, from a developer standpoint, everything just works, in terms of available tools and frameworks.

What to do about it

If Microsoft wants to keep making developer tools and frameworks, they need to start listening to the developers that engage whenever Microsoft open sources things. They most likely have valuable input into how things are used by your serious users – beyond the tutorials.

Stop spending resources duplicating things that already exist for Windows or .NET, as that strikes precisely at the enthusiasts that Microsoft needs in order to stop haemorrhaging developers.

What is .NET Core – really? Stop rewriting the same things over and over. At least solve the problems the rewrite was supposed to address first before adding fluff. Also – giving people the ability to work cross-platform means people will, so you are sabotaging yourselves while building some good-will, admittedly.

Most importantly – treat F# like Apple treats Swift. Something like: we don’t hate C# – there is a lot of legacy there – but F# is new, trendier and better. F# is far better than Swift and has been used in high-spec applications for nine years already. Still, Microsoft, after years of beta testing, manages to release a JITer with broken tail call optimisation (a cornerstone of functional runtimes, as it lets you do recursion effectively). That is simply UNACCEPTABLE, and I would have publicly shamed and then fired so many managers for letting that happen. Microsoft needs to take F# seriously – ensure it gets the best possible performance, tooling and templating. It is a golden opportunity to separate professional developers from the morons you find if you google “asp.net login form” or similar.

In other words – there are many simple things Microsoft could do to turn the tide, but I’m not sure they will manage, despite the huge strides taken of late. It is also evident that developers hold a grudge for ages.

Salvation through a bad resource

A colleague and I have been struggling for some time with deploying an IIS website correctly using chef. As you may be aware, chef is used to describe the desired state of the configuration of a system through the use of resources, which know how to bring parts of the system into the desired state – if they, for some reason, should not be – during a process called convergence.

Chef has a daemon (or Service, as it is called in the civilised world) that continually ensures that the system is configured in accordance with the desired state. If the desired state changes, the system is brought into line automagically.

As usual, what works nicely and neatly in Unix-like operating systems requires volumes of eloquent code literature, or pulp fiction rather, to implement on Windows, because things are different here.

IIS websites are configured with a file called web.config. When this file changes, the website restarts (the application thread pool does, to be specific). Since the chef Windows service runs chef-client at regular intervals, it is imperative that chef-client doesn’t falsely assume that the configuration needs to be overwritten every time it runs, as that would be quite disruptive to any would-be users of the application. Now, the autostart behaviour can be disabled, but that is not the way things should have to be.

A common approach on Windows is to disable the chef service and to just run the chef client manually when you know you want to deploy things, but that just isn’t right either and it takes a lot away from the basic features and the profound magic of chef. Anyway, this means we can’t keep tricking chef into believing that things have changed when they really haven’t, because that is disruptive and bad.

So like I mentioned earlier – IIS websites are configured with a file called web.config. Since everybody that ever encountered an IIS website is aware of that, there is no chance that an evildoer won’t know to look for the connection strings to the database in that very file. To mitigate this well-knownness, or at least make the evil-doer first leverage a privilege escalation vulnerability,  there is a built-in feature that allows an administrator to encrypt the file so that the lowly peon account that the website is executing as doesn’t have the right to read it.  For obvious reasons this encryption is tied to the local machine, so you can’t just copy the file to a different machine where you happen to be admin to decrypt it. This does however mean that you have to first template the file to a temporary location and then check if the output of the chef template, the latest and greatest of website configuration, is actually any different from what was there before.

It took us ages to figure out that what we need to do is to write our web.config template exactly as it looks once it has been decrypted by Windows, then start our proceedings by decrypting the production web.config into a temporary location. We then set up the chef template resource to try and overwrite the temporary file with new values, and if there has been a change, use notifications to trigger a ruby_block that normally doesn’t execute, but when triggered by the template resource both encrypts the updated config and copies it across to prod.

Result!

But wait… The temporary file has to be deleted. It has highly sensitive information (I would like to flatter myself) and shouldn’t even have made it to disk in its clear-text form, and now it’s still there waiting to be read by the evildoer.

Using a ruby_block resource or a file resource to delete the temporary file causes chef to record this as a change, and a change that isn’t a change is bad. Or at least misleading in this case.

Enter colleague nr 2: “just make a bad resource that doesn’t use converge_by”.

Of course! We write a resource that takes a path and deletes it using pure ruby code, but it “forgets” to tell chef that a file was deleted, so chef will update the configuration when it should but will gladly report 0 resources updated at the end of a run where nothing has changed. Beautiful!

DONE. Week-end. I’m off.

 

Getting back into WPF

These last couple of weeks I have been working with a Windows desktop app based on WPF. I hadn’t been involved with that in quite some time so there was some trepidation before I got stuck in.

I have been pairing consistently throughout and I believe it has been very helpful for both parties, as the union of our skill sets has been quite large and varied regardless of which colleague I was working with at the time. The app has interesting object life-cycles and some issues around object creation when viewed from the standpoint of what we need to do today, although it was well suited to solving the problems it did at the time it was written.

Working closely with colleagues means that we could make fairly informed decisions whilst refactoring, and the discussions have seemed productive. I tend to always feel that the code is significantly improved as we finish tasks, even though we have stayed close to the task at hand and avoided refactoring for its own sake.

Given the rebootcamp experience, I’m always looking to make smaller classes with encapsulated functionality, but I still have room for improvement. As I’m fortunate enough to have very skilled colleagues it is always useful to discuss these things in the pair.  It helps to have another pair of eyes there to figure out ways to proceed – getting things done whilst working to gradually improve the design. I haven’t felt disappointed with a piece of code I’ve helped write for quite a while.

The way we currently work is that we elaborate tasks with product and test people and write acceptance criteria together before we set out to implement the changes. This means we figure out, most of the time, how to use unit tests to prove our acceptance criteria without having to write elaborate integration tests, keeping them fairly simple and trying to wrestle the test triangle the right way up.

All this feels like a lot of overhead for a bit of hacking, but we tend to do things only once now rather than having to go back and change something because QA or PM aren’t happy. There are some UI state changes which are difficult to test comprehensively, so we have had things hit us that were unexpected, and we did have to change the design slightly in some cases to make it more robust and testable, but that still felt under control compared with how hard UI bugs can be to track down.

Whatevs. This is where I am on my journey now. I feel like I’m learning more stuff and that I am developing. At my age that is no small thing.

 

 

 

Rebootcamp

I have been saying a bunch of things, repeating what others say, mostly, but never actually internalised what they really meant. After a week with Fred George and Tom Scott I have seen the light in some way. I have seen proof of the efficacy of pair programming, I have seen the value of fast red-green-refactor cycles and most importantly I have learnt just how much I don’t know.

This was an Object Bootcamp developed by Fred George and Deliberate and basically consisted of problem solving in pairs going over various patterns and OO design in general, pointing out various code smells to look out for and how to refactor your way out of trouble. The course packed in as much as the team could take over the course of the week and is highly recommended. Our finest OO developers in the team still learned new things over the week and the rest of us learned even more.

Where to go from here? I use this blog as a way to write down things I learn so I can reference them later. My fanbase tends to stick to my posts about NHibernate and ASP.NET MVC 3 or something from several years ago, so I need not worry about making things fresh and interesting for the readership. The general recommended reading list that came out of the week includes the Gang of Four’s Design Patterns, Fowler’s Refactoring and Analysis Patterns, and Kerievsky’s Refactoring to Patterns.

So, GoF and Refactoring – no shockers, eh? We have them in our library and I’ve even read them, even though I first read some other derivative book on design patterns back in the day, but obviously there are things that didn’t quite take the first time. I guess I was too young. Things make so much more sense now when you have a catalogue of past mistakes to cross-reference against various patterns.

The thing is, what I hadn’t internalised properly is how evil getters and setters are. I had some separation of concerns in terms of separating database classes from model classes, but the classes still didn’t instantiate good objects; they were basically just bags of data, and mediator classes held the business logic, messing with other classes’ data instead of proper objects churning cleanly.

Encapsulating information in the system is crucial. It is hard to do correctly, but by timeboxing the time from red to green you force yourself to build the next simplest clean thing before you continue. There is no time for gold plating, and boy, do you veer off and try something clever, only to realise that you needed to stop and go back. Small changes. I have written this so many times before, but if you do it properly it really works. I have seen JB Rainsberger and Greg Young talk about this, and I have nodded and said sure. Testify! “That would be nice to get to do in practice” was my thinking. And then I added getters and setters to my classes. Or at least made them anaemic by having a constructor with parameters and then getters, used by demigod classes. The time to make a change is yesterday, not tomorrow.

So, yes. Analysis Patterns is a hard read, said Fred. Well, then. It seems extremely interesting. I think Refactoring to Patterns will be the very next thing I read, but then I will need to take a stab at it.

I need to learn where patterns could get rid of code smells, increase encapsulation and reduce complexity.

There is a handy catalogue of refactorings that I already have a shortcut to in the Chrome toolbar. It gets a lot more clicks these days, but I will not make any grand statements now; rather, I will come back with a post showing results.

Tip: Clearing QueryString when using RadControls and ASP.NET WebForms

I had to google myself silly and the page I ended up finding was so hard to find I couldn’t actually find it again to link to it properly. The below code is not my idea, so if you feel wronged, submit your link and I shall attribute properly.

The problem is: I want to be able to navigate, or find my way back if you will, to a certain MultiPage and TabStrip that is nested inside a Telerik RadGrid. As it is a view and not a state change, I would like the querystring to handle this. However, to be nice, or obnoxious, depending on how you see it, Telerik will resubmit the same querystring when you navigate the tabs yourself, which will make it so that after changing tabs via querystring once, you cannot navigate to a different page by clicking anymore.

Anyway: people on the Internet, especially on Telerik forums, tell you to do Request.QueryString.Clear(), but that just doesn’t work, because it is a read-only collection – although it compiles, you will get a runtime error. So: what to do?

The post that I found simply used violence and Reflection to force the collection into being read/write, and then, when Telerik reads from Request.QueryString to needlessly resubmit the exact previous query, it actually, accidentally, posts the correct, cleaned querystring. This enables my expected scenario where you can change tabs in a TabStrip/MultiPage using GET.

        private void ClearQueryStringParam(string paramName)
        {