Category Archives: .NET

Spite is the mother of invention

Premise

This is a tale about a blog on WordPress.com that had a loyal readership and regular, high-quality content (so yeah, not writing about this blog). The owner wanted to use the odd plugin and an advanced theme, and I was always bothered that WordPress was living off of (well, I exaggerate wildly now) this person’s words by putting ads everywhere, in addition to charging a massive annual fee.

Liberation

So with the lure of freedom on them yonder hills, we moved the blog off of WordPress.com onto a Windows VM on Azure (yeah, well… yeah…). The domain was hosted on DNSimple, so the logistics of pointing it at Azure instead of WordPress and setting up verification TXT records and such were a doddle.

A hosted MySQL instance on Azure was easy enough, but the WordPress.com theme we had been using was not available on WordPress.org, so we had to pick another one. Sadly we went with Customizr, which really means vendor lock-in: you do a bunch of customisations (hence the name) that are all out the window once you change themes.

Of course, there is no option but to run HTTPS today, and trying to pinch pennies we weren’t going to buy an EV cert from one of the remaining dodgy CAs out there; instead we went with Let’s Encrypt, using a tutorial posted by Scott Hanselman.

Selling out – but is anybody buying?

To make the big bucks we hooked the site up to Google Analytics and ditto AdSense, and there were plugins to really automate that stuff. Yoast SEO beat out MonsterInsights on features for the analytics and integrates with both Search Console and Analytics. The killer feature for Yoast SEO is the customisable canonical URL, which is useful if you reprint blog posts from another site and want to beg Google for mercy for the crime of duplicate content.

The actual ads, how do they work? Well, by cunningly just clicking like an insane person (which really is the best way to learn), I managed to understand the concept of Auto Ads. This again is abstracted away by a plugin, in our case Advanced Ads. As the site owner didn’t want ads on all pages, we had to hack it by creating a plain text and code ad with the Auto Ads code from Google pasted in there, and then letting Advanced Ads decide which pages actually serve the ad code. The downside is a persistent nag that you ‘shouldn’t display visible ads in headers’, but I guess that’s fine. They are just script tags, so there is nothing visible there.

Also, all the cool kids enter the Amazon Affiliate program, so we did that. They do have a minimum number of referrals you have to make, as they don’t want to deal with tiny unprofitable sites, so I suspect we shall be unceremoniously booted out fairly soon. Still, the concept of having widgets where you choose your favourite books related to the subject of your blog and maybe, in the long term, share some revenue if people take you up on your recommendations seems fair. Shame that the widgets themselves are so immensely, horribly broken and difficult to use. Allegedly they are supposed to update when you make changes on the affiliate program site, but they really don’t. I don’t get paid by Amazon so I shan’t debug their system, but it can really only be that the command that goes back to save settings isn’t picked up, or that they are unable to bust the cache and keep serving old widgets. I strongly suspect it is the actual save that is broken, since the widget loses the data already in the wizard before you even reach the last page.

AMPed up

After a few hours I noticed that all the permalinks from the old site were broken on the new one, so I checked the Permalinks tab and it turned out there was a custom setting that I just set back to default, which made things work, and there was much rejoicing. No audit log here, so I can’t check, but if I made that change it must have been unintentional. My favourite hypothesis is that the otherwise impressive WordPress XML-based import somehow failed to bring over the settings correctly.

As I rarely venture out into the front end I had not quite grasped what AMP is. I realised I was getting another load of 404s – this time for URLs ending in /amp. I did a bit of googling and realised I should probably get yet another plugin to handle this. Like with most WordPress plugins there are varying degrees of ambition, and usually they want you to spend $200 in extras to get what you need. Although I moved the site off WordPress.com to deny them ad revenue, I was under no illusion that I would be able to produce any such revenue for the owner, as whatever $3 would be produced would definitely be eclipsed by the hosting cost.

By going with the default WordPress AMP plugin you can’t do ads, but it works – ish. By using the major competitor you get a functional site, but a completely different look compared to the non-AMP site, and we didn’t want that after all the effort we had already put in.

After reading some more, I realised that everybody was going off AMP anyway, for varying reasons, but that was all the peer pressure I needed, so I broke out the Azure debug console and edited web.config to put in a URL redirect from AMP URL to a normal one.

This was incredibly frustrating, as at first I forgot that .NET regexes are different from normal regexes, and you also have to not be stupid and use the correct match in the redirect expression ({R:0} is the whole source data, while {R:1} is the first capture group, which is what I needed).

<staticContent>
  <remove fileExtension=".woff2" />
  <mimeMap fileExtension=".woff2" mimeType="font/woff2" />
</staticContent>
<rewrite>
  <rules>
    <rule name="Disable AMP" stopProcessing="true">
      <match url="^(.*)amp\/?$" />
      <action type="Redirect"
        url="https://<awesomesite>.com/{R:1}"
        redirectType="Found" />
    </rule>
    <rule name="Redirect to naked" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}"
          pattern="www.<awesomesite>.com" />
      </conditions>
      <action type="Redirect"
        url="https://<awesomesite>.com/{R:0}" />
    </rule>
    <rule
      name="WordPress: https://<awesomesite>.com"
      patternSyntax="Wildcard">
      <match url="*" />
      <conditions>
        <add input="{REQUEST_FILENAME}"
          matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}"
          matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Rewrite" url="index.php" />
    </rule>
  </rules>
</rewrite>

So there are a couple of things here – first a MIME type correction to make IIS serve web fonts, then a redirect for AMP URLs, then a redirect from http://www.awesomesite.com to awesomesite.com for prettiness, and also to canonicalise it to avoid duplicate records in the offices of Google, which they do not like. WordPress itself will force HTTPS if necessary, so all we need to do in this config file is curb the use of www.

Summary

The actions we took to move the blog were the following:

  1. Set up the new site
    1. Create site
    2. Create blob storage
    3. Create redis cache (I did this later, but you might as well)
    4. Set up a database
  2. Export existing data from old blog
  3. Import data into new system
  4. Choose a theme
  5. Verify that old Google links work on the new site (I didn’t do this fast enough)
  6. Verify that any way you try to call the site is redirected to a canonical representation. Use a hosts file if you haven’t redirected the DNS yet, which with hindsight is how I should have done it.
  7. Move the DNS to point to the new site
  8. Add Let’s Encrypt support to the site by following the guide. No more certificate errors.
  9. Install plugins for analytics and ads.
  10. Create a Google account
    1. Register with Google Search Console
    2. Register with Bing search console (for those two or three people that don’t know Google)
    3. Register with Google Analytics
    4. Register with Google AdSense

Conclusion

So this was very easy and horribly frustrating at once. DNSimple and provisioning Azure resources were a doddle. Following the internet guide to set up Let’s Encrypt and HTTPS was super straightforward, but then WordPress plugin management, PHP and Amazon widgets were shit shows, to be honest. I mean, I realise Amazon has a complex architecture and their systems are never 100% up or 100% down and so on, but a save button being completely broken doesn’t feel even slightly “up” from the point of view of the end user.

PHP is garbage and brittle and you are hard-pressed to build anything viable on top of it (but obviously some have succeeded). These plugin smiths aren’t Facebook though. They would correctly interject that I am running WordPress on the least suitable platform imaginable. That is true (it has to do with how the Azure VM instances are set up, the fact that they run on Windows and, most importantly, NTFS, which has performance characteristics that are completely unsuitable for Unix-style applications: it favours a small number of large files where EXT4 favours large numbers of small files), but if the Powers that Be really consider Windows and NTFS to be such tremendous deal-breakers, then they should simply not allow Microsoft to host WordPress on Windows at all. As it stands, it does WP no favours, with one-minute turnarounds to save settings for a plugin and similar. Then again, I also live in the UK, which notoriously has an Internet infrastructure dating back to the Victorian era, so it’s hard to tell what the worst culprit actually is. That said, the sidecar web app that hosts the debug console for the blog is a lot snappier than WordPress, and that is hosted in the same IIS installation as the WordPress site, although not in the same app pool.

Structural Equality – or is it?

I was recently presented with a conundrum. We had modelled data valid for the domain in a record type. Sadly this record type contained a reference type, so built-in structural equality broke down, as the reference type never compared equal in the way we thought would make sense.

This gave me the opportunity to learn how you override the implementation of Equals and GetHashCode in F# which I was previously unfamiliar with.

This is the finished implementation of the record type, or one like it, rather:

[<CustomEquality>]
[<CustomComparison>]
type Structure =
    {
        Name: StructureName
        Status: StructureStatus
        Format: Regex
    }

    interface IComparable<Structure> with
        member this.CompareTo { Name = name; Status = status; Format = format } =
            compare (this.Name, this.Status, this.Format.ToString()) (name, status, format.ToString())

    interface IComparable with
        member this.CompareTo obj =
            match obj with
            | null -> 1
            | :? Structure as other -> (this :> IComparable<_>).CompareTo other
            | _ -> invalidArg "obj" "not a Structure"

    override this.Equals (o: obj) =
        match o with
        | :? Structure as os ->
            (this.Name, this.Status, this.Format.ToString()) =
                (os.Name, os.Status, os.Format.ToString())
        | _ -> false

    override this.GetHashCode() =
        (this.Name, this.Status, this.Format.ToString()).GetHashCode()

So yeah, use of pattern matching to determine data types in the non-generic functions, and extensive use of the built-in structural equality of tuples.

Very nice. With thanks to TeaDrivenDev and Isaac Abraham on Twitter (and this StackOverflow response).

Bind? No.. Apply…? No… Map!?

After continuing my foray into the functional with Giraffe and .NET Core, I have struggled with the code not quite reading very well. Various data sources are pluggable, i.e. they can be replaced with mocks if you don’t want to run other supporting systems while debugging. This pluggability, along with the way Giraffe handles some of its configuration, means that while we would like to compose partially applied functions with either mocked or actual services baked in, which are then passed to the relevant handlers, we instead have to pass an HttpContext around everywhere. If you followed that link you probably wondered why we did not just use the strategy outlined in that blog series. I can only say that we did not understand fish operators at this point.

So we turn to the googles, and as usual the first or second hit lands us on F# for fun and profit. Scott Wlaschin introduces us to the Reader Monad. We watch his talks, we read his blog posts. A week of enthusiastic coding goes by.

Essentially – the reader monad allows you to write and construct all the code and only at the end plug in the HttpContext as the whole thing is evaluated and returned to the caller.

You make a Reader<'env, 'data> and you create functions to wrap and unwrap things in and out of the Reader. Then you make functions that mind their own business but return Reader<'env, 'b>, and you allow the Reader to handle some railway-oriented programming for you. Essentially you lay the track all through the code, but on the last line in the handler you put the train on the tracks and set it off, only then finding out whether it ran through the failure cases or went all the way down the success track.
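
To make that slightly more concrete, here is a minimal sketch of what such a Reader might look like – my own simplified version, not the one from the actual code base or from Scott Wlaschin’s posts:

type Reader<'env, 'a> = Reader of ('env -> 'a)

module Reader =
    // run: plug in the environment at the very end ("put the train on the tracks")
    let run env (Reader f) = f env

    // retn: lift a plain value into the Reader
    let retn x = Reader (fun _ -> x)

    // map: apply an ordinary function to the value inside the Reader
    let map f (Reader g) = Reader (fun env -> f (g env))

    // bind: chain a Reader-producing function onto an existing Reader
    let bind f (Reader g) =
        Reader (fun env ->
            let (Reader h) = f (g env)
            h env)

    // ask: get hold of the environment itself
    let ask = Reader id

The handlers compose map and bind calls against Reader.ask, and only the outermost layer ever calls Reader.run with the HttpContext.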

Wrappers like Option<'a>, IEnumerable<'a> or the Reader described above are introduced as Elevated things. The proper word seems to be monadic types, but that does not seem to be a good term, as there is no need for a wrapper type to be a monad, although it can be. After a few specific examples the general case is presented, and it turns out you can do a bunch of stuff with these elevated types using only functions like bind, return, apply and map. There is a brief epiphany as we experience a flicker of understanding of the monad laws.

A monad is just a monoid in the category of endofunctors, what is the problem?

Now I cannot explain the chaining of elevated and non-elevated functions properly. I know what bind does, and I can use it correctly on the first try. Return is obvious. Map and apply though… it’s like being ten years old again and round-robining commands until the compiler is happy. As it stands all I can do is link to the posts we have read and the metaphors that have been presented to me, that I have half understood and then tried to reconcile with other conflicting metaphors.

Inevitably the golden success case of “first I need this, then I pass it to this other thing and then I flip it, kick it and reverse it and then I return it” is soon replaced by “well, first I need this one thing, that I need to just check against this other thing, but then use it again in this third thing which I convert to this fourth thing that I might return if this fifth thing is true” and the nice chaining goes out the window and it makes you sad.  Every let b =  … feels like a let-down (hence the name).

This let = thing is apparently called applicative style, where you compose values, while chaining is called monadic style. At least it has a name.

So after a while when you have been writing things like

task {
    let! a = coolFunc b c
    return! a |> modifyInAnAwesomeWay
}
async {
   let! a = thingThatDoesDatabaseThings b 
   return! match a with 
      | Some data -> coolThing data
      | None -> unCoolError
}

You start wondering what the heck it is you are doing. What are these things? It turns out they are computational expressions. But how do I make them? You essentially create a ThingBuilder class with members for Bind and Return, and then create a let thing = new ThingBuilder(), after which you can write expressions like:

thing {
    let! unwrapped = funcThatGetsPassedToBind arg
    return unwrapped
}
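
For a concrete feel of what such a builder boils down to, here is a minimal sketch using Option as the wrapper type – my own toy example rather than anything from the code base:

type OptionBuilder() =
    member this.Bind(m, f) = Option.bind f m
    member this.Return(x) = Some x
    member this.ReturnFrom(m: 'a option) = m

let maybe = OptionBuilder()

let half x = if x % 2 = 0 then Some (x / 2) else None

let quarter x =
    maybe {
        let! h = half x   // let! goes through Bind
        return! half h    // return! goes through ReturnFrom
    }

// quarter 12 = Some 3, quarter 6 = None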

To be a bit enterprisey I have come up with a draft of a brilliant thing. The F# Functional Maturity Model (FFMM):

  1. Can get samples to compile even with modifications
  2. Can write small programs that do useful things in prod
  3. Can write software systems that fulfill a business purpose
  4. Can see how bad OO in F# looks and strive to create functional style code
  5. Attempt to write a monad tutorial
  6. See the beauty of computational expressions and want to use them everywhere
  7. Stop being afraid and start to love Kleisli composition (the fish operator)
  8. Realise computational expressions are an antipattern and curse their existence
  9. ..

As you can see, I have yet to ascend the ladder far enough to know what all the levels are, but I believe I am currently on 6. Given the hate people have for do notation in Haskell, it seems there must be a level where you realise that computational expressions are of the devil and must be eliminated at all cost, but at the level where I am now they do indeed seem to cut away a lot of plumbing code where I would otherwise map and bind to call various things. Also, I see the fish operator everywhere, so clearly it must be awesome. I now at least understand what it does, but I can’t say it fits very well in the code I want to write. I’m sure that is an epiphany for a later date.

Giraffe F# update

In a previous post I wrote about Giraffe and showed some workarounds for what I perceived as limitations, given my existing enterprise ecosystem.

Now that Giraffe has reached 3.0, loads of things have improved, even from my perspective. Obviously they are now competing in performance benchmarks, so loads of optimisation work has happened behind the scenes. For me, though, the most interesting bit is to see how they dealt with my complaints.

Configuration, logging and service location are now available through extension methods on the context.

Response handlers now handle streams and chunked responses.

Content negotiation looks very clean.
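
To illustrate, here is a rough sketch of what those three things look like in a handler. The Widget type and the handler itself are made up; GetService<'T>(), GetLogger and negotiate are, as far as I understand them, the context extensions and handlers referred to above:

open Giraffe
open Microsoft.AspNetCore.Http
open Microsoft.Extensions.Configuration
open Microsoft.Extensions.Logging

type Widget = { Id: int; Source: string }

let widgetHandler (id: int) : HttpHandler =
    fun (next: HttpFunc) (ctx: HttpContext) ->
        task {
            let logger = ctx.GetLogger("Widgets")             // logging off the context
            let config = ctx.GetService<IConfiguration>()     // service location / configuration
            logger.LogInformation(sprintf "Serving widget %d" id)
            let widget = { Id = id; Source = config.["WidgetSource"] }
            return! negotiate widget next ctx                 // content negotiation
        }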

We have of course welcomed these improvements and use them in all our F# APIs.

The only remaining complaint is that the Swagger support is still not usable enough for our purposes. The fact that we would have to configure an additional, separate NuGet source just to get the package prevents us from actually using it in anger. Showing API actions was simple enough, but I’m unclear as to how to document request and response payloads – and that only becomes worth exploring once the package is served from nuget.org.

Contrarian style guide for C#

Goal

I have decided to try and formulate what I want C# code to look like. Over the years I have accumulated a whole host of opinions that may or may not be complementary, or maybe contradict each other. The purpose of this post is to try and make sense of all of it and maybe crystallise what my style is – so that I can tell all of you how wrong you are and what you crazy kids should be doing.

Is this all my intellectual property? Hells to the no. I have read and heard a bunch of stuff and probably stolen ideas from most people, but the most important people I have listened to or read are Fred George and Greg Young. Sadly it didn’t take.

Object oriented programming

Loads of people hate it and cite typical C# code as the reason why. Yes. That code is horrible, but it isn’t OOP.

Why was Object Oriented programming invented? It was a way to make sense of large software systems. Rather than a very long list of functions that could – and did – call any other function perhaps modifying global state on the way, people were looking for a way to organise code so that changes were predictably difficult to make. This was achieved by introducing encapsulation. Private methods and fields could not be affected by external code and external code could not depend on the internal implementation of code in a class. If implemented correctly “change” means adding a new class and possibly deleting an old one.

Was this the only way people tried to solve this problem? No – functional programming was popularised, relying on pure functions and composition again offering a way to make changes in a predictable way.

I don’t mind you using C# to write in a functional style and by using composition, but if you’re creating classes – take encapsulation seriously. Encapsulation is The Thing with OOP. This means properties are evil. Think about it. Don’t put properties in the code.

But what about inheritance?

Yeah, when Borland C++ came out in 1992 the ads were full of Porsche Targas that inherited from the Carrera S but with the roof overridden to be missing (!), and similar. Oh, the code we would reuse. With inheritance.

Well. Barbara Liskov probably has some things to say about the Targa I suspect.

In short – don’t use inheritance unless the relationship between the classes is “is a” and the relation is highly unlikely to change.

Single responsibility principle

So you have heard about the single responsibility principle. When you see duplication you swoop in, add a new parameter and delete one of the functions. Boom!

Except what if those two functions weren’t doing the exact same thing? Their similarity was only fleeting. With the refactoring you have just made you have introduced a very hard coupling.

Code size

A method shouldn’t have more than three or four lines. Be liberal with the extract method refactoring. To “fit” within this metric, make guard clauses one line each. Put one empty line between the guard clauses and the meat of the method.

Why? What is this? Why are we counting lines now? Well code tends to attract more code as it ages and methods grow. Having a “magic number” that is The Limit helps you put in the time to refactor and thus battle the bloat before it happens.

Classes should not have more than two or three fields – as discussed earlier there will be no properties. If you find yourself struggling to justify adding another private field, do investigate whether there isn’t a new class in there waiting to break free.

Don’t be afraid to leverage the fact that private is a class-level constraint, not an instance-level one. I.e., you can let instances of a class talk to each other, and they will have access to the private fields of their siblings. This is one of the ways in which you can do actual work without properties.

Edges

Well, with this whole Amish attitude to properties – how do you even deserialise JSON payloads or populate DTOs from the database? How do you write files?

By using the mediator pattern or writing adapters. At the edges you will find code getting less OO and more… procedural? I.e. anaemic classes with only data, loads of members et cetera, and verbose code that talks to very Microsofty framework classes.

This is OK, and to some extent natural – but do not let it leak into the domain. Create domain and value objects for the actual processing without leaking any of the ugliness into the domain.

Avoid raw primitives

If you throw around a customer ID in your code – how do you represent it? As an int? How about an order ID? You know the drill – an ID cannot reasonably be added, subtracted, multiplied or divided and it probably can’t be <=0. And a customer ID and order ID should probably never be assignable to the same parameter.

Yes – take the time to do value objects. Resharper will generate equality and GetHashCode for you, so yes it is harder than in F# with more required boilerplate, but it is worth it.

Conditionals

Where the arbitrary constraint on method size really cuts into your Microsoft sample code style is in the area of conditionals. Do you even WinMain bruv? Nested if statements or case statements – what about them huh? Huh?!?

Yes. Get rid of them.

When making small classes an if statement is almost always too big. Lift the variability into its own class – a strategy.

Unit tests

There are two schools of unit testing. The London School and the Chicago School. The Chicago school relies on setup of instances, calling them and asserting the results. The London School relies on setting up a series of mocks, passing them into a class and then asserting that the mocks were called in the right order with the right values.

Of course – me being lazy – I prefer the Chicago school. With small classes you can just call the public methods and assert that the right results come out. If you feel the need to get inside the class to look at and validate internals – your class under test is too big or does too many things.

Creating a class

When you create a class, think about its purpose. “What’s its job?” as Fred George asks. Don’t create a class where the name ends in er or or. Find a concept that describes what the class understands.

It’s ok to spend 15 minutes drawing on paper how you see the next bit of coding to go before you write the first test and get going.

Disclaimer

It depends. Of course it all depends. But I’ve never noticed “omg these classes are too small and have no weird dependencies – I need to refactor”, so I am fairly confident in these recommendations.

Reading list

The following books are useful to have around or to have read.

Some other books that are good for you as a developer:

More F# – Giraffe

So finally the opportunity arose to do some real-world F# at work. Being involved in Enterprise Software Development, a “real world” coding assignment is more akin to Enterprise FizzBuzz than cool Data Science. A colleague had had earlier success using Giraffe, so I favoured that for this task, and this blog post is a charting of my struggles. I needed to create an API to serve a specific type of file, and the non-functional requirements were that I needed to support an existing deployment pipeline for various environments that currently relies on configuration using appSettings.json, as well as logging to Serilog; additionally I needed OIDC support to authenticate between services. Finally – of course – I needed to store data in old-school SQL Server. Ideally, I would have liked to provide a Swagger page which I could point to when I want front-end peeps to RTFM, but that is not supported, so I’ll have to tell them to use the source.

Getting started

In order to get off the ground I installed VS Code with Ionide using the getting started guide. I already had .NET Core 2.0 installed, so I could just use dotnet new to create an F# Giraffe scaffold.

Configuration

In order to set up the correct Identity Server parameters I needed this web site to support configuration, and I already knew I wanted to use bog-standard ASP.NET Core configuration (including inheritance – yes, I need it; no, it’s not worth the effort to change the circumstances) rather than fancier stuff like yml, the best config markup available. Yes, I could save the world by building a config provider for yml and updating all our configs, but today is not the couple of weeks it would take to land a change of that magnitude.

.NET Core 2.0 for n00bs

Oh, yes, I forgot to mention – this is my first outing on .NET Core 2.0, so there are a couple of changes you notice in the hosting and app configuration pipeline. One of them is that I didn’t know where my configuration was supposed to live. Or rather, it can live where it used to live, in the Startup class, but the Startup class is now optional, as there are calls on the host builder that let you configure app configuration and logging without one. For my purposes that was a no-go; I needed to get hold of my configuration somehow, so I created a Startup class and a singleton to hold the complete configuration. The singleton gets initialised when the Startup class is constructed. With this I could now configure authentication. This isn’t the nicest way to do things, and I’m open to better ideas, but google had nothing. It was as if I was the first one ever to attempt this, which is partially why I’m writing this down. If I’m wrong, hopefully somebody will give me snark about it so I can update with a corrected version.
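
For the curious, a hedged sketch of the approach – the names are mine, not from the actual code base:

open Microsoft.AspNetCore.Builder
open Microsoft.Extensions.Configuration
open Microsoft.Extensions.DependencyInjection

module Config =
    // filled in once, when Startup is constructed
    let mutable configuration : IConfiguration = null

type Startup(config: IConfiguration) =
    do Config.configuration <- config

    member __.ConfigureServices(services: IServiceCollection) =
        // authentication setup etc. can now read Config.configuration
        ()

    member __.Configure(app: IApplicationBuilder) =
        // Giraffe wiring (app.UseGiraffe webApp and friends) omitted here
        ()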

Authentication

There is a sample for how to use JwtBearerToken authentication, so I just used my new-found configuration skills to add a layer of usefulness to it. Essentially in the jwtBearerOptions function I get the config singleton from above and I use

  configMgr.GetSection("JwtBearerOptions").Bind(cfg);

to bind the configuration file settings to the object, and afterwards I set some defaults that I don’t want to be configurable. To my shock, I ran Postman, sent a valid token to the claims endpoint from the Giraffe sample, and got me some claims. Incredibly, it worked.
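
Roughly, it ends up looking something like the sketch below, assuming the Config singleton from the previous section; which defaults get hardcoded afterwards is my own illustration, not the actual list from the project:

open Microsoft.AspNetCore.Authentication.JwtBearer
open Microsoft.Extensions.Configuration
open Microsoft.Extensions.DependencyInjection

let configureJwtBearer (options: JwtBearerOptions) =
    // bind everything configurable from appSettings.json
    Config.configuration.GetSection("JwtBearerOptions").Bind(options)
    // defaults that should not be configurable (illustrative choices)
    options.RequireHttpsMetadata <- true
    options.SaveToken <- false

// wired up in ConfigureServices, along the lines of:
// services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
//         .AddJwtBearer(fun o -> configureJwtBearer o) |> ignore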

Database access

I have, in C#, come to enjoy Dapper: a no-nonsense, you-write-the-queries, you-get-the-data sort of lightweight ORM that is simply enjoyable to use. I found an F# wrapper over Dapper that Just Worked. I asked my config singleton for the connection string and we were off and running. Most enjoyable. There were some gotchas involving querying with GUIDs, which I circumvented by typecasting, as in

WHERE CONVERT(varchar(60), FieldA) = @FieldA

I assume you have to cast the other way around in your selects to query uniqueidentifiers as well, but that’s not the worst thing in the world.

Also, passing multiple parameters in a map to the F# wrapper means you have to cast the values to obj before you call the query method.

 Map [ "FieldA", fieldA :> obj; "FieldB", fieldB :> obj]

Serving files

I made a tiny hack to serve binaries:

let stream (streamInstance : Stream) : HttpHandler =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        task {
            ctx.Response.Headers.["Content-Type"] <- StringValues("application/pdf")
            ctx.Response.Headers.["Content-Length"] <- StringValues(streamInstance.Length.ToString())
            do! streamInstance.CopyToAsync(ctx.Response.Body)
            return Some ctx
        }

Of course, a proper implementation will have some kind of record type, or at least a tuple, to provide the MIME type along with the stream rather than hardcode it.
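
Something along these lines, perhaps – the type and the names are my own:

open System.IO
open Giraffe
open Microsoft.AspNetCore.Http
open Microsoft.Extensions.Primitives

type BinaryContent = { Mime: string; Content: Stream }

let streamContent (payload: BinaryContent) : HttpHandler =
    fun (next: HttpFunc) (ctx: HttpContext) ->
        task {
            ctx.Response.Headers.["Content-Type"] <- StringValues(payload.Mime)
            ctx.Response.Headers.["Content-Length"] <- StringValues(string payload.Content.Length)
            do! payload.Content.CopyToAsync(ctx.Response.Body)
            return Some ctx
        }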

Service Locator

I set up the IoC container in my unsuccessful attempt at getting Swashbuckle to document the API. I registered an operation filter I normally use to make Swagger ask for an authorisation header on API operations that require it, and that was a bit fiddly, but not very weird. I just made a module that gets called from the ConfigureServices method in the startup class.

Conclusion

Yes, loads of classes and mutable methods. This really is a mess from an F# perspective. Not a lot of tail recursion and immutability. I put the blame on ASP.NET Core. And also I suck at F#, although I’m trying. My hope is that once this is done I can revise and do better. No, I can’t show you teh codez, but suffice it to say, it looks a lot like the above sample code.

n Habits of a successful developer

I thought I was going to write one of those listicles just to get going, as I have posted nothing for half a year. Recent developments have made me take stock and figure out what certain people have taught me while working with them and what I should try to learn from them going forward. I will not cover the basics of TDD, vim vs Emacs, tabs vs spaces or any of that. I will assume you write good tests and create largely correct, well-structured code that your coworkers can understand clearly. I will just address things you can do right now – outside of the actual coding – to be the best you can be.

1. Go home at the end of the day

This isn’t a new lesson, but it is important to note again. Do not give the company any more of your time than you agreed when you signed on. You may love the company and the management, but you are not doing yourself any favours by spending too much time in the office. You may think you get more stuff done, but what I have learned is that you can get more done in 8 hours per day than most people do in 9 or 10, but it takes some hard work and focus. In the other points below I’ll try and identify some of the tricks these people use and see if I can perhaps pick up on those I’ve yet to implement myself.

Another reason I brought up this topic is that I have recently seen people fall into the trap of giving their life to the company. If you own equity and have a chance of a real upside, that may be a trade-off worth making, but if you are a lowly employee, even if you have options rather than stock, it is highly unlikely that the arrangement is going to pay back the time lost from seeing your family, or even just from resting to be fit for fight the next day. The company will never love you back – it cannot.

Even with equity, think about it – you wouldn’t cut any slack for an employee who had worked themselves to the bone and after a while started making serious mistakes. Missed a client meeting due to oversleeping after an all-nighter in the office? Started to create more bugs than they fix? Maybe started to act bitterly in the office when interacting with coworkers, due to the asymmetric workload this individual had voluntarily taken upon him- or herself? The problems, and even the discord being sown in the workplace, must eventually be addressed despite the employee having put in enormous hours for the company. I’m saying there is a way to overwork yourself out of a job, which probably nobody in the company wants to go through, and as an employee it is bound to be a bitter experience.

So – given the potential productivity and longevity of people that stay within their normal hours, and the fact that they get to go home and chill with the family – this is the course of action I recommend.

2. Prefer early mornings to evenings regarding extracurricular activities

This is something I could be better at, but starting out at 5am if you want to do something extracurricular, such as blogging or trying out a new language or technology, is extremely effective. It is exactly like going out for a run in that a) I haven’t done it very often, but often enough to know that b) the hard bit is just getting started, and c) it is fantastic as you do it and notice how much progress you are making.

3. Never leave a question unasked

When you hammer out details about a piece of code about to be written – never leave a question unaddressed. Sometimes people hang back and assume that all edge cases are handled and that there are no more loose ends.

  1. Do we have everybody here that we need to flesh out this story?
  2. What is the expected outcome of the feature?
  3. What is the expected input?
  4. How will we handle untrusted input?
  5. Security concerns?
  6. Usability?
  7. How do we present errors?
  8. Will the stakeholders present (yes, I was serious about 1) accept a solution that allows us to write less code?
  9. What is the lowest level at which we can automatically test the acceptance criteria?
  10. How do we deploy the feature?
  11. How do we monitor it?
  12. Do we need to produce any additional deliverable (yes, docs, manuals)?

The biggest way you can save time and get to go home on time is by not making mistakes, and one way to not make mistakes is to know that you are building the right thing the first time. Not advocating a Big Design Up Front, just a Right-Sized Discussion Just In Time.

4. Be the one that takes notes at design meetings

After asking the right questions, volunteer to do the dirty work, such as updating ticketing systems, to make sure none of the information you are just about to use to write code is lost on the way. Documentation is often a waste, but this bit – the details about the acceptance criteria for the feature or bit of code you are about to write – is actually useful for a while, at least until the code is in production later today or tomorrow.

5. Maintain standards in terms of tooling and infrastructure

Keep your house in order. Don’t have your development machine behind on patches, behind on OS versions, on old development tools or in a state where you and/or a coworker cannot be immediately productive. I have struggled with this, as for a while I, out of stubbornness, tried to run a Linux desktop in anger. I thought 2016 would be the year of the Linux desktop, and in a lot of ways it was. Debian feels very natural for a Windows user. For chef, Javascript, even some PowerShell, I did fine and was productive. However, since most of my work is in C# on .NET, I had to employ all kinds of other ways of running Visual Studio – on VMs, on a laptop, or wherever – which was annoying to everybody. Thankfully, once I lost patience, thanks to my efforts in scripting I had a new Windows environment running directly on the metal set up in hours.

Do not let broken builds stay broken. If tests are flaky, address that, either by replacing the tests with more robust ones or by removing them completely – but do put yourself in a position where any build failure is probably legitimate and is resolved immediately.

Constantly challenge the automation – do you need it? Is it over-engineered? Does it cover as much as it needs? Is it flexible or is it brittle?

6. Be helpful

Be ready to talk to anybody in the company that has questions about what you do. Pretend like you have boundless energy (which, to be fair, you probably do since you now go home on time every day). Even parts of the organisation that for historical reasons don’t have much faith in development/engineering (yes, that happens everywhere). Be there, answer stupid questions, insinuating questions and honest questions with a smile, or at least a reasonable facsimile. Try to note down any specific complaints and welcome your critics to sit in when you elaborate stories regarding their favourite topic in the future. If you are prepared to do some internal promotion you will be trusted in the rest of the company and liked by your colleagues, who probably avoid those people like the plague.

Don’t arm yourself with headphones and plug away, leaving your more junior colleagues stranded if they have any questions – the total productivity of the team isn’t helped by you being in the zone if, at the same time, three people are struggling with something that you could have spotted right away. If you really do need to be alone to solve something, book a meeting room or something, and get out of the open landscape/team room.

If a couple of people have questions about anything you are doing, offer to do a brown bag on it, send out invites and see what the traction is. If you create a culture of curiosity and willingness to learn, the company will make money and everybody will appreciate your efforts. I have long known that these things are useful and that people find them interesting, but it is only the brown bags or lunch & learn sessions that actually get scheduled and actually happen that are beneficial; the ones you ponder quietly to yourself but never actually set up are worthless. Take action.

7. Interact with peers outside of your company

Now, this seems to fly in the face of the Go home at the end of the day bit, but this benefits mostly you as a developer and only secondly your employer. There are meetups and user groups in loads of places, and you should find some and go there. This is an item where I really need to improve. Especially if you are involved in a technology that is quickly evolving, such as the language Elixir has been over the last couple of years, swapping war stories to the extent your NDAs will allow can be quite useful. Any piece of new discovery that you can share can benefit the local community, and in turn you can have some of your queries addressed. It is a very good way to figure out how much of the advertised technologies actually get used, and in what way, and it can thus help you correctly judge which new technologies are worth looking into to solve business problems at work.

Right, so for this listicle n appears to equal seven. Do you have any more traits of successful developers you have noticed that you would recommend, or just stuff that you do that is awesome and that we all are fools if we don’t emulate right now? Feel free to share.

Sprint 0 for old school .NET devs

When you start work on a code base, either from scratch or as a new dev approaching it, there are a few things you should ensure are in place before you get going. Like most other things, these are easier and more natural on more mature platforms than they tend to be on .NET, where the toy app tends to be king, but it is doable. I will list some technologies I use here and may mention some competing ones for completeness. I can’t share most of the code because reasons, but if there is interest, I can put together samples showing specific techniques of the ones I have used.

Introduction

The checklist of what you need to put in place if it doesn’t exist is the following:

  • Version control
  • Build server
  • Automated tests run as part of the build process
  • Automated packaging
  • Automated deployment
  • Automated monitoring
  • Automated setup of development environment

I know most of that reads as “a house must have a roof”, but there were, and still are, places where people go “We’re only a couple of guys, we can just leave the source on the NAS, it’s backed up”, so I’m just being clear here.

The last point may seem like overkill, but especially if you use the abomination that is 3rd party components, it is crucial to save time and more easily onboard new guys.

Version control

I’m going to shock you here and not require that you use Git. You might want to, because it’s becoming a skill everybody has, and GitHub is a beautiful place to keep your code, but if your workflow is centralised anyway, you can legitimately stay with Subversion as long as you take care of the basics, as in backing up the repository properly. Recent versions have less horrible diffing, so it is not that bad anymore. The most crucial aspect is that you do need a version control system that has a good, powerful command-line interface.

Build server

I’m going to suggest TeamCity here, because I know it well, but it has some real drawbacks. GitHub comes with Travis CI, which is a modern CI system, and Jenkins has been popular. The point is – make sure your build configuration is the same on the development machine as it is on the build server, make sure the build configuration is version controlled with the source, and ideally make sure you can run all of it from the command line. This way you know you are not about to break the build before you push your changes, and you also know that you can recreate a previous version of the code without faff, because a contemporary build configuration can be found in source control alongside the source.

Automated tests run as part of the build process

The obvious bit here is to run your favourite command-line test runner on your build output to make sure all your tests run on the build server.

You can always write PowerShell scripts to make simple but high-value integration tests; it doesn’t all have to be Selenium / FitNesse, even though those are quite nice once you are over the initial hurdle – but yes, there is a cost.

Automated deployment

Many ways to deploy exist. Octopus Deploy is popular to use with TeamCity, but you should look at chef and puppet as well. They tend to be hostile to Windows, but it is now possible to use them, and they make sense. It is as if they are specifically designed for the purpose of deploying and maintaining software infrastructure.

Years ago I looked very briefly at Puppet, but I have come to work with chef recently and it does what it is supposed to do, but I have no factual reason to rate chef higher than puppet other than that I now know it.

For chef you can look at Test Kitchen to test your deployment scripts in transient VMs. It can use Azure VMs, VMware Workstation or Vagrant / VirtualBox, and it does shorten the feedback cycle considerably.

Automated monitoring

You can use Pingdom, Monitis or Nagios, or just run a few curl/wget calls in a scheduled task and send an email if they don’t return the expected information. Either way, you need to be able to know when things aren’t working. Use the smallest possible thing you can get away with if your budget is constrained, but do use something.
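
As a flavour of the scheduled-task end of that spectrum, here is a rough sketch in F# script form; the URL, expected text and SMTP details are placeholders rather than a recommendation of any particular setup:

// monitor.fsx – fetch a page, check for expected content, email on failure
open System.Net.Http
open System.Net.Mail

let url = "https://example.org/"
let expected = "Welcome"

let siteLooksHealthy () =
    use client = new HttpClient()
    let body = client.GetStringAsync(url).Result
    body.Contains(expected)

if not (siteLooksHealthy ()) then
    use smtp = new SmtpClient("smtp.example.org")
    use mail =
        new MailMessage(
            "monitor@example.org", "you@example.org",
            "Site check failed", sprintf "Expected text not found at %s" url)
    smtp.Send(mail)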

Automated setup of the development environment

This may be sensitive, as developers tend to be particular about where their code lives and how their machine is organised, but having developers able to just run a script and end up with all their development tools set up and ready to debug is worth the effort.

Some things are going to be difficult. Installing Redgate SQL Source Control doesn’t seem to be scriptable, but other than that you can:

  • Install Visual Studio
  • Install plugins
  • Use chocolatey to install source control management
  • Get the sources locally

In some cases, as your system grows and you break bits out into separate microservices, you will need this scripting to make sure you can set up the entire system to be debugged locally. Ideally you would, as an additional means of verifying your deployment method, use chef-zero or the corresponding technology from Puppet to apply your normal deployment templates when configuring the system on the development machines as well. This is another situation where clever IDE tooling actively makes things difficult for you, but any work you put in to automate here will pay huge dividends.

But what does this all mean in practice?

In short, if you are going to keep working on Windows exclusively, at least learn PowerShell. It is not that horrible. The stance among Windows users has long been anti-scripting, and with the state of CMD.exe it has been for good reason. PowerShell, though, has a lot of features that make sense, even though the syntax can be confusing at first. Learning Ruby may be more viable from a cross-platform perspective, but PowerShell is evidently more suited to Windows, and a lot of useful cmdlets for enabling Windows features et cetera are only available in PowerShell.

Red mist

Sometimes you get so upset about something you need to blog about it, and ranting on Twitter, Facebook and Channel 9 comments just did not quite seem enough.

The insult, and the injury

I watched a Channel 9 talk about the “future of C# and VB” by Jay Schmeltzer from DEVintersection, where he went over the future of the .NET stack and started out by looking over the Stack Overflow statistics of programming languages that I have addressed before. He showed the graph of Most Popular Technologies, showing C# placed prominently fourth behind Java, SQL and JavaScript. Then Jay showed the category Most Loved languages, where Microsoft have a runaway hit with F# in third place, and C# way down in 10th place. Obviously, VB was right on top of the Most Feared, but that you knew already. Jay expressed much pride in having C# all the way up on 10th place of most loved languages and then said “and we have Microsoft’s F# up here on 3rd, but that is of course more of a … well, C# is of course mainstream in a different way”. So in other words, not even a pretence that F# is a first-class citizen in .NET.

Contrast with Apple. From almost single-handedly keeping Objective-C in existence and popular despite the crushing superiority in funding and mindshare of C++, Apple basically told everybody that (the F# clone) Swift is it from now on. They basically did a VB6 -> C# sort of story, telling everybody that they were welcome to use that old stuff but they really should get on board with the modern technologies.

Contrast that with the above statement from top brass at Microsoft.

So basically, in this blog I am trying to collect ways in which a dark matter C# developer can just start using F# today, thanks to the extremely ambitious efforts of the friendly and hard-working F# community, and to show the extreme extent to which Microsoft isn’t bothering at all – ignoring the goldmine they are sitting on. I’m trying to find ways which make things easier for existing Microsoft-focused developers, so I will be collating the things you can use to go F# today for the crummy ASP.NET LOB apps that make up the bread and butter of the C# world.

The comeback

My goal is to provide ways in which you can go F# today and immediately write less code, with fewer bugs, that does the things your C# code does today. You will eventually discover cooler things, like package management using Paket (which supports NuGet package feeds and maintaining dependencies directly from GitHub or similar), the very F# web framework Suave.io, and using FAKE as a build system rather than MSBuild, which helps if you have complex builds where you would rather read F# code than mess with XML. You may perhaps find other ways to persist data that are more natural to use in F#, and you will have little problem learning them once you get over the hump, but just to make the barrier to entry extremely low, let’s keep things familiar and non-scary. The thing that would come for free, if Microsoft had devoted more than a fraction of their means towards F# in Visual Studio, is the templating that makes C# so easy to use when creating websites and web services. To achieve that ease of use we have to rely on the community, and they have, despite the odds, come up with a few competitive options over the years. I have compiled these on my F# for C# people page, which I hope to keep updated.

Passwords – for wannabe techies

EDIT: As you will notice a lot of my links point to the works of security researcher Troy Hunt, and I should point out that I have no affiliation with him. I just happen to find articles that make sense and they happen to be written by him or refer to his work. Anyway here is another one, addressing the spirit of this blog post, with horrifying examples.

If you google for “ASP.NET login form” or similar, you will get some hits with really atrocious examples of how to NOT handle peoples’ credentials. Even if you are a beginner writing an app that nobody will use, you should never do user login in a shitty way. You will end up embarrassing yourself and crucially your users, who most likely are friends and family when you are at the beginner stage, when – inevitably – your database gets stolen.

How do you do it properly then, you ask?

Ideally – don’t. Let other people worry about getting hacked. Plenty of large corporations are willing to take the risk.

Either way though,  get a cert and start running HTTPS only.  HTTP is fast and nice for sites that allow anonymous access, such as blogs, but as soon as you are accepting sensitive data such as passwords you need to go with HTTPS.

If you are in the Microsoft sphere, just create a new web project in Visual Studio, register yourself as a developer with Google, Facebook, Microsoft or similar, create apps there, and configure those app credentials in the Visual Studio app – and do make sure you store those app secrets outside of source control – and run that app. After minimal tweaking you should have something working where you can authenticate in your app using those authentication providers. After that you can, with varying effort required, back-port this authentication to whichever website you were trying to add authentication to.

But I insist on risking spreading my users’ PII when I get hacked

OK, fair enough, on your head be it.

Separate your auth storage from your app data storage, at least by database connection user rights. When they hack your website using SQL injection, they shouldn’t be able to get hashed passwords. That just isn’t acceptable. So, yes, don’t do Windows Authentication on your dev box; define SQL Database Users and SQL Server Logins and make scripts that ensure they are created if they don’t exist. Again, remember to keep secrets away from the stuff you commit to version control. Use alternative means to store secrets for production. Azure will help you with these settings in the admin interface, for instance.

Tighten the storage a bit

Store the hashed password and the salt. Set your storage permissions such that the password hash cannot be retrieved from the database at all. The database login used to access the storage should not have any SELECT rights, only EXECUTE on stored procedures. That way you write one stored procedure that retrieves the salt for a login, so that the application can calculate a hash for the password supplied by the user trying to log in, and then another stored procedure that takes a username and the hash and compares it internally to the one stored in the database, returning to the caller whether or not the attempt was successful, without ever exposing the stored hash.

Note that if you tighten SELECT rights but don’t separate storage between auth and app, every single Entity Framework sample will crash and burn, as EF requires special administration to use stored procedures rather than direct CRUD.

Go for a stupidly complex hashing algorithm

Also – you are not going for speed when you are hashing passwords. MD5 and SHA1 are really nice for checksumming files, but they suck for passwords. You are looking for very slow and complex algorithms.

.NET comes with Microsoft.AspNetCore.Cryptography.KeyDerivation.Pbkdf2, but the best and most popular password hashing algorithm is bcrypt.
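
For reference, a minimal sketch of what using the built-in Pbkdf2 helper might look like – in F#, since that is what I have been writing lately; the salt size and iteration count are illustrative choices of mine rather than a recommendation:

open System.Security.Cryptography
open Microsoft.AspNetCore.Cryptography.KeyDerivation

let hashPassword (password: string) =
    // a fresh random salt per user
    let salt = Array.zeroCreate<byte> 16
    use rng = RandomNumberGenerator.Create()
    rng.GetBytes(salt)
    // deliberately slow: many iterations of HMAC-SHA256, 32 bytes of output
    let hash = KeyDerivation.Pbkdf2(password, salt, KeyDerivationPrf.HMACSHA256, 100000, 32)
    salt, hash   // store both; the salt is needed to recompute the hash at login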

The hashes generated are of fixed size, so just define your storage to be big enough to take the output of the algorithm, and then there is no need to limit the size of the password beyond the upload limits the web server imposes for DoS protection; for a password, those limits are ludicrously high, so you don’t even have to mention them to the user. Also, don’t mess with copy/paste in the password box. You want to be password manager friendly.

Try and hack yourself

Use Zed Attack Proxy or similar to try and break into your site. It will tell you some of the things you need to change in terms of protecting against XSS and CSRF. The problem isn’t that your little site is going to be the target of the NSA or the Chinese government; what you need to worry about is the tons of automated scripts that prod and poke at every site everywhere and collect vulnerabilities with zero effort on the part of the attacker. If you can avoid being vulnerable to those most basic attacks, you can at least have some self-respect.

In conclusion

These are just the basics, and I am mostly writing this to discourage people from writing login forms as some kind of beginner exercise. Password reuse is still rampant, so if you make a dodgy login form you are most likely going to collect some userid/password combinations people really use at other sites, and those sites may very well be way more important than yours. Not treating that information seriously is extremely unprofessional and bad karma. Please do not build user authentication yourself if you don’t intend to make some kind of serious effort to protect people’s data. Instead use OAuth solutions and let people authenticate using Google, Facebook, Twitter, Microsoft or whichever auth provider you prefer. That way you will never see any passwords and won’t have anything that can be stolen, which is a much easier life to lead.