Category Archives: Architecture

SD&D 2022

I had the chance to attend the Software Design & Development conference at the Barbican Centre this week. It was my first conference since the plague, so it was a new experience. Here I will coalesce some of the main points I gathered. I may go into specifics on a couple of topics that stood out to me, but this is mostly so that I have some notes to aid my memory.

Overall

The conference is very well organised; you can tell it’s not their first rodeo. From a practicality point of view the Barbican is equally easy/cumbersome to get to from all directions, which is about as good as you can get in London, where you usually end up favouring proximity to a subset of main line termini, making the location cumbersome to get to for at least half the population (since there are airports in every direction out from the city). There were a number of timeslots where you truly had FOMO for choosing one track over another, which is a design goal of a program committee – so, well done SD&D! There was some unfortunate setting in the AV equipment that interfered with keyboard shortcuts in Visual Studio in interesting ways, but surely that can be addressed somehow.

Kevlin Henney

The first law of developer conferences states Always Catch a Talk by Kevlin Henney if Available. It doesn’t teach you anything about a specific new thing, but it puts the entire universe into context, and always inspires you to go ahead and look up papers from the olden days that describe modern phenomena.

C# features

My biggest bafflement came not from the talks that showcased new C# features, such as the latest iteration of pattern matching and the like – I usually get introduced to those when they show up as options in ReSharper – but from the old abomination that is default implementations on interfaces in C# 8. This talk by Jeremy Clark was an eye opener. It is so jank you wonder how it could ever be released into production. My guess is that the compatibility argument from MAUI was the big reason, and it makes sense, but my instinct to stay away from the feature was right – and I suspect you will see weird bugs because of it at some point in the future.
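
To illustrate the sort of thing that trips people up, here is a minimal sketch (type names invented for illustration): the default member belongs to the interface, not to the implementing class, so you can only reach it through an interface-typed reference.

```csharp
public interface IGreeter
{
    string Name { get; }

    // C# 8 default implementation: implementers get this "for free"...
    string Greet() => $"Hello, {Name}!";
}

public class SwedishGreeter : IGreeter
{
    public string Name => "Sverige";
}

public static class Program
{
    public static void Main()
    {
        var greeter = new SwedishGreeter();
        // greeter.Greet();  // does not compile - the default member lives on
        //                   // the interface, not on the class
        IGreeter viaInterface = greeter;
        System.Console.WriteLine(viaInterface.Greet()); // "Hello, Sverige!"
    }
}
```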

I also caught the C# Channels talk. I seem to recall the gestation of Channels; if I remember correctly it was publicly brought into being through discussions on David Fowler’s Twitter account. Anyway, it is a highly civilised way of communicating between two async tasks with back pressure. Intuitive to use and safe. Like a concurrent queue, but a lot nicer.
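
A minimal sketch of what that looks like with System.Threading.Channels – a bounded channel is what gives you the back pressure, since the writer awaits when the buffer is full:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // Bounded capacity is what provides back pressure.
        var channel = Channel.CreateBounded<int>(capacity: 10);

        var producer = Task.Run(async () =>
        {
            for (var i = 0; i < 100; i++)
                await channel.Writer.WriteAsync(i); // awaits when the reader falls behind
            channel.Writer.Complete();
        });

        var consumer = Task.Run(async () =>
        {
            await foreach (var item in channel.Reader.ReadAllAsync())
                Console.WriteLine(item);
        });

        await Task.WhenAll(producer, consumer);
    }
}
```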

Micro services

Allen Holub had a number of talks on microservices, and if you can you should see more of them; due to the abundance of other brilliant talks I only caught his test-driven architecture talk, which I will mention below. Beyond him there were talks by Juval Löwy and Neal Ford covering architecture and microservices. I caught Software architecture foundations: identifying characteristics by Neal Ford and Sander Hoogendoorn’s talk on migrating to microservices in small steps, both worth watching.

Security

There were a couple of security talks focussing on automating security and on the updated OWASP Top 10 list of threats, plus a couple of talks by Scott Brady about – I am guessing – IdentityServer. The dizzying array of choices meant I didn’t go to the IdentityServer talks, but they are definitely on my to-watch list if there is ever video from this.

The security automation ones, though – Continuous Security by Kim van Wilgen and Add Security into your Agile Process by Cecilia Wirén – took you through what you need to really improve the security posture of your development process. Kim van Wilgen was more in-depth on tooling whilst Cecilia Wirén was more about the whole process and what to consider where. The only sustainable way forward is to automate things like dependency vetting/tracking and as much static code analysis as you can, bringing these concerns as far left as possible, meaning early in the process. Rewriting a function before you have even committed it is easy. Having an automated fuzzing tool discover a buffer overrun vulnerability when you thought you were done is worse from a cost-of-remediation point of view, but letting a bad vulnerability out into the wild is an order of magnitude worse still. So whilst catching problems in the dev cycle is preferable, heavier-handed automation that is too time-consuming to run on every commit is still worth running periodically, because it beats the alternative.

Tests drive everything

How to stop testing and break your codebase by Clare Sudbery was an amazing talk that really hit home. It was like an experiment in “what if I just skip test-first for a bit and see what happens?”, brought on by time crunch, tiredness and a sense that it would be a safe trade-off because “I know what I’m doing” and – in her case – “I have acceptance tests”. Like I have noticed in various side projects, when you let go of the discipline the drawbacks come at you hard and fast – almost cartoonishly so. It was one of the most relatable talks I have ever attended.

Allen Holub’s DbC (Design by Coding): applying TDD principles to architecture was fascinating. It started out a bit “old man yells at cloud” about how real agile is index cards on a board, not Jira, and although yes, index cards or post-its on a physical board are preferable to an electronic board, the electronic board saves me from having to commute four days out of five, so – no. The point about authoring a million detailed tickets ahead of time being a waste, though: hard agree. The rest of the diatribe against big design up-front I was all aboard with and probably already following to some extent. The interesting bit was still to come, though.
He presented a hypothetical technical problem that needed architecting. Instead of drawing a diagram or writing specs, he whipped out an editor with his favourite version of JUnit and wrote some Java code through TDD – all in one file – that implemented tests and classes symbolising microservices and their endpoints, as well as the interactions between them. Lightweight, easy to read for developers, pleasant tooling and at least as useful as a diagram. In both cases you start writing the real code from scratch, but here you have a design document that makes sense. I’m not 100% sold on the concept, but it’s worth considering.
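
The talk used Java and JUnit; a rough C#/xUnit approximation of the idea might look like the sketch below, where classes stand in for microservices, methods for their endpoints, and the test documents how they interact (all names invented for illustration):

```csharp
using Xunit;

public class PaymentService
{
    // "Endpoint": charge for an item; trivially accepts everything in this sketch.
    public bool Charge(string item) => true;
}

public class OrderService
{
    private readonly PaymentService _payments;
    public OrderService(PaymentService payments) => _payments = payments;

    // "Endpoint": place an order, which in turn calls the payment service.
    public string PlaceOrder(string item) =>
        _payments.Charge(item) ? "accepted" : "rejected";
}

public class ArchitectureTests
{
    [Fact]
    public void Placing_an_order_charges_the_customer_via_the_payment_service()
    {
        var orders = new OrderService(new PaymentService());
        Assert.Equal("accepted", orders.PlaceOrder("coffee"));
    }
}
```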

Various highlights

Tuesday morning’s keynote was about quantum computing, and reluctantly I must concede that it probably is the future, and the community seems to be looking for converts, but I just can’t go from quantum entanglement to taking data from a web page and shoving it into a database, so I guess I have to wait those three years before it hits the mainstream and becomes digestible for the likes of me. If you are into cryptography as in encryption, not the various ponzi schemes, you should probably get into it now.

I caught a Kate Gregory talk – on naming – after only being a fan of her YouTube talks on C++ vs C; definitely worth seeing. Her strategy of “just mash the keyboard if you can’t think of a name and go back to it later” was also brought forward elsewhere at the conference, the point being: don’t get stuck trying to come up with a name. Start writing the code, and as you start talking about what the thing you just wrote is and what it is responsible for, a great name will eventually become evident, and then you use that. The effort of coming up with a good name is worth it, and with refactoring tools it’s worth just moving past the instant rather than making a bad decision that sticks. Once you have finished the feature and moved on, the caveats and the difference between the name and the actual implementation will fade into obscurity, and when you come back to the same code in three weeks you will have forgotten all about it and can be misled by bad names as easily as if you hadn’t written the code yourself.

Conclusion

I really enjoyed this conference. Again, with a past on a program committee myself, I really admired the work they put in to cause so much anxiety when picking talks. Obviously the QE2 conference centre in Westminster is newer, so NDC London benefits from that, and if you want to see Troy Hunt and the ASP.NET Core guys Fowler and Edwards you would be better off going to NDC, but SD&D got all the core things right, had a wider array of breakout sessions, and infrastructure like the food was a lot less chaotic than it usually is at NDC. As for BuildStuff and Øredev, I only attended those as a visitor to the city, so I didn’t have to commute, which of course was nice.

If you like the environment you’ll be pleased that there was no shilling or abundance of obscure t-shirts handed out that would have drained natural resources, but I suspect that SD&D would have enjoyed more sponsorships and an expo floor, which allegedly they have had before. Since this conference was actually SD&D 2020, postponed several times over the course of two years, it is possible that SD&D 2023 will have pre-pandemic levels of shilling. All I know is I enjoy free t-shirts to clothe the child. Regardless, I strongly recommend attending this conference.

Transformers

What are the genuinely difficult aspects of transforming your software function?

It seems everybody intuitively understands what brings speed and short time to market, and how that in turn automatically allows for better innovation. People also seem to get that in the current stale market, with fast enough delivery you could even forego smarts and just brute-force innovation, launching new concepts and tweaks until profits go up, then declaring a win as if you knew what you were doing the whole time. Secretly people also know that although you could rinse and repeat that naïve approach until retirement, you could instead exert a minimum of extra effort and measure a bit better, so that you actually know what you are doing and can focus your efforts.

So why isn’t everyone moving on this?

When you get a bunch of people in the same organisation you want to achieve some economies of scale and solve common problems once rather than once per team.

This means you delegate some functions into separate teams. Undoing this, or at least mitigating this, is difficult politically. Some people – with some cause – fear for their jobs when reorgs happen.

Sudden unexpected cost runaway is the biggest recurring nightmare of middle managers. Controls are therefore in place to prevent developer cloud spend from ballooning.

Taken together, however, this means teams are prevented from innovating independently: they cannot construct virtual infrastructure as needed because the cost has not been authorised, and they cannot play with new pieces of virtual infrastructure because those haven’t been approved by the central tech authority yet.

The Power of Sample Code

What is wrong with OOP?

In the culture wars between “Object Oriented Programming” and Functional Programming, you will find proponents of OOP arguing that we are doing fine – why should we change? – and proponents of FP listing a litany of problems inherent in what we are doing today and pointing to the ways FP solves them. Having once attended an Object Bootcamp with Fred George, I believe the two main schools of thought are both wrong. All the problems listed by the FP peeps are real, but they are not inherent in OOP; OOP actually addresses a few of them. The trouble is that we, as an industry, are not doing OOP.

I may feel Fred George is the Messiah, but he is not alone in his views. Greg Young has similar concerns.

Inheritance is not the Big Deal

I am old enough to remember Borland C++ ads from the 90s. They focused a lot on inheritance, and reuse through inheritance became the USP for object oriented languages.

As soon as you have written some code though, you realise inheritance is the worst, as it creates undue coupling, making changes very hard to implement.

When Borland made those ads about how Porsche Turbo inherited the Carrera but implemented a big fat rear wing, they had begun their foray into C++ because it offered a way to handle the substantial boilerplate involved in writing a program in Windows. It was relatively straightforward to implement the basics and create a usable abstraction on top of the raw Windows API that made the developer experience much more pleasant.

As visual designers became a thing, they needed a way to map properties to code, so that UI components (those things implemented as objects mentioned above) could be manipulated by a developer in design mode. “Property” setters, basically syntactic sugar disguising normal functions, allowed the UI designer to read settings from the object and replace them with what the developer typed in. With this work, Borland and Microsoft were catching up with Interface Builder from NeXT Computer (the same thing that lives on today in the Apple macOS/iOS SDK), which had bolted a different type system on top of C and called it Objective-C – but which had a world-leading visual designer at the time. Anyway, I think they were in a hurry and didn’t think things through.
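
For the uninitiated, the sugar amounts to roughly this (a simplified sketch; the generated names are a compiler implementation detail):

```csharp
public class Widget
{
    public int Width { get; set; }
}

// ...is more or less equivalent to:
public class WidgetDesugared
{
    private int _width;                                  // hidden backing field
    public int get_Width() => _width;                    // what the getter compiles to
    public void set_Width(int value) => _width = value;  // what the setter compiles to
}
```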

Approaches to deal with big programs

In a large codebase, the big problem is achieving low coupling but high cohesion. This means, you want all the code that belongs together to live together but you don’t want to have to make changes in seemingly unrelated code to modify a piece of functionality.

In large programs of old, you could call any subroutine from anywhere else, and many resources were shared, meaning that between the time you set a value in a variable and the time you read from it, some piece of code in between could have modified the value, and you had no automatic way of knowing where that access was made or how to prevent it.

In FP, we use modules for scoping, meaning you group functions into modules to aid readability, but the key concept, the Big Idea, is immutability. After a value is created it exists globally, but since values are read-only once created, the drawbacks of global state go away. There is no way to change something that somebody else relies on. You can transform it into a new thing that you need, but the original value hangs around until it is no longer needed. It is much harder to accidentally break other code with the changes you are making.
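
A C#-flavoured illustration of the same idea, using records (my own sketch, not something from any particular FP source):

```csharp
public record Account(string Owner, decimal Balance);

public static class Program
{
    public static void Main()
    {
        var original = new Account("Ada", 100m);

        // "Changing" the balance produces a new value; the original is untouched,
        // so nothing that already holds a reference to it can be surprised.
        var afterDeposit = original with { Balance = original.Balance + 50m };

        System.Console.WriteLine(original);     // Account { Owner = Ada, Balance = 100 }
        System.Console.WriteLine(afterDeposit); // Account { Owner = Ada, Balance = 150 }
    }
}
```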

The Big Idea in Object oriented development is Encapsulation. You put the data with the code and manipulate abstractions. This means that if you get your abstractions right, you can change or replace these abstractions without needing to make sweeping changes in the codebase.

The original concept of object orientation relied on independent small sub-programs that communicated by message passing, implicitly imagining an “in tray” of messages that the object could process at its own pace, sending a response when the work was completed. However, objects in C++, Java and C# were implemented as special dynamically allocated structs to which you make function calls, i.e. they became decidedly more synchronous than they were in Smalltalk or Simula. You would recognise Erlang processes and actors as looking more like OG objects. You can also see that what made objects useful is that they shared properties we today associate with the term microservices, only on a smaller scale.

So what’s the problem, and what’s up with the title of this blog post?

Java, and C# arguably even more so, took the Big Idea and tossed it out the window. Property Get/Property Set to support novelties like graphical designers and visual components are a clear violation of encapsulation. Why are we letting objects access data that lives in other objects? The need to do that is a huge red flag that your model is incorrect. Both the bible, i.e. Refactoring by Fowler, and the actual Bible condemn this feature envy.
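
A contrived before/after, assuming a shopping basket (invented types, purely for illustration): the first version leaks its data through properties and invites feature envy, the second keeps the data and the behaviour together.

```csharp
using System.Collections.Generic;
using System.Linq;

public class AnemicBasket
{
    public List<decimal> Prices { get; set; } = new();
}

public class CheckoutService
{
    // Feature envy: all this method cares about is another object's data.
    public decimal Total(AnemicBasket basket) => basket.Prices.Sum();
}

public class Basket
{
    private readonly List<decimal> _prices = new();

    public void Add(decimal price) => _prices.Add(price);

    // The behaviour lives with the data it needs.
    public decimal Total() => _prices.Sum();
}
```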

But why did these properties survive the nineties and live on into modern day? Why have they made things worse with auto properties?

Sample code

When you learn a new language, or to code in general, the main threshold is getting to the point where you write idiomatic code in that language, i.e. you use familiar phrases. You indent the code in a certain way, you name things according to a certain standard, and you use the familiar ways of doing things like opening a database connection or making an HTTP request, etc., that a seasoned programmer would recognise. Unfortunately, in C# at least, these anti-patterns are canon at this point, so writing properly encapsulated code would maybe cause a casual reviewer to ask WTF and be sceptical.

What is canon comes from the publicly available body of work that a beginner can reasonably access. Meaning, effectively Microsoft sets the bar when they announce features, document them and create samples.

There are some issues here. If you look at a large piece of sample code, you may notice how difficult it is to identify the key concept being demoed, as the logging code or error handling bulks up the code in a distracting way, so brevity must be allowed to remain a priority, clearly.

At the edges where the code starts interacting with network and storage, this type of organisation isn’t inherently despicable, so a blanket ban is perhaps not the way forward either.

How do we make it clear to new OO devs that, when they fill the empty Models folder their project template creates for them with code, they would be better off thinking OO proper?

By that I mean making classes that are extremely small, using value objects, preferring private fields, avoiding properties, et cetera. My suspicion is that any attempt at conveying this programming style through the medium of sample code in templates or documentation is doomed. The bulk of code necessary not only to prove the concept but to in fact make it part of the vernacular would require a large number of people making quite a lot of good code public, so that new learners can assimilate the knowledge.

I think good OO code is scarce. Getting the abstractions right is just too hard; you will have compromises in various places, and all the tools tempt you with ways to stray from the narrow path of righteousness. But with modern refactoring tools you should be able to address some of the issues and continually strive to make the code better.

Incidentally, with properly sized objects you can unit test without cheating (using internal helper methods, or by using mocks), so there is scope to brighten up the tests as well.

Automation and security

There is a recent spate of sophisticated attacks on software delivery mechanisms, where cyber criminals have had massive success breaching one organisation to gain automatic access to hundreds of thousands of other organisations through the update mechanism the breached organisation provides.

Must consider security at design time

I think it needs reiterating that security needs to be built in by default, from the beginning. I haven’t gone back to check properly, but I know I deleted an old blog post because it had some dubious security practice in it. My new policy is that I would rather omit some part of a process than show a dodgy sample. There are so many blog posts you find if you search for “login form asp.net” that don’t even hash passwords. Rather than point beginners to the built-in password hashing algorithms that are available in .NET, and the two lines of code you have to write, they leave beginners thinking “it’s all right, just this once” and breed the basic idea that security is optional – something you test for afterwards if you are building something “important”, not something you think about all the time.
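
For reference, the “two lines” are roughly these, using the hasher that ships with ASP.NET Core Identity – a sketch, not a complete membership system:

```csharp
using Microsoft.AspNetCore.Identity;

public static class PasswordDemo
{
    private static readonly PasswordHasher<string> Hasher = new();

    // Hash on registration, store the result.
    public static string Hash(string userName, string password) =>
        Hasher.HashPassword(userName, password);

    // Verify on login against the stored hash.
    public static bool Verify(string userName, string storedHash, string attempt) =>
        Hasher.VerifyHashedPassword(userName, storedHash, attempt)
            != PasswordVerificationResult.Failed;
}
```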

The thing is, we developers have tools that help us do complicated things – like break bits of code out from other bits of code automatically, or rename a specific construct, including mentions in surrounding comments, without also incorrectly renaming unrelated constructs that share the name.

It turns out cyber criminals too have plenty of automation that helps them spend very little effort breaking into companies and exploiting that access in a number of different ways.

There is maybe no “why”

This has a couple of implications. First off, attackers are probably not looking for you per se. You may be a nobody; you will still be exposed to automated attacks that test your network for known vulnerabilities and apply automated suites of exploits to see what happens. This means that even if you don’t do anything that conceivably could have value to an attacker, you will still be probed.

The second thing is, to prevent data loss you need to make every step the attacker has to take a hardship. Don’t advertise what software versions your public facing servers are running, don’t let service accounts have access to things beyond what they need, do divide networks into segments so that – for example – one machine with ransomware cannot directly infect your entire network.

Defend in depth

Change any business processes that require people to open e-mail attachments as part of their job. Offer services that help people do their job in a more convenient way that is also more secure. You cannot berate people for attempting to do their job. I mean, you can but it is not helpful.

Move backups off site and offline of course, for many reasons. But, do remember that having to recover a massive storage system from a backup can still be an extinction level event for a business even if you do have a working reliable off site backup solution. If you lose a large SAN you may be offline for days, and people will not be able to work, you may need to bring sites offline while storage recovers. When you procure a sophisticated storage solution, do not forget to design a recovery strategy ahead of time for how to rebuild a massive spinning rust storage array from absolute zero while new data is continuously generated. It is a non-trivial design challenge that probably needs tailoring to how your business operates. The point is, avoiding the situation where you need to actually restore your entire storage from tapes is always best.

Next level

Despite the intro, I have so far only mentioned things that any company needs to think about. There are of course organisations that are actually targeted. Financial institutions, large e-retailers or software supply chain companies run a greater risk of being manually targeted by evildoers.

Updates

Designing a secure process for delivering software updates is not trivial, and I am in no position to give direct advice, beyond suggesting that if you intend to do it, you consider from the beginning how to track vulnerabilities, how to effectively remove versions that have been flagged as actively harmful, and how to support your users if they have deployed something dodgy. If that day comes, you need to at least be able to help your users. It will still be awful, but if you treat your users right, you might still make it.

Humans

Your people will be exploited. Every company that has an army of customer service representatives will need to make a trade-off between customer convenience and security. Attacks on customer service reps are very common. If you have high-value clients, people will use you to get to your clients’ money. There is nothing to say here, other than that obviously you will be working with relevant authorities and regulatory bodies, as well as fine-tuning your authentication process so that you ask for confirmation information that is not readily available to an attacker.

Insiders

I don’t have any numbers on this, so I am unsure how big a problem it is, but it is mentioned often in security circles. Basically, humans can be exploited in a different way: employees can be coerced through intimidation, blackmail or bribery to act maliciously on behalf of an attacker. My suspicion is that this is less common than employers think, and that in cases where an employee was stressed or distracted and fell for a phishing e-mail, the employer assumes “that is too obvious a phish, this guy must have been in on it”.

It makes me think of that one time when a systemic failure on multiple levels meant that a cleaner accidentally started a commuter train that ran from the depot the length of the commuter railway Saltsjöbanan – at maximum speed – eventually crashing through the buffers and into a building at the terminus. In addition to her injuries, she suffered the headlines “train stolen and crashed” until the investigation revealed the shocking institutional failings that had made the accident possible. I can’t remember all of them, but there were issues with the practices for how cleaners accessed the trains, how safety controls were disabled as a matter of course, how trains were stabled, and the fact that points were left set so that a runaway train would actually leave the depot. A shambles. Yet the first reaction from the employer was to blame the cleaner.

Anyway, to return to the matter at hand – yes, although I cannot speculate on the prevalence, it is a risk. Presumably, if you hire right and look after your people, you can get them to come to you if they have messed up and got themselves into a compromised situation where they are being blackmailed or somebody is leaning on them. Breeding a strong culture of fear is counterproductive here – you want people to believe that you will help them rather than fire them and litigate, as long as they come forward voluntarily. If you are working in a regulated industry, things are complicated further by law enforcement in various jurisdictions.

Passwords – for wannabe techies

EDIT: As you will notice a lot of my links point to the works of security researcher Troy Hunt, and I should point out that I have no affiliation with him. I just happen to find articles that make sense and they happen to be written by him or refer to his work. Anyway here is another one, addressing the spirit of this blog post, with horrifying examples.

If you google for “ASP.NET login form” or similar, you will get some hits with really atrocious examples of how NOT to handle people’s credentials. Even if you are a beginner writing an app that nobody will use, you should never do user login in a shitty way. You will end up embarrassing yourself and, crucially, your users – who most likely are friends and family at the beginner stage – when, inevitably, your database gets stolen.

How do you do it properly then, you ask?

Ideally – don’t. Let other people worry about getting hacked. Plenty of large corporations are willing to take the risk.

Either way though, get a cert and start running HTTPS only. HTTP is fast and nice for sites that allow anonymous access, such as blogs, but as soon as you are accepting sensitive data such as passwords you need to go with HTTPS.

If you are in the Microsoft sphere, just create a new web project in Visual Studio, register yourself as a developer with Google, Facebook, Microsoft or similar, create apps there, and configure those app credentials in the Visual Studio project – and do make sure you store those app secrets outside of source control – and run the app. After minimal tweaking you should have something working where you can authenticate in your app using those providers. After that you can, with varying amounts of effort, back-port this authentication to whichever website you were trying to add authentication to.
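
In a current ASP.NET Core project the wiring looks roughly like this (a sketch; the configuration keys are whatever you choose, and the Google provider lives in the Microsoft.AspNetCore.Authentication.Google package):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication()
    .AddGoogle(options =>
    {
        // Secrets come from configuration (user secrets, environment variables,
        // a key vault) - never from source control.
        options.ClientId = builder.Configuration["Authentication:Google:ClientId"]!;
        options.ClientSecret = builder.Configuration["Authentication:Google:ClientSecret"]!;
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();
```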

But I insist on risking spreading my users’ PII when I get hacked

OK, fair enough, on your head be it.

Separate your auth storage from your app data storage, at least by database connection user rights. When they hack your website using SQL injection, they shouldn’t be able to get at hashed passwords. That just isn’t acceptable. So yes, don’t use Windows Authentication on your dev box; define SQL database users and SQL Server logins, and make scripts that ensure they are created if they don’t exist. Again, remember to keep secrets away from the stuff you commit to version control, and use alternative means to store secrets for production. Azure, for instance, will help you with these settings in its admin interface.

Tighten the storage a bit

Store the hashed password and the salt. Set your storage permissions such that the password hash cannot be retrieved from the database at all. The database login used to access the storage should not have any SELECT rights, only EXECUTE on stored procedures. That way you write one stored procedure that retrieves the salt for a login, so that the application can calculate a hash for the password supplied in the login attempt, and another stored procedure that takes the username and that hash and compares it internally to the one stored in the database, returning to the caller whether or not the attempt was successful, without ever exposing the stored hash.
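
On the application side that could look something like this sketch, with invented stored procedure names (GetSaltForLogin, VerifyPasswordHash) – the point being that the stored hash never leaves the database:

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

public static class LoginCheck
{
    // Assumes an open connection and a login whose only right is EXECUTE.
    public static byte[] GetSalt(SqlConnection conn, string userName)
    {
        using var cmd = new SqlCommand("GetSaltForLogin", conn) { CommandType = CommandType.StoredProcedure };
        cmd.Parameters.AddWithValue("@userName", userName);
        return (byte[])cmd.ExecuteScalar();
    }

    public static bool Verify(SqlConnection conn, string userName, byte[] hash)
    {
        using var cmd = new SqlCommand("VerifyPasswordHash", conn) { CommandType = CommandType.StoredProcedure };
        cmd.Parameters.AddWithValue("@userName", userName);
        cmd.Parameters.AddWithValue("@hash", hash);
        return (bool)cmd.ExecuteScalar(); // the procedure returns a bit: match or no match
    }
}
```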

Note that if you tighten SELECT rights but don’t separate storage between auth and app data, every single Entity Framework sample will crash and burn, as EF requires special configuration to use stored procedures rather than direct CRUD.

Go for a stupidly complex hashing algorithm

Also – you are not going for speed when you are looking for password hashes. MD5 and SHA1 are really nice for checksumming files, but they suck for passwords. You are looking for very slow and complex algorithms.

.NET comes with KeyDerivation.Pbkdf2 (in the Microsoft.AspNetCore.Cryptography.KeyDerivation package), but the best and most popular password hashing algorithm is bcrypt.
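
A minimal sketch of the Pbkdf2 route (iteration count and sizes here are placeholder values – look up current recommendations):

```csharp
using System.Security.Cryptography;
using Microsoft.AspNetCore.Cryptography.KeyDerivation;

public static class Pbkdf2Demo
{
    public static (byte[] Salt, byte[] Hash) HashPassword(string password)
    {
        var salt = RandomNumberGenerator.GetBytes(16); // per-password random salt

        var hash = KeyDerivation.Pbkdf2(
            password: password,
            salt: salt,
            prf: KeyDerivationPrf.HMACSHA256,
            iterationCount: 100_000,      // deliberately slow
            numBytesRequested: 32);

        return (salt, hash);              // store both; the salt is not secret
    }
}
```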

The hashes generated are of fixed size, so just define your storage to be big enough to take the output of the algorithm, and then there is no need to limit the size of the password beyond the upload limits the web server enforces for DoS protection – and for a password those limits are ludicrously high, so you don’t even have to mention them to the user. Also, don’t mess with copy/paste in the password box. You want to be password-manager friendly.

Try and hack yourself

Use Zed Attack Proxy or similar to try and break into your site. It will tell you some of the things you need to change in terms of protecting against XSS and CSRF. The problem isn’t that your little site is going to be the target of the NSA or the Chinese government, what you need to worry about is the tons of automated scripts that prod and poke into every site everywhere and collect vulnerabilities with zero effort from the point of the attacker. If you can avoid being vulnerable to those most basic attacks, you can at least have some self-respect.

In conclusion

These are just the basics, and I am mostly writing this to discourage people from writing login forms as some kind of beginner exercise. Password reuse is still rampant, so if you make a dodgy login form you are most likely going to collect some userid/password combinations people really use at other sites, and those sites may very well be far more important than yours. Not treating that information seriously is extremely unprofessional and bad karma. Please do not build user authentication yourself if you don’t intend to make a serious effort to protect people’s data. Instead use OAuth solutions and let people authenticate using Google, Facebook, Twitter, Microsoft or whichever auth provider you prefer. That way you will never see any passwords and won’t have anything that can be stolen, which is a much easier life to lead.

Getting back into WPF

These last couple of weeks I have been working with a Windows desktop app based on WPF. I hadn’t been involved with that in quite some time so there was some trepidation before I got stuck in.

I have been pairing consistently throughout, and I believe it has been very helpful for both parties, as the union of our skill sets has been quite large and varied regardless of which colleague I was working with at the time. The app has interesting object life-cycles and some issues around object creation when viewed from the standpoint of what we need to do today, although it was well suited to the problems it solved at the time it was written.

Working closely with colleagues means we could make fairly informed decisions whilst refactoring, and the discussions have felt productive. I tend to feel that the code is significantly improved as we finish tasks, even though we have stayed close to the task at hand and avoided refactoring for its own sake.

Given the rebootcamp experience, I’m always looking to make smaller classes with encapsulated functionality, but I still have room for improvement. As I’m fortunate enough to have very skilled colleagues it is always useful to discuss these things in the pair.  It helps to have another pair of eyes there to figure out ways to proceed – getting things done whilst working to gradually improve the design. I haven’t felt disappointed with a piece of code I’ve helped write for quite a while.

The way we currently work is that we elaborate tasks with product and test people and write acceptance criteria together before we set out to implement the changes. This means we figure out, most of the time, how to use unit tests to prove our acceptance criteria without having to write elaborate integration tests, keeping them fairly simple and trying to wrestle the test triangle the right way up.

All this feels like a lot of overhead for a bit of hacking, but we tend to do things only once now rather than having to go back and change something because QA or PM aren’t happy. There are some UI state changes which are difficult to test comprehensively, so we have had things hit us that were unexpected, and we did have to change the design slightly in some cases to make it more robust and testable, but that still felt under control compared with how hard UI bugs can be to track down.

Whatevs. This is where I am on my journey now. I feel like I’m learning more stuff and that I am developing. At my age that is no small thing.

Rebootcamp

I have been saying a bunch of things, repeating what others say, mostly, but never actually internalised what they really meant. After a week with Fred George and Tom Scott I have seen the light in some way. I have seen proof of the efficacy of pair programming, I have seen the value of fast red-green-refactor cycles and most importantly I have learnt just how much I don’t know.

This was an Object Bootcamp developed by Fred George and Deliberate, and it basically consisted of problem solving in pairs, going over various patterns and OO design in general, pointing out code smells to look out for and how to refactor your way out of trouble. The course packed in as much as the team could take over the course of the week and is highly recommended. Our finest OO developers in the team still learned new things over the week, and the rest of us learned even more.

Where to go from here? I use this blog as a way to write down things I learn so I can reference it later. My fanbase tends to stick to my posts about NHibernate and ASP.NET MVC 3 or something from several years ago, so I need not worry about making things fresh and interesting for the readership. The general recommended reading list that came off of this week reads as follows:

So, GoF and Refactoring – no shockers, eh? We have them in our library and I’ve even read them, although I first read some other derivative book on design patterns back in the day, and obviously there are things that didn’t quite take the first time. I guess I was too young. Things make so much more sense now when you have a catalogue of past mistakes to cross-reference against various patterns.

The thing is, what I hadn’t internalised properly is how evil getters and setters are. I had some separation of concerns in terms of separating database classes from model classes, but still the classes didn’t instantiate good objects; they were basically just bags of data, and mediator classes held the business logic, messing with other classes’ data instead of proper objects churning cleanly.

Encapsulating information in the system is crucial. It is hard to do correctly, but by timeboxing the time from red to green you force yourself to build the next simplest clean thing before you continue. There is no time for gold plating, and boy do you notice when you veer off and try something clever, only to realise that you needed to stop and go back. Small changes. I have written this so many times before, but if you do it properly it really works. I have seen JB Rainsberger and Greg Young talk about this, and I have nodded and said sure. Testify! “That would be nice to get to do in practice” was my thinking. And then I added getters and setters to my classes. Or at least made them anemic by having a constructor with parameters and then getters, used by demigod classes. The time to make a change is yesterday, not tomorrow.

So, yes. Analysis Patterns is a hard read, said Fred. Well, then. It seems extremely interesting. I think Refactoring to Patterns will be the very next thing I read, but then I will need to take a stab at it.

I need to learn where patterns could get rid of code smells, increase encapsulation and reduce complexity.

There is a handy catalogue of refactorings that I already have a shortcut to in the Chrome toolbar. It gets a lot more clicks now. In general, though, I will not make any grand statements here, but rather come back with a post showing results.

Castle Windsor + NHibernate + MVC revisited

I have noticed some people come to this blog for reference on using Castle Windsor together with NHibernate and ASP.NET MVC, so I have to mention a few things that have happened since I last posted about it several years ago. First of all, there is an excellent sample application that shows the simplicity with which you can hook these things up.

The sample shows you how to register Castle in an MVC application, how to add logging and how to hook up NHibernate 3.2. This is far, far easier than what I described years ago, thanks to improvements in all the involved frameworks.

The important things to remember are:

  1. Use NuGet to install your libs
  2. Use MVC’s way of installing an IoC container (as demonstrated)
  3. Use logging, you might as well.
  4. Hook up the NHibernate SessionFactory as a singleton, and hook up ISession to request a new transient session from your singleton SessionFactory (a sketch of this follows after the list). There you go, the magic is taken care of.
  5. This is not mentioned in the demo, but if you are going to build production code, consider buying a subscription for NHProf if you don’t have one already. It will tell you where you mess up and need to do better.
  6. As above, look into hooking up Glimpse, see Scott Hanselman’s blog about it below.
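
A sketch of what item 4 can look like as a Windsor installer (this mirrors the approach in the linked sample rather than reproducing it exactly):

```csharp
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;
using NHibernate;
using NHibernate.Cfg;

public class PersistenceInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            // One session factory for the lifetime of the application...
            Component.For<ISessionFactory>()
                     .UsingFactoryMethod(_ => new Configuration().Configure().BuildSessionFactory())
                     .LifestyleSingleton(),

            // ...and a new transient ISession opened from that factory on demand.
            Component.For<ISession>()
                     .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
                     .LifestyleTransient());
    }
}
```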

Castle Windsor + NHibernate + log4net + ASP.NET MVC Sample

Windsor tutorial ASP.NET MVC 3-application To-be-Seen

Scott Hanselman on Glimpse:
If you’re not using Glimpse…

F# Actor: An Actor Library

F# Actor: An Actor Library.

This is a follow-up on my post on the Actor Model, as I was very generous in my definition of F# Agents as a full-fledged Actor framework. F#:er s have been wanting something like the Akka framework for Scala. Colin Bull from whom I also reblogged the F# and Raven post below, posted about a framework on his blog that will provide this, and it also explains what the difference is and what the requirements are for an Actor framework.