
Console setup for PowerShell

After not really seeing the value of console replacements in Windows I have completely come around.

To make life more enjoyable when developing on a Windows 10 machine, install the following things:

  • Install Sublime
  • Install cmder
  • Enable Windows Subsystem for Linux
  • Choose a WSL-compatible distro; I went with Ubuntu
  • Install MobaXterm X Server
  • Clone powerline patched fonts
  • Clone the Hub CLI to get better GitHub tools for the command line
  • On WSL:
    • Install zsh
    • Install the above powerline fonts
    • Config dircolors
    • Install oh-my-zsh
    • Set agnoster theme
  • In PowerShell: install the posh-git and DirColors modules and set up the profile (see below)
  • Optionally install i3-wm and terminator under X in WSL. Not super practical but cool.

I have previously had problems with cmder acting strangely when using Bash for Windows, but with this setup the experience is seamless. Cmder also provides excellent autocorrect for PowerShell.

If necessary, install a separate Sublime under WSL, but normally, editing files in Windows while running for instance yarn under WSL by accessing the files under /mnt/c/ works perfectly well.

The PowerShell setup requires some modifications to the profile to load posh-git and DirColors, set themes and aliases, et cetera.
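A minimal sketch of such a profile, assuming the posh-git and DirColors modules are installed (Install-Module posh-git, Install-Module DirColors); the aliases are just examples of my own:

# In $PROFILE, e.g. Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
Import-Module posh-git    # git-aware prompt
Import-Module DirColors   # ls-style colouring for Get-ChildItem
Set-Alias g git
Set-Alias which Get-Command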


F# vs C#. Again

Loads of stuff has happened since my last post. I changed jobs. Doing a lot more .NET Core now. Better commute. Still working with really clever people. Anyway – that isn’t the point. 

Build 2017 happened. ASP.NET Core 2.0 happened. Apart from pointless Cortana and Azure debugging features inaccessible to professional developers (as you’d need to deploy directly from VS like a cowboy) – the ASP.NET Core 2.0 changes are sound. 
With ASP.NET Core 1.x they tried to break apart the framework to deliver only the bits you need. That led to amazing amounts of boilerplate code in Startup.cs and made it difficult for people to discover which features were available, due to the various references you might need to add first. Now you have to install the entire framework on the server, but allegedly .NET will remove unused DLLs when packaging for self-hosting.

This, however, isn’t the point either.

One Twitter flamewar that blew up during Build was that you can't target this preview of ASP.NET Core to run on full-fat .NET Framework 4.6.2, which was initially communicated as being On Purpose: move to Core or stay on old ASP.NET. This, of course, alienated the majority of the .NET community, whose legacy code and legacy licensing pay for all this innovation, and they were mightily upset. Within several hours a new story emerged stating that the incompatibility is temporary. The core developers, ahem, in the ASP.NET Core team are unrepentant, however.

The next flamewar had to do with F#. This is actually the point. Mark Rendle said he preferred C# to F#, which caused outrage, and he didn't back down when all the heavy-hitter F# evangelists came out to explain that F# can do anything and isn't fringe at all.

They are both right, and the problem still is that despite the big words, Microsoft won’t spend the £50 it would cost to have template parity between F# and C#.

Anyway – Build 2017 also featured Mads Torgersen weighing in on F#.

Retention policy and crypto

I just deleted an old post because, re-reading it, I found I was attempting my own crypto on config files instead of using DPAPI. I reserve the right to delete old posts if they turn out to be complete bollocks.

Just to show how you encrypt an app config file on a machine where you do not have IIS installed and cannot use the traditional aspnet_regiis command-line tool that your first googlebing will tell you all about – I give you the below piece of code.

Note that you need to encrypt the file on the target machine, as DPAPI is machine-specific, so there will be a brief moment when the file is on disk in clear text. That is a basic flaw of the entire DPAPI concept, but at least you are not rolling your own crypto.

using System;
using System.Configuration;   // requires a reference to System.Configuration.dll
using System.Diagnostics;
using System.IO;

static int Main(string[] args)
{
    if (args.Length != 1)
    {
        Console.Error.WriteLine("Wrong number of arguments.\r\n{0} <configfile_to_encrypt>", GetExeName());
        return 1;
    }

    return EncryptAppSettings(args[0]);
}

private static int EncryptAppSettings(string pathToFile)
{
    if (!File.Exists(pathToFile))
        return LogFatalError(string.Format("Executable {0} not found", pathToFile), 2);
    if (!File.Exists(pathToFile + ".config"))
        return LogFatalError(string.Format("Config file {0}.config not found", pathToFile), 3);

    // Open the config that belongs to the executable and encrypt its appSettings
    // section with DPAPI (the DataProtectionConfigurationProvider).
    var configuration = ConfigurationManager.OpenExeConfiguration(pathToFile);
    var appSettings = (AppSettingsSection)configuration.GetSection("appSettings");
    appSettings.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
    appSettings.SectionInformation.ForceSave = true;
    configuration.Save();
    return 0;
}

private static int LogFatalError(string message, int exitCode)
{
    Console.Error.WriteLine("{0} failed: {1}", GetExeName(), message);
    return exitCode;
}

private static string GetExeName()
{
    return Process.GetCurrentProcess().ProcessName;
}


More about configs while developing

In my previous post I discussed what you can and shall automate in terms of your development environment, and I speculated in one place about what one ought to do with the web.config file in Visual Studio. I have made a script that uses my local chef repo to generate the configuration files using chef-zero, and clearly that works. It is a bit messy, but it works. This was where I was when I wrote the last blog post containing my speculation.

Getting further, to an actually workable solution, I have run into what seems like a brick wall. The complexity of using chef-zero calls for some refactoring, but the biggest problem is maintaining the templates. NuGet keeps editing binding redirects in web.config, and it would be nice if those changes could be applied to the erb files. Since this is how ASP.NET works, the source web.config combined with per-environment transforms is highly favoured, because it has the lowest maintenance burden. There are a number of plugins that support this workflow, even for EXE projects and other things.

My problem with those is that they violate a basic principle: keeping configuration away from the source repository. Also, I'm happy with chef and I don't want to replace its templating engine with web.config transforms, so adhering to the concept of running the same setup in development as in production is difficult unless I can find a way to dynamically update the erb file.


Passwords. For non-techies

Passwords are a – for the most part – necessary evil in our connected lives. There has to be a way to know that you are you in a transaction, and a piece of information that only you and your counterpart know about seems perfect for that.

You have indubitably been told many times you should never reuse passwords and you should get a password manager. Guess what – that is correct. When you get one, and it proceeds to chastise you about your instances of password reuse and you use the magnificent automated features to reset passwords, you will notice that many sites not only have minimum requirements for length and complexity – they also have maximum limits. This is where alarm bells should ring in your mind.

Passwords shall only be stored as hashes (what people used to call checksums back in my day) because it is mathematically infeasible to deduce the original from the hash. This is not enough to be safe, of course, as evildoers quickly figured out that they could hash (using all major algorithms) enormous dictionaries and lists of popular passwords, and then had prefab hashes to compare to the ones they found in hacked databases. You need to hash several times over and apply salt to the hash and so on, in processes that are better described elsewhere, but the point is: it starts out with a checksum. A checksum is always of constant length. So it doesn't matter if you submit the Holy Bible as a password, the hash will be the same size.
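To see the constant-length property for yourself, here is a quick PowerShell illustration; SHA-256 is used purely as an example of a hash, and real password storage should use a slow, salted scheme as mentioned above:

$sha = [System.Security.Cryptography.SHA256]::Create()
# Hash a short password and an absurdly long one; the digests are the same size.
foreach ($pw in 'hunter2', ('correcthorsebatterystaple' * 100)) {
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($pw)
    $hash  = $sha.ComputeHash($bytes)
    '{0} bytes in -> {1} bytes out' -f $bytes.Length, $hash.Length
}
$sha.Dispose()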

This means the only reason people put a size limitation on the password is that they are storing it in clear text in the database, or otherwise processing it in clear text.

When you complain about it to customer services they will say one of these things:

  1. We are storing passwords encrypted securely in accordance with industry standards
  2. You don’t understand these things – clearly you must be new to computers.
  3. We need to limit password length for security reasons
  4. We take the security of your data very seriously.

All of that is bollocks. The only reason, other than storing data in plain text (or storing it encrypted so that it can be easily displayed in the admin interface, which is still also VERY BAD, see below for further rant), could possibly be a misguided attempt at usability, where the UI might limit the text field length to prevent the user from getting lost. But I would still say it is 99.999% likely that it is bollocks and they are showing your password to everybody. At least that is the safest assumption.

So they know my password – what is the big deal?

OK, so why does that matter? Well, it probably means that tech support or customer service people can see your password, so you can't use the one that is basically a bunch of rude words strung together, or it is going to be awkward on the phone.

Also, it means that if they have one mischievous intern or employee that copies some passwords from the system, they can really do harm to any site which you STUPIDLY used the same password for. Also, if somebody manages to steal one login to the administrative interface they have your password. In a secure system, that level of data breach should take a little more effort.

I have read somewhere that by far the biggest problem with weak password security is password reuse, and you can end that today, for yourself, by using a password manager of some kind. Preferably a good one, but I'm no security person so I couldn't tell you which one to go with. There is, though, a strong eggs-in-one-basket aspect to this, but given the scale of breaches I don't see any other way forward.

Complexity

There are plenty of password jokes out there, like the one with “User enters password ‘docGrumpyHappySleepyBashfulSneezyDopeyAlbany’, requirement was seven characters and a capital” or the classic XKCD password complexity suggestion correcthorsebatterystaple as an example of a secure password that was also easy to remember.

Since the early 2000s, Windows servers have been pestering users to use passwords with capitals, numbers and special characters and a minimum password length of 8, and also to change passwords continuously.

A few things have happened: graphics hardware and research. Graphics hardware has made it possible for somebody ambitious to buy (or steal, yes) a bunch of graphics cards and use the phenomenal 3D processors to brute-force compare password hashes (yes, even correctly stored passwords) with hashes of dictionary words, popular passwords and common v@r1ati0ns on them. The rate at which these comparisons can be done is only slowed down slightly if you use a very complex hash, but even with that, the enormous compute power these people can muster makes short passwords trivial to reveal. Yes, that includes correcthorsebatterystaple. To refer back to the case where reversible encryption is used to store passwords so that customer service/tech support can read them: yes, those encryptions are extremely easy to decrypt with this kind of firepower. Essentially, there is no difference between an encrypted password and a plain text password in terms of security.

The best passwords are the auto-generated ones from a password manager that also lets you vary the length, where you might as well crank it up as far as it'll go. Of course, you'll quickly find that many shady websites won't let you use a secure password.

Also, regarding changing passwords: the more esoteric password complexity requirements are, and the more you make people change passwords, the worse their passwords get, because, although I'm sure this surprises security people, normal people have to work too; they can't spend all their time fiddling with passwords. Of course, cheating by using a format that gets incremented as the system makes you change passwords isn't exactly more secure. In fact, a better solution may even be writing the password on a post-it, unless of course you broadcast video from your office and people deface your Twitter because they see your password in the background. Of course, again the answer is a password manager. And the answer may also be to give people more time with the same password, but bring more draconian complexity requirements.

But I thought we were all post-password now… I have an iPhone

Good for you. Remember that it is hard to reset your fingerprints. The consumer-grade fingerprint readers are far from fool-proof, and once somebody has a decent copy of your prints they have your things. We will see how things develop as biometric identification gains more traction, but security people are already concerned about a case where a woman was made to unlock her iPhone because she had TouchID. The coercion did not count as violating the right against self-incrimination the way forcing somebody to divulge a password would have. I'm sure further issues will come up. The general advice seems to be: use biometric data for identification, possibly, but rely on alternative means for authorisation.

Anyway – the point is, get a password manager if you don't have one, and don't forget to name and shame those that limit password length or – heaven forbid – email passwords in clear text. Do disable auto-fill, the feature where the password manager automatically adds username and password on a login page as it loads. There are often exploits that trick password managers into automatically giving up usernames and passwords by luring the user to evil websites. These vulnerabilities get patched, and new ones get discovered. Just disable auto-fill and you will be OK.

Troy Hunt on passwords and complexity, and how to build a good password reset feature.

More LXSS

Regarding the previous post about running Linux processes on Windows, I have noticed a few common questions and the corresponding answers at various places on the Internet.

The process and fork

The process that runs the Linux binary is a Pico process that comes with a minimal ABI and has all syscalls filtered through a library OS. For the Linux subsystem they have added real fork() semantics to the Pico process, the absence of which has long been a popular gripe among those in the Linux community trying to port software to Windows. Although the normal case when creating a process is fork() followed by lots of cleanup to create the pristine child process you always wanted, in the 1% of cases where you actually want to create a verbatim copy of the current process, including open files and memory extents et cetera, doing so in Windows was very complicated and extremely slow. This is now a native feature of the Pico process.

Security

The security incompatibility between the faux Linux universe and the Windows universe, or rather the fact that Linux is unaware of the Windows security settings and unable to affect them in any way, has been raised, and allegedly improvements are on their way.


Microsoft-bashing

After my previous post, Microsoft and Canonical reacted almost immediately – ahem – and at the //BUILD/ conference the Windows Subsystem for Linux and the ability to run the Ubuntu userspace under Windows were presented. If you have the latest fast-ring insider build of Windows 10, you can just set your Windows installation into Developer mode, search for Windows Features in Cortana and activate Windows Subsystem for Linux (Beta), reboot, and then type bash at the Windows command prompt. You will be asked to confirm, and then Ubuntu 14.04 will be downloaded, and off you go.
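If you prefer PowerShell to clicking around in Windows Features, enabling the subsystem from an elevated prompt should look something like this (the feature name matches the current beta and may well change):

# Run from an elevated PowerShell prompt on a fast-ring insider build.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
# Reboot if prompted, then start the Ubuntu userspace:
bash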

What is happening? A few things. Instead of the NT subsystem, NTDLL.dll, being loaded, a faux Linux kernel is presented to the process, allowing Linux userland binaries to be executed natively in the context of the current user. The abstraction is leaky at best right now, and as usual Microsoft aren't even aiming to make this reimagining of the Linux kernel completely exact, so there will always be userland executables that will not work with this subsystem. Their goal is to make the most common dev things work out of the box, to make Windows less frustrating for developers to work on. You would be doing people a favour by trying this out and complaining loudly about everything that doesn't work for you.

I just now did Win+R, typed bash and then vim. In SysInternals Process Explorer I see init -> bash -> vim in the process hierarchy. It is completely integrated in Windows; only the filesystem you see from Linux has been organised by Canonical to make sense. Since the context of the WSL is per-user, the Linux filesystem is actually located in the user's profile folder, while all drives on the system are mounted under /mnt/c, /mnt/d et cetera. Because of this you can access “Linux” files from Windows and Windows files from “Linux” in a way that sort of makes sense. I mean, I’ve seen far worse.

I must say I didn’t think Microsoft would take this route, so kudos. What I still feel the need to complain about is that the security situation is complicated with the Windows user always being root in his own faux Linux universe. Server deployment could be magical if there was a way to have a global Linux context and have root access hidden behind elevation or sudo, but developing such a joint security model would be enormously complicated. Running X in Windows, that would be the next dream.

Simple vs “Simple”

F# has two key features that make the code very compact: significant whitespace and forced compilation order.

Significant whitespace removes the need for curly braces or keywords such as begin/end. Forced compile order means your dependencies have to be declared in code files above yours in the project declaration. This gives your F# projects a natural reading order that transcends individual style.

Now there is a UserVoice suggestion that the enforced compile order be removed.

I think this is a good idea. I am against project files. As soon as you have three or more developers working in the same group of files, any one file used to maintain state is bound to become a source of merge conflicts and strife. Just look at C#.

I am sure your IDE could evaluate the dependency order of your files and present them in that order for you; heck, one could probably make a CLI tool to show that same information if the navigational benefits of the current order are what is holding you back. Let us break out of IDE-centric languages and allow programs to be defined in code rather than config.

PowerShell woes

What is the point of PowerShell?

Trying to orchestrate and automate distributed tests with PowerShell leads to things such as using remote PowerShell or psexec to start tests on VMs and using PSDrives to set up tests and collect logs.

So then you would like some calls to be reliable. Starting WinRM on a remote box, setting up a PSDrive to get fresh executables onto VMs, or collecting logs needs to be reliable.

With a 10% success rate out of the box you wonder how people ever use this stuff, and the truth is of course people don't: they write reliability layers around it, with retries and exception handlers, just for the happy-path nothing-important-has-failed scenario.

Had MS spent the extra five minutes making the built-in code less unreliable and weak, and perhaps only returning a failure once every means of recovering the situation had been exhausted, it would have spared every single PowerShell user the work of writing their own reliability layer just to get the basics working.
Case in point: New-PSSession. There is no way in the known universe that should be allowed to fail. OK, if there are HTTPS certificate problems or network connectivity problems, say so, but other than that I deserve an apology – eloquent and part of the return object. Problem is, 90% of the time New-PSSession will fail, for no reason. No reason whatsoever. The kind of reliability layer everybody ends up writing is sketched below.
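A minimal sketch of such a layer, wrapping New-PSSession in retries; the function name and parameters here are my own invention, not a standard module:

# Hypothetical retry wrapper around New-PSSession.
function New-ReliablePSSession {
    param(
        [Parameter(Mandatory)] [string] $ComputerName,
        [int] $MaxAttempts = 5,
        [int] $DelaySeconds = 10
    )
    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        try {
            # -ErrorAction Stop turns non-terminating errors into catchable exceptions.
            return New-PSSession -ComputerName $ComputerName -ErrorAction Stop
        }
        catch {
            Write-Warning "New-PSSession to $ComputerName failed (attempt $attempt of $MaxAttempts): $_"
            if ($attempt -eq $MaxAttempts) { throw }
            Start-Sleep -Seconds $DelaySeconds
        }
    }
}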
I mean, I know the test triangle and all that, but for that tip of the triangle I don't need any help from MS to make the system tests EVEN MORE EXPENSIVE to make and maintain.

PowerShell

I am extending some tests we are running in PowerShell. They are used to validate the chef configs we are setting up, checking that all the moving parts are running.

One thing I tried to google that wasn't perfectly clear to a n00b like me was how to use static .NET objects in PowerShell, specifically how to take a byte array and interpret it as a UTF-8 string.

Apparently this is how you do it:

# Grab the static UTF8 encoding object and use it to decode the byte array
$enc = [System.Text.Encoding]::UTF8
$c = $enc.GetString($result.Content)

Where the $result came out of a WebRequest. This means I can test that $c contains an expected value and fail the script, and thus the build, if the result is unexpected.
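Put together, a minimal end-to-end check might look like this; the URL and the expected value are placeholders of mine, and note that Invoke-WebRequest only exposes Content as a byte array for non-text responses, which is when the decoding step is needed:

# Hypothetical endpoint and expected value; substitute your own.
$result = Invoke-WebRequest -Uri 'http://localhost:8080/health' -UseBasicParsing
$enc = [System.Text.Encoding]::UTF8
$c = $enc.GetString($result.Content)
if ($c -notmatch 'OK') {
    Write-Error "Unexpected response: $c"
    exit 1    # non-zero exit code fails the script, and thus the build
}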