Falling with Style


I took Charlotte for a ride-along on a flying lesson for the first time on Sunday. I may have neglected to mention that the lesson was focused on learning how to recover from a stall. As you can see from the images below, she didn't find that entirely amusing.

Afterwards she complained about feeling a little unwell. Considering we fell ~300 feet in a few seconds when my instructor was demonstrating what a stall feels like, I can understand why. We then went through a few rounds of recovering from stalls just by pitching the aircraft's nose down, in which we fell around 250 feet, and a few rounds of recovering by both pitching the nose down and increasing engine power — using this method we fell only 50 feet.

The lesson this Sunday was quite memorable for me because it was also the first time I had control of the aircraft for the whole flight: from the walk-around checks through take-off, the lesson itself, landing and taxiing back to the parking area. I'll keep the blog up to date with my progress as I move further toward getting my PPL.


The Danny Test

The Joel Test is a very quick way of measuring the quality of a software engineering team by asking them 12 questions, which can only yield a yes or no answer. A score of 12 is great, 11 is tolerable and 10 or less is a fail. It’s harsh, but fair — a combination of two or more fails could result in larger problems.

When trying to size up a development team I usually ask them "Have you heard of The Joel Test?". If they've heard of it, I ask them if they know their score, and we then usually have a discussion about each question. If not, I introduce them to it and do the same thing. Anecdotally, I've found that being aware of The Joel Test increases the likelihood of a tolerable or passing score.

When Joel Spolsky, a former program manager on Microsoft Excel and CEO at Stack Exchange, first published the blog post outlining this test in 2000, questions such as "Do you make daily builds?" were probably a lot more relevant than they are now, in a world where a lot of development is for the web. A lot of modern web development doesn't require a build stage, and even when it does it's usually quick enough to take place on every commit rather than once a day. Whilst big software packages such as Microsoft Excel probably do still require daily builds, the vast majority of software teams I talk to aren't making things like that.

Because the teams I usually speak to are doing different things I end up altering some of the questions, and I thought it might be worth noting down my changes, even if I'm the only one who ever refers back to them.

  1. Do you use source control and follow a workflow?
  2. Are your test and local environments highly similar to your production environment?
  3. Do you use a continuous integration server to build and test every change (be that per branch or per commit)?
  4. Do you have a bug database?
  5. Do you fix bugs and make time for refactoring before writing new code?
  6. Are developers involved early on in design and product decisions?
  7. Do you have a roadmap?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools regardless of cost or license?
  10. Do you make testing everyone's concern?
  11. Do you use Code Reviews?
  12. Do new candidates write code in their interviews?
  13. Do developers have access to stats and metrics for the live product?

As you can see, I’ve got 13 questions rather than 12 — therefore a perfect score is 13; 12 is a tolerable score and anything less means you should be looking to make improvements.

1. Do you use source control and follow a workflow?

In the Joel Test, the question just asks if source control is used. I've not come across a commercial team — so far at least — that doesn't utilise source control. However, merely using git doesn't mean you're using it optimally. Following a known workflow such as GitFlow or the GitHub Flow makes it easier to maintain multiple versions of a product and to work on several new features at the same time.

2. Are your test and local environments highly similar to your production environment?

There have been a number of times when I've had a bug that only appears in production environments serving live traffic; this sucks because you can end up testing fixes in production and affecting your real users. Whilst having an identical system locally can be difficult — by design most large-scale web systems are distributed, so they couldn't be fully emulated on a single machine — it should be possible to have an environment that is highly similar at a component level. Tools like Docker make this easier.

3. Do you use a continuous integration server to build and test every change?

Joel speaks about utilising a daily build to catch mistakes programmers routinely make, such as not checking in a new file and thereby breaking the build. Shortening the feedback loop from once every 24 hours to a few minutes after every change, by following the principles of continuous integration, is a more modern approach. Travis CI, Jenkins and Codeship are popular tools for achieving this.

4. Do you have a bug database?

This question remains as relevant now as when Joel asked it back in 2000.

5. Do you fix bugs and make time for refactoring before writing new code?

Joel's test focused just on fixing bugs before writing new code. However, I think making time for refactoring and reducing technical debt in small amounts over a period of time is also crucial to the continued effectiveness of a team.

6. Are developers involved early on in design and product decisions?

I've experienced development teams where web developers were given a final pixel-perfect design by a graphic designer and asked to reproduce it — this rarely works well, because a single PSD rarely shows the complex interactions a user can have with, for example, a web page. Having developers collaborate with a graphic designer and other product stakeholders from early on in the process can result in better, more complete specifications and a better understanding of the business and user needs by the people developing the software — that can be no bad thing.

7. Do you have a roadmap?

Joel's question asked if the software team had access to an up-to-date schedule. Most teams I've worked with take a more agile approach to development and therefore don't have a schedule set in stone, or often at all. However, it is still important to know the direction the ship is sailing in. What projects are on your roadmap?

8. Do programmers have quiet working conditions?

This one is more important than people give it credit for. I like the approach to working conditions taken by Stack Exchange.

9. Do you use the best tools regardless of cost or license?

Joel's question originally asked if a company uses the best tools money can buy; however, in many cases now the best tools don't need to be bought — they're free as in beer, or open source. So I've simply clarified that in my version of the question.

10. Do you make testing everyone's concern?

Some developers like having a team whose sole purpose is to test other people's code, and Joel's original question marks you down if you don't have dedicated testers. However, I am of the opinion that testing should be every individual's concern. Given that code that is easy to test exhibits different attributes from code that is difficult to test, removing the responsibility for testing from developers increases the chance that they produce code the testers subsequently struggle with.

However, only having developers test their own code would result in poorly tested code, because the code would be tested with the same set of assumptions it was developed with.

Therefore, everyone needs to be concerned with testing and quality in general. Developers in the first instance, a second pair of eyes — be it a dedicated test engineer or another developer — and a product stakeholder should be the minimum set of people involved in the quality assurance of any change.
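
To make that concrete, here is a minimal sketch of the kind of developer-written test I would expect as the first line of defence. It is Mocha-style, uses Node's built-in assert, and the slugify() helper is hypothetical; nothing here is from a real codebase:

```javascript
// test/slugify.test.js, a Mocha-style sketch. The slugify() helper is
// hypothetical; the point is that the developer writing the feature also
// writes the first tests, before a second pair of eyes reviews them.
const assert = require('assert');
const slugify = require('../lib/slugify'); // hypothetical module

describe('slugify', () => {
  it('lowercases and hyphenates titles', () => {
    assert.strictEqual(slugify('Falling with Style'), 'falling-with-style');
  });

  it('strips characters that are unsafe in URLs', () => {
    assert.strictEqual(
      slugify('What are you doing to my phone?'),
      'what-are-you-doing-to-my-phone'
    );
  });
});
```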

11. Do you use Code Reviews?

Code reviews serve a few purposes:

1) They improve the quality of code by allowing for input and discussion with developers not initially involved in its development.
2) They help mistakes to be spotted before they cause any problems.
3) Code reviews are, in my opinion, the single best way to disseminate knowledge within a team — you can't have a part of the system only one person understands if other people have been involved in regular code reviews.

I personally like to have a code review as part of accepting any pull request in the GitHub flow, but this isn't necessary to pass the question. Reviews should, however, be regular.

12. Do new candidates write code in their interviews?

This question is unchanged from The Joel Test.

13. Do developers have access to stats and metrics for the live product?

It's important that developers can see how their work is performing in the real world, whether that be performance metrics (RAM usage, number of concurrent connections, etc.) or business statistics such as step conversion. Allowing the people who are building the product to see the results of their work means that they can tell whether they need to take a new approach or whether their work is paying dividends; this aids both motivation and early detection of possible problems. Note: having a business analyst or similar sitting between the developer and the metrics doesn't count; the developers should be able to access them directly, and in real time if possible.

Final Thoughts

This test has the same caveats as The Joel Test: you can score 0/13 and by some divine intervention have a team that is constantly delivering; conversely, you can ace the test and still be working in a dysfunctional way. And obviously you shouldn't be using this as a checklist to see if your team is capable of working on nuclear power plant control software.

However, what this test should do is let you know how much a team has thought about quality and developer experience, and open up a dialogue which allows you to investigate further their ideals around development.


What are you doing to my phone?

There has been a lot of discussion recently about the advantages and disadvantages of both native mobile applications — colloquially referred to as just “apps” — and Progressive Web Apps, which allow people to use website functionality whilst offline.

One of the big advantages of PWAs is that you don't have to install them; they reside in your browser's cache, no user interaction required. However, I think a bigger advantage in that area is that you don't have to update them, just refresh the page!
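
As a rough sketch of what's going on under the hood, a minimal service worker caches a page's assets so that a refresh simply picks up whatever the server is publishing now. The cache name and asset list below are made up for the example:

```javascript
// sw.js, a minimal service worker sketch. The cache name and asset list are
// illustrative, not taken from any particular app.
const CACHE_NAME = 'my-pwa-v1';
const ASSETS = ['/', '/styles.css', '/app.js'];

// On install, pre-cache the core assets so the app works offline.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
  );
});

// Serve from the cache first and fall back to the network, so "updating"
// the app is just publishing new files and refreshing the page.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```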

Updating apps is a chore — yes, it’s better than the experience on a desktop machine where every application has its own update daemon or requires complete re-installation — but it’s still not something that as a user I particularly look forward to doing.

It’s made all the worse when something like the below screenshot is what awaits me behind the App Store update badge.


Update, but I won’t tell you why!

You wouldn't let someone borrow your phone without them telling you why they want access to it (ringing international numbers or randomly deleting my contacts is a big no-no), so I don't see why we should allow developers who won't even tell us their intentions to install things on our devices.

Yes, you'll never truly know what's going on unless you have a copy of their source repository and a corresponding hash to test the binary you're going to install against (and even then, what if someone hacked their compiler… etc.), but it would at least be nice to know what, in their own words, they're intending to get your device to do. Especially since data caps and storage limitations are still a big deal on mobile.

Normally applications say something along the lines of “We update every week, make sure to keep updating us”, but I felt that Twitter really won the battle for most absurd update description this week:

Not too much has changed. But enough to warrant an update. Happy Tweeting!

Even if we ignore the fact that the first full-stop in that paragraph should be a comma, it still doesn't make much sense. What size does a change have to be in order to warrant sending a 78.4MB package to millions of users? How do they quantify "not much"? What are you doing to my phone?

Now, I'm not for a moment suggesting we get a list of git commits which have entered an app's master branch and display those to the user — the average user is certainly not technical enough to appreciate that. However, it would be nice to at least list new features or things that have been fixed.

Credit where credit is due, Spotify release fantastic update descriptions for their applications, where they use this exact approach. New features are highlighted, followed by bugs which have been resolved and finally, to make it a little bit fun, they add a description of a fictitious improvement such as "This app is now available in three new fruit flavours. (Berry Surprise is still quite buggy.)". That bit of humour makes people more likely to check out app updates and question what is being run on their device — which I think is no bad thing.

So developers, please write better update descriptions. It’s exciting to release new features or fix a bug that has been haunting someone for a week — let the people know you’ve done just that!


Pilot’s Flying Logbook

AFE Pilot's Flying Logbook

I bought myself a Pilot's Flying Logbook on Sunday after an interesting lesson in which I learnt more about Straight and Level Flight, saw two de Havilland Tiger Moth biplanes, did my first take-off and experienced my first go-around (an aborted landing due to a plane taxiing onto the runway we were about to land on).

It has three entries so far; I look forward to adding a lot more in the next few months whilst the weather is so nice.



Open Source JavaScript talk at JavaScript Cambridge

Presenting at JavaScript Cambridge

I presented a talk about my experiences developing Open Source JavaScript applications for CS Blogs today at the JavaScript Cambridge user group.

I covered:

  • how your intended developer audience should affect your technical decisions
  • how a good application doesn’t always become a good open source project
  • how to structure an application or system to make it easy to contribute to
  • how the CS Blogs workflow fits together

After having received some feedback from Charlotte, who watched the whole thing, I intend to make some improvements to the structure of the presentation and submit it to some other conferences.

I’m not sure how much sense the slides will make without me talking around them, but they’re available here.



There are a few things that any seasoned Software Engineer will have had arguments (sorry, discussions) about: Windows vs Linux, Merge vs Rebase and, inevitably, code indentation style.

Just today Rob and I discussed whether we should diverge from the "One True Brace Style" (1TBS) decreed by the Airbnb JavaScript Style Guide toward the Stroustrup style of indentation. The only difference? Stroustrup does not use a "cuddled else"; instead, else keywords go on their own line.
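
For anyone who hasn't run into the two styles before, here is the entire disagreement in a handful of lines (the identifiers are made up purely for illustration):

```javascript
// 1TBS: the "cuddled else" sits on the same line as the closing brace.
if (user.isAdmin) {
  showDashboard();
} else {
  showLogin();
}

// Stroustrup: identical in every other respect, but else gets its own line.
if (user.isAdmin) {
  showDashboard();
}
else {
  showLogin();
}
```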

Does such a minor difference matter? I would argue it does. If being able to read code in a certain style increases a programmer's productivity then that is no bad thing. However, this increase in productivity can easily be offset by having to change style when working in different codebases. Consistency is important.

To maintain consistency in the CS Blogs codebase every component would have to be updated. This would mean hundreds of lines changing purely for style, reducing the effectiveness of git blame and muddying the commit history. Even if we were willing to do that, eslint-config-airbnb was downloaded 399,657 times in the last month, and I would wager most of the projects using it are sticking with the suggested 1TBS style. The advantage of having code that looks like the "standard" for an open source project is that it enables potential contributors to get involved that bit more easily.

My theory about code style guidelines is that in a team of n people, n-1 of them will be unhappy with at least part of the guidelines. The only person who will be completely happy with them is the person who decided upon the rules. Programming is merely transcribing processes and thoughts into a language a computer can understand, and in that sense it is very personal; everyone is therefore likely to have strong feelings about how those thoughts look on screen.

As with so many things in Software Engineering, in many ways the style you choose doesn't matter, but sticking to it and enforcing consistency does. This is why I am against changing the CS Blogs codebase even though I agree with Rob that the Stroustrup style is nicer on the eye.

So, what can Rob do in this situation? The first option would be to just keep writing in the 1TBS style until it feels natural (this took me a few days of writing). However, he could also use an automated code formatter to change how his local code looks, and then have it automatically changed back to the prescribed style before any commit to version control. Any mistakes by the automated code formatter would be caught by the ESLint commit hook.
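
For reference, this is roughly what the choice looks like in an ESLint config. A sketch only; the actual CS Blogs configuration may differ:

```javascript
// .eslintrc.js, illustrative only; the real CS Blogs config may differ.
module.exports = {
  extends: 'airbnb',
  rules: {
    // eslint-config-airbnb effectively gives you 1TBS. Rob's preference
    // would be a single rule override, which we have chosen not to make:
    // 'brace-style': ['error', 'stroustrup'],
  },
};
```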


Re-architecting CS Blogs

Where are we now?

As I mentioned in my previous post the current CS Blogs system grew out of a prototype. This meant that the requirements of the system were discovered in parallel with designing and implementing the system, resulting in the slightly weird architecture shown below.

Old CSBlogs Architecture

I say it's weird because the `web-app` component isn't really just a web application — it's also an API server for the Android application (and in theory any other app) and includes all the business logic of the system.

The decision to use MongoDB was born partly out of the desire to be "JavaScript all the way down" and partly out of the desire to be using what was cool at the time. Unfortunately, at the time of building the system MongoDB wasn't supported as a SaaS offering on Microsoft Azure — where CS Blogs is currently hosted — so the database was hosted on mLab, making database calls more expensive in terms of networking time than necessary.

The `feed-aggregator` is a small Node.js application run as an Azure WebJob. It was hacked together in a few days and really only supports certain RSS and ATOM feeds. For example, it works great for ATOM feeds using <description> tags, but not ones which use <content> tags. These oversights were made because the software wasn't developed against much real data, essentially only my own feed, and because of the homogeneous nature of our users' blogs — they're mainly all Blogger or WordPress.com.
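
Conceptually the fix in the rewrite is a fallback across the various tags that different blogging platforms emit. A hedged sketch, with the item field names assumed rather than taken from the real `feed-downloader`:

```javascript
// Illustrative only: pick the first body-like field a parsed feed item
// provides, since Blogger, WordPress, Ghost and Jekyll don't agree on tags.
// The property names here are assumptions, not the real feed-downloader code.
function extractBody(item) {
  return item.content
      || item['content:encoded']
      || item.description
      || item.summary
      || '';
}
```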

Despite the obvious and numerous flaws of the system it has worked well for the past year or so. However, when I wanted to add the concept of organisations to the system — a way of seeing blogs only written by people at a certain company or university — I found the system to be a hodge-podge of technical debt, to the point where adding new features was going to take longer than developing a good, modular, expandable system. It was time to pay down the technical debt.


The first thing to do was to determine which parts of the old system were good — and try to ensure that these positive things didn't regress in the new system — which things were in need of improvement, and what new features we should add in at the same time.

Fortunately CS Blogs does do a number of things well:

  • Short lead time — New posts appear in the system within 15 minutes
  • Good Web App — The front end works well both on desktop and on mobile and is very performant due to its lack of scripts. The work Rob did on the styling makes it a joy to use
  • Good Authentication — Users enjoy being able to use GitHub, Stack Exchange or WordPress to sign in, and I enjoy not having to look after their passwords

A few things it could improve on are:

  • Support for a larger range of RSS and ATOM feeds — ATOM support in particular isn't great in the current system
  • A lot of functionality only works in the web app — any method which requires authentication, such as signing up to the system, isn't available through the API
  • Feed aggregation downloads every author's feed every 15 minutes; this is a lot of data and wouldn't scale economically to hundreds of users
  • Code maintainability is poor due to a complete lack of automated testing and linting

The additional user-facing features I want to implement are:

  • Notifications of new blog posts for CS Blogs applications on Android/iOS
  • Support for the aforementioned organisations feature

Designing a Distributed System

The system you can see in the diagram below was designed with the intention of fulfilling the requirements which I outlined above. You'll notice the use of Amazon Web Services icons, as I have recently switched hosting from Azure to AWS. There are enough reasons for this decision to warrant its own blog post, so I won't go into detail here.


The new CS Blogs Architecture

In the new system all applications are treated as first-class citizens, meaning there is nothing that the web application can do that any other application can't. This is achieved by having all of the business logic, authentication and database interaction handled by the `api-server` — which is accessible by anything that can make HTTPS requests and handle JSON responses.
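
In practice that means every client consumes the same endpoints. A hypothetical example in Node.js (the endpoint URL below is illustrative, not a documented CS Blogs route):

```javascript
// Any client that can make HTTPS requests and parse JSON is a first-class
// citizen. The endpoint below is illustrative, not a documented route.
const fetch = require('node-fetch');

function latestPosts() {
  return fetch('https://api.csblogs.example.com/v1/posts')
    .then((response) => {
      if (!response.ok) {
        throw new Error(`API error: ${response.status}`);
      }
      return response.json(); // the same JSON the web, Android and iOS apps use
    });
}
```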

This means that the mobile applications will be able to perform actions such as registering a user and editing their details, which they cannot under the current system. Another benefit to the mobile applications that isn't shown on this diagram is that the `feed-downloader` calls Amazon SNS with information about how many new blog posts it has found every time it runs; this is in turn relayed to the mobile applications in the form of notifications.

Whereas the old system used MongoDB, I've opted to use PostgreSQL — via the Sequelize Node.js ORM — this time around. Some of the features I want to implement in the future, such as organisations, make more sense as relations rather than documents in my mind, and the ecosystem of applications for interacting with SQL databases, and in particular PostgreSQL, is much more mature than MongoDB's.
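
As a rough illustration of why relations feel like a better fit, here is a hypothetical Sequelize sketch; the model names and fields are assumptions, not the actual CS Blogs schema:

```javascript
// Hypothetical Sequelize models, not the actual CS Blogs schema.
const Sequelize = require('sequelize');
const sequelize = new Sequelize(process.env.DATABASE_URL, { dialect: 'postgres' });

const Author = sequelize.define('author', {
  displayName: Sequelize.STRING,
  feedUrl: Sequelize.STRING,
});

const Organisation = sequelize.define('organisation', {
  name: Sequelize.STRING,
});

// "Blogs written by people at a certain company or university" becomes a
// straightforward join rather than a fan-out over documents.
Organisation.hasMany(Author);
Author.belongsTo(Organisation);
```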

The `feed-downloader` is portable, but contains an entry point so that it can be used as an infrastructureless AWS Lambda function (and I suppose this entry point would also work for the newly released Azure Functions system). It's a bit more clever than the old `feed-aggregator` in that it uses If-Modified-Since HTTP requests to only download and parse RSS or ATOM feeds that purport to have changed since the last time an aggregation was run.
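
A hedged sketch of the conditional-request idea follows; the real `feed-downloader` internals will differ and the helper below is purely illustrative:

```javascript
// Illustrative only: skip feeds that report no changes since the last run.
const request = require('request');

function downloadFeedIfChanged(feedUrl, lastRunDate, callback) {
  const options = {
    url: feedUrl,
    headers: { 'If-Modified-Since': lastRunDate.toUTCString() },
  };

  request(options, (error, response, body) => {
    if (error) return callback(error);
    if (response.statusCode === 304) return callback(null, null); // nothing new
    return callback(null, body); // feed XML to parse for new posts
  });
}
```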


The implementation of the `feed-downloader`, `api-server` and `web-app` components follows my guide to writing better quality Node.js applications. Node.js was chosen due to its abundance of good quality libraries, its ease of interaction with JSON objects and the author's familiarity with it in production scenarios.

ES2015 JavaScript features including the module system, string interpolation and destructuring are used throughout to aid readability of the system — therefore Babel is required for transpilation.
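
A small made-up example of the kind of code this enables (the imported module is hypothetical):

```javascript
// Made-up example of the ES2015 features mentioned above: modules,
// destructuring and template strings (transpiled by Babel for Node).
import { getAuthor } from './authors'; // hypothetical module

export function describeAuthor(id) {
  const { displayName, feedUrl } = getAuthor(id); // destructuring
  return `${displayName} is aggregated from ${feedUrl}`; // string interpolation
}
```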

Just some of the feed-downloader tests

In order to meet the requirement of good maintainability the `feed-downloader` was built using the test-driven development methodology and currently has 99% test coverage. These tests use real data, feeds from actual CS Blogs authors, including feeds from Blogger, WordPress.com, WordPress.org, Ghost and Jekyll.

There's still a lot to be done before the new CS Blogs can be released, so why not hit up the contribution guide and get involved?

