Chris James

Developer and other things

Improving the quality and performance of your code, guided by the tooling in Go

Published on 04 April 2015

What’s rad about writing Go is how well the tooling caters to everyday software development concerns.

I recently did a lunch and learn at Springer on why I think Go is great, with a case study showing how Go makes it easy to create consistent-looking, performant and reliable code.

Goal

We are going to write a function which calls two APIs, one is the “Hello” service and the other is a “World” service. Our code will stitch the results together to return the string “Hello, World”.

Iteration 1

To start off we’ll just get the signature of our function together so we can write a failing test against it.
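The stub itself isn't reproduced here, but judging by the lint output further down, it might have looked something like this sketch (`Stitcher` and `hello_url` are taken from the lint messages; everything else is an assumption):

```go
package main

// A first pass at the stub: it compiles, but golint will have
// opinions about the missing comment and the underscored names.
func Stitcher(hello_url string, world_url string) string {
	return ""
}

func main() {}
```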

Before we get too excited though, let’s run golint. Golint is a tool which points out style inconsistencies in your code. One of the key goals for the language is that all code should feel recognisable and consistent, and as we can see, I have already made some mistakes.

stitcher.go:7:1: exported function Stitcher should have comment or be unexported
stitcher.go:7:15: don't use underscores in Go names; func parameter hello_url should be helloURL

Whether you think comments are good or not, what’s nice about this message is that it makes you think about the public surface of your library. Does every function need to be public? Probably not. However if you do cave in to the linter’s demands, the GoDoc tool will make some lovely documentation for your code; so it’s worth the effort.

Admittedly this is a small improvement in this context, but in larger code bases consistent naming and well-documented public functions will help you out. Golint, like all the Go tools, is very fast, so there’s little reason not to add it to your build scripts.
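With the linter placated, the stub might read something like this (again a sketch, following the linter's suggested names):

```go
package main

// Stitcher calls the hello and world services and joins their
// responses into a single greeting. Still a stub for now.
func Stitcher(helloURL, worldURL string) string {
	return ""
}

func main() {}
```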

Now that our stub code is linted we can write a test.

Calling APIs and stitching new functionality together is a very common task for many developers, and Go makes testing these operations a breeze, all within the standard library. This test fails, so now we can fill out our function with some real code that we can be reasonably confident will work and that is safe to refactor.
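The original test isn't shown here, but with the standard library's net/http/httptest package it might look roughly like this (the stub is repeated so the example stands alone; since it still returns the empty string, the test fails as described):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"
)

// Stitcher is still a stub, so the test below fails.
func Stitcher(helloURL, worldURL string) string {
	return ""
}

// fakeServer spins up an in-process HTTP server that always
// responds with the given body.
func fakeServer(body string) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, body)
	}))
}

func TestItStitchesTheTwoResponsesTogether(t *testing.T) {
	hello := fakeServer("Hello")
	defer hello.Close()
	world := fakeServer("World")
	defer world.Close()

	got := Stitcher(hello.URL, world.URL)
	if got != "Hello, World" {
		t.Errorf("got %q, want %q", got, "Hello, World")
	}
}
```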

We know we haven’t finished yet as there is some naivety to the code in a number of places, most especially error handling, but we will get on to that later. For now we can celebrate the awesomeness of our first pass.

Iteration 2 - Performance

The cowboys we outsourced the two APIs to have said that the APIs cannot return responses any quicker than half a second. Since we call both APIs one after the other, our awesome function will take at least a second to complete.

Intuitively we know that we could do the HTTP calls concurrently which might halve that time.

We shouldn’t just dive straight in and use Go’s concurrency tools, because, as with all things, we need to validate our assumptions; otherwise we could be wasting our time. We must be able to prove that the code we write adds value, in a repeatable way.

Let’s use Go’s benchmarking tools to simulate this and then fix the problem.

The benchmark tool runs the loop b.N times until it thinks it has a consistent benchmark time. Running the tool confirmed our assumption: the function takes ~1 second.

This code has successfully halved the execution time; the benchmark proves that’s the case, and the test we had earlier proves it still works as intended.

That said, the code is starting to feel a little unwieldy; there is a lot of repetition, so let’s use the safety net of our tests to refactor the code a bit before moving on.

Iteration 3 - Error handling

Now that we have a performant happy path, we should tackle the fact that HTTP calls can and will fail, so let’s write a new test to simulate this scenario and help us make the code more robust.

The product owners are a fairly vindictive bunch and have told us that if either API call fails we are to default to the string “ALL OTHER SOFTWARE TEAMS ARE USELESS, OUTSOURCING IS A SHAM”.

Here is our test simulating the hello API returning a 500, which results in our code panicking.

We have a test which proves this problem so we can fix the code and be confident that this particular bug will not come up when we go to production.

This code makes our new test pass, hooray! Some Go experts might have better ways to write this, and that's fine, because we now have a suite of tests to make sure that no matter how much someone tinkers with the implementation, they can be confident they have not broken existing functionality.

Summary

Even though this is a contrived example I hope this demonstrates how Go’s ecosystem and tooling can really help to create robust, well tested and readable software.

Go makes it really easy to write maintainable open source software. If you are mad enough to want to use something that blindly concatenates two successful HTTP calls, all you need to do is run

go get github.com/quii/go-perf-test-example

and then just check the almost entirely auto-generated GoDoc to see how to use it.

The final code is available here: https://github.com/quii/go-perf-test-example

If you have any comments or feedback, please do get in touch with me.

I originally posted this at spiking the solution