Implementing graceful shutdown for docker containers in go (part 2)

In part 1 I demonstrated how to implement graceful shutdown of http servers running in kubernetes; however, there were still cases where requests could fail during shutdown. After a bit of discussion on Stack Overflow I have updated my code to include an extra http listener that serves the pre-stop and readiness checks from kubernetes.

The way this works is as follows:

  • A container needs to be shut down
  • A pre-stop call is made
    • We take this opportunity to update our readiness to false, so that we are removed from the proxy
    • As this operation is synchronous you can optionally sleep here for a few seconds, to give kubernetes time to check and update based on your readiness (useful if scaling dramatically)
  • A SIGTERM is received
    • As before we wrap up all current requests and shut down (sketched below)
    • We have 10 seconds to do this before we get a SIGKILL
  • Our container is shut down
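
To make that flow concrete, here is a minimal sketch of the extra listener. This is not the exact code from the sample project; the ports, the /readiness and /pre-stop paths and the one second sleep are placeholders for whatever your probes and hooks are configured to use:

package main

import (
	"log"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

var ready int32 = 1

func main() {
	// separate listener used only by kubernetes for readiness and pre-stop
	health := http.NewServeMux()
	health.HandleFunc("/readiness", func(w http.ResponseWriter, r *http.Request) {
		if atomic.LoadInt32(&ready) == 1 {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})
	health.HandleFunc("/pre-stop", func(w http.ResponseWriter, r *http.Request) {
		// mark ourselves not ready so the proxy stops sending us traffic, then
		// hold the synchronous pre-stop call open briefly so kubernetes has
		// time to notice before the SIGTERM arrives
		atomic.StoreInt32(&ready, 0)
		time.Sleep(time.Second)
		w.WriteHeader(http.StatusOK)
	})
	go func() { log.Fatal(http.ListenAndServe(":8081", health)) }()

	// the real application listener
	app := http.NewServeMux()
	app.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	go func() { log.Fatal(http.ListenAndServe(":8080", app)) }()

	// wait for SIGTERM (or Ctrl+C), then wrap up in-flight requests as in part 1
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop
	// graceful wrap-up of current requests goes here (manners, as in part 1)
}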


If you would like to test this on your cluster I have a sample project uploaded onto github which follows my model for running minimal go apps in a kubernetes cluster.

You can follow the setup on that page, or tweak the files to point at a different docker registry, then just run make to get a docker image.

Following on from that you need to test in kubernetes, for which you have 2 options:

  • Scaling
    • This is useful to simulate growing and shrinking your service, but can occasionally have issues when doing large scaling operations in a single command (e.g. from 20 containers to 1)
    • Scaling up and down incrementally is advised for real services (e.g. from 20 to 19, then 19 to 18, etc)
  • Rolling Updates
    • This is the setup I have opted for, swapping out one container for a new one over a period of time
    • The files included always use the latest docker image, so they actually update from the old latest to the new latest
      • This isn’t how production deploys should work, you should use versions, but it’s good for development

Assuming you have a running kubernetes cluster you can bring up the service like this:

./cluster/ create -f ~/Go/src/
./cluster/ create -f ~/Go/src/

Obviously substitute the paths for your own setup.

This will give you a hello world style output on port 31000 of any of your node IPs. You can then modify the backend cluster either via scaling:

./cluster/ scale --replicas=10 rc httpgraceful-controller-1

Or via a rolling update:

./cluster/ rollingupdate httpgraceful-controller-1 -f ~/Go/src/

Throughout this you should always get requests routed to an active pod and see no errors on the client side. I have noticed the occasional spike in latency which needs a little investigation, however we are talking about a handful of requests out of 100k, and only during a rolling upgrade. I plan to retest this when running on more production-like hardware rather than in vagrant.

My stats during a rolling update showed:

  • Replicas: 10
  • Requests: 197,357
  • Average Request: 1039 ms (I have a 1000 ms sleep in the code)
  • Min: 1000 ms
  • Max: 57,477 ms (this is to be investigated)
  • Standard Deviation: 1364 ms
  • Errors: 0%

I plan to retest this on a more production-like system with far more replicas (nearer to 1000), where I expect to see less jitter in timings. After the rollout was complete the timings settled down to a standard deviation of around 1040 ms.


Implementing graceful shutdown for docker containers in go (part 1)

As part of ensuring the servers I write are compatible with docker and kubernetes, I wanted all my http servers to shut down gracefully so as not to drop any transactions. This is especially useful when scaling down in kubernetes or as part of rolling deploys of new pods, giving your containers the best chance of not dropping anything.

The standard shutdown procedure for a docker container is to send it a SIGTERM to indicate a desire to shut down; if the container doesn’t stop, it is issued a SIGKILL after a grace period. As such we need to hook the SIGTERM and ensure we finish all our communications.

My solution for this is as per the following gist:

There are a couple of important things to point out here. Firstly you will notice the use of manners, which is a wrapper for the standard http server to allow for graceful shutdown.

You will also notice the all-important hooks for both SIGTERM and Ctrl+C, which makes things a lot easier for quick tests and for running outside of docker.
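
If you want the shape of this without the manners dependency, a minimal sketch using a modern standard library http.Server (graceful shutdown is built in since go 1.8) looks something like the following; the 9 second timeout is just a placeholder chosen to finish before docker’s SIGKILL:

package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: ":8080",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello"))
		}),
	}

	// hook SIGTERM (docker/kubernetes) and Ctrl+C (local testing)
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)

	go srv.ListenAndServe()
	<-stop

	// finish in-flight requests, but give up before the SIGKILL arrives
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Second)
	defer cancel()
	srv.Shutdown(ctx)
}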

This approach ends up working nicely: you can boot a number of instances of this server and, for the most part, scale up and down without seeing any dropped requests. However I have noticed that when I scale to a single instance I do see some errors for a very brief period. I’m not sure if this is due to it being a single instance, or just fewer instances than the number of test nodes I was running, but I’m wondering if there is some issue around etcd timing its changes across the cluster as requests come in.

Overall the approach works for the use case in mind; as a cluster system I don’t expect to run single instances of services that get scaled up and down rapidly, but I’m hoping I can investigate more to find out where the issue is coming from.


Running minimal golang apps in kubernetes cluster

I’ve recently been working on migrating some of my projects to docker, and ultimately kubernetes, to run inside a cluster. It’s an interesting workflow that results in very rapid development cycles and the ability to constantly verify an application via a rolling deploy and testing with GoConvey.

The process looks a bit like this:

  • Write code
  • Build & unit test code
  • Build minimal docker container with statically linked go app
  • Push docker container to local registry
  • Deploy into kubernetes cluster as a rolling deploy

This process ensures that it’s very easy for developers to constantly be testing in a local environment, which for me is the perfect development setup.

A quick caveat: as this is quite complex to set up I have chosen to skip a few things like the unit tests and rolling deployments. These are well documented elsewhere, but if you get stuck just comment and I’ll do a follow-up post.

The setup

So here is the overall setup:


  • Everything runs on a single computer, in this case my mac
  • The docker registry runs inside a boot2docker VM
  • Kubernetes runs locally via vagrant
  • Kubernetes runs a master and a pair of nodes (minions in the old kubernetes speak)
  • The nodes run 3 copies of our container
    • We don’t care which nodes they run on
  • A service exposes our application to the host machine via a fixed port on both nodes

Note: the diagram accompanying this post colour-codes the setup: blue for the computer, red for the VMs, green for the docker containers and purple for the relevant parts of kubernetes (there’s more to kubernetes, but it’s easier to think this way)

First Steps

For doing this work, you need to have:

  • Go 1.4 installed (and set up for cross compilation if you’re not on linux)
  • Git
  • Boot2docker running (for mac users)
  • Make
  • Vagrant & VirtualBox (VirtualBox was tested but should work with any provider)

To get up and running I have created a sample project that contains all the files needed for this work. As you are working in go, the best way to get hold of it will be:

go get

This will clone the project under your $GOPATH/src/ structure, ready to be used.
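
The app itself is deliberately tiny; something along these lines (the real code is in the project, this is just the shape of it, and the port is a placeholder):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// a trivial handler, just enough to prove the container is serving traffic
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a tiny go container")
	})
	http.ListenAndServe(":8080", nil)
}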

Run your registry

Before we build our code you need to have a working local docker registry. You could skip this stage and push to the public docker repository, but that slows the feedback cycle, and ideally you should only push releases there, not ad-hoc development builds.

Luckily the docker team have made it very easy to get your own local registry up and running. Make sure you have boot2docker ready to go:

boot2docker init
boot2docker up

Then you can setup your registry with one line:

docker run -p 5000:5000 -d --name=basic_registry registry

Finally you need to check the IP this is running on from boot2docker:

boot2docker ip

You can then load <ipaddress>:5000 into your browser and confirm you get a message stating you have hit the registry server. Take a note of the IP address and port; you will need them later!

Build the app and docker image

Open a terminal prompt in the minimal-docker-go project and run the makefile. This does a few things:

  • This builds your code
    • It cross compiles for linux as the target system
    • It statically links the c libraries go needs
    • It forces all packages we depend on to recompile under these rules also
    • In the end you get a binary for linux that is totally self contained
  • This builds and pushes your docker image to your local docker registry
    • Thanks to our self contained binary we build off the scratch image, resulting in a tiny file
    • Note you may need to change localhost here if you have issues
  • Finally it cleans up the binary to keep things tidy

Once you are done here you should have a copy of your container image, with your code inside it, sitting in your local docker registry.

Note: The use of the scratch image here is really pretty amazing; you are essentially sitting on top of a kernel with hardly anything else in your way. The final image size ends up being around 6 MB for the container, your app, the garbage collector, everything!
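
For reference, the static cross-compile that gives you such a small, scratch-friendly binary boils down to a go build invocation along these lines (the exact flags in the project’s makefile may differ slightly):

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o app .

CGO_ENABLED=0 turns off cgo so the binary has no dependency on libc, -a forces the packages we depend on to be rebuilt under the same rules, and GOOS/GOARCH target linux regardless of the machine doing the building.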

Boot kubernetes

Your first job is to download and extract kubernetes from github to somewhere useful on your machine. As noted above we will be using vagrant to simulate our cluster. However before we get started there is one important change to make to the configuration. Normally docker pulls images from a valid https endpoint, but in our example we’re using a simple local registry that doesn’t have one, so we need to make sure kubernetes can still pull our docker images.

Open a terminal and cd into the kubernetes folder.

Open kubernetes/cluster/vagrant/ and search for the line containing “insecure-registry”; in my release it looks like this:

EXTRA_DOCKER_OPTS="-b=cbr0 --insecure-registry"

We need to change this so images from our insecure registry are accepted, so take the IP from your boot2docker VM and update accordingly. Mine ended up looking like this:

EXTRA_DOCKER_OPTS="-b=cbr0 --insecure-registry <boot2docker-ip>:5000"

With that in place, we can bring up our cluster by running:

export KUBERNETES_PROVIDER=vagrant
export NUM_MINIONS=2
./cluster/kube-up.sh

This tells the script to boot via vagrant and create a cluster with a master and 2 minions (now known as nodes). Go get a cup of tea; this takes a while!

Once it’s finally booted you can get the IPs of your nodes:

./cluster/ get nodes
NAME          LABELS          STATUS
<node-1-ip>   <node labels>   Ready
<node-2-ip>   <node labels>   Ready

Deploy the pods

There are enough tutorials on kubernetes terminology around on youtube, so I won’t go into it here. What we will do is bring up a controller that ensures we have 3 replicas of our container (from our local docker registry) running across our 2 nodes. We don’t care how many run on each node or where, we just care that we have 3 in total.

As can be seen from the gist we run a controller, which runs 3 replicas of our container. You may need to adjust the IP of your docker registry host accordingly. It’s important to note we are demanding the latest container version, which ensures docker asks our registry for the latest image and doesn’t use its cached copy. To get our cluster into this state we submit this file to kubernetes, which then takes responsibility for booting docker containers on various nodes and keeping the correct number of replicas up. To do this run:

./cluster/ create -f 01-goapp-controller.json

You can check on the progress of this by running:

./cluster/ get pods

In the end your cluster should look like this:

NAME                   READY STATUS  RESTARTS AGE
goapp-controller-08xl7 1/1   Running 0        15s
goapp-controller-33lr2 1/1   Running 0        15s
goapp-controller-ev3bp 1/1   Running 0        15s

Create the services

Now that our pods are running we need to be able to access them. In our example we don’t need access from elsewhere in our cluster but from outside it; however, the approach is the same. We need to create a service, which proxies to our pods.

Most of this is pretty standard, however it’s important to note that we have specified a type of NodePort to ensure the service is exposed on the host machines (in this case our minions/nodes). We have also specified the port to expose the service on, just to make our lives easier. In a real world deployment there would be a managed load balancer in front of the service, but for now this does the job.

As above, you can apply this config to your cluster:

./cluster/ create -f 01-goapp-service.json

And you can confirm it’s all looking good like this:

./cluster/ get services
NAME          LABELS              SELECTOR      IP(S)          PORT(S)
goapp-service <none>              app=goapp-app 80/TCP
kubernetes    component=apiserver <none>     443/TCP


Finally we have our service deployed, so we can test. Remember this is now being exposed on our host machines; to get their IPs just run:

./cluster/ get nodes
NAME          LABELS          STATUS
<node-1-ip>   <node labels>   Ready
<node-2-ip>   <node labels>   Ready

In the above example you can hit either node IP and our service will respond. What’s really cool is that even if our service isn’t running on that node/minion, kubernetes is smart enough to proxy us to another one! You can change the replication to 1 and redeploy, and you will still get results from both IPs!

You can also tear the cluster down like this:

./cluster/ delete -f 01-goapp-controller.json
./cluster/ delete -f 01-goapp-service.json


Now that you have a cluster up and running you can change code and run make to get a tiny container that is entirely self contained. Then using kubernetes you can quickly tear down and redeploy an entire, highly resilient, cluster of the containers to arbitrary machines.

In the future I will blog about how to test using goconvey and use rolling deploys to ensure zero downtime in your cluster, but hopefully this is a good starting point for those exploring kubernetes and go. Any questions just drop them in the comments.


TLS Mutual Auth in GoLang

Golang takes no prisoners.

Don’t use a variable? Your code isn’t going to run!

Import something you don’t use? Nope, no chance.

Realistically I should have expected their attitude to security to be just as tough, so implementing TLS mutual auth was never going to be simple. Simply put, go does all the checks it quite rightfully should do, but those checks make it a nightmare when you just want to test some TLS work locally.

Strict TLS

There are a few points that trip you up locally, especially since go 1.3 where things got even more strict. When you hit an https endpoint online the certificate is usually bound to a domain name, but when working locally you often use localhost and 127.0.0.1 interchangeably, and when doing mutual auth you can’t help but bump into this issue. Since TLS is designed primarily around domains you need to add IP SANs. Secondly, x.509 certificates have an extension to define whether a key is for server or client auth, which of course go checks.
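
In terms of go’s x509 certificate template, those are the two fields that matter. A tiny illustrative snippet (the field names are from crypto/x509, the values are just examples):

package main

import (
	"crypto/x509"
	"fmt"
	"net"
)

func main() {
	template := x509.Certificate{
		// an IP SAN so the cert is valid when you dial 127.0.0.1 rather than a domain
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
		// the extended key usage go checks for server (or client) auth
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	fmt.Println(template.IPAddresses, template.ExtKeyUsage)
}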

Older versions of go used to have options to disable certain checks, but those have long since gone. It’s secure, or nothing!

All this boils down to it being a real nightmare to generate the right keys so you can get up and running. However now that I’ve been through that pain, the solution is quite simple.

First up you need to generate your keys with IP SANs; to do this get hold of this file, which does a great job of making your life easier.

Generate your certificates

To generate the server cert (assuming localhost) run the program with the options:

  • Common Name: localhost
  • DNS or IP Address 1: 127.0.0.1
  • Number of days: 365 (or whatever you like)

To create the client cert there’s one thing missing: you need the extension for client certs. Find the line that reads:

ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},

Change this to:

ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},

If you are being lazy you can also just permanently change it to:

ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},

Copy your previously generated server certs somewhere and run the program again to generate client certs.

This will give you 4 files: a cert and key for the server, and the same for the client. The code to use them in mutual auth is pretty simple at this point.

I created 2 projects called secure-server and secure-client; the code looks like this:

Secure Server
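
A minimal sketch of the server side (not the exact project code; the .pem and .key file names are placeholders for the files generated above):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// trust the client's certificate so it can be verified during the handshake
	clientCert, err := ioutil.ReadFile("client.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(clientCert)

	server := &http.Server{
		Addr: ":8080",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from the secure server\n"))
		}),
		TLSConfig: &tls.Config{
			// demand a client certificate signed by something in the pool
			ClientCAs:  pool,
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
	}

	// server.pem / server.key are the server cert and key generated above
	log.Fatal(server.ListenAndServeTLS("server.pem", "server.key"))
}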

Secure Client
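
And a matching sketch of the client side (again, not the exact project code; the file names and address are placeholders):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// the client presents its own cert and key during the handshake
	cert, err := tls.LoadX509KeyPair("client.pem", "client.key")
	if err != nil {
		log.Fatal(err)
	}

	// and only trusts the server's cert, via its own little root pool
	serverCert, err := ioutil.ReadFile("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(serverCert)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		},
	}

	resp, err := client.Get("https://127.0.0.1:8080/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}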

Run it

First run the server, which loads its own cert and key and adds the client application’s cert to its pool of allowed clients (you can add more if you need to). Then run the client, which loads its own cert and key and creates a new root CA pool containing just that one server.

The great thing about this setup is that once you have gone through the pain of creating your certificates, the code is somewhat trivial.

Hopefully this should save people a bit of time in getting this working.


Cross compiling go on Mac OS X

After a lot of fun with this I thought I would write a quick note on setting up your mac for cross compiling with go.

I make 2 assumptions here: firstly that you use homebrew, and secondly that you are only writing pure go code (no calling c functions etc). With that in mind the complex process boils down to this:

brew install go --with-cc-all

Yep, that’s it. With that one line you get all the go compilers. If you only want common platforms you can save yourself some space and just run:

brew install go --with-cc-common

Personally I like to couple this with a makefile that lets me build and test on my mac day to day, but create cross compiled binaries when I really want to. Mine looks a bit like this:

all: install

    go build ./...

    go install ./...

    GOOS=linux GOARCH=amd64 go install
    GOOS=darwin GOARCH=amd64 go install
    GOOS=windows GOARCH=amd64 go install
    GOOS=windows GOARCH=386 go install

And that should be about all it takes to run a cross compiling go setup!


Solving the Go Lang Web Crawler

I just posted my solution to the golang web crawler exercise on github, but wanted to give a quick explanation of my approach.

Originally I had 2 channels, one for data to flow through and one to signal that processing was complete, but nesting this in a recursive function led to quite messy code. I then watched a great presentation by Rob Pike and started to rethink my solution in terms of gophers!

I moved to a solution that was far simpler:

  • A single function would load a page, process it and extract links
    • The result of the processing would be sent to a channel as a work queue
  • A controller type function would take work and spin up a worker (go routine) to process this data
  • A simple counter would clock when work starts and ends

Thinking in terms of gophers vastly simplified my work. I then used a map to keep track of pages I’ve visited so I don’t go back to them. The 2 important functions are:

type unprocessed struct {
	depth int
	url   []string
}

func getPage(url string, depth int, r chan unprocessed) {
	body, urls, err := fetcher.Fetch(url)
	fmt.Printf("found: %s %q\n", url, body)
	if err != nil {
		fmt.Println(err)
	}
	// always send a result, even on error; the controller decides what to do with it
	r <- unprocessed{depth - 1, urls}
}

As you can see this simply performs a fetch on the page and sends the results to the channel; the object sent is a simple struct. The important point here is that this function always sends something, even if there is an error. It’s up to the controller to be smart about what to do next, this is just a dumb fetch function.

func Crawl(url string, depth int, fetcher Fetcher) {
	// setup channel for inputs to be processed
	up := make(chan unprocessed, 0)

	// kick off processing and count how many pages are left to process
	go getPage(url, depth, up)
	outstanding := 1

	visited := make(map[string]bool)
	for outstanding > 0 {
		// pop a visit from the channel
		next := <-up
		outstanding--

		// if we're too deep, skip it
		if next.depth <= 0 {
			continue
		}

		// loop over all urls to visit from that page
		for _, link := range next.url {
			// check we haven't visited them before
			if visited[link] {
				continue
			}

			// all good to visit them
			visited[link] = true
			outstanding++
			go getPage(link, next.depth, up)
		}
	}
}

The second function starts to make sense alongside that dumb page fetch: it essentially sets up a loop with a counter that increments whenever a getPage is kicked off and decrements when it processes some results. This lets the channel act a bit like a work queue and moves all the clever work into the controlling loop.

After I managed to get a working solution I looked around for what others had done; interestingly, my final solution looks a bit similar to this one, though I used a struct to pass data around rather than just a raw string slice, which results in slightly different handling of the depth check. Over time I guess I will see what is more idiomatic golang.


CH340G USB Arduino clones on OSX Yosemite

Recently I picked up a few Arduino clones on eBay and have faced a few issues in getting them to be recognised by Arduino IDE on OS X Yosemite. Here’s a quick guide to getting them running:

Firstly, the chips on these boards that handle the USB comms are called CH340G, and you need to get the drivers for them in place on OS X, so grab them off this site:

After loading them in there is a problem: nothing works still! Reboot and try; it won’t help.

The reason for this is that kext (kernel extension) signing changed in OS X yosemite; it seems you need a special sort of developer license to sign kexts, which these drivers don’t have. To fix this you need to put kexts into dev mode (you really should be sure you know what you’re doing now):

sudo nvram boot-args="kext-dev-mode=1"

Now if you reboot, the kext will load up just fine, your Arduino should show up and you can get down to business.


Why our kickstarter failed

As the year draws to an end I realise I never summarised why we failed at kickstarting. I think there is some wisdom in what we did that will be of use to others looking into kickstarting a project. I’ve managed to boil it down to 3 things we did wrong:

Target Market

When we started our kickstarter we focused on both businesses who owned bars and home users who wanted a cool party device. Let that sink in for a moment. We, the 2 person startup, tried to focus our (tiny) marketing efforts on 2 very large and very different markets. How on earth did we think that would work!

The reality is we ended up speaking to a lot of bars etc during the kickstarter but they had legitimate concerns we hadn’t addressed in our marketing, because it just added extra fluff that home users didn’t need to see. We also spoke to quite a few end users who were interested, but a lot commented it seemed more at home in a bar!

We managed to aim ourselves right down the middle of 2 markets and miss both.

Lesson: Pick your target demographic and ensure you focus everything on that, don’t hedge your bets.


Price

The price really stemmed from our first decision to market to 2 demographics; it also lay right in the middle of what would be acceptable to each, which meant it didn’t fit either. For a reasonable machine you were looking at near £800, which is the starting price of a macbook air! Yet for a business this is too low, as we can’t offer things like maintenance and guarantees at that price point.

Lesson: Ensure the pricing fits with your users


Pre-sales

Hitting the ground running is a big part of Kickstarter: you need to be seen to get anyone to support you, and you get seen by people supporting you! We did a load of press and work building up to the kickstarter, we had a decent Facebook group set up, a twitter account with a decent number of followers, and a website that was getting good traffic; it was all in place. The key part of the puzzle we were missing was pre-sales, we should have had a number of people lined up ready to buy so that as soon as it landed on kickstarter we started getting sales.

I honestly think if we had pursued this we would have noticed the issue in our marketing to 2 demographics earlier and maybe even fixed it.

Lesson: Kickstarter isn’t where you start, make sure you have sales ready before you kickstart


I think our mistakes are clear and I hope this helps others to focus even more specifically and avoid some of the traps we fell into. That being said, we made it into some TV snippets, radio, blogs, Facebook, twitter, emails, etc. and the whole few months of kickstarting was a real blast. I’m proud that we attempted it and I’m really pleased with all the skills we picked up along the way.

As of now we have folded the boozebots company and will be moving the website to a holding status soon. It’s time for us to regroup and come back in 2015 a little wiser.


Kickstarting Progress

It’s been a few days since we kicked off our Kickstarter for Boozebots, our automated cocktail dispensing machine! It started off with an unbelievable high: one of our top bots sold within 30 minutes, along with a number of small support bids. We launched at 5pm on the Monday and I remember struggling to sleep that night, buzzing from the excitement; we were on to something!

Tuesday morning wasn’t as good: our bid for the top model had been removed, so we were left with a handful of bids for general support of the project. I went to bed Monday thinking we were going to be amazing and smash our kickstarter goals; the reality of Tuesday took a while to sink in.


We have 3 main competitors, all of which have kickstarted.

Bartendro – A success in raising $197,464 on kickstarter back in April 2013. Using a similar dispensing approach to us, however the overall product packaging, I believe, isn’t as nice and complete as ours. They are also more expensive than a BoozeBot.

Monsieur – A success in raising $140,105 on kickstarter in November. These guys have an amazing looking product, sharing a lot of similar features as the BoozeBot but in a really nice finished package, however the price is a lot more.

Barobot – This was the most recent launch, falling short of their funding target very recently. The Barobot however still lives on via external negotiations; it uses an entirely different set-up than a BoozeBot and costs more too.

Looking at the simple facts, we have a machine that is as capable as all the others, as fast as the best, as easy to use as the best, and costing a lot less too. The pure economics of this are simple, however there is one thing missing: traffic. Having the best product counts for nothing if people don’t see it, so now our focus changes.

We’ve created something amazing, it’s time people got to see it!


Writing my first iPhone App

I’ve had a few days off recently so I decided to wrap up an iPhone app that I have been playing around with lately.

I first stumbled on an application called SpriteBuilder about a month ago. Before I go any further, just check this out:

I was hooked. Graphically creating the glue code then finishing things up in Xcode seemed like a great approach to creating simple games, so I decided a few weeks back to start creating a Flappy Bird style game.

Firstly I know this formula has been used time and time again, but as a first step into creating a game it seemed like a good way to go. Armed with this tool and some reading from this great site I was able to piece this game together in what turned out to be a week or so overall, including creating all the artwork, the app store screenshots, logos, etc. Apple really ask for a lot!

I wanted my first game to have quite a few features, so I really made an effort to get quite a bit of code in; my 1.0 version has:

  • Physics
  • Particle Effects
  • Sounds (background and SFX)
  • Analytics
  • Admob (those adverts at the bottom of the screen)
  • In App Purchases
  • In game currency

I figured if I was going to make an effort to create a game, I would really go for it and try to make it a complete game!

After the hassle of setting up my certificates, building for the correct iOS version, targeting the 32 bit chips only and other such fun (in reality it’s not that difficult, it’s just knowing what you need to do!) I have managed to submit the game to apple and it’s pending review.

I’m going to be quite public about this app. I’ve gone from nothing to a full app in around 4 weeks from start to end, with probably about a week’s worth of development time in all. I think it will be an interesting experiment from which people can understand what is realistically possible in the app store. I don’t know what it will do, whether people will play it or whether no one will bother, but it’s going to be fun to see!
