Docker setup for containerized ghost blogs

Toward the end of 2013, I was shopping around for some kind of system to use for my personal blog. The easy choice was (and probably still is) WordPress. I've never been one to do anything the easy way and I didn't want to deal with any of the PHP, MySQL, or plug-in dependencies that come along with WordPress. Nor did I want any part of the frequent security patches required by the WordPress ecosystem.

I wanted a lighter and simpler system...hopefully one built using hipper technologies that I was curious to learn about. I'd been using Markdown for a few years, so static blog generators like Jekyll were intriguing. The static site generators also fit in well with the minimal Linode VPS I was already using to run nginx to serve the At Least You're Trying Podcast feed. I didn't last long with any of the Ruby-based options since RVM would never cleanly install on my local PC (Linux Mint 13).

Since I could (still) use some JavaScript experience, I started looking into node.js-based static blog generators (poet, wintersmith, blacksmith). These were all too bleeding-edge and not very n00b-friendly, though. They all also suffer from the "how would I explain this workflow if I were setting up a blog for someone who's never used the terminal or version control?" problem.

There was also another possibility based on node.js...Ghost had a successful Kickstarter campaign and released a public beta around October 2013. I was able to get Ghost installed and running on my local system and I liked what I saw. Ghost makes for a less arcane workflow for non-programmers, while still being lightweight.

I was excited until I began reading tutorials on getting Ghost running on my server. Did I mention that my server setup and admin experience are minimal? Well, I managed to render my precious VPS unbootable while trying to add some (probably overkill) security changes prior to installing Ghost. Linode support was quickly able to save me from my own stupidity. At that point, I gave up since it seemed out of my reach to be able to get all the node.js, Ghost, and nginx configuration correct and stable.

The container ship comes in

A few months go by and I start to hear and learn about Docker. Nifty, lots of the benefits of virtual machines, but without all the performance overhead...something like BSD jails on Linux. The mental hamster wheel started creaking away once I saw what was possible with Dockerfiles and the Docker index. Of course some neckbeard-angel had already created a Dockerfile to reliably install node.js...some other wonderful individual even created one with a Ghost installation.
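To give a flavor of what those shared files look like, here's a minimal sketch of a Dockerfile for a Ghost image. The base image name, download URL, and paths are illustrative assumptions, not the contents of the actual repository:

```dockerfile
# Sketch only -- names and URLs are assumptions, not the real dockerfile/ghost repo
FROM dockerfile/nodejs                  # assumed node.js base image from the Docker index
RUN apt-get update && apt-get install -y wget unzip
RUN wget -q https://ghost.org/zip/ghost-latest.zip -O /tmp/ghost.zip && \
    unzip -q /tmp/ghost.zip -d /ghost
WORKDIR /ghost
RUN npm install --production
EXPOSE 2368                             # Ghost's default port
CMD ["npm", "start"]
```

The appeal is that anyone can rebuild the exact same environment from this one file, instead of following a multi-page server tutorial by hand.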

OK, at that point I was itching to snap all of these pieces together and things went well at first. I saw that Linode had added support for Docker, so I installed new kernels on my VPS and on my local PC. I got Docker installed, pulled my first Docker image, and built a Ghost blog image from the files posted to the Docker index.
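The pull-and-build steps boil down to a couple of commands. Image and repository names here are illustrative (the dockerfile/ghost repository is the one mentioned below, but the tag is my assumption):

```shell
# Pull a prebuilt node.js image from the Docker index
sudo docker pull dockerfile/nodejs

# Build a Ghost blog image from a shared Dockerfile
git clone https://github.com/dockerfile/ghost.git
cd ghost
sudo docker build -t my/ghost .
```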

Then it was on to testing my new Ghost container. When starting a container, Docker allows a folder on the host file system to be mounted in the container. The Ghost container uses this so all Ghost-related files (content, images, themes) can be stored separately from the container. Here was my main source of frustration. I wanted to be sure that all my content would survive if the container crashed, but stopping and running a new Ghost container during testing would fail to bring back any uploaded images. The Ghost container would also not recognize any changes to the Ghost config.js file.
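For reference, the mount is set up with the `-v host-path:container-path` flag when running the container. The paths, port mapping, and image name here are illustrative assumptions:

```shell
# Run the Ghost container with a host folder mounted inside it,
# so posts, images, and config.js live outside the container
sudo docker run -d \
  -p 80:2368 \
  -v /home/me/ghost-blog:/ghost-override \
  my/ghost
```

In theory, a replacement container started with the same `-v` flag should pick up all the existing content; that's exactly what wasn't happening here.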

I fought with it and cursed at it and nearly gave up again, but at some point I'd gained enough bash scripting experience to see the problems with the setup bash file used by the Ghost container. This was a great reason to play around on GitHub a little, so I made my own fork of the Ghost container repository and set about fixing things.
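A hypothetical sketch (not the actual dockerfile/ghost script) of the kind of first-run logic such a setup script needs: seed the mounted data directory with defaults only when it is empty, so existing content, images, and config.js survive container restarts. The function name and paths are my own illustration:

```shell
# Hypothetical helper: seed a mounted Ghost data dir on first run only
seed_ghost_data() {
  local src="$1"    # defaults baked into the image
  local data="$2"   # host folder mounted into the container
  mkdir -p "$data"
  # Only copy defaults into an empty volume; clobbering a populated one
  # is exactly what loses uploaded images and config.js edits
  if [ -z "$(ls -A "$data")" ]; then
    cp -R "$src/." "$data/"
  fi
}
```

The design point is that the mounted volume, not the image, is the source of truth once it has content in it.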

Update, Feb-2014: fixes similar to my own have been made to the central dockerfile/ghost repository. This is a great sanity check for myself, but it probably also means I missed out on getting my first accepted pull request.

This has proven to be a stable and performant setup. I'm using it for the At Least You're Trying podcast website and this very blog. Running two node.js/Ghost containers on my VPS with 1GB of RAM hasn't been a drama. The server usually shows about 30% RAM usage and there wasn't any notable performance hit for running the second container. It's doubtful any of my efforts will generate enough traffic to really load test this setup (not planning on being internet famous any time soon). Using one Docker container for each Ghost blog has nicely enforced storing the content separately. Since Ghost currently supports only a single admin login, separate container instances of Ghost make even more sense.

I'm currently hoping to fully crash-proof the running containers using upstart, but at the moment the containers don't come back up after a full system reboot. As far as I can tell, I correctly followed the Docker upstart tutorial. If you have any ideas as to what I've done wrong, shoot me an answer on
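For anyone comparing notes, my upstart job looks roughly like the one in the Docker host-integration tutorial. The job and container names here are illustrative:

```
# /etc/init/ghost-blog.conf -- along the lines of the Docker upstart tutorial
description "Ghost blog container"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  # `docker start -a` attaches so upstart can supervise the process;
  # the container must already exist (created once with `docker run`)
  exec /usr/bin/docker start -a ghost-blog
end script
```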