The Ups and Downs of running Docker in development

22 Oct 2016

When I got my new development machine at the end of August 2015, about 14 months ago as of this writing, I made a promise to myself: I would not throw my development tools directly onto it, as I had done on my previous machine.

In the past I had a massive pile of software installed for writing other software. As I work with different programming languages, and each brings with it at least a package manager, testing tools and some sort of application server, I ended up with a whole mess on my system. Most packages would lie dormant when I paused a project.

Wasted disk space and a nagging feeling that I should clean up soon™ were the least annoying consequences.

As I was already using Docker to build isolated environments for applications (and I never got to try Vagrant), I took the fresh new machine as an opportunity for a fresh approach to setting up development environments.

Install everything needed to develop each app using Docker only.

Cost

Development environments in Docker containers do not come for free, though.

I found it to be a more seamless process to run installation instructions on my local machine directly and tinker around until everything plays nicely. (And then never touch it again.)

Using Docker containers requires a little more effort:

  1. Research whether there is an (official) up-to-date Docker image for the language/tool.
  2. Learn how to plug local configuration and other files into a container based on that image.
  3. Tear your hair out. OK, now calm down.
  4. Set up volume mounts to speed up dependency installation in containers (looking at you, npm install); see the sketch below this list.
  5. Set up .dockerignore and .gitignore to keep everything nice and tidy.
  6. Mess around with file permissions.
  7. Document how a fellow developer (or You-in-six-months) should boot everything up.
  8. Bonus round: establish debug sessions from your IDE into the container.
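
Step 4 deserves an illustration. The following is a minimal sketch of the volume-mount trick for npm install; the node:6 image, the paths and the volume name myapp_node_modules are made-up examples, not taken from an actual project:

    # Keep installed npm packages in a named volume so that `npm install`
    # does not start from scratch on every container run. Docker creates the
    # volume `myapp_node_modules` on first use and reuses it afterwards.
    docker run --rm \
      -v "$(pwd)":/usr/src/app \
      -v myapp_node_modules:/usr/src/app/node_modules \
      -w /usr/src/app \
      node:6 npm install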

I’ve gone through this process for a handful of languages, tools and environments now, and I got better at it with each iteration, but it is still somewhat hit-or-miss for each new setup. On the positive side, though, I could reuse a lot of basic setup between projects in similar environments.

Benefit

Are the upfront cost and hassle worth it at all?

I believe it is.

I’ll try to list what leads me to that conclusion, and afterwards give a brief description of each of the benefits.

  1. Transparent, documented version and environment dependencies for each project.
  2. The only dependency for developing a project on another machine is Docker itself (and Docker Compose).
  3. Run a project and its tools on any other machine (e.g. on your CI server) with only Docker installed.
  4. No more dormant or orphaned packages on your machine.
  5. No more resolving conflicting version requirements of different projects on your machine.

Transparent, documented version and environment dependencies for each project.

Setting up tools in Docker containers makes their installation requirements transparent. The Dockerfile spells out the installation steps, paths and configuration files that go into the image. Following best practice, it pins the versions running in your containers, e.g. ruby:2.3.1, so the environment is fully reproducible. Once you have this, an update to ruby:2.4.0 is merely a one-line change.
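
As a minimal sketch, such a Dockerfile for a hypothetical Ruby project could look like this; the paths and file names are assumptions for illustration:

    # Pin the exact interpreter version; moving to 2.4.0 later is a one-line change.
    FROM ruby:2.3.1

    WORKDIR /usr/src/app

    # Dependencies are declared in the repository, not installed ad hoc on a dev machine.
    COPY Gemfile Gemfile.lock ./
    RUN bundle install

    # Bring in the application code itself.
    COPY . .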

The only dependency for developing a project on another machine is Docker itself (and Docker Compose).

You get a new colleague to work on your project and want to get her started in no time. With Docker powering your development environment, this consists of two steps: a) install Docker and b) run your bin/start.sh script, which builds and boots all containers (application server, database, queue, etc.), probably utilizing Docker Compose.

She can get started in minutes instead of wasting time setting up. Happiness all around!
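
What such a bin/start.sh does can be sketched in a few lines; this is an assumption based on the description above, not the actual script of any particular project:

    #!/usr/bin/env bash
    set -euo pipefail

    # Build all images and boot the containers (application server, database,
    # queue, ...) defined in docker-compose.yml, detached in the background.
    docker-compose build
    docker-compose up -d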

Run a project and its tools on any other machine (e.g. on your CI server) with only Docker installed.

In a recent project, I ran bin/test.sh to execute all unit tests locally. When it came to running the tests on a Continuous Integration server, it came in handy to install Docker and run bin/test.sh on that machine as well. Simple as that: isolated from other builds, easy to clean up, and without installing anything else on that server.
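
A bin/test.sh along these lines can be as small as the sketch below; the service name app and the rspec command are hypothetical stand-ins:

    #!/usr/bin/env bash
    set -euo pipefail

    # Run the unit tests in a throwaway container; --rm removes it afterwards,
    # so nothing is left behind on the CI server.
    docker-compose run --rm app bundle exec rspec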

No more dormant or orphaned packages on your machine.

Want to try that new mutation testing tool? Great! Create a new Docker image for it, wrap it in a slim bash script, and let it run with your project code mounted. And once it finds too many deficiencies in your tests, remove the container, image, Dockerfile and script without leaving a trace on your dev machine, and pretend it never happened…
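
The slim bash wrapper around such a tool could look roughly like this; mutation-tool is a placeholder image name, not a real image:

    #!/usr/bin/env bash

    # Run the tool against the project code mounted from the host; --rm makes
    # sure the container disappears again as soon as it is done.
    docker run --rm \
      -v "$(pwd)":/usr/src/app \
      -w /usr/src/app \
      mutation-tool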

No more resolving conflicting version requirements of different projects on your machine.

Got that old project requiring Ruby 1.9 set up? Want to start your new project with Ruby 2.4 Preview 1? Well, have fun installing both side by side on your local machine. It is totally possible, of course, using RVM (in the case of Ruby), but why should you have to get used to yet another tool when you could instead go and build away on your new project?
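
With Docker, each project simply pins its own image tag and the two interpreters never meet on your machine. A sketch, assuming both versions are published as official ruby image tags:

    # The old project keeps its old interpreter...
    docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app ruby:1.9.3 ruby -v

    # ...while the new one runs on the preview, with no RVM in sight.
    docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app ruby:2.4.0-preview1 ruby -v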

Conclusion

I have been following this practice for over a year now, and I haven’t looked back at all.

Rather, most of the time I’m happy with the approach of running my development environment on Docker. Granted, there are situations that require a lot more work to set up, like debugging from PhpStorm into PHPUnit running inside a container. But I could resolve them, and I had to work through the setup once per project, not once per machine.

I encourage you to try the same for your next project. If you need help, shoot us an email. We are happy to assist in your transition to a Docker-powered development environment!