I can’t remember the first time I heard:

What do you mean it doesn’t work in production? It worked fine on my machine.

But I’ve heard it a lot. Indeed, it’s a frequent refrain when development, and sometimes testing, is local to developers’ personal machines. Code works fine locally, but when shipped into production, the local development environment turns out to be sufficiently different that the code fails: Differences in libraries, versions, and operating systems are enough to break locally perfect code. These issues are hard, and somewhat pointless, to test for, given how much of a moving configuration target personal machines are.

But it’s this baffled refrain that’s inspired the use of containers, particularly Docker, for local development.

A bit about containers

Containers are lightweight and fast. They provide a simple way to replicate an application’s environment or stack locally. More importantly, they enable a workflow for your code that allows you to develop and test locally, push to upstream, and ensure what you build locally will likely work in production, too.

Containers are more lightweight than traditional virtual machines because they are a form of virtualization called operating system, or OS, virtualization, which uses capabilities built into the OS itself. Traditional virtual machines require a hypervisor, a software bridge between the OS and your virtual machines, as well as a guest operating system. This results in several layers of code between you and the virtual machines, which can make them slower and more resource intensive.

Container virtualization is supported natively in Linux and recent Microsoft Windows releases, as well as via various interfaces on macOS.

How containers can make life easier for developers

Let’s consider a typical workflow for developers:

  • Check out source code from source control locally and create a branch

  • Develop a feature and (maybe) write tests

  • Push the branch upstream to source control
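In command form, that loop might look something like the following sketch (the repository, branch, and file names here are invented for illustration):

```shell
# A sketch of the local half of the workflow; names are made up.
cd "$(mktemp -d)"
git init increment_demo
cd increment_demo
git checkout -b feature/greeting         # create a branch to work on
echo "console.log('hi');" > feature.js   # develop the feature (and, maybe, tests)
git add feature.js
# Committing and pushing the branch upstream would follow:
#   git commit -m "Add greeting"
#   git push -u origin feature/greeting
git symbolic-ref --short HEAD            # confirms which branch we are on
```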

From here, many organizations rely on other parties, operations teams, and release engineers to deploy that feature or package the code. Sometimes staging environments and QA teams are also involved. Deployment of the code can be a long path, and where it is ultimately deployed can be a very different platform and configuration from the local development environment where it was built.

Containers reduce the friction in this process.

Containers are created from images. Images are collections of software and code that you can build. For example, you could create an image that is your production Rails setup or your database server. You can then distribute these images to your team and have developers create containers from them locally. Your team is then developing in an environment identical to the one in which the code is going to run. This reduces the risk that something different locally will result in an issue in production.
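As a sketch, such an image might be described in a Dockerfile like this one (the Ruby base image, paths, and commands are assumptions for illustration, not a prescribed setup):

```dockerfile
# Hypothetical image for a team's Rails application.
FROM ruby:2.5                 # start from an official Ruby base image
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./  # copy dependency manifests first for better caching
RUN bundle install            # install the application's gems
COPY . .                      # add the application code
CMD ["rails", "server", "-b", "0.0.0.0"]
```

Building this with a command like docker build -t myteam/rails-app . would produce an image the whole team can create containers from.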

You can also deploy natively to containers. Instead of creating a package or deploying a bundle of your code, you can create an image with your code and run containers from that. People often do this with continuous integration: When a branch is merged, a new image is created containing the updated code. This can then be shared with your team or deployed into production. The runtime environment you were developing in locally then becomes the runtime environment in which your code runs in production. This creates a virtuous circle of images being updated, deployed, and shared with developers, limiting the risk of changes breaking code. Your code is portable and can move toward production, and toward delivering value to your customers, more quickly and with fewer errors.

So, how do you get in on local development with containers? The easiest way is to install a container platform like Docker. Here we’ll go through how to install Docker locally on a macOS machine, and we’ll do some development locally with two sample applications, a static website and a Node.js application, which we’ll create as we go.

Getting set up

First, download and install Docker for Mac (you can find a package for Microsoft Windows, too). It’s a standard point-and-click installer and should leave you with Docker running.

To take the next steps, we’ll need to launch a terminal. Open Terminal or your favorite terminal app.

Next, make a directory to hold your source code.

$ mkdir increment_src
$ cd increment_src

As our first development environment, we’re going to create a static website and run it in a container. Fire up your editor, create an index.html file in the increment_src directory, and add the following:

<head>
  <title>Hello, Increment Readers!</title>
</head>
<body>
  <h1>Hello, Increment Readers!</h1>
</body>

Now, let’s run our website in a container. To do this we’re going to grab a container image. As mentioned earlier, images are the building blocks of containers; they contain packaged collections of software, like a web server, a database, or a programming framework. Containers are created from images: Think of a container as an executable copy of an image.

Back in our terminal window, in the increment_src directory, run this command:

$ docker run -ti -v "$PWD":/usr/local/apache2/htdocs/ -p 8080:80 httpd:2.4

This runs the docker binary, which controls the Docker containers. The run subcommand creates a new container. The -ti flags tell Docker to run the container in the foreground, interactively.

The -v flag makes our index.html file available by mounting the current working directory inside our container at /usr/local/apache2/htdocs/. This directory is the root of our web server, and our index.html file will be served from it.

The -p flag controls the ports our web server runs on: It maps port 80 inside the container to port 8080 on our development machine, exposing our web server locally.

The last item on the command line, httpd:2.4, tells Docker what image to run. In this case we’re running version 2.4 of the Apache HTTP web server.

When run, this command will download the image, run a container, and serve our static website on our local machine.

If we browse to http://localhost:8080, we’ll see our website.

Hello, Increment Readers!

If we edit our index.html file and refresh the browser we’ll see any changes. And, hey, presto: instant HTML web development environment!

This also works for application servers and frameworks. Let’s see this at work with a Node.js application.

Back in our increment_src directory, create a Node app in a file called hello_world.js.

var http = require('http');

var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello, Increment readers!\n");
});

server.listen(8000);
console.log("Server running at http://127.0.0.1:8000/");

This is a very simple web server that returns …

Hello, Increment readers!

… when you connect to it.

To run this app in a container, issue the command:

$ docker run -ti -v "$PWD":/usr/src/app -p 8000:8000 node:6 node /usr/src/app/hello_world.js

This is pretty similar to the last command we ran. We again run the container interactively and mount our source directory inside the container, this time at /usr/src/app.

This time, our -p flag maps port 8000 in the container to port 8000 locally, exposing our Node.js app on port 8000.

We specify a new container image, node:6, which runs Node.js version 6. Lastly, we run node itself, specifying the app we just created in the directory we mounted: /usr/src/app/hello_world.js.

Our container will launch and we should see a message in the terminal:

Server running at http://127.0.0.1:8000/

If we browse to this URL, we’ll see our Node.js app running.

Hello, Increment readers!

As our Node.js app is a server, we can’t just edit and refresh our code, but we can restart the container when we change the code, which is a lightning-fast step, or use a tool like Nodemon to auto-refresh our environment. What we get is a running Node.js server without any need to install local Node.js packages, plus minimized worrying about versions, updates, or inconsistencies.
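As one sketch of the auto-refresh approach, Nodemon could be installed inside the container when it starts (installing it globally at startup is just a development convenience, and an assumption on our part, not a step from the setup above):

```shell
# Run the app under Nodemon so the server restarts when files change.
docker run -ti -v "$PWD":/usr/src/app -p 8000:8000 node:6 \
  sh -c "npm install -g nodemon && nodemon /usr/src/app/hello_world.js"
```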

This introduction just scratches the surface of local development with containers. In addition to developing and testing your applications, you can take the next step: packaging them into images to share and deploy. You can then create that virtuous circle of develop, update, and deploy that should increase your velocity and reduce your risk of errors.