
Docker Changes Everything About Deployment

Docker 0.1 just dropped and containers are suddenly accessible to everyone, not just kernel wizards

Two days ago, a company called dotCloud released something called Docker. It is an open source project that makes Linux containers accessible to normal human beings. I have been reading about it nonstop and I think this is one of those moments where everything shifts.

Let me explain why I am excited.

The Problem Docker Solves

If you have ever tried to deploy an application, you know the pain. You build it on your machine, it works perfectly, and then you try to run it on the server and everything breaks. Different library versions, different OS configurations, missing dependencies, wrong permissions. The phrase "it works on my machine" has become a running joke in our industry, but it is not funny when you are the one debugging a production deployment at midnight.

At my current job managing Linux infrastructure, I see this every single day. Developers hand us their application and a list of requirements, and we spend hours, sometimes days, setting up the server environment to match what they need. And God help us if two applications need different versions of the same library on the same server.

We have tried to solve this problem in various ways. Virtual machines work but they are heavy. Each VM needs its own operating system, its own kernel, its own set of system resources. Running five VMs means running five complete operating systems. That is a lot of overhead.

Containers: The Lightweight Answer

Linux containers have existed for a while. cgroups and namespaces are the kernel features that let you isolate processes without the overhead of a full virtual machine, and LXC is the userspace tooling that drives them. But using them directly is painful. You need deep kernel knowledge. You need to understand cgroup hierarchies and namespace configurations. It is not something a typical developer or even a typical sysadmin wants to deal with.
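You do not even need any tooling to see the primitives involved; the kernel exposes every process's namespace and cgroup memberships directly under /proc:

```shell
# Each process's namespace memberships are symlinks under /proc/<pid>/ns;
# two processes sharing a namespace show the same inode on these links.
ls -l /proc/self/ns

# The cgroup hierarchies this process belongs to:
cat /proc/self/cgroup
```

Isolating a process by hand means wiring these up yourself with unshare(2), clone(2) flags, and writes into the cgroup filesystem. That wiring is exactly the part nobody wants to do manually.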

Docker changes that equation. It wraps all of that kernel complexity in a simple, developer-friendly interface. You write a Dockerfile (basically a recipe for your application environment), run a build command, and you get a container image. That image runs the same way everywhere. On your laptop, on a test server, on production. Same image, same behavior.

I downloaded Docker this morning and had a container running within twenty minutes. Twenty minutes. Setting up an equivalent LXC environment would have taken me a full day, and I consider myself fairly comfortable with Linux internals.
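My actual session was interactive (docker run -i -t ubuntu /bin/bash), but a non-interactive sketch captures the experience. This is based on the getting-started docs; exact flags may differ in 0.1, and the guard just makes the script print a note on machines without a docker client:

```shell
# Pull a base image from the public registry, then run a one-off
# command inside a fresh container. Guarded: without a docker
# client installed, this only prints a note.
if command -v docker >/dev/null 2>&1; then
    docker pull ubuntu
    msg=$(docker run ubuntu /bin/echo "hello from a container")
else
    msg="docker client not found; skipping"
fi
echo "$msg"
```

That is the whole loop: fetch an image, run a process inside it, done.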

What This Means for My Work

Right now, provisioning a new server for an application takes us anywhere from two days to a week, depending on complexity. We manually install packages, configure services, set up monitoring, handle dependencies. Every server is a unique snowflake that we maintain by hand.

Imagine instead that every application comes as a Docker container. The developers build it, test it, and hand us an image. We run the image. Done. No dependency conflicts, no configuration drift, no "but it worked in dev." The container is the deployment artifact.

This could cut our provisioning time dramatically. Instead of spending days configuring servers, we could spend that time on more important work like monitoring, security, and automation.

The Dockerfile Concept

What really impressed me is the Dockerfile. It is a simple text file that describes how to build your container. Something like this:

FROM ubuntu:12.04
RUN apt-get update && apt-get install -y python
COPY app.py /app/
CMD ["python", "/app/app.py"]

Four lines. That describes an entire application environment. It is reproducible, version-controllable, and self-documenting. Anyone can look at a Dockerfile and understand exactly what the application needs to run.
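The whole loop from recipe to running container is short enough to sketch here. The build and run commands are left commented because they need a running daemon, and "myapp" is just an illustrative tag I made up:

```shell
# Recreate the recipe on disk; the Dockerfile is the whole
# environment specification.
workdir=$(mktemp -d)
cat > "$workdir/Dockerfile" <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y python
COPY app.py /app/
CMD ["python", "/app/app.py"]
EOF

# Building and running it is then two commands ("myapp" is an
# illustrative tag; commented out because they need a daemon):
#   docker build -t myapp "$workdir"
#   docker run myapp

grep -c '^' "$workdir/Dockerfile"   # the whole spec: 4 lines
```

Four lines of recipe in, one reproducible image out. The image, not the wiki page, becomes the source of truth.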

Compare that to our current process of maintaining wiki pages with setup instructions that are always slightly out of date and never quite match what is actually on the server. The Dockerfile is the documentation because it is the actual build process.

Questions I Still Have

Docker is at version 0.1. It is explicitly not production-ready. The project leaders are very clear about that. So I have questions.

How does networking work between containers? Right now it seems basic. What about storage? Containers are ephemeral by design; what happens to your data when the container stops? How do you manage dozens or hundreds of containers across multiple hosts? What about security? Running containers as root feels wrong.

These are not criticisms. These are the problems that need solving for Docker to become a real production tool. And based on the energy I am seeing in the community around this project, I think people are going to solve them fast.

The Community Response

I have been watching the GitHub repository and the mailing lists. The response has been enormous. Developers, operations people, cloud providers: everyone seems to recognize that this is important. PaaS companies are especially interested because containers could fundamentally change how they build their platforms.

There is something about Docker that just clicks. The metaphor of shipping containers (standardized boxes that work with any ship, any crane, any truck) maps perfectly to the problem of software deployment. Your application goes in a container, and it does not matter where you ship it. The container is the standard.

What I Am Going to Do

I am going to start experimenting with Docker in our non-production environments. I want to understand its limitations firsthand, not just read about them. I want to try containerizing some of our simpler applications and see how the workflow changes.

I am also going to start learning more about the underlying Linux kernel features: cgroups, namespaces, union filesystems. Even if Docker abstracts them away, understanding the fundamentals will help me make better decisions about where and how to use containers.
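A starting point for that homework, readable without root on any modern Linux box:

```shell
# cgroups: the controller hierarchies are mounted here.
ls /sys/fs/cgroup

# The kernel's filesystem list; union filesystems such as aufs
# show up here on hosts that can back layered images.
cat /proc/filesystems
```

None of this requires Docker at all, which is rather the point: the abstraction is new, but the machinery underneath has been shipping in the kernel for years.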

This feels like one of those technologies that you either get ahead of or get left behind by. I do not want to be the sysadmin who is still manually provisioning servers five years from now while everyone else has moved to containers.

The future of deployment just changed. I can feel it.
