
Linux Containers Before Docker: LXC, Cgroups, and Namespaces

The container landscape before Docker existed, and why LXC, cgroups, and namespaces matter more than most people realize

I have spent the last few weeks deep in Linux container territory, and I want to write about it because I think this technology is going to be important. Not in the distant future, but soon.

Most people I talk to at work think virtualization means VMware or KVM. Full virtual machines with their own kernel, their own OS, their own everything. And that works, but it is heavy. Spinning up a VM takes minutes. Each one eats hundreds of megabytes of RAM just for the OS overhead before your application even starts.

There is another way. It has been hiding in the Linux kernel for years, and I think it deserves a lot more attention than it gets.

The Building Blocks

Linux containers are not a single technology. They are built from two kernel features that work together: cgroups and namespaces. Understanding these two pieces is the key to understanding everything else.

Cgroups (control groups) let you limit and account for resource usage. You can say "this group of processes gets a maximum of 512MB of RAM and 50% of one CPU core." The kernel enforces it. If the processes in a cgroup exceed their memory limit, the kernel's OOM killer steps in and kills one of them. No negotiation.

Google contributed cgroups to the Linux kernel back in 2007. They were using something similar internally to manage their massive workloads, and they upstreamed it. Think about that for a moment: the same basic resource isolation that runs Google's infrastructure is available in the kernel on your machine right now.
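Under the hood, this is just a filesystem interface. Here is a minimal sketch of setting a memory limit by hand, assuming the cgroup v1 memory controller is mounted at the usual path; the group name demo is my own arbitrary choice, and the writes need root:

```shell
#!/bin/sh
# Sketch: cap a shell's memory at 512MB via the cgroup v1 memory controller.
CG=/sys/fs/cgroup/memory/demo    # "demo" is an arbitrary group name
LIMIT=$((512 * 1024 * 1024))     # 512MB expressed in bytes

if [ -w /sys/fs/cgroup/memory ]; then
    mkdir -p "$CG"
    echo "$LIMIT" > "$CG/memory.limit_in_bytes"  # set the hard memory cap
    echo $$ > "$CG/tasks"                        # move this shell into the group
    echo "shell $$ is now limited to $LIMIT bytes"
else
    echo "memory cgroup not writable; run as root on a cgroup v1 host"
fi
```

Every process you spawn from that shell inherits the group, which is exactly how a container gets its limits for free.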

Namespaces provide isolation. They make a process think it has its own private view of the system. There are several types:

  • PID namespace: Process 1 inside the container is not process 1 on the host. The container has its own process tree.
  • Network namespace: The container gets its own network stack, its own IP address, its own routing table.
  • Mount namespace: The container sees its own filesystem tree.
  • UTS namespace: The container can have its own hostname.
  • IPC namespace: Inter-process communication is isolated.
  • User namespace: UID 0 inside the container does not have to be UID 0 on the host.
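You do not need LXC to play with these. The unshare tool from util-linux wraps the underlying syscall directly. A rough sketch (needs root; the hostname is just an example of mine):

```shell
#!/bin/sh
# Sketch: give a shell its own UTS, PID, and mount namespaces via unshare(1).
# Inside, PID 1 is our shell and the hostname change is invisible to the host.
if [ "$(id -u)" -eq 0 ]; then
    unshare --uts --pid --fork --mount-proc /bin/sh -c '
        hostname container-demo   # only this UTS namespace sees the new name
        hostname
        ps ax                     # a tiny private process tree, starting at 1
    ' || echo "namespace creation not permitted in this environment"
else
    echo "run as root to create new namespaces"
fi
```

When the shell exits, the namespaces are garbage-collected with it. Nothing on the host ever noticed.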

Put cgroups and namespaces together and you get something powerful: a process that thinks it is running on its own machine, with its own resources, its own filesystem, its own network, but it is actually sharing a kernel with the host and every other container on that host.

LXC: Linux Containers

LXC is the tool that ties all of this together into something usable. It stands for Linux Containers, and it provides a userspace interface for creating and managing containers using cgroups and namespaces.

I have been running LXC containers on my test machines and the difference compared to VMs is dramatic. A container starts in under a second. Not minutes, not even tens of seconds. Under one second. You run lxc-start and the container is up.

The resource overhead is almost nothing. There is no guest kernel. There is no hypervisor layer. The processes inside the container run directly on the host kernel. They are just isolated from everything else using namespaces and constrained using cgroups.

Here is what a basic LXC container creation looks like:

# Create a container from the Ubuntu template
lxc-create -n mycontainer -t ubuntu

# Start it in the background (-d detaches from the console)
lxc-start -n mycontainer -d

# Get a console inside it
lxc-console -n mycontainer

That is it. You now have a running Ubuntu environment, isolated from your host, sharing the host kernel, using minimal resources.
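For completeness, the rest of the lifecycle is just as terse. A sketch, assuming the mycontainer name from above and the LXC tools on the path:

```shell
#!/bin/sh
CONTAINER=mycontainer   # the container created above

if command -v lxc-info >/dev/null 2>&1; then
    lxc-info -n "$CONTAINER"      # show its state and PID
    lxc-stop -n "$CONTAINER"      # shut it down cleanly
    lxc-destroy -n "$CONTAINER"   # delete it when you are done
else
    echo "LXC tools not installed; commands shown for reference only"
fi
```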

Why This Matters

I keep thinking about what this means for the infrastructure work I do every day. Right now, when a developer needs a new environment, we provision a VM. That takes time, uses real resources, and we end up with dozens of VMs that are mostly idle but still consuming memory and disk.

With containers, I could give every developer their own isolated environment on a single physical server. I could run ten, twenty, fifty containers on hardware that currently runs three or four VMs. The density improvement is not incremental; it is an order of magnitude.

The startup time matters too. In our current setup, deploying a new service means provisioning infrastructure, and that has become a bottleneck. If containers can start in under a second, deployment becomes a fundamentally different problem.

The Rough Edges

I should be honest about the current state of things. LXC is functional but not polished. The documentation is sparse. The tooling is bare bones compared to what VMware or even KVM gives you. There is no equivalent of vCenter for containers, no nice management UI, no enterprise support contract.

Security is a real concern. Containers share a kernel with the host. If a process inside a container finds a kernel exploit, it potentially compromises the host and every other container running on it. VMs have a much stronger isolation boundary because the hypervisor provides a hardware-level separation.

Networking is also more complex than it needs to be. LXC gives you bridge networking out of the box, but anything more sophisticated requires manual configuration of iptables rules and bridge interfaces. Compared to the networking options in VMware, it feels primitive.
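For reference, the bridged setup I am using looks roughly like this in the container's config file. These are the LXC-era key names; the bridge name and address come from my lab setup, so treat them as placeholders:

```
# /var/lib/lxc/mycontainer/config (excerpt)
lxc.network.type = veth          # virtual ethernet pair into the container
lxc.network.link = br0           # host bridge to attach the host end to
lxc.network.flags = up           # bring the interface up at container start
lxc.network.ipv4 = 10.0.3.50/24  # optional static address
```

Four lines is tolerable, but you still have to create and manage br0 yourself, and anything like NAT or port forwarding is on you and iptables.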

And the ecosystem is thin. There is no standard way to package and distribute container images. If I build a container configuration that works perfectly, sharing it with a colleague means copying files around and hoping their host kernel is compatible. There is no "container hub" where you can pull pre-built environments.

Looking Forward

Despite the rough edges, I am convinced that OS-level virtualization is the future for a huge category of workloads. Not for everything, and VMs are not going away, but for application deployment and development environments, containers make too much sense to ignore.

The performance characteristics are just too compelling. Sub-second startup. Near-native execution speed. A fraction of the memory overhead. If someone builds better tooling around these kernel primitives, better packaging, better networking, better orchestration, containers will change how we think about deploying software.

I have been reading about some projects trying to make containers more accessible. There is work happening on better image formats, on simplifying the namespace and cgroup configuration, on making container networking less painful. The kernel features are solid. What we need is the tooling to make them practical for everyday use.

For now, I am going to keep experimenting with LXC in my lab environment. I have a few ideas about using containers for our testing infrastructure, where we currently spin up VMs for each test run and then tear them down. Containers could make that process nearly instantaneous.
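A sketch of what that could look like, using lxc-clone's snapshot mode. The base container and test script names here are hypothetical, and snapshot clones need a backing store that supports them (LVM or btrfs, for example):

```shell
#!/bin/sh
BASE=base-env      # hypothetical pre-built container with our test dependencies
RUN=test-run-42    # hypothetical per-run container name

if command -v lxc-clone >/dev/null 2>&1; then
    lxc-clone -s -o "$BASE" -n "$RUN"          # -s: copy-on-write snapshot clone
    lxc-start -n "$RUN" -d
    lxc-attach -n "$RUN" -- /srv/run-tests.sh  # run the suite inside the container
    lxc-stop -n "$RUN"
    lxc-destroy -n "$RUN"                      # throw the whole environment away
else
    echo "LXC tools not installed; commands shown for reference only"
fi
```

Clone, test, destroy, in seconds instead of the minutes a VM cycle costs us today.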

I will write more about this as I learn. The kernel documentation for cgroups and namespaces is dense but worth reading if you are interested in understanding how this works at the lowest level. Start with the cgroups documentation in the kernel source tree and work your way up from there.

The container story is just beginning. I have a feeling we are going to be hearing a lot more about this in the next few years.
