
Google Announces Kubernetes

Google just open-sourced its container orchestration system, and it could change everything about how we run infrastructure.

Google just did something extraordinary. They announced an open-source container orchestration system called Kubernetes (from the Greek word for helmsman or pilot), and it is based on over a decade of experience running containers at Google scale with their internal system, Borg.

I have been reading every piece of information I can find, and I think this is the missing piece of the container puzzle.

The Problem Kubernetes Solves

Docker showed us how to package and run individual containers. CoreOS showed us how to build an operating system for containers. But neither of them fully answered the question: how do you manage hundreds or thousands of containers across dozens or hundreds of machines?

This is the orchestration problem, and it is genuinely hard.

When you run one container on one machine, everything is simple. You start the container, it runs, you are done. But what about when you need to run fifty copies of your web server container across ten machines? How do you decide which containers go on which machines? What happens when a machine dies and the containers on it need to be rescheduled somewhere else? How do you update your containers without downtime? How do containers find each other across machines?

These are not theoretical problems. They are the daily reality of anyone running containers in production at any meaningful scale. And until now, the solutions have been ad-hoc: custom scripts, basic tools like fleet, manual management, or expensive proprietary platforms.

Kubernetes promises to solve all of these problems in an open-source, vendor-neutral way. And it comes with Google's credibility of running containers at a scale that nobody else even approaches.

What We Know So Far

The initial release is early, but the design documentation and talks from Google engineers have been revealing.

Pods: The basic unit of deployment in Kubernetes is not a single container but a "pod," which is a group of containers that are always co-located and co-scheduled. Containers in the same pod share a network namespace (they can talk to each other on localhost) and can share storage volumes. This is a smart abstraction because many real applications consist of multiple tightly coupled processes.
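To make the abstraction concrete, here is a toy model of a pod in Python. This is my own sketch, not the Kubernetes API; the class and field names are invented:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model: a pod is a group of containers that are
# always scheduled together onto the same machine and share one network
# identity and one set of volumes.
@dataclass
class Container:
    name: str
    image: str

@dataclass
class Pod:
    name: str
    containers: list                  # co-located, co-scheduled as a unit
    ip: str = ""                      # one IP per pod; containers inside
                                      # talk to each other over localhost
    volumes: dict = field(default_factory=dict)  # shared by all containers

# A realistic pairing: an app container plus a log-shipping sidecar.
web = Pod(
    name="web-1",
    containers=[
        Container("app", "mycorp/webapp:1.0"),
        Container("log-shipper", "mycorp/logship:0.3"),
    ],
)
```

The key point the sketch captures: the unit of scheduling is the pod, not the container, so the app and its sidecar can never end up on different machines.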

Services: Kubernetes has a concept of services that provides a stable network endpoint for a set of pods. Pods are ephemeral (they can be created, destroyed, and moved), but services provide a consistent way to access them. When you create a service, Kubernetes assigns it a stable IP address and DNS name, and routes traffic to the appropriate pods.
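The service idea can be sketched in a few lines of Python. Again, this is a toy of mine, not the real implementation: a selector picks out the current set of backend pods by label, and traffic is spread across whatever matches right now:

```python
# Toy service: a stable front for an ephemeral set of pods.
class Service:
    def __init__(self, selector):
        self.selector = selector      # e.g. {"app": "web"}
        self._next = 0                # round-robin cursor (illustrative)

    def backends(self, pods):
        # A pod matches if it carries every label in the selector.
        return [p for p in pods
                if all(p["labels"].get(k) == v
                       for k, v in self.selector.items())]

    def route(self, pods):
        # Pods come and go, so backends are re-resolved on every request.
        matched = self.backends(pods)
        if not matched:
            raise RuntimeError("no backends match %r" % self.selector)
        pod = matched[self._next % len(matched)]
        self._next += 1
        return pod["ip"]

pods = [
    {"name": "web-1", "ip": "10.1.0.4", "labels": {"app": "web"}},
    {"name": "web-2", "ip": "10.1.0.7", "labels": {"app": "web"}},
    {"name": "db-1",  "ip": "10.2.0.2", "labels": {"app": "db"}},
]
svc = Service({"app": "web"})
```

Clients only ever see the service's stable address; which pod answers is decided underneath, per request.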

Replication Controllers: These ensure that a specified number of pod replicas are running at any given time. If a pod dies, the replication controller starts a new one. If there are too many pods, it terminates some. This is the self-healing mechanism that makes Kubernetes resilient.
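At its heart this is a reconciliation loop: compare desired state to observed state and act to close the gap. A minimal sketch, with an invented fake cluster standing in for the real machinery:

```python
# Hypothetical stand-in for cluster state, used to demonstrate the loop.
class FakeCluster:
    def __init__(self, pods):
        self.pods = list(pods)
        self._counter = 0

    def start_pod(self):
        self._counter += 1
        self.pods.append("web-new-%d" % self._counter)

    def stop_pod(self, pod):
        self.pods.remove(pod)

def reconcile(desired, cluster):
    # Converge observed state toward the desired replica count.
    while len(cluster.pods) < desired:
        cluster.start_pod()           # too few: start replacements
    while len(cluster.pods) > desired:
        cluster.stop_pod(cluster.pods[-1])  # too many: terminate extras
```

Run `reconcile` after any change (a pod dies, a machine is lost) and the count is restored. The real controller watches cluster state continuously rather than being invoked by hand, but the convergence logic is the same idea.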

Labels and Selectors: Instead of configuring things by specific machine names or IP addresses, Kubernetes uses labels (key-value pairs attached to objects) and selectors (queries that match labels). This is a flexible, declarative way to organize and reference groups of objects.
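Label matching is simple enough to show in full. A selector is just a set of required key-value pairs, and an object matches if it carries all of them (the data here is invented for illustration):

```python
# A selector matches an object if every key-value pair in the selector
# appears in the object's labels. Extra labels on the object are fine.
def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

def select(selector, objects):
    return [o["name"] for o in objects if matches(selector, o["labels"])]

objects = [
    {"name": "web-1", "labels": {"app": "web", "env": "prod"}},
    {"name": "web-2", "labels": {"app": "web", "env": "staging"}},
    {"name": "db-1",  "labels": {"app": "db",  "env": "prod"}},
]
```

The power is in the decoupling: a service, a replication controller, or an operator's query can each carve up the same set of objects along different dimensions, without anyone maintaining explicit membership lists.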

Why This Matters More Than You Think

Every major technology company has built internal container orchestration. Google has Borg. Twitter has Mesos (well, Apache Mesos, which came out of UC Berkeley but was heavily influenced by Twitter's scale). Facebook has their own system. These companies learned long ago that containers without orchestration are like bricks without architecture.

But none of them open-sourced their orchestration system. Until now.

Google open-sourcing Kubernetes means that everyone, from a startup running ten containers to an enterprise running ten thousand, gets access to orchestration concepts that were previously available only to the world's largest technology companies.

And it is not just the software. It is the patterns. The ideas about how to think about container workloads, how to handle service discovery, how to manage rolling updates, how to build self-healing systems. These ideas were locked inside Google. Now they are public.

How This Fits With Docker

An important question: how does Kubernetes relate to Docker?

Docker provides the container runtime: building images, running containers, managing the container lifecycle on a single machine. Kubernetes operates at a higher level: deciding which containers should run on which machines, ensuring they stay running, handling networking between them, and managing rolling updates.

Kubernetes uses Docker (or other container runtimes) under the hood. It is not a replacement for Docker. It is a layer on top of Docker that handles the orchestration that Docker alone cannot provide.

Think of it this way: Docker is the engine in a single car. Kubernetes is the traffic management system for an entire city. You need both, but they solve different problems.

The Competitive Landscape

Kubernetes is not the only container orchestration project. Docker itself is working on orchestration capabilities. Mesos, from Apache, handles cluster management and can run containers. fleet, from CoreOS, provides basic orchestration. Various cloud providers have their own container services.

But Kubernetes has a significant advantage: Google's operational experience. The design decisions in Kubernetes are not theoretical. They come from running billions of containers per week at Google. Every abstraction, every API, every operational pattern reflects lessons learned from over a decade of production experience.

That does not mean Kubernetes will automatically win. Being technically superior does not guarantee adoption (see: Betamax vs VHS). But the combination of Google's backing, strong technical design, and open-source licensing makes it a formidable contender.

What I Want to Try

I want to get my hands on Kubernetes as soon as possible. The code is on GitHub, and I plan to set up a small cluster in my lab environment. A few machines running CoreOS as the host OS, with Kubernetes managing containers on top.

I am particularly interested in a few areas:

Networking: How Kubernetes handles inter-container networking across hosts is one of the harder problems in container orchestration. The pod networking model, where every pod gets its own IP address and can communicate with any other pod without NAT, is elegant in concept. I want to see how it works in practice.
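One common way to get pod-per-IP without NAT is to carve a single cluster-wide range into a per-node subnet, so routing between nodes is just routing between subnets. A sketch of that arithmetic (the address range is an arbitrary example, not anything Kubernetes mandates):

```python
import ipaddress

# Carve one cluster range into per-node /24 subnets. Each pod then gets a
# real, routable IP from its node's subnet -- no NAT, no port mapping.
cluster_range = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(cluster_range.subnets(new_prefix=24))  # 256 node slots

def node_subnet(node_index):
    return node_subnets[node_index]

def pod_ip(node_index, pod_index):
    # pod_index starts at 1; index 0 is the subnet's network address.
    return node_subnet(node_index)[pod_index]
```

With this layout, a node's router only needs one rule per peer node ("10.244.3.0/24 lives on node 3"), and any pod can reach any other pod by its real address.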

Rolling updates: The ability to update a running application without downtime is essential for production use. Kubernetes claims to support rolling updates natively. I want to test how this works, how it handles failures during updates, and how it rolls back if something goes wrong.
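The shape of a rolling update is easy to sketch: replace old-version pods one at a time, and only remove an old pod after its replacement is confirmed healthy. This is my own toy of the idea, not how Kubernetes implements it; the callbacks are invented:

```python
# Toy rolling update. pods is a list of dicts with "name" and "image";
# start_pod(image) creates a replacement, stop_pod(pod) removes one, and
# healthy(pod) is whatever readiness check the operator trusts.
def rolling_update(pods, new_image, start_pod, stop_pod, healthy):
    for old in list(pods):                 # snapshot; we mutate pods below
        if old["image"] == new_image:
            continue                       # already updated
        new = start_pod(new_image)         # surge: add the replacement first
        if not healthy(new):
            stop_pod(new)                  # abandon the bad replacement
            raise RuntimeError("update aborted: new pod unhealthy")
        pods.append(new)
        stop_pod(old)                      # only now retire the old pod
        pods.remove(old)
```

Because the replacement comes up before the old pod goes away, capacity never drops during the update, and a failing health check halts the rollout with most of the old version still serving.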

Self-healing: If I kill a pod, does Kubernetes actually reschedule it? If I take down a node, do its pods move to other nodes? How quickly? How reliably?

The Big Picture

I have been writing about containers for over a year now. Docker for packaging. CoreOS for the host OS. And now Kubernetes for orchestration. The pieces of a complete container infrastructure are coming together.

We are watching the birth of a new way of running software. Not virtual machines and traditional configuration management, but containers and orchestration. Not pets, but cattle. Not manual operations, but declarative, self-healing systems.

This transition will take years. Most enterprises are still figuring out virtualization, let alone containers. But the direction is clear. And the fact that Google, one of the most sophisticated operators of infrastructure in the world, is sharing their approach with everyone is going to accelerate the transition dramatically.

I do not know if Kubernetes will become the dominant orchestration platform. It is too early to call. But I do know that the ideas it embodies, declarative infrastructure, self-healing systems, label-based organization, pod abstractions, are going to influence everything that comes after it.

Today is a good day to be in infrastructure.
