
2012: The Year the Cloud Became Real

A year-end reflection on how 2012 marked the turning point when cloud computing stopped being experimental and became the default

The year is almost over, and I want to take a step back and think about what happened in technology over the last twelve months. When I look at the trajectory of the industry from where I sit, one thing is clear: 2012 was the year the cloud stopped being a buzzword and became the way things are done.

Let me explain what I mean.

AWS Became Unavoidable

Amazon Web Services has been around since 2006, but this year it crossed a threshold. It went from being a platform that startups used because they could not afford their own servers to being a platform that large enterprises chose deliberately over their own data centers.

The numbers are staggering. AWS serves hundreds of thousands of active customers. Netflix, one of the largest streaming services in the world, runs entirely on AWS. Zynga, Reddit, Pinterest, Airbnb, and countless other high-traffic services run on AWS. These are not experiments or proof-of-concept projects. These are production workloads serving millions of users.

This year, AWS launched DynamoDB for managed NoSQL, announced Redshift for data warehousing, and continued expanding its service catalog at a pace that nobody else can match. Every re:Invent announcement (their first conference was this year) added more reasons to build on AWS rather than run your own infrastructure.

For someone like me, this shift has practical implications. More of the infrastructure work I do involves AWS services. The skills I need are shifting from "how do I configure a physical server" to "how do I architect a system using managed services." The job is not going away; it is transforming.

Google and Microsoft Joined the Fight

Google announced Compute Engine in June, entering the IaaS market with virtual machines on Google infrastructure. Microsoft continued to invest heavily in Azure, expanding beyond its initial .NET focus to support Linux workloads and open-source technologies.

The significance is not that Google and Microsoft launched cloud products. The significance is what it signals about the industry's direction. When three of the largest technology companies in the world are all betting heavily on cloud infrastructure, that tells you where the industry is going. These companies do not make bets this large on trends that might fizzle out.

The competition is already driving prices down and innovation up. AWS has cut prices multiple times this year. Google is competing on network performance. Microsoft is competing on enterprise integration and hybrid scenarios. We, the customers, benefit from all of it.

Mobile Became Dominant

This was also the year that mobile traffic became impossible to ignore. Smartphone adoption accelerated globally. In many markets, more people now access the internet on their phones than on desktop computers. Android, in particular, has made smartphones accessible at price points that put computing power in the hands of billions of people.

The connection between mobile growth and cloud computing is direct. Mobile applications need backend services. Those backend services need to scale elastically because mobile usage patterns are bursty and unpredictable. A game goes viral and suddenly you need 10x the server capacity for two weeks. A news event drives traffic spikes that would overwhelm static infrastructure.

The cloud is the only practical way to handle this. You cannot predict how many servers you will need when your user base can double overnight because someone shared your app on social media. You need infrastructure that can grow and shrink on demand, and that is exactly what cloud platforms provide.
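The elastic-scaling argument above boils down to one small decision loop: measure load, compute how many servers you need to hit a target per-server load, and clamp that to a floor and ceiling. Here is a minimal sketch of that decision in Python. Everything in it, the function name, the thresholds, the min/max bounds, is illustrative, not a real cloud provider API:

```python
# A minimal sketch of the scaling decision an auto-scaler makes.
# All names and numbers here are illustrative, not a real AWS API.

def desired_capacity(current_servers, requests_per_server,
                     target_per_server, min_servers=2, max_servers=100):
    """Return the server count needed to bring per-server load
    back to the target, clamped to a min/max range."""
    total_load = current_servers * requests_per_server
    needed = -(-total_load // target_per_server)  # ceiling division
    return max(min_servers, min(max_servers, needed))

# Normal day: 10 servers each handling 150 req/s, target 200 req/s.
print(desired_capacity(10, 150, 200))   # scale in: fewer servers suffice
# Viral spike: the same fleet suddenly sees 2000 req/s per server.
print(desired_capacity(10, 2000, 200))  # scale out hard, capped at the max
```

The point is not the arithmetic, which is trivial, but that the cloud lets this loop actually act: with physical servers, "needed" and "have" can differ by an order of magnitude for weeks while hardware is procured.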

What I Learned This Year

On a personal level, this year was transformative. I wrote about a lot of the technologies I explored: Linux containers and LXC, DynamoDB, configuration management tools, RAID and storage architecture, the Raspberry Pi. Each of these taught me something, but the meta-lesson is bigger than any individual technology.

The meta-lesson is that the infrastructure layer is being abstracted away. Not eliminated, abstracted. Someone still needs to understand how RAID works, how network packets flow, how operating systems manage memory. But increasingly, that someone works at AWS or Google or Microsoft, not at the company that is building the application.

This is an opportunity, not a threat. The engineers who understand both the traditional infrastructure and the cloud abstractions are the most valuable. They can make informed decisions about when to use a managed service and when to run their own. They can debug problems that span the abstraction boundary. They can architect systems that take advantage of cloud capabilities without falling into cloud-specific traps.

I have been investing in both. I still manage physical servers. I still configure RAID arrays and tune Linux kernels. But I also build on AWS, experiment with managed services, and think about architecture in terms of services rather than servers. The combination of both skill sets is, I believe, where the future is.

The Global Perspective

Something I want to note from my vantage point outside the US: the cloud is a great equalizer.

When infrastructure meant physical servers, geography mattered enormously. You needed to be near a data center. You needed local vendors for hardware procurement. You needed relationships with ISPs for connectivity. Companies in Silicon Valley had advantages in all of these areas.

The cloud eliminates most of these advantages. I can provision infrastructure in AWS from my desk anywhere in the world. The API is the same no matter where you are. The pricing is the same. The documentation is the same. The community is global and accessible to anyone with an internet connection.

This is creating opportunities for engineers everywhere that did not exist five years ago. Startups anywhere can build on the same infrastructure as startups in San Francisco. Remote engineering teams can manage global infrastructure without being physically present at the data center. The playing field is not perfectly level (time zones and bandwidth still matter), but it is more level than it has ever been.

Predictions for 2013

I want to put some predictions on record so I can look back next year and see how wrong I was.

Cloud adoption will accelerate. The enterprise resistance to cloud computing is weakening. Concerns about security, compliance, and reliability are being addressed by the cloud providers, and the cost and agility benefits are becoming undeniable.

Configuration management will become standard practice. Tools like Ansible, Puppet, and Chef will move from "nice to have" to "how do you not use one of these?" The infrastructure-as-code movement is gaining momentum, and managing servers by hand will increasingly be seen as unprofessional.
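What "infrastructure as code" means in practice is that the desired state of a server lives in a text file you can version-control, review, and reapply, instead of in someone's memory of which commands they typed. A minimal sketch of an Ansible playbook, in the early syntax the project uses this year; the host group and packages are made up for illustration:

```yaml
# Illustrative playbook (host group and packages are made up).
# Applied with: ansible-playbook site.yml
- hosts: webservers
  user: root
  tasks:
    - name: ensure nginx is installed
      action: apt pkg=nginx state=present
    - name: ensure nginx is running
      action: service name=nginx state=started
```

Running it twice is safe: each task describes a state, not a command, so a server that is already correct is left alone. That idempotence is what makes hand-managed servers look unprofessional by comparison.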

Containers will gain traction. The LXC ecosystem is improving, and I have a feeling that someone is going to build better tooling around Linux containers that makes them accessible to a broader audience. The performance advantages over VMs are too significant to ignore.

Mobile-first will become the default. Applications that do not have a mobile strategy will be at a competitive disadvantage. This will drive more cloud adoption as mobile backends need elastic scaling.

Looking Back, Looking Forward

When I started this blog, I was a student trying to figure out how the internet works. Now I am an infrastructure engineer building and managing systems that serve real users. The path from there to here was not straight, and it was not planned, but every step taught me something.

2012 was the year the cloud became real. Not just for startups, not just for Silicon Valley, but for the entire industry. The question is no longer whether to use the cloud. It is how to use it well.

I am going into 2013 with a longer list of things to learn than I have ever had. New AWS services. Better configuration management practices. Deeper understanding of distributed systems. More hands-on experience with containers.

It is a good time to be an infrastructure engineer. The tools are better than they have ever been, the problems are more interesting than they have ever been, and the opportunity to build things that matter is available to anyone willing to learn.

Here is to another year of learning. Let us see what 2013 brings.
