Google Compute Engine and the Three-Horse Cloud Race
Google just announced Compute Engine at I/O, and suddenly the cloud is a three-way competition between AWS, Azure, and Google.
Two days ago, at Google I/O, Google announced something called Google Compute Engine. Virtual machines running on Google's infrastructure, available to developers and businesses through an API. If that sounds familiar, it is because AWS has been offering exactly this with EC2 for six years.
But here is why this matters: Google is now the third major player to enter the infrastructure-as-a-service market. AWS has been doing it since 2006. Microsoft launched Azure in 2010. And now Google. Three of the largest technology companies in the world are all competing to rent you their servers.
This is going to change everything about how we buy and manage infrastructure.
What Google Announced
Google Compute Engine lets you spin up Linux virtual machines on Google's infrastructure. The initial offering is straightforward: you choose a machine type (varying CPU and memory configurations), attach persistent disk storage, and run your workload.
The specs are solid:
- Machine types from 1 to 8 virtual CPUs with up to 30 GB of RAM
- Persistent disks up to 10 TB
- Network throughput that scales with machine size
- Integration with Google Cloud Storage for object storage
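To make the "API-driven" part concrete, here is a rough sketch of what assembling an instance-creation request could look like. I have not used the service yet, so the field names and machine-type string below are my own placeholders for illustration, not the actual Compute Engine API schema:

```python
import json

def build_instance_request(name, machine_type, zone, disk_size_gb):
    """Assemble a JSON payload for creating a VM instance.

    Field names here are hypothetical stand-ins for whatever the real
    API expects; the point is that infrastructure becomes a data
    structure you construct in code rather than a ticket you file.
    """
    return {
        "name": name,
        "machineType": machine_type,  # e.g. a 1-8 virtual CPU configuration
        "zone": zone,
        "disks": [
            {
                "type": "PERSISTENT",
                "sizeGb": disk_size_gb,  # persistent disks scale up to 10 TB
            }
        ],
    }

request = build_instance_request("web-1", "standard-2-cpu", "us-central", 500)
print(json.dumps(request, indent=2))
```

Once provisioning is just a POST with a body like this, everything downstream (versioning your infrastructure, scripting deployments, tearing environments down nightly to save money) follows naturally.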
What stands out is Google's emphasis on performance and price. They are positioning themselves as having superior networking (no surprise, given that Google operates one of the largest private networks on the planet) and competitive pricing. The per-hour instance costs are in the same range as AWS, with some configurations noticeably cheaper.
The service is launching in limited preview, so I have not had a chance to use it hands-on yet. But the architecture is interesting because it reflects Google's internal infrastructure philosophy: massive scale, commodity hardware, software-defined everything.
The Three Competitors
Let me try to map out where each provider stands right now, because the differences are real and they matter.
AWS is the established leader by a wide margin. They launched EC2 in 2006, and they have spent six years building out an ecosystem of services around it. S3 for storage, RDS for managed databases, DynamoDB for NoSQL, Elastic Load Balancing, CloudFront for CDN, IAM for access management, CloudWatch for monitoring, VPC for networking, and dozens more. The breadth of AWS services is staggering, and every quarter they announce new ones.
The developer ecosystem around AWS is equally deep. There are books, courses, conferences, certifications, consulting firms, and an enormous community. If you have a problem with AWS, someone has probably solved it and written about it.
Microsoft Azure launched in 2010, initially with a strong focus on .NET and Windows workloads. They have been broadening their appeal, adding Linux VM support and open-source tooling, but the platform still feels most natural if you are coming from a Microsoft technology stack. Azure's biggest advantage is enterprise relationships. Microsoft has decades of enterprise sales infrastructure, and they are using it to bring existing customers to the cloud.
Azure also has a strong hybrid story, which matters for enterprises that cannot move entirely to the cloud. The connection between on-premises Active Directory and Azure AD, the Azure VPN Gateway, the ability to extend your corporate network into Azure: these features appeal to the large organizations that are Microsoft's core customer base.
Google Compute Engine is the newest entrant, but Google brings some unique strengths. Their global network is arguably the best in the world. Their experience running infrastructure at Google scale is unmatched. And they have a culture of engineering excellence that should produce a high-quality platform.
The weakness is obvious: the ecosystem is thin. AWS has hundreds of services and years of community knowledge. Google has virtual machines and storage. Building out the service catalog to match AWS will take years, and Google will be playing catch-up the entire time.
Why Competition Matters
I have been working with AWS for our infrastructure, and it has been transformative. But I have also been locked into one provider's way of doing things: their APIs, their tooling, their pricing model. When AWS raises prices or changes terms, my options are limited.
Having three serious competitors changes the dynamic. It gives customers leverage. When all three providers are fighting for your workloads, prices come down, features come faster, and customer service improves. This is basic economics, and it is good for everyone who buys cloud infrastructure.
I am already seeing this play out in pricing. AWS has cut prices multiple times over the past few years, and I suspect the pace of price reductions will accelerate now that Google and Microsoft are competing aggressively for market share.
There is also the innovation angle. Competition forces each provider to differentiate. AWS might double down on breadth of services. Google might focus on performance and developer experience. Microsoft might emphasize hybrid and enterprise integration. Each provider trying to out-innovate the others means we all get better tools.
The Portability Question
Here is the thing that keeps me up at night: portability. If I build my application using AWS-specific services (DynamoDB, SQS, SNS, and whatever managed services they ship next), moving to Google or Azure becomes a massive rewrite. If I stick to standard technologies (Linux VMs, PostgreSQL, Redis, nginx), I can move between providers more easily, but I give up the managed services that make the cloud compelling.
This is a real tension. The managed services are where the value is. Running my own PostgreSQL on an EC2 instance is not that different from running it on my own hardware. But using RDS, where AWS handles replication, backups, patching, and failover? That saves me real operational work.
The pragmatic answer is probably to use provider-specific services where they genuinely add value and to abstract the integration points so that switching providers is painful but not impossible. But I am not sure anyone has figured out the perfect balance yet.
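What "abstract the integration points" can mean in practice: define a small provider-neutral interface in your own code and keep the provider-specific calls behind it. This is a sketch of the idea, not a recommendation of any particular library; the class and method names are mine, and a real SQS-backed (or Google/Azure equivalent) implementation would go where the comment indicates:

```python
from abc import ABC, abstractmethod
from collections import deque

class MessageQueue(ABC):
    """Provider-neutral interface; application code depends only on this."""

    @abstractmethod
    def send(self, body):
        """Enqueue a message."""

    @abstractmethod
    def receive(self):
        """Dequeue one message, or return None if the queue is empty."""

class InMemoryQueue(MessageQueue):
    """Local implementation, useful for tests and development."""

    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)

    def receive(self):
        return self._messages.popleft() if self._messages else None

# A provider-specific backend (SQS today, a Google or Azure queue
# tomorrow) would implement the same interface. Switching providers
# then means swapping one class, not rewriting every call site.
queue = InMemoryQueue()
queue.send("resize-image:42")
print(queue.receive())
```

The switch is still painful (semantics like delivery guarantees and visibility timeouts differ between providers), but the pain is concentrated in one adapter instead of smeared across the whole codebase.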
What This Means for Infrastructure Engineers
If you are an infrastructure engineer in 2012, the ground is shifting under your feet. The skills that mattered five years ago (racking servers, configuring switches, managing SAN storage) are becoming less relevant for a growing category of workloads. The new skills are cloud architecture, automation, API-driven infrastructure, and cost optimization.
I do not think traditional infrastructure work is going away. Somebody has to run the data centers that the cloud providers operate. Regulated industries and certain workloads will stay on-premises for years. But the growth is in the cloud, and that is where the most interesting problems are.
For someone like me, this is enormously equalizing. I have the same access to AWS, Azure, and Google Compute Engine as an engineer in Silicon Valley. The API does not care where I am sitting. The infrastructure is global. The only thing that matters is whether I can design and build good systems on top of it.
My Prediction
I will go on record with a prediction: within five years, most new applications will be built cloud-first. Not cloud-optional, not cloud-compatible, but designed from the ground up to run on cloud infrastructure. The economics are too compelling, the velocity advantage is too large, and the operational burden of running your own infrastructure is too high for most organizations.
The three-horse race between AWS, Azure, and Google will drive prices down and capabilities up. Some workloads will use managed services heavily; others will run on plain VMs for portability. But the question will shift from "should we use the cloud?" to "which cloud should we use?"
I am going to sign up for the Google Compute Engine preview and compare it against the AWS setup I know well. The best way to evaluate these platforms is to build something real on each one and see where they help and where they get in the way.
Interesting times ahead.