re:Invent 2014: Lambda, Aurora, and the Serverless Future
AWS Lambda might be the most important cloud announcement since EC2, and I am still wrapping my head around it
AWS re:Invent just wrapped up in Las Vegas, and once again I am following remotely, staying up late to watch keynotes and refreshing the AWS blog compulsively. This year's announcements are, I think, more significant than last year's. Particularly one: AWS Lambda.
Let me go through what matters.
AWS Lambda: Computing Without Servers
Lambda is a service where you upload a function (a piece of code), and AWS runs it in response to events. You do not provision servers. You do not manage servers. You do not even think about servers. You write a function, configure a trigger (an S3 upload, an API call, a database change, a scheduled timer), and Lambda handles everything else.
You pay only for the compute time your function actually uses, measured in 100-millisecond increments. If your function runs for 200 milliseconds processing an uploaded image, you pay for 200 milliseconds of compute. When the function is not running, you pay nothing.
Let me say that again: when your code is not running, you pay nothing.
This is a fundamentally different model from anything that exists today. EC2 charges you for running instances, whether your application is processing requests or sitting idle at three in the morning. Lambda charges you only for actual execution time.
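The billing arithmetic is simple enough to sketch. The prices below are the launch pricing as I read it ($0.20 per million requests plus $0.00001667 per GB-second of compute), so treat the constants as assumptions; the structure of the calculation (duration rounded up to the next 100 ms, scaled by allocated memory) is the interesting part.

```javascript
// Assumed launch prices; check the current pricing page before relying
// on these numbers.
var PRICE_PER_REQUEST = 0.20 / 1e6;      // $0.20 per million requests
var PRICE_PER_GB_SECOND = 0.00001667;    // per GB-second of compute

function lambdaCost(invocations, durationMs, memoryMb) {
  // Duration is billed in 100 ms increments, rounded up.
  var billedMs = Math.ceil(durationMs / 100) * 100;
  // Compute is charged by GB-seconds: time multiplied by memory size.
  var gbSeconds = invocations * (billedMs / 1000) * (memoryMb / 1024);
  return invocations * PRICE_PER_REQUEST + gbSeconds * PRICE_PER_GB_SECOND;
}
```

Under those assumed prices, a million 200 ms invocations at 128 MB works out to roughly $0.62 for the month, and zero invocations cost exactly zero. That is the part that has no EC2 equivalent.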
Why Lambda Is Revolutionary
I have been working in infrastructure for years. My job is to provision, configure, maintain, and monitor servers. Lambda asks the question: what if you did not need servers at all?
Not "what if servers were automated" or "what if servers were easy to manage." What if there were literally no servers in the picture? You write code, you define when it should run, and the cloud figures out the rest. Resource allocation, scaling, availability, patching, monitoring: all handled.
This is what people are calling "serverless," and the name is simultaneously perfect and misleading. There are obviously still servers. They are in an AWS data center somewhere. But you, the developer or the operations person, never see them. Never manage them. Never think about them.
The implications for operations teams are profound. If Lambda becomes the standard way to run code, a significant portion of what I do today (provisioning servers, managing operating systems, patching, capacity planning) becomes unnecessary. Not less important. Unnecessary.
I am not sure how to feel about that. Excited about the technology. Slightly anxious about my career. Both at the same time.
What You Can Build With Lambda
The initial use cases that AWS is highlighting are event-driven workloads.
Image processing: A user uploads an image to S3. That triggers a Lambda function that resizes the image, creates thumbnails, and stores them back in S3. No server sitting idle waiting for uploads. The function runs only when there is work to do.
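A sketch of the event-handling half of that flow, which is the part that is new with Lambda; the real function would also call the S3 API and an image library to do the actual resizing. The event shape mirrors the S3 notification format (a Records array with bucket and key), and the thumbnail-prefix convention is my own invention for illustration.

```javascript
// Given an S3 event notification, work out what to read and where to
// write the thumbnail. The actual resize and the S3 get/put calls are
// omitted; this is only the event plumbing.
function thumbnailTask(s3Event) {
  var record = s3Event.Records[0];
  var bucket = record.s3.bucket.name;
  // Keys arrive URL-encoded, with "+" standing in for spaces.
  var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
  return {
    source: { bucket: bucket, key: key },
    // Illustrative convention: write thumbnails under a prefix, so
    // "uploads/cat.jpg" becomes "thumbnails/uploads/cat.jpg".
    destination: { bucket: bucket, key: 'thumbnails/' + key }
  };
}
```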
Data transformation: Records arrive in a Kinesis stream. A Lambda function processes each record, transforms it, and writes it to a database. Again, no server to manage. The function scales automatically with the volume of incoming data.
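The stream case looks similar, pared down here to the event handling. Kinesis hands Lambda a batch of records with base64-encoded payloads; the field names in my sample record (user_id, action) are invented for illustration, and a real function would write the results to a database rather than return them.

```javascript
// Decode and transform a batch of Kinesis records. Payloads arrive
// base64-encoded inside each record. (On the Node 0.10 runtime Lambda
// ships you would write new Buffer(...) instead of Buffer.from.)
function transformKinesisBatch(kinesisEvent) {
  return kinesisEvent.Records.map(function (record) {
    var json = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
    var entry = JSON.parse(json);
    // Illustrative transformation: normalize field names before the
    // result would be written to a database.
    return { userId: entry.user_id, action: entry.action };
  });
}
```

The point is what is missing: no consumer daemon to run, no checkpointing loop to babysit, no worker fleet to scale with the stream.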
Webhooks and API backends: Incoming requests trigger Lambda functions that process the request and return a response. There is no managed HTTP front end for Lambda yet, but if AWS ships one, you could build entire API backends without a single server.
Scheduled tasks: Lambda can run on a schedule, replacing cron jobs that currently require a dedicated server. A nightly data cleanup, an hourly report generation, a daily email: all can be Lambda functions triggered by a schedule.
These are the obvious use cases. The really interesting ones will come when people start thinking in terms of functions and events rather than servers and processes. That mental shift has not happened yet, but I think it will.
Amazon Aurora
The other major announcement is Aurora, a MySQL-compatible relational database engine built specifically for the cloud. AWS claims it delivers up to five times the throughput of standard MySQL while being fully compatible with existing MySQL applications.
Aurora is interesting because it rethinks the database architecture for a cloud environment. The storage layer is distributed across three availability zones with six copies of your data. It continuously backs up to S3. It automatically detects and repairs disk failures. It can have up to fifteen read replicas.
The key selling point is that it is MySQL-compatible. You can take an existing MySQL application, point it at Aurora, and it should work without code changes. But underneath, the storage engine is completely different from standard MySQL, designed for the reliability and performance characteristics of cloud infrastructure.
For anyone running MySQL on RDS (or self-managed on EC2), Aurora is worth evaluating. The cost is higher than standard RDS MySQL, but if the performance and reliability claims hold up, the total cost of ownership could be lower.
Other Notable Announcements
AWS Key Management Service (KMS): A managed service for creating and controlling encryption keys. This is important for compliance and security. Managing encryption keys properly is hard, and a managed service that integrates with other AWS services makes it significantly easier.
AWS Config: A service that provides a detailed inventory of your AWS resources and their configurations, along with a history of configuration changes. This is similar to CloudTrail (which logs API calls) but focused on the actual state of resources over time. Together, CloudTrail and Config give you a comprehensive audit trail of what your infrastructure looks like and who changed it.
Amazon EC2 Container Service (ECS): AWS's container orchestration service, announced in preview. ECS lets you run Docker containers on EC2 instances with AWS managing the orchestration. This is AWS's answer to Kubernetes, though from what I have seen so far, it is less ambitious in scope.
AWS CodeDeploy: A deployment service that automates code deployments to EC2 instances. You define a deployment configuration, and CodeDeploy handles rolling the new code out across your fleet, tracking instance health and rolling back automatically on failure.
The Serverless Trajectory
Stepping back and looking at the overall trajectory of AWS, a pattern emerges.
First came EC2: virtual machines in the cloud. You still managed the OS, the middleware, the application. You just did not manage the hardware.
Then came managed services like RDS, ElastiCache, and Redshift: you managed the application and its data, but not the database engine or the caching layer.
Now Lambda: you manage nothing but your code. No OS. No middleware. No application server. No capacity planning. Nothing.
Each step removes a layer of infrastructure that you need to manage. The logical endpoint of this trajectory is a world where you write business logic and the cloud handles everything else.
We are not there yet. Lambda is limited in many ways: runtime constraints, maximum execution time, cold start latency, limited language support (Node.js only at launch). But the direction is clear.
What This Means for Me
I keep coming back to the career implications. If serverless computing becomes mainstream, what happens to infrastructure engineers?
I think the answer is that the role evolves rather than disappears. Someone still needs to design architectures. Someone still needs to think about security, compliance, cost optimization, and disaster recovery. Someone still needs to understand the trade-offs between Lambda and EC2, between Aurora and self-managed databases, between managed services and custom solutions.
The tactical, hands-on work of managing servers will diminish. The strategic, architectural work of designing systems will increase. The engineers who thrive will be the ones who can think at the system level, not just the server level.
I need to start learning to think that way. Less "how do I configure this server" and more "how do I architect this system." Less operational and more strategic.
Lambda might be the most important cloud announcement since EC2 itself. I need to understand it deeply, experiment with it, and figure out where it fits in the infrastructure landscape.
The future is serverless. Or at least, a lot more serverless than it is today.