re:Invent 2016: Serverless Goes Mainstream
AWS re:Invent just wrapped up, and the message is clear: serverless is no longer experimental; it is the direction.
I just spent the week at AWS re:Invent in Las Vegas, and my head is still spinning. This was my first re:Invent, and the scale alone is overwhelming. Tens of thousands of attendees spread across multiple venues on the Strip, hundreds of sessions, keynotes that run three hours long, and an expo hall that takes a full day to walk through properly.
But underneath the spectacle, there was a clear theme this year: serverless computing has graduated from interesting experiment to mainstream platform. The announcements around Lambda, Step Functions, and the broader serverless ecosystem signal that AWS sees this as the future of cloud computing. And after spending the week talking to engineers from companies of all sizes, I think they might be right.
The Big Announcements
AWS Step Functions. This was the announcement that excited me most. Step Functions is a visual workflow service that lets you coordinate multiple Lambda functions (and other AWS services) into serverless workflows. You define your workflow as a state machine using Amazon States Language (a JSON-based specification), and Step Functions handles execution, retry logic, error handling, and state tracking.
Why this matters: the biggest challenge with Lambda has been orchestrating complex processes. A single Lambda function is great for a single task, but real applications involve sequences of tasks with branching logic, error handling, parallel execution, and state management. Before Step Functions, you had to build that orchestration yourself, typically with SQS queues, DynamoDB for state, and a lot of custom glue code. Step Functions provides a managed solution for the orchestration problem.
The state machine model feels natural for many enterprise workflows. Order processing, data pipelines, approval workflows, ETL jobs: these are all sequences of steps with conditional logic and error handling. Being able to define them declaratively and have AWS manage the execution is a significant productivity gain.
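To make the declarative model concrete, here is a sketch of an Amazon States Language definition built as a Python dictionary and serialized to JSON. The workflow, state names, and Lambda ARNs are all hypothetical, and the retry/catch settings are illustrative rather than recommended values.

```python
import json

# Hypothetical order-processing workflow: validate, then charge the card,
# with retry logic on the charge step and a terminal failure state.
definition = {
    "Comment": "Order processing sketch (illustrative only)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-card",
            # Retry transient failures a few times before giving up.
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"],
                 "IntervalSeconds": 2, "MaxAttempts": 3}
            ],
            # Any unhandled error routes to the failure state.
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "OrderFailed"}],
            "End": True,
        },
        "OrderFailed": {"Type": "Fail", "Error": "ChargeError"},
    },
}

# This JSON string is what you would hand to Step Functions as the
# state machine definition.
asl_json = json.dumps(definition, indent=2)
```

The point is that retries, branching, and failure handling live in the definition, not in glue code scattered across functions.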
AWS X-Ray. Debugging distributed systems is hard. When a request flows through an API Gateway, hits a Lambda function, writes to DynamoDB, triggers another Lambda function, and sends a message to SNS, figuring out where the latency or the error occurred is a challenge. X-Ray provides distributed tracing for AWS services, allowing you to visualize the path of a request and identify bottlenecks.
This has been a gap in the serverless story since Lambda launched. Traditional APM tools like New Relic and Datadog work well for server-based applications, but they struggle with serverless architectures where there is no persistent process to instrument. X-Ray is purpose-built for this environment.
Lambda@Edge. Run Lambda functions at CloudFront edge locations, closer to your users. The use cases include request/response manipulation, A/B testing, authentication at the edge, and dynamic content generation. The execution environment is more constrained than standard Lambda (lower memory, shorter timeout), but for lightweight processing at the edge, it opens interesting possibilities.
C# support for Lambda. Lambda launched with Node.js, added Java and Python, and now supports C# via .NET Core. This is significant for enterprises with large .NET codebases. The ability to write Lambda functions in C# lowers the adoption barrier for teams that are already invested in the Microsoft ecosystem.
Dead letter queues for Lambda. When an asynchronous Lambda invocation fails after the configured retry attempts, the event can now be sent to an SQS queue or SNS topic for later processing. This addresses one of the operational pain points with Lambda: knowing when functions fail silently.
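When the DLQ target is SQS, Lambda writes the original invocation payload as the message body and attaches failure details as message attributes. A small parser makes redrive tooling straightforward; the attribute names below follow the documented RequestID/ErrorMessage convention, but treat the exact shape as an assumption and verify against your queue.

```python
import json

def parse_dlq_message(message):
    """Recover the original event and failure details from a message that
    Lambda delivered to an SQS dead letter queue. The body is the original
    invocation payload; RequestID and ErrorMessage arrive as message
    attributes (assumed names; confirm against real messages)."""
    attrs = message.get("MessageAttributes", {})
    return {
        "event": json.loads(message["Body"]),
        "request_id": attrs.get("RequestID", {}).get("StringValue"),
        "error": attrs.get("ErrorMessage", {}).get("StringValue"),
    }

# A hypothetical message as it might arrive from the DLQ:
sample = {
    "Body": json.dumps({"orderId": "o-123"}),
    "MessageAttributes": {
        "RequestID": {"StringValue": "req-1"},
        "ErrorMessage": {"StringValue": "Task timed out"},
    },
}
info = parse_dlq_message(sample)
```

From here you can log the failure, alert, or replay the original event once the underlying issue is fixed.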
Serverless at Scale
What struck me most at re:Invent was not the announcements themselves, but the maturity of the serverless conversations. Last year (based on what colleagues told me, as this was my first re:Invent), serverless talks were mostly introductory: "What is Lambda?" and "Build your first serverless application." This year, the talks were about production patterns: "Serverless architecture best practices," "Testing serverless applications," "Managing serverless at enterprise scale."
The architectural patterns emerging around serverless are getting sophisticated:
Event-driven microservices. Instead of building REST APIs that call each other synchronously, teams are building systems where services communicate through events. A Lambda function processes an order, puts an event on an SNS topic, and other Lambda functions react to that event independently. This reduces coupling and makes the system more resilient.
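A minimal sketch of the producing side of that pattern, with the publish call injected so the handler can be exercised without AWS credentials. In production the injected function would wrap boto3's `sns.publish`; the event fields and names are illustrative.

```python
import json

def build_order_event(order):
    """Shape an 'order processed' event for an SNS topic. Downstream
    Lambda subscribers (inventory, email, analytics) each react
    independently, without the producer knowing about them."""
    return {
        "type": "OrderProcessed",
        "orderId": order["id"],
        "total": order["total"],
    }

def handler(event, context, publish=None):
    """Lambda entry point: process the order, then emit the event.
    `publish` is injected for testability; in production it would be
    a thin wrapper around boto3's sns.publish."""
    order = event["order"]
    message = build_order_event(order)
    if publish:
        publish(json.dumps(message))
    return message

result = handler({"order": {"id": "o-42", "total": 19.99}}, None)
```

Separating event construction from transport keeps the core logic unit-testable on a laptop, which partially offsets the local-testing pain discussed later.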
CQRS with Lambda and DynamoDB. Command Query Responsibility Segregation separates read and write paths. Writes go through Lambda functions that validate and store data in DynamoDB. DynamoDB Streams trigger Lambda functions that update read-optimized views in another DynamoDB table or Elasticsearch. The pattern is complex but powerful for high-throughput applications.
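The read-side projection reduces to a function from stream records to view documents. A sketch, using the DynamoDB Streams record shape (typed attributes like `{"S": ...}` and `{"N": ...}`); the attribute names are made up for illustration, and a real deserializer would handle the full attribute grammar.

```python
def from_ddb(attr):
    """Unwrap a DynamoDB-typed attribute value. Only the string and
    number types used in this sketch are handled."""
    if "S" in attr:
        return attr["S"]
    if "N" in attr:
        return float(attr["N"])
    raise ValueError("unsupported attribute type: %r" % attr)

def project(record):
    """Turn one DynamoDB Streams record from the write table into a
    denormalized document for the read-optimized view. Deletes return
    None here; a real projector would remove the view entry instead."""
    if record["eventName"] not in ("INSERT", "MODIFY"):
        return None
    image = record["dynamodb"]["NewImage"]
    return {key: from_ddb(value) for key, value in image.items()}

doc = project({
    "eventName": "INSERT",
    "dynamodb": {"NewImage": {"orderId": {"S": "o-7"}, "total": {"N": "42.5"}}},
})
```

A Lambda function subscribed to the stream would loop over `event["Records"]`, call a projector like this, and write the results to the read store.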
Saga pattern for distributed transactions. In a microservices architecture, traditional database transactions do not work across service boundaries. The saga pattern, implemented with Step Functions, coordinates a sequence of local transactions. If one step fails, compensating transactions undo the previous steps. Step Functions makes this pattern significantly easier to implement.
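The core mechanics of a saga fit in a few lines: run each step, remember its compensation, and on failure unwind in reverse order. This is a local sketch of the control flow that Step Functions' Catch transitions let you express declaratively; the step names are invented.

```python
def run_saga(steps):
    """Execute (action, compensate) pairs in order. If any action raises,
    run the compensations for the steps that already completed, in
    reverse, then report failure."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):
                undo()
            return False
    return True

log = []

def fail():
    raise RuntimeError("charge failed")

# Reserve inventory succeeds; charging the card fails, so the
# reservation is compensated (released).
steps = [
    (lambda: log.append("reserve"), lambda: log.append("release")),
    (fail, lambda: log.append("refund")),
]
ok = run_saga(steps)
```

In a Step Functions implementation, each action and each compensation would be its own Lambda task state, and the unwind path would be a Catch branch.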
The Enterprise Question
The question I kept hearing in hallway conversations was: "Is serverless ready for enterprise?" The answer, as of this week, is "getting there."
The operational maturity of serverless has improved dramatically. Dead letter queues, X-Ray tracing, Step Functions for orchestration, and enhanced CloudWatch metrics address many of the observability and reliability concerns that enterprises have. But gaps remain.
Cold starts. Lambda functions that have not been invoked recently experience latency on the first invocation while the runtime is initialized. For Java functions, cold starts can be several seconds. For real-time, latency-sensitive applications, this is a problem. Workarounds exist (keeping functions warm with scheduled invocations, using lighter runtimes), but they feel like hacks.
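The common warming workaround is a scheduled rule (e.g. CloudWatch Events every few minutes) that invokes the function with a marker payload, which the handler short-circuits before doing real work. The `"warmer"` key below is our own convention, not an AWS field.

```python
def handler(event, context):
    """Short-circuit scheduled keep-warm pings so the container stays
    resident without running real business logic. The 'warmer' marker
    is a team convention carried in the scheduled event payload."""
    if event.get("warmer"):
        return {"warmed": True}
    # Real work happens here for genuine invocations.
    return {"processed": event.get("id")}

ping = handler({"warmer": True}, None)
real = handler({"id": "job-1"}, None)
```

It works, but it is exactly the kind of hack the platform should eventually make unnecessary.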
VPC integration. Lambda functions that need to access resources in a VPC (like an RDS database) experience additional cold start latency because ENIs (Elastic Network Interfaces) need to be attached. This can add 10 to 15 seconds to cold starts, which is unacceptable for most use cases. AWS acknowledged this as a known issue and is working on improvements.
Deployment and testing. The tooling for deploying and testing serverless applications is still maturing. Frameworks like the Serverless Framework and AWS SAM are helping, but the development experience is not as polished as traditional application development. Local testing of Lambda functions, in particular, requires emulators that do not perfectly replicate the cloud environment.
Vendor lock-in. Serverless architectures are deeply integrated with AWS services. A system built on Lambda, API Gateway, DynamoDB, Step Functions, and SNS is effectively impossible to port to another cloud provider without a complete rewrite. For enterprises that want multi-cloud flexibility, this is a real concern.
My Take
I have been building and evaluating cloud architectures for six months now, and serverless is the most exciting development I have seen. Not because it eliminates servers (it does not; the servers are still there, just managed by AWS), but because it eliminates the operational overhead that consumes so much engineering time.
No patching. No capacity planning. No scaling configuration. No idle resource costs. You write a function, deploy it, and it runs when triggered. The economics of paying only for actual execution time, billed in 100-millisecond increments, fundamentally changes how you think about architecture.
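The billing model is simple enough to sketch: duration rounds up to the next 100 ms increment, scaled by the memory allocation. The rates below are the published list prices at the time of writing; the free tier is ignored, so treat this as a back-of-the-envelope estimate, not a billing calculator.

```python
import math

GB_SECOND = 0.00001667   # published price per GB-second at the time
PER_REQUEST = 0.0000002  # $0.20 per million requests

def invocation_cost(duration_ms, memory_mb):
    """Rough cost of one Lambda invocation: billed duration rounds up
    to the next 100 ms, then scales by the memory allocation in GB."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return PER_REQUEST + gb_seconds * GB_SECOND

# A 130 ms run at 512 MB is billed as 200 ms.
cost = invocation_cost(130, 512)
```

Run the numbers for a bursty internal tool that fires a few thousand times a day and the contrast with an always-on instance is stark.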
Step Functions, in particular, changes my thinking about workflow automation. We have internal processes that are currently implemented as cron jobs, bash scripts, and manual handoffs. Modeling those as state machines in Step Functions, with built-in retry logic, error handling, and audit trails, would be a massive improvement.
I am going back to the office with a list of pilot projects. A few event-driven data pipelines. An internal workflow automation. Maybe a small API that does not need sub-second latency. Nothing mission-critical yet, but enough to build experience and demonstrate the value.
The direction is clear. Serverless will not replace every workload. Some applications need persistent connections, predictable latency, or runtime environments that Lambda does not support. But for a growing category of workloads, especially event-driven, bursty, and workflow-oriented applications, serverless is the better architecture.
re:Invent 2016 made me believe that. The announcements, the case studies, and the energy in the sessions all pointed the same direction. Serverless is not a novelty anymore. It is infrastructure.