Recently, I found myself driving a pilot project that had all the hallmarks of complexity: on-demand batch processing, credit card validation, hierarchical approval workflows, discounts that varied by sale day, and, of course, a flurry of confirmation emails at the end (with the occasional hair-pulling moment). It also needed to scale on a budget bigger than a James Cameron movie.
What did I design? A containerized microservices architecture running on serverless functions. (Because why not make life interesting?)
For the Uninitiated: Microservices in a Nutshell
Imagine an application that’s less like a monolithic, one-size-fits-all sweater and more like a closet full of tailored outfits. Microservices architecture breaks down an app into loosely coupled services, each owning a specific piece of functionality.
These services chat with each other using lightweight protocols like HTTP or messaging queues. The result? A system that’s agile, flexible, and resilient - kind of like a Nadia Comaneci level gymnast, but for your application.
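To make "loosely coupled services chatting over lightweight protocols" concrete, here is a minimal sketch of one such service using only Python's standard library. The "discounts" service, its endpoint, and the 10%-on-sale-days rule are all hypothetical stand-ins for illustration, not our actual pilot code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def discount_for(path: str) -> int:
    """Illustrative business rule (an assumption): 10% off on sale days, else 0."""
    return 10 if "sale" in path else 0

class DiscountHandler(BaseHTTPRequestHandler):
    """A hypothetical 'discounts' microservice: it owns one piece of
    functionality and exposes it over plain HTTP as JSON."""
    def do_GET(self):
        body = json.dumps({"discount_pct": discount_for(self.path)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To run the service on its own port (each service runs, scales, and
# fails independently of the others):
#   HTTPServer(("localhost", 8080), DiscountHandler).serve_forever()
```

Other services would call this one over HTTP (or drop messages on a queue) rather than importing its code, which is what keeps the coupling loose.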
The catch? Managing the infrastructure for all these tiny services can quickly feel like herding cats in a server farm. That’s where serverless computing came to save the day, and we lived to fight another one.
Wait, What’s Serverless Again?
Sorry, there are still servers involved. Despite its misleading name, serverless computing doesn’t eliminate servers. (Sorry, IT folks.) Instead, it abstracts away the nitty-gritty of server management. In simpler terms, serverless is like hiring an event planner for your app: you hand the infrastructure headaches over to cloud providers like AWS, Azure, or Google Cloud, they take care of provisioning, scaling, and maintenance, and you focus on the fun stuff - writing code.
The best part? You pay only for what you use. No more paying for idle resources or spinning up servers for sporadic tasks. It’s like ordering à la carte instead of an all-you-can-eat buffet.
What We Learnt, and Why Serverless Is a Microservices MVP
1. Simplified Infrastructure Management
- Serverless platforms take care of provisioning, scaling, and maintenance automatically.
- Developers can focus on building business logic instead of sweating over server setups or security patches.
2. Scalability That’s Actually Scalable
- Serverless functions scale on demand to handle spikes or lulls in traffic.
- Techniques like function warming can reduce cold start delays, keeping performance snappy.
3. Cost-Effective (Yo, CFOs)
- With pay-per-use pricing, you’re billed only for the resources consumed.
- This model shines for applications with unpredictable or variable workloads.
4. Bulletproof Fault Isolation
- Each microservice is deployed as an independent serverless function, isolating failures and preventing ripple effects. This makes the system more reliable and resilient.
5. Speedy Deployments
- Serverless functions can be deployed in minutes, accelerating time-to-market.
- CI/CD pipelines integrate seamlessly with serverless platforms, making updates a breeze.
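The payoff of points 1 and 5 is how little ceremony a deployable unit needs. Below is a sketch of a Lambda-style function handler; the `event`/`context` parameter names follow AWS's convention, but the payload shape (an "order" with items) is our own assumption for illustration.

```python
import json

def handler(event, context=None):
    """A serverless-style entry point: just business logic, no server
    provisioning in sight. The event shape here is a hypothetical order."""
    order = event.get("order", {})
    # Compute the order total -- a stand-in for real business logic.
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("id"), "total": total}),
    }
```

Package that function, point the platform at `handler`, and the provider worries about everything beneath it.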
The Challenges That Gave Us Grey Hairs
Of course, no architecture is without its quirks - if it’s too easy, you’re probably doing it wrong. Here are the Shane Warne-level googlies we faced:
1. Cold Starts
When functions spin up after a period of inactivity, they can take a moment to get going, causing latency. It’s like waiting for your chai latte to brew when you’re already late.
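One common mitigation (mentioned earlier as function warming) is to have a scheduler ping the function every few minutes with a sentinel payload so the runtime never goes fully cold. The `warmup` key below is our own convention, not a platform API; a sketch:

```python
def handler(event, context=None):
    """Handler that recognizes warm-up pings. A cron-style scheduler
    (e.g. a periodic rule in your cloud provider) would invoke this with
    {"warmup": True}; the key name is our assumption, not a standard."""
    if event.get("warmup"):
        # Exit as fast as possible: the goal is only to keep the runtime warm.
        return {"warmed": True}
    # ... real request handling below ...
    return {"processed": event.get("id")}
```

The trade-off: warm-up invocations cost (a little) money, so it is worth doing only for latency-sensitive paths.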
2. Execution Time Limits
Serverless workloads don’t hang around for long: most platforms impose execution timeouts, and your task gets cut off if it runs over. Long-running tasks? You might need a plan B, and sometimes a C, D & E. Thankfully, our programs were optimized to finish each batch run within 7 minutes.
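One way to stay under a timeout (a sketch of the general chunking idea, not our pilot's actual code) is to process records against a time budget and hand the leftovers back so the caller can re-invoke with them. The function and parameter names here are our own:

```python
import time

def process_batch(records, budget_s, clock=time.monotonic):
    """Process as many records as the time budget allows.
    Returns (done, remaining); the caller re-invokes with `remaining`,
    e.g. by re-enqueueing it as a fresh event."""
    stop_at = clock() + budget_s
    done = []
    remaining = list(records)
    while remaining and clock() < stop_at:
        done.append(remaining.pop(0))  # stand-in for real per-record work
    return done, remaining
```

Setting the budget comfortably below the platform's hard timeout leaves headroom to persist progress before the cut-off.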
3. Vendor Lock-In
Relying on one cloud provider means you’re married to their ecosystem in a one-way relationship. It wasn’t a dealbreaker for us, but it’s wise to keep the prenup handy.
4. Debugging Nightmares
Tracing issues in a distributed system is like finding a needle in a haystack - except the haystack is spread across multiple regions and the needle is the same colour as the hay.
5. Statelessness
Serverless functions don’t persist state between invocations. If your app needs to remember things (and, TBH, most do), you’ll need external storage. Yes, databases and MQs, I used you too.
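The practical pattern is to keep the function itself pure and push anything that must survive between invocations into an external store. In this sketch a plain dict stands in for a real database or cache (our simplification; the key name is hypothetical):

```python
def handler(event, store):
    """Stateless function: it keeps nothing in local variables between
    calls. All memory lives in `store`, which in production would be a
    database, cache, or queue rather than this in-process dict."""
    count = store.get("invocations", 0) + 1
    store["invocations"] = count
    return {"seen_so_far": count}
```

Because the function holds no state of its own, the platform can freely kill, restart, or parallelize instances without losing anything.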
So, Is Serverless the Holy Grail?
Not quite, but it’s close. For projects like mine, combining serverless and microservices was the catalyst for a completely cloud-native solution. It streamlined development, reduced operational headaches, and offered unparalleled scalability at very low cost.
That said, serverless isn’t a one-size-fits-all solution. It’s a powerful weapon in your arsenal, but like any tool, it works best when used wisely.
By embracing serverless, organizations can unlock faster innovation, better performance, and happier users. And in today’s fast-paced tech landscape, that’s a win worth celebrating.
Oh, and yes - the pilot worked well, met all the business requirements, cost very little to build, was event-driven, and carried minimal operational cost. Of course, as usual, the business couldn’t justify the ROI on this new line, so we never went live. The whole project still lies in the depths of our GitHub repo, waiting to be used someday.
Pro tip: bring snacks and popcorn. Debugging a serverless architecture is still a marathon, not a sprint.