AWS held its annual re:Invent conference recently, and one word thrown around constantly was Serverless. It is clear that AWS is trying its best to make Serverless a buzzword, and for reasons that are not hard to guess. This article dives deeper into why Serverless architecture is slowly but surely coming of age, and how it can help small and big businesses alike.
So, LEGO.
One of the leadership sessions at re:Invent featured Sheen Brisals, Senior Engineering Manager at the LEGO Group, talking about their Serverless journey [1]. In my opinion, there were some key takeaways from this talk, a few of them being:
- LEGO was able to handle a 200x increase in transaction load and a 10x increase in user load effortlessly with their newly deployed Serverless architecture
- The engineering function was re-organised into smaller, mutually exclusive product squads, each of which owned a set of related microservices
- It took LEGO a total of around three years to migrate their entire legacy system to the new Serverless architecture. But they started the race early, while most businesses were unaware of this architectural paradigm, and today they have a clear competitive advantage
- Their entire architecture is 100% serverless, including integrations with third-party payment services, e-commerce modules, and so on
If the points above indicate anything, it is that this architectural paradigm is easy to manage, light on the wallet, and allows for better collaboration across teams. It does so without a major dent in performance, and it offers exceptional out-of-the-box scaling and monitoring capabilities.
But what’s the catch?
It’s difficult for conventional software engineers to write and build applications in this paradigm.
Since the architecture is made up of a number of loosely coupled, interfacing cloud services, each of which needs to be set up, tracked and managed independently, it confuses programmers who are used to finding all their answers in one place, and who now find it difficult to trace interactions between different parts of an application. It also makes the code layer extremely thin: with a Serverless architecture, all the code should ideally contain is your application’s business logic.
This seems exceptionally hard to conceive when you have spent years building backend services with a lot of framework bloat. Most of us have spent years believing that backend services have to be very “structured”: they should use ORMs, divide code into repositories, services, controllers and so on, each with its own purpose and place in the codebase. And we are not to blame, are we? That architectural paradigm has been integral to the success of countless software products and businesses. How can we suddenly rely on this witchcraft, which just requires us to define a function and execute raw queries?
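To make that concrete, here is a minimal sketch of what such a “thin” function can look like. This is not LEGO’s code or any particular framework’s API; the function name, table name and event shape are assumptions, loosely following AWS Lambda and API Gateway conventions, purely for illustration.

```python
import json
import os

import boto3  # AWS SDK for Python; bundled in the Lambda runtime

# Hypothetical table name, injected through environment configuration
dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table(os.environ.get("ORDERS_TABLE", "orders"))


def create_order(event, context):
    """The entire 'service': validate the input, apply the business rule, persist."""
    body = json.loads(event.get("body") or "{}")

    # The business logic lives here and nowhere else
    if not body.get("orderId") or body.get("quantity", 0) <= 0:
        return {"statusCode": 400,
                "body": json.dumps({"error": "orderId and a positive quantity are required"})}

    orders.put_item(Item={
        "orderId": body["orderId"],
        "quantity": body["quantity"],
        "status": "PLACED",
    })

    return {"statusCode": 201, "body": json.dumps({"orderId": body["orderId"]})}
```

Notice what is missing: there are no controllers, repositories or ORM layers. Everything other than the business rule is delegated to the platform.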
Keep It Simple!
The answer lies in an old but oft-forgotten principle: KISS (Keep it simple, stupid!). The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided. [2]
The question that arises is: how are an exceptionally thin code layer that concentrates on business logic, and a bunch of interconnected cloud services that just need to be configured, an embodiment of the KISS principle? The answer is simple:
The complexity comes in after the event, not before it. Since every trigger has a known source, the entire workflow plays out right in front of you whenever an event of interest occurs. Conventional applications are the other way around: you put a lot of safeguards and complexity in place before you start receiving any events. Then, when you put the application out into the real world, things either do not work out the way you want them to, or they are insufficient to cater to the demand. Demand, at the end of the day, is a random variable. And of course, we have all spent countless hours restructuring APIs, making them scale, preventing database transactions from deadlocking, and doing the many other things we do once our “very optimised and structured” code ends up failing despite our best efforts.
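As a rough illustration of “complexity after the event”, here is a sketch of a function that runs only when a file lands in a hypothetical uploads bucket. The bucket, the key handling and the downstream step are assumptions; only the event shape follows the standard S3 notification format.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handle_upload(event, context):
    # Runs only when an object lands in the (hypothetical) uploads bucket;
    # until that event arrives, nothing executes and nothing needs safeguarding.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # The workflow starts from the trigger's payload: fetch the object
        # and hand it to whatever the next step happens to be.
        obj = s3.get_object(Bucket=bucket, Key=key)
        payload = obj["Body"].read()

        print(json.dumps({"processed": key, "bytes": len(payload)}))
```

Until that upload actually happens, there is nothing to run, scale or pay for.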
There are a few other advantages, inherent in the way you have to build and configure these applications, that are worth going through.
A cloud provider that gives you a serverless environment also gives you the related security features, fault tolerance and scaling capabilities. They will inform you proactively when there is a security vulnerability or performance issue, fix it as a high-priority item, and keep you updated on how to minimise the impact of the failure. You, in turn, can keep your customers informed and take adequate, guided remedial action to correct any anomalies caused by the failure. All of this becomes a responsibility shared between you and the cloud provider, which is always better than it resting too heavily on your own team.
You can structure your team, assign them work, and get an application up and running in almost no time. While doing this, you can also incorporate work from engineers with a diverse range of skills, capabilities and experience, and thus get development work done efficiently and empathetically.
Your costs increase only when your load grows.
Your costs come back down when your load subsides. When you are expecting peak load, you can monitor, scale and provision resources in a single place, at lightning speed. When the peak subsides, you can deprovision them again in a jiffy. Your costs thus correlate much more strongly with your demand than with the resource requirements of your codebase.
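A back-of-the-envelope sketch of that cost argument follows. The unit prices and the fixed-fleet figure below are illustrative assumptions, not actual AWS pricing; the point is only the shape of the curve, namely that pay-per-use cost tracks demand while an always-on fleet does not.

```python
# All figures below are illustrative assumptions, not actual AWS prices.
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed pay-per-request rate
PRICE_PER_GB_SECOND = 0.0000167      # assumed pay-per-duration rate
FIXED_FLEET_COST_PER_MONTH = 150.00  # assumed always-on server fleet


def serverless_monthly_cost(requests, avg_duration_s=0.2, memory_gb=0.5):
    """Pay-per-use cost: scales with how many requests actually arrive."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


for monthly_requests in (10_000, 1_000_000, 50_000_000):
    print(f"{monthly_requests:>10,} requests: "
          f"pay-per-use ~ ${serverless_monthly_cost(monthly_requests):.2f}, "
          f"fixed fleet = ${FIXED_FLEET_COST_PER_MONTH:.2f}")
```

With these made-up numbers, a quiet month costs pennies and a busy month costs in proportion to the traffic it actually served, while the fixed fleet costs the same either way.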
All of this is a win if you are facing unpredictable demand, want to experiment and innovate, or simply do not have cash to burn. Basically, Serverless helps you translate sound software engineering principles into good business practices, and that improves your company’s performance, probability of success, and longevity.
At AppDesk, we have already migrated 6 services to the Serverless architecture, and reduced our cloud computing costs by more than 56%. If you’d like to know how, feel free to reach out to us. We would be more than happy to help!
Authored By:
Sopandev Tewari,
Lead Software Engineer
AppDesk Services