In November 2017, The Register published an article titled 'Lambda and serverless is one of the worst forms of proprietary lock-in we've ever seen in the history of humanity.'
The article goes on to elaborate:
It's code that is tied not just to hardware – which we've seen before – but to a data center; you can't even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them. So literally, the application you write will never get the performance or responsiveness or the ability to be ported somewhere else without having the deployment footprint of Amazon.
What happened next was nothing short of spectacular. Well-known figures in the cloud computing space, such as John Arundel, Forrest Brazeal, and Yan Cui, began to voice diverging opinions.
Yan Cui is known for his serverless articles on Medium and on his blog. In an article published on lumigo.com titled “You are wrong about vendor lock-in,” he wrote:
The biggest misconception about serverless vendor lock-in arguments is that technology choices are never lock-ins. Being “locked in” implies that there is no escape, but that’s not the case with technology choices. Not with serverless, not with databases, not with frameworks, not with programming languages.
Computing started with bare-metal servers, then moved to virtual machines, and later to containers and distributed systems. Then, in 2006, a product called Zimki offered the first Functions as a Service, allowing a “pay as you go” model for code execution.
Zimki was not commercially successful, but it paved the way for a new business model for computing services. Because of Zimki, Functions as a Service, or FaaS, became a new category in the cloud space.
In 2008, Google released Google App Engine, which allowed “metered billing” for applications. This new offering let developers create functions using a custom Python framework. The limitation is glaringly obvious: developers were not able to execute arbitrary code.
In November 2014, AWS officially announced AWS Lambda, a fully fledged Functions-as-a-Service platform that allowed execution of arbitrary code.
In our weekly newsletter, Shipped, we curate must-read serverless tutorials, news, and stories. Each week, tons of articles are published; we read them for you, choose the best ones, and share them with you. You can subscribe to Shipped by visiting faun.dev/join.
At the re:Invent 2014 conference, Amazon CTO Werner Vogels delivered a keynote in which he proclaimed that AWS Lambda is an event-driven computing service for dynamic applications.
You can reduce your development effort by writing no more glue code. Lambda responds to new data quickly, and it can improve performance by concurrency. And you don’t have to run any servers, no instances, you just write the code and it will run for you.
The focus here is on events. For example, one of these events could be an S3 upload notification or a DynamoDB stream update. You simply write code that runs against these events.
Most importantly, the code will just run automatically without any computing infrastructure that you have to provision for it.
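To make the idea concrete, here is a minimal sketch of what such an event-driven function might look like, following the AWS Lambda Python handler convention. The `process_object` helper and its return value are hypothetical placeholders, not part of any real API.

```python
# A minimal sketch of an AWS Lambda handler reacting to an S3 upload
# notification. The event shape follows the S3 notification format;
# process_object is a hypothetical placeholder for your own logic.

def process_object(bucket, key):
    # Illustrative placeholder: a real function would fetch and
    # transform the object here (e.g. with boto3).
    return f"processed s3://{bucket}/{key}"

def handler(event, context):
    results = []
    # One S3 notification event can batch several records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(process_object(bucket, key))
    return results
```

There is no server loop and no routing: the platform invokes `handler` once per event, and the function exits when it returns.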
And with that, Serverless Architectures are born.
Amazing and revolutionary, isn’t it?
No more infrastructure provisioning, code executes automatically, and it responds almost instantaneously! This is how AWS is selling its Serverless platform.
So here’s a question to all our listeners. If this was in 2014, why is serverless not everywhere yet?
After all, when Steve Jobs unveiled the iPhone, within a couple of years you could hardly find people who were not using iPhones, or smartphones in general.
What’s going on? Is serverless architecture just a fad? Or is there more to it?
Let’s put on our pragmatic hat. If AWS Lambda can execute any arbitrary code, why don’t we simply copy all our existing codebase and get Lambda to execute it?
To explain it better, let’s use a thought experiment popularized by Gregor Hohpe, co-author of the seminal book "Enterprise Integration Patterns".
While on a two-week trip to Japan, Gregor was in a Starbucks coffee shop. While waiting for an order of Hotto Kokoa, or hot chocolate, he thought about how Starbucks processes its orders.
Like most businesses, Starbucks cares most about maximizing the throughput of its orders. Because of this, orders are processed asynchronously.
When you make an order, the cashier takes a coffee cup, marks it, and places it in a queue. The queue decouples the cashier from the barista. This allows the cashier to take more orders. But as a consequence, the coffee is not served in the order it was requested.
How is this related to our serverless architecture? Simply put, serverless architectures introduce a problem of concurrency.
If the events in the queue are not guaranteed to be processed in order, then our existing codebase cannot simply be copy-pasted without regard for concurrent processing.
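One common defence against out-of-order events is a sketch like the following: attach a version (or timestamp) to each event and ignore updates older than what is already stored. The in-memory dict stands in for a real datastore, and the event fields are illustrative assumptions.

```python
# Last-writer-wins update logic that tolerates out-of-order delivery.
# The dict below is a stand-in for a real datastore.

store = {}

def apply_update(event):
    key, version, value = event["key"], event["version"], event["value"]
    current = store.get(key)
    # Only apply the event if it is newer than what we have already seen;
    # a stale event arriving late is silently dropped.
    if current is None or version > current["version"]:
        store[key] = {"version": version, "value": value}
```

With this pattern, events may arrive in any order and the latest version still wins, at the cost of carrying a version on every event.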
Okay. So if we guarantee that our existing codebase can support concurrent processing, then we can copy-paste the code, right?
Well, not really. There is a difference in how you write a standard codebase compared to an event-driven codebase.
Let’s extend that same thought experiment from earlier.
What if Starbucks decided not to allow customers to buy their coffee from a cashier, and the only way to get your daily cup of joe was to be an Amazon Prime member and order it for delivery?
In this case, the entire business model of having a physical store becomes obsolete.
Starbucks would need to rewrite its business model around a single fire-and-forget delivery of a package to each customer every day.
How is this related to serverless architecture?
A store relies on direct input—for example, a person buying from a cashier.
In serverless architecture, however, each coffee delivery is a function. Each function relies on events raised rather than direct input.
Instead of having a system like a store, with individual routines and subroutines, serverless architecture focuses on events, such as the clock striking 7 in the morning or a customer winning a coffee lottery.
This means that you have to rewrite your application as a series of functions that work on events.
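A sketch of that restructuring, using the coffee examples from the text: instead of routines calling subroutines, each event type maps to exactly one function. The event names and handlers here are illustrative, not any platform's real API.

```python
# Restructuring an application as functions keyed by event type.
# Event names and handler bodies are illustrative.

def deliver_morning_coffee(event):
    return f"delivering coffee to {event['customer']}"

def handle_lottery_win(event):
    return f"free coffee for {event['customer']}!"

# Each event type maps to exactly one single-purpose function.
HANDLERS = {
    "morning_schedule": deliver_morning_coffee,
    "lottery_win": handle_lottery_win,
}

def dispatch(event):
    # In a real serverless platform this routing is done for you by the
    # event source configuration, not by your own code.
    return HANDLERS[event["type"]](event)
```

The point of the sketch is the shape of the decomposition: the application becomes a table of small functions, each triggered by one kind of event.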
So if we are rewriting applications as a series of functions rather than services, how do we deploy?
To help us better understand, let’s look at how one of the most brilliant commanders in history deployed his army.
In 1805, the United Kingdom, the Austrian Empire, and the Russian Empire formed the Third Coalition to overthrow Emperor Napoleon of France. By this time, Napoleon had already formed the Grande Armée.
Unlike most armies of this time period, this new army, formed by Napoleon, became the first one to use fluid formations and flexible organization.
It was said that corps within the Grande Armée would often act independently of each other depending on events, while all were working towards the same goal.
This created a faster decision-making process and a superior tactical advantage compared to the armies of Austria.
The Austrians were surrounded! It took less than a week to surround the well-trained army under the command of Karl Mack. Despite the higher ground, a well-trained army, and an experienced commander, Mack lost: the Battle of Ulm was won without any major fighting.
The new deployment strategy, however, did not come overnight. It took a dramatic rethink of how an army works in the Napoleonic age.
Back to 21st century serverless architecture. Why are we talking about battlefield deployments of the 19th century?
Simple. Serverless architectures are deployed differently.
Whereas typical architectures are deployed centrally, one at a time from top to bottom, with services running 24/7, serverless architectures are deployed in a decentralized way: functions are not active when they are deployed, but only when events are fired.
Napoleon delegated a lot of decision-making, decentralizing the process, while his contemporaries took a top-down approach. Napoleon’s nemesis Wellington, for example, was famous for moving around the battlefield and giving out commands himself.
This means serverless architecture requires rethinking how we deploy our systems.
With all that said, we here at FAUN don’t think that serverless architectures will make traditional architectures redundant.
There are use cases where serverless shines, such as event-driven data processing, cron jobs, and workflow-oriented applications such as those supported by AWS Step Functions.
Before we deep dive into how we would put Serverless architecture into production, let’s look at some disadvantages Serverless has over the traditional architectures.
Cold starts and performance latency affect serverless architectures. This is especially true for serverless applications that are not continuously processing data, or for serverless code written in a runtime that is slow to start, like Java.
Another key disadvantage is the fact that resource limits are imposed on serverless platforms. Because of this, they are not suited for high-performance computing. Most of the time, it would be cheaper to run high-performance computing in traditional architectures.
Using serverless architectures also means that we need to rethink observability and security.
Traditional architectures heavily rely on agents installed in the underlying servers for observability and security.
In Serverless architectures, we no longer have access to servers. This means that we need to have a robust set of middlewares that have to interface with our observability and security systems.
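A sketch of the kind of middleware this implies: since there is no server to install an agent on, observability has to be wrapped around the function itself. The decorator and log format below are assumptions for illustration, not any vendor's API.

```python
import json
import time

# A middleware decorator that wraps a function handler and emits a
# structured log line for every invocation. The log fields are an
# illustrative assumption, not a standard format.

def observed(handler):
    def wrapper(event, context):
        start = time.time()
        try:
            result = handler(event, context)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            # Emitted to stdout; a serverless platform typically ships
            # stdout to its logging service (e.g. CloudWatch Logs).
            print(json.dumps({
                "handler": handler.__name__,
                "status": status,
                "duration_ms": round((time.time() - start) * 1000, 2),
            }))
    return wrapper

@observed
def my_handler(event, context):
    return {"echo": event}
```

In practice, teams package such wrappers as shared libraries so every function in the organization reports to the same observability and security systems.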
If you want to deploy serverless functions in a heavily regulated environment, the amount of security and observability requirements on your functions will make it difficult. Which brings us to our next point: serverless generally does not work across cloud providers. If your software and cloud architects care about multi-cloud and avoiding vendor lock-in, then building your core systems on serverless functions such as AWS Lambda is a hard sell.
So here’s the million-dollar question.
How do you productionize serverless architectures?
First and foremost, you need to decide when to use or not to use serverless. Use cases are important.
The operating model is equally important. Things like which programming language you support, which accounts can use serverless, and how authorization is implemented are critical.
Speaking of authorization, the principle of least privilege should be observed for any functions.
Make sure that your functions are granular and have only a single responsibility.
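As a sketch of least privilege in practice, here is what an IAM policy might look like for a function whose single responsibility is reading uploads from one S3 bucket. The bucket name is illustrative; the point is that the policy grants one action on one resource and nothing else.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-uploads-bucket/*"
    }
  ]
}
```

A function that only reads objects gets no `s3:PutObject`, no `s3:DeleteObject`, and no access to any other bucket; each additional permission should be justified by the function's single responsibility.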
Next, once you have all these processes in place, you need to design the CI/CD pipelines. Place great emphasis on your middlewares and libraries when you design these pipelines.
Shift left on your security. Run scans on each dependency you use, and get approved libraries out early and often.
Finally, have templating, lots of templating, to prevent reinventing the wheel each time.
Serverless architectures unlock a whole new set of tools in our toolbox. It does not replace our existing toolbox, but it does some things better than our existing tools. The key is to know when to use it and when not to use it.
That was the first episode about Serverless; part 2 will be released very soon.
The production of this podcast takes a lot of time and energy from us, but we are glad many of you are already supporting us. If you want to support this work, we’d like you to share this episode with your friends and followers, rate it on the podcast application you’re using, and subscribe to the podcast if you haven’t already. This will help us reach more people, ensure the continuity of this project, and especially keep it open and free for everyone.
Don't forget to follow FAUN on twitter.com/@joinFAUN and subscribe to our hand-curated weekly newsletters. Visit faun.dev/join, choose the topics you would like to subscribe to, confirm your email subscription, and start receiving the best tutorials and stories from the web about DevOps, Cloud Native, Kubernetes, Serverless, and other must-follow topics.
If you want to reach us, we will be glad to read your feedback and suggestions, just email us on firstname.lastname@example.org.