@eon01 · May 31, 2021 · 5 min read · Originally posted on thechief.io
Serverless is one of the significant trends of the moment in software development and deployment. A promising technology, Serverless Computing is spreading very quickly in companies. In this model, the cloud provider is fully responsible for launching and executing your application code, and the Serverless platform ensures that the resources necessary for its optimal operation are available.
Most studies show that Serverless Computing technologies are currently experiencing the strongest growth in the very varied universe of cloud services. Datadog has published the results of a survey that deliberately limits its analysis to the Serverless FaaS (Function as a Service) approach and, more particularly, to its use through AWS Lambda.
The first key finding from the study is that half of AWS users have also adopted AWS Lambda. The research shows that, in two years, Serverless Computing has moved within companies from the experimental or curiosity stage to much broader use, with a wide variety of companies already having part of their infrastructure in AWS.
But several realities hide behind this term. Last year's survey of 501 IT professionals by Cloud Foundry found that companies need to be careful when switching to a Serverless architecture. What is behind this warning? We will focus here on the disadvantages and possible pitfalls of these Serverless approaches.
Everything is always a question of balance. The significant benefits of Serverless Computing necessarily come with limits and constraints that should not be overlooked or underestimated.
Serverless platforms are designed above all for scaling. The direct consequence of this design is that if a database (in the case of Database as a Service) or a function (in the case of Function as a Service) is only very rarely called, it will face a longer boot time, the well-known "cold start", especially compared to equivalent resources running on a dedicated server.
The Serverless infrastructure seeks to optimize the use of its underlying resources and therefore frees up anything that is not frequently used, resulting in a longer wake-up time (since the caches have to be refilled and the runtime frameworks reloaded).
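A minimal sketch of the usual mitigation, assuming an AWS Lambda function written in Python and a hypothetical DynamoDB table named "orders": expensive initialization is kept outside the handler, so it is paid once per cold start and then reused by warm invocations.

```python
# Minimal AWS Lambda sketch (Python) illustrating cold starts.
import boto3

# Executed once per cold start: clients and caches created here are
# reused by every subsequent "warm" invocation of the same container.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    # Executed on every invocation; keep it small so warm calls stay fast.
    order_id = event.get("order_id", "unknown")
    table.put_item(Item={"id": order_id})
    return {"statusCode": 200, "body": f"stored {order_id}"}
```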
Beyond that, each Serverless platform comes with constraints of its own that must be known and factored into the design. Typically, your functions are limited in code size and, above all, in execution time.
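As a hedged illustration, the sketch below uses boto3 to read the limits configured for one function; the function name is a hypothetical placeholder, and Timeout, MemorySize, and CodeSize are fields returned by the Lambda API.

```python
# Sketch: inspect a function's configured limits with boto3.
import boto3

lambda_client = boto3.client("lambda")

config = lambda_client.get_function_configuration(FunctionName="my-function")
print("Timeout (s):", config["Timeout"])         # hard ceiling on execution time
print("Memory (MB):", config["MemorySize"])      # memory also drives CPU share
print("Code size (bytes):", config["CodeSize"])  # deployment package size
```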
Therefore, it is important to keep in mind that even if these platforms offer excellent scalability, they do not relieve developers of the responsibility to deliver quality code.
The loss of control linked to the very concept of Serverless makes it more complex to diagnose and monitor applications, particularly in terms of execution performance and resource use. Yet the "pay as you go" model requires you to have a good view of execution times and consumed resources, since these are precisely what you are billed for.
This essential aspect is gradually improving as the monitoring tools integrated into the platforms mature. Third-party tools specialized in monitoring cloud resources, such as Dashbird, Epsagon, CloudWatch, Thundra, IOPipe, or Stackery, also bring some flexibility to monitoring.
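A minimal sketch of that visibility on AWS, assuming boto3 and a hypothetical function name: it pulls the Duration metric that Lambda publishes to CloudWatch, one of the figures that drives the bill.

```python
# Sketch: pull execution-time metrics for one function from CloudWatch.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder name
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,                        # one data point per hour
    Statistics=["Average", "Maximum"],
    Unit="Milliseconds",
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```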
Introducing Serverless into your security policies adds even more heterogeneity and complexity. Serverless also tends to increase your attack surface by multiplying access points and technologies. Besides, these technologies are still relatively immature and poorly understood by security officers and developers. In short, the security of your Serverless resources should not be overlooked and requires special attention, even if the platforms and infrastructures are well protected and defended by the cloud operator.
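One common expression of that attention is giving each function an execution role limited to the exact actions and resources it needs. The sketch below builds such a least-privilege policy document; the table ARN is a hypothetical placeholder, and this is an illustration, not a complete security posture.

```python
# Sketch: a least-privilege policy document for one function's execution role.
import json

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Only the actions this specific function actually performs...
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem", "dynamodb:GetItem"],
            # ...and only on the single table it touches (hypothetical ARN).
            "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",
        }
    ],
}
print(json.dumps(policy_document, indent=2))
```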
Serverless means relying entirely on a service provided by a third party, which necessarily increases the dependence of your developments on this supplier. Whether it is BaaS (Backend as a Service), FaaS, DBaaS, or, to a lesser extent, CaaS (Container as a Service), it will not be easy to change providers.
The frameworks and development languages supported by one provider and another differ considerably. Serverless is, without doubt, one of the cloud technologies on which the "lock-in" effect is strongest. But the benefits are strong enough to offset the risk of this increased dependence.
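A hedged sketch of one common way to limit that coupling: keep the business logic in plain, provider-agnostic functions and confine the provider-specific handler signature to a thin adapter. All names below are hypothetical.

```python
# Sketch: isolate business logic from the provider-specific entry point.
# process_order() knows nothing about Lambda, events, or context objects,
# so it could be reused behind another provider's handler or a plain web app.

def process_order(order_id: str, amount: float) -> dict:
    # Provider-agnostic business logic (hypothetical example).
    return {"order_id": order_id, "total": round(amount * 1.2, 2)}

def lambda_handler(event, context):
    # Thin AWS-specific adapter: translate the event, delegate, translate back.
    result = process_order(event["order_id"], float(event["amount"]))
    return {"statusCode": 200, "body": result}
```

Swapping providers then means rewriting only the adapter, not the logic it wraps.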
On paper, Serverless simplifies deployment to the extreme. But when deploying interdependent functions or containers, procedures must be put in place to stop the event-generating services beforehand and then roll out all of the updated containers or functions at the same time. This is nothing new, but it raises organizational problems, rollback concerns, and service-availability questions that Serverless alone does not solve.
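A hedged sketch of that "pause, update, resume" procedure on AWS, assuming the functions are fed by an event source mapping (for example from an SQS queue); the mapping UUID, function name, and package path are placeholders.

```python
# Sketch: pause an event source, update the function, then resume.
import boto3

lambda_client = boto3.client("lambda")
MAPPING_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder

# 1. Stop new events from triggering the function during the rollout.
lambda_client.update_event_source_mapping(UUID=MAPPING_UUID, Enabled=False)

# 2. Deploy the updated code for every interdependent function.
with open("my-function.zip", "rb") as package:  # hypothetical package
    lambda_client.update_function_code(
        FunctionName="my-function", ZipFile=package.read()
    )

# 3. Re-enable event delivery once all functions are consistent again.
lambda_client.update_event_source_mapping(UUID=MAPPING_UUID, Enabled=True)
```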
Let us be clear: Serverless is not, and does not pretend to be, the comprehensive solution to all your problems. The very concept of "Function as a Service" is best suited to a specific style of development: event programming, where the triggering of events dictates the execution of functionality.
On the other hand, it adapts less well to more monolithic scenarios with long transactions and intensive calculations, or to architectures designed for VMs or containers running in an orchestrator.
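To make the event-programming fit concrete, here is a minimal sketch of a function driven purely by incoming events, assuming S3 "object created" notifications; the bucket and key come from the event itself.

```python
# Sketch: event programming with FaaS. The function only runs when an
# S3 "object created" notification arrives (event shape per AWS docs).

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # The event entirely dictates what gets processed and when.
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"processed": len(event.get("Records", []))}
```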
Serverless technologies allow a team to start an application by focusing on the business logic of the code rather than on the underlying infrastructure. This not only shortens time to market and increases agility, but also leaves more room for team innovation.
The primary interest of Serverless is to shorten the time between the idea of the project and its production. Developers no longer have to worry about physical infrastructure, resources, and operating systems. They can focus their attention entirely on code quality and functionality without wasting time on the software plumbing required for implementation.
IT developers and teams no longer have to worry about the complex problem of scalability: no design work, advanced settings, or endless tuning phases to ensure the scalability of the application. It is the role of the cloud provider (and its Serverless platform) to provide the necessary resources according to the needs of usage.
By design, Serverless platforms are generally extremely resilient and reliable. Because developers also have fewer lines of code to write, they can focus more on the quality of the functionality they implement. This results in generally more reliable executions, insofar as they rely on very elastic resources.
With Serverless, there are no costs related to the acquisition and installation of hardware, no operating system or license costs, no maintenance costs, and no BIOS or OS update costs. The company also stops paying for unused resources, like all those VMs too often left running and yet wholly abandoned.
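As a rough, hedged illustration of the pay-per-use model, the figures below are approximate AWS Lambda list prices (they vary by region and over time) applied to a hypothetical workload.

```python
# Sketch: back-of-the-envelope monthly cost for a single function.
# Unit prices are approximate list prices and region/time dependent.
invocations = 2_000_000          # hypothetical monthly invocations
duration_s = 0.3                 # average execution time per invocation
memory_gb = 0.5                  # memory allocated to the function

price_per_gb_second = 0.0000166667   # approximate compute price
price_per_million_requests = 0.20    # approximate request price

compute_cost = invocations * duration_s * memory_gb * price_per_gb_second
request_cost = invocations / 1_000_000 * price_per_million_requests

print(f"Compute:  ${compute_cost:.2f}")
print(f"Requests: ${request_cost:.2f}")
print(f"Total:    ${compute_cost + request_cost:.2f}")  # roughly $5.40 / month
```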
Companies are optimistic about the use of Serverless, predicting that most of the challenges described above will be, or are already being, met. Serverless will keep gaining popularity because, by offering a simplified programming environment, the platform makes the cloud much easier to use, thereby attracting more people.
Serverless architecture removes the manual resource management and optimization that today's server-based computing imposes on application developers, a maturation similar to the transition from assembly language to high-level languages. Even comparatively non-technical end users may be able to deploy functions without any clear understanding of the cloud infrastructure.