@rajataroracw · Jan 21, 2022 · 11 min read
Introduction to Microservices
With the advent of handheld devices, there has been a paradigm shift in the usage of and demand for enterprise software. On one hand, most companies have large, complex, monolithic systems in place that handle intricate business scenarios, rules, and computations; on the other hand, there is a need to give customers easy access to information on their mobile devices. It is getting difficult for companies to decide whether to invest first in consumer-facing applications or in enterprise solutions. This situation brings new challenges and calls for ways of developing software that support agile development, short and simple build and deployment cycles, and quick, small enhancements to consumer-facing interfaces such as web and mobile applications, while still evolving the enterprise software suite to meet business needs. Microservices architecture is one way to overcome these challenges.
What is Microservices Architecture?
Microservices architecture advocates developing small, scalable, loosely coupled services that can be deployed independently and focus only on their own functionality instead of the whole system at once. These small services are fully functional, independent units of work that manage everything on their own: technology stack, development team, development cycle, development architecture, build cycle, and deployment architecture. These services can then be combined, and consume other services, to build up a larger chain and provide the functionality of enterprise software. The main benefit of this architectural style is reduced dependency on one large system and avoidance of a single point of failure. It also promotes quick development cycles, because each service is a small unit of the solution that can be developed by a small team responsible for managing every aspect of it.
Microservices are an architectural and organizational approach to software development that speeds up deployment cycles, fosters innovation, and improves the maintainability and scalability of software applications. The software is composed of small independent services that communicate over well-defined APIs and are owned by small, self-contained teams.
How do we wire our services together?
Components in a monolithic system communicate through simple method calls, while components in a microservices architecture communicate over the network through REST or web services.
In monolithic systems, we can avoid service-wiring issues altogether and have each component create its own dependencies as it needs them. In reality, such close coupling between a component and its dependencies makes the system rigid and tightly coupled, and hampers testing efforts. It also makes release cycles long: changes in dependencies and different timelines for different components affect the deployment of other components and force the whole system to be released as one unit.
If we decide to implement an application as a set of microservices, we have wiring options similar to those of a monolith. We can hard-code or configure the addresses of our dependencies, tying our services together closely, but that requires extra configuration effort and brings more complexity. Alternatively, we can externalize the addresses of the services we depend on and supply them at deploy time or run time. Here we can use a service registry, which serves as the single source of information about the services available in the microservices cloud and stores the IP addresses and ports of the available services. Since these services may be deployed on a cloud platform and use features like auto-scaling, they should not have to worry about IP addresses and ports; the service registry manages these and serves as a service directory for looking up service details. Tools that can act as a service registry include Apache ZooKeeper, Spring Cloud, and Netflix's Eureka.
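As a minimal sketch of that idea, the snippet below uses Spring Cloud's DiscoveryClient to resolve a dependency by its logical name at run time instead of a hard-coded address; it assumes a Eureka server is already running and that the Eureka client starter is on the classpath. The service names (order-service, inventory-service) and the endpoint path are hypothetical, not taken from the article.

```java
import java.util.List;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableDiscoveryClient // register this service with the registry (e.g. Eureka)
@RestController
public class OrderServiceApplication {

    private final DiscoveryClient discoveryClient;

    public OrderServiceApplication(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Look up the current address of a dependency by its logical name instead of
    // hard-coding an IP address and port. "inventory-service" is a hypothetical name.
    @GetMapping("/inventory-endpoint")
    public String inventoryEndpoint() {
        List<ServiceInstance> instances = discoveryClient.getInstances("inventory-service");
        if (instances.isEmpty()) {
            return "inventory-service is not registered yet";
        }
        return instances.get(0).getUri().toString();
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```

Because the registry hands out addresses at run time, instances added or removed by auto-scaling become visible to callers without any redeployment or configuration change.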
Characteristics of Microservices
Microservices architecture is not a completely new approach to software engineering; rather, it is a collection and combination of various successful and proven concepts such as object-oriented methodologies, agile software development, service-oriented architecture, API-first design, and a focus on continuous delivery.
Here are some common and important characteristics:
Decentralized Data: Microservices architectures are distributed systems with decentralized data management. They don't rely on a unifying schema in a central database; each microservice has its own view of the data model. These systems are also decentralized in the way they are developed, deployed, managed, and operated. This makes it easier for teams to manage data and gives them full control of it, since the data is managed only by a specific service.
Independent Release Cycle: The various components in a microservices architecture can be changed, upgraded, or replaced independently without affecting the functioning of other components. Similarly, the teams responsible for different microservices can act independently of each other. This lets businesses roll out small changes across multiple services at a time to keep pace with business requirements, deliver new changes quickly to end users, and gather user feedback quickly.
Single Responsibility: Every component is designed around a set of capabilities and with a focus on a specific domain. As soon as a component reaches a certain complexity, it might be a candidate to become its own microservice. Microservices are designed around the single-responsibility principle, which means one service handles only one knowledge stream, e.g. an Authentication Service, Identity Management Service, or Order Management Service.
Heterogeneous: Microservices architectures don't follow a "one size fits all" approach. Teams have the freedom to choose the best platform for their specific problems. As a consequence, microservices architectures are usually heterogeneous with regard to operating systems, programming languages, data stores, and tools, an approach called polyglot persistence and programming.
Abstraction: Individual components of a microservices architecture are designed as black boxes, i.e. they hide the details of their complexity from other components. Any communication between services happens via well-defined APIs. Generally, they avoid any kind of hidden communication that would impair the independence of a component, such as shared libraries or data.
You Build It, You Run It: The team responsible for building a service is also responsible for operating and maintaining it in production, a principle also known as DevOps. In addition to letting teams progress independently at their own pace, this brings developers into close contact with the actual users of their software and improves their understanding of customers' needs and expectations.
High Scalability and Availability
When a monolithic application is broken into small microservices, communication overhead increases because the microservices have to talk to each other. In many implementations, REST over HTTP is used as the communication protocol; it is pretty lightweight, but high volumes can cause issues. In some cases, it might make sense to consolidate services that send a lot of messages back and forth. For microservices, it is quite common to use simple protocols like HTTP. Messages exchanged by services can be encoded in different ways, e.g. in a human-readable format like JSON or YAML or in an efficient binary format. HTTP is a synchronous protocol: the client sends a request and waits for a response. By using async I/O, the client's current thread does not have to wait for the response and can do other work in the meantime. Modern frameworks are available to support this style; for example, the Play Framework provides everything needed to develop REST APIs easily and offers concepts like Akka actors and Java's CompletionStage, which are highly scalable and efficient mechanisms for handling this kind of workload. This architecture is also cloud friendly and can easily be deployed on any of the popular cloud platforms such as AWS or Microsoft Azure.
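To make the async I/O point concrete, here is a minimal sketch of a non-blocking inter-service call using the JDK 11+ HttpClient; the service URL is a hypothetical internal endpoint, not one from the article.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncRestCall {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.example.internal/api/orders/42")) // hypothetical service URL
                .GET()
                .build();

        // sendAsync returns immediately; the calling thread is free to do other
        // work while the response is handled asynchronously on completion.
        CompletableFuture<Void> pending = client
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenAccept(response -> System.out.println(
                        "Status " + response.statusCode() + ": " + response.body()));

        System.out.println("Request sent, continuing with other work...");
        pending.join(); // block only at the very end of this small demo
    }
}
```

Frameworks such as Play expose the same idea through CompletionStage-based controllers, so a single thread can serve many in-flight calls instead of blocking on each one.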
Microservices Architecture on AWS Cloud
This is a reference architecture for typical microservices on the AWS Cloud platform. The architecture is organized into four layers: Content Delivery, API Layer, Application Layer, and Persistence Layer.
The purpose of the content delivery layer is to accelerate the delivery of static and dynamic content and potentially off-load the backend servers of the API layer. Since clients of the microservices are served from the closest edge location and get responses either from a cache or from a proxy server with optimized connections to the origin, latencies can be significantly reduced. Microservices running close to each other don't benefit from a CDN but might implement other caching mechanisms to reduce messaging and minimize latencies.
The API layer is the central entry point for all client requests and hides the application logic behind a set of programmatic interfaces, typically an HTTP REST API. The API layer is responsible for accepting and processing calls from clients and might implement functionality such as traffic management, request filtering, routing, caching, or authentication and authorization. Many AWS customers use Elastic Load Balancing (ELB) together with Amazon Elastic Compute Cloud (EC2) and Auto Scaling to implement the API layer.
The application layer implements the actual application logic. Similar to the API Layer, it can be implemented using ELB, Auto-Scaling, and EC2.
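As an illustration of what one such application-layer service might look like, here is a minimal sketch using only the JDK's built-in com.sun.net.httpserver; in the architecture above it would run on EC2 instances behind a load balancer. The service name, path, and payload are illustrative only.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import com.sun.net.httpserver.HttpServer;

public class CatalogService {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // A single, narrowly scoped resource: the whole service does one job.
        server.createContext("/catalog/items", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"sample item\"}]".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("catalog-service listening on :8080");
    }
}
```

Because the service is stateless, Auto Scaling can add or remove identical instances behind the load balancer without any coordination between them.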
The persistence layer centralizes the functionality needed to make data persistent. Encapsulating this functionality in a separate layer helps keep state out of the application layer and makes it easier to achieve horizontal scaling and fault tolerance in the application layer. Static content is typically stored on Amazon S3 and delivered by Amazon CloudFront. Popular stores for session data are in-memory caches such as Memcached or Redis; AWS offers both technologies as part of the managed Amazon ElastiCache service. Putting a cache between application servers and the database is a common mechanism to alleviate read load on the database, which in turn may free resources to support more writes. Caches can also improve latency.
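The cache-between-application-and-database idea is usually implemented as a cache-aside pattern; the sketch below shows the shape of it in plain Java. A ConcurrentHashMap stands in for an external cache such as Redis or Memcached, and the database calls are placeholders, not a real data-access implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CustomerRepository {

    // In production this would be an ElastiCache (Redis/Memcached) client,
    // not an in-process map; the map only illustrates the access pattern.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String findCustomer(String customerId) {
        // 1. Try the cache first to keep read load off the database.
        String cached = cache.get(customerId);
        if (cached != null) {
            return cached;
        }
        // 2. On a miss, read from the database and populate the cache for later reads.
        String fromDb = loadFromDatabase(customerId);
        cache.put(customerId, fromDb);
        return fromDb;
    }

    public void updateCustomer(String customerId, String data) {
        // 3. Writes go to the database; the cached entry is invalidated so the
        //    next read picks up the fresh value.
        saveToDatabase(customerId, data);
        cache.remove(customerId);
    }

    private String loadFromDatabase(String customerId) {
        return "customer-" + customerId; // placeholder for a real SQL query
    }

    private void saveToDatabase(String customerId, String data) {
        // placeholder for a real SQL update
    }
}
```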
Relational databases are still very popular for storing structured data and business objects. AWS offers multiple managed relational database engines, including Microsoft SQL Server, Oracle, MySQL, PostgreSQL, and Amazon Aurora.
Design for failure
A consequence of using services as components is that applications need to be designed to tolerate the failure of individual services. Any service call could fail because the supplier is unavailable, and the client has to respond as gracefully as possible. This is a disadvantage compared to a monolithic design, as it introduces additional complexity. The consequence is that microservices teams constantly reflect on how service failures affect the user experience. API gateways come to the rescue here: a gateway provides a single entry point for all the APIs, and in case of failure it can respond to an API call with a proper error message instead of throwing a system error. A gateway can also act as a service health dashboard that monitors the health of services; in case of any failure or impact, it generates automatic alerts so that service administrators and owners can take early corrective action instead of hearing about the problem from customers. This increases service stability and improves the user experience.
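The same graceful-degradation idea applies inside each client service. Below is a minimal sketch, in the spirit of the behaviour described above, of calling a downstream service with a timeout and falling back to a harmless default when it fails; the endpoint and fallback payload are hypothetical, and production systems would typically use a circuit-breaker library such as Resilience4j rather than hand-rolled handling.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RecommendationClient {

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    public String recommendationsFor(String userId) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://recommendations.example.internal/users/" + userId)) // hypothetical
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                return response.body();
            }
            return fallback();
        } catch (Exception e) {
            // The supplier is unavailable or slow: degrade gracefully instead of
            // surfacing a raw system error to the user.
            return fallback();
        }
    }

    private String fallback() {
        return "[]"; // an empty recommendation list keeps the page usable
    }
}
```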
Testing Microservices
As the number of microservices in a system grows, it becomes very important to use an automated testing approach to quickly verify the important aspects of each service; it also helps identify the impact on other services early. Because a change in one service can easily affect another, it is important to catch such issues early in the cycle to avoid bigger problems. Many tools are available, such as JMeter and SoapUI, for building API-call-based automation suites, and they can easily be integrated with continuous integration servers like Jenkins and Bamboo to run automated tests on each build and share a sanity report that informs the decision to deploy the services. Cloud platforms like AWS also provide many services that support this automation, such as AWS Lambda, CodeDeploy, and CloudWatch, which can be configured to automate build, testing, and deployment tasks.
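As one small example of such a check, here is a sketch of an automated API smoke test that a CI server like Jenkins could run on every build. It assumes JUnit 5 is on the test classpath, and the base URL and /health endpoint are hypothetical.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class OrderServiceSmokeTest {

    private static final String BASE_URL = "http://orders.test.example.internal"; // hypothetical test environment

    @Test
    void healthEndpointRespondsWith200() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A failing smoke test blocks the deployment decision mentioned above.
        assertEquals(200, response.statusCode());
    }
}
```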
Easy Cloud Adoption
With the growing popularity of public cloud platforms like AWS, Microsoft Azure, and VMware vCloud, microservices are a natural fit for cloud environments. Most cloud providers now offer features such as elastic load balancing, API gateways, auto-scaling, and service monitoring, which makes a microservices architecture even easier to adopt. Container technologies like Docker also provide an easy way to deploy and test services with minimal configuration. Cloud adoption further helps businesses manage operational costs and reduce upfront capital expenditure, so they can budget to spend more on the right services and reduce costs across the overall system.
Conclusion
New application architectures, including microservices, enable developers to break big business problems into smaller ones and help them focus more closely on business logic, shorter release cycles, and quick production deployments. Together, these developments allow solutions to be built in a more agile style and applications to benefit from new levels of scalability and fault tolerance. They enable businesses to focus on customer requirements and enterprise software development together, at full force, and to scale as market needs demand.
About the Author
Rajat Arora is a Principal Engineer (Cloud Technologies) at NatWest Group with 16+ years of experience developing scalable applications using Java and J2EE technologies. He has expert knowledge of multiple cloud computing platforms, including AWS EC2 and Microsoft Azure, has designed and developed applications for cloud environments, and has helped teams migrate their existing software to the AWS Cloud.