Microservices are small applications designed to fulfill a single purpose. Each microservice is principally in charge of a single “concern” within the larger system, often aligned with a business line, and operates as a black box, communicating with the outside through REST APIs, event streaming, and message brokers. A microservices architecture breaks a larger application down into smaller services, each acting as a service component to other microservices or larger programs, answering requests for information and together forming an interworking set of independent services.
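As a rough illustration, the sketch below (using only Python's standard library) shows a single-concern service that owns one small piece of data and exposes it over a REST API; the service name, endpoint, and data are hypothetical.

```python
# A minimal, single-concern microservice: it does one thing (price lookup)
# and exposes it over a small REST API. Names and data are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"sku-001": 19.99, "sku-002": 4.50}  # stand-in for this service's own datastore

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /prices/sku-001
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "prices" and parts[1] in PRICES:
            body = json.dumps({"sku": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```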
An independent microservices approach to deployment affords organizations several capabilities over traditional monolithic approaches:
Because code is partitioned into microservices, each encapsulating only the code specific to its concern, updates can be made and new features added more easily, all without disturbing other application code.
Universal communication through APIs allows microservices to be written in whichever language is necessary, convenient, or available and still interact with other microservices.
Microservices can be individually scaled to fit the demands of the system with ease.
Conceptually, microservices are an idea born out of service-oriented architecture (SOA). Technologically, however, microservices are an evolutionary step beyond SOA, made possible by the supporting technologies of virtual machines, containers, and container orchestration software.
Microservices and containers
Microservices would not be as effective an approach if it weren’t for containers. Much like virtual machines, containers are a form of virtualization, but whereas VMs virtualize the hardware, allowing multiple operating systems to run concurrently on the same hardware, containers virtualize the OS, allowing multiple workloads to run on a single instance of an operating system.
Virtualization is the act of dividing up computational resources (CPU, RAM, storage, connectivity) and wrapping them into individual scopes. To the workloads running inside them, these virtual spaces appear to be real, complete systems, unaware that they are just part of a larger machine. This allows multiple environments to run on the same hardware and is fundamental to the cloud, allowing many users to share a cloud system that may be running far more environments than the physical infrastructure would seem to support.
Virtualization, using either VMs or containers, contributes to IT flexibility, agility, and scalability; containers, however, are especially well suited to microservices. Compared to VMs, containers are much smaller because they virtualize at the OS level rather than the hardware level. A container packages software with the dependencies it needs to run on a host, including code, runtime, configuration, and system libraries. Container instances sit atop a container engine that creates containers, allocates their resources, and manages them. In this way, a container can be created from an image to hold and run a microservice, be given exactly the resources it needs, and then disappear the instant it is no longer needed. Has a microservice failed and need to be reset? Simply spin up a new container and discard the failed one. When additional workloads press the system, responsive automatic scaling can duplicate containers to meet demand. Containers are lightweight and, compared to starting a VM or rebooting a server, can multiply and collapse nearly instantly.
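As a minimal sketch of that spin-up-and-discard pattern, the snippet below uses the Docker SDK for Python (pip install docker) to find exited containers for a given image and replace them; the image name is hypothetical and a local Docker daemon is assumed.

```python
# Sketch: detect exited containers and replace them with fresh ones.
# Assumes a local Docker daemon and the `docker` SDK; the image name is illustrative.
import docker

client = docker.from_env()

def replace_failed(image="example/price-service:latest"):
    # Find containers of this image that have exited (the service failed or stopped).
    for container in client.containers.list(all=True, filters={"status": "exited", "ancestor": image}):
        container.remove()                         # discard the failed container
        client.containers.run(image, detach=True)  # spin up a replacement

if __name__ == "__main__":
    replace_failed()
```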
Differences between monolithic and microservices architectures
A monolithic architecture is a single-tiered application that includes all the data access and user interface in one package for a single platform—think back to buying software in a box at a local store. For home users, this may seem manageable. For enterprise-level applications, monoliths can be problematic to debug, update, and release new versions of, especially while maintaining the business operations that depend on the system. Microservices architecture, on the other hand, is not a single application but several small services working together, contributing to the whole user experience. A microservices approach and a monolithic approach may seem to sit on opposite ends of a spectrum, but microservices are a response to the unwieldy mess that many monolith codebases have become.
Oftentimes, it is advised not to develop applications from the ground up starting only with microservices; it is better to begin with a central application codebase. Skipping the monolith altogether may at first look like a time savings, but by developing initial iterations as one application, teams actually save time, money, and management headaches, because the codebase is centralized and the dependency structure microservices inherit is not yet needed. In these beginning stages of development, this is simpler. Eventually, as services within the monolith demonstrate stability, they can be cleaved off into their own microservices and removed from the monolith code.
Those microservices are now independent and capable of scaling as needed. The code most in need of independent scaling makes a good first microservice candidate. Individual teams can then be assigned to manage those microservices with dramatically less difficulty and better efficiency.
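As a hedged sketch of that cleaving step, the example below shows a monolith delegating one function, previously an in-process call, to the newly extracted microservice over HTTP; the function names and service URL are hypothetical.

```python
# Sketch: cleaving a function out of the monolith. The in-process call is
# replaced by an HTTP call to the new, independently scalable microservice.
import json
import urllib.request

# Before: the logic lived inside the monolith.
def calculate_shipping_in_monolith(weight_kg: float, destination: str) -> float:
    base = 5.00
    return base + 1.25 * weight_kg  # destination-based rules omitted for brevity

# After: the monolith delegates to the extracted shipping microservice.
def calculate_shipping_via_service(weight_kg: float, destination: str) -> float:
    payload = json.dumps({"weight_kg": weight_kg, "destination": destination}).encode()
    req = urllib.request.Request(
        "http://shipping-service:8080/quotes",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["price"]
```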
How containerized microservices work
Running microservices within containers is the preferred method. The ephemeral nature and size of containers match the size and flexibility microservices need, and managing containers is best performed through automation.
Docker and Kubernetes are the two most popular tools for containerizing microservices. Docker is concerned with creating containers, and Kubernetes is concerned with container orchestration: the automatic management of individual containers within clusters. Microservices and containers work so well together that they seem made for each other; however, microservices are not container dependent and can run in other environments. Each scenario for running microservices without containers, though, brings resource inefficiencies:
Single Virtual Machine Environment Running Each Microservice — Like a container, each microservice gets its own runtime environment, but that environment is a virtual machine instance with its own OS and allocated system resources, consuming much more of the system than a container would. From a business standpoint, the OS may be limited by license, and each instance may require a subscription.
Single Operating System Environment Running Many Microservices — Microservices are no longer encapsulated and separated from one another, complicating automation. This can easily lead to dependency conflicts between libraries, and if one microservice failure prompts a restart, every service goes down.
One Physical Server Per Microservice — While this contains the microservice in a single environment, dedicating an entire physical server to each service is hardware overkill, making this approach simply undesirable.
Benefits of microservice architectures
Primarily by making systems more resilient, flexible, and manageable, microservices architectures benefit developers, end-users, and the business goals of organizations. Below are some of the reasons companies are adopting microservice architectures.
Market Readiness Increased — Microservice architectures support agile deployment, reducing development cycles and ultimately bringing services to market faster.
Improved System Scalability — With containerized microservices, as demand increases, additional instances can be deployed quickly and easily.
Heightened Resilience — Independent services are fault isolated; if one service fails, it can be destroyed and replaced without impacting any other services.
Enhanced Development and Deployment — Microservice-based applications are easy to deploy, localize complexity, increase developer productivity, simplify debugging and maintenance, and enable smaller, more agile development teams.
Future-proofed Applications — In many ways, microservices are future-proofed apps: innovation is easier to adopt in virtual environments; microservice size reduces experimentation risks; microservices can be built around business lines improving developer and business user alignment.
Challenges of microservice architectures
With all the benefits and advantages that make microservices and containers attractive, they do come with their own set of challenges. Many of the challenges below emerge from new complexities and new ways of thinking about application development; by understanding and planning for them, many can be mitigated.
Single Concern Bounded Context — The single concern principle dictates that a microservice must have only one concern. API calls are therefore used between microservices so that each microservice can guarantee its own integrity, mutability, and separation. Understanding the boundaries of a concern requires developers to understand the overall context in which the microservice exists, and the more the whole is broken down into microservices, the more dependencies and complexity there are.
Dynamic Vertical and Horizontal Scaling — Scaling resources to meet demand is a complex operational challenge best automated, and measuring performance is the surest way to understand how the overall system needs to scale. Consider which microservices are most essential: unlike a monolithic system, where resources are assigned to the app as a whole, these essential microservices may need to scale together at the same time, posing coordination challenges.
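As a small sketch of automating that decision, the function below computes a target replica count from a measured request rate; the load figures and per-replica capacity are hypothetical inputs from whatever monitoring is in place.

```python
# Sketch: a simple, threshold-style scaling rule driven by measured load.
import math

def target_replicas(requests_per_second: float, capacity_per_replica: float,
                    min_replicas: int = 1, max_replicas: int = 20) -> int:
    desired = math.ceil(requests_per_second / capacity_per_replica)
    return max(min_replicas, min(max_replicas, desired))

# e.g., 930 req/s at 100 req/s per replica -> 10 replicas
print(target_replicas(930, 100))
```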
Microservices Monitoring — Microservices break functionality down into component parts, which simplifies some aspects but often convolutes the data paths between components, making monitoring difficult.
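One common mitigation is propagating a correlation ID with every request so those data paths can be stitched back together in logs; the sketch below assumes an illustrative header name and downstream URL rather than any particular product's API.

```python
# Sketch: pass a correlation ID between services so monitoring tools can
# reassemble a request's path across components. Names are illustrative.
import json
import logging
import urllib.request
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's correlation ID, or mint one at the system edge.
    correlation_id = incoming_headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logging.info(json.dumps({"service": "orders", "correlation_id": correlation_id,
                             "event": "request_received"}))

    # Pass the same ID along so the next service logs against it too.
    req = urllib.request.Request(
        "http://inventory-service:8080/reservations",
        data=b"{}",
        headers={"Content-Type": "application/json", "X-Correlation-ID": correlation_id},
    )
    # urllib.request.urlopen(req)  # call omitted so the sketch runs without the downstream service

if __name__ == "__main__":
    handle_request({})
```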
Fault Tolerance — Container encapsulation provides capabilities, such as the ability to launch containers on different machines, that assist in building fault-tolerant systems. Without these safeguards, using microservices may work against the system, for example through cascading failures.
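Application-level safeguards help as well; the sketch below shows a minimal circuit breaker that stops calling a failing dependency for a while so that failures do not cascade upstream. The thresholds and behavior are illustrative, not a specific library's API.

```python
# Sketch: a minimal circuit breaker. After repeated failures, calls to a
# struggling dependency are short-circuited so failures don't cascade.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: dependency skipped")
            self.opened_at = None  # half-open: allow a trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```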
Dependency Chain — Code within microservices may become simpler for developers to maintain, but the dependency chain can quickly grow longer or wider because microservice dependencies are indirect rather than explicit, as they are in monolith code. Visualizing a dependency graph and automating coupling discovery—coupling is when two microservices are linked too closely and should be reconsidered for refactoring—are two good methods for understanding dependencies and where they are likely to cause problems.
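A toy version of that discovery step might look like the sketch below, which builds a dependency graph from observed calls and flags services that call each other; the call data is entirely illustrative.

```python
# Sketch: build a service dependency graph from observed calls and flag
# bidirectional pairs, a common sign of overly tight coupling.
from collections import defaultdict

calls = [("orders", "inventory"), ("orders", "billing"),
         ("billing", "orders"), ("inventory", "catalog")]  # illustrative data

graph = defaultdict(set)
for caller, callee in calls:
    graph[caller].add(callee)

# Services that call each other are candidates for refactoring or merging.
tightly_coupled = {tuple(sorted((a, b)))
                   for a in graph for b in graph[a]
                   if a in graph.get(b, set())}
print(tightly_coupled)  # {('billing', 'orders')}
```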
DevOps Culture — DevOps processes and practices are perfect for taking advantage of microservices and containers; however, shifting from monolithic development lifecycles to agile ones may prove challenging for some teams.
Key enabling microservices technologies
The core tools that enable microservices are in many ways commonplace, but the convergence of these technologies delivers capabilities that many small, medium, and enterprise-size businesses cannot survive without today.
Containers — Containers are small virtualizations, much like virtual machines, but they contain only the essential runtime files that allow the microservices inside them to operate. A virtual machine carries an entire OS within it, making it heavier and much less convenient to spin up or down.
Docker — The Docker platform provides the software mechanism that abstracts the OS to create, deploy, and manage virtualized containers. Docker popularized container technology and is open source, which has made it the most widely used container platform today.
Kubernetes — Kubernetes orchestrates the containers created by Docker, or by other container software, by grouping containers into pods, scheduling pods onto nodes, and grouping nodes into clusters. Through the use of deployments and probes, Kubernetes manages rollouts and monitors container health. Kubernetes Services manage the communications between pods, handle load balancing, and listen for external traffic. Essentially, Kubernetes is responsible for the scaling of containers.
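As a hedged sketch, the snippet below uses the official Kubernetes Python client (pip install kubernetes) to scale a Deployment; the deployment name and namespace are hypothetical, and a working kubeconfig is assumed.

```python
# Sketch: scale a Deployment with the official Kubernetes Python client.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()          # use the local kubeconfig credentials
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    scale_deployment("price-service", "default", replicas=5)
```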
API Gateways — API gateways act as middlemen between microservices and the requests made of them: they accept incoming calls, route them to the appropriate service, and return the result. APIs help keep microservices encapsulated.
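A toy gateway might look like the sketch below, which forwards requests to backend services by path prefix; the routes and service addresses are hypothetical.

```python
# Sketch: a toy API gateway that accepts calls, routes them by path prefix,
# and returns the backing service's result. Addresses are illustrative.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTES = {
    "/prices": "http://price-service:8080",
    "/orders": "http://order-service:8080",
}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                with urllib.request.urlopen(backend + self.path) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
                    self.end_headers()
                    self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), GatewayHandler).serve_forever()
```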
Messaging and Event Streaming — API calls are an active method of requesting the state of a microservice; messaging and event streaming can be more effective. Because they push notifications rather than answer requests, interested services can “listen” for changes as they occur and make adjustments.
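As one hedged example of that push model, the listener below uses Redis pub/sub (pip install redis) to react to order events as they arrive; the channel name and event shape are illustrative, and a running Redis broker is assumed.

```python
# Sketch: push-style messaging with Redis pub/sub. Instead of polling another
# service's API, this listener reacts when an event arrives.
import json
import redis

r = redis.Redis(host="localhost", port=6379)
pubsub = r.pubsub()
pubsub.subscribe("orders.created")

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip subscribe confirmations
    event = json.loads(message["data"])
    print(f"adjusting inventory for order {event['order_id']}")
```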
Serverless — This is a cloud-native approach in which servers are completely abstracted away from app development. Servers are still involved, but developers never need to manage them; that becomes the responsibility of the cloud service provider. Developers simply package their code in containers for the CSP to deploy.
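The sketch below shows an AWS Lambda-style handler purely as an illustration of the model: no server code appears anywhere, and the provider invokes the function on demand. The event shape is illustrative, and other providers use similar but not identical signatures.

```python
# Sketch: a Lambda-style serverless handler. The provider runs it on demand;
# there is no server to manage in the code itself.
import json

def handler(event, context):
    sku = (event.get("queryStringParameters") or {}).get("sku", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"sku": sku, "price": 19.99}),
    }
```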
Microservices monitoring tools
Managing and monitoring microservices requires the right tools. One option is to assemble a toolbox of apps that each visualize one aspect of the whole system, such as combining information from a network monitoring app, a container monitoring app, and a log monitoring app to form a picture.
Another option is using an observability solution suite, which could be a single app or combination of products. These tools provide single-pane-of-glass visibility across the organization's infrastructure, applications, services, and cloud. These options must provide the following core capabilities:
Performance monitoring of web applications, cloud applications, and/or services
Data visualizations of application and cloud infrastructure performance metrics
Performance baselining
Mechanisms for ensuring best practices
Multiple systems monitoring capabilities
User experience monitoring
Database(s) monitoring
Container, application, and network performance tracking
Real-time monitoring analytics
Microservices and cloud environments
Microservices are exceptional technologies when deployed in the cloud. Because cloud services are largely based on the same virtualization technologies that enable VMs and containers, the cloud provides an ideal and advantageous environment in which to deploy a microservices architecture.
Cloud service providers offer pay-as-you-go scaling options. When these payment options are combined with microservices, organizations can ensure they are only paying for exactly their usage footprint and nothing more. Additionally, the infrastructure management may fall on the CSP.
Microservices and cloud-native applications have been made nearly synonymous thanks to the adoption of microservices architecture and container technologies. Microservices can be run on any compatible OS, and effectively exist anywhere.