Microservices Architecture and Practice, by Wang Lei


(Original address: http://www.infoq.com/cn/articles/microservice-and-continuous-delivery)

Book excerpt: Microservices and Continuous Delivery

Ten years ago, software was typically delivered only a handful of times a year.

Over the past decade, the delivery process has been continuously optimized and improved. From the early RUP model, through agile, XP, and Scrum, to lean startup and DevOps in recent years, we have been striving to reduce the cost of the delivery process, improve its efficiency, and realize the value of the software as early as possible.

Continuous delivery is a software development strategy that optimizes the delivery process so that high-quality, valuable software reaches users as quickly as possible. It helps organizations validate business ideas faster and continuously deliver value to users through rapid iteration.

Any deliverable software goes through analysis, design, development, testing, build, deployment, and operations. From the perspective of continuous delivery, every independently deployable unit should have a separate delivery mechanism that effectively supports this entire process of development, testing, build, deployment, and operation.

As described in Chapter 2, a microservices architecture splits an application into separate services, each with its own business responsibility and each developed, tested, built, and deployed independently. In other words, each service is a deliverable "system" in its own right. At this fine granularity, how do we effectively guarantee the delivery efficiency of each service and quickly realize its business value?

In this article, we explore microservices and continuous delivery. The main topics include:

    • What is continuous delivery
    • The core of continuous delivery
    • Microservices and continuous delivery

Technically, continuous delivery automates the process of building, deploying, testing, auditing, and releasing software systems; its core is the deployment pipeline, which connects these stages effectively. Of course, some activities, such as exploratory testing, usability testing, and management approval, still require manual steps, as shown in Figure 1.

Figure 1 Continuous Delivery

1 The core of continuous delivery

During continuous delivery, requirements flow smoothly among the team's roles in small batches, and fine-grained releases are made frequently within a short cycle. In fact, frequent delivery not only continuously provides value to users, but also generates rapid feedback that helps business people develop better release strategies.

Therefore, the core of continuous delivery can be summed up in three words: small, frequent, fast.

    • Small-batch value flow

By establishing automated build and deployment mechanisms, business functionality flows in small batches from the requirements side to the end user.

    • Frequent releases

By keeping changes small and the build and deployment automated, the software stays in a releasable state, so the team can release frequently and at low risk, continuously delivering value to users.

    • Fast feedback

By establishing an efficient feedback mechanism, the team quickly verifies whether requirements are valid. Based on this feedback, the business team can adjust its strategy and priorities in time, delivering high-value features to users first.

Continuous delivery enables business functionality to flow smoothly in small batches across roles throughout the software delivery process and, through more frequent, low-risk releases and rapid user feedback, helps the business continuously achieve its goals.

2 Microservice architecture and continuous delivery

From a delivery point of view, every independently deployable unit should have a separate deployment pipeline that effectively supports its development, testing, build, deployment, and operation.

In a microservices architecture, each service is a separately deployable business unit, so each service should also correspond to its own continuous delivery pipeline: small, but perfectly formed.

Next, let us look at what needs to be prepared at each stage in order to build such a continuous delivery pipeline under a microservices architecture.

2.1 Development

In a microservices architecture, to build an independent continuous delivery pipeline we should try to do the following during the development phase.

    • Independent code repository

For each service, its code repository should be physically isolated from the repositories of other services. Physical isolation means that the repositories do not interfere with one another and that each service has its own repository address. For example, with tools such as SVN or Git, each service corresponds to a separate repository URL. The repositories for the product information service and the customer information service are represented as follows.

http://github.com/xxxxx/products-service

http://github.com/xxxxx/customers-service

In addition, another benefit of isolating the repositories of different services is that the code of one service can be modified without worrying about affecting the code in other services' repositories, largely avoiding the situation where a change in one place causes defects in many others.

    • Service description file

For each service, there should be a clear service description that documents the current service and helps the team understand it and get started quickly. For example, in the author's microservices practice, the service description for each repository includes the following parts.

1. Service Introduction

    • What functionality the service provides; for example, the product service provides access to and storage of product data.
    • Who the consumers of the service are; for example, the consumers of the product service are the e-commerce front-end website and the CRM system.

2. Service maintainers

    • Select one or two team members as the owners of the service and record their names, e-mail addresses, telephone numbers, and other contact information, so that other teams can find the person responsible in time when they run into problems.

3. Service availability

    • Availability window, such as 24x7, or Monday to Friday (7:00~19:00).
    • Availability rate, the percentage of total time during which the service can be accessed normally, such as 99.9% or 99%. If the service is accessible all day, its availability for that day is 100%. If the service has a 3-minute outage in a day of 1440 minutes, its availability is ((1440 - 3) / 1440) * 100%, or 99.79% (a minimal calculation sketch follows this service description list).
    • Response time, the acceptable time for the service to return data, for example 0.5 to 1 second.

4. Environments, describing the specific environments in which the service runs, usually including:

    • Production environment
    • Production-like (staging) environment
    • Test environment

5. Development, describing development-related information, usually including:

    • How to build a development environment
    • How to run the service
    • How to locate a problem

6. Testing, describing test-related information, usually including:

    • Test strategy
    • How to run a test
    • How to view test statistics, such as test coverage, running time, performance, and so on.

7. Build, describing continuous integration and build-related information, usually including:

    • URL of the continuous integration server
    • Description of the continuous integration process
    • The deployment package produced by the build

8. Deployment, describing deployment-related information, usually including:

    • How to deploy to different environments
    • Post-deployment functional verification

9. Operations, describing operations-related information, usually including:

    • How to access aggregated logs
    • How to access alert information
    • How to access monitoring information
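
To make the availability arithmetic in item 3 concrete, here is a minimal Java sketch (my own illustration, not code from the book) that computes a daily availability rate from the minutes of downtime:

```java
public class Availability {

    // Availability = (total time - downtime) / total time * 100%
    static double dailyAvailabilityPercent(double outageMinutes) {
        double minutesPerDay = 24 * 60; // 1440 minutes in a day
        return (minutesPerDay - outageMinutes) / minutesPerDay * 100.0;
    }

    public static void main(String[] args) {
        // A 3-minute outage in one day gives roughly 99.79% availability,
        // matching the example in the service description above.
        System.out.printf("%.2f%%%n", dailyAvailabilityPercent(3));
    }
}
```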

    • Code ownership belongs to the team

Any member of the team can submit code to the codebase, and the ownership of any service code belongs to the team.

Collective code ownership is above all a teamwork concept: the value of collective work is greater than the sum of each individual's output. When ownership belongs to the team, no developer should lower code quality for personal reasons, and quality problems should be dealt with through the team's collective effort.

Conversely, if the business knowledge behind a piece of code is not shared with others, the evolution of that code becomes dependent on a specific person and a bottleneck arises.

    • Effective version control tools

Using version control tools such as Git, Mercurial, or a CVCS (centralized version control system) has become a core skill that every developer must have. However, it is best for the team to use a DVCS (distributed version control system), which avoids being unable to commit code when the client cannot connect to the server.

    • Static code analysis tools

In addition, the team needs static analysis tools to help check the code, for example Checkstyle for Java or RuboCop for Ruby.

Code metrics tools, such as the widely used SonarQube or Ruby's Cane, can also help guarantee the consistency and maintainability of the team's code.

    • Easy to run locally

As developers, when we check out a service's code from its repository, we should be able to run the service locally in a short time and at low cost. If the service depends on external resources that are expensive to set up or use, we should consider stubbing mechanisms to emulate those resources. Such external resources typically include databases, cloud storage, caches, or third-party systems.

For example, the author recently participated in an internal enterprise system transformation project that used Okta for single sign-on. Okta could in principle be used in the development environment, but network, security, approval, and other constraints made accessing Okta locally very difficult. In the end, the team used stubbing to build a simulated Okta that followed the Okta protocol and used it locally. By loading this simulated Okta in the development environment, the problem of accessing Okta locally was solved effectively.

Another example: the author's system uses AWS's S3 service. Because of permissions, network restrictions, and other factors, using the real S3 during local development is very costly, so a simulated S3 environment was built. When the service runs in the development environment, it loads the development-mode environment variables and accesses the local mock S3; in production, it uses the production S3 address. This helps the team quickly set up a running environment and demonstrate locally without changing any code, greatly improving development efficiency, as shown in Figure 2.

Figure 2 Simulating S3 storage
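
The following is a minimal sketch of this idea; the class name, environment variable, and region are my own assumptions, and the AWS SDK for Java is assumed as the client library. The service reads an environment variable to decide whether to talk to the local mock S3 or to the real S3, so no code changes between environments:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class StorageClientFactory {

    // In development, S3_ENDPOINT points at the local mock S3 (for example
    // http://localhost:9000); in production it is left unset, so the real
    // AWS S3 endpoint is used.
    public static AmazonS3 create() {
        String endpoint = System.getenv("S3_ENDPOINT");
        if (endpoint != null && !endpoint.isEmpty()) {
            return AmazonS3ClientBuilder.standard()
                    .withEndpointConfiguration(
                            new AwsClientBuilder.EndpointConfiguration(endpoint, "us-east-1"))
                    .enablePathStyleAccess() // mock S3 servers typically require path-style URLs
                    .build();
        }
        return AmazonS3ClientBuilder.defaultClient();
    }
}
```

The rest of the code depends only on the AmazonS3 interface, so switching environments becomes purely a matter of configuration.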

2.2 Testing

In a microservices architecture, to build an independent continuous delivery pipeline we should pay attention to the following points during testing.

    • Ambiguity of integration testing

Unit testing is essential for any service. Whether integration testing is required, however, is up to the team. I personally suggest that the scope of integration testing be clearly defined, because the word "integration" lacks a precise meaning. What kind of combination counts as integration? It could mean testing interactions between different external systems, or combining internal implementation logic, such as calls between classes. As a result, the term "integration testing" is easily misunderstood in communication within teams and organizations.

    • Mock and stub

For unit testing, we can use mocking frameworks to simulate dependencies (mocks) or stub them out (stubs), such as Mockito for Java or RSpec for Ruby. Of course, if the cost of constructing a dependency is low, you can use the real collaboration rather than a mock or stub. For the difference between mocks and stubs, interested readers can refer to the article "Mocks Aren't Stubs" by ThoughtWorks chief scientist Martin Fowler.
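
As a minimal illustration (the classes below are hypothetical, not taken from the book), the following JUnit test uses Mockito to stub a repository's answer and to verify an interaction on a mocked collaborator:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class ProductServiceTest {

    // Hypothetical collaborators, used only to illustrate stubs versus mocks.
    interface ProductRepository { String findNameById(String id); }
    interface AuditLog { void recordAccess(String id); }

    static class ProductService {
        private final ProductRepository repository;
        private final AuditLog auditLog;
        ProductService(ProductRepository repository, AuditLog auditLog) {
            this.repository = repository;
            this.auditLog = auditLog;
        }
        String productName(String id) {
            auditLog.recordAccess(id);
            return repository.findNameById(id);
        }
    }

    @Test
    public void returnsProductNameAndRecordsAccess() {
        ProductRepository repository = mock(ProductRepository.class);
        AuditLog auditLog = mock(AuditLog.class);
        // Stub: predefine the answer the repository returns.
        when(repository.findNameById("42")).thenReturn("Keyboard");

        ProductService service = new ProductService(repository, auditLog);
        assertEquals("Keyboard", service.productName("42"));

        // Mock: verify that the expected interaction actually happened.
        verify(auditLog).recordAccess("42");
    }
}
```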

    • Interface testing

In addition to unit tests covering the code logic, there should be at least interface tests covering the service's interface. Note that interface tests are concerned primarily with the interface itself. For example, as a producer of data, interface tests need to ensure that the data the service provides meets consumer requirements; as a consumer of data, interface tests need to ensure that data obtained from the producer can be processed effectively. In addition, interactions between services are best designed to be stateless.
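
A minimal sketch of such an interface test might look like the following; the endpoint, port, and JSON field names are assumptions for illustration, and only JDK classes plus JUnit are used:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import org.junit.Test;

public class ProductsInterfaceTest {

    // Hypothetical base URL of a locally running products-service.
    private static final String BASE_URL = "http://localhost:8080";

    @Test
    public void productEndpointReturnsFieldsConsumersRelyOn() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(BASE_URL + "/products/42").openConnection();
        connection.setRequestProperty("Accept", "application/json");

        assertEquals(200, connection.getResponseCode());

        try (InputStream in = connection.getInputStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            // The producer promises these fields; consumers depend on them.
            assertTrue(body.contains("\"id\""));
            assertTrue(body.contains("\"name\""));
            assertTrue(body.contains("\"price\""));
        }
    }
}
```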

    • Effectiveness of the tests

If unit test coverage is high enough and the interface tests effectively cover the service's interface, then this testing mechanism basically guarantees the correctness of the business logic the service is responsible for and of its external interactions.

Some readers may wonder: do we also need behavior-driven testing frameworks, such as Cucumber or JBehave, to write scenario-based tests of user behavior?

In fact, there is no definitive answer here. In the projects the author has participated in, behavior tests are usually kept at the outermost layer of the application rather than written for individual services, for the following reasons.

1. Most of the services we build are not concerned with the user experience. In other words, a service cares about changes to data rather than about the user's interaction flow. For example, when we select an item on an e-commerce website and place an order, a new order is generated whose status might be "new"; when payment is completed, the status is updated to "paid". Setting aside how the status update is implemented (synchronously, asynchronously, and so on), from the service's point of view it does not matter whether the user completed the payment from a PC browser or from a mobile app; the service itself does only one thing: it updates the order status from "new" to "paid" (a minimal sketch of such a service-level check appears after Figure 3).

2. Although it is part of the whole application, the service exists independently and has its own boundaries. As mentioned earlier, unit tests guarantee the internal logic and interface tests guarantee the interface; under this premise, the correctness and validity of the service have already been verified in most cases.

3. According to the classic test pyramid, the closer a test is to user scenarios and behavior, the higher its cost and the longer its feedback cycle; conversely, the closer it is to code-level testing, the lower the cost and the shorter the feedback cycle, as shown in Figure 3.

Figure 3 Test Pyramid
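
Returning to the order example in point 1, a minimal service-level check (the Order class here is hypothetical) only needs to verify the data transition, regardless of which client triggered it:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderStatusTest {

    // Hypothetical domain class: the service cares about the data transition,
    // not about the channel (browser or mobile app) that triggered it.
    static class Order {
        private String status = "new";
        String status() { return status; }
        void markPaid() {
            if (!"new".equals(status)) {
                throw new IllegalStateException("only a new order can be paid");
            }
            status = "paid";
        }
    }

    @Test
    public void paymentMovesOrderFromNewToPaid() {
        Order order = new Order();
        assertEquals("new", order.status());

        order.markPaid(); // triggered from a browser or an app, it makes no difference here

        assertEquals("paid", order.status());
    }
}
```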

2.3 Continuous Integration

Continuous integration has evolved over the years into one of the best-known practices in building systems. For each independently deployable service, a continuous integration environment (a continuous integration project) should be established.

When a team member commits code to the service's repository, the configured continuous integration project detects the change (by periodic polling or via a webhook) and triggers the static checks, code metrics, tests, and other steps defined in the development phase described earlier, as shown in Figure 4.

Figure 4 Continuous integration

Common enterprise-class continuous integration servers include Jenkins, Bamboo, and Go; hosted continuous integration platforms include Travis CI, Snap CI, and others.

For more details on continuous integration, please refer to this article by ThoughtWorks chief scientist Martin Fowler: http://www.martinfowler.com/articles/continuousIntegration.html.

2.4 Build

Each service is an independently deployable business unit; after static checks, code metrics, unit tests, interface tests, and so on have passed, a deployment package that meets the requirements is built.

Deployment packages come in many forms. They can be DEB or RPM packages that install directly on different Unix-like operating systems, or ZIP or WAR packages that only need to be copied to a specified directory and started with a few commands. They can also be specific to an IaaS platform, such as an Amazon AMI, which we call an image.

In addition, Docker (an open-source Linux container technology), as a representative of container virtualization, allows developers to package an application and its dependencies into a portable Docker container and publish it to any Linux machine running Docker.

With Docker, we can easily build Docker-based deployment images.

2.5 Deployment

For each independent service, to build a stand-alone continuous delivery pipeline, you need to choose the deployment environment and work out an appropriate deployment method. In general, deployment can be considered along the following two dimensions.

1. Deployment environment
    • Cloud-based platform

We know that the cloud platform is a broad concept covering three layers: IaaS, PaaS, and SaaS. When deploying on a cloud platform, you need a clear picture of the environment and of which layer the deployment targets. SaaS is "software as a service" from the consumer's perspective: users access the service online without having to consider local installation, data maintenance, and similar concerns, so it is not relevant to the deployment environments discussed here. We therefore focus on deployment at the IaaS and PaaS layers. In addition, I do not distinguish here between public and private clouds: a public cloud is a cloud service running on the Internet, while a private cloud typically runs on an enterprise's internal network.

    • Based on the IaaS layer

The IaaS layer of a cloud platform typically provides the underlying resources on which services run, such as compute nodes, networks, load balancers, and firewalls. The deployment package for this layer should therefore be an operating system image that contains the basic environment the service needs to run, such as a JVM and a Tomcat server, or a Ruby environment with a Passenger configuration. When deploying services at the IaaS layer, you can not only create new nodes from images, but also create other system resources such as load balancers, auto-scaling monitors, firewalls, and distributed caches.

    • Based on the PaaS layer

The PaaS layer does not concern itself with managing the underlying resources; it focuses on the service or application itself. The deployment package for this layer is therefore usually a binary package that can be installed directly on a Unix-like operating system (such as a DEB or RPM package), or an archive (such as a ZIP, TAR, JAR, or WAR package).

In addition, you can deploy code directly using the tools or SDKs provided by the PaaS platform. For example, the Heroku command line makes it easy to deploy Java, Ruby, Node.js, and other applications to a specified environment.

    • Based on a data center

The cloud platform has become one of the most widely recognized trends, but many traditional enterprises cannot move from their existing data centers to the cloud because of years of accumulated business and data, as well as hardened organizational structures, teams, and processes. Moreover, traditional data center environments are often complex, lacking both the flexible resource creation of IaaS and the automatically provisioned scalability of PaaS. Deployment to a data center is therefore relatively cumbersome and requires more effort to set up environments and provision resources.

Many enterprises are also beginning to create virtual machines (using VMware, Xen, and so on) on data center nodes to simplify resource creation and provisioning.

    • Based on container technology

Container technology is a way of implementing virtualization using containers. Unlike traditional virtualization, it does not virtualize a complete set of hardware and cannot be classified as full, partial, or para-virtualization; it is an operating-system-level virtualization approach that can make more resources available to users.

Over the past two years, Docker has developed rapidly and become the typical representative of container technology. Docker can run on virtually any platform, including physical machines, virtual machines, public clouds, private clouds, and ordinary servers, which allows us to deploy Docker images to any environment running Docker without worrying about differences in the operating system or platform of the production environment.

2. How to deploy

The deployment method is the way in which a service is effectively deployed to the appropriate environment. Naturally, the method differs depending on the deployment environment, as shown in Figure 5.

Figure 5 Evolution of the mode of deployment

    • Manual deployment

For traditional data center environments, with limited resources and security in mind, the usual way to deploy is to use the SSH tool, log on to the target machine, download the required deployment package, copy it to the specified location, and then restart the service.

    • Script deployment

Because the deployment team has to download and copy packages manually each time, manual deployment is not only inefficient but also prone to human error. Many enterprises and organizations therefore use shell scripts to automate the downloading, copying, and restarting steps, dramatically increasing efficiency. The advantage of shell scripts is good compatibility; the disadvantages are that substantial amounts of code are needed to implement the required functionality, readability is poor, and over time the scripts become hard to maintain.

    • Infrastructure deployment automation

As the business grows, many organizations and teams find that the cost of installing and configuring software and deploying applications or services keeps rising. The concept of "infrastructure automation" is an effective solution to this kind of problem, so more and more organizations are trying tools such as Chef, Puppet, and Ansible to automate software installation and configuration as well as the deployment of applications or services.

    • Application deployment automation

A deployment that is automated, requires little manual intervention, and can be triggered with one click is a goal that every organization, from business to development to operations, hopes to achieve. But it is easier said than done, and it does not happen overnight; automation has to be built up gradually as the organization's business evolves and its technical experience accumulates. Typically, automated application deployment consists of the following two parts.

    • Image deployment

The emergence of private and public clouds has significantly changed how applications are deployed. Different environments call for different deployment packages: the WAR or RPM packages used previously can be turned into images on an IaaS-based cloud platform. For example, in Amazon's AWS cloud environment, it is easy to complete a deployment using the system images (AMIs) that AWS provides.

The most significant advantage of image-based deployment is that it scales more efficiently and rapidly when the application needs to scale out. The image already contains the operating system and all the dependencies the application needs to run, so a node can serve traffic as soon as it starts. With WAR- or RPM-based deployment, by contrast, an identically configured node usually has to be started first, and only then is the WAR or RPM package deployed onto it.

    • Container deployment

With container technology such as Docker, the deployment package produced by the build can also be an image. This image can run in any environment equipped with Docker, which effectively resolves inconsistencies between development and deployment environments. At the same time, because Docker is a Linux-based container virtualization technology that can run multiple containers on a single machine, it also greatly improves node utilization.

Therefore, for a microservices architecture, how to choose the right deployment method for the deployment environment and ultimately achieve automated deployment is something every team or organization should continually explore and practice.

2.6 Operation and Maintenance

Because each service is an independently runnable business unit, each service runs on its own nodes. We therefore need to establish independent mechanisms for monitoring, alerting, and rapidly analyzing and locating problems; collectively, we call these service operations.

    • Monitoring

Monitoring is a very important part of operations. It usually falls into two categories: system monitoring and application monitoring. System monitoring covers the health of the nodes the service runs on, such as CPU, memory, disk, and network. Application monitoring focuses on the application itself and its related health, such as whether the service is available and whether its dependent services can be accessed properly.

The industry already offers many mature monitoring products, such as Zabbix, New Relic, Nagios, and, in China, OneAPM. In the projects the author has participated in, the service nodes mostly run on AWS (using EC2, ELB, ASG, and so on), so AWS CloudWatch is used most often for system monitoring, while New Relic and Nagios are commonly used for application monitoring.
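
As a minimal sketch of application monitoring (the endpoint, port, and checks are my own assumptions, using only JDK classes), a service can expose a health endpoint that a monitoring tool polls; HTTP 200 means the service is healthy, 503 means a dependency is unreachable:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthCheckServer {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        server.createContext("/health", exchange -> {
            boolean healthy = dependentServicesReachable();
            byte[] body = (healthy ? "{\"status\":\"UP\"}" : "{\"status\":\"DOWN\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }

    // Placeholder: a real service would check its database, cache,
    // downstream services, and so on.
    private static boolean dependentServicesReachable() {
        return true;
    }
}
```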

    • Alerting

Alerting is another very important part of operations. When the system behaves abnormally, monitoring detects the anomaly; an appropriate alerting mechanism then notifies the people responsible in a timely and effective manner, so that problems are discovered, analyzed, and fixed early. Because each service is a separate entity, each service should provide an effective alerting mechanism that accurately notifies the responsible people when the service misbehaves, so that the problem can be resolved promptly.

Among alerting tools, PagerDuty is one of the best known. It supports multiple notification channels, such as on-screen display, phone calls, SMS, and e-mail, and automatically escalates when there is no response. The common monitoring products mentioned above also provide alerting mechanisms.

    • Log aggregation

Log aggregation is another integral part of operations. Since a microservices architecture is essentially a distributed system, as services and nodes multiply, viewing and analyzing logs becomes increasingly expensive. Log aggregation collects the logs of different nodes into one central place, making them easy to analyze and visualize.

The best-known log aggregation tools today are Splunk and Logstash; they not only provide efficient log forwarding but also convenient reporting and customizable views. For more information on Splunk and Logstash, please refer to their official websites.
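
As a minimal sketch (the service name, field names, and correlation id convention are my own assumptions), writing log lines that carry the service name and a per-request correlation id makes it much easier for an aggregator such as Logstash to group entries coming from many nodes:

```java
import java.util.logging.Logger;

public class OrderPaymentHandler {

    private static final Logger LOG = Logger.getLogger("orders-service");

    // The correlation id is assumed to be propagated from the incoming request,
    // so the aggregator can stitch together the log lines of one request
    // across services and nodes.
    public void markPaid(String orderId, String correlationId) {
        LOG.info(String.format(
                "service=orders-service correlationId=%s event=order_paid orderId=%s",
                correlationId, orderId));
    }
}
```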

3 Summary

This article first described the concept and core of continuous delivery, and then discussed the factors that should be considered when building a fine-grained, per-service continuous delivery pipeline under a microservices architecture. Although each service is just one business unit of the whole application, as an independently releasable and independently deployable entity it must still follow the mechanisms and processes of continuous delivery, including development, testing, integration, deployment, and operations: small, but perfectly formed. By building a stable continuous delivery pipeline, teams can deliver services frequently and reliably.

Book Introduction

The book begins with theory, introducing the concept of a microservices architecture, its background, essential characteristics, and advantages and disadvantages. It then turns to practice, exploring how to build a microservice from scratch, including a Hello World API, building and deploying Docker images, log aggregation, monitoring and alerting, and a continuous delivery pipeline. Finally, the advanced section discusses lightweight communication between microservices and consumer-driven contract testing, and uses a real case to describe transforming a legacy system with a microservices architecture. Rich in content and clearly written, it is a practical book on microservices architecture that combines theory with practice.

The book is intended not only for architects, developers, testers, and operations engineers, but also as a reference for teams or individuals trying to decouple legacy systems with a microservices architecture. I hope it helps readers in their practical work.
