What is a microservices architecture
A microservices architecture is an architectural pattern that splits an application into small business units for development and deployment; the units communicate over lightweight protocols and work together to implement the application's logic.
Flexibility, stability, and efficient use of resources are the main advantages of a microservices architecture:
- Services can be independently deployed, upgraded, replaced, and scaled
- Free choice of development language
- Efficient use of resources
- Fault isolation
The sheer number of services and the difficulty of managing them are the main drawbacks of a microservices architecture:
- More services mean more operations work
- Greater management complexity
- More difficult deployment
Still, the microservices architecture is very attractive overall; otherwise it would not be favored by well-known companies such as Twitter, Netflix, Amazon, and eBay.
Twitter's microservices architecture
Microservices architecture patterns
Microservices architecture patterns can be divided into:
- Aggregation pattern
- Proxy pattern
- Resource sharing pattern
- Asynchronous message pattern
Aggregation pattern
The aggregation pattern of a microservices architecture
In the aggregation pattern, multiple services are aggregated into a single service, called an aggregation service. The most common form of aggregation service is a web service: the aggregation service focuses on page presentation, while the backend services provide pure business functions. In other words, expanding the business under the aggregation pattern only requires adding a new backend microservice.
An aggregation service conforms to the DRY principle. It composes microservices at a higher level, so adding business logic just means publishing a new microservice, while each service keeps its own cache and database.
The aggregation pattern is the most common pattern in a microservices architecture.
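The idea above can be sketched in a few lines. This is a minimal illustration, not a real deployment: `user_service` and `order_service` are hypothetical stand-ins for independent backend microservices (each of which would normally sit behind its own HTTP/RPC endpoint with its own cache and database), and the aggregation service composes their results into one page payload.

```python
def user_service(user_id):
    # Hypothetical user microservice: returns profile data.
    return {"id": user_id, "name": "alice"}

def order_service(user_id):
    # Hypothetical order microservice: returns the user's orders.
    return [{"order_id": 1, "total": 42.0}]

def profile_page(user_id):
    # The aggregation service composes backend services into one
    # response. Extending the business only means adding another
    # backend call here, without touching the existing services.
    return {
        "user": user_service(user_id),
        "orders": order_service(user_id),
    }

print(profile_page(7))
```

In a real system the two backend calls would be network requests, but the shape of the aggregation service is the same.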
Proxy pattern
The proxy pattern of a microservices architecture
The proxy pattern is a special case of the aggregation pattern in which services are wrapped for external access. A proxy may simply forward requests, or it may also perform data conversion.
The proxy pattern is like a residential compound where all mail passes through the mailroom: whether a request comes from outside or from an internal data service, the proxy handles it.
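A tiny sketch of the two proxy roles described above, with an invented `internal_service` standing in for a real backend: the proxy delegates the request and, in this example, also converts the data format for external clients.

```python
def internal_service(request):
    # Hypothetical internal microservice with its own native format.
    return {"temp_c": 21.5}

def proxy(request):
    # Delegate the request to the internal service, then convert
    # Celsius to Fahrenheit for external clients (an example of the
    # data conversion a proxy may perform).
    data = internal_service(request)
    return {"temp_f": data["temp_c"] * 9 / 5 + 32}

print(proxy({}))
```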
Resource sharing pattern
The resource sharing pattern of a microservices architecture
The resource sharing pattern lets some services remain logically separate while sharing data.
It is commonly used during the transition from a monolithic architecture to a microservices architecture, and between two services with data consistency requirements.
Asynchronous message pattern
The asynchronous message pattern of a microservices architecture
The asynchronous message pattern suits scenarios that do not require a synchronous response, such as task-based services. It replaces the REST requests and responses used by the other microservices patterns with a message queue.
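As a minimal sketch of the pattern, the standard library's `queue.Queue` stands in for a real message broker (RabbitMQ, Kafka, and so on): the producer enqueues tasks and returns immediately instead of waiting for a REST-style response, and a worker thread consumes them asynchronously.

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    while True:
        job = tasks.get()
        if job is None:          # sentinel value shuts the worker down
            break
        results.append(job * 2)  # stand-in for real task processing
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

for n in (1, 2, 3):
    tasks.put(n)  # fire-and-forget: the producer does not block
tasks.put(None)   # tell the worker to stop
t.join()

print(results)    # → [2, 4, 6]
```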
Challenges posed by a microservices architecture
The challenge of service deployment
Each service needs its own code management, versioning, builds, test-environment deployment, production deployment, code rollback, and so on. Managing a large number of microservices by hand is a near-impossible task.
The challenge of service scaling
Stateless services scale by configuring load balancing and adding nodes, while stateful services scale by expanding the resources of a single instance. To reduce resource waste, developers must also monitor each service and remove nodes and resources when they are no longer needed.
The challenge of service high availability
Each service has its own high-availability policy. Stateless services are relatively straightforward to manage, but every stateful service presents its own high-availability challenge.
The challenge of service fault tolerance
No single service is 100% available, and in a distributed system the system itself stops working when a dependency is unavailable. For a complex distributed system that depends on many services, the combination of per-service availability and network instability means overall availability may be hard to guarantee.
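A quick worked example shows why this matters. If a request path depends on n services, each with availability p, and any single failure fails the whole request, overall availability is p ** n (the figures below are illustrative, not from the source):

```python
def overall_availability(p, n):
    # Availability of a request that requires all n dependencies,
    # each independently available with probability p.
    return p ** n

# 30 dependencies at 99.9% each leave only about 97% end to end.
print(round(overall_availability(0.999, 30), 4))
```

This compounding is what makes techniques like circuit breakers and asynchronous messaging necessary.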
The challenge of dependency relationships
If dependency configuration is written into the code, a redeploy is needed for a change to take effect, and configuration files can pollute the code.
The challenge of service monitoring
Should you monitor CPU or load? The proliferation of microservices is also a severe test for service monitoring.
The Goodrain microservices architecture solution
The core ideas of the Goodrain microservices architecture solution are:
- Simplify user operations
- Package microservices internally, expose them externally as a whole
- Encapsulate the technology, deliver the business as services
Goodrain's underlying platform is implemented with Docker while, as far as possible, keeping users unaware of Docker. Complex features are packaged inside the platform, and by working with services as a whole, users do not need to manage compute or network resources.
Service deployment
Goodrain service deployment
Goodrain supports mainstream development languages such as Java, Python, PHP, Ruby, Golang, and Node.js, and supports code repositories such as GitHub and Goodrain's hosted repositories.
Service scaling
Goodrain service scaling
Goodrain supports service scaling: horizontal scaling for stateless server and worker services, and vertical scaling for stateful services.
Some stateful services also support horizontal partitioning (sharding), where users only need to adjust the number of nodes.
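The routing side of sharding can be sketched as follows. This is a generic illustration, not Goodrain's implementation: a hash of the record key picks the shard, so data spreads across nodes and the node count can be adjusted. (Production systems typically use consistent hashing to limit data movement when nodes change; the simple modulo scheme here only shows the routing idea.)

```python
import hashlib

def shard_for(key, num_shards):
    # Hash the key and map it onto one of num_shards partitions.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every lookup of the same key routes to the same shard.
print(shard_for("user:42", 4))
```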
Service high availability
Goodrain service high availability
Goodrain makes both stateless and stateful services highly available through a highly available scheduler.
Dependency relationships
Goodrain dependency relationships
Goodrain's dependency solution is similar to Spring's dependency injection: parameters are passed through environment variables, and service dependencies are declared with just a few clicks in the interface.
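The environment-variable approach can be sketched like this. The variable names `ORDER_SERVICE_HOST` and `ORDER_SERVICE_PORT` are hypothetical, chosen for illustration: the point is that the dependent service's address is injected by the platform at runtime, so changing a dependency needs no code change or redeploy.

```python
import os

def order_service_endpoint():
    # Read the dependency's address from the environment; the
    # defaults are only local fallbacks for development.
    host = os.environ.get("ORDER_SERVICE_HOST", "127.0.0.1")
    port = os.environ.get("ORDER_SERVICE_PORT", "8080")
    return f"http://{host}:{port}"

# Simulate the platform injecting the dependency at deploy time.
os.environ["ORDER_SERVICE_HOST"] = "orders.internal"
os.environ["ORDER_SERVICE_PORT"] = "9000"
print(order_service_endpoint())  # → http://orders.internal:9000
```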
Service fault tolerance
Goodrain service fault tolerance
Goodrain's fault-tolerance principle resembles a fuse: when a service slows down past the circuit breaker's threshold, the service is automatically taken offline without affecting other services, and as its latency drops the service gradually recovers.
Asynchronous messaging can also provide fault tolerance; see the CQRS pattern for details.
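The fuse analogy can be sketched as a minimal circuit breaker. This is a generic illustration, not Goodrain's implementation: consecutive failures up to a threshold trip the breaker, after which calls are rejected immediately so callers are not dragged down; the gradual-recovery (half-open) timer is omitted for brevity.

```python
class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            # Fast-fail instead of waiting on a broken dependency.
            raise RuntimeError("circuit open: service taken offline")
        try:
            result = fn()
            self.failures = 0  # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # trip the breaker
            raise

breaker = CircuitBreaker(threshold=3)

def flaky():
    raise TimeoutError("too slow")

for _ in range(3):
    try:
        breaker.call(flaky)
    except TimeoutError:
        pass

print(breaker.open)  # → True
```

A production breaker would also track latency (not just exceptions) and reopen traffic gradually after a cool-down.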
Service monitoring
Goodrain service monitoring
Goodrain monitors services with business metrics such as average response time, throughput, and online user counts.
Replacing technology monitoring with business monitoring makes service monitoring in real scenarios simple and easy to understand.
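Two of the business metrics mentioned above are straightforward to compute from request records. The sample data here is invented for illustration: each record is a (timestamp in seconds, response time in milliseconds) pair.

```python
# Hypothetical request log: (timestamp_s, response_time_ms) pairs.
records = [(0.0, 120), (0.5, 80), (1.0, 100), (1.5, 100)]

# Average response time over the window.
avg_response_ms = sum(rt for _, rt in records) / len(records)

# Throughput: requests per second across the observed window.
window_s = (records[-1][0] - records[0][0]) or 1.0
throughput_rps = len(records) / window_s

print(avg_response_ms, throughput_rps)
```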
Real-time performance analysis
Goodrain real-time performance analysis of a single REST service
The slowest SQL statement is not necessarily the one with the biggest impact on the database. Goodrain implements real-time performance analysis with CEP (complex event processing) plus logs, starting from the actual problem, so developers can see very intuitively which URLs currently have the biggest impact on the system.