Android System Architecture: The Microservice Architecture Pattern


Contents

  • I. Microservice Architecture Pattern
    • 1.1 Pattern Description
    • 1.2 Pattern Topologies
    • 1.3 Avoid Dependencies and Orchestration
    • 1.4 Considerations
    • 1.5 Pattern Analysis
  • II. Microservice Architecture in Android
  • III. Conclusion

Some time ago we translated Software Architecture Patterns (link to the complete book), and it was well received after its release. The book describes five classic and popular software architecture patterns and analyzes the implementation, advantages, and disadvantages of each, which provides valuable guidance for our development work. The shortcoming of Software Architecture Patterns, however, is that it does not use concrete examples to make the theory easier to absorb. Some readers in my development group reported that the book looks quite good but lacks specific examples. I therefore plan to write a few articles that examine these architecture patterns in depth through Android source code and Android development, combining theory with practice so that everyone can appreciate the appeal of these architectures more deeply and concretely.

I. Microservice Architecture Pattern

Thanks to its high flexibility and scalability, the microservice architecture pattern has grown rapidly in the industry in recent years. Because the pattern is still evolving, however, there is a good deal of confusion about it: what exactly is this pattern, and how is it implemented? This section first describes the key concepts, basic knowledge, and advantages and disadvantages of the pattern, because only after you understand it thoroughly can you judge whether it suits your application.

1.1 Pattern Description

Whichever implementation you choose, there are several core concepts you need to understand. The first is the independent deployment unit. As shown in Figure 4-1, each component of a microservice architecture is deployed as a separate unit. The units communicate with one another over an efficient, streamlined transport pipeline, which gives the architecture strong scalability; applications and components are highly decoupled, which simplifies deployment.

Probably the most important concept to understand about this pattern is the service component. Rather than thinking in terms of "services" inside a microservice architecture, it is better to think in terms of service components, whose granularity can range from a single module to a large portion of an application. A service component contains one or more modules (for example, Java classes) that provide a single function, such as reporting the weather for a particular city or town, or it can serve as an independent part of a larger business application, such as a ticket-availability query system. Getting the granularity of service components right is one of the biggest challenges in a microservice architecture, and it is discussed in more detail below.
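To make the idea of a service component more concrete, here is a minimal sketch in Java. The WeatherService name and its single operation are purely illustrative and not part of any real framework; the point is simply that a service component exposes one narrowly scoped business function behind a small contract.

public interface WeatherService {

    // One well-defined business operation; the component may contain one or
    // more modules (classes) behind this single contract.
    WeatherReport getCurrentWeather(String city);

    // Simple value object returned to callers (nested here to keep the sketch self-contained).
    final class WeatherReport {
        public final String city;
        public final double temperatureCelsius;

        public WeatherReport(String city, double temperatureCelsius) {
            this.city = city;
            this.temperatureCelsius = temperatureCelsius;
        }
    }
}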


Another key concept of the microservice architecture pattern is that it may be distributed. This means that all components in the architecture are fully decoupled and reached through a remote access protocol (for example JMS, AMQP, REST, SOAP, or RMI). The distributed nature of the pattern is how it achieves some of its excellent scalability and deployment characteristics.
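As a small, hedged illustration of remote access, the sketch below calls a hypothetical REST endpoint using the standard java.net.http client that ships with Java 11+. The URL is made up for the example; the caller knows only the remote contract, not the service's internals.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RemoteAccessExample {
    public static void main(String[] args) throws Exception {
        // The client is fully decoupled from the service; it only depends on the remote contract.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/weather/beijing")) // hypothetical endpoint
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}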

Another interesting aspect of the microservice architecture is that it evolved out of problems found in other common architecture patterns, rather than being invented as a solution waiting for a problem to appear. The microservice style evolved from two main sources: monolithic applications built with the layered architecture pattern, and distributed applications built with the service-oriented architecture.

Tip: a monolithic application is one that is built and deployed as a single unit.

The move from monolithic applications to microservices was driven largely by continuous delivery. A monolithic application usually consists of tightly coupled components that belong to a single deployable unit, which makes the application cumbersome and hard to change, test, and deploy. These factors typically make the application fragile, so that deploying one new feature can bring the whole application down because of exceptions the feature introduces. The microservice architecture pattern solves this by separating the application into multiple deployable units (service components) that can be developed, tested, and deployed independently of other service components.

The other evolutionary path to the microservice architecture pattern comes from problems with applications built on the service-oriented architecture (SOA) pattern. Although SOA is very powerful and offers unmatched levels of abstraction, heterogeneous connectivity, service orchestration, and alignment of business goals with IT capabilities, it is also complex, expensive, hard to understand, and hard to implement; for most applications it is simply too heavyweight. The microservice architecture addresses this complexity by simplifying the notion of a service, eliminating orchestration requirements, and simplifying how service components are connected and accessed.

1.2 Pattern Topologies

Although there are many ways to implement the microservice architecture pattern, three main topologies stand out. The most common and popular are the REST-based API topology, the REST-based application topology, and the centralized messaging topology.

  • REST-based API topology
    The REST-based API topology is suitable for websites that expose small, self-contained services through an API. As shown in Figure 4-2, this topology consists of very fine-grained service components (hence the name microservices) that contain one or two modules and perform a specific business function independently of other services. In this topology, the fine-grained service components are typically accessed through a REST-based interface implemented by a separately deployed web API layer. Examples of this topology include some of the purpose-built, cloud-based RESTful web services offered by large sites such as Yahoo, Google, and Amazon. (A minimal server-side sketch of such a fine-grained endpoint appears after Figure 4-4 below.)


Figure 4-2

  • REST-based application topology
    The REST-based application topology differs from the REST-based API topology in that client requests are received through traditional web-based applications or client applications rather than through a thin API layer. As shown in Figure 4-3, the user interface layer of the application is a web application that accesses separately deployed service components through simple REST-based interfaces. The service components in this topology differ from those in the REST-based API topology: they tend to be larger and more coarse-grained, each representing a small portion of the overall business application rather than a fine-grained, single-purpose service. This topology is common in small and medium-sized businesses where application complexity is relatively low.


Figure 4-3

  • Centralized messaging topology
    Another common approach within the microservice architecture pattern is the centralized messaging topology, shown in Figure 4-4. This topology is similar to the REST-based application topology described above, except that instead of using REST for remote access, it uses a lightweight centralized message broker (for example ActiveMQ or HornetQ). It is vitally important not to confuse this topology with the service-oriented architecture pattern or to treat it as a simplified version of SOA: the lightweight message broker in this topology performs no orchestration, transformation, or complex routing; it is simply a lightweight transport for reaching remote service components.
    The centralized messaging topology is typically found in larger business applications, or in applications that require more sophisticated control over the transport between the user interface layer and the service components. Compared with the simpler REST-based topologies discussed earlier, its benefits include advanced queuing mechanisms, asynchronous messaging, monitoring, error handling, and better overall load balancing and scalability. The single point of failure and architectural bottleneck usually associated with a centralized broker are addressed through broker clustering and broker federation (splitting a single broker instance into multiple broker instances and dividing the message throughput load by functional area of the system).


Figure 4-4
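To illustrate the REST-based API topology referenced above, here is a minimal sketch of a fine-grained, separately deployable service component that exposes a single REST endpoint using the JDK's built-in com.sun.net.httpserver package. The /weather path and the hard-coded JSON payload are hypothetical; a real deployment would sit behind the separately deployed web API layer described earlier.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WeatherApiService {
    public static void main(String[] args) throws Exception {
        // A single fine-grained service component, deployed on its own and reached over REST.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/weather", exchange -> {
            byte[] body = "{\"city\":\"Beijing\",\"temperatureCelsius\":21.5}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        // start() runs the server on its own dispatcher thread, so the JVM keeps serving requests.
        server.start();
    }
}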

1.3 Avoid Dependencies and Orchestration

One of the main challenges of the microservice architecture pattern is determining the right level of granularity for service components. If service components are too coarse-grained, you may not realize the benefits this pattern offers (deployability, scalability, testability, and loose coupling). If they are too fine-grained, however, you will need to orchestrate services, which can quickly turn a microservice architecture into a complicated, confusing, expensive, and error-prone one.

If you find that you need to orchestrate service components from within the user interface or API layer of the application, then chances are your service components are too fine-grained. Similarly, if you find that you need inter-service communication between service components to process a single request, then either your service components are too fine-grained or they are not partitioned correctly from a business-function standpoint.

Inter-service communication, which can lead to coupling between components, can often be handled through a shared database instead. For example, if a service component that handles online orders needs customer information, it can retrieve the necessary data from the shared database rather than invoking functionality in the customer service component.
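A hedged sketch of that idea follows, using plain JDBC. The connection string, table, and column names are hypothetical (and a suitable JDBC driver would have to be on the classpath); the point is that the order-handling component reads the customer data it needs from the shared database instead of calling into the customer service component.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderServiceComponent {

    // Instead of calling the customer service component, read the needed
    // customer data directly from the shared database.
    public String lookUpCustomerName(long customerId) throws Exception {
        // Hypothetical connection string; in practice this comes from configuration.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://db-host/shop", "user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            stmt.setLong(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}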

A shared database can handle information needs, but what about shared functionality? If a service component needs functionality that lives in another service component, or functionality common to all service components, you can sometimes copy that shared functionality into each service component, even though this violates the DRY principle. To preserve the independence and separate deployment of service components, most implementations of the microservice architecture accept a small amount of redundancy from repeated business logic; this is fairly common in business applications, and small utility classes often fall into this category of repeated code.

Tip: DRY stands for "don't repeat yourself".

If you find that, regardless of how you set the granularity of your service components, you still cannot avoid orchestrating them, that is a good sign that this architecture pattern may not be right for your application. Because of the distributed nature of the pattern, it is difficult to maintain a single transactional unit of work across service components. Doing so requires a transaction compensation framework to roll back transactions, which adds significant complexity to what is otherwise a relatively simple and elegant architecture pattern.

1.4 Considerations

The microservice architecture pattern solves many of the problems found in both monolithic applications and service-oriented architectures. Because the main application components are split into smaller, separately deployed units, applications built with the microservice architecture pattern are generally more robust, scale better, and are easier to fit into a continuous delivery pipeline.

Another advantage of the pattern is that it enables real-time, in-production deployment, greatly reducing the need for the traditional monthly or weekend "big bang" production deployment. Because changes are usually isolated to specific service components, only the changed service components need to be deployed. If you have only a single instance of a service component, you can write specialized code in the user interface that detects an active hot deployment and, once detected, redirects users to an error page or a waiting page. Alternatively, you can swap multiple instances of a service component in and out during a real-time deployment, allowing the application to remain continuously available during deployment cycles (something that is very hard to achieve with the layered architecture pattern).

Finally, keep in mind that the microservice architecture pattern is potentially a distributed architecture, so it shares some of the complex problems of the event-driven architecture pattern, including contract creation, maintenance, and management, remote system availability, and remote access authentication and authorization.

1.5 Pattern Analysis

The following list rates the main characteristics of the microservice architecture pattern. Each characteristic is rated according to what a typical implementation of the pattern looks like and what the pattern is generally known for.

  • Overall flexibility (High): Overall flexibility is the ability to respond quickly to a constantly changing environment. Because services are separately deployed units, changes are usually isolated to individual service components, which makes deployment fast and simple. Applications built with this pattern also tend to be loosely coupled, which further eases change.
  • Ease of deployment (High): Each service component is an independent deployment unit, which keeps each individual deployment relatively simple and reduces overall deployment complexity. Ease of deployment is one of the major strengths of the microservice architecture.
  • Testability (High): Because business functions are separated into independent application modules, testing can be scoped locally and made more targeted. Regression testing a specific service component is simpler and more feasible than regression testing a monolithic application. Moreover, since service components in this pattern are loosely coupled, the chance that a change in one part of the application breaks another part is small, so a small change does not force you to retest the entire application.
  • Performance (Relatively low): Despite the many advantages of this pattern, the distributed, cross-process nature of the microservice architecture means that communication between clients and services reduces efficiency, so the pattern is not well suited to high-performance applications.
  • Scalability (High): Because the application is divided into separately deployed units, each service component can be scaled individually, allowing the application to be tuned precisely. For example, the administrator module of a stock-trading system may not need to scale, because only a limited number of users use it, but the trade-data request service component may need to scale heavily, because countless users hit it continuously.
  • Ease of development (High): Because functionality is separated into distinct service components, development becomes easier: the scope of development is smaller and isolated. The chance that a change a programmer makes in one service component affects other service components is small, which reduces the coordination needed between developers or development teams.

II. Microservice Architecture in Android

In the Android world, we often see the Android architecture diagram shown in Figure 4-5.


Figure 4-5

This is a typical layered architecture consisting of the application layer, the Framework layer, the Native layer, and the kernel layer. At first glance it seems to have nothing to do with the microservice architecture we are discussing today. Note, however, that this is a more macro-level architecture, and other architecture patterns live beneath this layered one; the microservice architecture is the most obvious of them. The Android system is divided into layers by responsibility, but the Java layer (Java applications and the Application Framework) and the system service layer (the Android runtime environment) communicate through a local client/server model, which is exactly our microservice architecture.

We know that when the Android system boots, it goes through roughly the following steps:

After the init process starts, it calls init_parse_config_file to parse the init.rc file and then starts the commands and services specified in init.rc.

int main(int argc, char **argv) {
    // Code omitted
    // Create the system folders
    mkdir("/dev", 0755);
    mkdir("/proc", 0755);
    mkdir("/sys", 0755);
    // Code omitted
    // Initialize the kernel log, properties, etc.
    open_devnull_stdio();
    klog_init();
    property_init();
    process_kernel_cmdline();
    // Code omitted
    // Parse the init.rc file
    init_parse_config_file("/init.rc");
    // Code omitted
    return 0;
}

init.rc is a script written in the "Android Init Language". It describes actions, commands, services, and options. Here we only care about services. A service definition in init.rc looks roughly like this:

service zygote /system/bin/app_process -Xzygote /system/bin --zygote --start-system-server
    class main
    socket zygote stream 660 root system
    onrestart write /sys/android_power/request_state wake
    onrestart write /sys/power/state on
    onrestart restart media
    onrestart restart netd

The code above defines a zygote service, which starts a process called zygote. Zygote is the origin of everything in the Android world: all application processes are forked (incubated) from it. When zygote starts, it in turn starts the System Server process. System Server hosts all of the system services and acts as the hub for communication between applications and the zygote process; for example, when an application needs to be started, System Server asks zygote to fork a new process. After System Server starts, the android_server_SystemServer_nativeInit function in com_android_server_SystemServer.cpp is called; inside it, the ServiceManager instance is obtained and some native services are started. Finally, the initAndLoop function of SystemServer's ServerThread class is called, which registers system services such as WindowManagerService and ActivityManagerService with ServiceManager. These services provide the system's various capabilities. The system message loop is then started, and at this point the Android runtime environment is essentially in place.

public class SystemServer {
    // Main entry point
    public static void main(String[] args) {
        // Code omitted
        // Initialize native services.
        nativeInit();
        // This used to be its own separate thread, but now it is
        // just the loop we run on the main thread.
        ServerThread thr = new ServerThread();
        thr.initAndLoop();
    }
} // end of SystemServer

class ServerThread {
    public void initAndLoop() {
        // 1. Prepare the main thread message loop
        Looper.prepareMainLooper();
        // Code omitted
        try {
            // 2. Register the various system services with ServiceManager
            // Create PackageManagerService
            pm = PackageManagerService.main(context, installer,
                    factoryTest != SystemServer.FACTORY_TEST_LOW_LEVEL, onlyCore);
            // Code omitted
            // Create WindowManagerService
            wm = WindowManagerService.main(context, power, display, inputManager,
                    wmHandler, factoryTest != SystemServer.FACTORY_TEST_LOW_LEVEL,
                    !firstBoot, onlyCore);
            // Register WindowManagerService
            ServiceManager.addService(Context.WINDOW_SERVICE, wm);
            // Registration of dozens of other services, code omitted
        } catch (RuntimeException e) {
            // Code omitted
        }
        // 3. Start the message loop
        Looper.loop();
    }
} // end of ServerThread

The Framework layer (the client) does not call the native system services (the server) directly; calls go through the Binder mechanism. Client code issues a request to a native service through a Java service proxy and the Binder mechanism, and the actual work is performed by the native system service. The resulting architecture of the Framework and Native layers is shown in Figure 4-6.
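To make the proxy relationship concrete, here is a minimal, hand-written sketch of the kind of proxy/stub pair that AIDL normally generates for a Binder interface. The IWeatherService descriptor and the getTemperature call are hypothetical; the android.os classes and the transact/onTransact mechanism are the real Android APIs that carry a request from the client-side proxy to the service implementation.

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

public class WeatherBinderExample {
    // Hypothetical interface descriptor and transaction code.
    static final String DESCRIPTOR = "com.example.IWeatherService";
    static final int TRANSACTION_getTemperature = IBinder.FIRST_CALL_TRANSACTION;

    // Client-side proxy: lives in the app process, packs arguments into a Parcel
    // and ships them across Binder with transact().
    public static final class Proxy {
        private final IBinder remote;
        public Proxy(IBinder remote) { this.remote = remote; }

        public int getTemperature(String city) throws RemoteException {
            Parcel data = Parcel.obtain();
            Parcel reply = Parcel.obtain();
            try {
                data.writeInterfaceToken(DESCRIPTOR);
                data.writeString(city);
                remote.transact(TRANSACTION_getTemperature, data, reply, 0);
                reply.readException();
                return reply.readInt();
            } finally {
                reply.recycle();
                data.recycle();
            }
        }
    }

    // Server-side stub: lives in the service process, unpacks the request and does the work.
    public abstract static class Stub extends Binder {
        @Override
        protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
                throws RemoteException {
            if (code == TRANSACTION_getTemperature) {
                data.enforceInterface(DESCRIPTOR);
                int result = getTemperature(data.readString());
                reply.writeNoException();
                reply.writeInt(result);
                return true;
            }
            return super.onTransact(code, data, reply, flags);
        }

        public abstract int getTemperature(String city);
    }
}

The client holds only the Proxy (backed by an IBinder reference to the remote service), while the service process subclasses Stub and performs the actual work; this is exactly the client/server split the figure describes.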

This architecture closely resembles the REST-based API microservice architecture described earlier: the Native-layer system services provide the concrete functionality, the Java layer provides the interfaces for accessing those services, and the two sides communicate through the Binder mechanism. The Java and Native layers form a local client/server architecture. Using a network-backed app as an analogy, the Java layer plays the role of the app client, the Native-layer system services play the role of the server, and the communication mechanism is Binder instead of HTTP. The Native layer provides dozens of system services, each with a clear, single responsibility and good cohesion; for example, WindowManagerService is responsible only for managing operations related to on-screen windows. Thanks to this microservice-style architecture, the Java and Native layers are loosely coupled and highly scalable, and the complicated implementation details are hidden from the Java layer, making the whole system clearer and more flexible.
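As a usage-level illustration, the snippet below shows how an ordinary app obtains a system service. The returned WindowManager is only a local proxy object; calls that need system state are marshalled through Binder to the corresponding system service running in the system_server process (getDefaultDisplay() is deprecated on recent API levels but keeps the sketch short).

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.view.WindowManager;

public class ClientActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The returned object is a local proxy; its work is ultimately backed by
        // WindowManagerService, which was registered with ServiceManager at boot.
        WindowManager wm = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
        int rotation = wm.getDefaultDisplay().getRotation();
        Log.d("ClientActivity", "Current rotation: " + rotation);
    }
}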

III. Conclusion

The microservice architecture stands out among architecture patterns for its excellent flexibility and scalability, but its weakness is relatively low performance: communication costs are high because the client and the service provider must perform IPC or even network requests. In Android, both the client and the server are local, so only an IPC mechanism is needed. Android did not choose a traditional IPC mechanism such as sockets or pipes; instead it chose Binder, which is more flexible, simpler, faster, and lighter on memory, and this greatly reduces the performance overhead of the microservice architecture within the Android system. If you adopt this architecture in an ordinary application, you must therefore weigh whether the gains in flexibility and scalability outweigh the loss in performance; that is a decision you have to make based on your own situation.
