Qiniu's Li Daobing: Practical Experience with Highly Available, Scalable Architectures


The maturity of the mobile Internet, cloud computing, and big data allows more good ideas to be realized in a short time. If such an idea captures user demand, the user base is likely to grow explosively, without the years of patient operation that used to be required. But rapid user growth (especially in short bursts) often catches application developers off guard and confronts them with serious technical challenges: how to avoid a service outage caused by a single machine failing, how to keep the user experience from degrading when capacity runs short, and so on. Adopting a highly available, scalable architecture at the outset of system construction effectively avoids these problems.

How can you build a highly available, scalable architecture? Li Daobing, chief architect of Qiniu Cloud Storage, shared his thinking at the tenth "Developer Best Practices Day" salon on March 22. Drawing on years of hands-on experience, he described how to build highly available, scalable systems at four levels: the ingress layer, the business layer, the cache layer, and the database layer, for business scenarios that are not overly complex. After reading this article, you should feel that high availability and scalability are not out of reach: at low cost, they can be designed into the architecture from the early stages of a project.

How to achieve high availability

Ingress Layer

The ingress layer, usually a tier of Nginx or Apache, is the entry point of the application (whether a web app or a mobile app's backend). We normally point the service at a single IP; if the server behind that IP dies, user access is cut off. Keepalived can be used to make the ingress layer highly available. For example, if machine A's IP is 1.2.3.4 and machine B's IP is 1.2.3.5, we request an additional IP, 1.2.3.6 (call it the heartbeat IP), which is normally bound to machine A. If A goes down, the IP automatically binds to machine B; if B goes down, it automatically binds back to A. We then point DNS at the heartbeat IP, achieving high availability at the ingress layer.
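A minimal keepalived configuration sketch for machine A in the example above. The interface name, password, and router ID are assumptions; machine B would use `state BACKUP` and a lower priority:

```
vrrp_instance VI_1 {
    state MASTER           # BACKUP on machine B
    interface eth0         # assumed NIC name
    virtual_router_id 51   # must match on both machines
    priority 100           # e.g. 90 on machine B
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret   # placeholder password
    }
    virtual_ipaddress {
        1.2.3.6            # the heartbeat IP from the example
    }
}
```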

This scheme has a few drawbacks. First, failover may cause a one- or two-second interruption, which is acceptable unless you have strict millisecond-level requirements. Second, it wastes a machine at the ingress: you buy two machines, but probably only one serves traffic at any time. Third, for applications using long-lived connections, failover may break the connection, so the client needs logic to reconnect. In short, for fairly ordinary businesses, this solution covers most of the problem.

Note that there are some restrictions on using keepalived:

The two machines must be on the same network segment; otherwise there is no way for them to take over each other's IP.

Intranet services can also use heartbeat failover, but note one thing: in the past, for security, we would bind an intranet service to its intranet IP to avoid exposure. To use keepalived, however, the service must listen on all IPs (if it listened only on the heartbeat IP, the machine that does not currently hold that IP could not start the service). The simple fix is to use iptables to keep the intranet service unreachable from the public network.
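A sketch of the iptables fix just described, assuming the intranet lives on 10.0.0.0/8 and the service listens on port 8080 (both are illustrative values):

```
# Allow only the intranet segment to reach the service port,
# so listening on 0.0.0.0 stays safe.
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
```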

Server utilization drops; consider hybrid deployment (running other services on the standby machine) to improve it.

A common mistake: with two machines and two public IPs, pointing the domain's DNS at both IPs and calling it high availability. This is not high availability at all, because if one machine goes down, roughly half the users will be unable to access the service.

Business Layer

The business layer usually consists of logic code written in PHP, Java, Python, Go, and so on, relying on a backend database and some cache tier. How do you make the business layer highly available? The core principle is that the business layer holds no state: state is pushed down into the cache layer and the database. That said, there are a few kinds of data that developers commonly like to keep in the business layer.

The first is the session, i.e., data related to user login. The better practice is to put the session in the database, or in a relatively stable cache system.

The second is the cache: when a database query is slow, you may want to stash the result in the process so the next query skips the database. The problem with this approach is that once the business layer has more than one server, the per-process data is hard to keep consistent, and data served from such a cache can be wrong.

A simple principle: keep the business layer stateless. When one business-tier server goes down, Nginx/Apache automatically routes all requests to another server in the tier. Because there is no state, the two servers are interchangeable, so users notice nothing. If the session lives in the business layer, then a user who logged in on one machine is logged out when that process dies.

A friendly reminder: cookie sessions were popular for a while, i.e., encrypting the session data, placing it in a cookie, and sending it to the client; this too makes the server completely stateless. But there are many pitfalls here, so use it only if you can get around them. The first pitfall is keeping the encryption key secret: once it leaks, an attacker can forge anyone's identity. The second is replay attacks: how do you stop someone from, say, brute-forcing a verification code by saving and replaying cookies? And there are other attacks besides. If you have no good answer to these two problems, use cookie sessions with utmost caution. It is best to put the session in a database that performs well enough; if the database cannot keep up, putting the session in the cache is still better than putting it in a cookie.

Cache Layer

A very simple architecture has no cache concept at all. But once traffic grows, MySQL and similar databases cannot keep up; for example, MySQL on a SATA disk degrades badly once QPS reaches 200, 300, or even 500. At that point, consider introducing a cache layer to absorb the bulk of requests and raise the overall capacity of the system.

A simple way to make the cache layer highly available is to partition it. Suppose the cache layer is a single machine: when it dies, all the pressure falls on the database, and if the database cannot take it, the whole site (or application) goes down. If instead the cache layer is split across four machines, each holding a quarter of the data, then when one machine dies, only a quarter of the traffic falls through to the database; the database can absorb that, and the site stays stable until the cache comes back. In practice a quarter is clearly not fine enough, so we split further, ensuring the database survives the loss of any single cache node. At small to medium scale, the cache layer can be co-deployed with the business layer to save machines.
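The partitioning above can be sketched in a few lines of Python. The shard names are hypothetical; the point is that losing one of four shards affects only about a quarter of the keys:

```python
import hashlib

SHARDS = ["cache-0", "cache-1", "cache-2", "cache-3"]  # hypothetical cache nodes


def shard_for(key: str) -> str:
    """Pick a shard by hashing the key, so each node holds ~1/len(SHARDS) of the data."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]


# If cache-2 dies, only the keys mapped to it (about a quarter) miss and hit the database.
keys = [f"user:{i}" for i in range(10000)]
affected = sum(1 for k in keys if shard_for(k) == "cache-2")
print(f"{affected / len(keys):.0%} of keys would fall through to the database")
```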

Database Layer

High availability at the database layer is usually achieved in software. MySQL, for example, offers master-slave and master-master replication modes to meet the requirement.

MongoDB likewise has the concept of replica sets, which basically covers most needs.

In short, to achieve high availability you need to: run a heartbeat at the ingress layer, keep business-layer servers stateless, partition the cache layer finely, and run the database in master-slave mode. This model does not require many servers: all four layers can be co-deployed on just two machines, which is enough to meet early-stage high-availability requirements. When either server dies, users notice nothing.

How to achieve scalability

Ingress Layer

Scalability at the ingress layer can be achieved by adding machines horizontally and adding their IPs to DNS. Note, however, that although resolving a domain to dozens of IPs is technically fine, many browsers and clients only use the first few. Some DNS providers optimize for this (e.g., randomizing the order of the returned IPs), but the effect is unreliable.

The recommended approach is to use a small number of Nginx machines as the portal and hide the business servers on the intranet (most HTTP-style businesses work this way). Alternatively, send the full IP list to the client and do the scheduling on the client side (especially for non-HTTP businesses such as games and live streaming).
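A minimal nginx sketch of the recommended setup: the public-facing machine proxies to business servers hidden on the intranet (the addresses and port are assumptions):

```
upstream business {
    server 10.0.0.11:8080;   # intranet business servers (assumed addresses)
    server 10.0.0.12:8080;   # add more lines to scale the business layer
}

server {
    listen 80;
    location / {
        proxy_pass http://business;
    }
}
```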

Business Layer

How is business-layer scalability achieved? As with high availability, keeping the business layer stateless is the key; beyond that, simply keep deploying machines horizontally.

Cache Layer

The trickier part is cache-layer scalability. What is the simplest, crudest method? Take the whole cache layer offline during the low-traffic hours after midnight, bring up the new cache layer, and let it warm up slowly. Of course, this requires that your database can withstand the request volume during the warm-up window. What if it cannot? That depends on the type of cache, so let us first distinguish the cache types.

Strongly consistent cache: serving stale or wrong data from the cache is unacceptable (e.g., a user's account balance, or data that will be further cached downstream).

Weakly consistent cache: data from the cache may be wrong for a bounded period of time (e.g., a tweet's repost count).

Invariant cache: the value for a given key never changes (e.g., the SHA1 derived from a password, or the result of some other complex computation).

Which cache types scale well? Weakly consistent and invariant caches are easy to scale: consistent hashing suffices. Strong consistency is a bit more complicated; we will come to it shortly. The reason to use consistent hashing rather than simple hashing is the loss of cached data during scaling: if the cache grows from 9 to 10 nodes, simple hashing invalidates about 90% of cached entries, while consistent hashing invalidates only about 10%.
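The 90% vs. 10% claim can be checked with a small simulation. The hash ring below is a minimal sketch with virtual nodes, not production code:

```python
import bisect
import hashlib


def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)


class ConsistentHash:
    """Minimal hash ring with virtual nodes (a sketch, not production code)."""

    def __init__(self, nodes, vnodes=100):
        # Each node is placed on the ring many times to even out the load.
        self.ring = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, key: str) -> str:
        # A key belongs to the first virtual node clockwise from its hash.
        i = bisect.bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[i][1]


keys = [f"k{i}" for i in range(10000)]
old_nodes = [f"n{i}" for i in range(9)]
new_nodes = old_nodes + ["n9"]

# Simple modulo hashing: almost every key moves when a 10th node is added.
moved_simple = sum(1 for k in keys if h(k) % 9 != h(k) % 10)

# Consistent hashing: only the keys claimed by the new node move.
old_ring, new_ring = ConsistentHash(old_nodes), ConsistentHash(new_nodes)
moved_ch = sum(1 for k in keys if old_ring.node_for(k) != new_ring.node_for(k))

print(f"simple hash: {moved_simple / len(keys):.0%} moved; "
      f"consistent hash: {moved_ch / len(keys):.0%} moved")
```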

So what problems does a strongly consistent cache raise? First, the cache clients' configuration updates cannot happen at exactly the same moment, so within that window different clients may read expired data. Second, if you scale up and later remove the node, you can get dirty data. For example, key a lives on machine 1; after scaling it moves to machine 2 and is updated there; after the node is removed, a falls back to machine 1, where a stale copy remains.

Problem 2 is easier to solve: either never remove nodes, or make the interval between node adjustments longer than the data's validity period. Problem 1 can be resolved with the following steps:

1. Push both hash configurations (old and new) to all clients, but keep using the old one;

2. Switch clients, one by one, to a mode where the cache is trusted only when the two hash configurations map the key to the same node; in all other cases read from the database, but still write to the cache;

3. Notify clients, one by one, to switch to the new configuration alone.
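Step 2 above can be sketched as follows. The `old_ring`/`new_ring` lookups, the cache, and the database are hypothetical stand-ins (here plain callables and dicts):

```python
def read_during_migration(key, old_ring, new_ring, cache, db):
    """During a rolling config update, trust the cache only when both hash
    configurations point at the same node; otherwise read the database but
    write through to the cache at its new location."""
    old_node, new_node = old_ring(key), new_ring(key)
    if old_node == new_node:
        value = cache.get((new_node, key))
        if value is not None:
            return value
    value = db[key]                 # fall back to the database
    cache[(new_node, key)] = value  # write through to the new location
    return value


cache, db = {}, {"a": 1}
disagree = read_during_migration("a", lambda k: "n1", lambda k: "n2", cache, db)
assert disagree == 1 and cache[("n2", "a")] == 1  # configs disagree: DB read, cache write
agree = read_during_migration("a", lambda k: "n2", lambda k: "n2", cache, db)
assert agree == 1  # configs agree: served from the cache
```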

Memcache was designed early, so its support for high availability and scalability is not well developed. Redis has made many improvements in this area; in particular, @ngaut's team has built Codis on top of Redis, which solves most cache-layer problems in one stroke. It is worth a look.

Database Layer

There are many methods and plenty of documentation for scalability at the database layer, so we will not go into much detail here. The general approaches are: horizontal splitting, vertical splitting, and periodic rolling (archiving old data by time period).
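As one concrete illustration of horizontal splitting, a row can be routed to one of several shard tables by its key. The table names and shard count below are purely illustrative:

```python
def shard_table(user_id: int, n_shards: int = 4) -> str:
    """Horizontal split: route each user's rows to one of n_shards tables
    (table names are illustrative)."""
    return f"orders_{user_id % n_shards}"


assert shard_table(7) == "orders_3"
assert shard_table(8) == "orders_0"
```

A real deployment would usually pick the shard key (user ID, order ID, etc.) to match the dominant query pattern, so that most queries touch a single shard.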

All in all, with the methods and techniques described above, we can achieve high availability and scalability at all four levels: ingress, business, cache, and database. Specifically: at the ingress layer, use heartbeats for availability and parallel deployment for scaling; at the business layer, keep services stateless; at the cache layer, finer partitioning aids high availability, while consistent hashing enables scaling; at the database layer, master-slave replication solves the availability problem, and splitting and rolling solve the scalability problem.



The tips and techniques shared in this article should help you quickly build highly available, scalable systems for less complex business scenarios and small-to-medium applications. There are many more details and practical lessons worth exploring in building such systems, and we look forward to further exchanges with you.

Reference source:
Qiniu's Li Daobing: Practical experience on highly available scalable architectures
http://www.lai18.com/content/407147.html
