Let me start this article with a question: what are the bottlenecks of a relational database? I suspect many readers will answer by describing a particular database problem they ran into during development and how they solved it. There is nothing wrong with such answers, but they are not representative: if a new storage bottleneck appears, will the experience from that one scenario still apply to the new problem? That is hard to say.
In fact, whatever the scenario, if the problem ultimately lands on the database, then that scenario must have hit one of the database's pain points. So which pain points do the techniques in my previous six articles actually address? I summarize them below:
Pain point one: the number of database connections is not enough. In other words, at a given moment the number of requests trying to establish a connection exceeds the maximum number of connections the database allows. If we do not control the number of connections effectively, they can overwhelm the database, so we either have to spread those connections out or make the requests queue up.
Pain point two: operations on a database table boil down to two kinds, writes and reads. In practice it is rare for reads and writes to become a problem at the same time; usually one of the two is the bottleneck. Because reads and writes operate on the same medium, if we do not split that medium and deal with the read problem or the write problem separately, we only complicate the issue and find it hard to solve at the root.
Pain point three: the contradiction between real-time computation and massive data. The storage bottlenecks discussed in this series really fall into one category: using a relational database to support real-time computing scenarios. In reality, table data grows over time, and once it exceeds a certain size, the disk, memory and CPU of a single machine make it very hard to process that data in real time, so we must adopt new means to solve the problem.
I summarize these three pain points today to make one point: when we face a storage bottleneck, we should trace the problem back to which pain point of the database it touches; then, looking back at the techniques I described before, we will know which means solves which problem.
Enough preamble; let's get into the main content of this article. First, let's look at an interesting comic, shown below:
As a programmer, I felt a bit deflated by this comic, because the machine beat us. But it also reminds us software developers that software performance is inseparable from hardware: the storage problem we run into may not be caused by our program at all, but by a good shell being loaded into an old, outdated cannon, in which case the shell's power will naturally fall short of our expectations. It is also possible that our design simply does not use the available resources effectively. That is why, in an earlier article, I said that if we know in advance that storage will be where problems first appear, then during database modeling we should reduce the computation done in the database as much as possible, keep only its most basic computing functions, and leave complex computation to the data access layer; this lays a good foundation for solving the bottleneck later. Finally, I want to stress that software engineers often unconsciously ignore the impact of hardware on program performance, so when designing a solution, thinking about the relationship between hardware and the problem scenario may open up new ways to solve the problem.
Summarizing the point above in the same style as the pain points at the beginning of this article, we get:
Pain point four: when there is still substantial room to improve the hardware of the server the database runs on, we should give priority to asking whether improving the hardware can raise the database's performance.
In the first article of this series I mentioned that, because HTTP is stateless, we can strip the state (mainly the session) out of the web server, and then, as the site's load grows, expand the site's concurrency simply by adding web servers. Read/write separation, vertical splitting and horizontal splitting are all similar to horizontal scaling of web services in that they extend overall storage performance by adding new servers. This raises a new question: can these three solutions to storage bottlenecks be scaled horizontally the way web services are? In other words, when a scheme has run for a while and hits a new bottleneck, can we solve the new problem simply by adding servers?
To answer this question, we first need to analyze the principle behind horizontal scaling of web services. It rests on HTTP being stateless: there is no association between different HTTP requests, so if multiple web servers in the background handle HTTP requests and each deploys the same web service, the result is equivalent no matter which server handles a given request. If we carry this principle over to the database, it would require every database operation to be equivalent no matter which database server it lands on, and that equivalence requires every physical database to store the same data. That does nothing to solve the read/write imbalance or the massive-data problem. It might appear to solve the connection-count problem, but writes become troublesome, because on every write we must keep the data of the databases in sync, which complicates things. So the web-service style of horizontal scaling does not apply to databases, which also shows that splitting databases and tables is itself a strongly stateful affair.
However, the horizontal scaling of web services does embody an idea: when the business load exceeds what a single server can handle, we can extend the processing power of the whole tier by adding servers, and that idea certainly applies to the database. We can therefore define the horizontal scaling of a database as follows:
Horizontal scaling of a database means extending the performance of the entire storage tier by adding servers.
The read/write separation scheme, the vertical splitting scheme and the horizontal splitting scheme all take the table as their unit. If we treat a table as an atom of operation, then read/write separation and vertical splitting never break the atomicity of the table; they both revolve around whole tables. So if we add servers to extend the capacity of those schemes, we are bound to cross the red line of table atomicity, at which point the scheme has evolved into a horizontal splitting scheme. From this we can draw a conclusion:
Horizontal scaling of a database is basically built on horizontal splitting; that is, horizontal scaling is a further split applied to a database that has already been split horizontally, and the number of times we scale horizontally is the number of iterations of horizontal splitting. Therefore, to discuss horizontal scaling we must first analyze the horizontal splitting scheme in more detail; the horizontal splitting referred to here is splitting in the narrow sense.
Horizontal scaling of a database really means spreading the data of the horizontally split tables even further, and the rules by which the data is spread are determined by the primary key design used in the horizontal split. In the previous article I advocated a scheme based on sequences and auto-increment, and I gave two ways to implement it: one spreads the data by giving each server a different starting number and the same step size; the other spreads the data by estimating each server's storage capacity and setting a per-server auto-increment starting value and maximum value. For scheme one I said we can reserve different step intervals, which makes later horizontal scaling easier; with scheme two we can also complete a horizontal scaling simply by setting a new starting value.

But whichever scheme we scale, there is a new problem we must face: uneven data distribution, because the original servers carry the burden of the historical data. When I discussed horizontal splitting in the narrow sense, I counted even data distribution as one of its advantages, yet once we scale out the distribution becomes uneven. Uneven data leads to chaotic use of the system's computing resources and also affects upper-layer computation, for example large sorted queries: because the data is unevenly distributed, the bias introduced by local, per-server sorting grows. The only way to fix this is to redistribute the data evenly, which requires a large-scale data migration, so this kind of horizontal scaling pays off only if the business can live with the uneven distribution and we do not need to rebalance the data. If the business cannot tolerate uneven distribution, then our horizontal scaling amounts to redoing the horizontal split, which is quite troublesome. And even that is not the deadliest part: if the table being scaled and migrated is a core business table, the plan will inevitably force the database to stop serving for a period of time.
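As a concrete illustration of "scheme one", here is a minimal sketch (my own, not the author's code) of per-server sequences that share a step size but start from different offsets, with spare offsets reserved for later expansion; all names and sizes are illustrative assumptions.

```python
# Sketch of "scheme one": each database server hands out primary keys with
# a different starting offset and the same step size, so rows with
# key % STEP == offset always land on the same server.

STEP = 4          # assumption: reserve 4 slots up front, even though only
                  # 2 servers are deployed today, to leave room for scaling

class ShardSequence:
    """Auto-increment sequence for one shard: start offset + fixed step."""
    def __init__(self, offset, step=STEP):
        self.next_value = offset
        self.step = step

    def next_key(self):
        key = self.next_value
        self.next_value += self.step
        return key

# Two servers today, offsets 0 and 1; offsets 2 and 3 are held in reserve
# so a later horizontal expansion does not have to move existing rows.
shards = {0: ShardSequence(0), 1: ShardSequence(1)}

def route(key, live_offsets=(0, 1)):
    """A row is stored on the shard whose offset equals key % STEP."""
    offset = key % STEP
    # Until servers 2 and 3 exist, their slots are simply unused.
    return offset if offset in live_offsets else None

if __name__ == "__main__":
    keys = [shards[0].next_key() for _ in range(3)] + \
           [shards[1].next_key() for _ in range(3)]
    print(keys)                      # [0, 4, 8, 1, 5, 9]
    print([route(k) for k in keys])  # [0, 0, 0, 1, 1, 1]
```

The reserved offsets are what make later expansion cheap in this scheme: new servers simply take over the unused slots instead of forcing existing rows to move.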
Horizontal scaling of a database is essentially an iterative application of horizontal splitting; in other words, it is one more split on top of a previous split. The key question is whether the new split can inherit from the previous one, so that only a small amount of modification is needed to meet the business requirements. To answer that, we have to go back to the source: did our previous horizontal split prepare well for subsequent splits, and what should we pay attention to in order to make that possible? Personally, I think we should focus on two issues: the relationship between horizontal scaling and data migration, and the problem of sorting.
Issue one: the relationship between horizontal scaling and data migration. In my example above, the primary key design used for the horizontal split is based on the principle of even distribution. Adding new servers breaks that even distribution, so to restore it we have to migrate data accordingly. Extending this further: even if our horizontal split does not insist on even distribution but splits the data along some other dimension, if that dimension is tied to the original set of databases, scaling horizontally is still very likely to force a data migration.
For a real-time system, migrating data in a core business table is a very risky thing. Even setting aside the operational risk of the migration itself, it usually means taking the system down, which is hard for any stakeholder to accept. So how do we solve the data migration problem that comes with horizontal scaling? This is where consistent hashing comes in handy; consistent hashing is a derivative of the fixed (modulo) hash algorithm. Below is a brief introduction to how consistent hashing works; first, look at the following picture:
With consistent hashing, we first compute a hash value for each server used in the horizontal split and place those values on a circle spanning 0 to 2^32. We then compute the hash of the primary key of each piece of data, map it onto the same circle, search clockwise from the point where the data lands, and store the data on the first server found. If the clockwise search passes 2^32 without finding a server, the record wraps around and is saved on the first server. This is what the first picture shows.
Then one day we add a new server, that is, we scale horizontally, as in the second picture. The new node (node5 in the figure) affects only the original node node4, the first node clockwise from it, so consistent hashing keeps data redistribution to a minimum.
In the example above we used only 4 nodes, so adding a new node affects roughly 25% of the data, which is still a fairly large impact. Is there a way to reduce it? We can improve on the basic consistent hashing algorithm: the more nodes there are on the ring, the smaller the impact of adding or deleting one node on the whole. In reality we may not have that many physical nodes, so we can add a large number of virtual nodes to further suppress uneven data distribution.
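To make the mechanism concrete, here is a minimal consistent-hash ring with virtual nodes (my own sketch, not code from the article; the hash function and replica count are illustrative assumptions). Adding node5 only remaps the small share of keys that now fall before it on the ring.

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas          # virtual nodes per physical node
        self._keys = []                   # sorted hash positions on the ring
        self._ring = {}                   # hash position -> physical node
        for node in nodes:
            self.add_node(node)

    def _hash(self, value):
        # Map any string onto the 0 .. 2^32 - 1 circle.
        return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self._keys, h)
            self._ring[h] = node

    def remove_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}#{i}")
            self._keys.remove(h)
            del self._ring[h]

    def get_node(self, key):
        """Walk clockwise from the key's position to the first node."""
        if not self._keys:
            return None
        h = self._hash(str(key))
        idx = bisect.bisect(self._keys, h)
        if idx == len(self._keys):        # passed 2^32: wrap to the first node
            idx = 0
        return self._ring[self._keys[idx]]

if __name__ == "__main__":
    ring = ConsistentHashRing(["node1", "node2", "node3", "node4"])
    before = {k: ring.get_node(k) for k in range(10000)}
    ring.add_node("node5")                # horizontal expansion
    after = {k: ring.get_node(k) for k in range(10000)}
    moved = sum(1 for k in before if before[k] != after[k])
    print(f"keys that must migrate: {moved / 100:.1f}%")
```

Running the sketch shows that only roughly a fifth of the keys need to move when a fifth node is added, and raising the replica count makes the share each node takes on more even.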
Above I compared the primary key design of horizontal splitting to the distributed cache technology memcached. In fact, horizontal splitting has a dedicated concept of its own in database technology, namely data partitioning, except that the partitioning done by horizontal splitting is coarser-grained and a far bigger operation. I write about storage bottlenecks within the limits of my own experience and knowledge, so if this article sparks a reader's interest in storage problems, I hope it at least points out a direction for study, spares you some of the exploration, and makes your learning more efficient.
Issue two: the sorting problem under horizontal scaling. Whenever we have to scale horizontally, one factor is always making mischief: the amount of data is too large. I have said before that massive data poses a serious challenge to read operations; for a real-time system, querying the full volume of massive data in real time is nearly impossible, yet in practice we still need such operations. When we do, we usually return only a portion of the result set in order to meet the real-time requirement, and for that small portion to satisfy the user without introducing too much business bias, sorting becomes very important.
But the sorting here needs a qualification. Let us be clear: a full sort over massive data, with a real-time requirement on top, is simply impossible. Why? Because it runs up against the limits of disk read/write speed, memory read/write speed and CPU computing power. If, for 1 TB of data, those three factors allowed us to complete a read (not even counting the sort) within 10 milliseconds, then real-time sorting of massive data might be possible, but today's computers have no such ability.
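To make the point concrete, here is a rough back-of-envelope check (my own illustrative numbers, not figures from the article): even ignoring sorting entirely, a single sequential scan of 1 TB already blows any real-time budget on ordinary hardware.

```python
# Assumed sequential read throughput; real disks vary, but the order of
# magnitude is what matters here.
DATA_BYTES = 1 * 1024**4          # 1 TB
DISK_MB_PER_S = 500               # assumed throughput, MB/s
read_seconds = DATA_BYTES / (DISK_MB_PER_S * 1024**2)
print(f"full scan alone: ~{read_seconds:.0f} s")   # roughly 35 minutes
```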
So how do we handle real-time sorting of massive data in real-world scenarios? We need a bit of reverse thinking and a way to sidestep the sorting problem. The first approach is to reduce the amount of data that needs to be sorted, and database partitioning is a good means of doing that. Besides partitioning, there is another means: earlier I talked about using search technology to solve slow database reads, since the search index can itself serve as a read store. How does search technology read massive data quickly? Its means is the index. An index is like the table of contents of a book: to retrieve the information we want, the most efficient way is to look up the table of contents, find the heading we care about and its page number, and turn straight to that page. The index of a storage system is essentially the same as a book's table of contents, only indexing technology in computing is more complex. Indexing the data is itself a means of narrowing the range and size of the data, similar to partitioning; we can in fact treat the index as a mapping table of the database. To make indexes efficient and to make lookups through them precise, storage systems usually sort the index as they build it. Then, when a user issues a real-time query against an indexed field, the query can skip the sorting step because the index is already well ordered, and we efficiently get back a well-ordered result set.
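As a small illustration of that last point, here is a sketch (my own; the in-memory structures are stand-ins for a real storage engine's index) of how an index sorted at write time lets a range query return ordered results with no sort at query time.

```python
import bisect

# Pretend table: row id -> row data (unsorted "on disk").
rows = {1: {"amount": 500}, 2: {"amount": 120}, 3: {"amount": 980}, 4: {"amount": 300}}

# Index built (and sorted) at write time: list of (amount, row_id).
index = sorted((r["amount"], rid) for rid, r in rows.items())
index_keys = [k for k, _ in index]

def range_query(low, high):
    """Return rows with low <= amount <= high, already in amount order."""
    lo = bisect.bisect_left(index_keys, low)
    hi = bisect.bisect_right(index_keys, high)
    # No sort here: the index order is the result order.
    return [rows[rid] for _, rid in index[lo:hi]]

print(range_query(100, 600))   # [{'amount': 120}, {'amount': 300}, {'amount': 500}]
```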
Now back to the scenario of sorting massive data across a horizontal split. In a previous article I mentioned that real-time paged queries over massive data can be handled by sampling: although the user's intent is to query the full data set, no one can digest all of massive data, so we only ever operate on part of it. But precisely because the user's intent is the full data set, the sample we return should be as accurate as possible, and that accuracy is tied to the distribution principle we used when spreading the data, whose concrete embodiment is the primary key design. When we meet this kind of scenario, we need the primary key to carry ordering, so we have to discuss the sorting problem of primary keys under a horizontal split.
In a previous article I mentioned a scheme that designs the primary key with a fixed hash algorithm. The constraint there was that the primary key itself has no ordering attribute, only uniqueness, so only the hash value is unique; such a hash cannot guarantee that data lands on each server in time order, only that, over massive amounts of data, each server receives a roughly even share. So this kind of primary key design is of no help when a paged query requires sorting. If we want the primary key to carry order, we should use an incrementing number, but the incrementing scheme that uses a starting number and a step size, as I described before, has a problem: order can be guaranteed within a single table in a single database, while order across databases and tables is hard to guarantee. This also shows that a complete sort over the primary key field of the logical table, across a horizontal split, is an impossible task.
So how do we solve this? We have to use a separate primary key generation server. In a previous article I criticized the primary key generation server approach; after that article was published, a friend contacted me to discuss it. He described a practice his team planned: they developed a primary key generator service, and because they feared a single point of failure they made it distributed. They designed a simple UUID-like algorithm suited to running in a cluster and planned to use ZooKeeper to guarantee the cluster's reliability. The most critical point of their approach is how to keep primary key acquisition efficient: they do not let every key request hit the cluster directly, but place a proxy layer between the cluster and the key consumers. The cluster does not generate keys one at a time; each round it generates a large batch of keys, and those values are cached in a queue in the proxy layer. Each time a consumer asks for a key, the queue hands one out. Their system also monitors the ratio of keys consumed, and when it reaches a threshold the cluster is notified to immediately generate a new batch of keys and append them to the proxy layer's queue. To guarantee the reliability and continuity of key generation, the key queue only consumes a key when a key request arrives; it does not care whether the key ends up being used. I then raised a question of my own: what if the proxy goes down, how does the cluster keep generating keys? He said their system has no single point anywhere; even the proxy layer is distributed, so it is very reliable, and even if every server went down, the key generation cluster would still not produce duplicate keys, because for safety the service persists the largest key value generated each time.
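Below is a minimal sketch of the buffered-batch idea as I understood it from that conversation. The class names, batch sizes and thresholds are my own illustrative assumptions, and the ZooKeeper coordination and persistence are only hinted at in comments.

```python
from collections import deque
import threading

class KeyGeneratorCluster:
    """Stands in for the distributed generator: returns monotonically
    increasing key batches and would persist the max key for safety."""
    def __init__(self):
        self._max_key = 0
        self._lock = threading.Lock()

    def next_batch(self, size=1000):
        with self._lock:
            start = self._max_key + 1
            self._max_key += size
            # In the real design the new max key is persisted here, so a
            # restart never re-issues the same range.
            return list(range(start, self._max_key + 1))

class KeyProxy:
    """Proxy layer: caches a queue of keys and refills below a threshold."""
    def __init__(self, cluster, batch_size=1000, refill_ratio=0.2):
        self.cluster = cluster
        self.batch_size = batch_size
        self.refill_at = int(batch_size * refill_ratio)
        self.queue = deque(cluster.next_batch(batch_size))
        self._lock = threading.Lock()

    def get_key(self):
        with self._lock:
            if len(self.queue) <= self.refill_at:
                # Ask the cluster for a new batch before we run dry.
                self.queue.extend(self.cluster.next_batch(self.batch_size))
            return self.queue.popleft()

if __name__ == "__main__":
    proxy = KeyProxy(KeyGeneratorCluster(), batch_size=10)
    print([proxy.get_key() for _ in range(15)])  # ordered, no duplicates
```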
In fact, the core starting point of this friend's primary key design is to solve the key ordering problem, and it also gives a very realistic scenario for actually using a separate primary key generation service. If we can guarantee the order of the primary keys, and the data lands on the servers in that order, then sorted queries within each database will be very accurate; at query time we spread the query count evenly across the servers holding the table, and the merged, sorted result will be approximately accurate.
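Here is a small sketch of what that "approximately accurate" merged query might look like: each shard returns its own small, already-ordered slice, and the application merges the slices. The shard contents and page sizes below are made-up example data.

```python
import heapq

def shard_top_n(shard_rows, n):
    """Each shard sorts only its own slice, which stays small."""
    return sorted(shard_rows)[:n]

def paged_query(shards, page_size):
    # Spread the requested page size evenly across the shards, then merge
    # the per-shard ordered results; heapq.merge keeps the global order.
    per_shard = max(1, page_size // len(shards))
    partials = [shard_top_n(rows, per_shard) for rows in shards]
    return list(heapq.merge(*partials))[:page_size]

shards = [
    [11, 3, 7, 19],      # shard 1 primary keys
    [2, 14, 6, 22],      # shard 2
    [5, 9, 1, 17],       # shard 3
]
print(paged_query(shards, 6))   # [1, 2, 3, 5, 6, 7]
```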
Between this friend's primary key generation service and the consistent hashing I have discussed today, I am now inclined to abandon the fixed-hash primary key design I mentioned earlier, though this abandonment is conditional. The primary key generation service is in fact a better alternative to the fixed hash scheme: a fixed-hash key has only uniqueness and no ordering, so it is of no use for sorted queries, and when it comes to horizontal scaling, extending a fixed hash leads to large-scale data migration whose risk and cost are too high. Consistent hashing is the evolved version of fixed hashing, so when we do want to distribute data by hash, it may be better to use consistent hashing from the outset, which will be a great convenience for later upgrades and maintenance of the system.
Some readers also mentioned in the comments that hardware performance matters when a hash algorithm distributes data evenly: if the servers in the horizontal split differ in performance, an even distribution creates hot spots. If we do not change the hardware differences, then we have to add weights to the distribution algorithm to dynamically adjust how data is spread, which artificially produces an uneven data distribution; and then, for some upper-layer computing scenarios, we end up having to factor in that weight dimension as well. However, as the author I object to this approach, for the following specific reasons (a small sketch of what such a weighted distribution looks like follows after the three objections):
Objection one: I personally believe that introducing weights into any system complicates the problem. Weights are usually the product of ad-hoc calculation, and if the weighting algorithm is extended further over time, the problem grows more and more complex. I also think weights are hard to handle rationally; as they evolve they can become extraordinarily complicated, perhaps far beyond the complexity of the distributed system itself. Splitting data is hard enough on its own, so unless we absolutely must, we should avoid weights; even where weights exist, we should not use them lightly, and we should look for a way to eliminate the root problem that made the weights necessary.
Objection two: if our system's back-end database runs on a standalone server, we generally give the best server to the database, which itself shows how important the database is, and any sharding scheme for the database will be cumbersome, even dangerous. That is why, at the beginning of this article, I proposed asking whether hardware can solve the problem before tackling the bottleneck, and taking the hardware solution first if it can. In other words, treating storage problems sensibly presupposes that the database's hardware keeps up with the requirements of the times; if some piece of hardware itself has a performance bottleneck, are we not then ignoring the importance of hardware?
Objection three: an even distribution of data not only makes reasonable use of computing resources, it also benefits business operations. When we scale out the database, keeping each individual server itself well balanced is not difficult; if an old server is really too old, replace it with a new one. That does introduce a whole-database migration, but such a coarse-grained wholesale move is far less difficult than the data migration required by any splitting scheme.
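For completeness, here is the sketch promised above of what the commenters' weighted distribution might look like: each server receives virtual slots in proportion to an assigned weight, so stronger hardware receives more data. The weights and hash are illustrative assumptions, and this remains the approach I argue against.

```python
import hashlib

WEIGHTS = {"strong-server": 3, "weak-server": 1}   # assumed 3:1 hardware ratio

# Each server occupies a number of slots proportional to its weight.
slots = [srv for srv, w in WEIGHTS.items() for _ in range(w)]

def route(key):
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return slots[h % len(slots)]

counts = {}
for k in range(10000):
    srv = route(k)
    counts[srv] = counts.get(srv, 0) + 1
print(counts)   # roughly 3/4 of the keys land on "strong-server"
```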
Well, that is it for this article. I wish everyone a happy work and life!
Original link: http://www.cnblogs.com/sharpxiajun/p/4279946.html
"Turn" thoughts on the evolution of large-scale website Technology (vii)-storage bottlenecks (7)