C# Hadoop Learning Note (vii) - A C# Cloud Computing Framework for Reference (Part 2)

Source: Internet
Author: User

Reposted from: http://blog.csdn.net/black0707/article/details/12853049

In the previous chapter, we focused on how this system handles "read" operations on big data, though some details were glossed over. In this chapter, we will focus on how "write" operations are handled. We all know that if there were only reads, there would be almost no need for data synchronization and no concurrency safety problems; the root cause of inconsistency between the cache and the database is the existence of "write" operations. Now, let's look at what happens when the system needs to write a piece of data.

Again, let's take the friend list as an example. Suppose I log on to the site, get my friend list, and add a friend; my friend list must then be modified and updated. (Of course, adding a friend triggers more than just this one update request, but we take it as our example; other requests are handled similarly.) This modify-and-update request, like the earlier request to fetch the friend list, first passes through the DNS load balancer to an appropriate master node, and the master node then assigns it to a relatively idle slave node responsible for this function.

Now suppose, as mentioned before, that friend-list requests are very common, so more than one service process provides this service — say 10 of them, numbered 0~9, running on 10 slave nodes (or perhaps on just 1!). When the request is allocated, which slave node and which service process are selected? Many rules could drive the allocation strategy; here is a simple one: take the user ID modulo 10, yielding 0~9, and that is the number of the selected service process. If my user ID ends in 9, my request will only be assigned to service process number 9 — and so will the requests of every other user whose ID ends in 9. Likewise, service process 9 caches only those rows in the database whose user ID ends in 9, while tail digits 0~8 are cached by the other service processes. Working this way, my request to modify and update the friend list is handled only by service process number 9. We call this the "single point model": the same piece of data has only one live cache (backup nodes don't count). As you may have guessed, there is also a "multipoint model," in which several service processes are responsible for the same cached data; that is a more complex situation, and we'll discuss it later.
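To make the routing rule concrete, here is a minimal sketch in C#. The class and method names are hypothetical — the article does not show the framework's actual API — but the modulo-10 selection is exactly the rule described above.

```csharp
using System;

public static class FriendListRouter
{
    private const int ServiceProcessCount = 10; // processes numbered 0..9

    // Pick the service process responsible for this user's cached data:
    // the tail digit of the user ID selects the process number.
    public static int SelectServiceProcess(long userId)
    {
        return (int)(userId % ServiceProcessCount);
    }
}

public static class Demo
{
    public static void Main()
    {
        long userId = 1234567899; // tail digit 9
        int process = FriendListRouter.SelectServiceProcess(userId);
        Console.WriteLine($"User {userId} -> service process #{process}"); // prints 9
    }
}
```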

Now, let's continue with the "single point model." When this modify-and-update request reaches service process number 9, how is it handled? The cache must be processed first and then kept consistent with the database — that much we all know (or what else would you want the system to do?). We also know that wherever there are concurrent reads and writes, there will be concurrency conflicts and safety issues. How do we solve them? We have two ways of synchronizing reads and writes.

1. The first way is the traditional one: locking. A lock effectively keeps the cached data synchronized and correct, but the drawback is obvious: when a service process has reader and writer threads running at the same time, there is serious lock contention, which becomes a new performance bottleneck. Fortunately, for the business requirements usually handled this way, after the load-balancing and partitioning measures above, the lock granularity is not too large. In our example, a write locks only the cached data of users whose ID ends in 9; the other 90% of users are unaffected. The locked region can be made even smaller — down to locking only my own user's cached data — and the smaller the lock, the smaller its bottleneck effect. (Why not always shrink the granularity all the way down and lock each user's cached data directly? The answer is simple: you cannot have that many locks at work at the same time, just as a database cannot create one table per user; lock granularity needs to be balanced and tuned.) OK, to continue: my modify-and-update request, handled by the writer thread of the service process, acquires the lock on this portion of the cached data, performs the write, and then releases the lock — the traditional locking workflow. During that time, read operations are blocked and must wait; you can imagine that if the lock granularity is large, many reads will sit blocked, and the system's high performance is gone. A minimal sketch of this partitioned locking follows.
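The sketch below assumes one ReaderWriterLockSlim per tail-digit partition; all types and names are illustrative, not the framework's own. The point it demonstrates is the granularity trade-off above: a write to partition 9 never blocks readers of partitions 0~8.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class PartitionedFriendListCache
{
    private const int Partitions = 10;
    private readonly ReaderWriterLockSlim[] _locks;
    private readonly Dictionary<long, List<long>>[] _shards; // userId -> friend ids

    public PartitionedFriendListCache()
    {
        _locks = new ReaderWriterLockSlim[Partitions];
        _shards = new Dictionary<long, List<long>>[Partitions];
        for (int i = 0; i < Partitions; i++)
        {
            _locks[i] = new ReaderWriterLockSlim();
            _shards[i] = new Dictionary<long, List<long>>();
        }
    }

    public List<long> GetFriends(long userId)
    {
        int p = (int)(userId % Partitions);
        _locks[p].EnterReadLock(); // blocks only while partition p is being written
        try
        {
            return _shards[p].TryGetValue(userId, out var friends)
                ? new List<long>(friends) // return a copy so callers can't mutate the cache
                : new List<long>();
        }
        finally { _locks[p].ExitReadLock(); }
    }

    public void AddFriend(long userId, long friendId)
    {
        int p = (int)(userId % Partitions);
        _locks[p].EnterWriteLock(); // readers of the other 9 partitions are unaffected
        try
        {
            if (!_shards[p].TryGetValue(userId, out var friends))
                _shards[p][userId] = friends = new List<long>();
            friends.Add(friendId);
        }
        finally { _locks[p].ExitWriteLock(); }
    }
}
```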

2. Is there a better way? Of course: working without locks. First of all, ours is a site where reads vastly outnumber writes (if the demand were reversed, it might be handled the opposite way). That is, most of the time reads should not be blocked by writes; reads get priority, and when a write happens, we find a way to let the readers catch up afterwards. This working style is actually much like a version-management tool such as SVN: everyone can read, and nobody's read is blocked just because someone is writing. The data I read may not be the latest, because someone wrote while I was reading; when I try to commit my write, SVN checks the versions, and if they are inconsistent I must re-read, merge, and write again. Our system works similarly: every thread may read, but before reading it records the version number, then reads the cached data, and when it is about to return the data to the Web layer it compares the version number again. If the version has been updated in the meantime, the thread must re-read, until the version of the cached data it reads is up to date. (Note that the data you read is at most "stale," never wrong. Why? Again, think of SVN: this is the "copy-on-write" principle. A writer copies the data out, writes to the copy, and then publishes it back; it never scribbles over the copy you are reading.) As mentioned earlier, comparing and updating the version number can be treated as an atomic operation (a CAS operation does this nicely; you can Google plenty of material on CAS), so the whole process is lock-free, and under big-data, high-concurrency loads there is no lock bottleneck. However, you may have spotted some problems, the most obvious being that a thread may have to read the data more than once. If the data being read is large, this itself can become a performance bottleneck (painful, but there's no way around it), and it may introduce delays that hurt the user experience. So how do we handle this? In practice we make a trade-off based on the actual business need. If the request tolerates a certain delay — its real-time requirement is not the strictest — this approach works well. For example, viewing a friend's status updates does not need to be strictly real-time; a slight delay is allowed. You can imagine that when your friend posts a status, you don't need — in fact it's impossible — to see the update the instant they click "post." As long as you see the newly published status within a short period (say, 10 seconds?), that's enough.

For such a request, if the first, locking approach would create a big performance bottleneck, then this lock-free mode of working is appropriate: when a read-write conflict happens, the cost or delay of the read re-reading is tolerable. Fortunately, the situation where multiple reader and writer threads operate on the same cached data and cause repeated re-reads does not occur all the time. Our system's big-data concurrency consists mainly of many process threads simultaneously reading different data: each logging-in user reads their own friend list (different data, on different slave nodes). These requests are concurrent (and without distributed processing they would crush the server or the database), but it is rare for many users to be reading my friend list at the exact moment I am updating it, which is what causes repeated wasted re-reads.
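Here is a minimal sketch of the version-checked, lock-free read just described: a seqlock-style check over a copy-on-write snapshot. It assumes a single writer per piece of cached data (the single point model above); all names are illustrative.

```csharp
using System.Collections.Generic;
using System.Threading;

public class VersionedFriendList
{
    private long _version;                                  // bumped by the writer on each publish
    private volatile List<long> _snapshot = new List<long>();

    public List<long> Read()
    {
        while (true)
        {
            long before = Interlocked.Read(ref _version);   // compare version before reading
            List<long> data = _snapshot;                    // read the current snapshot
            long after = Interlocked.Read(ref _version);    // compare again before returning
            if (before == after)
                return data;                                // no write slipped in: consistent
            // a writer published a new version meanwhile: re-read (the "reread" above)
        }
    }

    // Copy-on-write, single writer: copy the data out, modify the copy,
    // publish it back, then bump the version. The writer never scribbles
    // over the list a reader is holding.
    public void AddFriend(long friendId)
    {
        var copy = new List<long>(_snapshot) { friendId };
        _snapshot = copy;
        Interlocked.Increment(ref _version);
    }
}
```

Strictly speaking, because each snapshot here is immutable once published, a single volatile read of the reference is already consistent; the before/after version comparison is kept to mirror the article's "compare the version again before returning to the Web layer" step, which matters when the versioned data spans more than one field.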

Let's continue with the friend list. Now my friend list has been modified and updated in the cache, by either way 1 or way 2. During that time, any other thread reading my friend list is affected: under way 1 the read waits for the write to finish, and under way 2 the read may run twice (possibly more, though that's uncommon). This handling may not be optimal, but as said before, our system mainly solves for high traffic reading and writing many different pieces of data, not one piece. Next, it's time to consider synchronizing with the database.

Well, after all that, have you noticed that once I modify and update my friend list, the data in the cache is inconsistent with the database? Obviously the data in the database is now stale and needs updating. So service process number 9 on the slave node, after updating its own cached data (my modified friend list), will "try" to update the database. Note the word "try": the request will not necessarily be satisfied immediately. In fact, the service process updates the database in batches. Think of it as a TaskContainer (task container): at a fixed interval, or once a certain number of tasks have accumulated, it performs a batched update to the database, rather than hitting the database once per request the moment the cache is updated. (Now you see how many database operations this saves!) Why can we do this? Because we already have the cache, and the cache is our protection: under the "single point model," once the cache is updated, any read goes only to the cache and never through the database, as discussed earlier in this series. Therefore database writes can be "gathered up" and processed after a delay. You will find that in this situation the operations can be merged and optimized — for instance, two write requests against the same table can be combined into one; yes, this touches the field of SQL optimization. Of course, you will also notice that the new data in the cache has not yet been persisted: if the slave node machine goes down at this point, that data is lost! So the delay must not be too long; usually 10 seconds is enough. That is, every 10 seconds, the service process gathers up the requests whose cache has been updated but whose DB write hasn't happened, and processes them in one batch. If you are more worried (concern for data safety is not unfounded, though understand that any real server crash will always lose some data, more or less), the interval can be made shorter, at the cost of more DB pressure; again, weigh the actual trade-offs. At this point my friend-list update request is fully done, even though the page may have shown the result tens of seconds earlier (via the data returned from the cache).
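Below is a minimal sketch of such a TaskContainer-style write-behind queue, assuming the 10-second flush interval suggested above. The FlushToDatabase body is a placeholder — the article shows no real SQL — and this is where the merging of statements against the same table would happen.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class WriteBehindTaskContainer
{
    private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
    private readonly Timer _flushTimer;
    private const int MaxBatch = 100;

    public WriteBehindTaskContainer()
    {
        // Flush every 10 seconds: the delay the article suggests is "usually enough".
        _flushTimer = new Timer(_ => Flush(), null,
                                TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(10));
    }

    // Called right after the cache is updated; the DB write is deferred.
    public void Enqueue(string updateSql)
    {
        _pending.Enqueue(updateSql);
        if (_pending.Count >= MaxBatch) Flush(); // or flush early under load
    }

    private void Flush()
    {
        var batch = new List<string>();
        while (batch.Count < MaxBatch && _pending.TryDequeue(out var sql))
            batch.Add(sql);
        if (batch.Count > 0)
            FlushToDatabase(batch);
    }

    private void FlushToDatabase(List<string> batch)
    {
        // Placeholder: here the statements could be merged (e.g. updates to the
        // same table combined into one) and executed in a single transaction.
        Console.WriteLine($"Flushing {batch.Count} update(s) to the database");
    }
}
```

The trade-off the article names is visible in the constants: a shorter timer interval shrinks the window of unpersisted cache data but raises DB pressure.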

So, reading and writing have both been covered — are there any other issues? Plenty. What we have discussed so far is the "single point model."

That is, each piece of data in the database has only one copy of cached data corresponding to it. In practice, however, the "multipoint model" is indispensable — a more powerful way of working, with even greater challenges of synchronization and consistency: each piece of data may have multiple caches corresponding to it. In other words, service processes on multiple slave nodes each hold a cache of the same data in the DB. How is this synchronized? Our approach is called the "eventual consistency" principle; you can Google plenty about the eventual-consistency model, and in particular how GoogleFS solves multi-point consistency this way. Put plainly: the same piece of data, at the same moment, may be modified by only one node. Suppose my current business uses the multipoint model — say my friend list has multiple caches (it doesn't really, but suppose). When I modify and update my friend list, only the cache in the slave-node service process my request was assigned to gets modified; the caches in the other service processes and slave nodes, and the database, must then be brought up to date. How? This is where the notification service mentioned last time comes in. This module does not appear in the architecture diagram, yet it is one of the most core services in the system (of course, there are many such, hehe). When a piece of data follows the multipoint model and one service process modifies it, that process submits a notification to the master node, informing the other service processes and slave nodes that their caches have expired and need updating. The update itself may be done by the modifying service process sending the new cached data to the other processes and nodes, or the other nodes may wait for the DB to be updated and then refresh themselves from the DB, thereby indirectly guaranteeing multipoint consistency. Wait — didn't we just say the DB is usually updated in 10-second batches? That is reasonable under the single point model; under the multipoint model the database is still batch-updated, but the delay is usually tiny — think of it as an immediate batched DB update, followed by a notification telling every node holding this data to refresh its cache. So the multipoint model can give rise to many problems. Why use it at all? Suppose I have this business: big-data, high-concurrency reads of one particular piece of data — very, very many reads, but very few writes, such as a hot scandal photo that many requests from different users want cached. The multipoint model is the perfect choice: many of my slave nodes hold its cache and rarely update it, so the performance gain from the multipoint model is maximized.
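The article names the notification service but shows no API, so the sketch below is purely illustrative. It captures the two refresh strategies just described: either the modifying process pushes the fresh data along with the notification, or the receiving node marks its entry stale and re-pulls from the (by then updated) database.

```csharp
using System;

public class CacheInvalidation
{
    public string Key { get; set; }      // e.g. "friendlist:1234567899" (hypothetical key scheme)
    public long NewVersion { get; set; } // version after the originating write
    public byte[] NewData { get; set; }  // optional: pushed data; null means "re-pull"
}

public interface ICacheNode
{
    void OnNotification(CacheInvalidation note);
}

public class SlaveNodeCache : ICacheNode
{
    public void OnNotification(CacheInvalidation note)
    {
        if (note.NewData != null)
        {
            // Strategy 1: the modifying process pushed the fresh data with the note.
            StoreLocally(note.Key, note.NewVersion, note.NewData);
        }
        else
        {
            // Strategy 2: mark the entry stale; the next read pulls from the DB,
            // indirectly guaranteeing multipoint (eventual) consistency.
            MarkStale(note.Key);
        }
    }

    private void StoreLocally(string key, long version, byte[] data)
        => Console.WriteLine($"Updated {key} to version {version}");

    private void MarkStale(string key)
        => Console.WriteLine($"Marked {key} stale; will re-pull from DB on next read");
}
```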

There are a few remaining issues that must be mentioned: node failure, and periodic cache refresh. First, failure. Clearly, the cache lives in the memory of a service process on a slave node; once the node goes down, the cache is lost, and the "cache rebuilding" I mentioned earlier is needed. This is usually initiated by the master node, which is responsible for monitoring the health of each slave node (and of other master nodes as well). If a slave node is found to be down — no "heartbeat"; if you know some Hadoop, you'll recognize that it works the same way — then after the slave node is running again (perhaps after a reboot), the master node notifies it to rebuild the cache for the data it is responsible for. From where? From the database, of course, and this takes time (once we have millions of users, rebuilding the cache a single slave node is responsible for typically takes several minutes). So who provides the service between the outage and the completion of the rebuild? The backup node steps up. In fact, once backup nodes are considered, even the "single point model" is effectively a multipoint model — except that a backup node does not keep its cache current at all times; it refreshes periodically, or when it receives a notification. When the master node discovers a slave node is down, it immediately points traffic at a backup node holding the same data, so the cache service is uninterrupted. Is the backup node's cached data the latest? Probably not. Although it is common to notify backup nodes to refresh after each batched database update, inconsistency is still possible. The backup node therefore works in a special way: for every request, the cache is pulled. How? Think of the version-management analogy again: before each read, compare versions and re-read if stale; writes work the same way. Consequently a backup node's performance is not high, and it usually backs up the data of several slave nodes, so it may be overwhelmed — another reason to restore the failed slave node as quickly as possible and hand the work back to it.
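A rough sketch of the heartbeat-based failure detection and backup promotion just described, with an assumed timeout value and hypothetical names throughout:

```csharp
using System;
using System.Collections.Concurrent;

public class MasterHeartbeatMonitor
{
    private readonly ConcurrentDictionary<string, DateTime> _lastSeen =
        new ConcurrentDictionary<string, DateTime>();
    private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(30); // assumed value

    // Each slave node reports in periodically, as in Hadoop's heartbeat scheme.
    public void OnHeartbeat(string slaveId) => _lastSeen[slaveId] = DateTime.UtcNow;

    // Run periodically by the master node.
    public void CheckSlaves()
    {
        foreach (var kv in _lastSeen)
        {
            if (DateTime.UtcNow - kv.Value > Timeout)
            {
                // No heartbeat: point traffic at the backup node immediately,
                // then have the restarted slave rebuild its cache from the DB.
                PromoteBackup(kv.Key);
                ScheduleCacheRebuild(kv.Key);
            }
        }
    }

    private void PromoteBackup(string slaveId)
        => Console.WriteLine($"Routing traffic for {slaveId} to its backup node");

    private void ScheduleCacheRebuild(string slaveId)
        => Console.WriteLine($"Rebuild of {slaveId}'s cache from the database scheduled");
}
```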

Next, periodic cache refreshes. Typically all slave nodes are scheduled to refresh their caches at some point in the dead of night (say 02:00~06:00), when users are scarce, to keep the data as synchronized and consistent as possible, so that the next morning, when requests flood in, the cache returns the latest data. Backup nodes might instead refresh their caches every 30 minutes. Eh? Didn't we just say a backup node pulls — compares versions and refreshes — on every single read before returning? Yes, so why refresh periodically as well? The answer is simple: if most of the cache is already the latest data, then most reads only compare versions without any actual refresh, which costs almost nothing. So the periodic refresh greatly helps when a slave node goes down and the backup node has to take over.

Finally, a word on push mode: every time a piece of data changes, all of its caches are forcibly updated. This costs performance but gives better real-time guarantees. What we usually use is pull mode: whether a cache is refreshed on a schedule or after receiving a notification (and yes, receiving the notification is itself a "push" of sorts), the actual refresh is a pull — the node pulls the new data over, and that's that. Real systems use both; as always, look at the requirements and then decide how to handle it.

Well, this summary is finally finished. The last post received a lot of encouragement and support from many friends — thank you all! I believe many of you have also seen the system's many shortcomings and bottlenecks. Indeed, it is not a perfect system, and it still needs to evolve. I wrote this article hoping to exchange ideas with everyone and improve together. 2013 is almost upon us; I hope to develop further myself, and I hope all of you can go even further!

Original source: http://www.cnblogs.com/ccdev/archive/2012/12/29/2837754.html
