Explaining high-concurrency synchronization under large data volumes

Source: Internet
Author: User

Once a web site grows large, we have to consider concurrent access. Concurrency is a problem that gives most programmers a headache, but since we cannot escape it, we might as well face it calmly. Today let us study common concurrency and synchronization issues.

To better understand concurrency and synchronization, we first need to understand two important concepts: synchronous and asynchronous.

1. Differences and connections between synchronous and asynchronous

So-called synchronization can be understood as follows: after calling a function or method, the program blocks and waits for the system to return a value or message; only after receiving that return value or message does it execute the next instruction.

Asynchronous execution means that after calling a function or method, the program does not block waiting for the return value or message; it simply delegates an asynchronous procedure to the system, and when the system receives the return value or message, it automatically triggers the delegated procedure, thus completing the whole flow.

To some extent, synchronization can be seen as single-threaded: after the thread requests a method, it will not move on until the method replies (a one-track mind).

Asynchrony, to some extent, can be seen as multithreaded (obviously, a single thread alone can hardly be called asynchronous): after requesting a method, the caller continues with other work regardless.

Synchronous: finish one thing before starting the next.
Asynchronous: while doing one thing, freely do other things at the same time.

For example: eating and talking can only be done one at a time, because we have only one mouth.
But eating while listening to music is asynchronous, because listening to music does not stop us from eating.
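The eating-while-listening analogy maps directly onto asynchronous APIs. Here is a minimal Java sketch using CompletableFuture (the class name and messages are illustrative, not from the original):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // Asynchronous: delegate "listening to music" to another thread...
        CompletableFuture<String> music =
                CompletableFuture.supplyAsync(() -> "music finished");

        // ...while the current thread keeps "eating" without blocking.
        System.out.println("eating...");

        // join() blocks only at the point where we actually need the result.
        System.out.println(music.join());
    }
}
```

The key point is that `supplyAsync` hands the work to another thread, so the caller is never blocked until it explicitly asks for the result.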

For Java programmers, the most familiar synchronization construct is the synchronized keyword.
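A minimal sketch of what synchronized gives us (the counter example is illustrative): only one thread at a time may execute a synchronized method on the same object, so a read-modify-write stays consistent.

```java
public class Counter {
    private int count = 0;

    // Only one thread at a time may run this method on a given instance,
    // so the read-modify-write of count is atomic with respect to other callers.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000 with synchronization; less without it
    }
}
```

Without the synchronized modifier, the two threads could interleave their `count++` operations and lose updates.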

2. Analysis of common concurrency and synchronization cases

Case One: a ticket-booking system. There is only one ticket left on a flight, and suppose 10,000 people open your website at the same time to book it. How do you solve the concurrency problem? (This extends to the concurrent read/write issues that any high-concurrency site must consider.)

First problem: 10,000 people visit, and before the ticket is sold, every one of them must be able to see that there is one ticket; it cannot happen that one person sees the ticket while others cannot. Who actually grabs it then comes down to each person's "luck" (network speed, and so on).

Second problem: the concurrency itself. If 10,000 people click "buy" at the same time, who actually gets the deal? There is only one ticket in total.

First of all, we naturally think of the usual concurrency mechanisms: locks and synchronization.

Synchronization mostly refers to the application level: when multiple threads come in, only one may access the resource at a time; in Java this means the synchronized keyword. Locks likewise exist at two levels: one is the object lock discussed in Java, used for thread synchronization; the other is the database lock. In a distributed system, obviously only a database-side lock can do the job.

Suppose we adopt the synchronization mechanism or a physical database lock: how do we still guarantee that all 10,000 people can see the ticket? We would clearly sacrifice performance, which is unacceptable for a high-concurrency site. Hibernate gives us another pair of concepts: optimistic locking and pessimistic locking (the latter being the traditional physical lock). We can use optimistic locking to solve this problem. Optimistic locking means solving the concurrency problem through business-level control, without locking the table: it preserves readability of the data for concurrent users while guaranteeing exclusivity on write, thereby avoiding the dirty data that concurrency would otherwise cause.

How to implement optimistic locking in Hibernate:

Premise: add a redundant field to the existing table, a version number of type long.

Principle: 1) a commit is allowed only when the current version number equals the version number in the database table; 2) after a successful commit, the version number is incremented by 1.

The implementation is simple: add the attribute optimistic-lock="version" in the OR mapping. Here is a sample fragment (reconstructed from the garbled original; entity and column names are illustrative):

    <class name="Stock" table="T_STOCK" schema="STOCK" optimistic-lock="version">
        <version name="version" column="VERSION"/>
        ...
    </class>
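The same version-check principle can be sketched outside Hibernate. The following in-memory model (class and method names are made up for illustration) mimics what the database does with `UPDATE t_ticket SET sold = 1, version = version + 1 WHERE id = ? AND version = ?`, where zero rows updated means another buyer committed first:

```java
public class OptimisticTicket {
    // In-memory stand-in for a database row with a version column.
    private long version = 0;
    private boolean sold = false;

    // Read the version as part of displaying the ticket to a buyer.
    public synchronized long readVersion() {
        return version;
    }

    // The commit succeeds only if the version is still the one the buyer read;
    // a stale version means someone else committed first, so re-read and retry.
    public synchronized boolean sell(long expectedVersion) {
        if (sold || version != expectedVersion) {
            return false;
        }
        sold = true;
        version++;
        return true;
    }
}
```

Note that no reader is ever blocked: all 10,000 buyers can see the ticket, and only the first commit with a matching version wins.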

Case Two: a stock-trading system or banking system with a very large data volume. How do you think about it?

First, consider market quotes in a stock-exchange system: a quote record arrives every few seconds. Assuming one quote every 3 seconds, a single day produces roughly (number of stocks) × 20 × 60 × 6 records. How many records does this table hold after a month? In Oracle, query performance degrades noticeably once a single table exceeds about 1,000,000 rows, so how do we guarantee system performance?

Another example: China Mobile has billions of users. How should its tables be designed? Can everything live in one table?

Therefore, systems with large data volumes must consider splitting tables (different table names, but exactly the same structure). Several common approaches, depending on the situation:

1) Split by business key. For a table keyed by mobile-phone number, for example, numbers beginning with 130 can go into one table, numbers beginning with 131 into another, and so on.

2) Use Oracle's table-partitioning mechanism to partition the table.

3) For a trading system, we can split along the timeline: one table for the current day's data and another for historical data. Reporting and querying on the historical data then never affects the day's trading.

Of course, after the table is split, our application must adapt accordingly: a simple OR mapping may have to change, and part of the business logic may move into stored procedures, and so on.
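The phone-prefix scheme in 1) amounts to a routing function in the application layer. A minimal sketch (table naming is hypothetical, not from the original):

```java
public class TableRouter {
    // Maps a mobile number to the physical table holding its row,
    // e.g. "13012345678" -> "t_user_130" (the naming convention is illustrative).
    public static String tableFor(String phoneNumber) {
        if (phoneNumber == null || phoneNumber.length() < 3) {
            throw new IllegalArgumentException("invalid phone number");
        }
        String prefix = phoneNumber.substring(0, 3);
        return "t_user_" + prefix;
    }
}
```

Every query for a given user is then issued against `tableFor(number)` instead of a single giant table; the data-access layer (or stored procedure) is the natural place to hide this routing.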

Case Three: in addition, we have to consider caching.

Caching here does not only mean Hibernate (Hibernate itself provides a second-level cache). The cache meant here is independent of the application: it still reads from memory, and if we can reduce frequent database access, the system benefits enormously. For example, in an e-commerce product search, if the products for some keyword are searched often, that product list can be kept in a cache (in memory) so that not every request hits the database, and performance increases greatly.

A simple cache can be understood as a HashMap: data accessed for the first time is read from the database and stored in the map as a key/value pair; subsequent accesses read from the map instead of the database. Professionally, there are standalone caching frameworks such as memcached, which can be deployed independently as a cache server.
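The HashMap idea above can be sketched in a few lines. This is a minimal read-through cache (class name and loader abstraction are illustrative; the loader stands in for the database query), not a production design: it has no eviction, expiry, or distribution:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class SimpleCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final Function<K, V> loader; // stands in for the database query

    public SimpleCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // First access for a key invokes the loader (the "database");
    // every later access is served from the in-memory map.
    public synchronized V get(K key) {
        return map.computeIfAbsent(key, loader);
    }
}
```

A framework such as memcached plays the same role, but as a separate server shared by many application instances, with memory limits and eviction policies that a bare HashMap lacks.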

