Deadlock, concurrency, parallelism, and some questions about flash sale systems

Source: Internet
Author: User
First, how is a deadlock generated? Many answers on the Internet are based on the following four conditions for deadlock:
  1. Mutual exclusion condition: a resource can only be used by one process at a time.

  2. Hold and wait condition: when a process is blocked while requesting resources, it does not release the resources it already holds.

  3. No preemption condition: resources a process has obtained cannot be forcibly taken away before it is finished with them.

  4. Circular wait condition: several processes form a circular chain in which each waits for a resource held by the next.

My question is: after reading this introduction, I concluded that concurrency by itself produces deadlocks. My thinking is that while the CPU is working on one process under concurrency, the other processes have to wait for the CPU to switch back to them; I take that waiting to be blocking, and blocking to be deadlock. I don't know whether that is correct!

In addition, many people say that a flash sale system risks overselling under high concurrency, which I cannot understand:

As I understand it, suppose 1000 items are up for the flash sale and 100 requests hit the CPU at roughly the same time. If the CPU is single-core, isn't concurrency just the CPU constantly switching between those requests? In that case the CPU handles the stock-decrement operations one by one, and I can check the remaining stock during each operation. How could more items be sold than exist?! So I need some help from the experts here.
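Roughly, the per-request flow I have in mind is something like this (just a simplified sketch; the `product` and `order` tables and their columns are only for illustration):

-- 1. Check the remaining stock for this item
SELECT `num` FROM `product` WHERE `pid` = 1;
-- 2. If the application sees num > 0, deduct one and record the order
UPDATE `product` SET `num` = `num` - 1 WHERE `pid` = 1;
INSERT INTO `order` (`uid`, `pid`, `status`) VALUES (2, 1, 1);

Since a single core only runs one thing at a time, I assumed the check in step 1 would always reflect the deductions already made.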

There is also parallelism.

Assume that four requests hit the CPU at roughly the same time and the CPU has four cores. When the four cores simultaneously process the four flash-sale requests and each tries to decrease the item count, this newbie's take is: the product record gets locked by all of them at the same time, no core can release it, and the result is a deadlock! But the questions are:

  1. I don't know enough about the underlying layer. I don't know whether each of the four requests is assigned to its own core, whether they all end up on one core, or whether the assignment is essentially random (for example, two requests on core 1, one request on core 2, one request on core 3, and none on core 4). But in any case, since it is deadlocked, the items cannot be over-sold anyway!

  2. The second question: if the number of requests is greater than four, for example another 100 requests hit the CPU at once, then we are back to concurrency rather than parallelism, and each core may take turns processing the concurrent requests;

Reply:

  1. PHP is a high-level language; you do not need to worry about the CPU. Each request is handled by its own process, and PHP itself manages the resources that process uses. The deadlock you are describing is a database deadlock, not a process deadlock.

  2. Even if 100 processes are pushed onto one CPU, the CPU will still process them one by one...

Okay.. let's get to your questions.. I spent a long time sorting out what you were actually asking.. Let me put it simply..

How is a deadlock generated?

In general, every time MySQL modifies or queries a batch of rows (under certain conditions InnoDB is not subject to this restriction), it tells the system: I am using this data, nobody else may touch it. When someone else then comes to use that data, the system tells them: so-and-so is using it, please wait. So they wait... That is a lock.
What is a deadlock, then? Some pieces of data are related, as in a flash sale: you need to deduct inventory and also write an order. So the flow might lock the inventory first, write the order, then come back and change the inventory, and finally release both...
If at the same time someone else does the opposite thing: they lock the order table first, and then go to modify the inventory table.
A deadlock can occur. Let's replay it in slow motion...

A: System! Lock the inventory table for me, I want to write to the order table.. System: OK!.. B: System! Lock the order table for me, I want to change the inventory table.. System: OK!..

A: System, I want to write the order now ~ System: Sorry, user B is using it, please wait. A: Oh ~

B: System, I want to change the inventory table now. System: Sorry, user A is using it, please wait. B: Oh ~

A few hours pass... A: What on earth is B doing.. B: What on earth is A doing..

Server: What are you two doing? Millions of users are waiting..

Understand? Code:

-- Transaction A: lock the product row, then write the order
SELECT * FROM `product` WHERE `pid` = 1 FOR UPDATE;
UPDATE `order` SET `status` = 1 WHERE `uid` = 2 AND `pid` = 1;
-- Transaction B: lock the order row, then update the product
SELECT * FROM `order` WHERE `uid` = 2 FOR UPDATE;
UPDATE `product` SET `num` = `num` + 1 WHERE `oid` = 3;

For a real-life experience, make the transaction sleep for a few seconds in the middle and fire requests from both sides; they will stay locked until the timeout.
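For example, a rough way to reproduce this by hand (assuming the same `product` and `order` tables as above) is to open two MySQL sessions and pause each transaction between its two statements:

-- Session 1: lock the product row first, then try to touch the order row
BEGIN;
SELECT * FROM `product` WHERE `pid` = 1 FOR UPDATE;
SELECT SLEEP(5);  -- give session 2 time to take its lock
UPDATE `order` SET `status` = 1 WHERE `uid` = 2 AND `pid` = 1;
COMMIT;

-- Session 2: run concurrently in another connection, in the opposite order
BEGIN;
SELECT * FROM `order` WHERE `uid` = 2 FOR UPDATE;
SELECT SLEEP(5);
UPDATE `product` SET `num` = `num` + 1 WHERE `oid` = 3;
COMMIT;

Assuming the two sessions really do end up waiting on each other's rows (which depends on the data and indexes), one of them will either be rolled back by InnoDB's deadlock detection ("Deadlock found when trying to get lock") or, if detection is disabled, wait until the lock wait timeout.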

This makes it clear how deadlocks are generated. The deadlock in my question is indeed a database deadlock.
As for deadlocks, I think they take a coincidence, but when the business logic is complex such coincidences are actually quite likely. For example, with InnoDB, suppose business logic 1 uses a transaction that operates on records in two tables, A and B. It is entirely possible that just as it finishes its operation on the record in table A and is about to operate on the record in table B, the CPU switches to a request for business logic 2, which (coincidentally) also uses a transaction but operates first on the very record in table B that logic 1 needs, and then on the same record in table A that logic 1 just touched. Logic 2 finishes its work on the table B record and is about to request the table A record, and at that point the locks held mean logic 1 cannot proceed and neither can logic 2! Is that correct?

I still don't understand why concurrency leads to overselling. If, as you said, the CPU always processes requests one by one, then I can check the current inventory while processing each request, and there should be no overselling.
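Is the situation people worry about something like this? Two requests, each running the check and the deduction as separate statements (as in my sketch above), with the CPU switching between them in the middle:

-- Stock starts at num = 1; requests A and B are two separate PHP processes / DB connections.
-- Request A: SELECT `num` FROM `product` WHERE `pid` = 1;             -- sees num = 1, check passes
-- (the CPU switches to request B before A runs its UPDATE)
-- Request B: SELECT `num` FROM `product` WHERE `pid` = 1;             -- also sees num = 1, check passes
-- Request A: UPDATE `product` SET `num` = `num` - 1 WHERE `pid` = 1;  -- num becomes 0
-- Request B: UPDATE `product` SET `num` = `num` - 1 WHERE `pid` = 1;  -- num becomes -1: oversold

If the check and the deduction really are two separate steps like this, then I can see how the count could go negative even though the CPU only runs one statement at a time.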

Also, you said that under concurrency requests are handled one at a time; I take it that happens when there is a single CPU, or when the number of requests exceeds the number of CPUs.

If there are five CPUs and only three requests arrive, and each request must operate on record 1 of table A, then it is possible that three CPUs grab record 1 of table A at the same time. Is that also a deadlock? Or, even though the CPUs reach the record in parallel, does the database only grant the lock to one of them at a time, so that even if they arrive simultaneously MySQL still makes them queue up, and it isn't a deadlock?
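What I mean by "queuing" would be something like this (the table `a` and its columns are placeholders for the table and record in my question):

-- Three sessions all issue the same statement at (nearly) the same moment:
UPDATE `a` SET `num` = `num` - 1 WHERE `id` = 1;
-- As far as I understand, InnoDB grants the row lock to one session; the other
-- two block on that same lock and run one after another as it is released,
-- so there is no cycle of waiting, just ordinary lock contention.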
