=====================================
There are many flash-sale ("seckill") solutions on the internet: splitting data to resolve hotspots, using READPAST to work around lock contention, queuing at the application layer to limit concurrency, and so on. Each has its advantages and disadvantages, which only proves the famous saying: all roads lead to Rome.
=====================================
Today I test a flash sale with SQL Server 2014 memory-optimized tables. Memory-optimized tables use row versioning to resolve the lock requests and blocking caused by high concurrency, and hash indexes to deal with "hot" data pages, eliminating PAGELATCH waits. Native compilation does not show an obvious benefit in this particular test, but it is better than nothing.
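To make the idea concrete, here is a minimal sketch of what the two memory-optimized tables could look like (the object names, the slot column, and the bucket counts are illustrative assumptions, not the actual test schema). The hash primary keys replace B-tree pages, which is what removes the PAGELATCH hotspot:

-- Illustrative only; requires a database that already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SecKillStock            -- inventory, split into multiple slot rows
(
    ProductId int NOT NULL,
    SlotId    int NOT NULL,              -- slot number; splitting spreads the hot updates
    Remaining int NOT NULL,
    CONSTRAINT PK_SecKillStock PRIMARY KEY NONCLUSTERED
        HASH (ProductId, SlotId) WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

CREATE TABLE dbo.SecKillOrder            -- successful flash-sale orders
(
    OrderId    bigint IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    ProductId  int NOT NULL,
    UserId     int NOT NULL,
    CreateTime datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);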
Because the test code was adapted from someone else's code, I won't share it here. The general implementation idea is:
1. Use a natively compiled stored procedure to encapsulate the flash-sale logic (update the inventory and insert a record into the successful-orders table); a sketch of steps 1 and 2 follows this list
2. Wrap a retry layer around step 1; for the basics of retry logic, please look them up yourself
3. Split the flash-sale product's inventory into multiple records, so that a single record does not become a hotspot
4. Make the flash-sale order table a memory-optimized table, to avoid PAGELATCH waits when inserting records
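A minimal sketch of steps 1 and 2, assuming the tables above (the procedure names, the slot-picking logic, and the retried error numbers are illustrative assumptions, not the actual test code):

-- Step 1: natively compiled procedure - decrement one inventory slot and record the order.
CREATE PROCEDURE dbo.usp_SecKill
    @ProductId int, @UserId int, @SlotId int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    DECLARE @Remaining int = 0;

    SELECT @Remaining = Remaining
    FROM   dbo.SecKillStock
    WHERE  ProductId = @ProductId AND SlotId = @SlotId;

    IF @Remaining > 0
    BEGIN
        UPDATE dbo.SecKillStock
        SET    Remaining = Remaining - 1
        WHERE  ProductId = @ProductId AND SlotId = @SlotId;

        INSERT dbo.SecKillOrder (ProductId, UserId, CreateTime)
        VALUES (@ProductId, @UserId, SYSDATETIME());
    END
END;
GO

-- Step 2: interpreted wrapper - retries when optimistic concurrency aborts the transaction.
CREATE PROCEDURE dbo.usp_SecKill_Retry
    @ProductId int, @UserId int
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Attempt int = 0, @SlotId int;

    WHILE @Attempt < 5
    BEGIN
        BEGIN TRY
            -- pick a slot pseudo-randomly so sessions spread across the split rows
            SET @SlotId = ABS(CHECKSUM(NEWID())) % 300;   -- 300 = number of split records
            EXEC dbo.usp_SecKill @ProductId, @UserId, @SlotId;
            RETURN;
        END TRY
        BEGIN CATCH
            -- 41302/41305/41325/41301: In-Memory OLTP conflict/validation errors -> retry
            IF ERROR_NUMBER() NOT IN (41302, 41305, 41325, 41301)
                THROW;
            SET @Attempt += 1;
        END CATCH
    END
END;
GO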
=========================================
Test environment
Windows version: Windows Server 2012 Enterprise Edition
Database version: SQL SERVER 2014 Enterprise Edition
Server CPU: 4 physical CPUs, 64 logical CPUs
Server memory: 128GB
Simulated flash sale of 300,000 items, with 1,200 threads generating concurrent requests
Test results
Number of records | Elapsed time (ms) | Items killed per second
300               | 2786              | 107681.26
100               | 3620              | 82872.93
50                | 4363              | 68760.03
20                | 5240              | 57251.91
10                | 7690              | 39011.70
5                 | 12266             | 24457.85
2                 | 31186             | 9619.70
1                 | 69770             | 4299.84
The above test results are for reference only!
--=============================================
Although memory-optimized tables have no lock blocking, when two transactions update the same row, the second transaction can only succeed after the first has committed. Commit speed is limited by how fast the log can be written, so when only one record (or a small number of records) is being updated, the disk write latency (Avg. Disk sec/Write) becomes critical. On this test server, Avg. Disk sec/Write averaged between 0.05 ms and 0.09 ms, so in theory the maximum number of updates per second against a single record is roughly 10,000 to 20,000, which is exactly why the inventory is split into multiple records.
--=============================================
The bonus at the end is, as usual, a girl picture.
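As a back-of-the-envelope check of that estimate (added here for illustration, not part of the original test), the per-record ceiling is simply the reciprocal of the log-flush latency:

-- Each commit against the same row must wait for the previous commit's log flush,
-- so updates/sec to one record <= 1 / (Avg. Disk sec/Write).
SELECT CAST(1.0 / 0.00005 AS int) AS updates_per_sec_at_0_05ms,  -- = 20000
       CAST(1.0 / 0.00009 AS int) AS updates_per_sec_at_0_09ms;  -- ~ 11111

That gives roughly 11,000 to 20,000 updates per second, consistent with the 10,000 to 20,000 figure above.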