For example, for the first access request, I return this JSON:
{"n": 1}
For the 100th access request, I return this JSON:
{"n": 100}
With the traditional approach of writing to a database and then querying it to build the response, correctness cannot be guaranteed under high concurrency. What should I do? This should be the most simplified form of the problem.
Reply content:
The simplest way is to create a MySQL table whose primary key is an auto-increment id, insert a record for every request, and then read that record back: the id you get is exactly the value you want.
You can then easily build high-concurrency features on top of the id value. For example, a flash-sale ("seckill") rule might treat a request as a winner when its id is less than 300 and divisible by 6; a lottery might award a prize when the id is divisible by 100 (a 1% probability).
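A minimal sketch of this insert-then-read-the-id idea in Java with JDBC. The database name, table schema, credentials, and rule checks are illustrative assumptions, not from the original answer; it presumes a table created as CREATE TABLE counter (id BIGINT AUTO_INCREMENT PRIMARY KEY, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class MysqlCounter {
        public static void main(String[] args) throws SQLException {
            // Connection details are placeholders for illustration.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://127.0.0.1:3306/demo", "root", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                     "INSERT INTO counter () VALUES ()", Statement.RETURN_GENERATED_KEYS)) {
                stmt.executeUpdate();
                try (ResultSet keys = stmt.getGeneratedKeys()) {
                    keys.next();
                    long id = keys.getLong(1);                 // the nth insert yields id == n
                    System.out.println("{\"n\": " + id + "}");
                    boolean seckill = id < 300 && id % 6 == 0; // flash-sale rule from above
                    boolean lottery = id % 100 == 0;           // 1% lottery rule from above
                }
            }
        }
    }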
If you implement it yourself, that means a single thread processing socket requests in an endless loop while maintaining a global variable; for convenience and reliability, it is better to just use a ready-made MySQL.
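A minimal sketch of that do-it-yourself variant: one thread accepts sockets in an endless loop, so the counter needs no lock. The port and the bare HTTP response are illustrative assumptions, and the request bytes are ignored for brevity.

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class SingleThreadCounterServer {
        public static void main(String[] args) throws Exception {
            long n = 0; // only this one thread touches the counter, so no lock is needed
            try (ServerSocket server = new ServerSocket(8080)) { // port is an assumption
                while (true) {
                    try (Socket client = server.accept()) {
                        n++;
                        String body = "{\"n\": " + n + "}";
                        String resp = "HTTP/1.1 200 OK\r\nContent-Length: "
                                + body.length() + "\r\n\r\n" + body;
                        client.getOutputStream().write(resp.getBytes(StandardCharsets.UTF_8));
                    }
                }
            }
        }
    }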
If it is Java, a global AtomicLong can meet your needs: getAndIncrement is an atomic operation, and the value it guards is volatile. Other languages have similar facilities.
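A minimal sketch of the AtomicLong idea (class and method names are illustrative):

    import java.util.concurrent.atomic.AtomicLong;

    public class RequestCounter {
        // One shared counter per JVM; AtomicLong keeps its value in a volatile
        // field and updates it with atomic compare-and-swap instructions.
        private static final AtomicLong COUNTER = new AtomicLong(0);

        // incrementAndGet returns the updated value, so the first caller sees 1
        // and the 100th sees 100 (getAndIncrement, named above, returns the
        // value from before the increment instead).
        public static long next() {
            return COUNTER.incrementAndGet();
        }
    }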
With Redis, use setnx to initialize the counter and incr to add 1: Redis is single-threaded, so increments are applied one at a time, and as an in-memory database it is extremely fast.
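A minimal sketch of the Redis approach, assuming the Jedis client and an illustrative key name and address:

    import redis.clients.jedis.Jedis;

    public class RedisCounter {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                jedis.setnx("request:n", "0");    // create the key once; no-op if it exists
                long n = jedis.incr("request:n"); // atomic server-side increment
                System.out.println("{\"n\": " + n + "}");
            }
        }
    }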
If it is pure Java, you can make the counter object a singleton and use a Filter to intercept every request, adding 1 to the counter each time (synchronization required). I don't understand the database part, though: in {n: 100}, does n come from the database?
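A minimal sketch of the singleton-counter-plus-Filter idea using the javax.servlet API. The class names are illustrative, and an AtomicLong supplies the synchronization the answer calls for.

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class CountingFilter implements Filter {
        // Enum singleton holding the shared counter.
        enum Counter {
            INSTANCE;
            private final AtomicLong n = new AtomicLong(0);
            long increment() { return n.incrementAndGet(); }
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            long n = Counter.INSTANCE.increment(); // every request passes through here
            req.setAttribute("n", n);              // downstream code can render {"n": n}
            chain.doFilter(req, resp);
        }

        @Override public void init(FilterConfig config) {}
        @Override public void destroy() {}
    }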
In fact, what you need is a memory-resident queue: push each request onto the queue and process them in order.
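A minimal sketch of such a memory-resident queue (names are illustrative): request threads enqueue a slot, and a single consumer thread fills in sequence numbers in arrival order.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueueCounter {
        private final BlockingQueue<CompletableFuture<Long>> queue = new LinkedBlockingQueue<>();
        private long n = 0; // only the consumer thread below ever touches this

        public QueueCounter() {
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().complete(++n); // hand out 1, 2, 3, ... in arrival order
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.setDaemon(true);
            consumer.start();
        }

        // Called from any request-handling thread.
        public long next() throws InterruptedException {
            CompletableFuture<Long> slot = new CompletableFuture<>();
            queue.put(slot);
            return slot.join();
        }
    }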
On a single machine, you can try reading and writing SQLite on the Linux in-memory file system (tmpfs), i.e. under /dev/shm.
There is then no network round-trip for reads, and memory residency, locking, auto-increment, and unique constraints all come built in.
    <?php
    // NOTE: the file path and DDL here are assumptions reconstructed from the
    // surrounding text (auto-increment id, unique user_id, file on /dev/shm).
    $file = '/dev/shm/queue.db3';
    if (!file_exists($file)) {
        $db = new PDO('sqlite:'.$file);
        $ddl = 'CREATE TABLE queue (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER UNIQUE)';
        $db->exec($ddl); // pdo_sqlite query and prepare do not support executing multiple SQL statements at a time
    } else {
        $db = new PDO('sqlite:'.$file);
    }
    $stmt = $db->prepare('INSERT INTO queue (user_id) VALUES (?)');
    $stmt->execute(array(time())); // replace time() with your user ID
    echo $stmt->rowCount()."\n";   // number of affected (changed) rows; 0 when the insert fails
    echo $db->lastInsertId();      // the inserted auto-increment ID; 0 when the insert fails
    // Start the built-in server and benchmark:
    // php -S 127.0.0.1:8080 -t /home/eechen/www >/dev/null 2>&1 &
    // ab -c100 -n1000 http://127.0.0.1:8080/
The simplest is to use a Redis zset for the auto-increment, which is efficient and simple. On a single machine, you can also consider AtomicLong (though the count is lost after a restart).