For example, my first request returns the JSON:
{"n": 1}
and my 100th request returns:
{"n": 100}
The traditional approach of writing to the database and then reading the value back doesn't seem reliable under high concurrency. What should I do? This is meant to be the question in its most distilled form.
Reply content:
The simplest way is to create a MySQL table with an auto-increment primary key ID, insert a record for each request, and then read the record back; the ID you get is the value you want.
You can then build high-concurrency features on top of the ID value. For example, a flash sale ("seckill") could treat IDs that are less than 300 and divisible by 6 as successful purchases, and a lottery could treat IDs divisible by 100 as winners (a 1% probability), and so on.
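Those ID-based rules are just modular arithmetic. A minimal sketch (the thresholds 300, 6, and 100 are the examples from the answer above; the class and method names are illustrative):

```java
public class IdRules {
    // "Flash sale": a request succeeds if its ID is below 300 and divisible by 6.
    static boolean seckillSuccess(long id) {
        return id < 300 && id % 6 == 0;
    }

    // "Lottery": IDs divisible by 100 win, i.e. a 1% hit rate.
    static boolean lotteryWin(long id) {
        return id % 100 == 0;
    }

    public static void main(String[] args) {
        System.out.println(seckillSuccess(294)); // true: 294 < 300 and 294 % 6 == 0
        System.out.println(seckillSuccess(300)); // false: not below 300
        System.out.println(lotteryWin(200));     // true
        System.out.println(lotteryWin(201));     // false
    }
}
```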
If you implemented this yourself, it would amount to a single-threaded loop handling socket requests and maintaining a global variable; using a ready-made MySQL instance is simply more convenient and reliable.
If you're using Java, a single global AtomicLong will meet your needs: getAndIncrement() is an atomic operation (the underlying value is declared volatile inside AtomicLong). Other languages have similar primitives.
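A minimal sketch of that approach (assuming one counter shared by all request-handling threads; the class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

public class Counter {
    // One global counter shared by all request-handling threads.
    // AtomicLong's internal value is volatile, so no extra synchronization is needed.
    private static final AtomicLong COUNTER = new AtomicLong(0);

    // Each request gets the next value; the increment is atomic,
    // so no two requests can ever observe the same n.
    public static long next() {
        return COUNTER.incrementAndGet(); // first call returns 1
    }

    public static void main(String[] args) {
        System.out.println("{\"n\": " + next() + "}"); // {"n": 1}
        System.out.println("{\"n\": " + next() + "}"); // {"n": 2}
    }
}
```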
In Redis, use the atomic increment command INCR on the ID key: Redis is single-threaded and in-memory, so each increment-by-one is both safe and extremely fast.
In pure Java, you can make the counter object a singleton and use a filter to intercept every request and increment the counter by 1 (this needs synchronization). I'm not sure what you mean by "database" here, though: in {n: 100}, is n taken from a database?
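A sketch of that pure-Java approach: the servlet Filter wiring is omitted, and only the singleton counter with a synchronized increment is shown (class and method names are illustrative):

```java
public class RequestCounter {
    // Singleton instance shared by all threads (e.g. called from a servlet Filter).
    private static final RequestCounter INSTANCE = new RequestCounter();
    private long n = 0;

    private RequestCounter() {}

    public static RequestCounter getInstance() {
        return INSTANCE;
    }

    // synchronized makes the read-modify-write atomic across threads.
    public synchronized long increment() {
        return ++n;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate 100 concurrent "requests".
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> getInstance().increment());
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // All 100 increments were counted, so the next value is 101.
        System.out.println(getInstance().increment());
    }
}
```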
In fact, what you need is a resident in-memory queue that processes requests in the order they arrive.
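That "resident in-memory queue" idea can be sketched as a single consumer thread draining a blocking queue, so requests are numbered strictly in arrival order without any locking around the counter (all names here are illustrative, not from the original answer):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

public class QueueCounter {
    private final BlockingQueue<CompletableFuture<Long>> queue =
            new ArrayBlockingQueue<>(1024);
    private long n = 0;

    public QueueCounter() {
        // One resident worker thread processes queued requests in order,
        // so only it ever touches the counter.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().complete(++n);
                }
            } catch (InterruptedException ignored) {
                // shut down quietly
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called by request handlers; blocks until the worker assigns a number.
    public long nextN() throws InterruptedException {
        CompletableFuture<Long> f = new CompletableFuture<>();
        queue.put(f);
        return f.join();
    }

    public static void main(String[] args) throws InterruptedException {
        QueueCounter c = new QueueCounter();
        System.out.println(c.nextN()); // 1
        System.out.println(c.nextN()); // 2
    }
}
```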
On a single machine you can try reading and writing SQLite on the Linux in-memory file system (tmpfs, mounted at /dev/shm).
File reads don't go over the network, and you don't need to implement memory residency, locking, auto-increment, or unique constraints yourself.
<?php
header('Content-Type: text/plain; charset=utf-8');
// sudo mkdir -m 777 /dev/shm/app
$file = '/dev/shm/app/data.db3';
$ddl = "BEGIN;
CREATE TABLE IF NOT EXISTS queue (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER
);
CREATE UNIQUE INDEX IF NOT EXISTS queue_user_id_idx ON queue (user_id);
COMMIT;";
if (!file_exists($file)) {
    // Under multi-core, multi-process concurrency several processes may reach
    // this branch, which is why the DDL uses IF NOT EXISTS throughout.
    $db = new PDO('sqlite:'.$file);
    $db->exec($ddl); // pdo_sqlite's query() and prepare() can't run multiple statements at once
} else {
    $db = new PDO('sqlite:'.$file);
}
$stmt = $db->prepare('INSERT INTO queue (user_id) VALUES (?)');
$stmt->execute(array(time())); // replace time() with your user ID
echo $stmt->rowCount()."\n";   // rows affected by the insert; 0 if it failed
echo $db->lastInsertId();      // auto-increment ID; 0 if the insert failed
// Start a test server and benchmark:
// php -S 127.0.0.1:8080 -t /home/eechen/www >/dev/null 2>&1 &
// ab -c100 -n1000 http://127.0.0.1:8080/
The simplest option is Redis's atomic increment (INCR): efficient and simple. On a single machine you could also consider AtomicLong (though the count is lost if the process goes down and restarts).