Implementing a High-Performance, High-Concurrency Counter in MySQL (e.g., Article View Counts)
Many projects implement counters carelessly. For example, to record the view count of a website article, we design the table like this: article_id, article_name, article_content, article_author, article_view, ... and store the article's page views in article_view. At first glance this looks fine, and it is what we do for a small site such as this blog. Does a small blog really face concurrency problems? The answer is obvious: it gets few visits per day, and traffic is unlikely to grow very large.
Now to the point. In an article or news project, rendering one page requires many reads (the article record above, the category name, comments, popular articles, tags, etc.) as well as a write (updating the view count). Putting the article body and the counter in the same table is convenient for development, but it puts too much pressure on the database (otherwise, why would large projects need to shard their databases and tables?).
So can we split this into two tables? Store the article details in one table and the counter in another.
The code is as follows:
CREATE TABLE `article_view` (
  `article_id` int(11) NOT NULL,
  `view` int(11) NOT NULL,
  PRIMARY KEY (`article_id`)
) ENGINE=InnoDB;
This takes pressure off the article table, but every update still targets the same counter row, so concurrent requests contend for that row's lock and can only run serially, never in parallel. Under high concurrency, the waits become long.
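For context, each page view under this design issues an update like the following (the article_id value 123 is just an illustration):

```sql
-- Every request for the same article updates the same row,
-- so all of them queue behind that one row lock.
UPDATE `article_view` SET `view` = `view` + 1 WHERE `article_id` = 123;
```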
A better approach is to keep each article's counter in multiple rows rather than one, say 100 rows. Each update goes to a randomly chosen row, and the article's total view count is the sum of all its rows.
The code is as follows:
CREATE TABLE `article_view` (
  `article_id` int(11) NOT NULL,
  `pond` tinyint(4) NOT NULL COMMENT 'random pool slot',
  `view` int(11) NOT NULL,
  PRIMARY KEY (`article_id`, `pond`)
) ENGINE=InnoDB;
A site with little traffic may not even need 100 pool slots. On each visit, pick a random number in 0-99 as the pond: if that row already exists, increment view; otherwise insert a row with view = 1. Use ON DUPLICATE KEY UPDATE for this; otherwise the application must SELECT first and then choose between INSERT and UPDATE, which is both slower and race-prone.
The code is as follows:
INSERT INTO `article_view` (`article_id`, `pond`, `view`) VALUES (20140901, FLOOR(RAND() * 100), 1) ON DUPLICATE KEY UPDATE `view` = `view` + 1;
To get the total view count for a given article:
The code is as follows:
SELECT SUM(`view`) FROM `article_view` WHERE `article_id` = 20140901;
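The whole scheme can be sketched end to end. The snippet below is a minimal simulation using SQLite, whose `ON CONFLICT ... DO UPDATE` clause plays the same role as MySQL's `INSERT ... ON DUPLICATE KEY UPDATE`; the table and column names follow the article, while the helper function names and the 1000-hit loop are purely illustrative.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE article_view (
        article_id INTEGER NOT NULL,
        pond       INTEGER NOT NULL,
        view       INTEGER NOT NULL,
        PRIMARY KEY (article_id, pond)
    )
""")

def record_hit(article_id, pools=100):
    """Increment the counter on one randomly chosen pool row (upsert)."""
    pond = random.randrange(pools)
    conn.execute(
        "INSERT INTO article_view (article_id, pond, view) VALUES (?, ?, 1) "
        "ON CONFLICT(article_id, pond) DO UPDATE SET view = view + 1",
        (article_id, pond),
    )

def total_views(article_id):
    """Sum all pool rows to get the article's total view count."""
    row = conn.execute(
        "SELECT COALESCE(SUM(view), 0) FROM article_view WHERE article_id = ?",
        (article_id,),
    ).fetchone()
    return row[0]

for _ in range(1000):
    record_hit(20140901)
print(total_views(20140901))  # 1000: hits are spread over many rows, the sum is exact
```

Because each hit touches a random one of up to 100 rows, concurrent writers rarely collide on the same row lock, which is exactly the contention the single-row design suffers from.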
PS: Everything is a double-edged sword; we usually sacrifice one thing to gain another. A table that is mostly read should be optimized for reads, and a table that is mostly written should be optimized for writes; it is all trade-offs. And the cost is not only performance on the other side: splitting the counter out also raises development cost, code complexity, and maintenance burden. So faster is not automatically better; you have to find the balance point.