In many projects, counters are implemented rather casually. A common example is tracking article page views on a website with a table designed like this: article_id, article_name, article_content, article_author, article_view, ... where article_view records how many times the article has been viewed. At first glance there is nothing wrong with that. For a small site, such as this blog, that is exactly how it is done. Will a blog this small ever run into concurrency problems? The answer is obvious: there are not many visitors per day, and traffic is not going to grow much.
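For concreteness, here is a minimal sketch of that single-table layout. Only the column names come from the list above; the types, lengths, and defaults are assumptions.
The code is as follows:
CREATE TABLE `article` (
  `article_id` int(11) NOT NULL AUTO_INCREMENT,
  `article_name` varchar(255) NOT NULL,
  `article_content` text NOT NULL,
  `article_author` varchar(64) NOT NULL,
  -- The view counter lives right next to the article details.
  `article_view` int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (`article_id`)
) ENGINE=InnoDB;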
Back to the main topic. In a serious project, rendering an article page involves not only many reads (the article record itself, the name of its category, popular articles, comments, tags, and so on) but also a write operation (updating the view count). Keeping the article details and the counter in one table is convenient for development, but it puts too much pressure on the database (this is, after all, why large projects split their tables).
So, would it be better to use two tables? One table stores the article details, and the other stores just the counter.
The code is as follows:
CREATE TABLE `article_view` (
  `article_id` int(11) NOT NULL,
  `view` int(11) NOT NULL,
  PRIMARY KEY (`article_id`)
) ENGINE=InnoDB;
This relieves the pressure on the article table, but whenever several processes try to update the counter at the same time, they all contend for the lock on the same row, so the updates run serially, not in parallel. Under high concurrency that means long wait times.
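For reference, the per-view write under this design would be a single-row update like the one below (article_id 123 is just an example value, reused from later in this article):
The code is as follows:
-- Every view of a given article hits the same row,
-- so concurrent views of a hot article queue up on its row lock.
UPDATE `article_view` SET `view` = `view` + 1 WHERE `article_id` = 123;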
A better approach is to give each article not one counter row but several, say 100. Each view randomly updates one of those rows, and the article's total view count is the sum of all its rows.
The code is as follows:
CREATE TABLE `article_view` (
  `article_id` int(11) NOT NULL,
  `pond` tinyint(4) NOT NULL COMMENT 'pool slot, chosen at random on each update',
  `view` int(11) NOT NULL,
  PRIMARY KEY (`article_id`, `pond`)
) ENGINE=InnoDB;
For a low-traffic site, a pool of 100 is more than necessary; 3 to 5 slots are enough. On each visit, generate a random number (1-100) as the pond value; if a row with that pond already exists, update it with view+1, otherwise insert a new row with view=1. With the help of ON DUPLICATE KEY, this is a single statement; without it, the application would have to SELECT first and then decide whether to INSERT or UPDATE.
The code is as follows:
INSERT INTO `article_view` (`article_id`, `pond`, `view`)
VALUES (123, FLOOR(RAND() * 100), 1)
ON DUPLICATE KEY UPDATE `view` = `view` + 1;
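Note that FLOOR(RAND() * 100) produces an integer pond value in the range 0-99; if you use a smaller pool of, say, 5 slots, change the multiplier to match.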
To get the total view count for a given article:
The code is as follows:
SELECT SUM(`view`) FROM `article_view` WHERE `article_id` = 123;
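One small refinement, based on standard SQL semantics: if an article has no counter rows yet, SUM() returns NULL, so wrapping it in COALESCE gives a clean zero.
The code is as follows:
-- Returns 0 instead of NULL for articles with no counter rows yet.
SELECT COALESCE(SUM(`view`), 0) AS total_views
FROM `article_view`
WHERE `article_id` = 123;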
PS: Everything is a double-edged sword; we usually have to sacrifice something in exchange for speed. A read-heavy table should be optimized for reads, a write-heavy table for writes, and each case has to be weighed on its own. In speeding things up here, we sacrificed not only some write-path simplicity but also development cost: the code becomes more complex, and maintenance gets more expensive. So faster is not always better; you need to find the balance point.