LNMP environment; the rest of the page content is already cached, so the load there is light. It's this page-view statistics feature that puts a lot of pressure on MySQL. What solutions would you suggest?
Reply content:
Who says you can't do this with MySQL? 200 writes per second is not difficult for MySQL at all. As for Redis or Memcached, persistence concerns aside, the changes to business code and to operations and deployment are not small either.
I'll give you a few suggestions that try to keep the changes to your operations and business code as small as possible.
You can start with read/write splitting, so that heavy statistical queries are not run on the same instance that handles the high-concurrency inserts. After the split, run the queries on the slave (or even on a dedicated analytics system such as Hive), and you can drop indexes on the master to improve insert performance. With this approach the business code needs almost no changes (updating the database configuration file is enough), and MySQL ops can bring the new deployment online during an off-peak window.
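A minimal sketch of what that configuration change might look like, assuming a PDO-based data layer; the file name, host names, and credentials below are placeholders:

```php
<?php
// config/database.php (hypothetical): reads (statistics queries) go to the
// slave, writes (page-view inserts) go to the master.
return [
    'write' => [
        'dsn'  => 'mysql:host=db-master.internal;dbname=stats;charset=utf8',
        'user' => 'app',
        'pass' => 'secret',
    ],
    'read' => [
        'dsn'  => 'mysql:host=db-slave.internal;dbname=stats;charset=utf8',
        'user' => 'app_ro',
        'pass' => 'secret',
    ],
];
```

The tracking endpoint opens its PDO connection from the 'write' entry and the reporting pages from the 'read' entry, so the queries in the application itself do not have to change.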
If you can accept a small amount of change to the business code (PHP), here are two suggestions:
1. Shard the data across databases and tables, so the total data in each table stays small and operations on it stay fast, especially for MyISAM tables on the slave. This also helps the queries you may run before each insert, such as checking whether an IP is already in the database and has already been counted (see the routing sketch after this list).
2. Use the HandlerSocket plugin to bypass the SQL parser and operate on the storage engine directly. If the business allows it, you can also batch rows with bulk inserts (BULK INSERT). MySQL has since shipped a similar InnoDB memcached ("InnoDB NoSQL") plugin: it speaks the memcached protocol but shares the InnoDB buffer pool, so you no longer have to worry about keeping data consistent between MySQL and memcached.
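A minimal sketch of the sharded routing from suggestion 1, assuming page views are keyed by article ID; the table names and the 16-way split are arbitrary choices for illustration:

```php
<?php
// Hypothetical routing helper: spread page-view rows across N tables so each
// table (and its indexes) stays small.
const VIEW_TABLE_COUNT = 16;

function viewTableFor(int $articleId): string
{
    // The same article always maps to the same table, so lookups such as
    // "was this IP already counted?" only ever touch one small table.
    return sprintf('page_views_%02d', $articleId % VIEW_TABLE_COUNT);
}

// Usage with PDO ($pdo is the write connection):
// $table = viewTableFor($articleId);
// $stmt  = $pdo->prepare("INSERT INTO `$table` (article_id, ip, viewed_at) VALUES (?, ?, NOW())");
// $stmt->execute([$articleId, $ip]);
```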
Don't use MySQL for this kind of thing. Just put up a Redis or MemcacheDB instance.
Logging directly to a file also works... why does it have to be MySQL?
Keep the counts in memcached and write to the database in batches, for example once a counter reaches 100 hits; when the daily statistics run, counters that have not yet reached 100 get aggregated into MySQL as well.
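A rough sketch of that idea, assuming the pecl memcached extension and a hypothetical page_views summary table:

```php
<?php
// Count hits in memcached; only touch MySQL once per 100 hits.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function recordView(Memcached $mc, PDO $pdo, int $articleId): void
{
    $key = "views:$articleId";
    $mc->add($key, 0);                    // no-op if the counter already exists
    $count = $mc->increment($key);

    // Every 100 hits, push the whole batch into MySQL with a single UPDATE.
    if ($count !== false && $count % 100 === 0) {
        $stmt = $pdo->prepare(
            'UPDATE page_views SET views = views + 100 WHERE article_id = ?'
        );
        $stmt->execute([$articleId]);
    }
}
```

If memcached restarts, up to 99 not-yet-flushed hits per article are lost; for page-view statistics that is usually an acceptable trade-off.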
You can push the hits into a queue and insert them into the database slowly from there.
Count the hits in a log, mark them, and write them into MySQL later.
Scenario one: write directly to a file and run a daily script to total it up (with very high concurrency the file IO may not hold up).
Scenario two: log to a write queue, have a back-end worker consume the queue and write to MySQL, and run a daily script to total it up (a sketch follows this list).
Scenario three: use an open-source log collection service.
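A sketch of scenario two, using a Redis list as the queue; the pv_queue key, the page_views table, and the connection details are assumptions:

```php
<?php
// Front end (inside the page request): push only, never touch MySQL.
//   $redis->lPush('pv_queue', json_encode(['id' => $articleId, 'ts' => time()]));

// Back-end worker (CLI script kept alive by cron/supervisor): drain the queue
// in batches and apply one UPDATE per article instead of one INSERT per hit.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$pdo = new PDO('mysql:host=127.0.0.1;dbname=stats', 'app', 'secret');

while (true) {
    $batch = [];
    while (count($batch) < 500 && ($item = $redis->rPop('pv_queue')) !== false) {
        $batch[] = json_decode($item, true);
    }
    if ($batch === []) {
        sleep(1);                                   // queue empty, back off
        continue;
    }
    $counts = [];
    foreach ($batch as $row) {
        $counts[$row['id']] = ($counts[$row['id']] ?? 0) + 1;
    }
    foreach ($counts as $id => $n) {
        $stmt = $pdo->prepare('UPDATE page_views SET views = views + ? WHERE article_id = ?');
        $stmt->execute([$n, $id]);
    }
}
```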
Don't write to MySQL immediately; write to a text file first and flush to the database after every fixed number of hits, e.g. every 100.
1. Don't do it in PHP; do it in Nginx.
2. Have PHP write to a cache system, then periodically flush the data from the cache into the database.
3. Use Redis.
I feel you could put Redis in front of MySQL as a buffer first.
Page-view counting is not the kind of operation that should write to the database in real time; even if it holds up for now, the architecture isn't good enough. It's better to merge writes and reduce how often you hit the DB.
The idea seems wrong.
Page-view statistics are usually not done this way, because the numbers are never fully accurate anyway; some of the traffic may be search-engine crawlers, so there is no need to be this precise.
You can refer to the following article: (The principle and implementation of data collection in website statistics)
http://developer.51cto.com/art/201210 ...
If real-time numbers are not required, you can analyze the Nginx logs directly to produce the page-view statistics.
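For illustration, a sketch that tallies article views offline from the default Nginx "combined" access log; the log path and the /article/<id> URL pattern are assumptions:

```php
<?php
// Count page views from the Nginx access log instead of writing to MySQL
// at request time (e.g. run this from a daily cron job).
$counts = [];
$fh = fopen('/var/log/nginx/access.log', 'r');
if ($fh === false) {
    exit(1);
}
while (($line = fgets($fh)) !== false) {
    $fields = explode(' ', $line);
    $uri = $fields[6] ?? '';                       // request path in the combined format
    if (preg_match('#^/article/(\d+)#', $uri, $m)) {
        $id = (int) $m[1];
        $counts[$id] = ($counts[$id] ?? 0) + 1;
    }
}
fclose($fh);
// $counts can now be written to MySQL in one pass.
```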
The simplest way is to use the Blackhole engine, which accepts inserts very quickly: the master throws the rows away but still records them in the binary log, so a slave carrying a real table does the actual storage without slowing down the front end.
For details, see:
http://dev.mysql.com/doc/refman/5.0/e ...
Solve it with Redis: use the article ID as the key, increment the value by 1 on each visit, and write the counts into MySQL every half hour.
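A sketch of that suggestion with the phpredis extension; the pv: key prefix and the page_views table are assumptions:

```php
<?php
// In the page request: one INCR, no MySQL access at all.
function countView(Redis $redis, int $articleId): void
{
    $redis->incr("pv:$articleId");
}

// Cron job every 30 minutes: move the counters into MySQL and reset them.
// (KEYS keeps the sketch short; SCAN is kinder to Redis on large keyspaces.)
function flushViews(Redis $redis, PDO $pdo): void
{
    foreach ($redis->keys('pv:*') as $key) {
        $hits = (int) $redis->getSet($key, 0);      // read and reset in one step
        if ($hits > 0) {
            $id = (int) substr($key, strlen('pv:'));
            $stmt = $pdo->prepare('UPDATE page_views SET views = views + ? WHERE article_id = ?');
            $stmt->execute([$hits, $id]);
        }
    }
}
```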