One of our servers carries very heavy traffic: nearly 2.5 million dynamic page views a day, with the database averaging close to 600 queries per second.
Another server runs exactly the same program, but handles only about 400,000 dynamic PV per day.
Over the recent period the busy server has hung several times in a row. When it hangs, the situation is as follows:
The server itself does not crash, and the database can still be logged in to normally. But every query gets stuck in the "Sending data" state and does not finish for a long time. These are all simple SQL statements; sometimes they pile up on table A, sometimes on table B, and some connections hang in the Locked or Updating state instead.
According to the MySQL documentation, the "Sending data" state covers two situations: either MySQL has already found the rows and is sending the result set to the client, or MySQL already knows which rows it needs and is still reading them from the data files.
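A minimal way to watch for this, assuming you can still log in with the mysql client (the connection Id 12345 below is only a placeholder):

    -- List every connection with its full statement; the State column
    -- shows values such as "Sending data", "Locked" or "Updating".
    SHOW FULL PROCESSLIST;

    -- If a single stuck statement is blocking everything else, it can be
    -- killed by the Id reported in the processlist (12345 is a placeholder).
    KILL 12345;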
MySQL officially says this is not a bug in MySQL, but the documentation does not say how to deal with it either... So, judging from the symptoms, it looked like a configuration problem.
I first checked from the angle of SQL optimization. The statements that hang are all simple queries, their cost is very low and the indexes are well built, so I did not think the SQL itself was the problem. And indeed, there were no entries in the slow query log.
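For reference, a rough way to confirm that slow logging is really enabled and what its threshold is (variable names differ between MySQL versions, and the table in the EXPLAIN line is only a placeholder):

    -- Threshold, in seconds, above which a statement counts as slow.
    SHOW VARIABLES LIKE 'long_query_time';

    -- Whether slow-query logging is switched on at all
    -- (log_slow_queries on older servers, slow_query_log on newer ones).
    SHOW VARIABLES LIKE '%slow%';

    -- Double-check the plan and index usage of a suspect statement.
    EXPLAIN SELECT * FROM some_table WHERE some_indexed_column = 'abc';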
The tables had also been defragmented with OPTIMIZE TABLE, but a few days later the hangs came back again...
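That step is just the standard command, roughly like this (the table names are placeholders):

    -- Rebuild and defragment a MyISAM table and refresh its index statistics.
    OPTIMIZE TABLE products;

    -- Several tables can be handled in one statement.
    OPTIMIZE TABLE table_a, table_b;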
Later I tried to improve concurrency by increasing a series of memory-related settings such as key_buffer and thread_cache, but it had no effect. The situation stayed the same.
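That kind of tuning, sketched with SET GLOBAL (the values are illustrative, not the ones actually used; a permanent change would go into my.cnf):

    -- Cache for MyISAM index blocks.
    SET GLOBAL key_buffer_size = 256 * 1024 * 1024;

    -- Keep idle server threads around so new connections can reuse them.
    SET GLOBAL thread_cache_size = 64;

    -- Verify that the new values took effect.
    SHOW VARIABLES LIKE 'key_buffer_size';
    SHOW VARIABLES LIKE 'thread_cache_size';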
Later still, Query_cache was reduced back to the default 16M, and some of the data that rarely changes was made static. To my surprise, 12 days have now passed without the problem showing up again...
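Shrinking the cache can be done on the fly, roughly like this (a permanent change belongs in my.cnf):

    -- Drop the query cache back to 16MB and clear out its contents.
    SET GLOBAL query_cache_size = 16 * 1024 * 1024;
    RESET QUERY CACHE;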
Thinking about it afterwards, shrinking Query_cache may well have helped: the data is updated very frequently, so entries in the query cache are invalidated just as frequently. On the other hand, looking at MySQL's status, the query cache hit rate is still quite high, almost 75%.
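That hit rate comes from the server's status counters; a rough way to read it (the exact formula people use varies a little):

    -- Raw counters behind the query cache.
    SHOW STATUS LIKE 'Qcache%';
    SHOW STATUS LIKE 'Com_select';

    -- Approximate hit rate:
    --   Qcache_hits / (Qcache_hits + Com_select)
    -- i.e. cache hits divided by hits plus SELECTs that reached the server.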
I still suspect the real problem is in the application and just has not been found yet. The content that was made static is product descriptions; a typical product description runs to about 350 Chinese characters.
This is where the load is relatively heavy: a page shows seven or eight products, which adds up to maybe 3,500 Chinese characters. That is not much by itself, but the query runs very frequently, so the amount of data pulled from this table must be considerable, and MySQL was fetching from it constantly. And yet the statements that hang are sometimes not queries against this table at all...
Without proper tools at hand it is rather frustrating. In any case the problem seems to be gone for now, so I am recording it here first and will dig deeper once I have a better handle on it.
An important reason why MySQL connections fill up and the server grinds to a halt
Building and running a site is far harder than I imagined or expected. Beyond money and technology, some problems are simply not something an ordinary technician can solve. But along the way I also learned how to think about problems and how to solve them, and after working through a whole string of them in a row, problems that honestly not every developer or engineer could have solved, my confidence keeps growing.
Speaking of which, I should introduce our site, www.yes81.net. The basic setup is Linux 9.0, JBoss 4.2 as the web server, and MySQL. It has been running since around May 1, and current traffic is about 4,000 IPs a day.
I remember an earlier problem that also took a long time to solve: the CPU would run at around 100%, the system stopped responding, and the MySQL and JBoss processes died. That one was eventually fixed by creating indexes on some large tables. This problem looks a bit similar: when it hangs, the service barely responds at all. Looking at the MySQL processes in the background, the connection count had, unbelievably, already gone past the limit of 1000 I had configured. On the first day I raised the limit to 3000, wondering whether it was related, since traffic had grown recently. To be honest I still did not believe we really had 1000 concurrent connections, but the facts were right in front of me: 1000 processes stuck like this. On the second day 3000 filled up and died as well; the process list fills up very easily, and every process is in the Sending data state. After two days of searching the problem was still unresolved, whether by reconfiguring the startup parameters or by checking for an external attack; raising the temporary buffer to 512M, as some people suggested, did not help either. Almost every new connection would hang the same way, and always in the Sending data state. Is the data not being sent out, or can the query simply not finish?
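Raising the connection cap was done roughly like this (1000 and 3000 are the values from the text; by itself this only postpones the symptom, since the connections fill up again):

    -- Current cap and how close we are to it.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Threads_connected';
    SHOW STATUS LIKE 'Max_used_connections';

    -- Raise the cap at runtime; a permanent change goes into my.cnf.
    SET GLOBAL max_connections = 3000;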
With this question in mind I went back and forth with the developers: could there be deadlocks, or transactions that were never committed, locking the queries up? Sometimes everything is normal, but most of the time it is an abnormal lock-up. After a long while they reported that they could not find anything wrong in the program, since from the statements shown in the process list we could already pinpoint the exact code being executed. So where is the problem?
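To check whether genuine deadlocks or uncommitted transactions are involved, the usual starting point is the InnoDB monitor output (the syntax depends on the server version; both forms below exist):

    -- Older servers (4.x / early 5.0):
    SHOW INNODB STATUS;

    -- Newer servers:
    SHOW ENGINE INNODB STATUS;

    -- Look at the LATEST DETECTED DEADLOCK and TRANSACTIONS sections
    -- for locks held by transactions that were never committed.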
Then I recalled a database-corruption problem I had run into before under MS SQL Server 2000, and tried repairing the tables here as well. Going by which statements were blocking, I focused on a few important tables, one of them the restaurant information table (about 40,000 records). The repair command could not fix it; I then noticed that the table's engine was InnoDB, so I changed it to MyISAM and repaired it again. The repair reported no errors, and after restarting the system all the problems were solved!
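The final fix corresponds roughly to these statements (the table name is a placeholder; REPAIR TABLE only works on MyISAM-family tables, which is why the engine had to be changed first):

    -- See how badly the table is damaged.
    CHECK TABLE restaurant_info;

    -- REPAIR TABLE does nothing useful for an InnoDB table,
    -- so convert it to MyISAM first, then repair it.
    ALTER TABLE restaurant_info ENGINE = MyISAM;
    REPAIR TABLE restaurant_info;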