This article looks at using count() on a large MySQL table and how to optimize count(); if you face the same problem, I hope it is a useful reference.
A single table holds 60,000,000+ rows and cannot be split. We need three numbers from it: how many rows the table contains in total, how many belong to product A, and how many belong to product B.
Before optimization, the table structure was as follows (I have obfuscated the field names to hide their contents).
CREATE TABLE `xxxx` (
`link` varchar(200) DEFAULT NULL,
`test0` varchar(500) DEFAULT NULL,
`test1` varchar(50) DEFAULT NULL,
`test2` int(11) DEFAULT NULL,
`test3` varchar(20) DEFAULT NULL,
`test4` varchar(50) DEFAULT NULL,
`test5` varchar(50) NOT NULL,
`inserttime` datetime DEFAULT NULL,
`test6` bit(1) NOT NULL DEFAULT b'0',
`A` bit(1) NOT NULL DEFAULT b'0',
`B` bit(1) NOT NULL DEFAULT b'0' ,
PRIMARY KEY (`test5`),
KEY `test6` (`test6`) USING BTREE,
KEY `A` (`A`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
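Against the obfuscated schema above, the three counts we need can be written as plain queries; treating the bit(1) columns A and B as flags set to b'1' is my assumption based on their definitions:

```sql
-- Total number of rows in the table
SELECT COUNT(1) FROM xxxx;

-- Rows belonging to product A / product B
-- (assumes the bit(1) columns are flags, set = b'1')
SELECT COUNT(1) FROM xxxx WHERE A = b'1';
SELECT COUNT(1) FROM xxxx WHERE B = b'1';
```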
This is an ordinary InnoDB table, so its count(*) is much slower than it would be on MyISAM, and the row count InnoDB itself reports is only an estimate. The rows therefore have to be counted explicitly, and there are several ways to do that.
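When an approximate figure is enough, InnoDB's own estimate can be read without scanning the table at all; note that this number can be off by a large margin:

```sql
-- Estimated row count maintained by InnoDB (not exact)
SHOW TABLE STATUS LIKE 'xxxx';

-- The same estimate via information_schema
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'xxxx';
```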
Total: 61,500,000 rows
count(*): 1539.499s
count(1): 907.581s
count(A): counts via the secondary index on A
count(test6): counts via the secondary index on test6
Unsurprisingly, since the table is poorly optimized, every one of these takes on the order of a thousand seconds or more, which is something we cannot accept.
Here we begin to analyze and deal with this problem.
My target for a full-table count(*) was: under 200s is acceptable, under 100s is good, under 50s is excellent.
First I extracted the test6 column into a separate, narrow table and ran the counts against it.
Total: 61,500,000 rows
count(*): 10.238s
count(1): 8.710s
count(test6), counting the new table's key column: 12.957s
count(1) was the most efficient, 52.0% faster than the slowest option, count(pk).
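The narrow-table step above can be sketched as follows. The article does not show the exact DDL, so the table name xxxx_test6 and the surrogate auto-increment key are assumptions:

```sql
-- A minimal one-column companion table (name and key are hypothetical)
CREATE TABLE `xxxx_test6` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `test6` bit(1) NOT NULL DEFAULT b'0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Copy only the flag column across from the wide table
INSERT INTO `xxxx_test6` (`test6`)
SELECT `test6` FROM `xxxx`;
```

Counting the narrow table is fast because InnoDB has far fewer pages to scan per row than in the wide original table.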
Where a column's range of values is known, change it to the most compact type. For example:
varchar vs. char: although varchar allocates storage dynamically, it needs 1 to 2 extra bytes to record the string length, and it makes update operations more expensive; for fixed-length values, char is the better choice.
datetime vs. timestamp: timestamp is smaller, but it only covers the range from 1970 to 2038.
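As a sketch, the type changes might look like the following. The concrete column choices and lengths are my assumptions, and on a 60M-row table each ALTER rebuilds the table, so this belongs in a maintenance window:

```sql
-- Fixed-width values: char avoids varchar's 1-2 length bytes per row
ALTER TABLE `xxxx` MODIFY `test3` char(20) DEFAULT NULL;

-- timestamp is smaller than datetime, but only valid from 1970 to 2038
ALTER TABLE `xxxx` MODIFY `inserttime` timestamp NULL DEFAULT NULL;
```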
After these changes, a final test with count(1), the fastest option, took 168s on the full table. Still somewhat slow, but acceptable.
To summarize:
Design the table's columns to minimize their stored length; do not reach for wide varchar columns by default.
Use count(1) rather than count(*) for counting rows.