When data volume surges, a common optimization is to shard the database or tables by hash to speed up reads and writes. The author made a simple attempt: 100 million rows of data, split across 100 tables. The specific implementation process is as follows:
First create 100 tables:
$i = 0;
while ($i <= 99) {
    echo "$i\r\n"; // progress output
    $sql = "CREATE TABLE `code_".$i."` (
      `full_code` char(10) NOT NULL,
      `create_time` int(10) unsigned NOT NULL,
      PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);
    $i++;
}
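A side note: the mysql_* functions used here belong to the old mysql extension, which was deprecated in PHP 5.5 and removed in PHP 7. A minimal PDO equivalent of the creation loop, assuming a hypothetical local database named test and placeholder credentials, would be:

$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');
for ($i = 0; $i <= 99; $i++) {
    // Same shard schema as above, one table per bucket
    $pdo->exec("CREATE TABLE `code_$i` (
      `full_code` char(10) NOT NULL,
      `create_time` int(10) unsigned NOT NULL,
      PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8");
}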
Here is the sharding rule: full_code is the primary key, and we hash full_code to decide which of the 100 tables a row belongs to. The function is as follows:
$table_name = get_hash_table('code', $full_code);

function get_hash_table($table, $code, $s = 100) { // $s = number of shard tables
    $hash = sprintf("%u", crc32($code)); // force an unsigned CRC32 value
    echo $hash; // debug output
    $hash1 = intval(fmod($hash, $s)); // bucket index 0..99
    return $table."_".$hash1;
}
Before inserting a row, we call get_hash_table to get the name of the table it should be stored in.
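For example, an insert would look like this (a minimal sketch in the same mysql_* style as above; the value of $full_code is a made-up example):

$full_code = 'abc123XYZ0'; // hypothetical 10-character code
$table_name = get_hash_table('code', $full_code);
$sql = "INSERT INTO `".$table_name."` (full_code, create_time)
        VALUES ('".mysql_real_escape_string($full_code)."', ".time().")";
mysql_query($sql);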
Finally, we use the MERGE storage engine to present the 100 shards as one complete code table:
CREATE TABLE IF NOT EXISTS `code` (
  `full_code` char(10) NOT NULL,
  `create_time` int(10) unsigned NOT NULL,
  INDEX (`full_code`)
) ENGINE=MERGE UNION=(code_0,code_1,code_2 ...) INSERT_METHOD=LAST;
Now, running SELECT * FROM code retrieves the full_code data from all 100 underlying tables at once.
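Two general caveats about the MERGE engine (standard MySQL behavior, not specific to this article): every underlying table must be MyISAM with identical column and index definitions, and with INSERT_METHOD=LAST an insert through the merge table goes into the last table in the UNION list, so inserts should still be routed through get_hash_table as shown above. A lookup through the merge table might look like this (same assumed mysql_* connection):

$full_code = 'abc123XYZ0'; // hypothetical code to look up
$res = mysql_query("SELECT create_time FROM `code` WHERE full_code = '"
    . mysql_real_escape_string($full_code) . "'");
$row = mysql_fetch_assoc($res);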