When the volume of data surges, we usually turn to sharding (splitting data across databases or tables, for example by hash) to keep reads and writes fast. The author made a simple attempt: 100 million rows of data, split across 100 tables. The implementation is as follows:
First create 100 tables:
$i = 0;
while ($i <= 99) {
    echo "code_$i\r\n";   // progress output
    $sql = "CREATE TABLE `code_" . $i . "` (
        `full_code` char(10) NOT NULL,
        `create_time` int(10) unsigned NOT NULL,
        PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);    // create shard table code_0 ... code_99
    $i++;
}
Here is my sharding rule: full_code is the primary key, and we hash full_code to decide which table a row belongs to. The function is as follows:
function get_hash_table($table, $code, $s = 100) {
    // hash the code with CRC32, then take it modulo the number of shards
    $hash  = sprintf("%u", crc32($code));
    $hash1 = intval(fmod($hash, $s));
    return $table . "_" . $hash1;   // e.g. code_0 ... code_99
}

$table_name = get_hash_table('code', $full_code);
Before inserting data, we call get_hash_table to get the name of the table the row should be stored in.
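For example, an insert could look like the following. This is only a minimal sketch: it keeps the article's legacy mysql_* API (removed in PHP 7; the same pattern applies with mysqli or PDO), and it assumes a database connection is already open and that the sample values are hypothetical.

$full_code   = 'abc123xyz0';                        // sample 10-character code (hypothetical value)
$create_time = time();
$table_name  = get_hash_table('code', $full_code);  // e.g. "code_0" ... "code_99", depending on the hash
$sql = "INSERT INTO `" . $table_name . "` (`full_code`, `create_time`)
        VALUES ('" . mysql_real_escape_string($full_code) . "', " . intval($create_time) . ")";
mysql_query($sql);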
Finally, we use the MERGE storage engine to present the 100 shard tables as one complete code table:
CREATE TABLE IF NOT EXISTS `code` (
  `full_code` char(10) NOT NULL,
  `create_time` int(10) unsigned NOT NULL,
  INDEX (`full_code`)
) ENGINE=MERGE UNION=(code_0,code_1,code_2 ...) INSERT_METHOD=LAST;
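Since the UNION list has to name all 100 shard tables, it can be easier to build the statement in PHP than to write it out by hand. A minimal sketch, reusing the same shard naming and legacy mysql_query call as above:

// Build the UNION list "code_0,code_1,...,code_99" and the MERGE table DDL.
$union = array();
for ($i = 0; $i <= 99; $i++) {
    $union[] = "code_" . $i;
}
$sql = "CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX (`full_code`)
) ENGINE=MERGE UNION=(" . implode(",", $union) . ") INSERT_METHOD=LAST";
mysql_query($sql);

Note that the MERGE engine only works over MyISAM tables with identical column and index definitions, which is why the shard tables above were created with ENGINE=MyISAM.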
We can then read all of the full_code data with SELECT * FROM code.