This article shows how to split 100 million rows of data across 100 tables; the code is below.
When data volume surges, most people turn to sharding databases and tables (by hashing and so on) to speed up reads and writes. The author made a simple attempt: 100 million rows of data, split into 100 tables. The implementation process is as follows:
First create 100 tables:
$i = 0;
while ($i <= 99) {
    echo $i . "\r\n"; // print progress
    $sql = "CREATE TABLE `code_" . $i . "` (
        `full_code` char(10) NOT NULL,
        `create_time` int(10) unsigned NOT NULL,
        PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql); // assumes a mysql connection is already open
    $i++;
}
Here is my sharding rule: full_code is the primary key, and we hash on full_code to decide which table a row belongs to.
The function is as follows:
$table_name = get_hash_table('code', $full_code);

function get_hash_table($table, $code, $s = 100) {
    // CRC32 the code, then take it modulo the number of tables
    $hash = sprintf("%u", crc32($code));
    echo $hash; // debug output
    $hash1 = intval(fmod($hash, $s));
    return $table . "_" . $hash1;
}
Before inserting a row, call get_hash_table to get the name of the table the data should be stored in, then run the INSERT against that table.
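For example, the insert flow looks roughly like this (a minimal sketch reusing get_hash_table above; $full_code and the mysql connection are assumed to already exist):

// pick the shard table for this code, then insert into it
$table_name = get_hash_table('code', $full_code);
$sql = "INSERT INTO `" . $table_name . "` (`full_code`, `create_time`)
        VALUES ('" . mysql_real_escape_string($full_code) . "', " . time() . ")";
mysql_query($sql);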
Finally, we use the MERGE storage engine to expose a complete code table:
CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX(`full_code`)
) ENGINE=MERGE UNION=(code_0, code_1, code_2, ...) INSERT_METHOD=LAST;
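The union list above is abbreviated; since the shard tables run from code_0 to code_99, the full CREATE statement can be generated with a short loop (a sketch, assuming the same mysql_query connection used earlier):

// build "code_0,code_1,...,code_99" and create the MERGE table over all 100 shards
$parts = array();
for ($i = 0; $i <= 99; $i++) {
    $parts[] = 'code_' . $i;
}
$union = implode(',', $parts);
$sql = "CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX(`full_code`)
) ENGINE=MERGE UNION=(" . $union . ") INSERT_METHOD=LAST";
mysql_query($sql);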
This way we can read all the full_code data with a single select * from code.
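For instance, a point lookup by full_code can go through the MERGE table and use the full_code index on the underlying shards (again a sketch using the same old mysql_* API as the examples above):

// look up a single code through the MERGE table
$sql = "SELECT * FROM `code` WHERE `full_code` = '" . mysql_real_escape_string($full_code) . "'";
$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    print_r($row);
}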
That is all for this article; I hope it is helpful to everyone.