This article describes how to split 100 million rows of data into MySQL sub-tables with PHP. As the amount of data grows, a common optimization is to hash data across databases or tables to speed up reads and writes. As a simple experiment, I split 100 million rows across 100 tables. The implementation is as follows:
First, create 100 tables:
$i = 0;
while ($i <= 99) {
    $table_name = "code_".$i;
    echo $table_name, "\r\n";
    $sql = "CREATE TABLE `".$table_name."` (
              `full_code` char(10) NOT NULL,
              `create_time` int(10) unsigned NOT NULL,
              PRIMARY KEY (`full_code`)
            ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);
    $i++;
}
Next, the sharding rule: full_code is the primary key, and the target table is chosen by hashing full_code.
The function is as follows:
$table_name = get_hash_table('code', $full_code);

function get_hash_table($table, $code, $s = 100) {
    // crc32() can return a negative integer on 32-bit builds; "%u" forces an unsigned value
    $hash  = sprintf("%u", crc32($code));
    $hash1 = intval(fmod($hash, $s));
    return $table."_".$hash1;
}
In this way, get_hash_table() returns the name of the table a row belongs in, so we call it before every insert.
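For example, an insert would look like the following. This is a minimal sketch: the $full_code and $create_time values are placeholders, and it assumes a mysql_* connection is already open, as in the snippets above.

$full_code   = 'abc1234567';   // placeholder value, 10 chars to match char(10)
$create_time = time();

// pick the shard with get_hash_table(), then write directly to it
$table_name = get_hash_table('code', $full_code);
$sql = "INSERT INTO `".$table_name."` (`full_code`, `create_time`)
        VALUES ('".mysql_real_escape_string($full_code)."', ".intval($create_time).")";
mysql_query($sql);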
Finally, we use the MERGE storage engine to expose all the shards as a single code table (MERGE only works over MyISAM tables, which is why the shards were created with ENGINE=MyISAM).
CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX (`full_code`)
) ENGINE=MERGE UNION=(code_0,code_1,code_2,...) INSERT_METHOD=LAST;
With that in place, a simple SELECT * FROM code returns the full_code data from all 100 shards.
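As a sketch of how that reads in practice (reusing the connection and the get_hash_table() function from above): a full scan can go through the MERGE table, while a point lookup on full_code is cheaper against the one shard it hashes to.

// scan across all shards through the MERGE table
$result = mysql_query("SELECT `full_code`, `create_time` FROM `code` LIMIT 10");
while ($row = mysql_fetch_assoc($result)) {
    echo $row['full_code'], "\t", $row['create_time'], "\r\n";
}

// point lookup: hash to the shard and query it directly
$full_code  = 'abc1234567';   // placeholder value
$table_name = get_hash_table('code', $full_code);
$result = mysql_query("SELECT `create_time` FROM `".$table_name."`
                        WHERE `full_code` = '".mysql_real_escape_string($full_code)."'");
$row = mysql_fetch_assoc($result);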