Splitting 100 million rows of data into 100 MySQL tables with PHP
As the amount of data grows, sharding databases or tables by hash is a common way to improve read/write performance. I made a simple attempt to split 100 million rows across 100 tables. The implementation goes as follows.
First, create 100 tables:
$i = 0;
while ($i <= 99) {
    // create shard tables code_0 through code_99
    $sql = "CREATE TABLE `code_".$i."` (
              `full_code` char(10) NOT NULL,
              `create_time` int(10) unsigned NOT NULL,
              PRIMARY KEY (`full_code`)
            ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);
    $i++;
}
Next, the sharding rule: full_code is the primary key, and we hash on full_code to decide which table a row goes into.
The function is as follows:
$table_name = get_hash_table('code', $full_code);

function get_hash_table($table, $code, $s = 100) {
    $hash  = sprintf("%u", crc32($code));   // unsigned CRC32 of the code
    $hash1 = intval(fmod($hash, $s));       // bucket number in the range 0..$s-1
    return $table."_".$hash1;               // table name, e.g. "code_" plus the bucket
}
In this way, get_hash_table gives us the name of the table a row belongs to before we insert it.
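For example, an insert first resolves the target shard and then writes into it. A minimal sketch, assuming an existing mysql connection and an already-escaped $full_code (the variable values here are illustrative, not from the original code):

$full_code  = 'abc123XY90';                          // example 10-character code
$table_name = get_hash_table('code', $full_code);    // resolves to one of code_0 ... code_99
$sql = "INSERT INTO `".$table_name."` (`full_code`, `create_time`)
        VALUES ('".$full_code."', ".time().")";
mysql_query($sql);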
Finally, we use the MERGE storage engine to expose the shards as one complete code table.
CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX(full_code)
) ENGINE=MERGE UNION=(code_0,code_1,code_2,...,code_99) INSERT_METHOD=LAST;
With this in place, a single select * from code returns the full_code data from all 100 underlying tables.
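Point lookups, on the other hand, can skip the MERGE table and go straight to the shard that holds the key; reading through the merge table is only needed when a query has to span all shards. A sketch of both, assuming the same connection as above:

// single-key lookup: route to the shard directly via the hash
$table_name = get_hash_table('code', $full_code);
$result = mysql_query("SELECT * FROM `".$table_name."` WHERE `full_code` = '".$full_code."'");

// scan across all 100 shards through the MERGE table
$result = mysql_query("SELECT * FROM `code`");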