How to split 100 million rows of data into 100 MySQL tables with PHP
The following code creates 100 tables to demonstrate the table-sharding process for 100 million rows of data.
As the volume of data grows, a common optimization is to hash records across multiple databases or tables to speed up reads and writes. I made a simple attempt at splitting 100 million rows across 100 tables. The implementation is as follows:
First, create 100 tables:
$i = 0;
while ($i <= 99) {
    // Create tables code_0 through code_99
    echo "code_".$i."\r\n";
    $sql = "CREATE TABLE `code_".$i."` (
        `full_code` char(10) NOT NULL,
        `create_time` int(10) unsigned NOT NULL,
        PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);
    $i++;
}
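Note that the mysql_* extension used above was removed in PHP 7. For reference, a minimal sketch of the same loop using mysqli might look like this (the connection credentials are placeholders, not from the original article):

<?php
// Sketch using mysqli; host, user, password, and database name
// below are assumptions for illustration only.
$mysqli = new mysqli('localhost', 'user', 'password', 'test');
if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}
for ($i = 0; $i <= 99; $i++) {
    $sql = "CREATE TABLE IF NOT EXISTS `code_".$i."` (
        `full_code` char(10) NOT NULL,
        `create_time` int(10) unsigned NOT NULL,
        PRIMARY KEY (`full_code`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    $mysqli->query($sql);
}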
Next, my sharding rule: full_code is the primary key, and we hash on full_code to pick the target table.
The function is as follows:
function get_hash_table($table, $code, $s = 100) {
    // Unsigned CRC32 of the code, taken modulo the number of shards
    $hash = sprintf("%u", crc32($code));
    $hash1 = intval(fmod($hash, $s));
    return $table."_".$hash1;
}
$table_name = get_hash_table('code', $full_code);
In this way, get_hash_table gives us the name of the table a record belongs in before we insert it.
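As an illustration, a routed insert might look like the following, a minimal sketch that assumes the $mysqli connection from the earlier example; the sample $full_code and $create_time values are likewise assumptions:

// Route the row to its shard, then insert with a prepared statement
// (table names cannot be bound, so the shard name is concatenated).
$full_code   = 'abc1234567';   // example 10-character code
$create_time = time();
$table_name  = get_hash_table('code', $full_code);
$stmt = $mysqli->prepare("INSERT INTO `".$table_name."` (full_code, create_time) VALUES (?, ?)");
$stmt->bind_param('si', $full_code, $create_time);
$stmt->execute();
$stmt->close();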
Finally, we use the MERGE storage engine to stitch the shards into one complete code table:
CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX(full_code)
) ENGINE=MERGE UNION=(code_0, code_1, code_2, ..., code_99) INSERT_METHOD=LAST;

(All 100 underlying tables, code_0 through code_99, must be listed in the UNION clause, and MERGE requires that they all be identically structured MyISAM tables.)
In this way, SELECT * FROM code returns the full_code data from all 100 shards at once.
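One caveat: with INSERT_METHOD=LAST, rows inserted through the merge table all land in the last underlying table (code_99), so writes should still be routed with get_hash_table, and the merge table is best treated as a read convenience. A point lookup can also skip the merge table and go straight to the right shard; a sketch, again assuming the earlier $mysqli handle:

// Look up one code directly in its shard instead of scanning
// the MERGE table.
$table_name = get_hash_table('code', $full_code);
$stmt = $mysqli->prepare("SELECT create_time FROM `".$table_name."` WHERE full_code = ?");
$stmt->bind_param('s', $full_code);
$stmt->execute();
$result = $stmt->get_result();
$row = $result->fetch_assoc();
$stmt->close();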
That is all for this article; I hope it helps you.