Copying records between MySQL tables: keeping connection time and memory usage under control.

Source: Internet
Author: User
Tags: cron, script, nginx, server
There are two tables: a temporary table temp_t and a main table table1, both MyISAM, each with about 280,000 records.
A cron job fetches information from the network every minute and stores it in temp_t (frequent writes).
Another cron job runs every 10 minutes and copies the information from temp_t to the main table table1 (table1 is read frequently and written once every 10 minutes).

temp_t fields: id (auto_increment), title (varchar), content (varchar), date (datetime), mykey (tinyint(1), default 1; this marks whether the row has been copied to the main table, and is updated to 2 after copying)

table1 fields: id (no auto_increment), title (varchar), content (varchar), date (datetime)
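For reference, a minimal sketch of the two schemas as described above. The varchar lengths, the index on mykey, and the primary keys are assumptions; the post does not specify them:

```sql
-- Hypothetical reconstruction of the two tables described above.
-- varchar lengths are assumed; the post does not give them.
CREATE TABLE temp_t (
    id      INT NOT NULL AUTO_INCREMENT,
    title   VARCHAR(255),
    content VARCHAR(5000),
    date    DATETIME,
    mykey   TINYINT(1) NOT NULL DEFAULT 1,  -- 1 = not yet copied, 2 = copied
    PRIMARY KEY (id),
    KEY idx_mykey (mykey)                   -- helps the "WHERE mykey = '1'" scans
) ENGINE=MyISAM;

CREATE TABLE table1 (
    id      INT NOT NULL,                   -- no AUTO_INCREMENT; values come from temp_t
    title   VARCHAR(255),
    content VARCHAR(5000),
    date    DATETIME,
    PRIMARY KEY (id)
) ENGINE=MyISAM;
```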


280,000 records, each table about 300 MB; each run copies roughly 2,000-3,000 records.
If I copy row by row directly, the MySQL connection stays open too long and performance degrades: table1 behaves as if it were locked, and queries against it take noticeably longer.
require dirname(__FILE__) . '/../connection.php';
mysql_select_db("News", $connextion);
mysql_query("SET NAMES utf8");
$query = mysql_query("SELECT * FROM temp_t WHERE mykey = '1'");
while ($rows = mysql_fetch_array($query)) {
    mysql_query("UPDATE temp_t SET mykey = '2' WHERE id = '" . mysql_real_escape_string($rows['id']) . "'");
    mysql_query("INSERT INTO table1 (id, title, content, date) VALUES ('" . mysql_real_escape_string($rows['id']) . "', '" . mysql_real_escape_string($rows['title']) . "', '" . mysql_real_escape_string($rows['content']) . "', '" . mysql_real_escape_string($rows['date']) . "')");
}
mysql_close($connextion);



So I tried first reading the records from temp_t, saving them as a JSON array, and then copying the data to table1. But sometimes the script hits "cannot allocate memory" and the whole server crashes (presumably the JSON array is too large and takes a huge amount of memory). PS: I have already raised memory_limit in php.ini to 64M, but that feels like a stopgap — a copy job like this should not need that much memory; the real problem must be the code.

require dirname(__FILE__) . '/../connection.php';
mysql_select_db("News", $connextion);
mysql_query("SET NAMES utf8");
$query = mysql_query("SELECT * FROM temp_t WHERE mykey = '1'");
$jon = array();
$pjon = '';
while ($rows = mysql_fetch_array($query)) {
    $jon['a'] = $rows['id'];
    $jon['b'] = $rows['title'];
    $jon['c'] = $rows['content'];
    $jon['d'] = $rows['date'];
    $pjon .= json_encode($jon) . ',';
}
mysql_close($connextion);
$njso = json_decode('[' . substr($pjon, 0, -1) . ']');
foreach ($njso as $nx) {
    if ($nx->a) {
        require dirname(__FILE__) . '/../connection.php';
        mysql_select_db("News", $connextion);
        mysql_query("SET NAMES utf8");
        mysql_query("UPDATE temp_t SET mykey = '2' WHERE id = '" . mysql_real_escape_string($nx->a) . "'");
        mysql_query("INSERT INTO table1 (id, title, content, date) VALUES ('" . mysql_real_escape_string($nx->a) . "', '" . mysql_real_escape_string($nx->b) . "', '" . mysql_real_escape_string($nx->c) . "', '" . mysql_real_escape_string($nx->d) . "')");
        mysql_close($connextion);
    }
}


So, experts, is there a more economical way to copy the table's data — one that keeps the MySQL connection short and uses less memory? Thank you.


Reply to discussion (solution)

Execute these two statements in turn:
INSERT INTO table1 (id, title, content, date)
SELECT id, title, content, date FROM temp_t WHERE mykey = '1';

UPDATE temp_t SET mykey = '2' WHERE mykey = '1';

There is no need to route the data through PHP at all.
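One caveat worth noting (my own addition, not from the reply above): because the first cron job keeps inserting into temp_t every minute, rows that arrive between the INSERT ... SELECT and the UPDATE would be flagged as copied without ever being copied. A common workaround is to mark the batch first with a temporary flag value (3 here is an assumed convention, not from the post) and operate only on that batch:

```sql
-- Mark the current batch so rows arriving mid-copy are unaffected.
UPDATE temp_t SET mykey = '3' WHERE mykey = '1';

-- Copy only the marked batch.
INSERT INTO table1 (id, title, content, date)
SELECT id, title, content, date FROM temp_t WHERE mykey = '3';

-- Flag the batch as copied.
UPDATE temp_t SET mykey = '2' WHERE mykey = '3';
```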

Since temp_t is just a staging table, why not delete the data after the transfer?


The temp_t data cannot be deleted, and the insert also needs a check: titles must not repeat (to avoid inserting duplicates).
Following the expert's advice, will that do it? I have not tested it yet, so I will have to stop the cron script first.
One timid question: in your experience, with a 300 MB table and 2,000-3,000 records inserted per run like this, roughly how long does such a script take and how much memory does it use? Are there values in my.cnf I should increase to speed up the INSERT and SELECT? Thanks again.

require dirname(__FILE__) . '/../connection.php';
mysql_select_db("News", $connextion);
mysql_query("SET NAMES utf8");
$query = mysql_query("INSERT INTO table1 (id, title, content, date) SELECT id, title, content, date FROM temp_t WHERE mykey = '1'");
$query = mysql_query("UPDATE temp_t SET mykey = '2' WHERE mykey = '1'");
mysql_close($connextion);

1: temp_t is just a staging table, with no external reads, right? If it is read externally, why not use memcache and have the table1 reads hit memcache instead?

2: 300 MB is not that big. Other people's article sites routinely have 10 GB in a single table and copy it the same way.

3: On a previous project at my company, one table had 4 million rows. The approach: read from the staging table one row at a time, slowly — do not try to squeeze everything out at once. The job may run continuously, but that still beats locking everything up.
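The "read slowly, a bit at a time" idea in point 3 can also be done in SQL alone by copying in small batches until no rows remain; the batch size of 500 is an arbitrary assumption. MySQL allows ORDER BY and LIMIT on a single-table UPDATE, and ordering by id keeps the two statements pointed at the same rows, since newly arriving rows get higher auto_increment ids:

```sql
-- Copy one small batch (size 500 is an assumption), lowest ids first.
INSERT INTO table1 (id, title, content, date)
SELECT id, title, content, date
FROM temp_t
WHERE mykey = '1'
ORDER BY id
LIMIT 500;

-- Mark the same batch as copied.
UPDATE temp_t SET mykey = '2'
WHERE mykey = '1'
ORDER BY id
LIMIT 500;
```

Repeating this pair every few seconds (or in a loop with a short sleep) keeps each statement, and thus each table lock, short.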

1: temp_t is just a staging table, with no external reads, right? If it is read externally, why not use memcache and have the table1 reads hit memcache instead?

There is not enough memory to install memcache as well... Other people run Nginx server clusters; mine is a virtual host that costs a few dozen yuan...

For a 10 GB table copy, don't some values in my.cnf need to be increased?
I always feel my 300 MB table runs slowly and often gets stuck, while others handle astronomically large tables with frequent reads and writes smoothly.
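On the my.cnf question, which goes unanswered in the thread: for MyISAM tables the settings that usually matter for this workload are the key buffer (caches index blocks for both the SELECT and the index maintenance during INSERT) and the bulk-insert buffer (used by INSERT ... SELECT into MyISAM). The values below are only illustrative for a small virtual server, not tuned recommendations:

```ini
# my.cnf sketch — illustrative values for a small MyISAM-only server,
# not recommendations for this specific workload.
[mysqld]
key_buffer_size         = 64M   # caches MyISAM index blocks
bulk_insert_buffer_size = 8M    # per-session buffer for bulk INSERT ... SELECT
```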

Then use the third approach: no matter how much data you have, process it one row at a time. Even at one row every 1-2 seconds, the total over a day is considerable.
There is no need to dump so much at once; if there were, even large companies would be afraid to do it this way.
