MySQL update: data volume too large and the connection is dropped

Source: Internet
Author: User
The data volume of my MySQL update is too large and the connection keeps dropping. I have a data source of 20,000 rows, and I update another data table by calling a remote API for each row, one by one. Because the data volume is so large, the server frequently loses its connection to the remote server, or the page shows a 500 internal error. Is there any way to optimize this? Can I update in batches? Please give me some guidance. Thank you.


Reply to discussion (solution)

My current approach is a for loop with LIMIT to batch the mysql_query() calls. Is that the right idea or not? Please advise.
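As a rough sketch of that for-loop-plus-LIMIT idea — the table and column names below are hypothetical placeholders, and the queries are only built, not executed:

```php
<?php
// Sketch: walk the 20,000 rows in fixed-size LIMIT/OFFSET batches instead
// of one query per row. batch_ranges() is a pure helper; the SQL is only
// assembled here, and "data_source"/"id" are hypothetical names.
function batch_ranges($total, $batchSize) {
    $ranges = array();
    for ($offset = 0; $offset < $total; $offset += $batchSize) {
        $ranges[] = array($offset, min($batchSize, $total - $offset));
    }
    return $ranges;
}

foreach (batch_ranges(20000, 500) as $range) {
    list($offset, $size) = $range;
    $sql = sprintf("SELECT id FROM data_source ORDER BY id LIMIT %d, %d",
                   $offset, $size);
    // mysql_query($sql);        // run on PHP 5.5 with the mysql extension
    // foreach (...) { ... }     // hypothetical per-row remote API call
}
```

One caveat: large OFFSET values get slow on big tables, so keying each batch on `WHERE id > $lastId` instead of an offset scales better.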

After testing, my own queries are not slow; it is the API response that is slow, and the server simply drops the connection. Is there a way to handle the response timeout?

set_time_limit(0);
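`set_time_limit(0)` removes only the PHP execution limit; the slow remote API still needs its own timeout so one stalled response cannot hang the run. A minimal sketch — the URL is a hypothetical placeholder and the request itself is left commented out:

```php
<?php
// Sketch: lift the PHP-side limit, then bound how long we wait on the
// remote API. The endpoint URL is hypothetical.
set_time_limit(0);        // no PHP execution limit (often disabled on shared hosts)
ignore_user_abort(true);  // keep running even if the browser disconnects

$ch = curl_init('http://api.example.com/update');   // hypothetical endpoint
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);  // give up connecting after 5 s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // give up waiting after 30 s
// $response = curl_exec($ch);                // returns false on timeout
// if ($response === false) { error_log(curl_error($ch)); /* retry or skip */ }
curl_close($ch);
```

Note that on many shared virtual hosts `set_time_limit()` is disabled, and the front-end web server enforces its own limit regardless, which is why moving the job off the web request matters.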

Do you generate the SQL statements, submit them to the API, and have the API run them?
If so, I recommend writing the SQL statements to be executed into a file and submitting the whole file.
After the API receives the file, it executes the SQL statements in the file one by one.

If the structure cannot be changed, you can only submit them one at a time, sending the next only after the previous one returns success. The problem may be that the number of connections on the API side is limited.
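A sketch of that file-based suggestion, with hypothetical file, table, and endpoint names (the actual upload is left commented out, since the receiving API would have to support it):

```php
<?php
// Sketch: write every UPDATE into one .sql file and submit that file in a
// single request, instead of 20,000 separate API calls. $rows stands in
// for the poster's real data source; "target_table" is hypothetical.
$rows = array(
    array('id' => 1, 'value' => 'a'),
    array('id' => 2, 'value' => 'b'),
);

$file = 'pending_updates.sql';
$fp = fopen($file, 'w');
foreach ($rows as $row) {
    fwrite($fp, sprintf("UPDATE target_table SET value = '%s' WHERE id = %d;\n",
                        addslashes($row['value']), (int)$row['id']));
}
fclose($fp);

// One upload instead of one request per row (hypothetical endpoint):
// $ch = curl_init('http://api.example.com/run-sql-file');
// curl_setopt($ch, CURLOPT_POST, true);
// curl_setopt($ch, CURLOPT_POSTFIELDS, array('sql' => new CURLFile($file)));
// curl_exec($ch);
```

`CURLFile` exists as of PHP 5.5, which matches the poster's environment.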




Now I want to put the API call in a separate file and call it step by step.

I have 20,000 rows in total. I calculated that calling the API for 100 entries takes about 30 seconds, so the full run would take roughly 20000 / 100 × 30 = 6,000 seconds; long before that, the main program kicks me out with an error. Is there any way to solve this? Thank you, moderator.

Why does it take 30 seconds to call the API for 100 entries? Can't your SQL be optimized?



Correction: it is 1,000 entries, and that does not include the peer's network latency or other factors.

Even 1,000 entries per 100 seconds makes for a long run, and during such a long operation the HiChina host reports an error. Is there any way to split the work and execute it in batches?
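One common workaround on time-limited shared hosts, sketched here with hypothetical script and parameter names, is to process a small slice per HTTP request and have each request hand off to the next via a redirect, so no single request lives long enough to be killed:

```php
<?php
// Sketch: each request handles $batchSize rows starting at ?offset=N,
// then redirects to itself with the next offset. next_offset() is a
// pure helper; the per-slice work is left as a comment.
$total     = 20000;
$batchSize = 200;
$offset    = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;

function next_offset($offset, $batchSize, $total) {
    $next = $offset + $batchSize;
    return ($next >= $total) ? -1 : $next;   // -1 means all slices are done
}

// process_slice($offset, $batchSize);  // hypothetical: SELECT ... LIMIT + API calls

$next = next_offset($offset, $batchSize, $total);
if ($next >= 0) {
    // header('Location: update.php?offset=' . $next);  // hand off to next slice
    echo "next offset: $next\n";
} else {
    echo "done\n";
}
```

Progress survives a killed request, because the offset is carried in the URL rather than in the dying process.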

Use multiple processes.
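A sketch of the multi-process idea using the `pcntl` extension (CLI on Linux only); the worker count and slice math are illustrative, and the per-slice work is left as a comment:

```php
<?php
// Sketch: fork N workers, each taking its own slice of the 20,000 rows,
// so slow API responses overlap instead of running one after another.
$total   = 20000;
$workers = 4;
$slice   = (int)ceil($total / $workers);   // rows per worker

if (function_exists('pcntl_fork')) {       // pcntl is a CLI/Linux extension
    for ($i = 0; $i < $workers; $i++) {
        $pid = pcntl_fork();
        if ($pid === 0) {                  // child process
            $offset = $i * $slice;
            // Reconnect to MySQL here: children must NOT share the
            // parent's connection. Then process LIMIT $offset, $slice.
            exit(0);
        }
    }
    while (pcntl_wait($status) > 0) {}     // parent waits for all children
}
```

The reconnect comment matters: a forked child that reuses the parent's MySQL connection will corrupt it for both processes.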

I have a total of 20,000 rows. I calculated that calling the API for 100 entries takes 30 seconds; the main program kicks me out early.
Because you are executing UPDATE statements, there is no room for optimization on the SQL side (let alone on the API calls).
However, 20,000 rows should not be pushed through a web request (which updates only one record at a time); use a CLI program instead.
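CLI here means running PHP from the command line (`php script.php`) instead of through the web server, so no request timeout applies and a closed browser cannot abort the job. A minimal sketch with hypothetical names:

```php
<?php
// update_rows.php -- sketch of the CLI approach: run the whole job with
//   php update_rows.php
// No web-server timeout applies, and a dropped browser cannot abort it.
if (PHP_SAPI !== 'cli') {
    die("Run this from the command line: php update_rows.php\n");
}
set_time_limit(0);

$total      = 20000;
$batch      = 500;
$iterations = 0;
for ($offset = 0; $offset < $total; $offset += $batch) {
    // $rows = mysql_query("SELECT ... LIMIT $offset, $batch");  // PHP 5.5 era
    // foreach ($rows as $row) { /* hypothetical remote API call */ }
    $iterations++;
    fwrite(STDERR, "processed rows $offset.." . min($offset + $batch, $total) . "\n");
}
```

On a shared virtual host without shell access, the same script can often be scheduled through the host's cron/task panel instead of run by hand.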


What does CLI mean? I'll go look it up. Sorry, I'm a newbie.

I am using an Alibaba virtual host; the runtime environment is PHP 5.5.
