Today I ran into a problem. I had written a program that grabbed the addresses of 150,000 images from the Internet and saved them to a txt file, one image address per line. Now I want to download the images to my local machine, but the download speed is not great. So I used PHP on Linux to simulate multithreading with multiple processes: first split the txt file into 10 files of equal size, each holding 15,000 image addresses, then start 10 copies of the image-saving program; the job should then take only about 1/10 of the original time. I'll post the program below; please take a look and tell me whether there is a better way.
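The split step isn't shown in the program below, so here is a minimal sketch of one way to do it in PHP. The helper name split_img_list and the source path are my own illustrative assumptions; the chunk-file naming matches what save_img() reads later.

    // Hypothetical helper: split one big URL list into $parts smaller files,
    // named image_1_num0_.txt ... image_1_num9_.txt to match save_img() below.
    function split_img_list($src, $parts = 10)
    {
        $urls   = file($src, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        $chunks = array_chunk($urls, (int) ceil(count($urls) / $parts));
        foreach ($chunks as $i => $chunk)
        {
            file_put_contents(ROOTDIRPATH . "/static/image_1_num{$i}_.txt",
                              implode("\n", $chunk) . "\n");
        }
    }

With 150,000 addresses and $parts = 10, each chunk file gets 15,000 lines. Here is the saving program: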
    // Parent: launch one background worker per split file. With $num = 9
    // this starts workers 0 through 9, i.e. ten processes in parallel.
    function for_save_img($num)
    {
        for ($i = 0; $i <= $num; $i++)
        {
            // Discard the output and background each worker with a trailing "&"
            // so the loop does not wait for one worker to finish before
            // starting the next.
            system("/usr/local/bin/php index.php crawl save_img {$i} > /dev/null 2>&1 &");
        }
    }

    // Worker: download every image listed in this worker's split file.
    function save_img($num)
    {
        // Read this worker's URL list into an array, one address per line.
        $img_urllists = ROOTDIRPATH . "/static/image_1_num{$num}_.txt";
        $arr_img_url  = file($img_urllists);
        foreach ($arr_img_url as $imageurl)
        {
            $imageurl = trim($imageurl);
            echo $imageurl;
            $this->benchmark->mark('code_start');
            // Fetch the image and save it locally under the same file name.
            $final_imageurl = "http://www.*****.com/upload/UploadFile/" . $imageurl;
            $img_open = file_get_contents($final_imageurl);
            $ret = file_put_contents(ROOTDIRPATH . '/static/uploadimg/' . $imageurl, $img_open);
            if ($ret)
            {
                echo "Success......";
            }
            $this->benchmark->mark('code_end');
            echo $this->benchmark->elapsed_time('code_start', 'code_end');
        }
    }
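One caveat with the fetch step: file_get_contents() has no timeout, so a single dead URL can hang a worker indefinitely. Below is a sketch of the same download done with cURL and explicit time limits; the helper name fetch_img and the timeout values are illustrative, not part of the original program.

    // Illustrative variant of the fetch step using cURL, so a dead URL
    // times out instead of stalling the worker.
    function fetch_img($url, $dest)
    {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);    // give up connecting after 5 seconds
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // abort the whole transfer after 30 seconds
        $data = curl_exec($ch);
        $ok   = ($data !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
        curl_close($ch);
        return $ok ? (bool) file_put_contents($dest, $data) : false;
    }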
This program runs under the CI (CodeIgniter) framework; from the website root directory in a Linux shell, it is started with: php index.php crawl for_save_img
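For reference, CodeIgniter routes that command to a controller: the crawl segment maps to a Crawl controller class and for_save_img/save_img to its methods, which is also why the code above can use $this->benchmark. A bare skeleton of that class under CI 2.x (the class itself is implied by the original code, not shown in it):

    // CodeIgniter maps "php index.php crawl save_img 3" to Crawl::save_img(3).
    class Crawl extends CI_Controller
    {
        // Default of 9 is an assumption: workers 0..9, ten in all,
        // since the command above passes no argument.
        public function for_save_img($num = 9) { /* launch loop shown above */ }
        public function save_img($num)         { /* download loop shown above */ }
    }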
If you have better suggestions, you are welcome to share them.
Author: Yue Guanqun