While practicing web scraping by crawling a certain "you know what I mean" image-hosting site, I used the urlretrieve function. It was not only slow but also constantly raised errors: either a timeout or the socket error mentioned above.
Many fixes are suggested online, such as adding headers to the request, or calling close() after urllib2.urlopen().read(); none of them worked.
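For reference, the "add headers to the request" workaround mentioned above usually looks like the sketch below (Python 3 syntax; the URL and User-Agent string are illustrative, not from the original post):

```python
import urllib.request

# Hypothetical URL, for illustration only
url = 'http://example.com/image.jpg'

# Some servers reject requests that lack a browser-like User-Agent,
# so a common fix is to attach one to the Request object.
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})

# The request would then be opened as usual, e.g.:
# data = urllib.request.urlopen(req, timeout=10).read()
```

As the post says, this did not help in this particular case, but it is the standard first thing to try.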
Since I didn't want to bother with scrapy or similar libraries, I found a simple and crude workaround:
Open the data stream directly with urllib's URLopener and save it to a file opened in binary write mode:
Reference code snippet (the commented-out lines are the original method being replaced):
import urllib

# urlretrieve: slow and unstable
# urllib.urlretrieve(i, path + '%s.jpg' % imgnum)
urlopener = urllib.URLopener()
# download the image stream
fp = urlopener.open(imageurl)
data = fp.read()
# truncate the file and write in binary mode
f = open(path + '1.jpg', 'w+b')
f.write(data)
f.close()