I previously shared a multithreaded Python web crawler, but that code can only fetch a page's source; if you want to use Python to download files, it may not suit you. I ran into this problem recently while downloading files with Python, eventually solved it, and am posting the code here.
Bulk file downloads in Python
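A simple bulk-download approach loops over a list of URLs, derives a local file name from each URL's path, and saves the response bytes to disk. The following is a minimal sketch using the standard-library `urllib`; the function names, the `dest_dir` parameter, and the example URLs are illustrative assumptions, not code from the original post:

```python
import os
from urllib.request import urlopen
from urllib.parse import urlsplit

def filename_from_url(url):
    # Derive a local file name from the last path segment of the URL.
    return os.path.basename(urlsplit(url).path)

def batch_download(urls, dest_dir="."):
    # Download each URL in turn, saving it under its URL-derived name.
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for url in urls:
        name = filename_from_url(url) or "index.html"  # fall back for bare hosts
        path = os.path.join(dest_dir, name)
        with urlopen(url) as resp, open(path, "wb") as f:
            f.write(resp.read())
        saved.append(path)
    return saved

# Usage (hypothetical URLs):
# batch_download(["http://example.com/a.zip", "http://example.com/b.zip"], "downloads")
```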
Another user's method (Python 2, using `urllib2` and `urlparse`):
```python
import urllib2
from os.path import basename
from urlparse import urlsplit

def url2name(url):
    return basename(urlsplit(url)[2])

def download(url, localFileName=None):
    localName = url2name(url)
    req = urllib2.Request(url)
    r = urllib2.urlopen(req)
    if r.info().has_key('Content-Disposition'):
        # If the response has a Content-Disposition header,
        # we take the file name from it
        localName = r.info()['Content-Disposition'].split('filename=')[1]
        if localName[0] == '"' or localName[0] == "'":
            localName = localName[1:-1]
    elif r.url != url:
        # If we were redirected, take the real file name from the final URL
        localName = url2name(r.url)
    if localFileName:
        # We can force saving the file under a specified name
        localName = localFileName
    f = open(localName, 'wb')
    f.write(r.read())
    f.close()

download(r'URL address of the python file you want to download')
```
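The snippet above relies on the Python 2-only `urllib2` and `urlparse` modules and the long-removed `dict.has_key()`. As a sketch of how the same logic might be ported to Python 3 (the header parsing via `split('filename=')` mirrors the original and is simplistic, not a full RFC 6266 parser):

```python
from os.path import basename
from urllib.parse import urlsplit
from urllib.request import Request, urlopen

def url2name(url):
    # File name from the last path segment of the URL.
    return basename(urlsplit(url).path)

def download(url, local_file_name=None):
    local_name = url2name(url)
    r = urlopen(Request(url))
    disposition = r.headers.get('Content-Disposition')
    if disposition and 'filename=' in disposition:
        # Take the server-suggested name, stripping surrounding quotes.
        local_name = disposition.split('filename=')[1].strip('"\'')
    elif r.url != url:
        # If we were redirected, derive the name from the final URL.
        local_name = url2name(r.url)
    if local_file_name:
        # Caller can force a specific local file name.
        local_name = local_file_name
    with open(local_name, 'wb') as f:
        f.write(r.read())
```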
That is everything this article has to share; readers can test for themselves which method is more efficient.