High-performance crawler solutions:
Multi-process (see the pool sketch below)
Multithreading (see the pool sketch below)
Single-threaded concurrency, implemented with asynchronous non-blocking modules.
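The first two options are usually written with a pool. A minimal sketch using concurrent.futures (the URLs and worker count are illustrative; swap in ProcessPoolExecutor for the multi-process variant):

import requests
from concurrent.futures import ThreadPoolExecutor  # or ProcessPoolExecutor

def fetch(url):
    # each worker blocks on its own request
    response = requests.get(url)
    print(url, len(response.content))

urls = ['http://www.cnblogs.com', 'http://www.bing.com']
with ThreadPoolExecutor(max_workers=2) as pool:
    for url in urls:
        pool.submit(fetch, url)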
Essence: an HTTP request boils down to blocking socket calls:
import socket

sk = socket.socket()

# blocks until the TCP connection is established
sk.connect(('www.cnblogs.com', 80))

sk.sendall(b"GET /wupeiqi HTTP/1.1\r\n...\r\n\r\n")
sk.sendall(b"POST /wupeiqi HTTP/1.1\r\n...\r\n\r\nuser=alex&pwd=123")

# blocks until the response arrives
data = sk.recv(8096)

sk.close()
IO multiplexing: monitoring multiple sockets for state changes.
The three implementations of IO multiplexing (a selectors-based sketch follows the list):
1. select: loops internally to check whether any socket has changed; can watch at most 1024 sockets.
2. poll: loops internally to check whether any socket has changed; no limit on the number of sockets.
3. epoll: is notified of socket changes via callbacks instead of looping; no limit on the number of sockets.
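A minimal sketch using the standard-library selectors module, which picks the best mechanism available (epoll on Linux, falling back to poll/select); the host and timeout are illustrative:

import selectors
import socket

sel = selectors.DefaultSelector()  # EpollSelector on Linux, else poll/select

sk = socket.socket()
sk.setblocking(False)
sk.connect_ex(('www.cnblogs.com', 80))  # non-blocking connect

# watch the socket; it becomes writable once the connection is established
sel.register(sk, selectors.EVENT_WRITE)
for key, events in sel.select(timeout=5):
    print('socket is ready:', key.fileobj)

sel.unregister(sk)
sk.close()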
What is asynchronous non-blocking?
Non-blocking:
Do not wait: the call returns immediately (it may raise an exception, which you catch).
Code:

import socket

sk = socket.socket()
sk.setblocking(False)
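For instance (a minimal sketch; the host is illustrative), connect() on a non-blocking socket returns immediately and raises BlockingIOError while the handshake is still in flight:

import socket

sk = socket.socket()
sk.setblocking(False)
try:
    sk.connect(('www.cnblogs.com', 80))  # does not wait
except BlockingIOError:
    # expected: the connection is still in progress; hand the socket
    # to select/poll/epoll instead of waiting here
    pass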
Asynchronous:
A callback: when a specified state is reached, a particular function is invoked automatically.
How do I write a custom asynchronous non-blocking module?
Essence: sockets + IO multiplexing.
It is built from setblocking(False) on the sockets plus an IO-multiplexing loop:
the crawler sends HTTP requests by, in essence, creating socket objects;
the IO-multiplexing loop listens for changes on those sockets, and once a socket changes we run custom logic (triggering a callback function). A sketch follows.
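Here is a minimal sketch of such a module, assuming HTTP on port 80 and GET /; the class names, hosts, and the 0.05 s select timeout are illustrative. select() reports a socket as writable once its non-blocking connect completes (time to send the request) and as readable once response data arrives (time to fire the callback):

import select
import socket

class HttpRequest:
    def __init__(self, sk, host, callback):
        self.socket = sk
        self.host = host
        self.callback = callback

    def fileno(self):
        # lets select() monitor this object directly
        return self.socket.fileno()

class AsyncRequest:
    def __init__(self):
        self.conn = []        # sockets still waiting for a response
        self.connection = []  # sockets still waiting to connect

    def add_request(self, host, callback):
        sk = socket.socket()
        sk.setblocking(False)
        try:
            sk.connect((host, 80))
        except BlockingIOError:
            pass  # expected: the non-blocking connect is in progress
        request = HttpRequest(sk, host, callback)
        self.conn.append(request)
        self.connection.append(request)

    def run(self):
        # the IO-multiplexing loop
        while self.conn:
            rlist, wlist, _ = select.select(self.conn, self.connection, [], 0.05)
            for w in wlist:
                # writable: connected, send the HTTP request
                w.socket.sendall(("GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % w.host).encode('utf-8'))
                self.connection.remove(w)
            for r in rlist:
                # readable: drain the response, then trigger the callback
                chunks = []
                while True:
                    try:
                        chunk = r.socket.recv(8096)
                    except BlockingIOError:
                        break
                    if not chunk:
                        break
                    chunks.append(chunk)
                r.callback(b''.join(chunks))
                r.socket.close()
                self.conn.remove(r)

def done(content):
    print('received', len(content), 'bytes')

req = AsyncRequest()
for host in ['www.cnblogs.com', 'www.bing.com']:
    req.add_request(host, done)
req.run()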
What is a coroutine?
1. A "micro-thread". It does not exist at the OS level; the programmer creates and controls it: execute one piece of code, then jump elsewhere and execute another piece.
2. Switching back and forth between non-IO operations: lower performance.
3. Switching back and forth when an IO (time-consuming) request is encountered: high performance; this achieves concurrency (essentially using the IO wait time to do other work).
Implementing a coroutine with yield:
def func1():
    print('adsfasdf')
    print('adsfasdf')
    print('adsfasdf')
    yield 1
    print('adsfasdf')
    print('adsfasdf')
    print('adsfasdf')
    yield 2
    yield 3
    yield 4

def func2():
    print('adsfasdf')
    print('adsfasdf')
    print('adsfasdf')
    yield 11
    yield 12
    yield 19

g1 = func1()
g2 = func2()

# manually switch between the two generators: each send() runs
# the generator up to its next yield, then control returns here
g1.send(None)
g1.send(None)
g2.send(None)
Implementing a coroutine with the greenlet module:
from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()   # jump to test2
    print(34)
    gr2.switch()   # jump to test2 again

def test2():
    print(56)
    gr1.switch()   # jump back into test1, right after its first switch
    print(78)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()       # start test1; output: 12 56 34 78
Python's standard library and third-party packages both provide asynchronous IO request modules that are far more convenient to use; under the hood, asynchronous IO requests are still "non-blocking sockets" + "IO multiplexing".
Three common examples:
import asyncio
import requests

@asyncio.coroutine
def fetch_async(func, *args):
    loop = asyncio.get_event_loop()
    future = loop.run_in_executor(None, func, *args)
    response = yield from future
    print(response.url, response.content)

tasks = [
    fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
    fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091'),
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
asyncio+requests
from gevent import monkey
monkey.patch_all()  # must run before requests/ssl are imported

import gevent
import requests

def fetch_async(method, url, req_kwargs):
    print(method, url, req_kwargs)
    response = requests.request(method=method, url=url, **req_kwargs)
    print(response.url, response.content)

# ##### send requests #####
gevent.joinall([
    gevent.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
    gevent.spawn(fetch_async, method='get', url='https://github.com/', req_kwargs={}),
])

# ##### send requests (a pool caps the number of greenlets) #####
# from gevent.pool import Pool
# pool = Pool(None)
# gevent.joinall([
#     pool.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
#     pool.spawn(fetch_async, method='get', url='https://www.github.com/', req_kwargs={}),
# ])
gevent+requests
from twisted.web.client import getPage, defer
from twisted.internet import reactor

def all_done(arg):
    reactor.stop()

def callback(contents):
    print(contents)

d_list = []
url_list = ['http://www.bing.com', 'http://www.baidu.com']
for url in url_list:
    d = getPage(bytes(url, encoding='utf8'))
    d.addCallback(callback)
    d_list.append(d)

# once every page has finished downloading, stop the reactor loop
dlist = defer.DeferredList(d_list)
dlist.addBoth(all_done)

reactor.run()
Twisted Example