Recently I needed to crawl a site whose pages are generated by JavaScript after loading, which ordinary crawler frameworks can't handle, so I thought of setting up a PhantomJS proxy.
There doesn't seem to be a ready-made third-party library for driving PhantomJS from Python (if you know of one, please tell me). After looking around, I found that only pyspider offers a ready-made solution.
After a quick trial, pyspider feels more like a crawler tool built for beginners: like a mother with her child, sometimes meticulous, sometimes nagging.
A lightweight tool ought to be more welcome. I also had a selfish motive: I could keep using my favorite BeautifulSoup instead of learning PyQuery (which pyspider uses to parse HTML), and I wouldn't have to endure the awful experience of writing Python in a browser (laughs).
So I spent an afternoon splitting the PhantomJS proxy part out of pyspider's implementation into a standalone little crawler module. I hope you like it (thanks, binux!).
Preparatory work
Of course you need PhantomJS itself, obviously! (On Linux it's best to run it under supervisord, as in the config sketch after these steps, since PhantomJS must stay running for the whole crawl.)
Start phantomjs_fetcher.js from the project path: phantomjs phantomjs_fetcher.js [port]
Install the Tornado dependency (the tornado.httpclient module is used).
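For the supervisord suggestion above, a minimal program section might look like the sketch below. The program name, project directory, and port 12306 (matching the example further down) are assumptions you should adapt to your own setup.

[program:phantomjs_fetcher]
; keep PhantomJS up and restart it if it dies
command=phantomjs phantomjs_fetcher.js 12306
directory=/path/to/project        ; wherever phantomjs_fetcher.js lives (placeholder)
autostart=true
autorestart=true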
Calling it is super simple:
from tornado_fetcher import Fetcher

# create a fetcher
>>> fetcher = Fetcher(user_agent='phantomjs',                    # browser User-Agent to simulate
                      phantomjs_proxy='http://localhost:12306',  # PhantomJS address
                      poolsize=10,                               # maximum number of httpclients
                      async=False)                               # synchronous or asynchronous
# connect to the PhantomJS proxy and render the JS!
>>> fetcher.phantomjs_fetch(url)
# run an extra JS script after rendering succeeds (note: wrap it in a function!)
>>> fetcher.phantomjs_fetch(url, js_script='function() { setTimeout("window.scrollTo(0, 100000)", 1000); }')
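And here is roughly how the result can be fed to BeautifulSoup, as planned above. This is a minimal sketch under assumptions: that the synchronous call returns a dict whose 'content' field holds the rendered HTML (as pyspider's fetcher results do), and the example URL and link extraction are just placeholders.

from bs4 import BeautifulSoup
from tornado_fetcher import Fetcher

fetcher = Fetcher(user_agent='phantomjs',
                  phantomjs_proxy='http://localhost:12306',
                  poolsize=10,
                  async=False)   # synchronous mode: the call returns the result directly

# fetch a JS-rendered page through the PhantomJS proxy
result = fetcher.phantomjs_fetch('http://example.com/')

# assumption: the rendered HTML is in result['content']
soup = BeautifulSoup(result['content'], 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))

Synchronous mode is the simplest way to try it out; asynchronous mode hands the requests to Tornado's IOLoop instead.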