How to write a web crawler in PHP?
1. Don't tell me PHP isn't suited to this. I don't want to learn a new language just to write a crawler, and I know it can be done.
2. I also have the basics of PHP down, I'm familiar with data structures and algorithms, and I have general networking knowledge, such as the TCP/IP protocol.
3. Can you recommend a specific book or online article?
4. Could I be greedy and ask for source code too?
Thank you!
Reply content:
- pcntl_fork or swoole_process for multi-process concurrency. At about 500 ms per page, 200 processes can crawl roughly 400 pages per second (a fork sketch follows this list).
- curl for fetching individual pages, with cookies set to handle simulated login (see the combined curl + simple_html_dom example after the list).
- simple_html_dom for page parsing and DOM handling.
- If you need to emulate a real browser, use CasperJS, wrapped behind a service interface with the swoole extension so the PHP layer can invoke it (a sketch follows the list).

The crawler system here at Duowan (多玩) is built on this stack and crawls tens of millions of pages per day.
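A minimal sketch of the pcntl_fork pattern, assuming a CLI PHP build with the pcntl extension; the URL list, worker count, and fetch step are placeholders rather than the production setup described above:

```php
<?php
// Minimal multi-process sketch using pcntl_fork (CLI PHP only).
// A real system would pull from a shared queue instead of pre-chunking.
$urls    = ['https://example.com/1', 'https://example.com/2', 'https://example.com/3'];
$workers = 4; // the answer above runs ~200 in production

foreach (array_chunk($urls, (int) ceil(count($urls) / $workers)) as $chunk) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {
        // Child process: crawl its chunk, then exit.
        foreach ($chunk as $url) {
            $html = file_get_contents($url); // swap in the curl fetch below
            // ... parse and persist $html ...
        }
        exit(0);
    }
}

// Parent process: reap every finished child.
while (pcntl_waitpid(-1, $status) > 0) {
}
```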
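And a sketch of the curl + simple_html_dom combination, assuming the simplehtmldom library file is on hand; the login URL, form fields, and CSS selector are invented for illustration:

```php
<?php
// Simulated login with a curl cookie jar, then DOM parsing.
require 'simple_html_dom.php'; // from the simplehtmldom project

$cookieJar = '/tmp/crawler_cookies.txt';

// 1. POST the login form; curl writes the session cookie to the jar.
$ch = curl_init('https://example.com/login');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(['user' => 'me', 'pass' => 'secret']),
    CURLOPT_COOKIEJAR      => $cookieJar,
    CURLOPT_FOLLOWLOCATION => true,  // follow the redirect after login
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);

// 2. Fetch a protected page, replaying the stored cookie.
$ch = curl_init('https://example.com/members');
curl_setopt_array($ch, [
    CURLOPT_COOKIEFILE     => $cookieJar,
    CURLOPT_RETURNTRANSFER => true,
]);
$html = curl_exec($ch);
curl_close($ch);

// 3. Parse the result with simple_html_dom.
$dom = str_get_html($html);
foreach ($dom->find('div.item a') as $link) {
    echo $link->plaintext, ' -> ', $link->href, "\n";
}
```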
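For the CasperJS-behind-a-service idea, one plausible shape (my assumption, not necessarily how the production system is wired) is a small Swoole HTTP server that shells out to a hypothetical render.js CasperJS script and returns the rendered HTML:

```php
<?php
// Requires the swoole extension and a CasperJS install. render.js is a
// hypothetical CasperJS script that prints the rendered page to stdout.
$server = new Swoole\Http\Server('127.0.0.1', 9501);

$server->on('request', function ($request, $response) {
    $url = $request->get['url'] ?? '';
    if ($url === '') {
        $response->status(400);
        $response->end("missing ?url= parameter\n");
        return;
    }
    // Shell out to CasperJS and hand the rendered HTML back to the caller.
    $html = shell_exec('casperjs render.js ' . escapeshellarg($url));
    $response->end($html ?: '');
});

$server->start();
```

The crawler layer can then fetch rendered pages with an ordinary HTTP call, e.g. file_get_contents('http://127.0.0.1:9501/?url=' . urlencode($target)).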
You need this: Goutte, a simple PHP web scraper (FriendsOfPHP/Goutte on GitHub).
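A quick taste of Goutte, installable via Composer; the URL and selector below are placeholders:

```php
<?php
// Goutte wraps Guzzle and Symfony's BrowserKit/DomCrawler.
require 'vendor/autoload.php';

use Goutte\Client;

$client  = new Client();
$crawler = $client->request('GET', 'https://example.com/news');

// Filter by CSS selector and walk the matched nodes.
$crawler->filter('h2.title a')->each(function ($node) {
    echo $node->text(), ' -> ', $node->attr('href'), "\n";
});
```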
USTC Spider
That one is written in PHP: it crawls the target site at intervals, writes the results to local files, and then serves pages straight from those files. A content crawler is not hard to implement in PHP; as the answers above said, curl and Selenium can handle almost any task. If you also need to simulate user interaction, though, it's best to add CasperJS. For a book, see "Webbots, Spiders, and Screen Scrapers" by Michael Schrenk. This afternoon I wrote a script that grabs posts matching a keyword from a Douban group.
It's very rough, since I'm just starting to learn.
One problem is that I keep getting banned by the site, and I'm still figuring out how to get around that...
It's also too slow, being single-threaded. The top-voted answer here looks good; I plan to keep reworking mine along those lines.
PHP simulated login to my school's educational-administration system: the test shows "login successful", but the page never redirects afterwards.
The simplest crawler can be implemented with just regular expressions plus file_get_contents (a toy example below).
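To make that last reply concrete, here's a toy regex + file_get_contents crawler; the pattern only handles simple double-quoted hrefs, so treat it as a learning exercise rather than production code:

```php
<?php
// Fetch a page and extract links with a regular expression.
// Regexes are brittle against real-world HTML; fine for learning only.
$html = file_get_contents('https://example.com/');

if (preg_match_all('/<a\s[^>]*href="([^"]+)"[^>]*>(.*?)<\/a>/is', $html, $m)) {
    foreach ($m[1] as $i => $href) {
        echo strip_tags($m[2][$i]), ' -> ', $href, "\n";
    }
}
```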