Web page capturing is like what a search engine does: it automatically crawls content from other servers. Below are some common PHP approaches I have collected; let's take a look at them.
To capture content from a web page you have to parse its DOM tree, find the specified node, and then extract the content you need. Doing this by hand is a bit cumbersome, so I have summarized several common and easy-to-use web scraping frameworks. If you are familiar with jQuery selectors, these frameworks will feel quite simple.
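Before reaching for a framework, the parse-and-select flow can be sketched with PHP's built-in DOMDocument and DOMXPath alone. This is only a minimal sketch; the HTML string below stands in for a page fetched from a server:

```php
<?php
// Parse an HTML document and pick out specific nodes with XPath,
// using only PHP's bundled dom extension (no third-party framework).

function grab_titles(string $html): array
{
    $doc = new DOMDocument();
    // Real-world HTML is rarely valid; suppress parser warnings.
    @$doc->loadHTML($html);
    $xpath = new DOMXPath($doc);

    $titles = [];
    // Select every <h2> that sits inside an <article> element.
    foreach ($xpath->query('//article/h2') as $node) {
        $titles[] = trim($node->textContent);
    }
    return $titles;
}

// Stand-in for fetched page content.
$page = '<article><h2>First post</h2></article><article><h2>Second post</h2></article>';
print_r(grab_titles($page)); // prints the two h2 titles
```

The frameworks below wrap exactly this kind of work behind jQuery-style selectors.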
1. Ganon
Project address: http://code.google.com/p/ganon/
Document: http://code.google.com/p/ganon/w/list
Test: grab all p elements on my site's home page whose class attribute is "focus" and output their class values (reconstructed sketch; the URL is a placeholder for the author's site).
<?php
include 'ganon.php';
$html = file_get_dom('http://www.example.com/');
foreach ($html('p[class="focus"]') as $element) {
    echo $element->class, "\n";
}
2. phpQuery
Project address: http://code.google.com/p/phpquery/
Document: https://code.google.com/p/phpquery/wiki/Manual
Test: grab the article tag elements on my site's home page and output the HTML of each one's h2 tag (reconstructed sketch; the URL is a placeholder for the author's site).
<?php
require 'phpQuery/phpQuery.php';
phpQuery::newDocumentFile('http://www.example.com/');
foreach (pq('article') as $article) {
    echo pq($article)->find('h2')->html() . "\n";
}
3. Simple-Html-Dom
Address: http://simplehtmldom.sourceforge.net/
Document: http://simplehtmldom.sourceforge.net/manual.htm
Test: capture all links on the home page of my website
<?php
include 'simple_html_dom.php';
// The URL is a placeholder for the author's site
$html = file_get_html('http://www.example.com/');
// Find all images
// foreach ($html->find('img') as $element)
//     echo $element->src . "\n";
// Find all links
foreach ($html->find('a') as $element)
    echo $element->href . "\n";
4. Snoopy
Project address: http://sourceforge.net/projects/snoopy/
Document: the README bundled with the Snoopy download
Test: capture the homepage of my website
<?php
include 'Snoopy.class.php';
$snoopy = new Snoopy;
// The URL is a placeholder for the author's site
$url = 'http://www.example.com/';
$snoopy->fetch($url);              // fetch the whole page
echo $snoopy->results;             // display the result
// echo $snoopy->fetchtext($url);  // get text content only (HTML stripped)
// $snoopy->fetchlinks($url);      // get the links
// $snoopy->fetchform($url);       // get the form
5. Manually write a crawler
If your coding skills are up to it, you can write your own web crawler to capture pages. There are plenty of articles on the Internet introducing this approach, so I will not repeat them here; if you are interested, search Baidu for "php web crawler" to learn more.
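As a starting point, the core of a hand-written crawler is small: fetch a page, extract its links, and keep a queue of URLs still to visit. Below is a minimal sketch using only PHP's standard library; robots.txt handling, rate limiting, and error recovery are deliberately left out, and the start URL would be your own.

```php
<?php
// Pull all link targets out of an HTML string, resolving simple
// relative paths against a base URL.
function extract_links(string $html, string $base): array
{
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // tolerate messy real-world markup

    $links = [];
    foreach ($doc->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        if ($href !== '' && $href[0] !== '#') {
            $links[] = preg_match('#^https?://#', $href)
                ? $href
                : rtrim($base, '/') . '/' . ltrim($href, '/');
        }
    }
    return array_values(array_unique($links));
}

// Breadth-first crawl: visit each URL once, queue up newly found links,
// and stop after a fixed number of pages.
function crawl(string $start, int $maxPages = 10): array
{
    $queue = [$start];
    $seen  = [];
    while ($queue && count($seen) < $maxPages) {
        $url = array_shift($queue);
        if (isset($seen[$url])) {
            continue;
        }
        $seen[$url] = true;

        $html = @file_get_contents($url);
        if ($html === false) {
            continue; // skip unreachable pages
        }
        foreach (extract_links($html, $url) as $link) {
            if (!isset($seen[$link])) {
                $queue[] = $link;
            }
        }
    }
    return array_keys($seen);
}
```

The frameworks in the sections above do essentially this, plus the selector sugar on top.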
Address:
Reprinted at will, but please attach the article address :-)