Building on the previous two posts:
cURL-based data collection: get_html(), a single-page fetch function
cURL-based data collection: get_htmls(), a parallel multi-page fetch function
we can already fetch the HTML documents we need; the next step is to process those documents and extract the data we want to collect.
There is no strict parsing class for HTML the way there is for XML, because HTML documents are full of loosely written tags, so you have to lean on helper classes instead. simple_html_dom is a parsing class that offers jQuery-like operations on HTML documents; it makes the data easy to reach, but it is slow, and it is not our focus here. I mainly use regular expressions to match the data I need to collect, which gets at the information quickly.
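As a quick illustration (my own sketch, not code from the original posts, with a made-up $html string), here is how a regular expression can pull a single value, the page title, out of an HTML document:

<?php
// Minimal sketch: extract the <title> of a page with a regular expression.
// The sample $html string here is invented for demonstration purposes.
$html = '<html><head><title>Example Page</title></head><body></body></html>';

// "i" makes the match case-insensitive; [^<]* stops at the closing tag.
if (preg_match('!<title>([^<]*)</title>!i', $html, $matches)) {
    echo $matches[1], "\n"; // Example Page
}

The same idea, generalized with preg_match_all and error reporting, is what the two helper functions below provide.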
Since get_html() can check the data it returns but get_htmls() cannot, I wrote the following two functions to make matching and calling more convenient:
Copy the code as follows:
function get_matches($pattern, $html, $err_msg, $multi = false, $flags = 0, $offset = 0){
    if(!$multi){
        if(!preg_match($pattern, $html, $matches, $flags, $offset)){
            echo $err_msg . " Error message: " . get_preg_err_msg() . "\n";
            return false;
        }
    }else{
        if(!preg_match_all($pattern, $html, $matches, $flags, $offset)){
            echo $err_msg . " Error message: " . get_preg_err_msg() . "\n";
            return false;
        }
    }
    return $matches;
}
function get_preg_err_msg(){
    $error_code = preg_last_error();
    switch($error_code){
        case PREG_NO_ERROR:
            $err_msg = 'PREG_NO_ERROR';
            break;
        case PREG_INTERNAL_ERROR:
            $err_msg = 'PREG_INTERNAL_ERROR';
            break;
        case PREG_BACKTRACK_LIMIT_ERROR:
            $err_msg = 'PREG_BACKTRACK_LIMIT_ERROR';
            break;
        case PREG_RECURSION_LIMIT_ERROR:
            $err_msg = 'PREG_RECURSION_LIMIT_ERROR';
            break;
        case PREG_BAD_UTF8_ERROR:
            $err_msg = 'PREG_BAD_UTF8_ERROR';
            break;
        case PREG_BAD_UTF8_OFFSET_ERROR:
            $err_msg = 'PREG_BAD_UTF8_OFFSET_ERROR';
            break;
        default:
            return 'Unknown error!';
    }
    return $err_msg . ':' . $error_code;
}
You can call it like this:
Copy the code as follows:
$url = 'http://www.baidu.com';
$html = get_html($url);
$matches = get_matches('!<a[^<]+</a>!', $html, 'No link found', true);
if($matches){
    var_dump($matches);
}
Or like this:
Copy the code as follows:
$urls = array('http://www.baidu.com', 'http://www.hao123.com');
$htmls = get_htmls($urls);
foreach($htmls as $html){
    $matches = get_matches('!<a[^<]+</a>!', $html, 'No link found', true);
    if($matches){
        var_dump($matches);
    }
}
With this you can get the information you need. Whether you collect one page or many, PHP still ultimately processes one page at a time; and because get_matches() returns either the match array or false, you can tell whether the correct data was obtained. Since regular expressions often run into the backtracking-limit problem, get_preg_err_msg() was added to report what the regex engine complained about.
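To see where get_preg_err_msg() earns its keep, here is a sketch (my own, with an artificially tiny limit set on purpose) that forces the backtracking problem mentioned above so the error code becomes observable:

<?php
// Sketch: push PCRE past its backtracking limit deliberately.
// The tiny limit and the catastrophic pattern are both for demonstration only.
ini_set('pcre.backtrack_limit', '10');

$subject = str_repeat('a', 100);
// Nested quantifiers on a subject that can never match force heavy backtracking.
$result = preg_match('/(a+)+b/', $subject, $matches);

if ($result === false && preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
    echo "regex failed: backtrack limit exceeded\n";
}

In that situation preg_match() returns false without any match, and only preg_last_error() reveals why, which is exactly the message get_matches() prints through get_preg_err_msg().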
When collecting data you usually scrape a list page first, then follow its links to collect the content pages, and sometimes go deeper still. The nested loops this requires multiply quickly, and keeping the control flow straight starts to feel hopeless. So: can we separate the list-page collection code from the content-page collection code, or from even deeper levels, and simplify away the loops?