PHP Thief Program Example Code - PHP Tutorial

PHP thief program example code. A "thief" program uses PHP functions to fetch content from other websites, extracts the parts we want with regular expressions, and saves them to our local database. Below I introduce how to implement such a PHP thief program; refer to it if you need it.

The file_get_contents function is critical to the data-collection process below, so let's first look at its syntax.

string file_get_contents ( string $filename [, bool $use_include_path = false [, resource $context [, int $offset = -1 [, int $maxlen ]]]] )

Like file(), except that file_get_contents() reads the file into a single string. Reading starts at the position given by the offset parameter and continues for up to maxlen bytes. On failure, file_get_contents() returns FALSE.

The file_get_contents() function is the preferred way to read a file's contents into a string. If the operating system supports it, memory-mapping techniques are used to enhance performance.
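To make the offset and maxlen parameters concrete, here is a small self-contained sketch that reads a slice out of a temporary local file (the file contents are made up for illustration), so no network access is needed:

```php
<?php
// Demonstrate the $offset and $maxlen parameters of file_get_contents()
// on a temporary local file.
$file = tempnam(sys_get_temp_dir(), 'fgc');
file_put_contents($file, "Hello, thief program!");

// Read 5 bytes starting at byte offset 7
$slice = file_get_contents($file, false, null, 7, 5);
echo $slice; // prints "thief"

unlink($file);
```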

Example

The code is as follows:

<?php
$homepage = file_get_contents('http://www.hzhuti.com/');
echo $homepage;
?>

Now $homepage holds the content of the fetched page, ready to be saved. With that covered, let's get started.

Example

The code is as follows:

<?php
// Fetch the raw contents of a page
function fetch_urlpage_contents($url) {
    $c = file_get_contents($url);
    return $c;
}

// Obtain the matching content between $begin and $end
function fetch_match_contents($begin, $end, $c)
{
    $begin = change_match_string($begin);
    $end   = change_match_string($end);
    $p = "#{$begin}(.*){$end}#isU"; // eregi() was removed in PHP 7, so use preg_match instead
    if (preg_match($p, $c, $rs)) {
        return $rs[1];
    } else {
        return "";
    }
}

// Escape a regular-expression string
function change_match_string($str) {
    // Note: this is a simple escape; preg_quote() is more thorough
    $old = array('/', '$', '?');
    $new = array('\/', '\$', '\?');
    $str = str_replace($old, $new, $str);
    return $str;
}

// Collect a web page
function pick($url, $ft, $th)
{
    $c = fetch_urlpage_contents($url);
    $rs = array();
    foreach ($ft as $key => $value)
    {
        $rs[$key] = fetch_match_contents($value["begin"], $value["end"], $c);
        if (isset($th[$key]) && is_array($th[$key]))
        {
            foreach ($th[$key] as $old => $new)
            {
                $rs[$key] = str_replace($old, $new, $rs[$key]);
            }
        }
    }
    return $rs;
}
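The begin/end extraction idea above can be sketched on an inline HTML string, so it runs without any network access. The HTML below is made up for illustration, and preg_quote() is used here in place of the article's hand-rolled escaping helper:

```php
<?php
// Minimal sketch of begin/end interception with an ungreedy,
// case-insensitive pattern built from two marker strings.
$html = '<html><head><title>Zhongshan News</title></head><body>hello</body></html>';

$begin = preg_quote('<title>', '#');
$end   = preg_quote('</title>', '#');
if (preg_match("#{$begin}(.*){$end}#isU", $html, $m)) {
    echo $m[1]; // prints "Zhongshan News"
}
```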

$url = "http://www.bkjia.com"; // address to be collected
// The original begin/end marker strings were stripped by the HTML;
// <title>/</title> and <body>/</body> are assumed here for illustration
$ft["title"]["begin"] = "<title>"; // start of the intercepted section
$ft["title"]["end"] = "</title>"; // end of the intercepted section
$th["title"]["Zhongshan"] = "Guangdong"; // replacement within the intercepted part

$ft["body"]["begin"] = "<body>"; // start of the intercepted section
$ft["body"]["end"] = "</body>"; // end of the intercepted section
$th["body"]["Zhongshan"] = "Guangdong"; // replacement within the intercepted part

$rs = pick($url, $ft, $th); // start collecting

echo $rs["title"];
echo $rs["body"]; // output
?>
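The introduction mentions saving the collected content to a local database. As a sketch, assuming the SQLite PDO driver is available, a collected title/body pair could be stored like this (the table and column names are made up for illustration):

```php
<?php
// Store a collected title/body pair in a local SQLite database.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE pages (title TEXT, body TEXT)');

$stmt = $db->prepare('INSERT INTO pages (title, body) VALUES (?, ?)');
$stmt->execute(array('Example title', 'Example body'));

echo $db->query('SELECT COUNT(*) FROM pages')->fetchColumn(); // prints 1
```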

The following code extracts all hyperlinks, email addresses, or other specific content from a web page.

The code is as follows:

<?php
// Fetch the raw contents of a page
function fetch_urlpage_contents($url) {
    $c = file_get_contents($url);
    return $c;
}

// Obtain all content matching the $begin/$end markers
function fetch_match_contents($begin, $end, $c)
{
    $begin = change_match_string($begin);
    $end   = change_match_string($end);
    $p = "#{$begin}(.*){$end}#iU"; // i: case-insensitive; U: disables greedy matching
    if (preg_match_all($p, $c, $rs)) {
        return $rs;
    } else {
        return "";
    }
}

// Escape a regular-expression string
function change_match_string($str) {
    // Note: this is a simple escape; preg_quote() is more thorough
    $old = array('/', '$', '?');
    $new = array('\/', '\$', '\?');
    $str = str_replace($old, $new, $str);
    return $str;
}

// Collect a web page
function pick($url, $ft, $th)
{
    $c = fetch_urlpage_contents($url);
    $rs = array();
    foreach ($ft as $key => $value)
    {
        $rs[$key] = fetch_match_contents($value["begin"], $value["end"], $c);
        if (isset($th[$key]) && is_array($th[$key]))
        {
            foreach ($th[$key] as $old => $new)
            {
                $rs[$key] = str_replace($old, $new, $rs[$key]);
            }
        }
    }
    return $rs;
}

$url = "http://www.bkjia.com"; // address to be collected
// The original begin marker was stripped by the HTML; '<a ' is assumed here
$ft["a"]["begin"] = '<a '; // start of the intercepted section
$ft["a"]["end"] = '>'; // end of the intercepted section
$th = array(); // no replacements needed in this example

$rs = pick($url, $ft, $th); // start collecting

print_r($rs["a"]);

?>
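For the specific case of pulling out hyperlinks, a direct preg_match_all over the href attributes can be sketched on inline sample HTML (the URLs below are made up), with no fetching required:

```php
<?php
// Extract every href value from a chunk of HTML with preg_match_all.
$html = '<p><a href="http://a.example/">A</a> and <a href="http://b.example/">B</a></p>';

preg_match_all('#<a\s[^>]*href=["\']([^"\']+)["\']#i', $html, $m);
print_r($m[1]); // the captured URLs
```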

Tips: Requests made with file_get_contents are easy for target sites to block, so we can use curl to simulate a real user visiting the website, which is much more capable than the approach above. file_get_contents() is less efficient and fails frequently; curl is more efficient and supports parallel requests, but the curl extension must be enabled. The steps to enable the curl extension are as follows:

1. Copy the three files php_curl.dll, libeay32.dll, and ssleay32.dll from the PHP folder into system32;

2. In php.ini (in the C:\WINDOWS directory), remove the semicolon in front of extension=php_curl.dll;

3. Restart Apache or IIS.
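After following the steps above, it is worth verifying that the extension actually loaded before calling any curl_* function. A minimal check:

```php
<?php
// Sanity check that the curl extension is loaded.
if (extension_loaded('curl') && function_exists('curl_init')) {
    $msg = "curl is available";
} else {
    $msg = "curl is NOT available; enable php_curl in php.ini";
}
echo $msg;
```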

A simple page-capture function that can spoof the Referer and User-Agent headers:

The code is as follows:

<?php
// Capture a specified page
function GetSources($Url, $User_Agent = '', $Referer_Url = '')
{
    // $Url: the URL of the page to be crawled
    // $User_Agent: the User-Agent string to send, such as "baiduspider" or "googlebot"
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_USERAGENT, $User_Agent);
    curl_setopt($ch, CURLOPT_REFERER, $Referer_Url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $MySources = curl_exec($ch);
    curl_close($ch);
    return $MySources;
}

$Url = "http://www.bkjia.com"; // address of the content to fetch
$User_Agent = "baiduspider+(+http://www.baidu.com/search/spider.htm)";
$Referer_Url = 'http://www.jb51.net/';
echo GetSources($Url, $User_Agent, $Referer_Url);
?>


