PHP+HTML+JavaScript+CSS: Implementing a Simple Crawler in PHP

Source: Internet
Author: User

To develop a crawler, you first need to know what your crawler is for. I want to search different websites for articles containing specific keywords and collect links to those articles so I can read them quickly.

Following my usual habit, I start by writing the interface, which helps clarify the idea.

1. It visits different websites, so we need a URL input box.

2. It looks for articles with specific keywords, so we need an article title input box.

3. It collects article links, so we need a container to display the search results.

<div class="jumbotron" id="mainJumbotron">
 <div class="panel panel-default">

  <div class="panel-heading">Article URL crawl</div>

  <div class="panel-body">
   <div class="form-group">
    <label for="article_title">Article title</label>
    <input type="text" class="form-control" id="article_title" placeholder="Article title">
   </div>
   <div class="form-group">
    <label for="website_url">Website URL</label>
    <input type="text" class="form-control" id="website_url" placeholder="Website URL">
   </div>

   <button type="submit" class="btn btn-default">Crawl</button>
  </div>
 </div>
 <div class="panel panel-default">

  <div class="panel-heading">Article URL</div>

  <div class="panel-body">
   <div id="article_url"></div>
  </div>
 </div>
</div>

Drop the code in, add some style adjustments of your own, and the interface is complete:

Next comes the functionality, which I implement in PHP. The first step is to fetch the site's HTML code. There are many ways to get HTML, which I won't go into here; I use curl. Pass in the site's URL and you get back its HTML:

private function get_html($url) {

 $ch = curl_init();

 $timeout = 10;

 curl_setopt($ch, CURLOPT_URL, $url);

 curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

 curl_setopt($ch, CURLOPT_ENCODING, 'gzip');

 curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36');

 curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);

 $html = curl_exec($ch);

 curl_close($ch);

 return $html;

}

Although you now have the HTML code, you will soon run into a problem: encoding. A mismatched encoding can make your later matching come up empty, so here we convert all fetched HTML content to UTF-8:

$coding = mb_detect_encoding($html);

if ($coding != "UTF-8" || !mb_check_encoding($html, "UTF-8")) {

 $html = mb_convert_encoding($html, 'UTF-8', 'GBK,UTF-8,ASCII');

}

With the site's HTML in hand, the next step toward the article URLs is to match all the <a> tags on the page. This calls for a regular expression; after many tests I arrived at a fairly reliable one that catches an <a> tag no matter how complicated its contents are (this is the most critical step):

$pattern = '|<a[^>]*>(.*)</a>|isU';

preg_match_all($pattern, $html, $matches);

The match result lands in $matches, a multidimensional array that looks roughly like this:

array(2) {
 [0]=>
 array(*) {
  [0]=>
  string(*) "the full <a> tag"
  ...
 }
 [1]=>
 array(*) {
  [0]=>
  string(*) "the text content of the <a> tag at the same index above"
  ...
 }
}
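As a minimal, self-contained sketch (the sample HTML here is my own, not from the article), the array above can be produced and walked like this:

```php
<?php
// Hypothetical sample page; the pattern is the one from the article.
// The ungreedy modifier (U) makes each match stop at its own </a>.
$html = '<p><a href="/first">First post</a> and <a href="/second">Second post</a></p>';

$pattern = '|<a[^>]*>(.*)</a>|isU';
preg_match_all($pattern, $html, $matches);

// $matches[0][$i] is the full <a> tag, $matches[1][$i] its text content.
foreach ($matches[1] as $i => $text) {
    $tag = $matches[0][$i];
    // e.g. keep only links whose text contains the searched article title:
    // if (strpos($text, $article_title) !== false) { ... }
}
```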

Once you have this data, everything else is up to you: traverse the array, find the <a> tags you want, then read their attributes, handling them however you like. The following class makes working with <a> tags more convenient:

$dom = new DOMDocument();

@$dom->loadHTML($a); // $a is one of the <a> tags

$xpath = new DOMXPath($dom);

$hrefs = $xpath->evaluate('//a');

for ($i = 0; $i < $hrefs->length; $i++) {

 $href = $hrefs->item($i);

 $url = $href->getAttribute('href'); // here we get the href attribute of the <a> tag

}
 

Of course, this is only one approach; you can also use regular expressions to match the information you want and play new tricks with the data.
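For example, here is a hedged sketch of pulling the href attribute out of one matched tag with a regular expression instead of DOMDocument; the pattern and the sample tag are my own assumptions, not from the article:

```php
<?php
// One full <a> tag, e.g. an entry from $matches[0] above.
$a = '<a class="web_url" href="https://example.com/post/1" target="_blank">Title</a>';

// Capture whatever sits between the quotes after href=.
if (preg_match('/href\s*=\s*["\']([^"\']+)["\']/i', $a, $m)) {
    $url = $m[1]; // https://example.com/post/1
}
```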

Once you have matched the results you want, the next step, of course, is to send them back to the front end for display. The interface is already written; the front end fetches the data with JS and uses jQuery to add the content dynamically:

var website_url = 'your interface address';
$.getJSON(website_url, function (data) {
 if (data) {
  if (data.text == '') {
   $('#article_url').html('<div><p>No article links found</p></div>');
   return;
  }
  var string = '';
  var list = data.text;
  for (var j in list) {
   var content = list[j].url_content;
   for (var i in content) {
    if (content[i].title != '') {
     string += '<div class="item">' +
      '<em>[<a href="http://' + list[j].website.web_url + '" target="_blank">' + list[j].website.web_name + '</a>]</em>' +
      '<a href="' + content[i].url + '" target="_blank" class="web_url">' + content[i].title + '</a>' +
      '</div>';
    }
   }
  }
  $('#article_url').html(string);
 }
});
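The JavaScript above assumes the PHP interface returns JSON in a particular shape. The field names below (text, website, web_url, web_name, url_content, url, title) are read off the jQuery code; everything else is a minimal sketch of what the endpoint might emit, not the article's actual implementation:

```php
<?php
// Hypothetical response; field names follow the jQuery code above,
// the sample values are placeholders.
$response = [
    'text' => [
        [
            'website' => [
                'web_url'  => 'example.com',
                'web_name' => 'Example Site',
            ],
            'url_content' => [
                ['url' => 'https://example.com/post/1', 'title' => 'Matched article'],
            ],
        ],
    ],
];

header('Content-Type: application/json; charset=utf-8');
echo json_encode($response);
```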

Finally, the finished effect (the original article shows a screenshot here).

That is the entire content of this article; I hope it helps you in your learning.
