PHP + HTML + JavaScript + CSS: developing a simple crawler (PHP tutorial)

Source: Internet
Author: User



To develop a crawler, you first need to know what your crawler will be used for. Mine will search different websites for articles containing a specific keyword and collect their links, so that I can read them quickly.

Out of personal habit, I start by writing the interface, which helps clarify the idea.

1. Visit different websites: we need a URL input box.

2. Search for articles containing a specific keyword: we need an input box for the article title / keyword.

3. Get the article links: we need a container to display the search results.

[Interface mock-up, "Article URL crawl": an article title field, a site URL field, a "Crawl" button, and an "Article URL" results list]

Go straight to the code, add a few style tweaks of your own, and the interface is done.

Next comes the functionality, which I wrote in PHP. The first step is to fetch the website's HTML. There are many ways to get the HTML code and I won't go through them all here; I use cURL, which returns the HTML for a given site URL:

private function get_html($url) {
    $ch = curl_init();
    $timeout = 10;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_ENCODING, 'gzip');
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36');
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $html = curl_exec($ch);
    curl_close($ch);
    return $html;
}
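A small optional check that can help while testing a fetcher like this (my own addition, not from the article): curl_getinfo() reports the HTTP status code, so you can tell an empty page apart from a failed request. A minimal sketch, with a placeholder URL:

// Sketch: verify a cURL fetch before handing the HTML to the parser.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://example.com/');     // placeholder URL, not from the article
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
$html   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);            // e.g. 200 on success
curl_close($ch);
if ($html === false || $status != 200) {
    $html = '';                                             // treat failures as an empty page
}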

Although you now have the HTML, you will quickly run into a problem: character encoding, which can make the matching in the next step fail. Here we convert whatever comes back into UTF-8:

$coding = mb_detect_encoding($html);
if ($coding != "UTF-8" || !mb_check_encoding($html, "UTF-8")) {
    $html = mb_convert_encoding($html, 'UTF-8', 'GBK,UTF-8,ASCII');
}
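Putting the two steps so far together, here is a rough sketch of a single-page crawl step. The wrapper name crawl_page() is my own, and it assumes get_html() from above is callable as a plain function (for example, both living in the same class):

// Sketch: fetch one page and hand back UTF-8 HTML, ready for matching.
// Assumes get_html() from above is in scope; sources are GBK, UTF-8 or ASCII as in the article.
function crawl_page($url) {
    $html = get_html($url);                 // cURL fetch from the first step
    if ($html === false || $html === '') {
        return '';
    }
    $coding = mb_detect_encoding($html);
    if ($coding != "UTF-8" || !mb_check_encoding($html, "UTF-8")) {
        $html = mb_convert_encoding($html, 'UTF-8', 'GBK,UTF-8,ASCII');
    }
    return $html;
}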

With the site's HTML in hand, the next step toward the article URLs is to match every a tag on the page, which calls for a regular expression. After many tests I ended up with a fairly reliable one: no matter how complicated the structure of the a tag is, it will not be missed (this is the most important step):

$pattern = '|<a[^>]*>(.*)</a>|isU';   // matches every <a> tag: [0] => full tag, [1] => tag contents
preg_match_all($pattern, $html, $matches);

The results of the match end up in $matches, a multidimensional array that looks roughly like this:

array(2) {
  [0] => array(*) {
    [0] => string(*) "the full a tag"
    ...
  }
  [1] => array(*) {
    [0] => string(*) "the contents of the a tag at the same index above"
  }
}
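As a small self-contained illustration of what ends up in $matches, here is the pattern as reconstructed above run against a tiny two-link HTML fragment (the fragment itself is made up for the example):

// Demo: the same pattern applied to a small, made-up HTML fragment.
$html = '<div><a href="/a.html" class="x">First article</a>'
      . '<a href="/b.html">Second article</a></div>';
$pattern = '|<a[^>]*>(.*)</a>|isU';
preg_match_all($pattern, $html, $matches);
print_r($matches[0]);   // the full <a> tags
print_r($matches[1]);   // their contents: "First article", "Second article"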

Once you have this data, everything else is up to you: loop over the array, find the a tags you want, and then read the attributes you need from each one. For that, the following built-in classes make working with a tags much more convenient:

$dom = new DOMDocument();
@$dom->loadHTML($a);                      // $a is one of the <a> tags obtained above
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate('//a');
for ($i = 0; $i < $hrefs->length; $i++) {
    $href = $hrefs->item($i);
    $url  = $href->getAttribute('href');  // this gets the href attribute of the a tag
}
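To connect this back to the original goal of finding articles whose title contains a keyword, here is a rough sketch of how the DOMDocument/DOMXPath approach might be applied to a whole page at once; the function name and array keys are my own assumptions for illustration:

// Sketch: extract every <a> tag from a page and keep the ones whose text
// contains the given keyword. Returns an array of ['title' => ..., 'url' => ...].
function find_article_links($html, $keyword) {
    $results = array();
    $dom = new DOMDocument();
    @$dom->loadHTML($html);               // suppress warnings from messy real-world HTML
    $xpath = new DOMXPath($dom);
    $links = $xpath->evaluate('//a');
    for ($i = 0; $i < $links->length; $i++) {
        $link  = $links->item($i);
        $title = trim($link->textContent);
        if ($title != '' && mb_strpos($title, $keyword) !== false) {
            $results[] = array(
                'title' => $title,
                'url'   => $link->getAttribute('href'),
            );
        }
    }
    return $results;
}

// e.g. $articles = find_article_links($html, 'PHP'); gives title/URL pairs ready for the front end.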

Of course, this is only one way to do it; you can also use regular expressions to match the information you want and come up with your own tricks for handling the data.

Once you have fetched and matched the results you want, the next step is of course to return them to the front end for display. The interface has already been written, so the front end uses JS to request the data and jQuery to add the content to the page dynamically:

var website_url = 'your API address';
$.getJSON(website_url, function (data) {
    if (data) {
        if (data.text == '') {
            $('#article_url').html('<p>No link to this article</p>');
            return;
        }
        var string = '';
        var list = data.text;
        for (var j in list) {
            var content = list[j].url_content;
            for (var i in content) {
                if (content[i].title != '') {
                    // NOTE: the markup in the original was lost; content[i].url is assumed
                    // to hold the article link used to rebuild the anchor here.
                    string += '<p><a href="' + content[i].url + '" target="_blank">'
                            + '[' + list[j].website.web_name + '] ' + content[i].title
                            + '</a></p>';
                }
            }
        }
        $('#article_url').html(string);
    }
});
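For completeness, the jQuery code above expects JSON shaped roughly like data.text[j].website.web_name and data.text[j].url_content[i].title. Here is a hedged sketch of how the PHP side might assemble and return that structure; the field layout is inferred from the JavaScript, and the 'url' key is an assumption of mine:

// Sketch of the API response the front end reads; field names follow what the
// jQuery loop accesses (text, website.web_name, url_content, title); 'url' is assumed.
$response = array(
    'text' => array(
        array(
            'website'     => array('web_name' => 'Example site'),   // placeholder site name
            'url_content' => array(
                array('title' => 'Matching article title', 'url' => 'https://example.com/post/1'),
            ),
        ),
    ),
);
header('Content-Type: application/json; charset=utf-8');
echo json_encode($response);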

Finally, the finished result:

That is all for this article; I hope it is helpful for your learning.

