A power tool for writing crawlers


I have written quite a few small crawler programs. Previously, I used C# with Html Agility Pack to get the job done. Because the .NET BCL only provides the "low-level" HttpWebRequest and the "mid-level" WebClient, you still have to write a lot of plumbing code for HTTP operations. Writing C# also means using Visual Studio, a "heavy" tool, so development efficiency stayed low for a long time.

 

Recently, I came across a wonderful language called Groovy: a dynamic language that is fully compatible with Java and adds a large amount of extra syntax. There is also an open-source project called Jsoup, a lightweight library that parses HTML using CSS selectors. Writing crawlers with this combination is a breeze.

 

A script that captures the news titles on the cnblogs home page

Jsoup.connect("http://cnblogs.com").get()
    .select("#post_list > div > div.post_item_body > h3 > a")
    .each { println it.text() }

Output
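To run this snippet as a standalone script, Jsoup has to be on the classpath first. A minimal sketch using Groovy's Grape dependency manager (the version number is an assumption; any recent Jsoup release should work):

@Grab('org.jsoup:jsoup:1.11.3')   // assumed version; pick any recent release
import org.jsoup.Jsoup

// Fetch the home page and print each front-page post title
Jsoup.connect("http://cnblogs.com").get()
    .select("#post_list > div > div.post_item_body > h3 > a")
    .each { println it.text() }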

Capture news details from the cnblogs home page

Jsoup.connect("http://cnblogs.com").get().select("#post_list > div").take(5).each {
    def url = it.select("> div.post_item_body > h3 > a").attr("href")
    def title = it.select("> div.post_item_body > h3 > a").text()
    def description = it.select("> div.post_item_body > p").text()
    def author = it.select("> div.post_item_body > div > a").text()
    def comments = it.select("> div.post_item_body > div > span.article_comment > a").text()
    def view = it.select("> div.post_item_body > div > span.article_view > a").text()

    println ""
    println "News: $title"
    println "link: $url"
    println "description: $description"
    println "author: $author, comment: $comments, read: $view"
}

Output
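In practice, sites can be slow or reject the default client. Jsoup's Connection object lets you set a user agent and a timeout before fetching; a small sketch (the header value and timeout are illustrative, not from the original post):

import org.jsoup.Jsoup   // plus the @Grab line shown earlier

// Configure the request before fetching the document
def doc = Jsoup.connect("http://cnblogs.com")
    .userAgent("Mozilla/5.0")   // some sites reject the default user agent
    .timeout(10000)             // milliseconds
    .get()
println "${doc.select('#post_list > div').size()} posts on the front page"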

Capture the cnblogs feed

new XmlSlurper().parse("http://feed.cnblogs.com/blog/sitehome/rss").with { xml ->
    def title = xml.title.text()
    def subtitle = xml.subtitle.text()
    def updated = xml.updated.text()

    println "feeds"
    println "title -> $title"
    println "subtitle -> $subtitle"
    println "updated -> $updated"

    def entryList = xml.entry.take(3).collect {
        def id = it.id.text()
        def subject = it.title.text()
        def summary = it.summary.text()
        def author = it.author.name.text()
        def published = it.published.text()
        [id, subject, summary, author, published]
    }.each {
        println ""
        println "article -> ${it[1]}"
        println it[0]
        println "author -> ${it[3]}"
    }
}

Output
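Because XmlSlurper exposes the feed as a GPath object tree, filtering entries is just as terse as selecting them. A sketch that keeps only the entries whose title mentions a keyword (the keyword itself is arbitrary):

// GPath filtering: findAll keeps entries matching the closure predicate
new XmlSlurper().parse("http://feed.cnblogs.com/blog/sitehome/rss").with { xml ->
    xml.entry.findAll { it.title.text().contains("Groovy") }.each {
        println "${it.title.text()} -> ${it.id.text()}"
    }
}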

Capture the product category information from MSDN subscriptions

new JsonSlurper().parse(new URL("http://msdn.microsoft.com/en-us/subscriptions/json/GetProductCategories?brand=MSDN&localeCode=en-us")).with { rs ->
    println rs.collect { it.Name }
}

Output
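When a JSON response is unfamiliar, it helps to see its whole shape before picking out fields. groovy.json.JsonOutput can round-trip the parsed object back to indented text; a quick sketch:

import groovy.json.JsonSlurper
import groovy.json.JsonOutput

def url = "http://msdn.microsoft.com/en-us/subscriptions/json/GetProductCategories?brand=MSDN&localeCode=en-us"
def rs = new JsonSlurper().parse(new URL(url))
// Serialize back to JSON and pretty-print it to inspect the structure
println JsonOutput.prettyPrint(JsonOutput.toJson(rs))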

 

Now a word about the code editor. Because Groovy is a dynamic language, a lightweight text editor is enough, and Sublime Text is my recommendation. Its nickname in Chinese, "gao da shang", roughly means "high-end and classy", and judging by the rich features and excellent user experience this small editor delivers, the name is well deserved.

Advantages:

  • Lightweight (the client is about 6 MB)
  • Syntax highlighting for many languages, including Groovy
  • Customizable theme packages (color schemes)
  • Column editing
  • Quick selection, expand selection, and more

Disadvantages:

  • Not free and not open source. Fortunately, the trial version can be used without restriction, though a dialog box occasionally pops up when you save.

Finally, I will share a quick script that captures Soufun's second-hand housing listings.

http://noria.codeplex.com/SourceControl/latest#miles/soufun/soufun.groovy

After capturing and sorting
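The linked script is the real one; purely to illustrate the pattern, here is a hypothetical sketch of a paginated listing scrape in the same Jsoup style (the URL template and selectors below are made up, not Soufun's actual markup):

import org.jsoup.Jsoup   // plus the @Grab line shown earlier

// Hypothetical URL template and selectors -- adapt to the real page structure
(1..3).each { page ->
    Jsoup.connect("http://example.com/listings?page=${page}").get()
        .select(".listing").each {
            def title = it.select(".title a").text()
            def price = it.select(".price").text()
            println "$title | $price"
        }
}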

 

That's all; I hope this is helpful to friends who are interested in crawlers.
