How to crawl news with Nutch and schedule timed updates


Applies to Nutch 1.7, built and run in a Linux environment.

When crawling news sites, three things need attention:

1. The entry URL list must be kept up to date.
2. News pages that have already been crawled should not be crawled again.
3. Control how Nutch re-checks URLs it has already crawled.
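For points 1 and 3, the relevant pieces are the seed list that gets injected and the URL filter that decides which links Nutch accepts. A rough sketch, assuming the standard Nutch 1.x file layout and placeholder URLs:

# urls/seed.txt -- the entry URL list read by "nutch inject";
# refresh this file before every scheduled run so new section pages are picked up
http://news.example.com/
http://news.example.com/latest/

# conf/regex-urlfilter.txt -- regular expressions that decide which URLs are kept;
# restricting the crawl to the news site keeps the re-checking under control
+^http://news\.example\.com/
-.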


Modify nutch-site.xml and add the following configuration:

<!-- How long before a fetched page becomes due for re-fetching, in seconds (the default is 30 days).
     The very large value here (roughly 13 years) means crawled news pages are effectively never re-fetched. -->
<property>
	<name>db.fetch.interval.default</name>
	<value>420480000</value>
	<description>The default number of seconds between re-fetches of a page (30 days).
	</description>
</property>
<!-- After how long every page in the crawldb is forcibly re-fetched, in seconds (20 years here). -->
<property>
	<name>db.fetch.interval.max</name>
	<value>630720000</value>
	<description>The maximum number of seconds between re-fetches of a page
	(days). After this period every page in the db will be re-tried, no
	matter what its status is.
	</description>
</property>
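To confirm that these intervals are actually applied, the crawldb can be inspected with Nutch's readdb tool. A minimal sketch; the crawldb path matches the script further down, and the article URL is a placeholder:

# Overall crawldb statistics (page counts by status, scores, etc.)
bin/nutch readdb /home/nutch/searchengine/nutch-test/out/crawldb -stats

# Details for a single URL, including its fetch interval and next fetch time
bin/nutch readdb /home/nutch/searchengine/nutch-test/out/crawldb -url http://news.example.com/some-article.html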


Add the following script to a crontab schedule (an example crontab entry is shown after the script).

The shell script below is the key piece that controls the whole crawl.

#!/bin/bash
export JAVA_HOME=/usr/java/jdk1.6.0_45
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/jre/lib/dt.jar:$JAVA_HOME/jre/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

# Set workspace
nutch_work=/home/nutch/searchengine/nutch-test
tmp_dir=$nutch_work/out_tmp
save_dir=$nutch_work/out
solrurl=http://192.168.123.205:8080/solr-4.6.1/core-nutch

# Set parameters
depth=2
threads=200

# ------- Start: inject the (freshly updated) entry URLs into a temporary crawldb -------
$nutch_work/bin/nutch inject $tmp_dir/crawldb $nutch_work/urls

# ----- Loop this block; the number of rounds is set by $depth -----
for ((i=0; i<$depth; i++))
do
	# ----- Step 1: generate a fetch list (first round from the temporary crawldb, later rounds from the permanent one) -----
	if ((i==0))
	then
		$nutch_work/bin/nutch generate $tmp_dir/crawldb $tmp_dir/segments
		segment=`ls -d $tmp_dir/segments/* | tail -1`
	else
		$nutch_work/bin/nutch generate $save_dir/crawldb $save_dir/segments
		segment=`ls -d $save_dir/segments/* | tail -1`
	fi

	# ----- Step 2: fetch the segment -----
	$nutch_work/bin/nutch fetch $segment -threads $threads

	# ----- Step 3: parse the fetched pages -----
	$nutch_work/bin/nutch parse $segment

	# ----- Step 4: update the permanent crawldb; after the first round, -noAdditions keeps newly discovered URLs out -----
	if ((i==0))
	then
		$nutch_work/bin/nutch updatedb $save_dir/crawldb $segment
	else
		$nutch_work/bin/nutch updatedb $save_dir/crawldb $segment -noAdditions
	fi

	# ----- Step 5: invert links into the linkdb -----
	$nutch_work/bin/nutch invertlinks $save_dir/linkdb $segment
done

# ----- Step 6: index the last segment into Solr -----
$nutch_work/bin/nutch solrindex $solrurl $save_dir/crawldb -linkdb $save_dir/linkdb $segment

# ----- Step 7: remove duplicates from the Solr index -----
$nutch_work/bin/nutch solrdedup $solrurl

# ----- Step 8: clean up the temporary directory -----
rm -rf $tmp_dir/*

# ----- Finished -----
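For the crontab schedule mentioned above, a minimal sketch of the entry might look like this; the script name, log path, and the nightly 02:00 time are assumptions to adjust to your setup:

# Run the crawl script once a day at 02:00 and append its output to a log file
# (crawl_news.sh and the log path are assumed example names)
0 2 * * * /home/nutch/searchengine/nutch-test/crawl_news.sh >> /home/nutch/searchengine/nutch-test/crawl_news.log 2>&1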

