Introduction to the search engine Nutch (1): Using Nutch

Contents
  • Requirements
  • Getting started
  • Intranet crawling
  • Whole-Web Crawling
  • Searching
Tutorial


Requirements

  1. Java 1.4.x, either from Sun or IBM on Linux, is preferred. Set NUTCH_JAVA_HOME to the root of your JVM installation (a minimal example follows this list).
  2. Apache's Tomcat 4.x.
  3. On Win32, Cygwin, for shell support. (If you plan to use CVS on Win32, be sure to select the cvs and openssh packages when you install, in the "Devel" and "Net" categories, respectively.)
  4. Up to a gigabyte of free disk space, a high-speed connection, and an hour or so.
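
For step 1, a minimal sketch of setting the variable in a bash-style shell; the JVM path below is illustrative, not taken from this tutorial:

# adjust the path to the root of your own JVM installation
export NUTCH_JAVA_HOME=/usr/java/j2sdk1.4.2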

Getting started

First, you need to get a copy of the Nutch code. You can download a release from http://www.nutch.org/release. Unpack the release and connect to its top-level directory. Or, check out the latest source code from CVS and build it with Ant.

Try the following command:

bin/nutch

This will display the documentation for the Nutch command script.

Now we're ready to crawl. There are two approaches to crawling:

  1. Intranet crawling, with the crawl command.
  2. Whole-web crawling, with much greater control, using the lower-level inject, generate, fetch, and updatedb commands.

Intranet crawling

Intranet crawling is more appropriate when you intend to crawl up to around one million pages on a handful of web servers.

Intranet: Configuration

To configure things for Intranet crawling you must:

  1. Create a flat file of root URLs. For example, to crawl the nutch.org site you might start with a file named urls containing just the Nutch home page. All other Nutch pages should be reachable from this page. The urls file would thus look like:
    http://www.nutch.org/
  2. Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with the name of the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.org domain, the line should read:
    +^http://([a-z0-9]*\.)*nutch.org/

    This will include any URL in the domain nutch.org. (Both configuration steps are sketched together below.)
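
Putting the two configuration steps together, a minimal sketch; the echo command is just one way to create the urls file, and editing the files by hand works equally well:

# create the flat file of root URLs
echo 'http://www.nutch.org/' > urls
# conf/crawl-urlfilter.txt should then contain the line:
#   +^http://([a-z0-9]*\.)*nutch.org/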

Intranet: running the crawl

Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:

  • -dir dir names the directory to put the crawl in.
  • -depth depth indicates the link depth from the root page that should be crawled.
  • -delay delay determines the number of seconds between accesses to each host.
  • -threads threads determines the number of threads that will fetch in parallel.

For example, a typical call might be:

bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log

Typically one starts testing one's configuration by crawling at low depths, and watching the output to check that desired pages are found. Once one is more confident of the configuration, an appropriate depth for a full crawl is around 10.
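
A fuller invocation that also sets the politeness and parallelism options listed above might look like the following; the directory name and the depth, delay, and thread values are illustrative:

bin/nutch crawl urls -dir crawl.full -depth 10 -delay 5 -threads 4 >& crawl-full.log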

Once crawling has completed, one can skip to the searching section below.

Whole-Web Crawling

Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.

Whole-Web: Concepts

Nutch data is of two types:

  1. The Web database. This contains information about every page known to Nutch, and about links between those pages.
  2. A set of segments. Each segment is a set of pages that are fetched and indexed as a unit. Segment data consists of the following types:
    • A fetchlist is a file that names a set of pages to be fetched.
    • The fetcher output is a set of files containing the fetched pages.
    • The index is a Lucene-format index of the fetcher output.

In the following examples we will keep our web database in a directory named db and our segments in a directory named segments:

mkdir db
mkdir segments

Whole-Web: Bootstrapping the Web Database

The admin tool is used to create a new, empty database:

bin/nutch admin db -create

The injector adds URLs into the database. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)

wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz

Next we inject a random subset of these pages into the web database. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We inject one out of every 3000, so that we end up with around 1000 URLs:

bin/nutch inject db -dmozfile content.rdf.u8 -subset 3000

This also takes a few minutes, as it must parse the full file.

Now we have a web database with around 1000 as-yet unfetched URLs in it.

Whole-Web: fetching

To fetch, we first generate a fetchlist from the database:

bin/nutch generate db segments

This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:

s1=`ls -d segments/2* | tail -1`
echo $s1

Now we run the fetcher on this segment:

bin/nutch fetch $s1

When this is complete, we update the database with the results of the fetch:

bin/nutch updatedb db $s1

Now the database has entries for all of the pages referenced by the initial set.

Next we run five iterations of link analysis on the database in order to prioritize which pages to fetch next:

bin/nutch analyze db 5

Now we fetch a new segment with the top-scoring 1000 pages:

bin/nutch generate db segments -topN 1000
s2=`ls -d segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch updatedb db $s2
bin/nutch analyze db 2

Let's fetch one more round:

bin/nutch generate db segments -topN 1000
s3=`ls -d segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch updatedb db $s3
bin/nutch analyze db 2
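
Further rounds follow exactly the same pattern, so they can be scripted as a simple loop; a minimal sketch using only the commands shown above (the round count of three is arbitrary):

# repeat the generate / fetch / updatedb / analyze cycle for a few more rounds
for round in 1 2 3
do
  bin/nutch generate db segments -topN 1000
  s=`ls -d segments/2* | tail -1`
  echo $s
  bin/nutch fetch $s
  bin/nutch updatedb db $s
  bin/nutch analyze db 2
done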

By this point we've fetched a few thousand pages. Let's index them!

Whole-Web: Indexing

To index each segment we use the index command, as follows:

bin/nutch index $s1
bin/nutch index $s2
bin/nutch index $s3
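
If you have run more rounds than the three tracked in shell variables above, the same command can be applied to every segment directory; a minimal sketch:

# index every segment created so far
for segment in segments/2*
do
  bin/nutch index $segment
done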

Then, before we can search a set of segments, we need to delete duplicate pages. This is done with:

bin/nutch dedup segments dedup.tmp

Now we're ready to search!

Searching

To search you need to put the Nutch war file into your servlet container. (If instead of downloading a Nutch release you checked the sources out of CVS, then you'll first need to build the war file, with the command ant war.)

Assuming you've unpacked Tomcat as ~/local/tomcat, then the Nutch war file may be installed with the commands:

rm -rf ~/local/tomcat/webapps/ROOT*
cp nutch*.war ~/local/tomcat/webapps/ROOT.war

The webapp finds its indexes in ./segments, relative to where you start Tomcat, so if you've done intranet crawling, connect to your crawl directory; or, if you've done whole-web crawling, don't change directories, and give the command:

~/local/tomcat/bin/catalina.sh start
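
For the intranet case, using the crawl.test directory from the earlier example, the sequence would be:

cd crawl.test
~/local/tomcat/bin/catalina.sh start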

Then visit http://localhost:8080/ and have fun!
