Convert a Lucene index to Mahout input vectors


    mahout lucene.vector --dir /home/test-in/index/ \
        --output /home/test-in/outdex/part-out.vec \
        --field body --dictOut /home/test-in/outdex/dict.out

Question 1: Version mismatch. The command fails with: exception in thread "main" org.apache.lucene.index.CorruptIndexException: Unknown format version: -11
A: I looked into this for a long time. The issue is explicitly raised on the official wiki (see reference 1). The index I used was generated with Lucene 3.1, while the Lucene bundled in Mahout 0.4 is 3.0.2, which is why this error is reported. Switching to Mahout 0.5, which uses Lucene 3.1, makes the error go away.
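One way to reproduce the mismatch outside Mahout is Lucene's CheckIndex tool (a sketch; the jar file name is an assumption, and -cp must point at the same lucene-core version that Mahout bundles):

    # Checking a Lucene 3.1 index with the 3.0.2 jar hits the same
    # "Unknown format version" error; the 3.1 jar reads it cleanly.
    java -cp lucene-core-3.0.2.jar org.apache.lucene.index.CheckIndex /home/test-in/index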

Question 2: Why does Hadoop have to be started before the conversion can run? Why does the output go directly to HDFS instead of a local directory; is that to make subsequent cluster analysis easier? And why is some data still written to the local directory?

A: I checked the lucene.vector source code (in the mahout-utils-0.5.jar package). Mahout 0.5 handles the output directory differently from earlier versions: it writes directly to HDFS, whereas earlier versions first checked whether the path was a local directory and only used HDFS if it was not.

Corrected answer, 2011-8-1: although the handling in the Mahout 0.5 lucene.vector source code differs from earlier versions, that is not the root cause. Starting with version 0.4, Mahout runs on HDFS whenever HADOOP_HOME is set in the environment. If HADOOP_HOME is not set, it prints "no HADOOP_HOME set, running locally", and the input and output of Mahout's format-conversion commands are then local directories.
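To see the local-mode behavior for yourself (a sketch reusing the paths from the command above):

    # With HADOOP_HOME unset, Mahout prints "no HADOOP_HOME set, running locally"
    # and treats both the input and output paths as local directories.
    unset HADOOP_HOME
    bin/mahout lucene.vector --dir /home/test-in/index/ \
        --output /home/test-in/outdex/part-out.vec \
        --field body --dictOut /home/test-in/outdex/dict.out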

Question 3: When the parse_text data generated by Nutch is used as the k-means input, an error is also reported. Is it a file format problem or a data problem?

Answer, 2011-8-1: the data must first be converted to Mahout's input vector format (reference 1 describes how to create vectors from text).

Question 4: Reference 1 says clustering input must be in vector form (binary storage), yet in the preceding Mahout getting-started example, .data files can be used directly.

Answer, 2011-8-1: the getting-started example runs through the examples jar, which handles the conversion internally. To run your own data, you must convert it to the vector format first and then run "mahout kmeans" with the appropriate parameters, as sketched below.
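A sketch of such a run (the option names follow the Mahout 0.5 k-means driver; the paths, cluster count, and iteration count are assumptions):

    # -i: input vectors (the output of lucene.vector above)
    # -c: directory for initial cluster centers (sampled randomly when -k is given)
    # -o: working/output directory, -k: number of clusters, -x: max iterations
    # -ow: overwrite the output directory, -cl: run the final clustering step
    bin/mahout kmeans -i /home/test-in/outdex/part-out.vec \
        -c /home/test-in/kmeans/initial-clusters \
        -o /home/test-in/kmeans/output \
        -k 10 -x 10 -ow -cl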

For an index to be convertible to vectors by mahout lucene.vector, the field must be indexed with term vectors stored. Once termVector is enabled in Lucene, or termVectors="true" is set on the field in Solr's schema.xml, the index directory gains three more files: .tvd, .tvf, and .tvx.
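In Lucene, term vectors are enabled per field at indexing time. A minimal sketch against the Lucene 3.x API (the field name matches the command above; the analyzer and sample text are assumptions):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    import java.io.File;

    public class TermVectorIndexer {
        public static void main(String[] args) throws Exception {
            // Open (or create) the index directory used by the lucene.vector command.
            IndexWriterConfig config = new IndexWriterConfig(
                    Version.LUCENE_31, new StandardAnalyzer(Version.LUCENE_31));
            IndexWriter writer = new IndexWriter(
                    FSDirectory.open(new File("/home/test-in/index")), config);

            Document doc = new Document();
            // Field.TermVector.YES stores term vectors for the "body" field;
            // this is what produces the .tvd/.tvf/.tvx files and is required
            // for mahout lucene.vector to work on this field.
            doc.add(new Field("body", "example document text",
                    Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
            writer.addDocument(doc);
            writer.close();
        }
    }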

References:

1. https://cwiki.apache.org/confluence/display/MAHOUT/Creating+Vectors+from+Text

2. http://www.lucidimagination.com/blog/2010/03/16/integrating-apache-mahout-with-apache-lucene-and-solr-part-i-of-3/
