The source file behind mahout seq2sparse is SparseVectorsFromSequenceFiles.java.
It first calls the DocumentProcessor.tokenizeDocuments method. The Mahout API documentation describes DocumentProcessor as follows: "This class converts a set of input documents in the sequence file format of StringTuples. The sequence file input should have a Text key containing the unique document identifier and a Text value containing the whole document." So the key-value type of the input SequenceFile must be (Text, Text), and tokenization converts (Text, Text) to (Text, StringTuple).
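As a conceptual illustration of this step, the sketch below is plain Java, not Mahout's actual implementation (Mahout uses a Lucene analyzer for tokenization); it only shows the shape of the transformation, where the key (document id) passes through unchanged and the value (the whole document) becomes an ordered list of tokens, which is what a StringTuple holds:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class TokenizeSketch {
    // Conceptual stand-in for DocumentProcessor.tokenizeDocuments:
    // split the document body into an ordered token list.
    // Mahout actually delegates this to a Lucene analyzer.
    static List<String> tokenize(String document) {
        return Arrays.asList(document.toLowerCase(Locale.ROOT).split("\\W+"));
    }

    public static void main(String[] args) {
        String docId = "doc1";                          // the Text key
        String body = "Mahout turns text into vectors"; // the Text value
        List<String> tokens = tokenize(body);           // the StringTuple-like value
        System.out.println(docId + " -> " + tokens);
        // -> doc1 -> [mahout, turns, text, into, vectors]
    }
}
```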
StringTuple is "an ordered list of strings which can be used in a Hadoop Map/Reduce job", per the API description on the official Mahout website. Note, however, that the official API docs are incomplete: for example, classes such as RandomAccessSparseVector and DenseVector have no API pages there, so you have to read the source code instead. Presumably Mahout is still being actively developed and the documentation lags behind the code.
Next, DictionaryVectorizer.createTermFrequencyVectors converts (Text, StringTuple) into term-frequency vectors of type (Text, VectorWritable). To turn these into TF-IDF vectors, the TFIDFConverter.processTfIdf function is called, which in turn uses the PartialVectorMerger.mergePartialVectors method. That method takes (Text, VectorWritable) as input, and its result is to "merge all the partial RandomAccessSparseVectors into the complete document RandomAccessSparseVector".
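To make the TF to TF-IDF reweighting concrete, here is a minimal sketch using the textbook formula weight = tf * ln(numDocs / df). This is not necessarily the exact formula TFIDFConverter applies (Mahout's weighting follows a Lucene-style similarity); it only illustrates how per-document term frequencies become reweighted values, with frequent-everywhere terms pushed toward zero:

```java
import java.util.HashMap;
import java.util.Map;

public class TfIdfSketch {
    // Textbook tf-idf: weight = tf * ln(numDocs / df).
    // Mahout's actual weighting differs in detail; this only shows
    // the data flow from term frequencies to reweighted values.
    static Map<String, Double> toTfIdf(Map<String, Integer> tf,
                                       Map<String, Integer> df,
                                       int numDocs) {
        Map<String, Double> out = new HashMap<>();
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            double idf = Math.log((double) numDocs / df.get(e.getKey()));
            out.put(e.getKey(), e.getValue() * idf);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> tf = new HashMap<>();   // one document's term counts
        tf.put("mahout", 3);
        tf.put("the", 5);
        Map<String, Integer> df = new HashMap<>();   // document frequencies
        df.put("mahout", 2);   // appears in 2 of 10 documents
        df.put("the", 10);     // appears in every document
        Map<String, Double> weights = toTfIdf(tf, df, 10);
        // "the" gets weight 0 (idf = ln(10/10) = 0); "mahout" keeps a positive weight
        System.out.println(weights);
    }
}
```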