Mahout version: 0.7, Hadoop version: 1.0.4, JDK: 1.7.0_25 (64-bit).
From the terminal output at the end of the previous post, we saw that the SVD algorithm runs five jobs in total. Let's analyze each of them by walking through Mahout's DistributedLanczosSolver source code.
To make the data easy to check at any point later, I use a trimmed-down wine.dat as the input, as follows (5 rows, 13 columns):
14.23,1.71,2.43,15.6,127,2.8,3.06,0.28,2.29,5.64,1.04,3.92,1065
13.2,1.78,2.14,11.2,100,2.65,2.76,0.26,1.28,4.38,1.05,3.4,1050
13.16,2.36,2.67,18.6,101,2.8,3.24,0.3,2.81,5.68,1.03,3.17,1185
14.37,1.95,2.5,16.8,113,3.85,3.49,0.24,2.18,7.8,0.86,3.45,1480
13.24,2.59,2.87,21,118,2.8,2.69,0.39,1.82,4.32,1.04,2.93,735
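Before diving into the solver, it helps to see this data as a plain in-memory matrix. The sketch below (my own hypothetical class, plain Java with no Hadoop) parses the five rows above into a double[][]; in Mahout the same data would actually live in HDFS as a SequenceFile of VectorWritable rows.

```java
import java.util.Arrays;

// Hypothetical helper: parse the five wine.dat rows into a dense matrix so we
// can reason about the --numRows / --numCols values passed to the solver.
public class WineData {
    static final String[] LINES = {
        "14.23,1.71,2.43,15.6,127,2.8,3.06,0.28,2.29,5.64,1.04,3.92,1065",
        "13.2,1.78,2.14,11.2,100,2.65,2.76,0.26,1.28,4.38,1.05,3.4,1050",
        "13.16,2.36,2.67,18.6,101,2.8,3.24,0.3,2.81,5.68,1.03,3.17,1185",
        "14.37,1.95,2.5,16.8,113,3.85,3.49,0.24,2.18,7.8,0.86,3.45,1480",
        "13.24,2.59,2.87,21,118,2.8,2.69,0.39,1.82,4.32,1.04,2.93,735"
    };

    public static double[][] parse() {
        double[][] m = new double[LINES.length][];
        for (int i = 0; i < LINES.length; i++) {
            m[i] = Arrays.stream(LINES[i].split(","))
                         .mapToDouble(Double::parseDouble)
                         .toArray();
        }
        return m;
    }

    public static void main(String[] args) {
        double[][] m = parse();
        // These dimensions are the numRows/numCols arguments used below.
        System.out.println(m.length + " x " + m[0].length); // 5 x 13
    }
}
```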
1. First, the algorithm is invoked through the main method, which contains only one line:
ToolRunner.run(new DistributedLanczosSolver().job(), args);
So we go straight to the run method, at line 96. It starts by reading the parameters we set on the command line. Some of them are easy to understand, such as the input path, the output path, and the number of rows and columns of the input data. But there is also a workingDir parameter whose purpose is not obvious, and an isSymmetric flag — presumably meaning the input matrix is symmetric? The rank parameter I don't yet understand either. cleansvd seems to clean up the raw output in some way; the examples online set it to true, so the analysis here also assumes it is true. If cleansvd is true, the following if branch is entered:
if (cleansvd) {
  double maxError = Double.parseDouble(AbstractJob.getOption(parsedArgs, "--maxError"));
  double minEigenvalue = Double.parseDouble(AbstractJob.getOption(parsedArgs, "--minEigenvalue"));
  boolean inMemory = Boolean.parseBoolean(AbstractJob.getOption(parsedArgs, "--inMemory"));
  return run(inputPath,
             outputPath,
             outputTmpPath,
             workingDirPath,
             numRows,
             numCols,
             isSymmetric,
             desiredRank,
             maxError,
             minEigenvalue,
             inMemory);
}
Three parameters are initialized here first, all left at their defaults: maxError is 0.05, minEigenvalue is 0, and inMemory is false.
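The default-fallback behavior can be sketched in plain Java. This is a simplified stand-in (my own class name and map, not Mahout's API): Mahout's AbstractJob registers each option with a default value via addOption, and getOption returns that default when the flag is absent from the command line.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of how the three cleanup options fall back to their
// defaults (maxError 0.05, minEigenvalue 0.0, inMemory false) when the user
// does not pass them; the map stands in for Mahout's parsedArgs.
public class CleanupOptions {
    public static String getOption(Map<String, String> parsedArgs, String key, String defaultValue) {
        return parsedArgs.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, String> parsedArgs = new HashMap<>(); // nothing passed on the CLI
        double maxError = Double.parseDouble(getOption(parsedArgs, "--maxError", "0.05"));
        double minEigenvalue = Double.parseDouble(getOption(parsedArgs, "--minEigenvalue", "0.0"));
        boolean inMemory = Boolean.parseBoolean(getOption(parsedArgs, "--inMemory", "false"));
        System.out.println(maxError + " " + minEigenvalue + " " + inMemory); // 0.05 0.0 false
    }
}
```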
Next we go to the run method at line 142, which itself calls another run method plus one more job, as follows:
public int run(Path inputPath,
               Path outputPath,
               Path outputTmpPath,
               Path workingDirPath,
               int numRows,
               int numCols,
               boolean isSymmetric,
               int desiredRank,
               double maxError,
               double minEigenValue,
               boolean inMemory) throws Exception {
  int result = run(inputPath, outputPath, outputTmpPath, workingDirPath,
      numRows, numCols, isSymmetric, desiredRank);
  if (result != 0) {
    return result;
  }
  Path rawEigenVectorPath = new Path(outputPath, RAW_EIGENVECTORS);
  return new EigenVerificationJob().run(inputPath,
                                        rawEigenVectorPath,
                                        outputPath,
                                        outputTmpPath,
                                        maxError,
                                        minEigenValue,
                                        inMemory,
                                        getConf() != null ? new Configuration(getConf()) : new Configuration());
}
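Conceptually, the EigenVerificationJob called at the end checks how well each raw eigenpair (lambda, v) actually satisfies A*v ≈ lambda*v, and filters out pairs whose error exceeds maxError. The plain-Java sketch below (a hypothetical class of mine, using a simple residual norm rather than Mahout's actual cosine-based error metric) shows the idea of that check:

```java
// Conceptual sketch of the eigenpair check: measure how far A*v is from
// lambda*v and accept the pair only if the residual is within maxError.
// (Not Mahout's exact metric; EigenVerificationJob uses a cosine-based error.)
public class EigenCheck {
    public static double residual(double[][] a, double[] v, double lambda) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double avI = 0.0;
            for (int j = 0; j < v.length; j++) {
                avI += a[i][j] * v[j];
            }
            double d = avI - lambda * v[i]; // i-th component of A*v - lambda*v
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // [[2,0],[0,3]] has the exact eigenpair (3, [0,1]): residual is 0.
        double[][] a = {{2, 0}, {0, 3}};
        double maxError = 0.05;
        System.out.println(residual(a, new double[]{0, 1}, 3.0) <= maxError); // true
    }
}
```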
There is another run call here, so presumably that run method launches three jobs, and the final EigenVerificationJob.run() call launches one more, four jobs in total — though this is just a guess for now. Let's look at that run method first. It is at line 181, still in this class, and this one has some real substance in it:
public int run(Path inputPath,
               Path outputPath,
               Path outputTmpPath,
               Path workingDirPath,
               int numRows,
               int numCols,
               boolean isSymmetric,
               int desiredRank) throws Exception {
  DistributedRowMatrix matrix =
      new DistributedRowMatrix(inputPath, outputTmpPath, numRows, numCols);
  matrix.setConf(new Configuration(getConf() != null ? getConf() : new Configuration()));
  LanczosState state;
  if (workingDirPath == null) {
    state = new LanczosState(matrix, desiredRank, getInitialVector(matrix));
  } else {
    HdfsBackedLanczosState hState =
        new HdfsBackedLanczosState(matrix, desiredRank, getInitialVector(matrix), workingDirPath);
    hState.setConf(matrix.getConf());
    state = hState;
  }
  solve(state, desiredRank, isSymmetric);
  Path outputEigenVectorPath = new Path(outputPath, RAW_EIGENVECTORS);
  serializeOutput(state, outputEigenVectorPath);
  return 0;
}
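The solve(state, desiredRank, isSymmetric) call is where the actual Lanczos iteration runs, distributed across MapReduce jobs. As a toy single-machine sketch (my own hypothetical class, not Mahout's implementation), each step multiplies the current basis vector by A, orthogonalizes against the previous two basis vectors, and records the alpha/beta coefficients; the basis vectors it builds come out (nearly) orthonormal:

```java
// Toy single-machine sketch of the Lanczos recurrence for a symmetric matrix:
//   w = A*v_j;  alpha_j = w.v_j;  w -= alpha_j*v_j + beta_{j-1}*v_{j-1};
//   beta_j = ||w||;  v_{j+1} = w / beta_j
public class LanczosSketch {
    public static double[][] basis(double[][] a, double[] start, int rank) {
        int n = start.length;
        double[][] v = new double[rank][];
        v[0] = normalize(start.clone());
        double beta = 0.0;
        for (int j = 0; j < rank - 1; j++) {
            double[] w = multiply(a, v[j]);
            double alpha = dot(w, v[j]);
            for (int i = 0; i < n; i++) {
                // Orthogonalize against the previous two basis vectors.
                w[i] -= alpha * v[j][i] + (j > 0 ? beta * v[j - 1][i] : 0.0);
            }
            beta = Math.sqrt(dot(w, w));
            v[j + 1] = new double[n];
            for (int i = 0; i < n; i++) {
                v[j + 1][i] = w[i] / beta;
            }
        }
        return v;
    }

    public static double[] multiply(double[][] a, double[] x) {
        double[] y = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < x.length; j++) {
                y[i] += a[i][j] * x[j];
            }
        }
        return y;
    }

    public static double dot(double[] x, double[] y) {
        double s = 0.0;
        for (int i = 0; i < x.length; i++) {
            s += x[i] * y[i];
        }
        return s;
    }

    public static double[] normalize(double[] x) {
        double norm = Math.sqrt(dot(x, x));
        for (int i = 0; i < x.length; i++) {
            x[i] /= norm;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{2, 1, 0}, {1, 3, 1}, {0, 1, 4}};
        double[][] v = basis(a, new double[]{1, 0, 0}, 3);
        // The Lanczos basis vectors should be mutually orthogonal.
        System.out.println(Math.abs(dot(v[0], v[1])) < 1e-9
            && Math.abs(dot(v[0], v[2])) < 1e-9
            && Math.abs(dot(v[1], v[2])) < 1e-9); // true
    }
}
```

In Mahout the matrix-vector product A*v_j is itself a MapReduce job over the DistributedRowMatrix, which is why each Lanczos step shows up as a job in the terminal output.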