Isotonic regression takes an unordered sequence of numbers and adjusts the values of its elements to produce a non-decreasing sequence, while minimizing the error (the sum of squared differences between the fitted values and the original values). For example, suppose a drug is tested on animals at different doses; presumably, the larger the dose, the higher the proportion of effective responses. If a dose is found to break this order, the two out-of-order elements are merged and their effectiveness rate is recomputed, and the merging continues until the recomputed rate is no greater than the rate of the next element.
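The merging procedure described above is the classic PAVA. The following is a minimal single-machine sketch (the function name `pava` and the block representation are my own, not from MLlib): each point starts as its own block, and whenever a block's weighted mean falls below that of the block before it, the two blocks are pooled into their weighted mean.

```scala
// Pool Adjacent Violators: given (value, weight) pairs already ordered by
// feature, repeatedly merge adjacent out-of-order blocks into their weighted
// mean until the block means are non-decreasing.
def pava(values: Seq[Double], weights: Seq[Double]): Seq[Double] = {
  // Each block holds (weighted sum, total weight, count of original points)
  val blocks = scala.collection.mutable.ArrayBuffer.empty[(Double, Double, Int)]
  for ((v, w) <- values.zip(weights)) {
    blocks += ((v * w, w, 1))
    // Merge while the last block's mean is below the previous block's mean
    while (blocks.length > 1 &&
           blocks(blocks.length - 1)._1 / blocks(blocks.length - 1)._2 <
           blocks(blocks.length - 2)._1 / blocks(blocks.length - 2)._2) {
      val (s1, w1, n1) = blocks.remove(blocks.length - 1)
      val (s2, w2, n2) = blocks.remove(blocks.length - 1)
      blocks += ((s1 + s2, w1 + w2, n1 + n2))
    }
  }
  // Expand each block back to one fitted value per original point
  blocks.flatMap { case (s, w, n) => Seq.fill(n)(s / w) }.toSeq
}
```

For the input 1, 3, 2, 4 (equal weights), the violating pair (3, 2) is pooled into 2.5, giving the non-decreasing fit 1, 2.5, 2.5, 4.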
MLlib uses the PAVA (Pool Adjacent Violators Algorithm), in a distributed form: PAVA is first run on the sample sequence within each partition to guarantee local order, and then run once more over the whole sample set to guarantee global order.
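The two-phase scheme can be sketched on a single machine as follows. This is a simplified illustration of the idea, not MLlib's actual implementation; the names `pava` and `distributedPava` are my own, partitions are simulated as plain arrays, and weights are taken to be equal for brevity.

```scala
// Unweighted PAVA over one ordered sequence (helper for the sketch below).
def pava(values: Array[Double]): Array[Double] = {
  val blocks = scala.collection.mutable.ArrayBuffer.empty[(Double, Int)] // (sum, count)
  for (v <- values) {
    blocks += ((v, 1))
    // Pool adjacent blocks while their means are out of order
    while (blocks.length > 1 &&
           blocks(blocks.length - 1)._1 / blocks(blocks.length - 1)._2 <
           blocks(blocks.length - 2)._1 / blocks(blocks.length - 2)._2) {
      val (s1, n1) = blocks.remove(blocks.length - 1)
      val (s2, n2) = blocks.remove(blocks.length - 1)
      blocks += ((s1 + s2, n1 + n2))
    }
  }
  blocks.flatMap { case (s, n) => Seq.fill(n)(s / n) }.toArray
}

// Phase 1: run PAVA locally inside each "partition" (local order).
// Phase 2: run PAVA once more over the ordered concatenation (global order).
def distributedPava(partitions: Seq[Array[Double]]): Array[Double] =
  pava(partitions.map(pava).reduce(_ ++ _))
```

The local pass shrinks each partition's violations independently (in Spark this happens in parallel via `mapPartitions`), so the final global pass only has to repair order violations across partition boundaries.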
Code:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.regression.{IsotonicRegression, IsotonicRegressionModel, LabeledPoint}

object IsotonicRegressionExample {
  def main(args: Array[String]) {
    // Set up the running environment
    val conf = new SparkConf().setAppName("Isotonic Regression Test")
      .setMaster("spark://master:7077")
      .setJars(Seq("E:\\intellij\\projects\\machinelearning\\machinelearning.jar"))
    val sc = new SparkContext(conf)
    Logger.getRootLogger.setLevel(Level.WARN)

    // Read and parse the sample data into (label, feature, weight) tuples
    val dataRDD = sc.textFile("hdfs://master:9000/ml/data/sample_isotonic_regression_data.txt")
    val parsedDataRDD = dataRDD.map { line =>
      val parts = line.split(',').map(_.toDouble)
      (parts(0), parts(1), 1.0)
    }

    // Split the samples: 70% for training, 30% for testing
    val dataParts = parsedDataRDD.randomSplit(Array(0.7, 0.3), seed = 25L)
    val trainRDD = dataParts(0)
    val testRDD = dataParts(1)

    // Build and train the isotonic regression model
    val model = new IsotonicRegression().setIsotonic(true).run(trainRDD)

    // Predict on the test set: (prediction, feature, label)
    val prediction = testRDD.map { line =>
      val predicted = model.predict(line._2)
      (predicted, line._2, line._1)
    }
    val showPrediction = prediction.collect
    println
    println("Prediction" + "\t" + "Feature")
    for (i <- 0 until showPrediction.length) {
      println(showPrediction(i)._1 + "\t" + showPrediction(i)._2)
    }

    // Compute the mean squared error between prediction and label
    val MSE = prediction.map { case (p, _, l) => math.pow(p - l, 2) }.mean()
    println("MSE = " + MSE)
  }
}
Run result:
Spark Machine Learning (3): Order-Preserving Regression Algorithm