1. Bayes' theorem
The conditional probability formula is:

P(A|B) = P(A∩B) / P(B)

This formula makes it very simple to compute the probability that A occurs given that B has occurred. But in many cases P(A|B) is easy to obtain while what we actually need is P(B|A); this is where Bayes' theorem comes in:

P(B|A) = P(A|B) P(B) / P(A)
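As a quick numerical sketch of the theorem (the probabilities below are made up for illustration and are not from this article), the denominator P(A) is expanded with the law of total probability:

```scala
// Hypothetical example: P(B) = 0.01, P(A|B) = 0.9, P(A|¬B) = 0.1.
// posterior computes P(B|A) via Bayes' theorem.
object BayesExample {
  def posterior(pB: Double, pAGivenB: Double, pAGivenNotB: Double): Double = {
    // Law of total probability: P(A) = P(A|B)P(B) + P(A|¬B)P(¬B)
    val pA = pAGivenB * pB + pAGivenNotB * (1 - pB)
    // Bayes' theorem: P(B|A) = P(A|B)P(B) / P(A)
    pAGivenB * pB / pA
  }

  def main(args: Array[String]): Unit =
    println(posterior(0.01, 0.9, 0.1))
}
```

Note how a strong likelihood (0.9) can still yield a small posterior when the prior P(B) is tiny.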
2. Naive Bayes classification
The derivation of naive Bayes classification is not covered in detail here; its workflow can be represented simply by a flow chart:
As a simple example, the following table describes the demographic composition of several regions:
Now suppose a dark-skinned man arrives, with feature vector (0, 0, 1). Does he come from Europe and America, Asia, or Africa? We can compute according to naive Bayes classification:
Europe and America = 0.30 × 0.90 × 0.20 × 0.40 = 0.0216
Asia = 0.95 × 0.10 × 0.05 × 0.40 = 0.0019
Africa = 0.90 × 1.00 × 0.90 × 0.20 = 0.1620
That is, he is most likely from Africa, next most likely from Europe and America, and least likely from Asia, so we judge that he comes from Africa, which agrees with everyday experience.
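The comparison above can be sketched in Scala by hard-coding the products from the table and picking the region with the largest score:

```scala
// Naive Bayes scores for the worked example: each score is the product of
// the per-attribute conditional probabilities and the region's prior.
object RegionClassifier {
  val scores: Map[String, Double] = Map(
    "Europe and America" -> 0.30 * 0.90 * 0.20 * 0.40, // = 0.0216
    "Asia"               -> 0.95 * 0.10 * 0.05 * 0.40, // = 0.0019
    "Africa"             -> 0.90 * 1.00 * 0.90 * 0.20  // = 0.1620
  )

  // Classification = argmax over the class scores
  def best: String = scores.maxBy(_._2)._1

  def main(args: Array[String]): Unit =
    println(best)
}
```

The scores need not be normalized: since every class shares the same denominator P(features), the argmax is unchanged.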
If a feature attribute is a continuous value, its conditional probability is computed with the Gaussian density:

g(x; μ, σ) = (1 / (√(2π) σ)) · exp(−(x − μ)² / (2σ²))

where μ and σ are the mean and standard deviation of that attribute over the training samples of the class.
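A minimal sketch of that Gaussian density in Scala (the object and function names are just for illustration):

```scala
// Gaussian (normal) density g(x; mu, sigma), used by Gaussian naive Bayes
// to score continuous feature attributes.
object Gaussian {
  def density(x: Double, mu: Double, sigma: Double): Double =
    math.exp(-(x - mu) * (x - mu) / (2 * sigma * sigma)) /
      (math.sqrt(2 * math.Pi) * sigma)

  def main(args: Array[String]): Unit =
    // Peak of the standard normal density at x = 0
    println(density(0.0, 0.0, 1.0))
}
```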
3. Naive Bayes classification in MLlib
Straight to the code:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.{SparkConf, SparkContext}

object NaiveBayesTest {
  def main(args: Array[String]) {
    // Set up the running environment
    val conf = new SparkConf().setAppName("Naive Bayes Test")
      .setMaster("spark://master:7077")
      .setJars(Seq("E:\\Intellij\\Projects\\MachineLearning\\MachineLearning.jar"))
    val sc = new SparkContext(conf)
    Logger.getRootLogger.setLevel(Level.WARN)

    // Read the sample data and parse it
    val dataRDD = sc.textFile("hdfs://master:9000/ml/data/sample_naive_bayes_data.txt")
    val parsedDataRDD = dataRDD.map { line =>
      val parts = line.split(',')
      LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
    }

    // Split the samples: 80% for training, 20% for testing
    val dataParts = parsedDataRDD.randomSplit(Array(0.8, 0.2))
    val trainRDD = dataParts(0)
    val testRDD = dataParts(1)

    // Build and train the naive Bayes classification model
    val model = NaiveBayes.train(trainRDD, lambda = 1.0, modelType = "multinomial")

    // Predict on the test samples
    val predictionAndLabel = testRDD.map(p => (model.predict(p.features), p.label, p.features))
    val showPredict = predictionAndLabel.take(50)
    println("Prediction" + "\t" + "Label" + "\t" + "Data")
    for (i <- 0 until showPredict.length) {
      println(showPredict(i)._1 + "\t" + showPredict(i)._2 + "\t" + showPredict(i)._3)
    }

    val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / testRDD.count()
    println("Accuracy = " + accuracy)
  }
}
Here, NaiveBayes is the companion object for Bayesian classification. Its train method trains the model and takes three parameters: the training samples, the smoothing parameter, and the model type. There are two model types, "multinomial" and "bernoulli"; this example uses "multinomial". The predict method classifies a sample according to its feature values.
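The per-line parsing done inside the map above can be reproduced without Spark, assuming each input line has the form "label,f1 f2 f3" (the comma separates the label from the space-separated feature values; the object name ParseLine is just for this sketch):

```scala
// Stand-alone sketch of the parsing step from the Spark job above.
// Each line "label,f1 f2 f3" becomes a (label, features) pair.
object ParseLine {
  def parse(line: String): (Double, Array[Double]) = {
    val parts = line.split(',')
    (parts(0).toDouble, parts(1).split(' ').map(_.toDouble))
  }

  def main(args: Array[String]): Unit = {
    val (label, features) = parse("0,1 0 0")
    println(label + " -> " + features.mkString(" "))
  }
}
```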
Running result:
Spark Machine Learning (4): Naive Bayesian algorithm