Today, Google's AlphaGo won its second game against Lee Sedol, and I moved on to the probabilistic graphical model (PGM) learning module. Machine learning is fascinating and daunting.
--Preface
1. Learning based on PGM
The topological structures of artificial neural networks are often similar: training the same model on different samples yields different results. A PGM, by contrast, lets us build prior knowledge into the network by hand, which makes it more readable during design and analysis. Probabilistic graphical models also outperform conventional machine-learning algorithms at learning over sequence variables. In essence, PGM-based learning is the estimation of the distributions of random variables from a given dataset.
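As a minimal sketch of "estimating distributions from a dataset" (the toy network, variable names, and data below are all invented for illustration), with complete data the maximum-likelihood estimate of each conditional probability is just a relative frequency count:

```python
from collections import Counter

# Hypothetical two-node network: Rain -> WetGrass.
# Each record is an observed (rain_state, grass_state) pair.
data = [
    ("rain", "wet"), ("rain", "wet"), ("rain", "dry"),
    ("sun", "dry"), ("sun", "dry"), ("sun", "wet"),
]

# MLE of P(Rain): relative frequency of each rain state.
rain_counts = Counter(r for r, _ in data)
p_rain = {r: c / len(data) for r, c in rain_counts.items()}

# MLE of P(WetGrass | Rain): joint counts normalized per parent value.
joint = Counter(data)
p_wet_given_rain = {(r, w): joint[(r, w)] / rain_counts[r] for (r, w) in joint}

print(p_rain)             # {'rain': 0.5, 'sun': 0.5}
print(p_wet_given_rain)   # e.g. P(wet | rain) = 2/3
```

With incomplete data or hidden variables (the harder cases listed below), these closed-form counts no longer suffice and iterative methods such as EM are needed instead.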
2. Learning can be divided into the following situations:
1. Complete data, complete network structure
2. Incomplete data, complete network structure
3. Incomplete data, incomplete network structure
4. Complete data, incomplete network structure
5. Hidden variables present, incomplete data
3. Learning can be divided according to the following purposes:
1. Answering probability queries about new instances: e.g., having drawn three bananas and one apple, estimate the proportions of each fruit in the box
Feature: easy to compute
2. Prediction for a new instance: e.g., given a pixel, predict its label
Feature: focuses on a specific goal; model selection for the optimization is simple
3. Mining relationships in given data: e.g., the association between milk powder and diapers
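The first purpose can be sketched in a few lines (a hedged toy example; the draws are invented): the maximum-likelihood answer to the fruit query is simply the sample frequency of each fruit among the draws.

```python
from collections import Counter

# Hypothetical observed draws from the box: three bananas and one apple.
draws = ["banana", "banana", "banana", "apple"]

# MLE of the proportion of each fruit in the box = sample frequency.
proportions = {fruit: c / len(draws) for fruit, c in Counter(draws).items()}

print(proportions)  # {'banana': 0.75, 'apple': 0.25}
```

This illustrates why such queries are "easy to compute": no optimization is involved, only counting.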
4. Avoiding overfitting
When hyper-parameters are used, the dataset is split into a training set, a validation (calibration) set, and a test set; the validation set is used to tune the hyper-parameters.
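A minimal sketch of such a split, assuming a 70/15/15 partition (the proportions and dummy dataset are my own choices, not from the text): parameters are fit on the training set, hyper-parameters are chosen on the validation set, and the test set stays untouched until the final evaluation.

```python
import random

# Dummy dataset of 100 records, shuffled for an unbiased split.
random.seed(0)
data = list(range(100))
random.shuffle(data)

n = len(data)
n_train = round(0.70 * n)  # fit model parameters here
n_val = round(0.15 * n)    # tune hyper-parameters here

train = data[:n_train]
validation = data[n_train:n_train + n_val]
test = data[n_train + n_val:]  # held out for the final estimate only

print(len(train), len(validation), len(test))  # 70 15 15
```

Keeping the test set out of both parameter fitting and hyper-parameter tuning is what makes its error an honest estimate of generalization.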
Machine Learning -- Probabilistic Graphical Models (Learning: An Overview)