This series of blog posts records my notes from the Stanford University open course on machine learning.
Definitions of machine learning
Arthur Samuel (1959): Field of study that gives computers the ability to learn without being explicitly programmed.
Tom Mitchell (1998): A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
Machine learning is roughly divided into four parts:
1) Supervised learning
Regression problems (the predicted variable is continuous)
--e.g., estimating the price of a house of a given size from statistics mapping house size to house price.
Classification problems (the predicted variable is discrete)
--e.g., predicting whether a tumor of a given size is benign or malignant from statistical data on tumor size and tumor class.
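The regression example above can be sketched in a few lines. This is a minimal illustration with made-up house data (sizes and prices are hypothetical), fitting a line price = w * size + b by ordinary least squares:

```python
# Hypothetical training data: house sizes (square meters) and prices ($1000s).
sizes = [50.0, 80.0, 110.0, 140.0]
prices = [150.0, 240.0, 330.0, 420.0]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# Closed-form least-squares solution for a single feature:
# slope = covariance(size, price) / variance(size), intercept from the means.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - w * mean_x

def predict(size):
    """Estimate the price of a new house of the given size."""
    return w * size + b

print(predict(100.0))  # estimated price for a 100 m^2 house
```

Because the supervisor provides the "right answer" (the price) for each training example, the algorithm can measure and reduce its error; that is what makes this supervised learning.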
2) Learning theory
The theory underlying learning algorithms, e.g., why and when they generalize.
3) Unsupervised learning
Clustering problems
--applications include computer vision, e.g., grouping image pixels and reconstructing 3D structure from photos.
The cocktail party problem:
--how to separate and extract the individual sound sources from a noisy mixture of sounds; it can be solved with the ICA (independent component analysis) algorithm.
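Clustering can be sketched with a toy k-means loop. The 1D data points and initial centers below are hypothetical; the point is that no labels are given, and the algorithm discovers the two groups on its own:

```python
# Hypothetical unlabeled data: two natural groups around 1.0 and 9.0.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
k = 2
centers = [0.0, 10.0]  # deliberately simple initial guesses

for _ in range(10):  # a few fixed iterations suffice for this toy data
    # Assignment step: attach each point to its nearest center.
    clusters = [[] for _ in range(k)]
    for p in points:
        nearest = min(range(k), key=lambda c: abs(p - centers[c]))
        clusters[nearest].append(p)
    # Update step: move each center to the mean of its cluster.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(centers)  # the two discovered cluster centers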
4) Reinforcement learning
A reinforcement learning algorithm makes a machine learn through a series of decisions, for example writing a learning algorithm that makes a helicopter fly or a robotic dog cross an obstacle. The Google driverless car is another typical example.
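The "series of decisions" idea can be illustrated with a toy value-iteration sketch on a hypothetical 5-state corridor: the agent can move left or right, a reward of 1 waits at the rightmost state, and repeated Bellman updates propagate that reward backwards so each state learns how good it is:

```python
# Hypothetical 5-state corridor; reward 1 only at the rightmost state.
n_states = 5
gamma = 0.9              # discount factor: future reward is worth slightly less
values = [0.0] * n_states

for _ in range(50):      # iterate the Bellman update until values settle
    new_values = []
    for s in range(n_states):
        if s == n_states - 1:
            new_values.append(1.0)          # terminal reward state
            continue
        left = values[max(s - 1, 0)]        # value of moving left
        right = values[s + 1]               # value of moving right
        new_values.append(gamma * max(left, right))
    values = new_values

print(values)  # each state's learned value; higher nearer the reward
```

The learned values decay geometrically with distance from the reward, which is exactly the signal an agent needs to choose good actions step by step.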
[Learning Note 1] Motivation and Applications of Machine Learning