1> Supervised learning (classification): first have the machine learn from labeled sample data for each flower species, then have it use that information to classify images of unlabeled flowers.
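A minimal sketch of this idea, using hypothetical petal-length data rather than the book's real Iris dataset: "learning" here is just picking a threshold between the two labeled classes, which is then used to label new flowers.

```python
# Toy supervised classification: (petal length, species) labeled pairs.
# All values are made up for illustration.
training = [(1.4, "setosa"), (1.5, "setosa"), (1.3, "setosa"),
            (4.7, "versicolor"), (4.5, "versicolor"), (4.9, "versicolor")]

# "Learning": place the decision threshold midway between the longest
# setosa petal and the shortest versicolor petal in the training data.
setosa_max = max(x for x, y in training if y == "setosa")
versicolor_min = min(x for x, y in training if y == "versicolor")
threshold = (setosa_max + versicolor_min) / 2

def classify(petal_length):
    # Apply the learned rule to an unlabeled flower.
    return "setosa" if petal_length < threshold else "versicolor"

print(classify(1.6))  # short petal  -> "setosa"
print(classify(5.1))  # long petal   -> "versicolor"
```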
2> Features: each measured value in the data is called a feature.
2> Cross-validation: the extreme form (leave-one-out) takes one sample out of the training set, trains a model on the data without that sample, and then checks whether the model classifies the held-out sample correctly.
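The leave-one-out procedure can be sketched in a few lines. The data and the crude threshold learner below are hypothetical stand-ins; the point is the loop: each sample in turn is held out, the model is trained on the rest, and the held-out sample is used as the test.

```python
# Toy 1-D dataset: (petal length, species) pairs (values made up).
data = [(1.4, "setosa"), (1.5, "setosa"), (1.3, "setosa"),
        (4.7, "versicolor"), (4.5, "versicolor"), (4.9, "versicolor")]

def train_threshold(samples):
    # Crude learner: midpoint between the two class means.
    a = [x for x, y in samples if y == "setosa"]
    b = [x for x, y in samples if y == "versicolor"]
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

correct = 0
for i, (x, label) in enumerate(data):
    rest = data[:i] + data[i + 1:]       # training set lacking sample i
    t = train_threshold(rest)
    predicted = "setosa" if x < t else "versicolor"
    correct += (predicted == label)      # did it classify the held-out sample?

print(f"leave-one-out accuracy: {correct}/{len(data)}")
```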
3> Composition of a classification model:
Model structure: a threshold value is used to split on one feature.
Search procedure: try as many combinations of features and thresholds as possible.
Loss function: used to judge how bad each candidate combination is, so the least-bad one can be kept.
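The three parts above can be sketched together: the model structure is a (feature, threshold) rule, the search is an exhaustive loop, and the loss is simply the count of training errors. Feature values are hypothetical.

```python
# Each sample: ([feature_0, feature_1], label); values are made up.
samples = [([1.4, 0.2], 0), ([1.3, 0.2], 0), ([1.5, 0.4], 0),
           ([4.5, 1.5], 1), ([4.7, 1.4], 1), ([4.9, 1.6], 1)]

best = None  # (loss, feature_index, threshold)
for f in range(2):                    # search: every feature ...
    for x, _ in samples:              # ... and every observed value as threshold
        t = x[f]
        # Loss: how many training samples the rule "label 1 if value > t"
        # misclassifies.
        loss = sum((x2[f] > t) != bool(y2) for x2, y2 in samples)
        if best is None or loss < best[0]:
            best = (loss, f, t)

loss, feature, threshold = best
print(f"feature {feature}, threshold {threshold}, training errors {loss}")
```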
4> Feature engineering: a good feature takes different values in situations that matter and stays unchanged in situations that do not.
Feature selection
5> Nearest-neighbor classification: since each sample is represented by its features (it is a point in n-dimensional space), we can compute the distance between samples.
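A sketch of this idea with hypothetical 2-D feature vectors: a new point gets the label of the closest training point under Euclidean distance.

```python
import math

# Toy training set: (feature vector, label); values are made up.
train = [([1.4, 0.2], "setosa"), ([4.5, 1.5], "versicolor"),
         ([1.3, 0.2], "setosa"), ([4.9, 1.6], "versicolor")]

def distance(p, q):
    # Euclidean distance between two points in n-dimensional feature space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_label(point):
    # Label of the training sample closest to the query point.
    return min(train, key=lambda s: distance(point, s[0]))[1]

print(nearest_label([1.5, 0.3]))  # closest to a setosa point
print(nearest_label([4.6, 1.4]))  # closest to a versicolor point
```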
6> Normalizing features to a common unit, the z-score: how many standard deviations a feature value lies from its mean.
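Z-score normalization in code (toy values): subtract the mean and divide by the standard deviation, so the normalized feature has mean 0 and standard deviation 1.

```python
import statistics

# Hypothetical raw feature values for one feature column.
values = [1.4, 1.3, 1.5, 4.5, 4.7, 4.9]

mean = statistics.mean(values)
std = statistics.pstdev(values)   # population standard deviation

# Z-score: distance from the mean, measured in standard deviations.
z_scores = [(v - mean) / std for v in values]
print([round(z, 2) for z in z_scores])
```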
7> Binary versus multi-class classification
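One common way to reduce a multi-class problem to binary ones is one-vs-rest: train one binary classifier per class ("is it this class, or anything else?") and pick the class whose classifier is most confident. The score functions below are hypothetical stand-ins for trained binary classifiers.

```python
# Hand-written "confidence" scores standing in for trained binary
# classifiers (one per class); x is a 1-D feature vector [petal length].
def score_setosa(x):
    return 2.0 - x[0]                 # short petals look like setosa

def score_versicolor(x):
    return 1.0 - abs(x[0] - 4.5)      # mid-length petals look like versicolor

def score_virginica(x):
    return x[0] - 5.0                 # long petals look like virginica

classifiers = {"setosa": score_setosa,
               "versicolor": score_versicolor,
               "virginica": score_virginica}

def classify_one_vs_rest(x):
    # The class whose binary classifier is most confident wins.
    return max(classifiers, key=lambda c: classifiers[c](x))

print(classify_one_vs_rest([1.4]))
print(classify_one_vs_rest([6.0]))
```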
Building Machine Learning Systems with Python 2