1. A loss function quantifies how good the current predictions are: the smaller the loss, the better the predictions.
Several typical loss functions:
1) Multiclass SVM loss: the ordinary SVM handles two class labels; here it is extended to N class labels. The intuition: to predict the label of a sample, use the current weights to compute the sample's score for every label. If the correct label's score is higher than every other label's score (usually by at least a safety margin, i.e., it must be larger by a big enough amount), the loss does not increase; otherwise the loss increases by the amount by which the other label's score (plus the margin) exceeds the correct label's score. This loss ranges from 0 to infinity. At the start of training the weights W are usually initialized to very small random numbers, so every label's score is close to 0; in that case, with n labels, the correct label is compared against the other n-1 labels, each comparison's score difference is below the safety margin (assume the margin is 1), so the loss is n-1. This is a useful sanity check at initialization.
2) Softmax (cross-entropy) loss: very common in deep learning. The computed scores are passed through the softmax function, which turns them into probabilities; the final loss is the negative log of the softmax probability of the correct label.
The difference between the two losses: for the SVM loss, once the correct label clears the margin it is good enough, and further increasing its score does not reduce the loss, because the loss has already reached 0; for the softmax loss, the higher the correct label's score and the lower the wrong labels' scores, the lower the loss. (A sketch of both losses follows below.)
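A minimal NumPy sketch of both losses for a single example's score vector (margin fixed at 1 for the SVM loss, numerically stable softmax); the function names and the example scores are illustrative, not taken from the lecture code.

```python
import numpy as np

def multiclass_svm_loss(scores, correct_idx, margin=1.0):
    """Multiclass SVM (hinge) loss for one example's score vector."""
    margins = np.maximum(0, scores - scores[correct_idx] + margin)
    margins[correct_idx] = 0  # the correct class is not compared with itself
    return margins.sum()

def softmax_loss(scores, correct_idx):
    """Cross-entropy loss: negative log of the softmax probability of the correct class."""
    shifted = scores - scores.max()                    # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[correct_idx])

scores = np.array([3.2, 5.1, -1.7])                    # illustrative scores for 3 classes
print(multiclass_svm_loss(scores, correct_idx=0))      # 2.9
print(softmax_loss(scores, correct_idx=0))             # ~2.04 (natural log)
```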
2. Regularization. Minimizing only the data loss tends to overfit the training set; to address this, a regularization term is added to the loss function, following the principle of Occam's Razor. Now loss = data loss + regularization. One way to understand regularization: when fitting data with a polynomial, there are two ways to suppress overfitting. One is to directly limit the degree of the polynomial; the other is to leave the degree unconstrained but add a degree-related penalty to the loss function, which makes the algorithm prefer low-degree polynomials. Regularization is the latter approach. (A sketch follows below.)
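A minimal sketch of the loss = data loss + regularization idea, using an L2 penalty on the weights; `reg` (the regularization strength) is a hyperparameter and the function name is illustrative.

```python
import numpy as np

def full_loss(W, data_loss, reg=1e-3):
    """Total objective = data loss + regularization penalty."""
    reg_loss = reg * np.sum(W * W)   # L2 penalty: large weights cost more,
                                     # so simpler (smaller-weight) models are preferred
    return data_loss + reg_loss
```

The L2 penalty plays the same role as the degree penalty in the polynomial example: when two weight settings fit the training data equally well, the "simpler" one gives a lower total loss.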
3. The key to optimization is computing the gradient: derive it analytically, and use a numerical estimate to verify that the analytic gradient is correct (a gradient check). The step size of each iteration (the learning rate) is a hyperparameter that must be set in advance; Justin Johnson says he always checks the learning rate first to make sure it is set well. (A sketch of the numerical check follows below.)
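A sketch of the numerical check mentioned above: estimate the gradient with centered differences and compare it against the analytic gradient. The names here are illustrative; `f` is any function that maps a weight array to a scalar loss.

```python
import numpy as np

def numerical_gradient(f, W, h=1e-5):
    """Centered-difference estimate of df/dW, used to verify an analytic gradient."""
    grad = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = W[idx]
        W[idx] = old + h; fp = f(W)   # f(W + h)
        W[idx] = old - h; fm = f(W)   # f(W - h)
        W[idx] = old                  # restore the original value
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

# usage: compare against the analytic gradient via a relative error, e.g.
# rel_err = np.abs(g_num - g_analytic) / np.maximum(1e-8, np.abs(g_num) + np.abs(g_analytic))
```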
4. Stochastic Gradient Descent (SGD): the full loss is a sum over all training examples, and when the training set is very large (and, for images, every pixel is a feature, so each term is itself expensive) computing it is very slow; instead, a minibatch (commonly 32/64/128 examples) is used to estimate the loss and the gradient.
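A minimal sketch of a vanilla minibatch SGD loop, assuming a `loss_and_grad(W, X_batch, y_batch)` helper that returns the loss and its gradient with respect to W (that helper and the default settings are placeholders, not lecture code).

```python
import numpy as np

def sgd(W, X, y, loss_and_grad, learning_rate=1e-3, batch_size=64, num_iters=1000):
    """Vanilla minibatch SGD: estimate the gradient on a random subset of examples."""
    for _ in range(num_iters):
        batch = np.random.choice(X.shape[0], batch_size, replace=False)
        loss, dW = loss_and_grad(W, X[batch], y[batch])  # gradient on the minibatch only
        W -= learning_rate * dW                          # step along the negative gradient
    return W
```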
5. Image features:
1) Color histogram: measures the proportion of each color in the image (see the sketch after this list).
2) Histogram of Oriented Gradients (HOG): divide the image into small cells, extract the edges in each cell, bin the edge orientations into 9 directions, and describe the image's local edge structure. Very useful in object recognition.
3) Bag of Words: divide the image into small patches (or extract small patches around detected feature points); describe each patch with a code, where the coding scheme has to be designed yourself, and all the codes together form a dictionary (codebook). This idea is borrowed from natural language processing.
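A minimal sketch of the color-histogram feature from 1) above; the lecture describes bucketing colors (e.g., by hue), while this simplified version just bins each RGB channel and normalizes so the entries are proportions.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Color-histogram feature: proportion of pixels falling in each color bin.
    `img` is assumed to be an H x W x 3 array with values in [0, 255]."""
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()   # normalize so the entries sum to 1 (proportions)
```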
CS231N Spring Lecture 3 Lecture Notes