Vector Norms and Regularization in Machine Learning

Source: Internet
Author: User
Tags: square root
1. Vector Norm

A norm is a mathematical concept analogous to "length"; formally, it is a kind of function.
Regularization and sparse coding are two interesting applications of norms in machine learning.
For a vector $a \in \mathbb{R}^n$, its $L_p$ norm is

$$\|a\|_p = \left( \sum_{i=1}^{n} |a_i|^p \right)^{\frac{1}{p}} \tag{1}$$
Commonly used are:

L0 norm: the number of nonzero elements in the vector.
L1 norm: the sum of the absolute values of the elements.
L2 norm: the square root of the sum of the squares of the elements.

2. Comparison

A vector encodes a magnitude and a direction, so different norms correspond to different notions of distance.
The most common are the Manhattan distance and the Euclidean distance. When the vector is 2-dimensional, they are easy to show graphically:

Manhattan distance

The Manhattan distance corresponds to the L1 norm.
Manhattan's streets form a grid, which can be pictured as a chessboard: to get from point A to point B, you can only move along the grid, horizontally or vertically.
For the L1 distance, the set of points at distance 1 from the origin is shown in Figure 2-1.

Euclidean distance

The Euclidean distance corresponds to the L2 norm.
For the L2 distance, the set of points at distance 1 from the origin is also shown in Figure 2-1.
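These norms, and hence the two distances, are easy to check numerically. A minimal NumPy sketch (the example point is made up):

```python
import numpy as np

# A 2-D point, so the results can be compared with Figure 2-1.
a = np.array([3.0, -4.0])

l0 = np.count_nonzero(a)      # L0 "norm": number of nonzero elements
l1 = np.sum(np.abs(a))        # L1 norm: Manhattan distance from the origin
l2 = np.sqrt(np.sum(a ** 2))  # L2 norm: Euclidean distance from the origin

print(l0, l1, l2)  # 2 7.0 5.0

# np.linalg.norm computes the L1 and L2 norms directly.
assert np.isclose(l1, np.linalg.norm(a, ord=1))
assert np.isclose(l2, np.linalg.norm(a, ord=2))
```

Note that the L0 "norm" is not a true norm (it is not homogeneous), which is why `np.linalg.norm` treats `ord=0` as a special case.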


Fig. 2-1 The unit circle under different norms

3. Regularization

Despite the similar name, regularization has nothing to do with regular expressions.
Regularization is used to prevent overfitting and to improve the generalization ability of a model.

In a regression problem, the loss function is the squared error

$$L(x; \theta) = \sum_i (\theta^T x_i - y_i)^2 \tag{3-1}$$

and the objective is to minimize it.
Depending on the number of zero elements in the vector $\theta$, the three fitted curves in the figure below are obtained:


Fig. 3-1 Ideal fit and overfitting

The more parameters a model has, the more expressive it is, but it is also more likely to overfit, as in the figure above. This is especially true when the training set contains noise: we do not want the model to fit noisy outliers, because doing so increases the error on the test set.
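A small self-contained illustration of this effect (the quadratic ground truth, noise level, and polynomial degrees are made up for this sketch): a degree-9 polynomial can drive the training error on 10 noisy points to nearly zero by fitting the noise, which is exactly the overfitting of Figure 3-1.

```python
import numpy as np

# Noisy samples of a quadratic; the seed is fixed so the run is deterministic.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.1 * rng.standard_normal(10)

errors = {}
for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)  # least-squares polynomial fit
    errors[degree] = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, errors[degree])
```

The training error keeps shrinking as the degree grows, even though the degree-9 model is fitting noise rather than the underlying quadratic; the gap only shows up on held-out data.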
So we add the L2 norm of $\theta$ to the objective function as a regularization term:

$$L(x; \theta) = \sum_i (\theta^T x_i - y_i)^2 + \lambda \|\theta\|_2 \tag{3-2}$$

where $\lambda$ is the coefficient of the regularization term.
The more complex the parameters, the larger their L2 norm, so minimizing (3-2) constrains the complexity of the parameters.
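In practice the squared L2 norm $\lambda \|\theta\|_2^2$ is commonly used instead of $\|\theta\|_2$ as written in (3-2), because it yields the closed-form ridge-regression solution $\theta = (X^T X + \lambda I)^{-1} X^T y$. A minimal sketch with NumPy (the data and $\lambda$ value are made up):

```python
import numpy as np

# Toy linear-regression data: y = X @ theta_true + noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
theta_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ theta_true + 0.1 * rng.standard_normal(50)

lam = 1.0  # regularization coefficient lambda in (3-2)

# Ordinary least squares: minimizes sum_i (theta^T x_i - y_i)^2.
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression: adds lambda * ||theta||_2^2, giving the closed form
# theta = (X^T X + lambda I)^{-1} X^T y.
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# The penalty shrinks the parameter vector toward zero.
print(np.linalg.norm(theta_ols), np.linalg.norm(theta_ridge))
```

The ridge solution always has an L2 norm no larger than the unregularized one, which is the "constrain the complexity" effect described above.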
