Contrastive Loss (Siamese network)


In a Siamese (twin) neural network, the loss function used is the contrastive loss, which effectively handles the relationship between the paired data the network consumes. The contrastive loss is expressed as:

L = \frac{1}{2N}\sum_{n=1}^{N}\left[\, y_n d_n^2 + (1 - y_n)\max(margin - d_n,\, 0)^2 \,\right]

where d_n = ||a_n − b_n||_2 is the Euclidean distance between the two samples' features, y_n is the label indicating whether the two samples match (y_n = 1 means the two samples are similar or matching, y_n = 0 means they do not match), and margin is a preset threshold.
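As a concrete reference, here is a minimal PyTorch sketch of this loss (the function name contrastive_loss and the default margin of 1.0 are illustrative choices, not from the original article):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, y, margin=1.0):
    """Contrastive loss over a batch of embedding pairs.

    a, b   : (N, D) tensors holding the paired sample features.
    y      : (N,) tensor of labels; 1 = similar/matching pair, 0 = non-matching.
    margin : threshold beyond which dissimilar pairs stop contributing.
    """
    d = F.pairwise_distance(a, b)                           # d_n = ||a_n - b_n||_2
    similar_term = y * d.pow(2)                             # penalizes similar pairs that are far apart
    dissimilar_term = (1 - y) * F.relu(margin - d).pow(2)   # penalizes dissimilar pairs closer than the margin
    return (similar_term + dissimilar_term).mean() / 2      # 1/(2N) * sum over the batch
```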

This loss function comes from Yann LeCun's paper "Dimensionality Reduction by Learning an Invariant Mapping" and was originally used for dimensionality reduction: after dimensionality reduction (feature extraction), samples that were similar in the original space should remain similar in the feature space, while samples that were dissimilar in the original space should remain dissimilar after the reduction.

From the expression above, it is clear that the contrastive loss expresses how well pairs of samples match, and so it can be used to train a feature-extraction model. When y_n = 1 (the samples are similar), only the term ∑ y_n d_n² remains: if two originally similar samples are mapped far apart in the feature space, the loss grows with the Euclidean distance (an increasing function), indicating that the current model is poor. When y_n = 0 (the samples are dissimilar), only the term ∑ (1 − y_n) max(margin − d_n, 0)² remains (a decreasing function of the distance): if two dissimilar samples end up with a small Euclidean distance in the feature space, the loss value grows.
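A quick numeric check of both branches, continuing the contrastive_loss sketch above (the coordinates and margin values are arbitrary illustrations):

```python
a = torch.tensor([[0.0, 0.0]])
b = torch.tensor([[3.0, 4.0]])   # Euclidean distance d = 5

# Similar pair (y = 1): loss grows with distance.
print(contrastive_loss(a, b, torch.tensor([1.0])))   # 5^2 / 2 = 12.5
print(contrastive_loss(a, a, torch.tensor([1.0])))   # ~0: the embeddings coincide

# Dissimilar pair (y = 0): loss grows as the distance shrinks below the margin.
print(contrastive_loss(a, b, torch.tensor([0.0]), margin=6.0))   # (6 - 5)^2 / 2 = 0.5
print(contrastive_loss(a, a, torch.tensor([0.0]), margin=6.0))   # ~6^2 / 2 = 18
```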
The accompanying figure plots the loss value against the Euclidean distance between the sample features: the red dashed line shows the loss for similar pairs (y = 1), and the blue solid line shows the loss for dissimilar pairs (y = 0). Here m is the margin threshold; its appropriate size depends on the specific problem and target. In practice, however, contrastive loss often proves rather weak at fitting training sets with many classes. Improvements proposed for this problem include triplet loss, quadruplet loss, triplet loss with batch hard mining (TriHard loss), and margin sample mining loss (MSML).
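For comparison, here is a minimal sketch of the plain triplet loss, the simplest of the improvements listed above (the batch-hard mining that the TriHard variant adds is omitted, and the margin of 0.3 is an arbitrary illustration):

```python
def triplet_loss(anchor, positive, negative, margin=0.3):
    """Triplet loss: pull the anchor toward the positive sample
    and push it away from the negative sample by at least `margin`."""
    d_ap = F.pairwise_distance(anchor, positive)   # anchor-positive distance
    d_an = F.pairwise_distance(anchor, negative)   # anchor-negative distance
    return F.relu(d_ap - d_an + margin).mean()     # zero once d_an > d_ap + margin
```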
