Image similarity measurement

[Reprint] Original Source: http://blog.sina.com.cn/s/blog_4a540be60100vjae.html

 

Image similarity calculation is mainly used to score how similar two images are; whether the two images match is then decided from that score.

   In computer vision it is used for detection and tracking: given an existing template, the most similar region of the image is located to find the target position and keep following it. Existing algorithms such as blob tracking, mean shift, CamShift, and particle filters all rely on theory from this area.

  Another application is content-based image retrieval, that is, searching for images by their content. For example, to list the images in a massive database that best match a query image, each image is abstracted into a few feature values (the trace transform, image hashing, or SIFT feature vectors, for instance); these features are matched against the database and the corresponding images are returned, which greatly improves efficiency.
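To make the hashing idea above concrete, here is a minimal average-hash sketch of my own (not from the original post; the file names '1.jpg' and '2.jpg' are placeholders): each image is shrunk to 8x8, thresholded by its mean to obtain a 64-bit fingerprint, and the fingerprints are compared with the Hamming distance.

% My sketch: average hash as a crude content fingerprint
smallA = imresize(rgb2gray(imread('1.jpg')), [8 8]);
smallB = imresize(rgb2gray(imread('2.jpg')), [8 8]);
hashA = smallA(:) > mean(smallA(:));   % 64-bit fingerprint of image A
hashB = smallB(:) > mean(smallB(:));   % 64-bit fingerprint of image B
hammingDist = sum(hashA ~= hashB)      % small distance => likely similar images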

  The following describes the principles and effects of the algorithms I have seen.

  (1) Histogram Matching.

     Given image A and image B, compute their histograms histA and histB, then compute a distance between the two histograms, such as the normalized correlation coefficient, the Bhattacharyya distance, or the histogram intersection distance.

     The idea is to measure image similarity by the difference between two simple mathematical vectors. The method is widely used because a histogram normalizes well (for example into 256 bins), so two images of different resolutions can be compared conveniently just by comparing their histograms, and the amount of computation is small.

      Disadvantages of this method:

        1. A histogram reflects the probability distribution of the gray values of an image's pixels, e.g. how many pixels have gray value 200, but it says nothing about where those pixels are. The skeleton of the image (which objects it contains, what shapes they have, and how the gray levels are distributed in each region) is completely lost in the histogram. A classic problem: an image that is black on top and white on the bottom has exactly the same histogram as one that is white on top and black on the bottom, giving a similarity of 100%.

         2. The distance between the two histograms is measured with the Bhattacharyya distance or a normalized correlation coefficient; treating the images as nothing more than mathematical vectors in this way is a rather weak form of analysis.

         3. In terms of information content, summarizing the similarity of two images in a single number is itself a form of information compression: representing the distance between two 256-element vectors (assuming a 256-bin histogram) with one value is bound to be inaccurate.

  The following is a MATLAB demo, with its experimental result, of image similarity calculation based on histogram distance.

% Compute the histogram distance between two images
% (Bhattacharyya coefficient method)

M = imread('1.jpg');
N = imread('2.jpg');
I = rgb2gray(M);
J = rgb2gray(N);

[count1, x] = imhist(I);
[count2, x] = imhist(J);
sum1 = sum(count1); sum2 = sum(count2);
sumup = sqrt(count1 .* count2);
sumdown = sqrt(sum1 * sum2);
sumup = sum(sumup);                        % Bhattacharyya coefficient numerator
figure(1);
subplot(2, 2, 1); imshow(I);
subplot(2, 2, 2); imshow(J);
subplot(2, 2, 3); imhist(I);
subplot(2, 2, 4); imhist(J);
histdist = 1 - sqrt(1 - sumup / sumdown)   % close to 1 when the histograms are similar
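The demo above only uses the Bhattacharyya coefficient. As a small addition of mine (not part of the original demo), the other two distances mentioned earlier can be computed from the same count1 and count2 vectors once the script has run:

% My addition: alternative histogram distances, run after the demo above
histIntersect = sum(min(count1, count2)) / min(sum1, sum2)   % histogram intersection, 1 = identical
R = corrcoef(count1, count2);                                % base-MATLAB correlation
histCorr = R(1, 2)                                           % normalized correlation coefficient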

 


   We can see that this way of measuring image similarity has many drawbacks, which is why many people have modified it. One example is the FragTrack algorithm (see the paper for details): the image is divided into horizontal and vertical patches, the best-matching histogram is searched for each patch, and the patch scores are combined, so that the positional information of the histograms is taken into account when computing the similarity of the two images. Its drawback is that it is computationally slow.
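To show how positional information can enter, here is a rough sketch of my own (a fixed-grid simplification, not the FragTrack patch search itself): both images are cut into a 4-by-4 grid and the per-patch Bhattacharyya coefficients are averaged.

% My sketch: fixed-position 4x4 patch histogram comparison (not FragTrack)
I = rgb2gray(imread('1.jpg'));
J = rgb2gray(imread('2.jpg'));
J = imresize(J, size(I));                  % assume comparable sizes for simplicity
[rows, cols] = size(I);
nb = 4;                                    % 4x4 grid of patches
rEdges = round(linspace(1, rows + 1, nb + 1));
cEdges = round(linspace(1, cols + 1, nb + 1));
score = 0;
for r = 1:nb
    for c = 1:nb
        pI = I(rEdges(r):rEdges(r+1)-1, cEdges(c):cEdges(c+1)-1);
        pJ = J(rEdges(r):rEdges(r+1)-1, cEdges(c):cEdges(c+1)-1);
        h1 = imhist(pI); h1 = h1 / sum(h1);
        h2 = imhist(pJ); h2 = h2 / sum(h2);
        score = score + sum(sqrt(h1 .* h2));   % Bhattacharyya coefficient of the patch
    end
end
patchSim = score / nb^2                    % 1 means identical patch histograms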

 Another method computes an enclosing polygon of the image. Typically the foreground of the tracked image is extracted and its enclosing polygon computed; the polygon is decomposed into triangles, the histogram inside each triangle is calculated, and the similarity distance between the two groups of histograms is then computed. In this way the positional information of the histograms is also incorporated.

 (2) Matrix decomposition

  An image is itself a matrix, so matrix decomposition can be used to extract robust features that represent the values of the matrix elements and their distribution.

  The most common are singular value decomposition (SVD) and non-negative matrix factorization (NMF).

  Below is a brief introduction to some properties of the SVD. If you want to explore more deeply, there is related material on the Internet:

 <1> Stability of singular values

 <2> Scale invariance of singular values

 <3> Rotation invariance of singular values

 <4> Compression property of singular values

   In summary, singular value decomposition describes the image as a whole. The singular value feature vector of an image is not only invariant under algebraic and geometric transformations such as orthogonal transformation, rotation, translation, and mirroring, but also has good stability and noise resistance, so it is widely used in pattern recognition and image analysis. The purpose of using SVD is to obtain a unique, stable feature description, reduce the dimensionality of the feature space, and improve robustness to interference and noise. However, the singular vectors obtained by SVD can contain negative values, so they are hard to interpret.
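As a minimal sketch of the idea (my own code, not the author's project), one way to use this is to compare the leading singular values of the two grayscale images with a cosine similarity; the number of singular values kept is an arbitrary choice here.

% My sketch: compare the leading singular values of the two images
A = im2double(rgb2gray(imread('1.jpg')));
B = im2double(rgb2gray(imread('2.jpg')));
B = imresize(B, size(A));                 % align sizes so the spectra are comparable
sa = svd(A);                              % singular values, largest first
sb = svd(B);
k = min(20, min(numel(sa), numel(sb)));   % arbitrary number of leading values
simSVD = dot(sa(1:k), sb(1:k)) / (norm(sa(1:k)) * norm(sb(1:k)))   % 1 = identical spectra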

 Non-negative matrix factorization (NMF):

   The main idea of NMF is to factor a non-negative matrix into a basis matrix and a coefficient matrix that capture the main information of the image, and the basis matrix has a natural interpretation: for face images, for example, the basis vectors correspond to conceptual parts such as eyes and noses, and each source image is a weighted combination of these parts. NMF therefore plays a large role in face recognition and similar applications.
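Here is a minimal NMF sketch of my own (nnmf needs the Statistics and Machine Learning Toolbox; the rank and the resized image dimensions are arbitrary choices, not from the original post): factor image A into a basis W and coefficients H, then express image B on the same basis with non-negative least squares and use the residual as a dissimilarity score.

% My sketch: NMF basis from image A, non-negative fit of image B on that basis
A = im2double(imresize(rgb2gray(imread('1.jpg')), [64 64]));
B = im2double(imresize(rgb2gray(imread('2.jpg')), [64 64]));
k = 10;                                   % arbitrary factorization rank
[W, H] = nnmf(A, k);                      % A ~ W*H, W = basis, H = coefficients
Hb = zeros(k, size(B, 2));
for j = 1:size(B, 2)
    Hb(:, j) = lsqnonneg(W, B(:, j));     % non-negative coefficients for B's column j
end
nmfResidual = norm(B - W * Hb, 'fro') / norm(B, 'fro')   % 0 = B fully explained by A's parts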

  The following experiment illustrates the application of SVD and NMF decomposition to image similarity determination; since it is part of my current project, I will not reveal more details here.

Of course, there are many other mathematical ways to compute matrix features, such as the trace transform and invariant moments. If you need such material, contact me and I can help.

(3) Image similarity calculation based on feature points

    Every image has feature points that mark important locations in the image, somewhat like the inflection points of a function. Commonly used ones include Harris corners and SIFT features. The feature points extracted from the two images are then compared; if many of them match, the two images can be considered highly similar. Here we mainly introduce the SIFT operator.

  For the principles and code of SIFT, see David Lowe's website.

David G. Lowe's SIFT website

  We can then judge whether two images match by counting the number of matching feature points. The advantage of this algorithm is that for the same object photographed from two different angles, many matching points can still be found, and I have always found the results fairly accurate. However, because each feature point requires a long descriptor, it is also time-consuming, so it is rarely used in real-time video processing. Another advantage is that the matched feature points can be used for image rectification; for how to rectify images with SIFT, see my other blog post.
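As a hedged sketch of the matching step (using MATLAB's Computer Vision Toolbox rather than Lowe's original code; detectSIFTFeatures needs R2021b or later, and detectSURFFeatures can stand in on older releases), the decision below counts matched descriptors and applies the 60% rule of thumb used in the example that follows.

% My sketch: feature-point matching with the Computer Vision Toolbox
I = rgb2gray(imread('1.jpg'));
J = rgb2gray(imread('2.jpg'));
ptsI = detectSIFTFeatures(I);
ptsJ = detectSIFTFeatures(J);
[fI, vptsI] = extractFeatures(I, ptsI);
[fJ, vptsJ] = extractFeatures(J, ptsJ);
pairs = matchFeatures(fI, fJ);            % each row indexes a matched descriptor pair
matchRatio = size(pairs, 1) / min(ptsI.Count, ptsJ.Count);
isSimilar = matchRatio > 0.6              % crude decision, threshold from the example below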

  

For example, suppose 50 feature points are found in the image on the left; if more than 60% of them can be matched to points in the image on the right, I consider the two images similar.

By matching the corresponding points with SIFT, the six parameters of the affine transformation can be computed, and the corrected image is then obtained by applying the inverse transformation. The effect is quite good, which shows that SIFT is robust to rotation and noise.

SIFT cannot be trusted blindly either; RANSAC is generally used to remove incorrect matches and achieve better results. There are also many algorithms that improve on SIFT. I hope there will be more exchanges in this area.
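Here is a rough, self-contained sketch of that pipeline (my own code, Computer Vision Toolbox; estimateGeometricTransform uses MSAC, a RANSAC variant, to discard bad matches while fitting the 6-parameter affine transform):

% My sketch: robust affine estimation from SIFT matches, then rectification
I = rgb2gray(imread('1.jpg'));
J = rgb2gray(imread('2.jpg'));
[fI, vptsI] = extractFeatures(I, detectSIFTFeatures(I));
[fJ, vptsJ] = extractFeatures(J, detectSIFTFeatures(J));
pairs = matchFeatures(fI, fJ);
[tform, inlierJ, inlierI] = estimateGeometricTransform( ...
    vptsJ(pairs(:, 2)), vptsI(pairs(:, 1)), 'affine');    % outliers rejected by MSAC
corrected = imwarp(imread('2.jpg'), tform);                % warp image 2 toward image 1
figure; imshowpair(I, rgb2gray(corrected), 'montage');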

 

 

 

 

 
