Image Features and Common Feature Extraction and Matching Methods


Common image features include color features, texture features, shape features, and spatial relationship features.

I. Color Features

(1) Features: Color is a global feature that describes the surface properties of the scene corresponding to an image or image region. Color features are generally pixel-based, so every pixel belonging to the image or region contributes. Because color is insensitive to changes in the orientation and size of an image or region, color features cannot capture local properties of objects in the image. Moreover, when querying with color features alone, a large database will often return many irrelevant images. The color histogram is the most commonly used way to express color features. Its advantage is that it is unaffected by image rotation and translation, and with further normalization it is also unaffected by changes in image scale; its drawback is that it does not express the spatial distribution of colors.

(2) Common Feature Extraction and matching methods

(1) Color histogram

Its advantage is that it concisely describes the global distribution of colors in an image, i.e., the proportion of each color in the whole image; it is particularly suitable for images that are hard to segment automatically and for which the spatial locations of objects need not be considered. Its disadvantage is that it cannot describe the local distribution of colors or the spatial position of each color, i.e., it cannot describe a specific object or region in the image.

The most commonly used color spaces: the RGB color space and the HSV color space.

Color histogram matching methods include: histogram intersection, the distance method, the center-moment method, the reference color table method, and the cumulative color histogram method.
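A quantized color histogram and the histogram-intersection match above can be sketched in a few lines of numpy; the bin count of 8 per channel is an illustrative choice, not a value prescribed by the original article:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Quantize an RGB image into bins^3 colors and return a normalized
    histogram. img is an HxWx3 uint8 array; normalizing makes the
    feature insensitive to image scale."""
    q = (img.astype(np.int32) * bins) // 256            # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical distributions."""
    return np.minimum(h1, h2).sum()
```

Because the histogram discards pixel positions entirely, two very different images with the same color proportions produce the same feature, which is exactly the limitation noted above.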

(2) Color set

The color histogram is a global color feature extraction and matching method that cannot distinguish local color information. The color set is an approximation of the color histogram. First, the image is converted from the RGB color space to a perceptually uniform color space (e.g., the HSV space), and the color space is quantized into several bins. Then the image is divided into several regions by automatic color segmentation, and each region is indexed by a component of the quantized color space, so that the image is expressed as a binary color index set. During matching, both the distance between the color sets of different images and the spatial relationships of the color regions are compared.

(3) Color moments

The mathematical basis of this method is that any color distribution in an image can be represented by its moments. Moreover, because the color distribution information is concentrated mainly in the low-order moments, the first-order moment (mean), the second-order moment (variance), and the third-order moment (skewness) are sufficient to express the color distribution of the image.

(4) Color coherence vector

Its core idea is to divide the pixels in each histogram bin into two parts: if the contiguous region occupied by some pixels in the bin is larger than a given threshold, those pixels are treated as coherent pixels; otherwise they are treated as incoherent pixels.

(5) Color correlogram

II. Texture Features

(1) Features: Texture features are also global features; they too describe the surface properties of the scene corresponding to an image or image region. However, since texture is only a property of an object's surface and cannot fully reflect the object's essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based; they require statistical computation over regions containing multiple pixels. In pattern matching, this regional character is a great advantage: matching does not fail because of small local deviations. As statistical features, texture features often possess rotation invariance and strong resistance to noise. They also have drawbacks, however. One obvious drawback is that when the image resolution changes, the computed texture may deviate considerably. In addition, because of possible lighting and reflection effects, the texture reflected in a 2-D image is not necessarily the real texture of the 3-D object surface.

For example, reflections in water, or the mutual reflections of smooth metal surfaces, lead to apparent texture changes. Since these are not characteristics of the objects themselves, texture information can sometimes be "misleading" when applied to retrieval.

Texture features are an effective method for retrieving texture images with large differences in coarseness, density, and similar attributes. However, when the differences in such easily distinguishable attributes are small, texture features often cannot accurately reflect the differences that human vision perceives between the textures.

(2) Common Feature Extraction and matching methods

Classification of texture feature description methods:

(1) Statistical methods

A typical statistical method is the texture feature analysis approach known as the gray-level co-occurrence matrix. Based on studies of the statistical characteristics of the co-occurrence matrix, Gotlieb and Kreyszig et al. experimentally identified four key features of the gray-level co-occurrence matrix: energy, inertia, entropy, and correlation. Another typical statistical method extracts texture features from an image's autocorrelation function (i.e., its energy spectrum function): by computing the energy spectrum, features such as texture fineness and directionality are extracted.

(2) Geometric methods

The so-called geometric methods are texture feature analysis methods based on the theory of texture primitives (basic texture elements). According to texture primitive theory, a complex texture can be composed of several simple texture primitives arranged in a regular pattern. Among the geometric methods, two algorithms have been comparatively influential: the Voronoi polygon feature method and the structural method.

(3) Model methods

Model methods take an image construction model as their basis and use the model parameters as texture features. Typical examples are the random field model methods, such as the Markov random field (MRF) model method and the Gibbs random field model method.

(4) Signal processing methods

The main texture feature extraction and matching methods include: the gray-level co-occurrence matrix, Tamura texture features, the autoregressive texture model, and wavelet transforms.

Feature extraction and matching with the gray-level co-occurrence matrix depends mainly on four parameters: energy, inertia, entropy, and correlation. Tamura texture features, based on psychological studies of human visual perception of texture, propose six properties: coarseness, contrast, directionality, line-likeness, regularity, and roughness. The simultaneous autoregressive (SAR) texture model is an application instance of the Markov random field (MRF) model.
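The four co-occurrence parameters above can be sketched in numpy; the quantization to 8 gray levels, the single displacement (dx, dy), and the log base are illustrative choices, not values fixed by the article:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one displacement (dx, dy).
    img is a 2-D uint8 array; intensities are quantized to `levels` bins
    and the matrix is normalized to a joint probability."""
    q = (img.astype(np.int32) * levels) // 256
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()   # reference pixels
    b = q[dy:, dx:].ravel()                             # displaced neighbors
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (a, b), 1.0)
    return m / m.sum()

def glcm_features(p):
    """Energy, inertia (contrast), entropy, and correlation of a
    normalized co-occurrence matrix p."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    inertia = ((i - j) ** 2 * p).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    return energy, inertia, entropy, corr
```

A perfectly uniform image concentrates all probability in one cell, giving maximal energy and zero inertia and entropy; rougher textures spread mass off the diagonal and raise inertia.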

III. Shape Features

(1) Features: Various retrieval methods based on shape features can effectively search for objects of interest in images, but they also share some common problems: ① shape-based retrieval still lacks a relatively complete mathematical model; ② retrieval results are often unreliable when the target is deformed; ③ many shape features describe only local properties of the target, and fully describing the target often demands high computation time and storage capacity; ④ the shape information reflected by many shape features is not entirely consistent with human intuition, i.e., similarity in the feature space can differ from the similarity perceived by the human visual system. In addition, a 2-D image is in fact the projection of a 3-D object onto a plane, so the shape reflected in a 2-D image is often not the real shape of the 3-D object and may exhibit various distortions as the viewpoint changes.

(2) Common Feature Extraction and matching methods

① Typical shape feature description methods

Generally, shape features have two types of Representation Methods: contour features and regional features. The contour feature of an image mainly targets the outer boundary of an object, while the regional feature of an image is related to the entire shape area.

Several typical methods for describing shape features:

(1) Boundary feature method

This method obtains shape parameters of an image by describing boundary features. Among its variants, the Hough transform for line detection and the boundary direction histogram are classic approaches. The Hough transform connects edge pixels into a closed region boundary using the global characteristics of the image; its basic idea is the duality of points and lines. The boundary direction histogram method first obtains the image edges by differentiation, then builds a histogram of edge magnitude and direction; a common implementation constructs the image's gray-gradient direction matrix.
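The point-line duality behind the Hough transform can be sketched directly: each edge pixel votes for every (rho, theta) line passing through it, and collinear pixels accumulate in the same cell. The accumulator resolution below is an illustrative choice:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, n_rho=100):
    """Vote in (rho, theta) space for a binary edge mask.
    Each edge point traces one sinusoid rho = x*cos(t) + y*sin(t);
    peaks in the returned accumulator correspond to lines."""
    ys, xs = np.nonzero(edge_mask)
    diag = np.hypot(*edge_mask.shape)                  # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = ((rho + diag) * (n_rho - 1) / (2 * diag)).astype(int)
        acc[r_idx, np.arange(n_theta)] += 1
    return acc, thetas
```

For a mask containing a single straight line of N edge pixels, the strongest accumulator cell receives exactly N votes.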

(2) Fourier shape descriptor method

The basic idea of Fourier shape descriptors is to use the Fourier transform of the object boundary as the shape description, exploiting the closedness and periodicity of the region boundary to convert a two-dimensional problem into a one-dimensional one.

Three shape expressions are derived from the boundary points: the curvature function, the centroid distance, and the complex coordinate function.
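The complex coordinate version is the easiest to sketch: treat each boundary point (x, y) as z = x + iy, take the FFT, and keep a few low-frequency magnitudes. Normalizing by the first coefficient (an assumed but standard convention) removes scale; dropping the magnitude of Z_0 removes translation:

```python
import numpy as np

def fourier_descriptors(boundary, n=8):
    """Fourier shape descriptors from a closed boundary.
    boundary: (N, 2) array of (x, y) points, in order around the contour.
    Returns the first n low-frequency coefficient magnitudes,
    normalized for translation and scale invariance."""
    z = boundary[:, 0] + 1j * boundary[:, 1]            # complex coordinates
    mag = np.abs(np.fft.fft(z))
    return mag[1:n + 1] / (mag[1] + 1e-12)              # skip Z_0 (translation)
```

A circle concentrates all its energy in the first coefficient, so its descriptor is (1, 0, 0, ...); jagged boundaries spread energy into higher coefficients.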

(3) Geometric parameter method

Shape representation and matching can also use simpler region-feature descriptions, for example the shape-factor method based on parameters such as moments, area, and perimeter. In the QBIC system, geometric parameters such as circularity, eccentricity, principal-axis orientation, and algebraic moment invariants are used for shape-based image retrieval.

It should be noted that shape parameter extraction must be based on image processing and segmentation, so the accuracy of the parameters is strongly affected by the segmentation quality; with poor segmentation, the parameters may not be extractable at all.
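Area, perimeter, and the circularity form factor 4*pi*A / P^2 mentioned above can be sketched from a binary segmentation mask; the edge-counting perimeter estimate here is one simple convention among several:

```python
import numpy as np

def shape_parameters(mask):
    """Area, perimeter, and circularity of a binary region mask.
    Perimeter counts the exposed (background-facing) edges of
    foreground pixels; circularity is 4*pi*A / P^2, which is
    largest for compact, disk-like regions."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)                            # border of background
    perim = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbor = np.roll(padded, shift, axis=(0, 1))[1:-1, 1:-1]
        perim += int((mask & ~neighbor).sum())          # exposed edges this side
    circularity = 4 * np.pi * area / (perim ** 2) if perim else 0.0
    return area, perim, circularity
```

Since these numbers come straight from the mask, any segmentation error propagates directly into the parameters, which is exactly the caveat noted above.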

(4) Shape invariant moment method

This method uses moments of the target region as shape description parameters.
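The best-known instance is the set of seven Hu invariant moments; a sketch of the first one, phi_1 = eta_20 + eta_02, built from normalized central moments (the remaining six follow the same pattern):

```python
import numpy as np

def hu_first_invariant(img):
    """First Hu invariant moment of a 2-D grayscale image or mask.
    Central moments remove translation; dividing by m00^2 normalizes
    the second-order moments for scale; the sum eta20 + eta02 is
    additionally invariant to rotation."""
    img = img.astype(np.float64)
    y, x = np.indices(img.shape)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - xc) ** 2 * img).sum()                  # central moments
    mu02 = ((y - yc) ** 2 * img).sum()
    return (mu20 + mu02) / m00 ** 2                     # eta20 + eta02
```

Because mu20 and mu02 simply exchange roles under a 90-degree rotation, their sum is unchanged, which is what makes the descriptor useful for matching rotated shapes.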

(5) Other methods

In recent years, work on shape representation and matching has also included the finite element method (FEM), the turning function, and the wavelet descriptor.



② Shape feature extraction and matching based on wavelets and relative moments

This method first obtains a multi-scale edge image using the modulus maxima of the wavelet transform, then computes the seven invariant moments at each scale, and converts them into ten relative moments. The relative moments over all scales are taken as the image feature vector, so that regions and both closed and non-closed structures are handled uniformly.

IV. Spatial Relationship Features




(1) Features: The so-called spatial relationship refers to the mutual spatial positions or relative directional relationships among the multiple objects segmented from an image; these relationships can be classified as connection/adjacency, overlap, and containment/inclusion. Spatial position information falls into two types: relative spatial position information and absolute spatial position information. The former emphasizes the relative arrangement of targets, such as above-below and left-right relations; the latter emphasizes the distance and orientation between targets. Obviously, relative spatial positions can be derived from absolute spatial positions, but relative spatial position information is often simpler to express.

Using spatial relationship features can strengthen the ability to discriminate image content, but such features are often sensitive to rotation, reflection, and scale changes of the image or targets. Moreover, in practice, spatial information alone is often insufficient to express scene content effectively and accurately; for retrieval, other features are usually needed in addition to spatial relationship features.

(2) Common Feature Extraction and matching methods

There are two ways to extract spatial relationship features. One is to segment the image automatically into the objects or color regions it contains, extract image features for each region, and build an index. The other is simply to divide the image evenly into several regular sub-blocks, extract features for each sub-block, and build an index.
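The second approach, which needs no segmentation, can be sketched in a few lines; the 4x4 grid and 8-bin gray histograms are illustrative parameters:

```python
import numpy as np

def grid_features(img, rows=4, cols=4, bins=8):
    """Split a 2-D grayscale image into a rows x cols grid and
    concatenate a small normalized gray-level histogram per sub-block.
    Unlike a single global histogram, the concatenation order encodes
    coarse spatial layout."""
    h, w = img.shape[:2]
    feats = []
    for r in range(rows):
        for c in range(cols):
            block = img[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            q = (block.astype(np.int32) * bins) // 256   # quantize gray levels
            hist = np.bincount(q.ravel(), minlength=bins).astype(np.float64)
            feats.append(hist / max(hist.sum(), 1))      # per-block normalization
    return np.concatenate(feats)
```

Two images whose colors agree globally but sit in different places now produce different vectors, which is precisely the discriminative power that pure global histograms lack.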

 

Source:

Http://www.china-vision.net/blog/user1/218/archives/2006/2006922155355.html
