Foundations of Image Processing: A Brief Introduction to Image Features

Common image features include color features, texture features, shape features, and spatial relationship features.

Color Features

(1) Characteristics: a color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Color features are generally pixel-based: every pixel belonging to the image or region makes its own contribution. Because color is insensitive to changes in the orientation and size of the image or region, color features cannot capture the local properties of objects well. Moreover, when only color features are used for querying, a large database often returns many irrelevant images. The color histogram is the most commonly used way to express color features. Its advantage is that it is unaffected by rotation and translation of the image and, with normalization, by scale changes as well; its disadvantage is that it does not express the spatial distribution of colors.

(2) Common feature extraction and matching methods

(1) Color histogram

Its advantage is that it concisely describes the global distribution of colors in an image, i.e., the proportion of each color in the whole image. It is particularly suitable for images that are difficult to segment automatically and for cases where the spatial locations of objects need not be considered. Its disadvantage is that it cannot describe the local distribution of colors or the spatial position of each color, i.e., it cannot describe any specific object or region in the image.

The most common color spaces are the RGB and HSV color spaces.

Color histogram matching methods include histogram intersection, the distance method, the central-moment method, the reference color table method, and the cumulative color histogram method.
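
As a concrete illustration, here is a minimal NumPy sketch of a quantized RGB histogram with histogram-intersection matching; the bin count and quantization scheme are arbitrary choices made for the example:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantized RGB histogram, normalized so it does not depend on image size.

    image: H x W x 3 uint8 array.
    """
    # Map each 0-255 channel value to one of `bins` levels.
    q = (image.astype(np.int32) * bins) // 256
    # Fold the three channel indices into a single bin index.
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()  # normalization gives scale invariance

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: overlap between two normalized histograms."""
    return np.minimum(h1, h2).sum()

# Usage with random stand-in images:
a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(histogram_intersection(color_histogram(a), color_histogram(b)))
```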

(2) Color set

The color histogram is a global method for color feature extraction and matching and cannot distinguish local color information. The color set is an approximation of the color histogram. First, the image is converted from the RGB color space to a perceptually uniform color space (such as HSV), which is quantized into a number of bins. Then an automatic color segmentation technique divides the image into regions, and each region is indexed by the quantized color components it contains, so that the image is expressed as a binary color index set. During matching, the distances between the color sets of different images and the spatial relationships among their color regions are compared.
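
A minimal sketch of the binary color index set for one region is given below; the automatic color segmentation step is abstracted away (the function simply receives the pixels of a single region), and the bin count and coverage threshold are illustrative assumptions:

```python
import numpy as np

def color_set(region_hsv, bins=8, tau=0.05):
    """Binary color index set for one segmented region.

    region_hsv: N x 3 array of HSV pixel values scaled to [0, 1).
    Returns a binary vector with 1 wherever a quantized color covers
    more than a `tau` fraction of the region.
    """
    q = np.minimum((region_hsv * bins).astype(int), bins - 1)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    coverage = np.bincount(idx, minlength=bins ** 3) / len(idx)
    return (coverage > tau).astype(np.uint8)
```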

(3) Color moments

The mathematical basis of this method is that any color distribution in an image can be represented by its moments. Moreover, because the information in a color distribution is concentrated in the low-order moments, the first moment (mean), second moment (variance), and third moment (skewness) of each color channel are sufficient to express the color distribution of the image.
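
For example, a compact nine-dimensional color-moment descriptor (three moments per channel) might look like this sketch:

```python
import numpy as np

def color_moments(image):
    """Mean, standard deviation, and skewness of each color channel.

    image: H x W x 3 array; returns a 9-dimensional feature vector.
    """
    pixels = image.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    # Signed cube root keeps skewness in units comparable to mean/std.
    skew = np.cbrt(((pixels - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])
```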

(4) Color coherence vector

The core idea is to divide the pixels in each histogram bin into two parts: if the connected region formed by pixels in a bin is larger than a given threshold, those pixels are treated as coherent pixels; otherwise they are treated as incoherent pixels.
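
A toy version of this idea, run on one quantized channel for simplicity and using SciPy's connected-component labeling; the coherence threshold (a fraction of the image area) is an arbitrary choice:

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(gray, bins=8, tau_ratio=0.01):
    """Coherent/incoherent pixel counts per bin for a single-channel image.

    A pixel is 'coherent' if the connected component it belongs to
    (within its bin) is larger than tau = tau_ratio * image area.
    """
    quantized = (gray.astype(np.int32) * bins) // 256
    tau = tau_ratio * gray.size
    ccv = np.zeros((bins, 2))
    for b in range(bins):
        labels, n = ndimage.label(quantized == b)
        sizes = np.bincount(labels.ravel())[1:]  # skip background label 0
        for size in sizes:
            ccv[b, 0 if size > tau else 1] += size
    return ccv  # column 0: coherent pixels, column 1: incoherent
```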

(5) Color correlogram

The color correlogram characterizes how the spatial correlation between pairs of colors varies with distance, thereby encoding spatial color information that a plain histogram discards.

Texture Features

(1) Characteristics: texture features are also global features; they likewise describe the surface properties of the scene corresponding to an image or image region. However, since texture reflects only the surface of an object and cannot fully capture its essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based; they must be computed statistically over regions containing many pixels. In pattern matching, such region-level features have a clear advantage: matching does not fail because of local deviations. As statistical features, texture features often possess rotation invariance and are quite robust to noise. They also have drawbacks. One obvious drawback is that when the image resolution changes, the computed texture may deviate considerably. In addition, because of the influence of illumination and reflection, the texture visible in a 2-D image is not necessarily the real texture of the corresponding 3-D object's surface.

For example, reflections in water and inter-reflections on smooth metal surfaces produce apparent texture changes. Because these do not belong to the objects themselves, texture information can sometimes be "misleading" when applied to retrieval.

Texture features are an effective method for retrieving texture images that differ greatly in coarseness, density, and other easily distinguishable properties. However, when these easily distinguishable differences are small, ordinary texture features often fail to reflect differences that human vision still perceives between the textures.

(2) Common feature extraction and matching methods

Classification of texture feature description methods

(1) Statistical method

A typical statistical method is texture feature analysis based on the gray-level co-occurrence matrix. Building on studies of the various statistics of the co-occurrence matrix, Gotlieb and Kreyszig experimentally identified its four key features: energy, inertia, entropy, and correlation. Another typical statistical approach extracts texture features from the image's autocorrelation function (i.e., its energy spectrum function): by computing the energy spectrum, parameters such as texture coarseness and directionality are obtained.
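
For instance, with scikit-image (assuming the graycomatrix/graycoprops API of recent releases) the four key GLCM features can be computed as in the sketch below; the distances and angles are illustrative choices, and entropy is computed by hand since it is not available as a property in older versions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, distances=(1,), angles=(0.0, np.pi / 2)):
    """Energy, inertia (contrast), entropy, and correlation from the GLCM.

    gray: 2-D uint8 image.
    """
    glcm = graycomatrix(gray, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    energy = graycoprops(glcm, 'energy').mean()
    inertia = graycoprops(glcm, 'contrast').mean()  # a.k.a. inertia
    correlation = graycoprops(glcm, 'correlation').mean()
    # Average entropy over the (distance, angle) planes of the matrix.
    p = glcm[glcm > 0]
    entropy = -(p * np.log2(p)).sum() / (len(distances) * len(angles))
    return energy, inertia, entropy, correlation
```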

(2) Geometric method

The geometric method is a texture feature analysis method based on the theory of texture primitives (basic texture elements). According to this theory, a complex texture can be composed of several simple texture primitives arranged in a regular pattern. Two influential algorithms in the geometric family are the Voronoi tessellation feature method and the structural method.

(3) Model method

The model method takes an image construction model as its basis and uses the model's parameters as texture features. Typical examples are random-field model methods, such as the Markov random field (MRF) model method and the Gaussian random field model method.
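
As a rough illustration of the model idea, the sketch below fits a very simple autoregressive model by least squares, predicting each pixel from its four neighbors; the fitted weights and the residual variance then serve as texture features. This is a toy stand-in under simplifying assumptions, not a full SAR/MRF estimator:

```python
import numpy as np

def ar_texture_parameters(gray):
    """Least-squares fit of a 4-neighbor autoregressive texture model."""
    g = gray.astype(np.float64)
    y = g[1:-1, 1:-1].ravel()
    X = np.stack([g[:-2, 1:-1].ravel(),   # neighbor above
                  g[2:, 1:-1].ravel(),    # neighbor below
                  g[1:-1, :-2].ravel(),   # neighbor to the left
                  g[1:-1, 2:].ravel(),    # neighbor to the right
                  np.ones_like(y)],       # bias term
                 axis=1)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ theta
    return theta[:4], residual.var()      # weights + noise variance
```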

(4) Signal processing method

Commonly used texture feature extraction and matching methods include the gray-level co-occurrence matrix, Tamura texture features, the autoregressive texture model, and the wavelet transform.

Feature extraction and matching with the gray-level co-occurrence matrix rely mainly on four parameters: energy, inertia, entropy, and correlation. Grounded in the psychology of human visual perception of texture, the Tamura texture features comprise six attributes: coarseness, contrast, directionality, line-likeness, regularity, and roughness. The simultaneous autoregressive (SAR) model is an application instance of the MRF model.
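
As one example from the signal-processing family, here is a sketch of wavelet-based texture features using PyWavelets; the wavelet family and decomposition depth are arbitrary choices:

```python
import numpy as np
import pywt

def wavelet_texture_features(gray, wavelet='db2', level=3):
    """Mean energy of each wavelet subband as a texture descriptor."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    features = [np.mean(coeffs[0] ** 2)]            # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                 # detail subbands
        features += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.array(features)
```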

Shape Features

(1) Characteristics: retrieval methods based on shape features can make effective use of the objects of interest in an image. They share some common problems, however: ① there is still no comprehensive mathematical model of shape; ② the retrieval results become unreliable when the target deforms; ③ many shape features describe only local properties of the target, and fully describing it demands substantial computation time and storage; ④ the shape information captured by many features does not quite match human intuition, i.e., similarity in feature space can differ from similarity as perceived by the human visual system. In addition, a 3-D object appearing in a 2-D image is only its projection onto some plane of the space, so the shape seen in the 2-D image is often not the object's real shape and may be distorted in various ways as the viewpoint changes.

(2) Common feature extraction and matching methods

Ⅰ. Typical shape feature description methods

Generally, shape features have two classes of representation: contour features and region features. Contour features concern only the outer boundary of an object, whereas region features relate to the entire shape region.

Several typical methods for describing shape features:

(1) Boundary feature method. This method obtains shape parameters of the image by describing boundary features. The Hough transform method for detecting parallel straight lines and the boundary direction histogram method are classic examples. The Hough transform links edge pixels into a closed region boundary by exploiting the image's global features; its basic idea is the duality between points and lines. The boundary direction histogram method first obtains the image edges by differentiation and then builds a histogram over edge magnitude and direction, commonly by constructing the image's gray-gradient direction matrix.
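
A simple NumPy stand-in for the boundary direction histogram, built from gradient orientations; the magnitude threshold and bin count are arbitrary choices:

```python
import numpy as np

def edge_direction_histogram(gray, bins=36, mag_thresh=20.0):
    """Histogram of gradient directions over edge pixels."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)               # derivatives along rows, columns
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)              # direction in (-pi, pi]
    edge = mag > mag_thresh               # crude edge detection
    hist, _ = np.histogram(ang[edge], bins=bins, range=(-np.pi, np.pi),
                           weights=mag[edge])
    return hist / max(hist.sum(), 1e-12)  # normalize to a distribution
```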

(2) Fourier shape descriptor method

The basic idea of the Fourier shape descriptor is to use the Fourier transform of the object boundary as the shape description, exploiting the closedness and periodicity of the region boundary to reduce a two-dimensional problem to a one-dimensional one.

Three shape expressions are derived from the boundary points: the curvature function, the centroid distance, and the complex coordinate function.
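
A minimal sketch of Fourier descriptors from the complex coordinate function; the boundary is assumed to be an ordered sequence of points along a closed contour, and the number of retained coefficients is an arbitrary choice:

```python
import numpy as np

def fourier_descriptors(boundary, n_coeffs=16):
    """Shape descriptor from the complex-coordinate form of a closed boundary.

    boundary: N x 2 array of (x, y) points in traversal order.
    """
    z = boundary[:, 0] + 1j * boundary[:, 1]  # complex coordinate function
    z = z - z.mean()                          # removes translation
    spectrum = np.fft.fft(z)
    mags = np.abs(spectrum)                   # dropping phase removes rotation
                                              # and start-point dependence
    mags = mags / (mags[1] + 1e-12)           # first harmonic: scale invariance
    return mags[1:n_coeffs + 1]
```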

(3) Geometric parameter method

Shape representation and matching can also use simpler region-feature descriptions, for example shape factors related to quantitative shape measures (such as moments, area, and perimeter). The QBIC system uses geometric parameters such as circularity, eccentricity, principal-axis orientation, and algebraic moment invariants to retrieve images by shape features.

Note that the extraction of shape parameters presupposes image processing and image segmentation. The accuracy of the parameters is inevitably affected by the quality of the segmentation, and for badly segmented images the shape parameters may not be extractable at all.
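
Assuming a segmented binary mask is already available, geometric parameters of this kind can be sketched with scikit-image's regionprops; the circularity formula 4πA/P² is one common choice of shape factor:

```python
import numpy as np
from skimage import measure

def geometric_shape_features(mask):
    """Geometric parameters of the largest region in a binary mask."""
    labels = measure.label(mask.astype(np.uint8))
    region = max(measure.regionprops(labels), key=lambda r: r.area)
    circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-12)
    return {'area': region.area,
            'perimeter': region.perimeter,
            'circularity': circularity,         # 1.0 for a perfect disk
            'eccentricity': region.eccentricity,
            'orientation': region.orientation}  # principal axis angle, radians
```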

(4) Shape invariant moment method

The moments of the target region are used as the shape description parameters.
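
With OpenCV, for instance, the seven Hu invariant moments of a binary region can be obtained as below; the log-scaling at the end is a common convenience for compressing their dynamic range, not part of the definition:

```python
import cv2
import numpy as np

def hu_invariant_moments(mask):
    """Seven Hu moment invariants of a binary region: stable under
    translation, scaling, and rotation, hence suited to shape description."""
    m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```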

(5) Other methods

In recent years, work on shape representation and matching has also included methods such as the finite element method (FEM), the turning function, and the wavelet descriptor.

Ⅱ. Shape feature extraction and matching based on wavelets and relative moments

This method first obtains a multi-scale edge image using the modulus maxima of the wavelet transform. Seven invariant moments are then computed at each scale and converted into 10 relative moments, and the relative moments over all scales are taken as the image feature vector, giving a unified treatment of region features and of both closed and non-closed structures.
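
A loose sketch of the idea, not the published algorithm: take the modulus of the wavelet detail coefficients at each scale as a multi-scale edge image and describe every scale with the seven Hu invariant moments. The actual method uses the modulus maxima and converts the seven moments into 10 relative moments, which this toy version omits:

```python
import numpy as np
import pywt
import cv2

def multiscale_moment_features(gray, wavelet='db2', levels=3, thresh=0.2):
    """Concatenated Hu moments of thresholded wavelet edge images."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=levels)
    feats = []
    for (cH, cV, cD) in coeffs[1:]:
        modulus = np.sqrt(cH ** 2 + cV ** 2)             # edge strength
        edges = (modulus > thresh * modulus.max()).astype(np.uint8)
        hu = cv2.HuMoments(cv2.moments(edges, binaryImage=True)).ravel()
        feats.append(hu)
    return np.concatenate(feats)
```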

Spatial Relationship Features

(1) Characteristics: the spatial relationship refers to the mutual spatial positions or relative directional relations among the multiple objects segmented from an image. These relations can be divided into connection/adjacency relations, overlap/occlusion relations, and inclusion/containment relations. Spatial position information can be divided into two classes: relative spatial position and absolute spatial position. The former emphasizes the relative arrangement of the targets, such as above-below and left-right relations; the latter emphasizes their distances and bearings. Relative spatial positions can obviously be derived from absolute ones, yet expressing relative position information is often simpler.

Using spatial relationship features can strengthen the ability to discriminate image content, but such features are often sensitive to rotation, reflection, and scale changes of the image or its targets. Moreover, in practical applications spatial information alone is rarely sufficient to express scene information effectively and accurately; for retrieval, other features must be used alongside the spatial relationship features.

(2) Common feature extraction and matching methods

There are two ways to extract spatial relationship features. One is to segment the image automatically into the objects or color regions it contains, then extract image features from these regions and build an index. The other is simply to divide the image evenly into several regular sub-blocks, extract features from each sub-block, and build an index.
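
A sketch of the second, simpler approach: split the image into uniform sub-blocks and concatenate a per-block color histogram, so the feature order preserves coarse spatial layout; the grid size and bin count are arbitrary choices:

```python
import numpy as np

def grid_block_features(image, grid=(4, 4), bins=8):
    """Concatenated per-block RGB histograms of a uniformly divided image."""
    h, w = image.shape[:2]
    rows, cols = grid
    feats = []
    for r in range(rows):
        for c in range(cols):
            block = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            q = (block.astype(np.int32) * bins) // 256
            idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
            hist = np.bincount(idx.ravel(), minlength=bins ** 3)
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```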

 

Original article: http://blog.csdn.net/djh512/article/details/7293563
