Abstract:
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object extraction algorithm, which simultaneously evaluates global contrast differences and spatially weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut for high-quality salient object segmentation. We extensively evaluated our algorithm on traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy Internet images, where the saliency regions are ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
(1) HC: Based on histogram contrast, the saliency value of each pixel is determined by its color difference from all other pixels in the image, yielding a full-resolution saliency map;
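The histogram-contrast idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the image has already been color-quantized, so `labels` maps each pixel to a palette index and `colors` holds the palette (e.g. in Lab space); the function name is ours.

```python
import numpy as np

def hc_saliency(labels, colors):
    """Sketch of histogram-contrast saliency.

    labels: H x W int array, per-pixel index into the quantized palette.
    colors: N x 3 float array, the palette colors (assumed Lab).
    A color's saliency is its frequency-weighted distance to all other colors.
    """
    n = len(colors)
    # frequency of each palette color over the whole image
    freq = np.bincount(labels.ravel(), minlength=n) / labels.size
    # pairwise color distances within the palette
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    color_sal = dist @ freq              # contrast against all other colors
    sal = color_sal[labels]              # broadcast back to pixel positions
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Because saliency is computed per palette color rather than per pixel, the cost depends on the (small) palette size, which is what makes the full-resolution map cheap to produce.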
(2) RC: Based on regional contrast, the image is first segmented into small regions using a graph-based segmentation method. The basic idea of the segmentation is to treat each pixel as a vertex of an undirected graph, with the similarity between two pixels as the edge weight; vertices are iteratively merged so that the maximum weight of edges within a region remains smaller than the minimum weight of edges connecting it to other regions. The saliency value of each region is then determined by its spatially weighted color contrast to all other regions, where each other region's color difference is weighted by its size (number of pixels). The spatial distance is the Euclidean distance between the centers of gravity of the two regions, and more distant regions are assigned smaller weights.
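The region-contrast scoring described above can be sketched as follows. This assumes the graph-based segmentation has already run, producing per-region centroids, mean colors, and sizes; the function name and the `sigma2` spatial-falloff constant are illustrative choices, not the paper's exact values.

```python
import numpy as np

def rc_saliency(centroids, mean_colors, sizes, sigma2=0.4):
    """Sketch of region-contrast saliency.

    centroids:   N x 2 array, normalized (x, y) centers of gravity per region.
    mean_colors: N x 3 array, mean color of each region (assumed Lab).
    sizes:       length-N array, pixel count of each region.
    A region's saliency is its color contrast to every other region,
    weighted by that region's size and down-weighted with spatial distance.
    """
    w = sizes / sizes.sum()                            # size weights
    d_color = np.linalg.norm(mean_colors[:, None] - mean_colors[None, :], axis=2)
    d_space = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
    spatial_w = np.exp(-(d_space ** 2) / sigma2)       # distant regions count less
    np.fill_diagonal(spatial_w, 0.0)                   # exclude self-contrast
    sal = (spatial_w * d_color) @ w
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Working on a few hundred regions instead of all pixels is what keeps the all-pairs contrast computation tractable.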
(3) Acceleration details:
1. Color quantization: each color channel is quantized from 256 values down to 12, and a color histogram of the input image is computed. Only the high-frequency colors are retained; each discarded color is replaced by the nearest retained color in the histogram.
2. Color-space smoothing: to reduce quantization error, the saliency value of each color is replaced with the weighted average of the saliency values of similar colors. Quantization is performed in RGB space, but color distances are measured in Lab space.
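The two acceleration steps above can be sketched together. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the `coverage` threshold for "high-frequency colors", and the `m` neighbor count are our choices, and for brevity the smoothing takes palette colors as given rather than converting RGB to Lab.

```python
import numpy as np

def quantize_colors(img, bins=12, coverage=0.95):
    """Step 1 sketch: reduce each RGB channel from 256 to `bins` levels,
    keep the high-frequency colors covering `coverage` of all pixels, and
    map every remaining color to the nearest kept color."""
    q = img.astype(np.int64) * bins // 256              # per-channel 0..bins-1
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    colors, counts = np.unique(idx, return_counts=True)
    order = np.argsort(-counts)                         # most frequent first
    cum = np.cumsum(counts[order]) / idx.size
    keep = colors[order[:np.searchsorted(cum, coverage) + 1]]

    def unpack(c):                                      # packed index -> bin triplet
        return np.stack([c // (bins * bins), (c // bins) % bins, c % bins], -1)

    # replace each discarded color by its nearest kept color
    nearest = np.abs(unpack(idx.ravel())[:, None] - unpack(keep)[None, :]).sum(-1)
    return keep[nearest.argmin(1)].reshape(idx.shape)   # per-pixel palette index

def smooth_saliency(colors, sal, m=4):
    """Step 2 sketch: replace each palette color's saliency with a
    distance-weighted average over its m most similar colors."""
    d = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)
    out = np.empty_like(sal, dtype=float)
    for i in range(len(sal)):
        nn = np.argsort(d[i])[:m]        # m nearest colors (including itself)
        t = d[i, nn].sum()
        w = t - d[i, nn]                 # closer colors get larger weights
        out[i] = (w * sal[nn]).sum() / (w.sum() + 1e-12)
    return out
```

Quantization shrinks the palette from 256³ possible colors to at most 12³ (and in practice far fewer after dropping rare colors), and the smoothing pass then compensates for colors that were merged or reassigned.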