Rapid Object Detection using a Boosted Cascade of Simple Features (partial translation)

Source: Internet
Author: User

Rapid Object Detection using a Boosted Cascade of Simple Features (fast object detection using a cascade of simple-feature classifiers). Note: passages whose translation is uncertain were marked in red in the original post.

Translation: Tony, [email protected]

Summary:

This paper introduces a machine learning approach to visual object detection that can process images extremely rapidly while achieving high detection rates. The success of the work rests on three key contributions. The first is the introduction of a new image representation, which we call the "integral image", which allows the features used by our detector to be computed very quickly. The second is a learning algorithm based on AdaBoost, which selects a small number of critical visual features from a very large set and yields an extremely efficient classifier [6]. The third contribution is a method for combining increasingly complex classifiers into a "cascade", which allows background regions of the image to be discarded quickly so that more computation can be spent on promising object-like regions. The cascade can be seen as an object-specific focus-of-attention mechanism which, unlike previous approaches, provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection, the system yields detection rates comparable to the best previous systems. In real-time applications, the detector runs at 15 frames per second without requiring image differencing or skin color detection.

Abstract

This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the "integral image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers [6]. The third contribution is a method for combining increasingly more complex classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.

1. Introduction

This paper brings together new algorithms and insights to construct a robust and extremely fast framework for object detection, demonstrated on the task of face detection. We built a frontal face detection system whose detection and false positive rates are comparable to the best published results [16, 12, 15, 11, 1]. This face detection system distinguishes itself from previous systems most clearly by its speed: on a conventional 700 MHz Pentium III, faces are detected in 384x288 pixel images at 15 frames per second. Other face detection systems have relied on auxiliary information, such as frame differences in video or pixel color in color images, to achieve high frame rates. Our system achieves its high frame rate using only the information present in a single grayscale image; these auxiliary sources of information can still be added to achieve even higher frame rates.

Our object detection framework makes three major contributions. We introduce each idea briefly here and describe it in detail in the subsequent sections. The first contribution of this paper is a new image representation called the integral image, which allows features to be computed very quickly. The idea is motivated in part by the work of Papageorgiou et al., whose detection system also does not operate directly on image intensities [10]. Like those authors, we use a set of features which are reminiscent of Haar basis functions (although we also use related filters which are more complex than Haar filters). In order to compute these features very rapidly at many scales, we introduce the integral image representation. The integral image can be computed from an image using a few operations per pixel; once computed, any of these Haar-like features, at any scale or location, can be evaluated in constant time.

The second contribution of this paper is a method for constructing a classifier by selecting a small number of important features using AdaBoost [6]. Within any image sub-window the total number of Haar-like features is very large, far larger than the number of pixels. In order to ensure fast classification, the learning process must exclude a large majority of the available features and focus on a small set of critical ones. Following the work of Tieu and Viola, feature selection is achieved through a simple modification of the AdaBoost procedure: the weak learner is constrained so that each weak classifier returned can depend on only a single feature [2]. As a result, each stage of the boosting process, which selects a new weak classifier, can be viewed as a feature selection step. AdaBoost provides an effective learning algorithm and strong bounds on generalization performance [13, 9, 10].

The third major contribution of this paper is a method for combining successively more complex classifiers in a cascade structure, which dramatically increases the speed of the detector by focusing attention on promising regions of the image. The notion behind focus-of-attention approaches is that it is often possible to quickly determine where in an image a target may occur; more complex processing is then reserved only for these promising regions. The key measure of such an approach is the "false negative" rate of the attentional process (true targets mistakenly judged to be non-targets). It must be the case that all, or almost all, target instances are passed by the attentional filter.

We will describe a process for training an extremely simple and efficient classifier which is used as a "supervised" focus-of-attention operator. The term supervised refers to the fact that the attentional operator is trained to detect examples of a particular class. In the domain of face detection it is possible to construct a classifier from two Haar-like features which achieves fewer than 1% false negatives (true targets judged to be non-targets) at a 40% false positive rate. The effect of this filter is to reduce by over one half the number of locations where the final detector must be evaluated.

Sub-windows which are not rejected by the initial filter are processed by a sequence of filters, each slightly more complex than the last. If any filter rejects a sub-window, no further processing is performed. The structure of the cascaded detection process is essentially that of a degenerate decision tree, and is related to the work of Geman and colleagues [1, 4].

A very fast face detector has a wide range of practical applications, including user interfaces, image databases, and teleconferencing. In applications where fast frame rates are not necessary, our system allows significant additional post-processing and analysis. In addition, our system can be implemented on a wide range of small, low-power devices, including handheld devices and embedded processors. In our lab we have implemented this face detector on the Compaq iPAQ handheld (which has a low-power 200 MIPS StrongARM processor lacking floating point hardware) and achieved detection at two frames per second.

The remainder of this article describes our contributions and a number of experimental results, including a detailed description of our experimental methodology. Discussion of closely related work is included at the end of each section.

2. Features

Our object detection procedure classifies images based on the value of simple features. There are many motivations for using features rather than pixel values directly. The most common reason is that features can act to encode ad-hoc domain knowledge which is difficult to learn from a limited quantity of training data. For this system there is also a second critical motivation: a feature-based system operates much faster than a pixel-based system. The simple features we use are reminiscent of Haar basis functions, which were used by Papageorgiou et al. [10]. More specifically, we use three kinds of features. The value of a two-rectangle feature is the difference between the sums of the pixels within two rectangular regions; the regions have the same size and shape and are horizontally or vertically adjacent (see Figure 1). A three-rectangle feature computes the sum of two outside rectangles subtracted from the sum of a center rectangle. Finally, a four-rectangle feature computes the difference between diagonal pairs of rectangles.
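To make the three feature types concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the function names and the use of numpy are assumptions) which evaluates each feature by direct pixel summation over a grayscale image array:

    import numpy as np

    def rsum(img, r, c, h, w):
        # Sum of the pixels in the h-by-w rectangle whose top-left
        # corner is at row r, column c.
        return int(img[r:r + h, c:c + w].sum())

    def two_rect(img, r, c, h, w):
        # Difference between two horizontally adjacent rectangles of
        # identical size and shape.
        return rsum(img, r, c, h, w) - rsum(img, r, c + w, h, w)

    def three_rect(img, r, c, h, w):
        # Center rectangle minus the two outer rectangles.
        return (rsum(img, r, c + w, h, w)
                - rsum(img, r, c, h, w) - rsum(img, r, c + 2 * w, h, w))

    def four_rect(img, r, c, h, w):
        # Difference between the two diagonal pairs of rectangles.
        return (rsum(img, r, c, h, w) + rsum(img, r + h, c + w, h, w)
                - rsum(img, r, c + w, h, w) - rsum(img, r + h, c, h, w))

Computed this way, the cost of a feature grows with the area of its rectangles; the integral image of Section 2.1 removes that dependence.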

Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large, over 180,000. Note that, unlike the Haar basis, the set of rectangle features is overcomplete.
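As a rough check on the size of this set, one can enumerate every scale and position of each feature shape inside the 24x24 window. The sketch below is our own enumeration, not the paper's; the exact total depends on which shapes are included and how they are counted, so conventions differ, but any reasonable choice gives a total of this order of magnitude.

    def count_features(W=24, H=24,
                       shapes=((2, 1), (1, 2), (3, 1), (1, 3), (2, 2))):
        # Each shape (nx, ny) denotes a feature built from an nx-by-ny
        # grid of equal rectangles; every grid size that fits the window
        # is slid over all admissible positions.
        total = 0
        for nx, ny in shapes:
            for w in range(nx, W + 1, nx):       # widths divisible by nx
                for h in range(ny, H + 1, ny):   # heights divisible by ny
                    total += (W - w + 1) * (H - h + 1)
        return total

    print(count_features())  # 162,336 under this particular convention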


2.1 Integral Image

Rectangle features can be computed very rapidly using an intermediate representation for the image which we call the integral image. The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive:
    ii(x, y) = Σ_{x' ≤ x, y' ≤ y} i(x', y')

where ii(x, y) is the integral image and i(x, y) is the original image. Using the following pair of recurrences:
    s(x, y) = s(x, y - 1) + i(x, y)        (1)
    ii(x, y) = ii(x - 1, y) + s(x, y)      (2)

where s(x, y) is the cumulative row sum, with s(x, -1) = 0 and ii(-1, y) = 0, the integral image can be computed in one pass over the original image.

Using the integral image, any rectangular sum can be computed in four array references (see Figure 2), and the difference between two rectangular sums in eight. Since the two-rectangle features defined above involve adjacent rectangular sums, they can be computed in six array references, the three-rectangle features in eight, and the four-rectangle features in nine.
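A minimal Python sketch of both pieces follows (our own illustration; a zero first row and column is padded on so that rectangles touching the image border need no special cases):

    import numpy as np

    def integral_image(img):
        # ii[r, c] holds the sum of all pixels above and to the left of
        # (r, c), exclusive, computed in one pass of cumulative sums.
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def rect_sum(ii, r, c, h, w):
        # Sum over img[r:r+h, c:c+w] using exactly four array references.
        return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

With this in place, the rsum helper from Section 2 could delegate to rect_sum, making every feature evaluation a constant number of additions regardless of rectangle size.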

2.2 Feature Discussion

Rectangle features are somewhat primitive when compared with alternatives such as steerable filters [5, 7]. Steerable filters, and their relatives, are excellent tools for the detailed analysis of boundaries, for image compression, and for texture analysis. Rectangle features, in contrast, are quite coarse, although they are sensitive to the presence of edges, bars, and other simple image structure. Unlike steerable filters, the only orientations available to rectangle features are vertical, horizontal, and diagonal. Nevertheless, the rectangle feature set provides a rich image representation which supports effective learning. In conjunction with the integral image, the extreme efficiency of the rectangle feature set provides ample compensation for its limited flexibility.

3. Learning a Classification Function

Given a feature set and a training set of positive (target) and negative (non-target) images, any number of machine learning approaches could be used to learn a classification function. In our system, a variant of AdaBoost [6] is used both to select a small set of features and to train the classifier. In its original form, AdaBoost is used to boost the classification performance of a simple (sometimes called weak) learning algorithm. The AdaBoost learning procedure comes with a number of formal guarantees: Freund and Schapire proved that the training error of the strong classifier approaches zero exponentially in the number of rounds. More importantly, a number of results were later proved about its generalization performance [14]. The key insight is that generalization performance is related to the margin of the examples, and that AdaBoost achieves large margins rapidly.

As mentioned above, there are over 180,000 rectangle features associated with each image sub-window, a number far larger than the number of pixels. Even though each feature can be computed very efficiently, computing the complete set is prohibitively expensive. Our experiments show that a very small subset of these features can be combined to form an effective classifier; the main challenge is to find these features. To this end, the weak learning algorithm is designed to select the single rectangle feature which best separates the positive and negative examples (this is analogous to the approach used in the domain of image database retrieval [2]). For each feature, the weak learner determines the optimal threshold classification function, such that the minimum number of examples are misclassified. A weak classifier h_j(x) thus consists of a feature f_j, a threshold θ_j, and a parity p_j indicating the direction of the inequality sign:
    h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and 0 otherwise

Here x is a 24x24 pixel sub-window of an image. Table 1 summarizes the boosting process.

In practice, no single feature can perform the classification task with a low error rate. Features selected in early rounds of boosting have error rates between 0.1 and 0.3. Features selected in later rounds, as the task becomes harder, have error rates between 0.4 and 0.5.
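The following sketch shows one way the single-feature weak learner described above could be implemented; the names, the data layout, and the single-scan threshold search are our own assumptions, not code from the paper. For one feature, the examples are sorted by feature value, and the threshold and parity minimizing the weighted error are found in one pass:

    import numpy as np

    def best_stump(values, labels, weights):
        # values:  feature value f_j(x) for every training example
        # labels:  1 for positives (faces), 0 for negatives
        # weights: current AdaBoost example weights, summing to 1
        order = np.argsort(values)
        v, y, w = values[order], labels[order], weights[order]
        total_pos = w[y == 1].sum()
        total_neg = w[y == 0].sum()
        pos_below = np.cumsum(w * y)        # positive weight at or below each value
        neg_below = np.cumsum(w * (1 - y))  # negative weight at or below each value
        # Weighted error when everything at or below the threshold is
        # called positive, and for the opposite parity.
        err_p = neg_below + (total_pos - pos_below)
        err_n = pos_below + (total_neg - neg_below)
        i = int(np.argmin(np.minimum(err_p, err_n)))
        parity = 1 if err_p[i] <= err_n[i] else -1
        return v[i], parity, float(min(err_p[i], err_n[i]))

Each boosting round would call this for every candidate feature and keep the stump with the lowest weighted error, which is what makes each round a feature selection step.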

3.1 Learning Discussion

Many general feature selection procedures have been proposed (see chapter 8 of [18] for a review). Our final application demanded a very aggressive approach which discards the vast majority of features. A similar recognition problem was addressed by Papageorgiou et al., who proposed a scheme for feature selection based on feature variance [10]; they demonstrated good results selecting 37 features out of a total of 1,734.

Roth et al. proposed a feature selection process based on the Winnow exponentiated perceptron learning rule [11]. The Winnow learning process converges to a solution in which many of the weights are zero. Nevertheless, a very large number of features are retained (perhaps a few hundred or a few thousand).




3.2 Learning Results

While details of the training and the performance of the final system are presented in Section 5, a few simple results merit discussion. Experiments show that a frontal face classifier constructed from 200 features yields a detection rate of 95% with a false positive rate of 1 in 14,084. These results are compelling but not sufficient for many real-world tasks. In terms of computation, this classifier is probably faster than any other published system, requiring 0.7 seconds to scan a 384x288 pixel image. Unfortunately, the most straightforward technique for improving detection performance, adding features to the classifier, directly increases computation time. For the task of face detection, the initial rectangle features selected by AdaBoost are meaningful and easily interpreted. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks (see Figure 3). This feature is relatively large in comparison with the detection sub-window, and is relatively insensitive to the size and position of the face. The second feature selected relies on the property that the eyes are darker than the bridge of the nose.


4. The Attentional Cascade

This section describes an algorithm for constructing a cascade of classifiers which achieves increased detection performance while radically reducing computation time. The key insight is that smaller, and therefore more efficient, boosted classifiers can be constructed which reject many of the negative sub-windows while detecting almost all positive instances (i.e. the threshold of a boosted classifier can be adjusted so that the false negative rate is close to zero). Simpler classifiers are used to reject the majority of sub-windows before more complex classifiers, adjusted to achieve low false positive rates, are called upon.

The overall form of the detection process is that of a degenerate decision tree, what we call a "cascade" (see Figure 4). A positive result from the first classifier triggers the evaluation of a second classifier, which has also been adjusted to achieve very high detection rates. A positive result from the second classifier triggers a third classifier, and so on. A negative outcome at any point leads to the immediate rejection of the sub-window.

Stages in the cascade are constructed by training classifiers using AdaBoost and then adjusting the threshold to minimize false negatives. Note that the default AdaBoost threshold is designed to yield a low error rate on the training data; in general, a lower threshold yields higher detection rates and higher false positive rates. For example, an excellent first-stage classifier can be constructed from a two-feature strong classifier by reducing the threshold to minimize false negatives. Measured against a validation training set, the threshold can be adjusted to detect 100% of the faces with a false positive rate of 40%. Figure 3 shows the two features used in this classifier.
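The control flow just described fits in a few lines. The sketch below is a schematic illustration under our own assumptions (the Stage type, its fields, and the stump callables are invented for the example), not the paper's implementation:

    from dataclasses import dataclass

    @dataclass
    class Stage:
        stumps: list       # weak classifiers selected by AdaBoost
        alphas: list       # their AdaBoost voting weights
        threshold: float   # lowered from the AdaBoost default so the
                           # stage's false negative rate is near zero

    def classify_window(cascade, window):
        # A sub-window is declared a face only if every stage accepts it;
        # the first negative outcome rejects it immediately, which is
        # where the fast average-case behavior comes from.
        for stage in cascade:
            score = sum(a * h(window) for a, h in zip(stage.alphas, stage.stumps))
            if score < stage.threshold:
                return False    # rejected; no later stage is evaluated
        return True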


Computing the two-feature classifier amounts to about 60 microprocessor instructions. It seems hard to imagine that any simpler filter could achieve higher rejection rates. By comparison, scanning a simple image template, or running a single-layer perceptron, would require at least 20 times as many operations per sub-window.

The structure of the cascade reflects the fact that, within any single image, the overwhelming majority of sub-windows are negative. As such, the cascade attempts to reject as many negatives as possible at the earliest possible stage. While a true target will trigger a positive result from every classifier in the cascade, this is an exceedingly rare event.

Much like a decision tree, subsequent classifiers are trained using those examples which pass through all the previous stages. As a result, the second classifier faces a more difficult task than the first: the examples which make it through the first stage are harder than typical examples. The more difficult examples faced by deeper classifiers push the entire ROC curve downward; at a given detection rate, deeper classifiers have correspondingly higher false positive rates.

4.1 Training a Cascade of Classifiers

The cascade training process involves two types of tradeoffs. In most cases classifiers with more features achieve higher detection rates and lower false positive rates; at the same time, classifiers with more features require more time to compute. In principle one could define an optimization framework in which the following are traded off:

i) the number of classifier stages,

ii) the number of features in each stage, and

iii) the threshold of each stage,

so as to minimize the expected number of features evaluated. Unfortunately, finding this optimum is a tremendously difficult problem.

In practice, a very simple framework is used to produce an effective and highly efficient classifier. Each stage in the cascade reduces the false positive rate but also decreases the detection rate. A target is selected for the minimum acceptable reduction in false positives and the maximum acceptable decrease in detection. Each stage is trained by adding features until the target detection and false positive rates are met (these rates are determined by testing the detector on a validation set). Stages are added until the overall targets for false positive and detection rate are met, as sketched below.
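A hedged sketch of that loop, reusing classify_window from above; train_stage and the numeric targets are illustrative placeholders, not values or code from the paper:

    def train_cascade(pos, neg, f_max=0.4, d_min=0.99, F_target=1e-5):
        # pos, neg: positive and negative training sub-windows.
        # f_max:    highest false positive rate allowed for one stage.
        # d_min:    lowest detection rate allowed for one stage.
        # F_target: overall false positive rate at which training stops.
        cascade, F = [], 1.0
        while F > F_target and neg:
            # Hypothetical helper: grows one boosted stage feature by
            # feature, lowering its threshold after each addition, until
            # it reaches detection rate >= d_min with false positive
            # rate <= f_max on a held-out validation set.
            stage, f = train_stage(pos, neg, f_max, d_min)
            cascade.append(stage)
            F *= f   # stage false positive rates multiply along the cascade
            # Later stages train only on the negatives that the cascade
            # built so far still (wrongly) accepts.
            neg = [x for x in neg if classify_window(cascade, x)]
        return cascade

Note that the detection rates multiply as well: a 10-stage cascade whose stages each detect 99% of faces has an overall detection rate of roughly 0.99^10 ≈ 0.90, which is why each stage must be tuned to a very high detection rate.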

4.2 Detector Cascade Discussion

The complete face detection cascade has 38 stages and uses over 6,000 features. Nevertheless, the cascade structure results in fast average detection times. On a difficult dataset containing 507 faces and 75 million sub-windows, faces are detected using an average of 10 feature evaluations per sub-window. In comparison, this system is roughly 15 times faster than the detector built by Rowley et al. [12].

A notion similar to the cascade appears in the face detection system of Rowley et al., which used two detection networks [12]. Rowley et al. used a faster but less accurate network to prescreen the image and find candidate face regions for a slower, more accurate network. Though it is difficult to determine exactly, it appears that their two-network face system was the fastest of the known face detectors.

The detection and processing structure of the cascade is essentially that of a degenerate decision tree, and as such is related to the theory put forward by Amit and Geman. Unlike techniques which use a fixed detector, Amit and Geman propose an alternative point of view in which unusual co-occurrences of simple image features are used to trigger the evaluation of a more complex detection process. In this way the full detection process need not be evaluated at many of the potential image locations and scales. While this basic insight is very valuable, their implementation must first evaluate some feature detector at every location; those features are then grouped to find unusual co-occurrences. In practice, since the form of our detector and the features it uses are extremely efficient, the amortized cost of evaluating our detector at every scale and location is much faster than finding and grouping edges throughout the image.

In recent work, Fleuret and Geman presented a face detection technique which relies on a "chain" of tests to signify the presence of a face at a particular scale and location [4]. The image properties measured by Fleuret and Geman, disjunctions of fine-scale edges, are quite different from our simple rectangle features, which exist at all scales and are somewhat interpretable. The two approaches also differ radically in their learning philosophy: the motivation for Fleuret and Geman's learning process is density estimation and density discrimination, while our detector is purely discriminative. Finally, the false positive rate of Fleuret and Geman's approach appears to be higher than that of previous approaches such as Rowley et al.'s. Unfortunately, the paper does not report quantitative results of this kind; the included sample images each have between 2 and 10 false positives.
