Summary and improvement of an image-processing vehicle detection algorithm


I haven't updated this blog for almost two months. Since my first post, "Haar + AdaBoost detection of a custom target (video vehicle detection algorithm with code)," was published, many readers have left comments: some praising the results, some reporting that their results were disappointing, and most asking about details. I am happy to answer these questions patiently, because I see it as a process of learning from each other. What I want to make clear is that I cannot guarantee the detector will meet every reader's requirements: my training process and training samples were prepared only for my own needs. For example, if the classifier is used to detect vehicles at night it will perform poorly, because I did not consider nighttime conditions during training; all of my samples are daytime scenes. The purpose of this article is to help readers who have not yet gotten started with image processing, or who are eager to work on similar projects. This post is mainly a summary of five months of reader feedback together with my own understanding, plus some ideas for optimizing vehicle detection algorithms based on this kind of scanning mechanism.

First, some background on the method itself. The combination of Haar features and the AdaBoost classification algorithm was first used for face detection, and the success of that combination kept AdaBoost popular for years: many variants of AdaBoost appeared, and pairing the algorithm with almost any feature type was enough to publish a paper. If you want a more thorough understanding of the principle, refer to other blog posts or read the original paper, "Robust Real-Time Face Detection."

The first question: how to choose the number of samples. Analysis matters, but I think experimentation matters more. If a few hundred samples train a detector that achieves satisfactory results, you can stop there; more samples are not always better. A one-sided imbalance will shift the TP, TN, FP, and FN counts. The numbers I gave earlier were just my own suggestion. Why more negative samples than positive ones? Because non-vehicles are far more numerous and varied than vehicles; if the two counts are equal, the negative set cannot cover that variety. I verified this myself: with equal counts the results were poor, and many non-vehicles were judged to be vehicles. Other algorithms may behave differently, and this ratio may not apply to them. So, once again: experiment more.
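A quick sketch of how the imbalance shows up in those four counts. The helper name and the numbers below are illustrative, not from the original post; the point is that training with too few negatives tends to inflate FP (non-cars judged as cars), which precision and the false-positive rate make visible.

```python
def detection_metrics(tp, tn, fp, fn):
    """Return precision, recall, and false-positive rate from TP/TN/FP/FN."""
    precision = tp / (tp + fp)   # of windows flagged "car", how many are cars
    recall = tp / (tp + fn)      # of real cars, how many were found
    fpr = fp / (fp + tn)         # of non-car windows, how many were flagged
    return precision, recall, fpr

# Hypothetical counts: a detector trained with too few negatives fires on
# many background windows, versus one trained with a larger negative set.
p_bad, r_bad, fpr_bad = detection_metrics(tp=90, tn=700, fp=300, fn=10)
p_ok,  r_ok,  fpr_ok  = detection_metrics(tp=88, tn=960, fp=40,  fn=12)
print(f"few negatives  -> precision {p_bad:.2f}, FPR {fpr_bad:.2f}")
print(f"more negatives -> precision {p_ok:.2f}, FPR {fpr_ok:.2f}")
```

Watching precision and FPR (rather than accuracy alone) is what tells you whether the negative set was comprehensive enough.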

The second question: sample size. Some readers have asked how changing the aspect ratio affects training, why samples are resized to 24x24, whether a different value can be chosen, how large the detection window actually is, and why the displayed box looks bigger than 24x24 when the window size and sample size should match. I think this is an important issue that comes up in many target detection algorithms. My view: the size is not fixed at 24x24; choose it based on the shape of the target. For pedestrians, a 1:3 aspect ratio matches the actual target better. The resize value mainly affects the number of features and the training time, and has little effect on accuracy. For each frame, this algorithm performs multi-scale detection. For example, suppose a car occupies 30x30 pixels. The 24x24 window cannot detect it at full resolution, but after shrinking the image by a factor of 0.8 the car becomes 24x24 and is detected. We can still frame the vehicle in the original image with a 30x30 rectangle: because the detection happened on the 0.8x image, the displayed box is scaled back up to 24/0.8 = 30 pixels. That is why the box you see is larger than 24x24. This comes up in many sliding-window detection problems: since we do not know in advance whether the target matches the detection window size, we apply scale transformations. 24x24 is not necessarily the best choice either. So, once again: experiment more.
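The pyramid arithmetic above can be sketched in a few lines. The function names and the 0.8 shrink factor follow the example in the text; they are illustrative, not a fixed part of the algorithm.

```python
WINDOW = 24   # detection window side, matching the 24x24 training samples
SHRINK = 0.8  # each pyramid level is 0.8x the previous one

def pyramid_scales(image_side, window=WINDOW, shrink=SHRINK):
    """Cumulative scales at which a `window`-sized scan still fits the image."""
    scales, s = [], 1.0
    while image_side * s >= window:
        scales.append(s)
        s *= shrink
    return scales

def box_in_original(x, y, scale, window=WINDOW):
    """Map a window hit at pyramid scale `scale` back to original coordinates."""
    side = round(window / scale)          # e.g. 24 / 0.8 = 30
    return (round(x / scale), round(y / scale), side, side)

# A 30x30 car detected on the 0.8x image is displayed as a 30x30 box:
print(box_in_original(40, 16, scale=0.8))
```

Dividing by the cumulative scale is exactly why the on-screen box comes out larger than 24x24.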

The third question: sample quality. For vehicle detection under a surveillance camera, the resolution is generally not very high, and the lower the resolution, the more this algorithm's advantages stand out. At low resolution, most motion-based detection algorithms fail because the background and the target are hard to separate, but the feature-based approach still holds up. The samples I provided are in fact not high resolution, and neither is the video I ran detection on. I have tried high-resolution and even HD video, and the results were not as good as with the samples I released. So, once again: experiment more.

Now let me explore another problem: how to optimize this kind of scanning-based algorithm. Using Haar features and AdaBoost to detect targets is fine for projects and competitions, where the results are showy enough to present. If you want to improve on it, you can start from three directions. I made an improvement a year ago and wrote it up as a paper, which I submitted to Neurocomputing; it was rejected for insufficient novelty, and even after revising it I still felt the novelty was lacking. I am still in the middle of revisions, so whether to resubmit remains an open question, since my study of image processing may stop here; I have only scratched the surface. I may also write an article on image segmentation, work I did two years ago, if I find time to organize it.

Direction One: perform moving-target detection first, then exclude the background from the scan. This has two benefits: it saves scanning time, and it removes much of the background that could otherwise be mistaken for a vehicle. However, to preserve the integrity of vehicle features, a morphological processing step should be applied to retain them.
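A minimal sketch of Direction One, under simplifying assumptions: frame differencing stands in for the motion detector, and a 3x3 binary dilation stands in for the morphological step that keeps thin vehicle parts from being eroded away before scanning. This is a pure-Python stand-in for what a real pipeline would do with library calls on actual frames; all names are my own.

```python
def motion_mask(prev, curr, thresh=20):
    """1 where two grayscale frames differ by more than `thresh`."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(rp, rc)]
            for rp, rc in zip(prev, curr)]

def dilate(mask):
    """3x3 binary dilation: grow foreground so vehicle features stay intact."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(
                mask[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2)))
    return out

prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[2][2] = 200                  # one moving pixel
fg = dilate(motion_mask(prev, curr))
print(sum(map(sum, fg)))          # the single hit grows to a 3x3 blob: 9
```

Only windows overlapping the dilated foreground would then be handed to the classifier, which is where the scanning-time savings come from.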

Direction Two: feature fusion. This slows the algorithm down, but it can improve detection performance.
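One common form of feature fusion, sketched here as an assumption since the post does not specify one, is simple concatenation: normalize each feature vector (say Haar-like and HOG descriptors) and join them into one descriptor for the classifier. The vectors below are toy stand-ins, not real extractor output.

```python
def l2_normalize(v, eps=1e-12):
    """Scale a vector to unit length so no feature family dominates."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / (norm + eps) for x in v]

def fuse(haar_vec, hog_vec):
    """Concatenate two normalized feature vectors into one descriptor."""
    return l2_normalize(haar_vec) + l2_normalize(hog_vec)

fused = fuse([3.0, 4.0], [0.0, 5.0, 12.0])
print(len(fused))  # 5: the classifier now trains on the joint descriptor
```

The extra cost comes from computing two feature families per window, which is why fusion trades speed for accuracy.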

Direction Three: online AdaBoost learning. Updating the classifier online as new data arrives is another promising direction; a paper from Taiwan mentions it. Alternatively, the positive and negative samples could be tailored to the specific scene.

There are more questions to explore; I will update this post as new ones come up.

Update 2016.07.24: I am considering adding motion detection to extract foreground targets first, and then verifying those targets with the classifier. Advantage one: it reduces the range of the sliding-window scan and therefore the scanning time. Advantage two: it reduces interference from background regions whose features resemble a vehicle's. See the paper "Real-time vehicle detection with foreground-based cascade classifier" (zhuangxiaobin/9733793).
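The update's first advantage can be made concrete with a small sketch: given a binary foreground mask from any background-subtraction method, take the bounding box of the foreground pixels and scan only that ROI with the cascade instead of the whole frame. The mask here is a toy stand-in, and `foreground_roi` is my own illustrative helper.

```python
def foreground_roi(mask):
    """Bounding box (x, y, w, h) of all foreground pixels, or None if empty."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not ys:
        return None
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

mask = [[0] * 10 for _ in range(10)]
for y in range(4, 7):
    for x in range(2, 6):
        mask[y][x] = 1            # one moving blob

print(foreground_roi(mask))       # (2, 4, 4, 3): scan 12 cells, not 100
```

Restricting the classifier to this ROI is what shrinks both the scan time and the set of background windows that could be mistaken for vehicles.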
