I read several articles about boosting...

It has been more than three months since my previous post. I have been lazy lately: whenever I see how strong other people's blogs are while mine only introduces superficial OpenCV topics, I feel ashamed, so I never wrote anything. In the three months since my last post OpenCV has moved to 2.4.3, yet I still feel weak and know too little about it; looking back at my earlier notes, most of it is unfamiliar. This time I went through boosting and studied it properly (in class I thought I understood it right away...). Even after reading, I am not sure whether I really understand it. Corrections and advice are very welcome.

The idea behind boosting is simple: combining many weak classifiers can be equivalent to one strong classifier, and weak learning algorithms are much easier to implement than strong learning algorithms.
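To see this "many weak learners add up to a strong one" effect concretely, here is a minimal sketch (my addition, not from the original post) using scikit-learn, comparing a single decision stump against an AdaBoost ensemble of stumps; the exact scores will vary with the data and library version:

```python
# Minimal sketch: an ensemble of weak decision stumps vs. a single stump.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)  # a weak learner on its own
boosted = AdaBoostClassifier(                # T = 100 weak learners, additive
    DecisionTreeClassifier(max_depth=1), n_estimators=100, random_state=0)

print("single stump  :", stump.fit(X_tr, y_tr).score(X_te, y_te))
print("boosted stumps:", boosted.fit(X_tr, y_tr).score(X_te, y_te))
```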

When talking about boosting, one has to mention the most representative algorithm, AdaBoost, from Freund and Schapire's "A decision-theoretic generalization of on-line learning and an application to boosting". AdaBoost increases the weights of the samples that were misclassified in the previous round, decreases the weights of the samples that were classified correctly, and then combines the weak classifiers in an additive model to obtain the final classifier. The formulas for each weak classifier's coefficient and for updating the sample weight distribution are derived in the literature, so I will not repeat the proofs here. The algorithm flow is as follows. Having read a few papers, I have some questions and takeaways (a small sketch of the weight update follows the two points below):

1. In each iteration, the exact weight-update formula differs from paper to paper, but the idea is always the same: when a sample is misclassified, its weight w increases; when it is classified correctly, its weight w decreases.

2. Is the number of weak classifiers, i.e. the choice of T in the figure, just an empirical value?
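To make point 1 concrete, here is a from-scratch sketch of discrete AdaBoost with decision stumps (my addition; the helper names are mine). It uses the standard coefficient alpha_t = 0.5 * ln((1 - eps_t) / eps_t) and the update w_i <- w_i * exp(-alpha_t * y_i * h_t(x_i)), which raises the weight of misclassified samples and lowers it for correct ones; T is indeed just a hyperparameter, usually picked empirically or by validation (cf. point 2):

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump with lowest weighted error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, sign)
    return best

def stump_predict(stump, X):
    j, thr, sign = stump
    return sign * np.where(X[:, j] > thr, 1, -1)

def adaboost_train(X, y, T=50):
    """Discrete AdaBoost; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # start from a uniform distribution
    ensemble = []                        # list of (alpha_t, stump_t)
    for _ in range(T):
        stump = fit_stump(X, y, w)
        pred = stump_predict(stump, X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        if err >= 0.5:                   # no better than chance: stop early
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # misclassified samples (y * pred = -1) are multiplied by exp(+alpha),
        # correctly classified ones (y * pred = +1) by exp(-alpha)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                     # renormalize into a distribution
        ensemble.append((alpha, stump))
    return ensemble

def adaboost_predict(ensemble, X):
    """Additive model: sign of the alpha-weighted vote of the weak learners."""
    return np.sign(sum(a * stump_predict(s, X) for a, s in ensemble))
```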

Then I read online AdaBoost, following Oza and Russell's "Online bagging and boosting". The algorithm flowchart is as follows:

The idea in this paper is to update the weak classifiers as each newly labeled sample arrives, similar in spirit to online bagging. I went through the flowchart carefully; although I did not work through the proof, I still have a question: as the number of online training samples grows, I feel that the order in which samples arrive affects how much each one contributes to the weak classifiers, with later samples contributing less. Perhaps my understanding is wrong; I hope someone can point me in the right direction ~~
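Here is a compact sketch of the Oza and Russell online boosting update as I read it (the OnlineStump weak learner is a toy stand-in of my own, not from the paper). Each weak learner sees the incoming sample k ~ Poisson(lambda) times, which simulates AdaBoost's weighted resampling, and lambda is raised or lowered depending on whether the learner got the sample right:

```python
import numpy as np

class OnlineStump:
    """Toy online weak learner (my own stand-in): tracks per-class running
    means of one feature and predicts the class whose mean is closer."""
    def __init__(self, feature):
        self.f = feature
        self.mean = {+1: 0.0, -1: 0.0}
        self.count = {+1: 0, -1: 0}
    def update(self, x, y):
        self.count[y] += 1
        self.mean[y] += (x[self.f] - self.mean[y]) / self.count[y]
    def predict(self, x):
        d_pos = abs(x[self.f] - self.mean[+1])
        d_neg = abs(x[self.f] - self.mean[-1])
        return +1 if d_pos <= d_neg else -1

class OzaOnlineBoost:
    """Sketch of the update rule from 'Online Bagging and Boosting'."""
    def __init__(self, learners, seed=0):
        self.h = learners
        self.sc = np.zeros(len(learners))   # total weight seen when correct
        self.sw = np.zeros(len(learners))   # total weight seen when wrong
        self.rng = np.random.default_rng(seed)

    def update(self, x, y):
        lam = 1.0                           # importance weight of this sample
        for m, h in enumerate(self.h):
            # Present the sample k ~ Poisson(lam) times: this simulates
            # sampling with replacement under the current weight distribution.
            for _ in range(self.rng.poisson(lam)):
                h.update(x, y)
            if h.predict(x) == y:
                self.sc[m] += lam
                eps = self.sw[m] / (self.sc[m] + self.sw[m])
                lam *= 1.0 / (2.0 * (1.0 - eps))    # down-weight if correct
            else:
                self.sw[m] += lam
                eps = self.sw[m] / (self.sc[m] + self.sw[m])
                lam *= 1.0 / (2.0 * eps)            # up-weight if wrong

    def predict(self, x):
        score = 0.0
        for m, h in enumerate(self.h):
            eps = np.clip(self.sw[m] / max(self.sc[m] + self.sw[m], 1e-12),
                          1e-6, 1 - 1e-6)
            score += np.log((1 - eps) / eps) * h.predict(x)  # alpha_m vote
        return 1 if score >= 0 else -1
```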

There is also online boosting for feature selection, which takes a different route, from Grabner and Bischof's "On-line boosting and vision". That paper also adapts the AdaBoost algorithm for its application, and uses boosting to select features rather than to combine fixed weak classifiers. The benefit of doing so is that it avoids computing a weight distribution over the samples, since in the online setting most samples are not available in advance. That is, for the current classifier it selects the most discriminative feature from each feature pool. The algorithm flow is as follows:

To keep the feature pool diverse, the weak classifier with the worst classification performance is replaced each time. In the actual implementation, the feature pool is shared so that its size drops from M × N to M, which greatly reduces the computational workload.
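Below is a schematic sketch of that selector idea as I understand it (all names are mine, and the random-projection features are stand-ins for the paper's Haar-like image features). N selectors share one pool of M weak features; each selector picks the feature with the lowest running weighted error, and after every sample the globally worst feature is swapped for a fresh one to keep the pool diverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_feature(dim):
    """Stand-in weak feature: a random projection thresholded at zero."""
    w = rng.normal(size=dim)
    return lambda x: 1 if x @ w > 0 else -1

class FeatureSelectingBooster:
    """Sketch: N selectors over a shared pool of M weak features."""
    def __init__(self, dim, n_selectors=5, pool_size=20):
        self.dim = dim
        self.pool = [make_feature(dim) for _ in range(pool_size)]
        self.right = np.zeros((n_selectors, pool_size))  # weighted hits
        self.wrong = np.zeros((n_selectors, pool_size))  # weighted misses
        self.best = np.zeros(n_selectors, dtype=int)     # selected feature
        self.alpha = np.zeros(n_selectors)               # voting weights

    def update(self, x, y):
        preds = np.array([f(x) for f in self.pool])
        hit = preds == y
        lam = 1.0                                        # importance weight
        for n in range(len(self.best)):
            self.right[n, hit] += lam
            self.wrong[n, ~hit] += lam
            err = self.wrong[n] / (self.right[n] + self.wrong[n])
            self.best[n] = int(np.argmin(err))           # feature selection
            e = np.clip(err[self.best[n]], 1e-6, 1 - 1e-6)
            self.alpha[n] = 0.5 * np.log((1 - e) / e)
            # AdaBoost-style importance update for the next selector
            lam *= 1 / (2 * (1 - e)) if preds[self.best[n]] == y else 1 / (2 * e)
        # Replace the feature with the worst overall error by a fresh one.
        total = self.wrong.sum(0) / (self.right + self.wrong).sum(0)
        worst = int(np.argmax(total))
        self.pool[worst] = make_feature(self.dim)
        self.right[:, worst] = 0
        self.wrong[:, worst] = 0

    def predict(self, x):
        score = sum(a * self.pool[b](x) for a, b in zip(self.alpha, self.best))
        return 1 if score >= 0 else -1
```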

Finally, I looked at SemiBoost, by the same author as the previous paper, combining the tracking paper "Semi-supervised on-line boosting for robust tracking"; one can also refer to "SemiBoost: boosting for semi-supervised learning". The idea is to assign a pseudo label to unlabeled data based on its similarity to the labeled data, and then run online boosting for feature selection. The flowchart is as follows:
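As a rough illustration of the pseudo-labeling step (a stripped-down sketch of my own: the full SemiBoost criterion also uses similarities among the unlabeled points themselves and the current classifier's predictions, which I omit here), an unlabeled sample can take the sign of a similarity-weighted vote of the labeled data, with the vote's magnitude serving as a confidence:

```python
import numpy as np

def rbf_similarity(a, b, sigma=1.0):
    """Gaussian (RBF) similarity between two feature vectors."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def pseudo_label(x_u, X_l, y_l, sigma=1.0):
    """Label an unlabeled sample by a similarity-weighted vote of the
    labeled set (y_l in {-1, +1}); |vote| acts as a confidence weight."""
    sims = np.array([rbf_similarity(x_u, x_l, sigma) for x_l in X_l])
    vote = float(np.sum(sims * y_l))
    return int(np.sign(vote)), abs(vote)

# Usage: each incoming unlabeled sample gets (label, confidence), and only
# sufficiently confident samples are fed to the online boosting update.
```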

Having written this up, I find it rather thin and rough; it is really just a short reading note, and for the details please see the original papers. I also don't know whether the papers I read are the classics, so if you know of good articles on boosting, please share them. The reason tracking keeps appearing as the application is that these papers come with code one can refer to; if I find the time, I will go through the code and write up my experience. Comments and exchanges are very welcome; my understanding is shallow and may well be wrong ~~
