
Boosting



Presentation Transcript


  1. Boosting CMPUT 615

  2. Boosting Idea We have a weak classifier, i.e., its error rate is only slightly better than 0.5 (barely better than random guessing). Boosting combines many such weak learners to build a strong classifier whose error rate is much lower than 0.5.
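A minimal sketch (not from the slides) of why combining helps: if T weak classifiers erred independently with probability eps < 0.5, a simple majority vote would err only when more than half of them are wrong, and that probability shrinks as T grows. The function name and numbers below are illustrative only.

    from math import comb

    def majority_vote_error(eps, T):
        # P(more than half of T independent classifiers are wrong)
        return sum(comb(T, k) * eps**k * (1 - eps)**(T - k)
                   for k in range(T // 2 + 1, T + 1))

    for T in (1, 11, 101):
        print(T, round(majority_vote_error(0.45, T), 4))
    # roughly 0.45, 0.37, 0.16 -- the vote's error falls toward zero as T grows

AdaBoost does not assume independence; instead it reweights the training examples so that each new weak learner concentrates on the mistakes of the previous ones.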

  3. Boosting: Combining Classifiers

  4. Adaboost Algorithm
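The transcript does not reproduce the algorithm box from this slide, so here is a minimal NumPy sketch of discrete AdaBoost with decision stumps as the weak learner; all names below are my own, not from the slides, and y is assumed to take values in {-1, +1}.

    import numpy as np

    def fit_stump(X, y, w):
        """Weighted decision stump: threshold one feature, predict +/-1."""
        n, d = X.shape
        best = (np.inf, 0, 0.0, 1)          # (weighted error, feature, threshold, sign)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= thr, sign, -sign)
                    err = np.sum(w * (pred != y))
                    if err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def adaboost(X, y, T=50):
        """Discrete AdaBoost with stumps; y must be in {-1, +1}."""
        n = len(y)
        w = np.full(n, 1.0 / n)             # start with uniform example weights
        ensemble = []
        for _ in range(T):
            err, j, thr, sign = fit_stump(X, y, w)
            err = max(err, 1e-12)           # guard against division by zero
            alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
            pred = np.where(X[:, j] <= thr, sign, -sign)
            w *= np.exp(-alpha * y * pred)  # up-weight misclassified examples
            w /= w.sum()
            ensemble.append((alpha, j, thr, sign))
        return ensemble

    def predict(ensemble, X):
        score = np.zeros(len(X))
        for alpha, j, thr, sign in ensemble:
            score += alpha * np.where(X[:, j] <= thr, sign, -sign)
        return np.sign(score)               # final classifier: sign of the weighted vote

Each round fits a stump to the weighted data, weights it by alpha = 0.5 * log((1 - err) / err), and increases the weight of the examples it misclassified; the final classifier is the sign of the weighted vote.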

  5. Boosting With Decision Stumps
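As a usage sketch (not part of the slides): scikit-learn's AdaBoostClassifier uses a depth-1 decision tree, i.e. a stump, as its default base learner, so it matches the setting on this slide. The dataset and parameter values here are purely illustrative.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import AdaBoostClassifier

    # Synthetic two-class problem (illustrative only)
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Default base learner is a depth-1 decision tree, i.e. a stump
    clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))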

  6. First classifier

  7. First 2 classifiers

  8. First 3 classifiers

  9. Final Classifier learned by Boosting

  10. Final Classifier learned by Boosting

  11. Performance of Boosting with Stumps

  12. Boosting Fits an Additive Model Now analyze boosting in the additive-model framework: we want to fit a weighted sum of basis functions to the training data (the slide's equations are written out below).
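The equations themselves are not in the transcript; in the standard additive-model notation (my assumption about what the slide showed), the model and the fitting objective are

    f(x) = \sum_{m=1}^{M} \beta_m \, b(x; \gamma_m)

    \min_{\{\beta_m, \gamma_m\}_{1}^{M}} \; \sum_{i=1}^{N} L\!\left( y_i, \; \sum_{m=1}^{M} \beta_m \, b(x_i; \gamma_m) \right)

where b(x; \gamma) is a basis function (for boosting, a weak classifier with parameters \gamma) and L is a loss function.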

  13. Forward stagewise (greedy search) Add basis functions one at a time, leaving the terms already added unchanged.
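Written out (again my reconstruction of the slide's equations), step m solves a single-term problem and appends the result:

    (\beta_m, \gamma_m) = \arg\min_{\beta, \gamma} \sum_{i=1}^{N} L\!\left( y_i, \; f_{m-1}(x_i) + \beta \, b(x_i; \gamma) \right)

    f_m(x) = f_{m-1}(x) + \beta_m \, b(x; \gamma_m)

Only the newest pair (\beta_m, \gamma_m) is optimized; earlier coefficients are frozen, which is what makes the search greedy.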

  14. Apply Exponential Loss function If we use the exponential loss L(y, f(x)) = exp(-y f(x)), we want to solve the stagewise problem below, and its solution reproduces the AdaBoost updates.
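Spelling this out (a standard derivation, assumed to match the slide): with G denoting a weak classifier taking values in {-1, +1},

    (\beta_m, G_m) = \arg\min_{\beta, G} \sum_{i=1}^{N} \exp\!\big( -y_i ( f_{m-1}(x_i) + \beta \, G(x_i) ) \big)
                   = \arg\min_{\beta, G} \sum_{i=1}^{N} w_i^{(m)} \exp\!\big( -\beta \, y_i \, G(x_i) \big),
    \qquad w_i^{(m)} = \exp\!\big( -y_i \, f_{m-1}(x_i) \big)

    \beta_m = \tfrac{1}{2} \log \frac{1 - \operatorname{err}_m}{\operatorname{err}_m},
    \qquad \operatorname{err}_m = \frac{\sum_i w_i^{(m)} \, \mathbf{1}\{ y_i \neq G_m(x_i) \}}{\sum_i w_i^{(m)}}

so the example weights w_i^{(m)} and the classifier weight \beta_m are exactly the quantities the AdaBoost algorithm maintains.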

  15. Other Loss functions The slide is a table with columns "Loss function" and "Population minimizer".
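The rows of the table are not in the transcript; for a two-class label y in {-1, +1} the standard pairings (my assumption about the slide's content) are

    e^{-y f(x)} \;\Rightarrow\; f^*(x) = \tfrac{1}{2} \log \frac{\Pr(y = +1 \mid x)}{\Pr(y = -1 \mid x)}          % exponential loss
    \log\!\big( 1 + e^{-2 y f(x)} \big) \;\Rightarrow\; \text{same minimizer as the exponential loss}            % binomial deviance
    (y - f(x))^2 \;\Rightarrow\; f^*(x) = \operatorname{E}[\, y \mid x \,] = 2\Pr(y = +1 \mid x) - 1             % squared error

i.e. the exponential loss and the binomial deviance both estimate half the log-odds, while squared error estimates the conditional expectation.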

  16. Robustness of different Loss functions

  17. Boosting and SVM
  • Boosting increases the margin yf(x) by additive stagewise optimization
  • The SVM also maximizes the margin yf(x)
  • The difference is in the loss function: AdaBoost uses the exponential loss, while the SVM uses the hinge loss
  • The SVM is more robust to outliers than AdaBoost
  • Boosting turns weak base classifiers into a strong one; the SVM is itself a strong classifier
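In terms of the margin m = y f(x), the two losses being contrasted are

    \text{AdaBoost (exponential loss):}\quad L(m) = e^{-m}
    \qquad
    \text{SVM (hinge loss):}\quad L(m) = \max(0,\, 1 - m)

The exponential loss grows without bound for badly misclassified points (very negative m), which is why AdaBoost is more sensitive to outliers; the hinge loss grows only linearly.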

  18. Robust Loss function for Regression

  19. Summary
  • Boosting combines weak learners to obtain a strong one
  • From the optimization perspective, boosting is a forward stagewise loss minimization that increases the classification/regression margin
  • Its robustness depends on the choice of the loss function
