
Presentation Transcript


  1. CVPR 2009, Miami, Florida Object Detection Using a Max-Margin Hough Transform Subhransu Maji and Jitendra Malik University of California at Berkeley, Berkeley, CA-94720

  2. Overview • Overview of probabilistic Hough transform • Learning framework • Experiments • Summary

  3. Our Approach: Hough Transform • Popular for detecting parameterized shapes • Hough'59, Duda&Hart'72, Ballard'81, … • Local parts vote for object pose • Complexity: # parts × # votes • Can be significantly lower than a brute-force search over pose (e.g., sliding-window detectors)
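
A minimal sketch of this voting idea (Python/NumPy assumed; the part detections, learned offsets, and accumulator resolution are hypothetical): each local part casts votes for candidate object centers, peaks in the accumulator are the detections, and the cost scales with # parts × # votes rather than with the full pose space.

    import numpy as np

    def hough_vote(parts, offsets, accumulator_shape):
        """Accumulate votes for the object center (sketch).

        parts             : list of (x, y) part detections in the image (hypothetical)
        offsets           : per-part arrays of (dx, dy) learned center offsets (hypothetical)
        accumulator_shape : (height, width) of the voting accumulator
        """
        acc = np.zeros(accumulator_shape)
        for (x, y), offs in zip(parts, offsets):
            for dx, dy in offs:                          # each part casts one vote per offset
                cx, cy = int(round(x + dx)), int(round(y + dy))
                if 0 <= cy < accumulator_shape[0] and 0 <= cx < accumulator_shape[1]:
                    acc[cy, cx] += 1                     # peaks in acc are candidate detections
        return acc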

  4. Generalized to Object Detection • Use Hough-space voting to find objects • Lowe'99, Leibe et al.'04,'08, Opelt&Pinz'08 • Implicit Shape Model (Leibe et al.'04,'08) [Figure: spatial occurrence distributions over position (x, y) and scale (s) for each codeword] Learning • Learn appearance codebook: cluster over interest points on training images • Learn spatial distributions: match codebook to training images and record matching positions on the object (centroid is given)
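
A rough sketch of that learning procedure (Python with NumPy/SciPy assumed; function and variable names are hypothetical): cluster descriptors into an appearance codebook with k-means, then match the codebook back to the training images and record, per codeword, the offsets from its matches to the given object centroid.

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def learn_ism(train_descriptors, train_locations, train_centroids, k=500):
        """ISM-style learning (sketch).

        train_descriptors[i] : (n_i, d) interest-point descriptors of training image i
        train_locations[i]   : (n_i, 2) interest-point positions
        train_centroids[i]   : (2,) object centroid of image i (given)
        """
        # Appearance codebook: cluster descriptors pooled from all training images.
        all_desc = np.vstack(train_descriptors)
        codebook, _ = kmeans2(all_desc, k, minit='points')

        # Spatial occurrence distributions: for each codeword, record the offsets
        # from its matches in the training images to the object centroid.
        spatial = [[] for _ in range(k)]
        for desc, loc, cen in zip(train_descriptors, train_locations, train_centroids):
            labels, _ = vq(desc, codebook)               # match codebook to this training image
            for j, p in zip(labels, loc):
                spatial[j].append(np.asarray(cen) - np.asarray(p))
        return codebook, [np.array(s) for s in spatial]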

  5. Detection Pipeline: Probabilistic Voting [Pipeline figure: Interest Points (e.g., SIFT, GB, local patches) → Matched Codebook Entries (KD-tree lookup) → Probabilistic Voting] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model, 2004

  6. Probabilistic Hough Transform • C – codebook, f – features, l – locations • Detection score: S(O, x) = Σ_i Σ_j P(x | O, C_j, l_i) · P(O | C_j, l_i) · P(C_j | f_i) • Terms: P(x | O, C_j, l_i) – position posterior; P(O | C_j, l_i) – codeword likelihood; P(C_j | f_i) – codeword match
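
A small sketch of evaluating this score at a hypothesized center x (Python; the three probability terms are passed in as hypothetical callables standing in for the learned distributions):

    def detection_score(x, features, locations, n_codewords,
                        match_prob, pos_posterior, codeword_lik):
        """Hough-transform detection score S(O, x) at a hypothesized center x (sketch).

        match_prob(j, f)       ~ P(C_j | f)         soft codebook match
        codeword_lik(j, l)     ~ P(O | C_j, l)      how predictive codeword j is of the object
        pos_posterior(x, j, l) ~ P(x | O, C_j, l)   vote for center x from codeword j at location l
        (all three are hypothetical callables standing in for the learned distributions)
        """
        score = 0.0
        for f, l in zip(features, locations):            # sum over interest points i ...
            for j in range(n_codewords):                 # ... and matched codewords j
                score += pos_posterior(x, j, l) * codeword_lik(j, l) * match_prob(j, f)
        return score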

  7. Learning Feature Weights • Given: appearance codebook C; posterior distribution of the object center for each codeword, P(x|…) • To do: learn codebook weights such that the Hough transform detector works well (i.e., better detection rates) • Contributions: show that these weights can be learned optimally using a max-margin framework; demonstrate that this leads to improved accuracy on various datasets

  8. Learning Feature Weights: First Try • Naïve Bayes weights encourage relatively rare parts • However, rare parts may not be good predictors of the object location • Need to jointly consider both priors and the distribution of location centers

  9. Learning Feature Weights: Second Try • Location invariance assumption: the codeword likelihood does not depend on where the codeword occurs, P(O | C_j, l_i) = P(O | C_j) • The overall score is then linear given the matched codebook entries: S(O, x) = Σ_j w_j a_j(x), with feature weights w_j = P(O | C_j) and activations a_j(x) = Σ_i P(x | O, C_j, l_i) · P(C_j | f_i)
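
A sketch of computing the activations and the linear score (Python/NumPy; same hypothetical callables as in the previous sketch):

    import numpy as np

    def activations(x, features, locations, n_codewords, match_prob, pos_posterior):
        """a_j(x) = sum_i P(x | O, C_j, l_i) * P(C_j | f_i)  (sketch)."""
        a = np.zeros(n_codewords)
        for f, l in zip(features, locations):
            for j in range(n_codewords):
                a[j] += pos_posterior(x, j, l) * match_prob(j, f)
        return a

    # With codeword weights w (w_j = P(O | C_j) in the standard ISM, learned
    # discriminatively in the max-margin version), the score is linear:
    #   score = w.dot(activations(x, features, locations, n_codewords, match_prob, pos_posterior))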

  10. Max-Margin Training • Training: construct dictionary; record codeword distributions on training examples; compute activation vectors a on positive and negative training examples; learn codebook weights by max-margin training • Our contribution: learn non-negative codebook weights w by a max-margin objective over the activations, with class labels y in {+1, −1} • The standard ISM model (Leibe et al.'04) corresponds to fixed weights w_j = P(O | C_j)
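
A minimal sketch of that learning step (Python/NumPy; the projected subgradient loop is a stand-in for a proper QP/SVM solver, and the activation matrix A and labels y are assumed to be precomputed as described above): it minimizes the usual hinge-loss SVM objective while projecting the weights onto the non-negative orthant after every update.

    import numpy as np

    def learn_weights(A, y, C=1.0, lr=1e-3, epochs=200):
        """Max-margin learning of non-negative codeword weights (sketch).

        A : (n, k) activation vectors a_i of positive and negative training examples
        y : (n,)   class labels in {+1, -1}
        Minimizes 0.5*||w||^2 + C*sum_i max(0, 1 - y_i*(w.a_i + b)) subject to w >= 0,
        using projected subgradient descent as a stand-in for a proper QP/SVM solver.
        """
        n, k = A.shape
        w, b = np.zeros(k), 0.0
        for _ in range(epochs):
            margins = y * (A.dot(w) + b)
            viol = margins < 1                           # examples violating the margin
            grad_w = w - C * (y[viol, None] * A[viol]).sum(axis=0)
            grad_b = -C * y[viol].sum()
            w = np.maximum(w - lr * grad_w, 0.0)         # project onto the non-negative orthant
            b = b - lr * grad_b
        return w, b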

  11. Experiment Datasets • ETHZ Shape Dataset (Ferrari et al., ECCV 2006): 255 images over 5 classes (Apple logo, Bottle, Giraffe, Mug, Swan) • UIUC Single Scale Cars Dataset (Agarwal & Roth, ECCV 2002): 1050 training, 170 test images • INRIA Horse Dataset (Jurie & Ferrari): 170 positive + 170 negative images (50 + 50 for training)

  12. Experimental Results • Hough transform details • Interest points: Geometric Blur descriptors at a sparse sample of edge points (Berg&Malik'01) • Codebook constructed using k-means • Voting over position and aspect ratio • Search over scales • Correct detections scored by the PASCAL criterion

  13. Learned Weights (ETHZ Shape) [Figure: Naïve Bayes vs. Max-Margin weights on important parts, colored from blue (low) to dark red (high); the Naïve Bayes weights are influenced by clutter (rare structures)]

  14. Learned Weights (UIUC Cars) [Figure: Naïve Bayes vs. Max-Margin weights on important parts, colored from blue (low) to dark red (high)]

  15. Learned Weights (INRIA Horses) [Figure: Naïve Bayes vs. Max-Margin weights on important parts, colored from blue (low) to dark red (high)]

  16. Detection Results (ETHZ Dataset) [Chart: recall at 1.0 false positives per window]

  17. Detection Results (INRIA Horses) [Plot: detection performance; "Our Work" highlighted]

  18. Detection Results (UIUC Cars) [Plot: detection performance; "Our Work" highlighted]

  19. Hough Voting + Verification Classifier (ETHZ Shape Dataset) [Chart: recall at 0.3 false positives per image; comparisons include KAS (Ferrari et al., PAMI'08) and TPS-RPM (Ferrari et al., CVPR'07)] • Implicit sampling over aspect ratio gives a better-fitting bounding box • IKSVM was run on the top 30 windows + local search

  20. Hough Voting + Verification Classifier [Chart: detection performance; "Our Work" highlighted] • IKSVM was run on the top 30 windows + local search

  21. Hough Voting + Verification Classifier (UIUC Single Scale Car Dataset) • 1.7% improvement • IKSVM was run on the top 10 windows + local search

  22. Summary • Hough transform based detectors offer good detection performance and speed • To get better performance one may learn: discriminative dictionaries (two talks ago, Gall et al.'09) or weights on codewords (our work) • Our approach directly optimizes detection performance using a max-margin formulation • Any weak predictor of the object center can be used in this framework, e.g., regions (one talk ago, Gu et al., CVPR'09)

  23. Thank You • Acknowledgements: work partially supported by ARO MURI W911NF-06-1-0076 and ONR MURI N00014-06-1-0734 • Computer Vision Group @ UC Berkeley • Questions?

  24. Backup Slide: Toy Example [Figure: toy example contrasting a codeword that is rare but gives poor localization with one that is rare and gives good localization]
