
Model comparison and challenges II Compositional bias of salient object detection benchmarking

Presented as part of the Crash Course on Visual Saliency Modeling: Behavioral Findings and Computational Models, CVPR 2013. Xiaodi Hou, K-Lab, Computation and Neural Systems, California Institute of Technology.


Presentation Transcript


  1. Model comparison and challenges II: Compositional bias of salient object detection benchmarking. For the Crash Course on Visual Saliency Modeling: Behavioral Findings and Computational Models, CVPR 2013. Xiaodi Hou, K-Lab, Computation and Neural Systems, California Institute of Technology

  2. Schedule

  3. On detecting salient objects • Learning to Detect a Salient Object [Liu et al., CVPR 07] • Frequency-Tuned Salient Region Detection [Achanta et al., CVPR 09]

  4. The progress! • Some top performers: • [PCA] What Makes a Patch Distinct [Margolin et al., CVPR 13] • [SF] Saliency Filters [Perazzi et al., CVPR 12]: F-measure 0.84 • [GC]/[GC-seg] Global Contrast-Based Salient Region Detection [Cheng et al., CVPR 11]: F-measure 0.75 • [FT] Frequency-Tuned Salient Region Detection [Achanta et al., CVPR 09]: F-measure 0.65, as reported by [Achanta et al., CVPR 09]. Image from [Perazzi et al., CVPR 12]
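For reference, the F-measure behind these numbers is the precision-weighted score standard in this literature. A minimal sketch, assuming binarized prediction and ground-truth masks and the beta^2 = 0.3 weighting popularized by [Achanta et al., CVPR 09]; the function name is illustrative:

```python
import numpy as np

def f_measure(pred_mask, gt_mask, beta2=0.3):
    """F-measure between a binarized saliency map and ground truth.

    beta2 = 0.3 follows the convention of [Achanta et al., CVPR 09],
    weighting precision more heavily than recall.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```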

  5. The progress? • Salient objects in PASCAL VOC? • 850 images from the PASCAL VOC 2010 validation set. • Intersection of the main challenge and the segmentation challenge. • Answers bigger questions: • Where is your algorithm (in salient object detection)? • Where is salient object detection (in computer vision)?

  6. The progress • F-measures on PASCAL VOC: • FT: 0.28 • GC: 0.39 • SF: 0.35 • PCA: 0.40 • GC-seg: 0.38. A 55% performance drop! (For example, SF falls from 0.84 on its own benchmark to 0.35 here, and FT from 0.65 to 0.28.)

  7. The arguments • No!! These objects are not salient! • Our algorithm works on images with salient objects only!

  8. The paradox of salient object detection But hey, what is a “salient object”?

  9. Dataset design bias

  10. Before we proceed… • Google Image Search: "science" • Rutherford atomic model (9) • Test tubes (10) • Microscopes (4) • Double helix (3) • Old guys with crazy hair and glasses (3) Stereotypes of science are not science!

  11. How to compose a biased salient object detection dataset: Decide to build a new salient object dataset! → So what is saliency? → Searching for unambiguous examples of saliency… → Found one! Add to my dataset! → Job done! Let other people play with my dataset!

  12. The dataset design bias "Unlike datasets in machine learning, where the dataset is the world, computer vision datasets are supposed to be a representation of the world." ---- [Torralba and Efros: Unbiased Look at Dataset Bias] • Dataset design bias: biases introduced during the design of a dataset: • Exaggerating stereotypical attributes. • Limited variability in positive samples. • A complete lack of negative samples.

  13. Dataset design bias: the statistics • Object number

  14. Dataset design bias: the statistics • Object eccentricity

  15. Dataset design bias: the statistics • Global foreground and background contrast

  16. Dataset design bias: the statistics • Local foreground/background contrast (contour strength)
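A hypothetical sketch of how the four statistics above could be computed for one annotated image. The exact definitions below (eccentricity as normalized centroid offset from the image center, global contrast as the distance between mean foreground and background colors, contour strength as mean gradient magnitude along the mask boundary) are plausible stand-ins, not necessarily the measures used in the talk:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dataset_bias_stats(image, masks):
    """Per-image bias statistics for a list of binary object masks.

    `image` is an HxWx3 float array in [0, 1]; `masks` is a list of
    HxW boolean arrays, one per annotated object. All definitions
    here are illustrative assumptions.
    """
    h, w = image.shape[:2]
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad_mag = np.hypot(gy, gx)

    stats = {"object_number": len(masks), "objects": []}
    for mask in masks:
        ys, xs = np.nonzero(mask)

        # Eccentricity: centroid offset from the image center,
        # normalized by half the image diagonal (0 = dead center).
        offset = np.hypot(ys.mean() - h / 2, xs.mean() - w / 2)
        eccentricity = offset / (np.hypot(h, w) / 2)

        # Global FG/BG contrast: distance between mean colors.
        global_contrast = np.linalg.norm(
            image[mask].mean(axis=0) - image[~mask].mean(axis=0))

        # Local contrast (contour strength): mean gradient
        # magnitude along the mask boundary.
        boundary = mask & ~binary_erosion(mask)
        contour_strength = grad_mag[boundary].mean()

        stats["objects"].append(
            (eccentricity, global_contrast, contour_strength))
    return stats
```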

  17. Towards a better salient object dataset

  18. The new project • Build a salient object detection dataset on top of a good object detection dataset (e.g., PASCAL VOC). • Let eye fixations pick out the salient objects!

  19. Data collection (in progress) • SR Research EyeLink 1000. • 2-second viewing time. • "Free-viewing" instruction (more on this later). • 3 subjects (more subjects on the way). We will release the dataset very soon!

  20. What makes an object salient • Unit conversion: from fixation maps to an object fixation score, i.e., the sum of the blurred fixation map's intensity within the object mask (see the sketch below).
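A minimal sketch of that conversion, assuming the raw fixation map is smoothed with an isotropic Gaussian; the bandwidth `sigma` is an arbitrary placeholder, not a value from the talk:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def object_fixation_score(fixation_map, object_mask, sigma=25.0):
    """Sum of the blurred fixation map's intensity inside the object mask.

    `fixation_map`: HxW array of fixation counts.
    `object_mask`: HxW boolean mask for one labeled object.
    `sigma`: assumed Gaussian blur bandwidth in pixels (a placeholder,
    not a value from the talk).
    """
    blurred = gaussian_filter(fixation_map.astype(float), sigma=sigma)
    return float(blurred[object_mask].sum())
```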

  21. Object size and saliency • Large objects attract more fixations in total. • Small objects receive denser fixations (more fixations per unit area).
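Read literally, the two bullets compare a total count against a per-area rate; a hypothetical helper makes the distinction explicit:

```python
def fixation_density(fixation_score, object_mask):
    """Fixation score per unit of object area (in pixels).

    "Denser fixations" on small objects means this ratio tends to
    grow as the object shrinks, even while the total score falls.
    """
    return fixation_score / max(int(object_mask.sum()), 1)
```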

  22. Object size and saliency

  23. Objects, salient objects, and the most salient objects • Salient objects: fixation score higher than the mean (67.3% of objects). • Most salient objects: fixation score higher than twice the mean (27.8% of objects). [Figure: image with fixations, object labeling, salient objects, most salient object(s)]
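A sketch of those two thresholds applied to a vector of per-object fixation scores; the function name is illustrative:

```python
import numpy as np

def label_salient_objects(scores):
    """Split objects into "salient" and "most salient" by fixation score.

    `scores`: one fixation score per labeled object. Thresholds are
    the ones on the slide: above the mean, and above twice the mean.
    """
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    return scores > mean, scores > 2 * mean
```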

  24. Salient objects and salient object detection • Guess how the algorithms perform on "salient objects" and "most salient objects"? • On all objects: • FT: 0.28 • GC: 0.39 • SF: 0.35 • PCA: 0.38

  25. Testing on salient objects • FT: 0.22 • GC: 0.35 • SF: 0.31 • PCA: 0.38 • GC-seg: 0.39. A 60% performance drop! [Figure: salient objects on PASCAL VOC]

  26. Testing on most salient objects • FT: 0.10 • GC: 0.20 • SF: 0.15 • PCA: 0.26 • GC-seg: 0.23. A 79.8% performance drop! [Figure: most salient objects on PASCAL VOC]

  27. Something is wrong, seriously!

  28. Discussions

  29. The role of saliency in a visual system • Is the bad performance due to boundary detection? • Or due to the unpredictability of human "free will"?

  30. Saliency as an oracle • Oracle: select the best segment per object • CPMC: 78% coverage from 154 segments • gPb: 61% coverage from 1286 segments • (*coverage = intersection / union)
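A sketch of the oracle upper bound: for each ground-truth object, take the best coverage (intersection over union) achievable by any segment in the proposal pool; generating the pools themselves (CPMC, gPb) is outside this sketch:

```python
import numpy as np

def oracle_coverage(gt_mask, proposal_masks):
    """Best achievable coverage for one ground-truth object.

    `gt_mask`: HxW boolean mask of the object.
    `proposal_masks`: iterable of HxW boolean masks (e.g., segments
    from CPMC or gPb, produced elsewhere).
    Coverage is intersection over union, as defined on the slide.
    """
    gt = gt_mask.astype(bool)
    best = 0.0
    for seg in proposal_masks:
        seg = seg.astype(bool)
        inter = np.logical_and(gt, seg).sum()
        union = np.logical_or(gt, seg).sum()
        if union > 0:
            best = max(best, inter / union)
    return best
```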

  31. Saliency and tasks • Build a salient object detection dataset from an egocentric object dataset. • Let the eye fixations speak. [Figure: eye tracker and forward-looking camera] Learning to Recognize Daily Actions Using Gaze [Fathi et al., ECCV 12]

  32. What makes an object salient? • Objects in egocentric actions • Fixated object == manipulated object?

  33. Thanks

  34. Acknowledgement • Joint work with Yin Li @ Gatech. • Special thanks to Nathan Faivre for his kind help on eye tracking.

  35. Open discussions
