
Studying Relationships Between Human Gaze, Description, and Computer Vision


Presentation Transcript


  1. Studying Relationships Between Human Gaze, Description, and Computer Vision
Kiwon Yun1, Yifan Peng1, Dimitris Samaras1, Gregory J. Zelinsky1,2, Tamara L. Berg3
1Department of Computer Science, Stony Brook University; 2Department of Psychology, Stony Brook University; 3Department of Computer Science, University of North Carolina
Computer Vision and Pattern Recognition (CVPR) 2013

  2. Overview
User behavior while freely viewing images contains an abundance of information about user intent and depicted scene content.
• Human gaze indicates “where” the important things are in an image.
• Description tells us “what” is in an image and which parts of the image are important to the viewer.
• Computer vision predicts “what” might be “where” in an image; however, its output is always noisy and carries no notion of importance.

  3. Overview
User behavior while freely viewing images contains an abundance of information about user intent and depicted scene content.
1. We conduct several experiments to better understand the relationship between gaze, description, and image content.
2. From these exploratory analyses, we build prototype applications for gaze-enabled object detection and annotation.
(Example figure: image content, human gaze, and description, e.g. “An old black woman wearing a turban and a headdress and a dog next to her wearing the same red headdress.”)

  4. Datasets
• PASCAL VOC
  • 1,000 images, free-viewing for 3 seconds
  • eye movements from 3 observers
  • 5 natural language descriptions per image from different observers
• SUN09
  • 104 images, free-viewing for 5 seconds
  • eye movements from 8 observers, each of whom provided a scene description
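
A minimal sketch of how one image's data from these datasets could be organized for the analyses on the following slides, assuming bounding-box object annotations. The field names here are illustrative, not the datasets' actual file formats.

```python
# Illustrative container for one image's annotations (hypothetical field
# names, not the datasets' actual file formats).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectAnnotation:
    category: str                               # e.g. "person", "dog", "bottle"
    bbox: Tuple[float, float, float, float]     # (xmin, ymin, xmax, ymax) in pixels

@dataclass
class ImageRecord:
    image_id: str
    objects: List[ObjectAnnotation] = field(default_factory=list)
    fixations: List[Tuple[float, float]] = field(default_factory=list)  # (x, y) gaze points, observers pooled
    descriptions: List[str] = field(default_factory=list)               # free-form sentences
```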

  5. Experiments and Analyses: Gaze vs. Object Type
• People are more likely to look at people, other animals, televisions, and vehicles.
• People are less likely to look at chairs, bottles, potted plants, drawers, and rugs.
• Animate objects are much more likely to be fixated than inanimate objects (0.636 vs. 0.495).
(Figure: probability of being fixated when present, for various object categories; top: PASCAL, bottom: SUN09.)
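
One way to read the "probability of being fixated when present" statistic: for each category, the fraction of its object instances hit by at least one fixation. A minimal sketch, assuming the ImageRecord structure above and treating an object as fixated when any gaze point falls inside its bounding box (the paper's exact region definition may differ).

```python
from collections import defaultdict

def point_in_box(pt, bbox):
    """True if a gaze point (x, y) falls inside a bounding box (xmin, ymin, xmax, ymax)."""
    x, y = pt
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

def fixation_probability_by_category(images):
    """P(fixated | present): fraction of object instances of each category
    that receive at least one fixation."""
    present, fixated = defaultdict(int), defaultdict(int)
    for img in images:
        for obj in img.objects:
            present[obj.category] += 1
            if any(point_in_box(pt, obj.bbox) for pt in img.fixations):
                fixated[obj.category] += 1
    return {cat: fixated[cat] / present[cat] for cat in present}
```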

  6. Experiments and Analyses: Gaze vs. Location on Objects; What objects do people describe?
• Animate objects are much more likely to be described than inanimate objects (0.843 vs. 0.545).
(Figure: per-category results for bus, train, tv, person, horse, bird, bicycle, chair, table, cabinet, curtain, and plant.)
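
The "probability of being described when present" statistic can be sketched the same way, reusing the ImageRecord structure above, with a crude proxy for the matching step: an instance counts as described if its category name appears in any sentence for the image. Real matching would need synonyms and plurals (e.g. "man", "woman", "lady" for person), and the paper's annotation of described objects is more careful than this.

```python
from collections import defaultdict

def description_probability_by_category(images):
    """P(described | present): fraction of object instances of each category
    whose name appears in at least one of the image's descriptions
    (keyword matching only; a crude stand-in for real sentence annotation)."""
    present, described = defaultdict(int), defaultdict(int)
    for img in images:
        text = " ".join(img.descriptions).lower()
        for obj in img.objects:
            present[obj.category] += 1
            if obj.category.lower() in text:
                described[obj.category] += 1
    return {cat: described[cat] / present[cat] for cat in present}
```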

  7. Experiments and Analyses: What is the relationship between gaze and description?
Fixated objects: bottle, person. Described objects: bottle, person.
S1: A man is reading the label on a beverage bottle.
S2: A man looking at the bottle of beer that he is holding.
S3: The man in a white tee shirt is holding a beer bottle and looking at it.
S4: The scraggly haired man is holding up and admiring his bottle of beer.
S5: Young man with curly black hair holding a beer bottle.
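
For a per-image comparison like the bottle/person example above, the two object sets and their overlap can be computed directly. A sketch reusing the helpers from the earlier blocks; the box test and keyword proxy remain illustrative simplifications.

```python
def fixated_vs_described(img):
    """Return the set of fixated categories, the set of described categories,
    and their Jaccard overlap for a single ImageRecord."""
    fixated = {obj.category for obj in img.objects
               if any(point_in_box(pt, obj.bbox) for pt in img.fixations)}
    text = " ".join(img.descriptions).lower()
    described = {obj.category for obj in img.objects if obj.category.lower() in text}
    union = fixated | described
    jaccard = len(fixated & described) / len(union) if union else 1.0
    return fixated, described, jaccard
```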

  8. Gaze-Enabled Computer Vision: Analysis of Human Gaze with Object Detectors
• The potential for gaze to increase the performance of object detectors varies by object category.
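
The slide does not spell out how gaze and detector outputs are combined, so the following is only one plausible scheme: boost a detection's score by the number of fixations inside its box, so fixated detections rise and unfixated false positives fall in the ranking. The boost weight is a made-up parameter, and point_in_box comes from the earlier sketch.

```python
def rescore_detections(detections, fixations, boost=0.5):
    """Gaze-enabled re-scoring (illustrative only): `detections` is a list of
    (category, bbox, score) tuples from any off-the-shelf detector; each score
    is increased by `boost` per fixation landing inside the box."""
    rescored = []
    for category, bbox, score in detections:
        hits = sum(point_in_box(pt, bbox) for pt in fixations)
        rescored.append((category, bbox, score + boost * hits))
    return sorted(rescored, key=lambda det: det[2], reverse=True)
```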

  9. Gaze-Enabled Computer Vision: Gaze-Enabled Object Detection and Annotation
• Combine gaze and automated object detection methods to create a collaborative system for detection and annotation.
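
As a sketch of the annotation side of such a collaborative system, again under the assumptions above and with a made-up score threshold: keep the categories whose gaze-rescored detections are both confident and actually fixated, and report them as image-level labels.

```python
def gaze_enabled_annotation(detections, fixations, score_thresh=0.0):
    """Image-level labels from gaze + detection (illustrative): a category is
    reported if one of its re-scored detections clears the threshold and
    contains at least one fixation."""
    labels = set()
    for category, bbox, score in rescore_detections(detections, fixations):
        if score >= score_thresh and any(point_in_box(pt, bbox) for pt in fixations):
            labels.add(category)
    return sorted(labels)
```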

  10. Gaze-Enabled Computer Vision: Gaze-Enabled Object Detection and Annotation

  11. Conclusion
• Through a series of behavioral studies and experimental evaluations, we explored the information contained in eye movements and descriptions, and analyzed their relationship with image content.
• We also examined the complex relationships between human gaze and the outputs of current visual detection methods.
• In future work, we will build on this work in the development of more intelligent human-computer interactive systems for image understanding.
[1] Studying Relationships Between Human Gaze, Description, and Computer Vision. Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J. Zelinsky, and Tamara L. Berg. Computer Vision and Pattern Recognition (CVPR) 2013, Oregon, USA.
[2] Specifying the Relationships Between Objects, Gaze, and Descriptions for Scene Understanding. Kiwon Yun, Yifan Peng, Hossein Adeli, Tamara L. Berg, Dimitris Samaras, and Gregory J. Zelinsky. Vision Sciences Society (VSS) 2013, Florida, USA.
