
Cognitive Computer Vision



  1. Cognitive Computer Vision Kingsley Sage khs20@sussex.ac.uk and Hilary Buxton hilaryb@sussex.ac.uk Prepared under ECVision Specific Action 8-3 http://www.ecvision.org

  2. Lecture 15 • Active Vision & cameras • Research issues

  3. Active vision • In recent years there has been growing interest in the active control of “image formation” to simplify and accelerate scene understanding • Examples of controllable aspects of “image formation” include: • gaze or focus of attention (saccadic control) • stereo viewing geometry (vergence control; see the sketch below) • a head-mounted camera
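
A concrete illustration of vergence control: the sketch below computes the symmetric vergence angle a stereo head must adopt to fixate a point at a given depth. This is a minimal geometric sketch; the 30 cm baseline is an illustrative assumption, not a parameter of any particular head.

```python
# Minimal sketch of stereo-head vergence geometry. The baseline value
# is an illustrative assumption, not taken from a real camera head.
import math

def vergence_angle(depth_m, baseline_m=0.3):
    """Symmetric vergence: each camera rotates atan(B / (2Z)) inwards
    to fixate a point at depth Z; total vergence is twice that."""
    return 2.0 * math.atan2(baseline_m / 2.0, depth_m)

for Z in (0.5, 1.0, 2.0, 5.0):
    print(f"fixation depth {Z:.1f} m -> "
          f"vergence {math.degrees(vergence_angle(Z)):.2f} deg")
```

Note how the required vergence angle falls off quickly with depth, which is why active vergence control matters mainly for near-field fixation.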

  4. Active vision • Historical roots of “active computer vision”: • 1982: term first used by Bajcsy (NATO workshop) • 1987: paper by Aloimonos et al. (ICCV) • 1989: entire session at ICCV • References: • “Active Perception”, R. Bajcsy, Proceedings of the IEEE, Vol. 76, No. 8, pp. 996-1006, August 1988 • “Active Vision”, J. Y. Aloimonos, I. Weiss and A. Bandopadhay, ICCV, pp. 333-356, 1987

  5. Active vision: to reconstruct or not to reconstruct? • “Classical” stereo correspondence reconstructs the scene in a reference frame fixed by the stereo geometry • Active vision changes vergence angles, focus etc., making reconstruction by traditional means intractable • Active systems avoid reconstruction wherever possible • Many visual control tasks, such as driving a car or grasping an object, can be performed by servoing directly from measurements made in the image (a sketch of such a servo law follows): • “A New Approach to Visual Servoing in Robotics”, B. Espiau, F. Chaumette and P. Rives, IEEE Trans. on Robotics and Automation, 8(3), June 1992
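
To make “servoing directly from image measurements” concrete, here is a minimal Python sketch of the classical image-based servo law v = -gain * pinv(L) * (s - s_desired) for point features, in the spirit of the Espiau et al. paper. The feature coordinates, depths and gain below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of image-based visual servoing for point features.
# All numeric values are illustrative assumptions.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for a normalised image point
    (x, y) at depth Z, relating 6-DOF camera velocity to image motion."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def servo_step(features, targets, depths, gain=0.5):
    """One control step: drive image features towards their targets.
    Returns a camera velocity screw (vx, vy, vz, wx, wy, wz)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(targets)).ravel()
    # Classical proportional law: v = -gain * pinv(L) @ error
    return -gain * np.linalg.pinv(L) @ error

# Illustrative call: three tracked points, desired positions, rough depths
v = servo_step(features=[(0.1, 0.2), (-0.1, 0.1), (0.0, -0.15)],
               targets=[(0.0, 0.0), (-0.2, 0.0), (0.1, -0.1)],
               depths=[1.5, 1.5, 1.2])
print(v)
```

No scene reconstruction is performed: the error is defined, measured and regulated entirely in the image.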

  6. Active vision: application areas • Task-based visual control (example in the ActIPret project) • Navigation • Telepresence • Wearable computing • Panoramic cameras • Saccadic control

  7. Task-based visual control: the ActIPret project • In ActIPret, information about the current task (which objects we are likely to interact with, and what types of behaviour to expect) is used to determine in real time an optimal viewing geometry (gaze vector, focus, zoom)

  8. Task-based visual control (source unknown, for now) • The vision system uses an appearance-based model to determine how and when it is appropriate to pick up the part

  9. Active vision in navigation. Example: the GTI project, http://www.robots.ox.ac.uk/~lav/Research/GTI/section1.html • One approach to visual navigation in cluttered environments is to recover the boundaries of free space and then move conservatively along the middle of it • Humans, by contrast, tend to cut corners by “swinging” from protruding corner to protruding corner • Using a stereo head to recover the range to a fixated point, the vehicle can be taken into “orbit” around the fixated point at a chosen safe radius |R| of clearance; the sense of rotation is chosen by the sign of R (R>0 or R<0; see the controller sketch below)
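
As a hedged sketch of the orbiting idea, the snippet below computes a turn-rate command for a simple unicycle robot so that it circles the fixated point at the chosen radius, with the sign of R selecting the sense of rotation. The unicycle model, gains and sign conventions are my assumptions for illustration; this is not the GTI project's actual controller.

```python
# Minimal sketch of orbiting a fixated point at a safe radius |R|.
# Model, gains and sign conventions are illustrative assumptions.
import math

def orbit_command(range_to_point, bearing_to_point, R_desired,
                  speed=0.3, k_r=1.0, k_b=2.0):
    """Turn rate for a unicycle robot circling the fixated point at
    radius |R_desired|; sign of R_desired picks the rotation sense.
    bearing is in radians, 0 = straight ahead (wrapping not handled)."""
    # On a perfect orbit the point sits at +/-90 degrees to the heading
    # and the measured range equals |R_desired|.
    side = 1.0 if R_desired > 0 else -1.0
    bearing_error = bearing_to_point - side * math.pi / 2.0
    range_error = range_to_point - abs(R_desired)
    # Nominal curvature v/R, corrected for bearing and range errors
    omega = speed / R_desired + k_b * bearing_error + side * k_r * range_error
    return speed, omega

v, w = orbit_command(range_to_point=2.2, bearing_to_point=1.4, R_desired=2.0)
print(f"forward speed {v:.2f} m/s, turn rate {w:.2f} rad/s")
```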

  10. Telepresence. Example: the VFR project, http://www.robots.ox.ac.uk/~lav/Research/VFR/section1.html • Telepresence can be defined as the process of sensing sufficient information about the operator and task environment, and communicating this information to the human operator in a sufficiently natural way, that the operator feels physically present at the remote site • The top movie shows an early version of a tracker using infra-red light to control two degrees of freedom of the head at 50 Hz; the bottom movie shows a more sophisticated version controlling the head at the end of a robot arm

  11. Wearable computing. Example: DyPERS from MIT

  12. Panoramic vision • 360° images are usually obtained with a 2D imaging array looking into a rotating mirror or a hemispherical reflector (a sketch of unwarping such images follows) • The rotating-mirror approach allows variable resolution over different angular ranges • Lots of good web links at: http://www.cis.upenn.edu/~kostas/omni.html
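
To show how a mirror-based panoramic image is typically used, the sketch below unwarps the annulus of a catadioptric image into a rectangular panoramic strip by polar resampling. The image centre and inner/outer radii are illustrative parameters that depend on the actual mirror geometry.

```python
# Minimal sketch of unwarping a catadioptric (mirror) image into a
# panoramic strip. Centre and radii are illustrative assumptions.
import numpy as np

def unwarp_panorama(omni, cx, cy, r_inner, r_outer, out_w=720):
    """Map the annulus between r_inner and r_outer around (cx, cy)
    to a panorama of width out_w: one column per viewing angle,
    one row per radius (nearest-neighbour lookup)."""
    out_h = int(r_outer - r_inner)
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    xs = (cx + rr * np.cos(tt)).astype(int).clip(0, omni.shape[1] - 1)
    ys = (cy + rr * np.sin(tt)).astype(int).clip(0, omni.shape[0] - 1)
    return omni[ys, xs]

# Illustrative call on a synthetic 480x480 single-channel image
omni = np.random.rand(480, 480)
pano = unwarp_panorama(omni, cx=240, cy=240, r_inner=60, r_outer=220)
print(pano.shape)   # (160, 720)
```

The variable-resolution property of the rotating-mirror design shows up here directly: pixels near the outer radius cover a larger arc than pixels near the inner radius.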

  13. Panoramic vision • Panorama pictures taken from: http://cmp.felk.cvut.cz/demos/Omnivis/Photos/omniphotos.html

  14. Panoramic vision application: homing robot (ICS, Greece), http://www.ics.forth.gr/~argyros/research/pan_homing.htm • Perceptual processes are addressed in the context of the goals, environment and behaviour of a system • A novel, vision-based method addresses robot homing: the problem of computing a route so that a robot can return to its initial “home” position after the execution of an arbitrary “prior” path • The robot tracks visual features in panoramic views of the environment that it acquires as it moves

  15. Panoramic vision application: homing robot (ICS, Greece), http://www.ics.forth.gr/~argyros/research/pan_homing.htm • When homing is initiated, the robot selects Milestone Positions (MPs) on the “prior” path by exploiting information in its visual memory; the MP selection process aims to pick positions that guarantee the success of the local control strategy between two consecutive MPs (a simplified homing sketch follows) • See the website for a panoramic view
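
As a simplified illustration of panoramic, bearing-only homing (not the FORTH implementation itself), the sketch below estimates a heading towards the previous milestone from matched feature bearings in the current and stored panoramic views, in the spirit of average-landmark-vector homing. Pairing features by list index and the sign convention are simplifying assumptions.

```python
# Minimal sketch of bearing-only panoramic homing between milestones.
# Index-based feature matching is an illustrative simplification.
import math

def homing_direction(current_bearings, home_bearings):
    """Given matched feature bearings (radians, common compass frame)
    for the current view and the stored milestone view, return an
    approximate heading towards the milestone: each matched feature
    votes with the difference of its unit bearing vectors."""
    hx = hy = 0.0
    for b_cur, b_home in zip(current_bearings, home_bearings):
        hx += math.cos(b_cur) - math.cos(b_home)
        hy += math.sin(b_cur) - math.sin(b_home)
    return math.atan2(hy, hx)

heading = homing_direction(
    current_bearings=[0.2, 1.8, 3.5, 5.0],
    home_bearings=[0.4, 1.6, 3.3, 5.2])
print(f"steer towards {math.degrees(heading):.1f} degrees")
```

Because only bearings in the panoramic view are used, no metric reconstruction of the environment is needed, which is exactly the active-vision stance of slide 5.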

  16. Saccadic control: attention–recognition loop (KTH, Sweden), http://www.nada.kth.se/~celle/ • The scene is observed using a stereo head • The disparity between the two images can be used to localise objects in 3D (see the sketch below) • A saccade is made to an object, and the localised object is then recognised • This closes the attention–recognition loop
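
A minimal sketch of the localisation step: classic pinhole stereo gives depth Z = f * B / d from disparity d, and the pan/tilt saccade that centres the object follows from its pixel offset. The focal length, baseline and image centre below are illustrative values, not the KTH head's calibration.

```python
# Minimal sketch of disparity-based localisation and the saccade that
# centres the attended object. All calibration values are assumptions.
import math

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.3):
    """Pinhole stereo relation: Z = f * B / d (d in pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def saccade_angles(x_px, y_px, cx=320.0, cy=240.0, focal_px=700.0):
    """Pan and tilt angles (radians) that centre pixel (x, y)."""
    return math.atan2(x_px - cx, focal_px), math.atan2(y_px - cy, focal_px)

# Illustrative use: object seen at pixel (400, 200) with 35 px disparity
pan, tilt = saccade_angles(400, 200)
print(f"pan {pan:.3f} rad, tilt {tilt:.3f} rad, "
      f"depth {depth_from_disparity(35):.2f} m")
```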

  17. Robots that interact with humans: the SONY QRIO robot

  18. The end • Please feed back comments to Kingsley Sage or Hilary Buxton at the University of Sussex, UK
