Inverse Depth Parameterization for Monocular SLAM
Vision Seminar, 2009. 3. 25 (Wed)
Young Ki Baik, Computer Vision Lab.
References
• Inverse Depth Parameterization for Monocular SLAM, J. Civera, A. J. Davison, J. M. M. Montiel (IEEE Trans. on Robotics, 2008)
• Inverse Depth to Depth Conversion for Monocular SLAM, J. Civera, A. J. Davison, J. M. M. Montiel (ICRA 2007)
• Unified Inverse Depth Parameterization for Monocular SLAM, J. M. M. Montiel, J. Civera, A. J. Davison (RSS 2006)
Outline
• What is SLAM?
• What is visual SLAM?
• Overall process of visual SLAM
• An issue with the map
• Inverse depth parameterization
• Conclusion
What is SLAM?
• SLAM (Simultaneous Localization and Mapping) is a technique used by robots and autonomous vehicles to build a map of an unknown environment while simultaneously keeping track of their current position within it.
(Slide figure: a robot asking "Where am I?", with observation and map building.)
What is SLAM?
• SLAM relies on statistical techniques based on recursive Bayesian estimation, such as Kalman filters and particle filters (Monte Carlo methods).
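As a quick reminder (standard material, not from the slides), the generic recursive Bayesian estimate alternates a prediction step using the motion model with an update step using the observation model; with state x_k, control u_k, and measurement z_k:

```latex
% Prediction with the motion model, then correction with the observation model
p(x_k \mid z_{1:k-1}) = \int p(x_k \mid x_{k-1}, u_k)\, p(x_{k-1} \mid z_{1:k-1})\, dx_{k-1}
p(x_k \mid z_{1:k}) \propto p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})
```

The Kalman filter is the closed-form Gaussian case of this recursion; particle filters approximate it with weighted samples.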
What is Visual SLAM?
• SLAM can use many different types of sensors to acquire the observations used to build the map, such as laser rangefinders, sonar sensors, and cameras.
• Visual SLAM uses cameras as the sensor.
Why Visual SLAM?
• Vision data provides more meaningful information (such as color, texture, and shape) than other sensors.
Overall process of Visual SLAM
Initialization → Prediction → Measurement → Update → Map management
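A generic EKF prediction/update pair, which is what the Prediction and Update stages of this loop implement, could look like the following numpy sketch (illustrative only, not the paper's code; f, F, Q, h, H, R are the motion model, its Jacobian, the process noise, the measurement model, its Jacobian, and the measurement noise):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """EKF prediction: propagate the state with the motion model f and its Jacobian F."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """EKF update: correct the state with measurement z, model h, Jacobian H, noise R."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```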
Visual SLAM Demo
• MonoSLAM
Problems
• Proposal
• Data association
• Filter
• Map management
• Real-time
What is the map of visual SLAM?
• Map (landmarks, LM): Li = (yi, Yi)^T + patch
  yi : 3D position of the landmark
  Yi : 3x3 covariance matrix of the landmark position
• Robot (or camera): C = (r, q)^T
  r : 3D position
  q : 3D orientation
What is the map of visual SLAM?
• Robot and map
(Slide figure: camera state C6D = (r, q)^T together with landmarks L1 = (y1, Y1)^T and L2 = (y2, Y2)^T.)
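A minimal sketch of how such a map could be stored in code (field names are illustrative, not from the paper; in the actual EKF the camera and all landmarks are stacked into one joint state vector with a full covariance):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Landmark:
    y: np.ndarray      # 3D position of the landmark (3-vector)
    Y: np.ndarray      # 3x3 covariance of the landmark position
    patch: np.ndarray  # image patch used to re-detect the landmark

@dataclass
class Camera:
    r: np.ndarray      # 3D position (3-vector)
    q: np.ndarray      # 3D orientation, e.g. a unit quaternion (4-vector)
```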
How can we obtain initial LM info.?
• Binocular camera case
3D landmarks L = (y, Y)^T are reconstructed directly from stereo images, since a binocular camera provides parallax.
Parallax: the angle measured between the rays to a point captured from different viewpoints.
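For a rectified stereo pair this reconstruction reduces to the standard relation Z = f·b/d between focal length f, baseline b, and disparity d; a toy sketch (standard stereo geometry, not from the paper):

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * b / d (requires nonzero disparity)."""
    return f_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 0.12 m, disparity = 14 px  ->  Z = 6.0 m
print(stereo_depth(700.0, 0.12, 14.0))
```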
How can we obtain initial LM info.?
• Monocular camera case
Is it possible to reconstruct 3D landmarks L = (y, Y)^T directly with a monocular camera?
How can we obtain initial LM info.?
• Delayed initialization of LM location
• Batch update [Dean 2000, Bailey 2003]
  A large baseline ensures high parallax, but a large baseline cannot always be expected.
  → The problem is the distance from the camera to the landmark.
How can we obtain initial LM info.?
• Delayed initialization of LM location
• Gaussian sum filter [Kwok 2005, Sola 2005]
  Multiple hypotheses are initialized at predefined depths, and those not re-observed in subsequent images are pruned.
  → It can only cover the predefined depths.
  → It cannot cover distant depths.
  → It cannot cover low-parallax cases.
How can we obtain initial LM info.?
• Undelayed initialization of LM
• Inverse depth parameterization [Montiel 2006-2008]
  A ray is initialized immediately, and its uncertainty is updated using an inverse depth coding.
  → It can cover depths up to infinity.
How can we obtain initial LM info.?
• Undelayed initialization of LM
• Inverse depth parameterization [Montiel 2006-2008]
• Contributions:
  * Landmarks are initialized immediately.
  * Landmarks at (near) infinite depth are covered.
  * Low-parallax cases are covered.
Inverse Depth Parameterization
• Overview
L_XYZ = (X, Y, Z)^T = (x, y, z)^T + (1/ρ) m(θ, φ),  where 1/ρ = d is the depth along the unit ray m.
(Slide figure: world frame W, camera frame C with pose C6D = (r^WC, q^WC)^T, the anchor point (x, y, z)^T, the ray m, and the parallax angle α.)
Inverse Depth Parameterization
• Definition (point parameterization)
• X-Y-Z point parameterization:
  L_XYZ = (X, Y, Z)^T
• Inverse depth point parameterization:
  L_IDP = (x, y, z, θ, φ, ρ)^T
  L_XYZ = (x, y, z)^T + (1/ρ) m(θ, φ)
  m(θ, φ) = (cos φ sin θ, -sin φ, cos φ cos θ)^T
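A small numpy sketch of these two formulas, assuming the frame conventions above (illustrative only):

```python
import numpy as np

def m(theta, phi):
    """Unit ray direction in the world frame from azimuth theta and elevation phi."""
    return np.array([np.cos(phi) * np.sin(theta),
                     -np.sin(phi),
                     np.cos(phi) * np.cos(theta)])

def idp_to_xyz(L_idp):
    """Euclidean point encoded by the 6-vector (x, y, z, theta, phi, rho)."""
    x, y, z, theta, phi, rho = L_idp
    return np.array([x, y, z]) + (1.0 / rho) * m(theta, phi)
```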
Inverse Depth Parameterization
• Definition (measurement equation)
• X-Y-Z system:
  h^C = h_XYZ = R^CW [ (X, Y, Z)^T - r^WC ]
• Inverse depth system:
  h^C = h_ρ = R^CW [ ρ ( (x, y, z)^T - r^WC ) + m(θ, φ) ]
• Projection:
  (u, v)^T = ( u0 - fx hx^C / hz^C,  v0 - fy hy^C / hz^C )
• The inverse depth form can be safely used even for points at infinity (ρ = 0).
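A sketch of the inverse depth measurement equation and the projection, reusing m(theta, phi) from the previous sketch (R_cw rotates world to camera, r_wc is the camera position, and fx, fy, u0, v0 are assumed pinhole intrinsics):

```python
import numpy as np

def h_inverse_depth(L_idp, R_cw, r_wc):
    """Camera-frame ray to an inverse depth landmark: R^CW [rho((x,y,z)^T - r^WC) + m(theta, phi)]."""
    x, y, z, theta, phi, rho = L_idp
    return R_cw @ (rho * (np.array([x, y, z]) - r_wc) + m(theta, phi))

def project(h_c, fx, fy, u0, v0):
    """Pinhole projection of the camera-frame ray h_c, using the slide's sign convention."""
    hx, hy, hz = h_c
    return np.array([u0 - fx * hx / hz, v0 - fy * hy / hz])
```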
Inverse Depth Parameterization
• Initialization of LM using IDP
L_IDP = (x, y, z, θ, φ, ρ)^T
At initialization, the anchor (x, y, z)^T is set to the current camera position r of C = (r, q)^T, so L_IDP = (r, θ, φ, ρ)^T.
Inverse Depth Parameterization
• Initialization of LM using IDP
Given an observed image point (u', v', 1)^T and camera pose C = (r, q)^T:
  h^W = R^WC (u', v', 1)^T
  θ = arctan2(hx^W, hz^W)
  φ = arctan2(-hy^W, sqrt((hx^W)^2 + (hz^W)^2))
  ρ = 0.1 (or another arbitrary constant initial value)
  L_IDP = (r, θ, φ, ρ)^T
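A sketch of this initialization, assuming (u', v') are already normalized (calibrated and undistorted) image coordinates and R_wc, r_wc are the current camera rotation and position (illustrative only):

```python
import numpy as np

def init_inverse_depth(u_n, v_n, R_wc, r_wc, rho0=0.1):
    """Build (x, y, z, theta, phi, rho) from one observation of a new feature."""
    h_w = R_wc @ np.array([u_n, v_n, 1.0])                 # ray rotated into the world frame
    theta = np.arctan2(h_w[0], h_w[2])                     # azimuth
    phi = np.arctan2(-h_w[1], np.hypot(h_w[0], h_w[2]))    # elevation
    return np.concatenate([r_wc, [theta, phi, rho0]])      # anchor at the camera position
```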
Inverse Depth Parameterization
• Initialization of LM using IDP
• Updating the state covariance matrix: the new state covariance is built from the previous state covariance, the image measurement covariance, and the initial inverse depth variance, propagated through the Jacobian of the initialization function.
Inverse Depth Parameterization
• Switching from inverse depth to XYZ
L_IDP → L_XYZ:  L_XYZ = (X, Y, Z)^T = (x, y, z)^T + (1/ρ) m(θ, φ)
P_IDP → P_XYZ:  the covariance is propagated through the Jacobian of this conversion.
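A sketch of the covariance part of this switch, using a finite-difference Jacobian of idp_to_xyz from the earlier sketch for brevity (the paper derives the Jacobian analytically):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (for illustration only)."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def idp_to_xyz_covariance(L_idp, P_idp):
    """P_XYZ = J P_IDP J^T, with J the Jacobian of the inverse-depth-to-XYZ conversion."""
    J = numerical_jacobian(idp_to_xyz, np.asarray(L_idp, dtype=float))
    return J @ P_idp @ J.T
```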
Inverse Depth Parameterization
• Demo
• Monocular SLAM based on the EKF
Inverse Depth Parameterization
• Demo
• Monocular SLAM based on a PF with OIF
Conclusion
• Pros:
  IDP is robust for monocular SLAM.
  Undelayed LM initialization.
  Handles any point in the scene, close or distant, even at "infinity".
  Deals simultaneously with low- and high-parallax cases.
• Cons:
  IDP requires a 6-D state vector per landmark, which doubles the map state vector size compared with the 3-D XYZ coding.
Q & A