
Augmented Reality for Robot Development and Experimentation


Presentation Transcript


  1. Augmented Reality for Robot Development and Experimentation Authors: Mike Stilman, Philipp Michel, Joel Chestnutt, Koichi Nishiwaki, Satoshi Kagami, James J. Kuffner. Presented by Jorge Dávila Chacón

  2. Introduction & Related Work • Overview • Ground Truth Modelling • Evaluation of Sensing • Evaluation of Planning • Evaluation of Control • Discussion

  3. Introduction • Virtual simulation: used to find critical system flaws and software errors. • Testing the various interconnected components for perception, planning, and control becomes increasingly difficult. • Vision system: builds a model of the environment. • Navigation planner: may produce an erroneous path. • Controller: must properly follow the desired trajectory.

  4. Objective: to present a ground-truth model of the world and to introduce virtual objects into real-world experiments. • Establish a correspondence between virtual components (environment models, plans, intended robot actions) and the real world. • Visualize and identify system errors prior to their occurrence.

  5. Related Work • For humanoid robots: simulation engines model dynamics and test the controllers, kinematics, geometry, higher-level planning, and vision components. • Khatib: haptic interaction with the virtual environment. • Purely virtual simulations are limited to approximating the real world (rigid-body dynamics and perfect vision).

  6. Hardware-in-the-loop simulation (aeronautics and space robotics). • Virtual overlays for robot teleoperation: used to design and evaluate robot plans. • Speed, robustness, and accuracy are enhanced by binocular cameras. • Hybrid tracking through the use of markers (retroreflective markers, LEDs, and/or magnetic trackers).

  7. Overview • Lab space setting: “Eagle-4” motion analysis system, cameras, and furniture objects. • Experiments focus on high-level autonomous tasks for the humanoid robot “HRP-2”: choosing foot locations to avoid obstacles and manipulating obstacles to free its path.

  8. Technical details • “Eagle-4” system: • Eight cameras covering a 5 × 5 × 2 m workspace. • Distances calculated to 0.3% accuracy. • Dual Xeon 3.6 GHz processor. • “EVa Real-Time” (EVaRT) software: locates 3D markers at a maximum rate of 480 Hz with 1280 × 1024 resolution (min. 60 markers at 60 Hz).

  9. A virtual chair is overlaid in real time. Both the chair and the camera are in motion.

  10. Ground Truth Modeling • Reconstructing Position and Orientation • Individually identified markers attached to an object can be expressed as a set of points {a1, ..., an} in the object’s coordinate frame “F” (the object template). • Each displaced marker location “bi” is related to its template point by the rigid transform bi = R·ai + t; the rotation “R” and translation “t” are recovered by aligning the marker centroids and solving for the best-fit rotation.
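A minimal sketch of this least-squares fit, using the SVD (Kabsch) solution to the orthogonal Procrustes problem; the NumPy implementation and function name are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_rigid_transform(template, measured):
    """Best-fit rotation R and translation t such that measured ≈ R @ template + t.

    template, measured: (n, 3) arrays of corresponding marker positions.
    Uses the SVD (Kabsch) solution to the orthogonal Procrustes problem.
    """
    a_centroid = template.mean(axis=0)
    b_centroid = measured.mean(axis=0)
    A = template - a_centroid                  # centered template markers
    B = measured - b_centroid                  # centered measured markers
    H = A.T @ B                                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = b_centroid - R @ a_centroid
    return R, t
```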

  11. Markers occluded from motion capture: • The algorithm is performed only on the visible markers; the rows corresponding to occluded markers are removed from the matrices. • The new centroids are the centroids of the visible markers and of their associated template markers.
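Continuing the sketch above, occlusion handling amounts to masking out the missing rows before the fit (the visibility mask and data arrays below are placeholders):

```python
import numpy as np

# visible[i] is False when marker i was occluded from the motion capture system.
visible = np.array([True, True, False, True, True])
template = np.random.rand(5, 3)     # object template {a1, ..., an} (placeholder data)
measured = np.random.rand(5, 3)     # captured marker locations (placeholder data)

# Fit on the visible subset only; the centroids used are then implicitly those of
# the visible markers and of their associated template markers.
R, t = fit_rigid_transform(template[visible], measured[visible])
```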

  12. Reconstructing Geometry and Virtual Cameras • 3D triangular surface meshes represent the environment objects (manually edited for holes and automatically simplified to reduce the number of vertices). • The position of the robot camera is found from ground-truth positioning information, and computing its optical axis provides the “virtual view”.
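One common way to realize such a virtual view is to build a world-to-camera view matrix from the tracked camera pose; the look-at construction and frame conventions below are assumptions for illustration:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera view matrix for a virtual camera at `eye` looking toward `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Rows of R are the camera axes expressed in world coordinates.
    R = np.stack([right, true_up, -forward])
    view = np.eye(4)
    view[:3, :3] = R
    view[:3, 3] = -R @ eye
    return view

# Example: camera tracked at (2, 1, 1.5) m, looking at the origin of the lab frame.
V = look_at(np.array([2.0, 1.0, 1.5]), np.zeros(3))
```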

  13. Evaluation of Sensing • Ground-truth positioning information localizes the sensors: cameras and range finders. • Reliable global environment representations (occupancy grids or height maps) are built for robot navigation planning. • Overlaying them onto projections of the real world lets us evaluate the sensing algorithms used to construct world models.

  14. Reconstruction by Image Warping • Tracking the camera’s position with motion capture recovers its projection matrix, which yields a 2D homography between the floor and the image plane. • To build a 2D occupancy grid of the environment for biped navigation, all scene points of interest are assumed to lie in the z = 0 plane.
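Under the z = 0 assumption, the floor-to-image homography is simply the 3×4 projection matrix with its third column removed. A rough OpenCV sketch; the file names, grid size, and cell size are illustrative assumptions:

```python
import numpy as np
import cv2

# P: 3x4 camera projection matrix recovered from motion capture (hypothetical file).
P = np.loadtxt("projection_matrix.txt").reshape(3, 4)

# For floor points (z = 0) the third column of P drops out, leaving a
# 3x3 homography from floor coordinates (x, y, 1) to image pixels.
H_floor_to_image = P[:, [0, 1, 3]]

# Scale metric floor coordinates to occupancy-grid cells (2 cm cells, 500 x 500 grid).
cell = 0.02
S = np.array([[1.0 / cell, 0.0, 0.0],
              [0.0, 1.0 / cell, 0.0],
              [0.0, 0.0, 1.0]])

# warpPerspective expects the src -> dst mapping: image pixels -> grid cells.
image = cv2.imread("camera_frame.png")
H_image_to_grid = S @ np.linalg.inv(H_floor_to_image)
ground_view = cv2.warpPerspective(image, H_image_to_grid, (500, 500))
```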

  15. Reconstruction from Range Data • The time-of-flight (TOF) range sensor “CSEM Swiss Ranger SR-2” is used to build 2.5D height maps of the environment objects. • Motion-capture-based localization lets us convert range measurements into clouds of 3D points in world coordinates in real time. • Environment height maps can be constructed cumulatively.
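A minimal sketch of that accumulation step, assuming the sensor-to-world pose comes from motion capture; the function, cell size, and max-height fusion rule are illustrative choices:

```python
import numpy as np

def accumulate_height_map(points_sensor, R_ws, t_ws, height_map,
                          cell_size=0.02, origin=(0.0, 0.0)):
    """Fuse one range scan into a 2.5D height map.

    points_sensor: (n, 3) points in the sensor frame.
    R_ws, t_ws: sensor-to-world rotation and translation from motion capture.
    height_map: 2D array of per-cell maximum heights (updated in place).
    """
    points_world = points_sensor @ R_ws.T + t_ws
    cols = ((points_world[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points_world[:, 1] - origin[1]) / cell_size).astype(int)
    inside = (rows >= 0) & (rows < height_map.shape[0]) & \
             (cols >= 0) & (cols < height_map.shape[1])
    for r, c, z in zip(rows[inside], cols[inside], points_world[inside, 2]):
        height_map[r, c] = max(height_map[r, c], z)   # keep the tallest return per cell
    return height_map
```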

  16. Example “box” scene. Raw sensor measurement. Point cloud views of reconstructed box.

  17. Registration with Ground Truth • Reconstructing the environment by image warping or from range data allows us to visually evaluate the accuracy of the perception algorithms. • Parameters can be adjusted on the fly by overlaying the generated environment maps back onto a camera view of the scene.
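The overlay itself can be as simple as an alpha blend of the projected environment map over the live camera frame, so perception errors show up as misaligned regions (a sketch; the file names and blend weights are placeholders):

```python
import cv2

camera_frame = cv2.imread("camera_frame.png")
projected_map = cv2.imread("projected_environment_map.png")   # same size as the frame
overlay = cv2.addWeighted(camera_frame, 0.7, projected_map, 0.3, 0.0)
cv2.imshow("registration check", overlay)
cv2.waitKey(1)   # refresh as parameters are tuned on the fly
```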

  18. Evaluation of Planning • Video overlay displays diagnostic information about the planning and control process at physically relevant locations. • The robot plans a safe sequence of actions to convey itself from its current configuration to a goal location. • The goal location and obstacles were moved while the robot was walking, requiring constant updates to the plan.

  19. Example camera image. Synthesized ground plane view. Corresponding environment map.

  20. The planning algorithm evaluates candidate footstep locations through a cluttered environment, with perception provided by: • Motion-capture obstacle recognition. • Localized sensors. • Self-contained vision: motion capture data is removed completely and the robot uses its own odometry to build maps of the environment.

  21. Visual Projection: Footstep Plans • For each step, the planner computes the 3D position and orientation of the foot. • Through augmented reality, the planned footsteps are overlaid in real time onto the environment (continuously updated while walking). • This display exposes the planning process, making it possible to identify errors and gain insight into the performance of the algorithm.
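Such an overlay amounts to projecting each planned foot pose through the tracked camera's projection matrix and drawing it on the video frame; a hedged sketch (the pose format, marker style, and heading-line length are assumptions):

```python
import numpy as np
import cv2

def draw_footsteps(image, footsteps, P, color=(0, 255, 0)):
    """Overlay planned footstep positions on a camera image.

    footsteps: iterable of (x, y, z, yaw) foot poses in world coordinates.
    P: 3x4 camera projection matrix recovered from motion capture.
    """
    for x, y, z, yaw in footsteps:
        pixel = P @ np.array([x, y, z, 1.0])
        u, v = pixel[:2] / pixel[2]                    # perspective divide
        cv2.circle(image, (int(u), int(v)), 6, color, -1)
        # Short line indicating the foot's heading (yaw) on the ground plane.
        tip = P @ np.array([x + 0.1 * np.cos(yaw), y + 0.1 * np.sin(yaw), z, 1.0])
        cv2.line(image, (int(u), int(v)),
                 (int(tip[0] / tip[2]), int(tip[1] / tip[2])), color, 2)
    return image
```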

  22. Occupancy grid generated from the robot’s camera. Planar projection of an obstacle recovered from range data.

  23. Temporal Projection: Virtual Robot • The real world is preferred over a completely simulated environment for experimentation: a virtual robot “avatar” is proposed. • Instead of replacing all sensing with perfect ground-truth data, we can simulate varying degrees of sensor realism.

  24. Objects and the Robot’s Perception • Slowly increase the realism of the data which the system must handle. • By knowing the locations and orientations of all objects, as well as of the robot’s sensors, we can determine which objects are detectable by the robot at any given point in time.
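A simple detectability test under these assumptions is a view-cone check against the ground-truth poses (the field of view, range limit, and the decision to ignore occlusion are illustrative simplifications):

```python
import numpy as np

def is_detectable(obj_pos, sensor_pos, sensor_dir, fov_deg=60.0, max_range=4.0):
    """Rough visibility test: is an object inside the sensor's view cone and range?

    obj_pos, sensor_pos: 3D world positions from ground truth.
    sensor_dir: unit vector along the sensor's optical axis.
    Note: occlusion by other objects is not considered in this sketch.
    """
    offset = obj_pos - sensor_pos
    distance = np.linalg.norm(offset)
    if distance > max_range:
        return False
    cos_angle = np.clip(np.dot(offset / distance, sensor_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= fov_deg / 2.0
```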

  25. Footstep plan displayed onto the world. Augmented reality with a simulated robot amongst real obstacles.

  26. Evaluation of Control • Objective: to maximize the safety of the robot and the environment. • To accomplish this, we perform hardware-in-the-loop simulations while gradually introducing real components. • This gradually increases the “complexity of the plant” under test.

  27. Virtual Objects • Simulation: the interaction of the robot with a virtual object is analyzed using a geometric and dynamic model of the object. • In case of a failure, we observe and detect virtual collisions without affecting the robot hardware. • Similarly, these concepts can be applied to grasping and manipulation.
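Detecting such virtual collisions can be sketched as a containment test between tracked robot points and the virtual object's bounding volume (the axis-aligned box and point-based robot model are simplifying assumptions):

```python
import numpy as np

def check_virtual_collision(robot_points, box_min, box_max):
    """Detect whether any tracked robot point penetrates a virtual object's box.

    robot_points: (n, 3) world-frame points on the robot (from ground truth).
    box_min, box_max: opposite corners of the virtual object's axis-aligned box.
    """
    inside = np.all((robot_points >= box_min) & (robot_points <= box_max), axis=1)
    return bool(inside.any())
```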

  28. Precise Localization • Precise localization is needed to perform force control on an object during physical interaction. • Conventional alternatives: fixing the initial conditions of the robot and environment, or asking the robot to sense and acquire a world model prior to every experiment.

  29. The hybrid experimental model avoids the rigidity of the former approach and the time overhead required by the latter. • Virtual optical sensor: effort can be focused on algorithms for making contact with the object and on evaluating the higher-frequency feedback required for force control.

  30. Gantry Control • The physical presence of the gantry and its operator prevents testing fine manipulation and navigation in cluttered environments, which require close proximity to objects. • To bypass this problem, a ceiling-suspended gantry that can follow the robot throughout the experimental space was implemented.

  31. Discussion • The paradigm leverages advances in optical motion capture speed and accuracy to enable simultaneous online testing of complex robotic system components. • It promotes rapid development and validation testing of each of the perception, planning, and control components.

  32. Future Work • Automated methods for environment modeling (an object with markers could be inserted into the environment and immediately modeled for the application). • Automatic sensor calibration in the context of a ground-truth world model. • Enhanced visualizations by fusing local sensing (gyroscope and force sensors) into the virtual environment.

  33. ?
