Perception and Control in Humanoid Robotics using Vision

Geoffrey Taylor
Supervisors: A/Prof Lindsay Kleeman, A/Prof R Andrew Russell

Imagine you had a domestic humanoid robot servant, then consider what you would like it to do … It quickly becomes clear that a practical domestic robot must possess a basic ability to find and grasp objects in a dynamic, cluttered environment (i.e. your house!). To address this issue, we have developed a self-calibrating, position-based visual servoing framework. Metalman, the Monash upper-torso humanoid robot, provides a platform for this and other exciting humanoid robot experiments.

It's a visual thing …
Visual servoing is a feedback control technique that uses visual measurements to robustly regulate the motion of a robot. Metalman uses stereo cameras to estimate the 3D pose (position and orientation) of its hands by observing bright LEDs attached to the hand in a known pattern and feeding the data into a Kalman tracking filter; other objects are similarly localized via attached coloured markers. Depending on the desired action (e.g. grasping an object), Metalman uses this pose information to generate actuating signals that drive the hand in the desired direction, with the final hand pose depending on the desired action. Because Metalman continuously estimates the pose of its hands, the system is completely self-calibrating. A sketch of both steps follows below.

[Figures: 3D hand pose measurement gives the relative position and orientation between hand and head; the actual stereo view seen by Metalman while tracking its hand; the Biclops active head; LED markers on the hand facilitate pose tracking.]
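The pose tracker can be pictured as a standard Kalman filter over the hand pose. Below is a minimal sketch, assuming a constant-velocity motion model over a 6-DOF pose (position plus roll, pitch, yaw); the frame period and noise levels are illustrative placeholders rather than Metalman's actual tuning, and the measurement z is assumed to come from triangulating the LED pattern in the stereo pair.

import numpy as np

DT = 1.0 / 30.0   # assumed stereo frame period

class PoseTracker:
    """Tracks (x, y, z, roll, pitch, yaw) and their rates from stereo pose fixes."""

    def __init__(self):
        self.F = np.eye(12)
        self.F[:6, 6:] = DT * np.eye(6)                    # constant-velocity model
        self.H = np.hstack([np.eye(6), np.zeros((6, 6))])  # we measure pose only
        self.Q = 1e-4 * np.eye(12)                         # process noise (assumed)
        self.R = 1e-3 * np.eye(6)                          # measurement noise (assumed)
        self.x = np.zeros(12)                              # state estimate
        self.P = np.eye(12)                                # state covariance

    def step(self, z):
        """One predict/update cycle given a 6-vector pose measured from the LEDs."""
        # Predict the state forward one frame.
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # Fuse the stereo measurement.
        innovation = z - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = x_pred + K @ innovation
        self.P = (np.eye(12) - K @ self.H) @ P_pred
        return self.x[:6]                                  # filtered hand pose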
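Given the filtered hand pose and a goal pose, a proportional position-based servoing law drives the pose error exponentially to zero. The poster does not spell out Metalman's exact control law, so the sketch below is the textbook form, assuming both poses are 4x4 homogeneous transforms expressed in the same (head) frame and an illustrative gain value.

import numpy as np

GAIN = 0.5   # proportional gain (assumed)

def pbvs_velocity(T_hand, T_goal):
    """Return a 6-vector (linear, angular) velocity screw for the arm."""
    T_err = np.linalg.inv(T_hand) @ T_goal     # goal pose in the hand frame
    t_err = T_err[:3, 3]                       # translation error
    R_err = T_err[:3, :3]
    # Rotation error as an axis-angle vector (matrix log of R_err).
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-8:
        w_err = np.zeros(3)
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
        w_err = angle * axis
    # Driving the error down at rate GAIN gives exponential convergence.
    return GAIN * np.concatenate([t_err, w_err])

On each camera frame, the filtered pose from the tracker above would feed this law, and the resulting velocity screw becomes the actuating signal that drives the arm toward the required pose.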
Using position-based visual servoing, Metalman can perform simple manipulation tasks; the image sequence shows Metalman autonomously locating and stacking three randomly placed blocks. Future work will include servoing two arms cooperatively to perform even more complex tasks!

[Image sequence: frames at 0 s, 20 s, 35 s, 80 s, 100 s and 160 s, with progress time indicated at top-right of each frame.]

Even robots get lonely!
Metalman must interact with humans to be truly useful. The experiment demonstrates simple interaction using motion cues: the user taps on a random block, and Metalman places a finger above the selected object.

Where has all the data gone?
In a complex system such as Metalman, the interaction of various components can generate unwanted dynamics such as dead-time delays. For instance, the graph plots the position of the head during a sinusoidal motion, comparing joint encoder data against data from the cameras.

[Figure: head position during a sinusoidal motion; red line = joint encoder data, blue line = camera data.]

The apparent 30 ms delay between these devices can degrade Metalman's dynamic performance. In this work, we develop simple matching and prediction techniques that allow Metalman to autonomously estimate and reduce these effects; one such technique is sketched below.
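The poster does not detail the matching and prediction techniques, but a simple matching step consistent with the description is to cross-correlate the encoder and camera traces and take the lag at the correlation peak; prediction can then extrapolate the camera signal forward by that lag. A minimal sketch, assuming both traces have been resampled onto a common time grid (the sampling rate and names are illustrative):

import numpy as np

SAMPLE_HZ = 1000.0   # assumed common sampling rate

def estimate_delay(encoder, camera):
    """Return the camera lag in seconds at the cross-correlation peak."""
    enc = encoder - np.mean(encoder)
    cam = camera - np.mean(camera)
    corr = np.correlate(cam, enc, mode="full")
    lag_samples = np.argmax(corr) - (len(enc) - 1)   # positive = camera lags
    return lag_samples / SAMPLE_HZ

def compensate(camera, delay_s):
    """Crude prediction: shift the camera trace forward by the estimated lag."""
    shift = max(int(round(delay_s * SAMPLE_HZ)), 0)
    return np.concatenate([camera[shift:], np.repeat(camera[-1], shift)])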
For more information, check the IRRC web page at www.ecse.monash.edu.au/centres/IRRC

Electrical and Computer Systems Engineering Postgraduate Student Research Forum 2001