
Performance Assessment Methodology Workshop

Human Robotic Working Group Meeting, Performance Assessment Methodology Workshop. Embassy Suites Hotel, 211 East Huntington Drive, Arcadia, California 91006. June 21, 2001. Future Missions & Task Primitives / Baldwin Room - Mike Duke; Optimal Allocation of Human & Robot Roles / Cancun Room - George Bekey.


Presentation Transcript


  1. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop
Embassy Suites Hotel, 211 East Huntington Drive, Arcadia, California 91006
June 21, 2001
Future Missions & Task Primitives / Baldwin Room - Mike Duke
Optimal Allocation of Human & Robot Roles / Cancun Room - George Bekey
• Organizing Committee (NASA Human-Robot Joint Enterprise Working Group):
• D. Clancy, ARC; G. Rodriguez, JPL
• M. DiJoseph, HQ; B. Ward, JSC
• R. Easter, JPL; J. Watson, LaRC
• J. Kosmo, JSC; K. Watson, JSC
• R. Moe, GSFC; C. R. Weisbin, JPL

  2. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Objectives
• To provide a forum where scientists, technologists, engineers, mission architects and designers, and experts in other relevant disciplines can together identify the best roles that humans and robots can play in space over the next decades.
• To bring to bear the collective expertise of broad communities in assisting NASA in a thrust toward a better understanding of the complementary roles of humans and robots in space operations.

  3. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Attendees
Workshop Organizer - Charles R. Weisbin, JPL
Future Missions & Task Primitives / Chairman - Mike Duke, Colorado School of Mines
Optimal Allocation of Human & Robot Roles / Chairman - George Bekey, USC
Dave Akin, University of Maryland; Dave Kortenkamp, JSC; Jim Albus, NIST; Josip Loncaric, LaRC; Jacob Barhen, ORNL; Neville Marzwell, JPL; Arun Bhadoria, USC; Rudd Moe, Goddard; Maria Bualat, ARC; Bill Muehlberger, University of Texas; Robert Burridge, JSC; Lynne Parker, ORNL; Art Chmielewski, JPL; Lew Peach, USRA; Tim Collins, LaRC; Liam Pederson, ARC; Chris Culbert, JSC; Trish Pengra, HQ; Mary DiJoseph, HQ; Stephen Prusha, JPL; Bill Doggett, LaRC; Steve Rock, Stanford University; Steve Dubowsky, MIT; Guillermo Rodriguez, JPL; Robert Easter, JPL; Sudhakar Rajulu, JSC; Richard Fullerton, JSC; Paul Schenker, JPL; Mark Gittleman, Oceaneering, Inc.; Reid Simmons, CMU; Tony Griffith, JSC; Kevin Watson, JSC

  4. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Agenda
Thursday, June 21, 2001, 8:00 a.m. - 5:30 p.m.
| Time | Topic | Speaker |
| 8:00 - 8:30 a.m. | Welcome; Introduction | Robert Easter; Charles Weisbin |
| 8:30 - 11:30 a.m. | Break-Out Session I: Future Missions & Task Primitives; Optimal Allocation of Human & Robot Roles | Mike Duke; George Bekey |
| 11:30 a.m. - 12:30 p.m. | Lunch | |
| 12:30 - 3:30 p.m. | Break-Out Session II: Optimal Allocation of Human & Robot Roles; Future Missions & Task Primitives | George Bekey; Mike Duke |
| 3:30 - 4:00 p.m. | Break | |
| 4:00 - 5:30 p.m. | Concluding Plenary Session | Charles Weisbin |

  5. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Future Missions and Task Primitives
The workshop focus will be on producing a set of useful products that NASA needs in developing and quantitatively justifying its technology investment goals. To this end, the workshop will provide a framework by which a range of specific questions of critical importance to NASA's future missions can be addressed. These questions can be grouped as follows:
• What are the most promising human/robot future mission scenarios and desired functional capabilities?
• For a given set of mission scenarios, how can we identify appropriate roles for humans and robots?
• What set of relatively few and independent primitive tasks (e.g., traverse, self-locate, sample selection, sample acquisition) would constitute an appropriate set for benchmarking human/robot performance? Why?
• How is task complexity, the degree of difficulty, best characterized for humans and robots?

  6. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Optimal Allocation of Human and Robot Roles
• How can quantitative assessments be made of humans and robots working either together or separately in scenarios and tasks related to space exploration? Are additional measurements needed beyond those in the literature?
• What performance criteria should be used to evaluate what humans and robots do best in conducting operations? How should the results using different criteria be weighted?
• How are composite results from multiple primitives to be used to address overall questions of relative optimal roles?
• How should results of performance testing on today's technology be suitably extrapolated to the future, including possible variation in environmental conditions during the mission?

  7. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Optimal Allocation of Human and Robot Roles (Cont.)
• How should results of performance testing on today's technology be suitably extrapolated to the future, including possible variations in environmental mission dynamics?
• How are disciplinary topics (learning, dynamic re-planning) incorporated into the evaluation?

  8. Human Robotic Working Group Meeting
Performance Assessment Methodology Workshop: Concluding Remarks
The workshop will draw from expertise in such diverse fields as space science, robotics, human factors, aerospace engineering, and computer science, as well as the classical fields of mathematics and physics. The goal will be to invite a selected number of participants who can offer unique perspectives to the workshop.

  9. Mike Duke, Colorado School of Mines

  10. Performance Assessment Methodology Workshop Working Group I: Future Missions and Task Primitives

  11. Questions to be Addressed • What are promising future missions and desired functional capabilities? • What primitive tasks would constitute an appropriate set for benchmarking human/robot performance? • How is task complexity, the degree of difficulty, best characterized for humans and robots?

  12. Approach • Define a basic scenario for planetary surface exploration (many previous analyses of space construction tasks applicable to telescope construction) • Identify objectives • Characterize common capabilities for task performance • Determine complexities associated with implementing common capabilities • Define “task complexity” index • Provide experiment planning guidelines

  13. Task: Explore a Previously Unexplored Locale (~500 km²) on Mars
• Top-level objectives:
• Determine geological history (distribution of rocks)
• Search for evidence of past life
• Establish distribution of water in surface materials and subsurface (to >1 km)
• Determine whether humans can exist there
• This level of description is not sufficient to compare humans, robots, and humans + robots

  14. More Detailed Objectives • Reconnoiter surface • Identify interesting samples • Collect/analyze samples • Drill holes • Emplace instruments • These are probably at the right level to define primitives

  15. Task Characteristics • Each of the objectives can be implemented with capabilities along several dimensions • Manipulation • Cognition • Perception • Mobility • These capabilities occupy a range of complexity for given tasks • E.g. mobility systems may encounter terrains of different complexity

  16. Mobility - Characteristics
• Suitability to environment
• Suitability to task
• Reliability, maintainability
• Moving with minimum disturbance to environment
• Autonomy
• Mission duration
• Data analysis rate
• Capability for rescue
• Navigation, path planning
• Distance/range
• Speed
• Terrain (slopes, obstacles, texture, soil)
• Load carried
• Altitude
• Agility (turn radius)
• Stability
• Access (vertical, sub-surface, small spaces, etc.)
Complexity as related to terrain:
- Scale of features
- Distribution of targets, obstacles
- Slope variability
- Environmental constraints
- Soil, surface consistency
- Degree of uncertainty
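
One way to operationalize the terrain-related complexity factors listed above is a weighted index over normalized scores. The sketch below is a minimal illustration; the factor names, weights, and scoring scale are assumptions, not workshop outputs.

```python
# Illustrative sketch: a terrain-complexity index as a weighted sum of
# normalized factors (all names and weights are assumptions for illustration).

TERRAIN_FACTORS = {          # each factor scored on [0, 1] by an analyst
    "feature_scale": 0.2,
    "target_obstacle_density": 0.2,
    "slope_variability": 0.2,
    "environmental_constraints": 0.15,
    "soil_consistency": 0.15,
    "uncertainty": 0.1,
}

def terrain_complexity(scores: dict[str, float]) -> float:
    """Weighted sum of normalized terrain factors; returns a value in [0, 1]."""
    return sum(TERRAIN_FACTORS[name] * scores[name] for name in TERRAIN_FACTORS)

# Example: a moderately rough, well-characterized site.
site = {"feature_scale": 0.5, "target_obstacle_density": 0.4,
        "slope_variability": 0.6, "environmental_constraints": 0.3,
        "soil_consistency": 0.5, "uncertainty": 0.2}
print(f"terrain complexity index: {terrain_complexity(site):.2f}")
```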

  17. Perception/Cognition • Sensing • Analysis (e.g. chemical analysis) • Training • Data processing • Context • Learning • Knowledge • Experience • Problem solving • Reasoning • Planning/replanning • Adaptability • Communication • Navigation

  18. Manipulation • Mass, volume, gravity • Unique vs. standard shapes • Fragility, contamination, reactivity • Temperature • Specific technique • Torque • Precision • Complexity of motion • Repetitive vs. unique • Time • Consequence of failure • Need for multiple operators • Moving with minimal disturbance

  19. Task Difficulty
• Function of task as well as system performing task
• Includes multidimensional aspects:
• Perception (sensing, recognition/discrimination/classification)
• Cognition (modelability (environmental complexity), error detection, task planning)
• Actuation (mechanical manipulations, control)
• General discriminators:
• Length of task sequence and variety of subtasks
• Computational complexity
• Number of constraints placed by system
• Number of constraints placed by environment
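
The general discriminators suggest a simple composite difficulty score. The sketch below is one hypothetical way to combine them; the equal weighting and the normalization cap are assumptions, not a workshop-defined index.

```python
# Illustrative sketch: scoring task difficulty from the slide's four general
# discriminators. The normalization cap and equal weights are assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    sequence_length: int         # number of primitive steps
    subtask_variety: int         # number of distinct subtask types
    system_constraints: int      # constraints imposed by the performing system
    environment_constraints: int # constraints imposed by the environment

def difficulty(task: Task, cap: int = 20) -> float:
    """Equal-weight average of the discriminators, each normalized to [0, 1]."""
    parts = [task.sequence_length, task.subtask_variety,
             task.system_constraints, task.environment_constraints]
    return sum(min(p, cap) / cap for p in parts) / len(parts)

print(difficulty(Task(sequence_length=12, subtask_variety=4,
                      system_constraints=3, environment_constraints=6)))
```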

  20. Experiment Planning
• Thought experiments: conceptual feasibility; eliminate portions of trade space
• Constructed experiments: natural analogs (need to document parameters); controlled experiments (isolate parameters)
• Experiments must be chosen to reflect actual exploration tasks in relevant environments
• Questions are exceedingly complex due to their multidimensionality
• Determining the optimum division of roles, which may change with task, environment, time frame, etc., is a difficult problem

  21. Exploration Implementation Options
| Robot Method | Human Role | Site Access | Data Scope | Rel Cost | Hdw Repair | Safety Risk |
| Remote teleoperation | Earth-based control | Lowest | Lowest | Low | None | None |
| Fully automated | Earth-based monitoring | Low | Low | Low-Med | None | None |
| Local teleoperation | Orbital habitat | Low | Low-Med | Med | None | Low |
| Local teleoperation | Lander habitat - no EVA | Low | Low-Med | Med-Hi | None | High |
| Variable autonomy | Lander habitat - no EVA | Low | Med | Med-Hi | None | High |
| Variable autonomy (pressurized garage) | Lander habitat - no EVA | Low | Med | Med-Hi | Partial | High |
| Variable autonomy (dockable to habitat) | Canned mobility (no EVA capability) | Low-Med | Med | High | Partial | Highest |
| Precursors only | Suited humans on foot | Med-Hi | High | Med-Hi | Full | Med |
| Variable autonomy (total crew access) | Suited transportable humans (w/ rovers) | Highest | Highest | Highest | Full | Med-Hi |
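
For trade studies, the options above could be encoded as data and queried against mission constraints. A minimal sketch, assuming an invented ordinal scale for the qualitative ratings; the scale values and the example query are illustrative, not workshop products.

```python
# Illustrative sketch: encoding the implementation-options table so options
# can be filtered against mission constraints (scale values are assumptions).

SCALE = {"None": 0, "Lowest": 0, "Low": 1, "Low-Med": 2, "Med": 3,
         "Med-Hi": 4, "High": 5, "Highest": 6, "Partial": 3, "Full": 6}

OPTIONS = [
    # (robot method, human role, site access, data scope, rel cost, hdw repair, safety risk)
    ("Remote teleoperation", "Earth-based control", "Lowest", "Lowest", "Low", "None", "None"),
    ("Fully automated", "Earth-based monitoring", "Low", "Low", "Low-Med", "None", "None"),
    ("Variable autonomy", "Lander habitat - no EVA", "Low", "Med", "Med-Hi", "None", "High"),
    ("Variable autonomy (total crew access)", "Suited transportable humans (w/ rovers)",
     "Highest", "Highest", "Highest", "Full", "Med-Hi"),
    # ... remaining rows elided
]

# Example query: at least medium data scope at below-high relative cost.
for method, role, access, scope, cost, repair, risk in OPTIONS:
    if SCALE[scope] >= SCALE["Med"] and SCALE[cost] < SCALE["High"]:
        print(method, "/", role)
```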

  22. George Bekey, University of Southern California (USC)

  23. Performance Assessment Methodology Workshop Working Group II: Optimal Allocation of Human & Robot Roles

  24. Quantitative Assessment • Total Cost ($) • Time to complete task • Risk to mission • Degree of uncertainty in environment • Detection of unexpected events • Task complexity (branching)

  25. Quantitative Assessment - Standard Measures
• Robots (R): manipulation, "low-level" tasks; Humans (H): planning/perception
• $PF = f(\text{performance}, \text{success}, \text{cost}) = f(p, s, c)$
• $\Delta PF = \frac{\partial f}{\partial p}\,\Delta p + \frac{\partial f}{\partial s}\,\Delta s + \frac{\partial f}{\partial c}\,\Delta c$
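
The differential $\Delta PF$ can be evaluated numerically once a concrete figure of merit is chosen. A minimal sketch using central finite differences, assuming a placeholder form $f(p, s, c) = ps/c$ that the workshop does not itself specify:

```python
# Illustrative sketch: approximating dPF = (df/dp)dp + (df/ds)ds + (df/dc)dc
# with central finite differences. The form of f is a placeholder assumption.

def f(p: float, s: float, c: float) -> float:
    """Hypothetical figure of merit: performance * success probability per unit cost."""
    return p * s / c

def delta_pf(p, s, c, dp, ds, dc, h=1e-6):
    df_dp = (f(p + h, s, c) - f(p - h, s, c)) / (2 * h)
    df_ds = (f(p, s + h, c) - f(p, s - h, c)) / (2 * h)
    df_dc = (f(p, s, c + h) - f(p, s, c - h)) / (2 * h)
    return df_dp * dp + df_ds * ds + df_dc * dc

# Example: effect of a small gain in success probability at fixed cost.
print(delta_pf(p=0.8, s=0.9, c=100.0, dp=0.0, ds=0.05, dc=0.0))
```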

  26. Quantitative Assessment - Limitations of Humans • Health and Safety • Dexterity of Suited Human • Strength

  27. Quantitative Assessment - Limitations of Robots • Task Specific • Adaptability • Situation Assessment/Interpretation

  28. Quantitative Assessment - Performance Measurement
• Expected information gained / $ (utility theory)
[Figure: notional plots of probability of success vs. expected cost E(Cost), and of science return (performance) vs. probability of success, with points marked *H (humans) and *R (robots)]
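
The "expected information gained per dollar" measure can be made concrete by assigning each option a success probability, an information yield, and a cost. All numbers below are hypothetical placeholders for illustration only:

```python
# Illustrative sketch: ranking human vs. robot options by expected information
# gained per dollar, in the spirit of the utility-theoretic measure above.

options = {
    # name: (probability of success, information gained on success [bits], cost [$M])
    "robot_rover": (0.7, 40.0, 300.0),
    "human_crew":  (0.9, 90.0, 2000.0),
    "human+robot": (0.92, 110.0, 2200.0),
}

def info_per_dollar(p_success, info, cost):
    return p_success * info / cost   # expected information gained per $M

for name, (p, i, c) in sorted(options.items(),
                              key=lambda kv: -info_per_dollar(*kv[1])):
    print(f"{name}: {info_per_dollar(p, i, c):.4f} bits/$M")
```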

  29. $  Mass D Mass  $  Tech Change D TC Quantitative AssessmentScenario I - Space TelescopeCost • Mass In Situ • Time to recover • Frequency of Occurrence • Reliability of Tasks • Launch Mass/Volume

  30. Quantitative Assessment - Performance • Based on such quantities as: • Time to completion of task • Mass • Energy required • Information requirements

  31. Quantitative Assessment - Issues
• Quantification is difficult
• Planetary geology requires humans
• Define realistic mission/benefits
• How to improve predictability
• Select a performance level and then study cost/risk trade-offs
• Trade studies are important for research also
• Consider E(Cost), E(Performance)
• Eventually, humans will be on Mars
• Difficult to use probabilities, but important
• A blend of humans and robots is bound to be better
• Assumptions need to be clear regarding the capabilities of humans and robots

  32. Quantitative Assessment - Can We Get the Data to Evaluate Performance?
• Task: travel 10 km and return
• Assume: terrain complexity is bounded
• How to quantify: time, energy, cost, probability of success? (A sketch follows below.)
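
A minimal Monte Carlo sketch of how such data might be generated for the 10 km traverse, assuming bounded terrain complexity translates into bounded speed and hazard-rate distributions. All parameter values are invented for illustration.

```python
# Illustrative sketch: Monte Carlo estimate of time, energy, and probability
# of success for the "travel 10 km and return" task (20 km round trip).

import random

def simulate_traverse(distance_km=20.0, n_trials=10_000, seed=0):
    rng = random.Random(seed)
    times, energies, successes = [], [], 0
    for _ in range(n_trials):
        # Bounded terrain complexity -> speed drawn from a bounded range.
        speed_kmh = rng.uniform(0.2, 1.0)          # rover-like speeds (assumed)
        hazard_per_km = rng.uniform(0.001, 0.005)  # failure rate per km (assumed)
        p_complete = (1 - hazard_per_km) ** distance_km
        if rng.random() < p_complete:
            successes += 1
            t = distance_km / speed_kmh            # hours
            times.append(t)
            energies.append(t * 0.15)              # 150 W drive power (assumed), kWh
    return (sum(times) / len(times), sum(energies) / len(energies),
            successes / n_trials)

t, e, p = simulate_traverse()
print(f"mean time {t:.1f} h, mean energy {e:.1f} kWh, P(success) {p:.3f}")
```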

  33. Quantitative Assessment Standard Measures • Mass • Failure Probability • Dexterity • Robustness • Cost

  34. Quantitative Assessment Non-Classical • Detection of surprising events • Branching of decision spaces • Degree of uncertainty in environment

  35. Quantitative Assessment - Summary • Probability of success (inverse of risk) • Performance (science success) • Cost

  36. Quantitative Assessment
[Figure: notional plot of cost vs. probability of mission success (PS) and achievement (A) of objectives, with points marked H (humans) and R (robots)]

  37. Quantitative Assessment - Overall Performance Criteria
• Robots (R): manipulation, "low-level" tasks; Humans (H): planning/perception
• Overall performance criterion: $PC = f(A, PS, C)$, where $A$ is achievement of objectives, $PS$ is probability of mission success, and $C$ is cost
• Sensitivity to a parameter $p_1$ that changes by $\Delta p_1$:
$\Delta PC = \frac{\partial f}{\partial A}\frac{\partial A}{\partial p_1}\,\Delta p_1 + \frac{\partial f}{\partial PS}\frac{\partial PS}{\partial p_1}\,\Delta p_1 + \frac{\partial f}{\partial C}\frac{\partial C}{\partial p_1}\,\Delta p_1$
• Generalizing over parameters $p_i$:
$\Delta PC = \sum_i \left( \frac{\partial f}{\partial A}\frac{\partial A}{\partial p_i} + \frac{\partial f}{\partial PS}\frac{\partial PS}{\partial p_i} + \frac{\partial f}{\partial C}\frac{\partial C}{\partial p_i} \right)\Delta p_i$
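
The chain-rule expansion above can be checked numerically once functional forms are assumed. The sketch below propagates a change in a single parameter through hypothetical models of A, PS, and C; every functional form is a placeholder assumption chosen only to make the arithmetic concrete.

```python
# Illustrative sketch: propagating a parameter change through PC = f(A, PS, C)
# via the chain rule above. All functional forms are placeholder assumptions.

def finite_diff(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2 * h)

# Hypothetical models: everything depends on one parameter p (e.g., autonomy level).
A  = lambda p: 0.5 + 0.4 * p          # achievement of objectives
PS = lambda p: 0.95 - 0.05 * p        # probability of mission success
C  = lambda p: 100.0 + 50.0 * p       # cost, $M
f  = lambda a, ps, c: a * ps / c      # overall performance criterion (assumed)

def delta_pc(p, dp, h=1e-6):
    a, ps, c = A(p), PS(p), C(p)
    df_dA  = (f(a + h, ps, c) - f(a - h, ps, c)) / (2 * h)
    df_dPS = (f(a, ps + h, c) - f(a, ps - h, c)) / (2 * h)
    df_dC  = (f(a, ps, c + h) - f(a, ps, c - h)) / (2 * h)
    return (df_dA * finite_diff(A, p) +
            df_dPS * finite_diff(PS, p) +
            df_dC * finite_diff(C, p)) * dp

print(delta_pc(p=0.5, dp=0.1))  # effect of a small increase in autonomy
```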

  38. Quantitative Assessment - Critique from Afternoon Discussion Group
• Measures like cost, performance, and probability of success are too simple
• These measures are not orthogonal/not independent
• There are intangible factors that need to be considered
• This cannot be done without specifying the mission carefully and completely, but not narrowly
• Top-down analysis is desirable if possible, but dealing with primitives and trying to combine them is feasible
