This research paper explores the concept of safe robot planning, including the importance of explicitly given goals, permitted changes in the robot's environment, and the avoidance of unsafe actions. The proposed solution involves enumerating goals and permissions, as well as distinguishing between irreversible and reversible actions. The implementation language is Prolog.
A framework of safe robot planning
Roland Pihlakas
Institute of Technology, University of Tartu
August 2008
What is safe
• A safe action or state is:
  • a goal which is explicitly given to the robot;
  • an explicitly permitted change in the robot’s environment.
• Everything else is unknown and possibly unsafe, and therefore should not be caused (see the sketch below).
• Some automatically calculated sub-goals can be unsafe too, unless permitted.
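A minimal Prolog sketch of this definition, assuming hypothetical predicates goal/1 and permitted/1 that hold the enumerated goals and permissions (the example facts are illustrative only):

    % Example enumerated goals and permissions (illustrative names).
    goal(at(robot, kitchen)).
    permitted(changed(floor_dust_level)).

    % A change is safe only if it is an explicitly given goal or an
    % explicitly permitted change; anything else is unknown and thus
    % possibly unsafe, so safe/1 simply fails for it.
    safe(Change) :- goal(Change).
    safe(Change) :- permitted(Change).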
The problem
• Why not enumerate bad states:
  • too much work;
  • humans are poor at systematically predicting things, especially when they are not directly interested in doing so.
• Why not tell the robot all the sub-steps towards the goal:
  • too much work;
  • again, unexpected consequences of the given steps.
The problem
• Opposing interests:
  • letting the human get the job of robot configuration / instruction done quickly;
  • still having control and no surprises.
Proposed solution
• “Bad” is implicit.
• Usually enumerate only:
  • goals;
  • “okay” changes => permissions.
• These are perhaps simpler to enumerate.
• Analogy: public vs. private law.
An analogy
• Three laws, in order of priority (sketched below):
  1. Do not do anything that is not explicitly permitted.
  2. Fulfill the goals that are given to you.
  3. Optionally fulfill the optional goals.
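These three laws can be sketched as a prioritised check in Prolog, assuming hypothetical predicates effect/2 (the predicted changes of an action), goal/1 and optional_goal/1, and reusing safe/1 from the earlier sketch:

    % Law 1 acts as a veto: an action is admissible only if every
    % predicted effect of it is safe (a goal or explicitly permitted).
    admissible(Action) :-
        forall(effect(Action, Change), safe(Change)).

    % Among admissible actions, given goals take priority over optional
    % goals; the clause order realises that priority.
    choose(Action) :- admissible(Action), effect(Action, C), goal(C).
    choose(Action) :- admissible(Action), effect(Action, C), optional_goal(C).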
A useful concept
• If you can undo something, it is usually safe, assuming that the current state is safe.
• From that follows:
  • the principle of avoiding irreversible actions;
  • two special classes for actions and their corresponding results, called “irreversible actions” and “reversible actions” (sketched below).
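The two classes might be expressed in Prolog roughly as follows, assuming a hypothetical inverse/2 table recording which action undoes which:

    % An action is reversible when an inverse action is known to undo it.
    inverse(pick_up(Obj), put_down(Obj)).
    inverse(move(From, To), move(To, From)).

    reversible(Action)   :- inverse(Action, _).
    irreversible(Action) :- \+ inverse(Action, _).

    % The principle of avoiding irreversible actions: a reversible action
    % is usually safe, assuming the current state itself is safe.
    usually_safe(Action) :- reversible(Action).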
A few motivating examples
• Street cleaning
• Room cleaning
• Making room on a hard disk
• Littering
About permissions
• Permissions are usually for:
  • changes in given dimensions;
  • not for specific actions (see the sketch below).
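For illustration, a permission over a dimension (rather than over an action) could be stated as a fact for the safe/1 sketch above; the term shapes are hypothetical:

    % The whole y dimension is freed: any new value of y may be caused ...
    permitted(changed(y, _AnyValue)).
    % ... rather than permitting one specific action such as
    % permitted(action(vacuum)).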
A simple language example
• Goal x = 2;
• Allow y = any;
• Reverse z;
• Dontdisturb w;
• Guarantee for q1 = 44 is q2 = 37;
• Context a = 177;
• Askauth allow b = any;
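A small DCG sketch of how two of these statements might be parsed into Prolog terms, assuming a tokenizer that has already split the input into atoms and stripped the "=" and ";" punctuation (the grammar and term names are illustrative, not the framework's actual parser):

    statement(goal(Var, Val))  --> [goal, Var],  value(Val).
    statement(allow(Var, Val)) --> [allow, Var], value(Val).

    value(any) --> [any].                  % the wildcard value
    value(N)   --> [N], { number(N) }.     % a concrete numeric value

    % Usage: phrase(statement(S), [goal, x, 2]) binds S = goal(x, 2).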
Data flow
• Sensors, certain functions calculating some value, etc.
• The configuration (three data structures):
  • preconditions / context;
  • keep-always conditions;
  • goal conditions.
• Causal relations / prediction module
• Automatic plan generation
• Precondition checking
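The three data structures could be represented as Prolog facts like the ones below (illustrative names); holds/2 stands in for the prediction module's check that a condition is true in a given state:

    precondition(battery_level > 20).
    keep_always(floor_state = clean).
    goal_condition(robot_location = kitchen).

    % A plan, given as the list of its predicted states, is acceptable
    % when the context holds initially, every state keeps the keep-always
    % conditions, and the final state satisfies the goal conditions.
    acceptable(States) :-
        States = [Initial|_],
        forall(precondition(P), holds(P, Initial)),
        forall(member(S, States), forall(keep_always(K), holds(K, S))),
        last(States, Final),
        forall(goal_condition(G), holds(G, Final)).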
Adding optional goals to the language
• An example:

  Obligatory { Goal robot.location = "in front of TV"; }
  Optional { Goal floor.still_clean = yes; }
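One possible Prolog-side representation of the same example (the term names are hypothetical):

    obligatory(goal(robot_location, 'in front of TV')).
    optional(goal(floor_still_clean, yes)).

    % Every obligatory goal must be achieved; an optional goal is
    % pursued only when that requires no additional unsafe action.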
The protocol
• When giving permissions, make sure the context is correctly specified!
• Opposing interests of the human user:
  • giving many permissions, to get rid of the job quickly;
  • giving only the necessary permissions and specifying their proper context.
• SELinux analogy.
Robot learning
• The sandbox
• Levels of sandboxes
“Passive” safety
• The robot distinguishes the user’s commands from auto-generated ones and does not override the user:
  • it distinguishes clearly between the orders that were given to it and the sub-goals it has set for itself;
  • by default it avoids only its own mistakes (sketched below).
• Even more: the robot may refuse to act.
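A sketch of this distinction: each goal carries its origin, and only the self-generated sub-goals are subjected to the robot's own safety veto (predicate names are illustrative, reusing effect/2 and safe/1 from the earlier sketches):

    goal_source(clean(room),   user).   % an order given by the user
    goal_source(fetch(bucket), auto).   % a sub-goal derived by the planner

    % User orders are not overridden; auto-generated sub-goals are
    % pursued only if all their predicted effects are safe.
    may_pursue(Goal) :- goal_source(Goal, user).
    may_pursue(Goal) :-
        goal_source(Goal, auto),
        forall(effect(Goal, Change), safe(Change)).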
Errata
• The robot may stop when encountering unexpected / unknown situations, unless instructed otherwise using context-specific goals.
Implementation language
• Prolog:
  • has a built-in parser (useful for configuration processing);
  • has a variable data type;
  • has automatic memory management;
  • has useful data types for expressing constraints or uncertainty;
  • has a conveniently short syntax for failing calls and resuming alternatives at upper levels (sketched below);
  • is “scriptable”.
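The "failing calls" point refers to Prolog's backtracking; a small sketch with illustrative candidate plans, assuming some acceptability check acceptable/1 over a candidate plan:

    candidate([enter(door),   open(door)]).
    candidate([enter(window), open(window)]).

    % If acceptable/1 fails for one candidate, Prolog automatically
    % resumes with the next alternative at the level of plan/1;
    % no explicit error-handling code is needed.
    plan(Plan) :-
        candidate(Plan),
        acceptable(Plan).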
Future
• Multiple contexts / goal sets
• Online planning, partial plans
• Online diagnostics and taking remedial action in case of danger, faults, etc.
• Automatically finding unnecessary rights
• A more powerful prediction module
• Time constraints
Future
• Asking for authorisation during planning (sketched below):
  • Askauth allow x = any;
• Asking the user to choose and authorise one plan from a set of automatically proposed alternatives
• Different user levels
• Understanding changes caused by external agents or natural forces
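A sketch of how Askauth could work during planning: when a needed permission is only marked askauth, the planner queries the user instead of failing outright (read/1 stands in for a real user interface; the facts and names are illustrative):

    askauth(changed(x, _AnyValue)).

    % Use an existing permission if there is one; otherwise, for an
    % askauth-marked permission, ask the user for authorisation.
    obtain_permission(P) :- permitted(P).
    obtain_permission(P) :-
        askauth(P),
        format("Authorise ~w? Reply yes. or no. ", [P]),
        read(yes).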