Introduction Agent Programming

  1. Introduction Agent Programming. Birna van Riemsdijk and Koen Hindriks, Delft University of Technology, The Netherlands

  2. Outline • Previous lecture (the last lecture on Prolog): • “Input & Output” • Negation as failure • Search • Coming lectures: • Agents that use Prolog for knowledge representation • This lecture: • Agent introduction • “Hello World” example in GOAL

  3. Agents: Act in environments (Diagram: the agent receives percepts from the environment, chooses an action, and acts back on the environment.)

  4. Agent Capabilities • Reactive – responds in a timely manner to change • Proactive – (persistently) pursues multiple, explicit goals over time • Social – agents need to interact and perform, e.g., as a team => last lecture

  5. Agents: Act to achieve goals (Diagram: percepts from the environment become events; events and goals together drive the choice of action; reactivity and proactivity.)

  6. Agents: Represent environment (Diagram: the agent maintains beliefs about the environment; events update beliefs; beliefs, goals, and plans drive the choice of action.)

  7. Agent Oriented Programming • Develop programming languages where events, beliefs, goals, actions, plans, ... are first-class citizens in the language

  8. Language Elements Key language elements of APLs: • beliefs and goals to represent environment • events received from environment (& internal) • actions to update beliefs, adopt goals, send messages, act in environment • plans, capabilities & modules to structure action • rules to select actions/plans/modules/capabilities • support for multi-agent systems Inspired by Belief-Desire-Intention agent metaphor

  9. A Brief History of AOP • 1990: AGENT-0 (Shoham) • 1993: PLACA (Thomas; AGENT-0 extension with plans) • 1996: AgentSpeak(L) (Rao; inspired by PRS) • 1996: Golog (Reiter, Levesque, Lesperance) • 1997: 3APL (Hindriks et al.) • 1998: ConGolog (De Giacomo, Levesque, Lesperance) • 2000: JACK (Busetta, Howden, Ronnquist, Hodgson) • 2000: GOAL (Hindriks et al.) • 2000: CLAIM (Amal El Fallah Seghrouchni) • 2002: Jason (Bordini, Hubner; implementation of AgentSpeak) • 2003: Jadex (Braubach, Pokahr, Lamersdorf) • 2008: 2APL (successor of 3APL) This overview is far from complete!

  10. References Websites • 2APL: http://www.cs.uu.nl/2apl/ • Agent Factory: http://www.agentfactory.com • Goal: http://mmi.tudelft.nl/trac/goal • JACK: http://www.agent-software.com.au/products/jack/ • Jadex: http://jadex.informatik.uni-hamburg.de/ • Jason: http://jason.sourceforge.net/ • JIAC: http://www.jiac.de/ Books • Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005, Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason. • Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009, Multi-Agent Programming: Languages, Tools and Applications. Presents, among others: Brahms, CArtAgO, Goal, JIAC Agent Platform.

  11. The Goal Agent Programming Language

  12. The “Hello World” example of Agent Programming: The Blocks World

  13. The Blocks World A classic AI planning problem. Objective: Move the blocks from the initial state so that the result is the goal state. • The positioning of blocks on the table is not relevant. • A block can be moved only if there is no other block on top of it.

  14. Representing the Blocks World Prolog is the knowledge representation language used in Goal. Basic predicate: • on(X,Y). Defined predicates: • block(X) :- on(X, _). • clear(X) :- block(X), not(on(Y,X)). • clear(table). • tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). EXERCISE:
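To experiment with these definitions outside Goal, they can be loaded into a plain Prolog system such as SWI-Prolog. A minimal sketch, assuming the hypothetical sample state below:

    % Hypothetical sample state (for experimenting; not from the lecture).
    on(a,b). on(b,c). on(c,table).

    block(X) :- on(X, _).
    clear(X) :- block(X), not(on(_,X)).
    clear(table).
    tower([X]) :- on(X,table).
    tower([X,Y|T]) :- on(X,Y), tower([Y|T]).

    % Example queries:
    % ?- clear(a).        % true: no block is on a
    % ?- clear(b).        % false: a is on b
    % ?- tower([a,b,c]).  % true: a on b on c, with c on the table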

  15. Representing the Initial State Using the on(X,Y) predicate we can represent the initial state. beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). } Initial belief base of agent

  16. Representing the Blocks World • What about the rules we defined before? • Clauses that do not change are put into the knowledge base. • tower([X]) :- on(X,table). • tower([X,Y|T]) :- on(X,Y), tower([Y|T]). • clear(X) :- block(X), not(on(Y,X)). clear(table). • block(X) :- on(X, _). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } Static knowledge base of agent

  17. Representing the Goal State Using the on(X,Y) predicate we can represent the goal state. goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). } Initial goal base of agent

  18. One or Many Goals In the goal base, using the comma or the period as separator makes a difference! goals{ on(a,table). on(b,a). on(c,b). } goals{ on(a,table), on(b,a), on(c,b). } • The left goal base has three goals; the right goal base has a single goal. • Moving c on top of b (3rd goal), then c to the table, then a to the table (1st goal), and finally b on top of a (2nd goal) achieves each of the three goals at some point, but never the single goal of the right goal base. • The reason is that the goal base on the left does not require block c to be on b, b to be on a, and a to be on the table at the same time.

  19. Mental State of Goal Agent The knowledge, belief, and goal sections together constitute the specification of the Mental State of a Goal Agent. knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). } Initial mental state of agent

  20. Why a Separate Knowledge Base? • Concepts defined in the knowledge base can be used in combination with both the belief and the goal base. • Example • Since the agent believes on(e,table), on(d,e), infer: the agent believes tower([d,e]). • If the agent wants on(a,table), on(b,a), infer: the agent wants tower([b,a]). • The knowledge base is introduced to avoid duplicating clauses in the belief and goal base.

  21. Using the Belief & Goal base • Selecting actions using beliefs and goals • Basic idea: • If I believe B then do action A (reactivity) • If I believe B and have goal G, then do action A (proactivity)

  22. Inspecting the Belief & Goal base • Operator bel(φ) to inspect the belief base. • Operator goal(φ) to inspect the goal base. • Where φ is a Prolog conjunction of literals. • Examples: • bel(clear(a), not(on(a,c))). • goal(tower([a,b])).

  23. Inspecting the Belief Base • bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base. • Condition φ is evaluated as a Prolog query. • Example: • bel(clear(a), not(on(a,c))) succeeds knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). }
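Put differently, bel(φ) can be approximated (a sketch only, not Goal's actual implementation) by posing φ as a query against the union of the knowledge and belief sections:

    % With the knowledge and belief clauses above loaded as one Prolog
    % program, bel(clear(a), not(on(a,c))) corresponds to the query:
    % ?- clear(a), not(on(a,c)).   % succeeds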

  24. Inspecting the Belief Base EXERCISE: Which of the following succeed? • bel(on(b,c), not(on(a,c))). • bel(on(X,table), on(Y,X), not(clear(Y))). • bel(tower([X,b,d])). [X=c; Y=b] knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). }

  25. Inspecting the Goal Base Use the goal(…) operator to inspect the goal base. • goal(φ) succeeds if φ follows from one of the goals in the goal base in combination with the knowledge base. • Example: • goal(clear(a)) succeeds. • but not goal(clear(a), clear(c)). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). on(c,table). }

  26. Inspecting the Goal Base EXERCISE: Which of the following succeed? • goal(on(b,table), not(on(d,c))). • goal(on(X,table), on(Y,X), clear(Y)). • goal(tower([d,X])). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). }

  27. Negation and Beliefs not(bel(on(a,c))) = bel(not(on(a,c)))? • Answer: Yes. • Because Prolog implements negation as failure: • If φ cannot be derived, then not(φ) can be derived. • We always have: not(bel(φ)) = bel(not(φ)) knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). }

  28. Negation and Goals EXERCISE: not(goal(φ)) = goal(not(φ))? • Answer: No. • We have, for example: goal(on(a,b)) and goal(not(on(a,b))). knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } goals{ on(a,b), on(b,table). on(a,c), on(c,table). }

  29. Combining Beliefs and Goals • Consider the following beliefs and goals • We have both bel(on(a,b)) as well as goal(on(a,b)). • Why have something as a goal that has already been achieved? Useful to combine the bel(…) and goal(…) operators. beliefs{ on(a,b). on(b,c). on(c,table). } goals{ on(a,b), on(b,table). }

  30. Combining Beliefs and Goals • Achievement goals: • a-goal(φ) = goal(φ), not(bel(φ)) • An agent only has an achievement goal if it does not believe the goal has been reached already. • Goal achieved: • goal-a(φ) = goal(φ), bel(φ) • A (sub)goal φ has been achieved if the agent believes φ. Useful to combine the bel(…) and goal(…) operators.
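As an illustration, these operators can be mimicked in a toy Prolog sketch over explicit belief and goal lists (all predicate names here are hypothetical, and this is not how Goal implements them; the sketch ignores the knowledge base and treats the goal base as a single goal):

    % Belief and goal bases taken from the previous slide.
    bel_base([on(a,b), on(b,c), on(c,table)]).
    goal_base([on(a,b), on(b,table)]).

    bel(Phi)  :- bel_base(B), member(Phi, B).
    goal(Phi) :- goal_base(G), member(Phi, G).

    a_goal(Phi) :- goal(Phi), not(bel(Phi)).  % achievement goal
    goal_a(Phi) :- goal(Phi), bel(Phi).       % (sub)goal already achieved

    % ?- a_goal(on(b,table)).  % true: wanted but not yet believed
    % ?- goal_a(on(a,b)).      % true: wanted and already believed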

  31. Expressing BW Concepts in Mental States It is possible to express key Blocks World concepts by means of the basic operators. EXERCISE: • Define: block X is misplaced. • Solution: goal(tower([X|T])), not(bel(tower([X|T]))). • In other words, a block is misplaced exactly when there is an achievement goal for its tower: a-goal(tower([X|T])).

  32. Changing Blocks World Configurations: Action Specifications

  33. Actions Change the Environment… move(a,d)

  34. and Require Updating Mental States. • To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base). • Add effects to the belief base: insert on(a,d) after move(a,d). • Delete old beliefs: delete on(a,b) after move(a,d).

  35. and Require Updating Mental States. • If a goal is believed to be completely achieved, the goal is removed from the goal base. • It is not rational to have a goal you believe to be achieved. • The default update implements a blind commitment strategy. Before move(a,b): beliefs{ on(a,table), on(b,table). } goals{ on(a,b), on(b,table). } After: beliefs{ on(a,b), on(b,table). } goals{ }
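A toy sketch of this default update (update_goals/3 and achieved/2 are hypothetical helpers, not Goal's code): a goal, represented as a list of literals, is dropped once every literal is believed.

    % Blind commitment: drop exactly those goals that are fully believed.
    update_goals(Beliefs, GoalsIn, GoalsOut) :-
        exclude(achieved(Beliefs), GoalsIn, GoalsOut).

    achieved(Beliefs, Goal) :- subset(Goal, Beliefs).

    % ?- update_goals([on(a,b), on(b,table)],
    %                 [[on(a,b), on(b,table)]], G).
    % G = [].              % the single goal was achieved and dropped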

  36. Action Specifications • Actions in GOAL have preconditions and postconditions. • Executing an action in GOAL means: • Preconditions are conditions that need to be true: • Check preconditions on the belief base. • Postconditions (effects) are add/delete lists (STRIPS): • Add positive literals in the postcondition • Delete negative literals in the postcondition • STRIPS-style specification move(X,Y){ pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) } post { not(on(X,Z)), on(X,Y) } }

  37. Action Specifications move(X,Y){ pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) } post { not(on(X,Z)), on(X,Y) } } Example: move(a,b) • Check: clear(a), clear(b), on(a,Z), not( on(a,b) ) • Remove: on(a,Z) • Add: on(a,b) Note: first remove, then add.
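A minimal Prolog sketch of this delete-then-add update over a belief list (apply_post/3 is a hypothetical helper, not part of Goal; Z is assumed already bound by the precondition check):

    % Apply a STRIPS-style postcondition: first delete the negated
    % literals, then add the positive ones.
    apply_post(Post, BeliefsIn, BeliefsOut) :-
        findall(D, member(not(D), Post), Dels),
        subtract(BeliefsIn, Dels, Remaining),
        findall(A, (member(A, Post), A \= not(_)), Adds),
        append(Remaining, Adds, BeliefsOut).

    % ?- apply_post([not(on(a,table)), on(a,b)],
    %               [on(a,table), on(b,table)], B).
    % B = [on(b,table), on(a,b)].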

  38. Action Specifications move(X,Y){ pre { clear(X), clear(Y), on(X,Z) } post { not(on(X,Z)), on(X,Y) } } Example: move(a,b) beliefs{ on(a,table), on(b,table). } beliefs{ on(b,table). on(a,b). }

  39. Action Specifications EXERCISE: move(X,Y){ pre { clear(X), clear(Y), on(X,Z) } post { not(on(X,Z)), on(X,Y) } } • Is it possible to perform move(a,b)? • Is it possible to perform move(a,d)? knowledge{ block(a). block(b). block(c). block(d). block(e). block(f). block(g). block(h). block(i). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]). } beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table). } No: clear(b) fails, because a is on b. Yes: clear(a), clear(d), and on(a,Z) (with Z=b) all hold.

  40. Selecting actions to perform Action Rules

  41. Agent-Oriented Programming • How do humans choose and/or explain actions? • Examples: • I believe it rains; so, I will take an umbrella with me. • I go to the video store because I want to rent I-robot. • I don't believe buses run today, so I take the train. • Use intuitive common-sense concepts: beliefs + goals => action See Chapter 1 of the Programming Guide

  42. Selecting Actions: Action Rules • Action rules are used to define a strategy for action selection. • Defining a strategy for the Blocks World: • If a constructive move can be made, make it. • If a block is misplaced, move it to the table. • What happens: • Check the condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent? • If yes, then (potentially) select move(X,table). program{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). }

  43. Order of Action Rules • Action rules are executed by default in linear order: • The first rule that fires is executed. • The default order can be changed to random: • An arbitrary rule that is able to fire may be selected. program{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). } program[order=random]{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). }
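As a toy illustration of linear-order rule evaluation (a self-contained sketch only; every predicate here, cur_on/2, goal_on/2, select_action/1, is hypothetical, and this is not Goal's interpreter):

    % Current state: a on b, b on the table. Goal: b on a, a on the table.
    cur_on(a,b).  cur_on(b,table).
    goal_on(b,a). goal_on(a,table).

    % tower/1 over the beliefs, and its counterpart over the goal.
    tower([X])      :- cur_on(X,table).
    tower([X,Y|T])  :- cur_on(X,Y), tower([Y|T]).
    gtower([X])     :- goal_on(X,table).
    gtower([X,Y|T]) :- goal_on(X,Y), gtower([Y|T]).

    % a-goal(tower(T)): wanted as a tower, but not yet believed built.
    a_goal_tower(T) :- gtower(T), not(tower(T)).

    % Rules tried top to bottom; the first one that fires wins.
    select_action(move(X,Y))     :- tower([Y|T]), a_goal_tower([X,Y|T]), !.
    select_action(move(X,table)) :- a_goal_tower([X|_]), !.

    % ?- select_action(A).
    % A = move(a,table).   % no constructive move; a is misplaced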

  44. Example Program: Action Rules An agent program may allow for multiple action choices. program[order=random]{ if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). } (Figure: with random order the agent chooses arbitrarily among the enabled actions, e.g. moving d to the table.)

  45. The Sussman Anomaly (1/5) • Non-interleaved planners typically separate the main goal on(A,B), on(B,C) into two sub-goals: on(A,B) and on(B,C). • Planning for these two sub-goals separately and combining the plans found does not work in this case, however. (Figure: initial state: c on a, with a and b on the table; goal state: the tower a on b on c, with c on the table.)
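In Goal syntax this configuration could be written as follows (a sketch, assuming the knowledge section from slide 16):

    beliefs{ on(c,a). on(a,table). on(b,table). }
    goals{ on(a,b), on(b,c), on(c,table). }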

  46. The Sussman Anomaly (2/5) • Initially, all blocks are misplaced. • One constructive move can be made (c to the table). • Note: move(b,c) is not enabled. • Only action enabled: c to the table (2x). Need to check the conditions of the action rules: if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). We have bel(tower([c,a])) and a-goal(tower([c])).

  47. The Sussman Anomaly (3/5) • The only constructive move enabled is: • Move b onto c. Need to check the conditions of the action rules: if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). Note that we have: a-goal(on(a,b),on(b,c),on(c,table)), but not: a-goal(tower([c])). (Figure: current state: a, b, and c all on the table; goal state: the tower a on b on c.)

  48. The Sussman Anomaly (4/5) • Again, the only constructive move enabled is: • Move a onto b. Need to check the conditions of the action rules: if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y). if a-goal(tower([X|T])) then move(X,table). Note that we have: a-goal(on(a,b),on(b,c),on(c,table)), but not: a-goal(tower([b,c])). (Figure: current state: b on c on the table, a on the table; goal state: the tower a on b on c.)

  49. The Sussman Anomaly (5/5) • Upon completely achieving a goal, that goal is automatically removed. • The idea is that no further resources should be wasted on achieving it. In our case, goal(on(a,b),on(b,c),on(c,table)) has been achieved and is dropped. The agent has no other goals and is ready. (Figure: the current state now equals the goal state: the tower a on b on c.)

  50. Organisation • Read Programming Guide Ch1-3 (+ User Manual) • Tutorial: • Download Goal: See http://ii.tudelft.nl/trac/goal (v4537) • Practice exercises from Programming Guide • BW4T assignments 3 and 4 available • Next lecture: • Sensing, perception, environments • Other types of rules & macros • Agent architectures
