What is an agent? • Agenthood = 4 dimensions: • autonomy • proactiveness • embeddedness • distributedness
Autonomy Programs are controlled by user interaction; agents take action without user control: • monitor: does anyone offer a cheap phone? • answer requests: do you want to buy a phone? • negotiate: negotiate delivery date • can even refuse requests.
Techniques for Autonomy Procedures are replaced by behaviors: map situation → action Programming agents = defining • behaviors • control architecture
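A minimal sketch of this idea, with hypothetical behavior names taken from the phone-buying example: each behavior maps a situation to an action (or to nothing), and a simple fixed-priority control architecture decides which behavior fires.

```python
# Behaviors map situation -> action; all names and thresholds are hypothetical.

def answer_requests(situation):
    # Respond to an incoming request from another party.
    if situation.get("incoming_request") == "buy_phone?":
        return "reply: yes, budget is 100"
    return None

def monitor_offers(situation):
    # React when a cheap phone appears on the market.
    if situation.get("phone_price", float("inf")) < 100:
        return "notify user: cheap phone available"
    return None

# Control architecture: a fixed priority ordering over behaviors.
BEHAVIORS = [answer_requests, monitor_offers]

def act(situation):
    for behavior in BEHAVIORS:
        action = behavior(situation)
        if action is not None:
            return action
    return "idle"

print(act({"phone_price": 80}))  # -> notify user: cheap phone available
print(act({}))                   # -> idle
```

The control architecture here is the simplest possible one (first matching behavior wins); real agent architectures use more elaborate arbitration.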
Result Agents can act on users' behalf, for example in looking for products or bidding in auctions (eBay!) User might not always be available (mobile phones) Agents can represent users' interests, for example by choosing the best offers.
Proactiveness Programs are activated by commands: run ... Agents take action by themselves: • to react to events in their environment • to propagate changes or distribute information • to take advantage of opportunities in the environment
Techniques for proactive agents Agents must have explicit goals Goals are linked to plans for achieving them. Plans are continuously reevaluated; new opportunities lead to replanning
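The goal/plan loop can be sketched as follows; the planner, the goal, and the seller data are all hypothetical stand-ins for whatever domain the agent works in.

```python
# Proactive agent sketch: a goal is linked to a plan, and the plan is
# reevaluated on every step; a new opportunity triggers replanning.

def plan_for(goal, environment):
    # Trivial planner: buy from the cheapest known seller.
    sellers = environment.get("sellers", {})
    if not sellers:
        return None
    cheapest = min(sellers, key=sellers.get)
    return ["contact " + cheapest, "negotiate price", "buy"]

def agent_step(goal, environment, current_plan):
    new_plan = plan_for(goal, environment)
    if new_plan != current_plan:  # environment changed -> replan
        return new_plan
    return current_plan

plan = agent_step("own a phone", {"sellers": {"A": 120}}, None)
plan = agent_step("own a phone", {"sellers": {"A": 120, "B": 90}}, plan)
print(plan[0])  # -> contact B
```

The key point is that the plan is recomputed against the current environment rather than executed blindly, which is what lets the agent exploit the cheaper seller B when it appears.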
Result React to ranges of conditions rather than a set of foreseen situations Gain flexibility in information systems Proactive agents really act on user's behalf
Embeddedness Programs take as long as they take Agents act under deadlines imposed by the real world: • scheduling, planning: plan must be finished before its first step • trading: must follow the market • negotiation: limited response time and limited resources (time, memory, communication) Asymptotic complexity analysis is insufficient: it does not give bounds for particular cases!
Techniques for resource-bounded reasoning 1) "Anytime" algorithms: quick but suboptimal solution, then iterative refinement 2) reasoning about resource usage: estimate computation time, choose suitable computation parameters 3) learning, compilation
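A small illustration of the anytime idea, using Monte Carlo estimation of pi as a hypothetical task: the algorithm always holds a usable answer and refines it until the deadline expires, instead of taking as long as it takes.

```python
import random
import time

def anytime_pi(deadline_seconds):
    # Quick, suboptimal initial answer available immediately.
    estimate = 3.0
    inside = total = 0
    stop = time.monotonic() + deadline_seconds
    # Iterative refinement until the real-world deadline.
    while time.monotonic() < stop:
        x, y = random.random(), random.random()
        total += 1
        inside += (x * x + y * y <= 1.0)
        estimate = 4.0 * inside / total
    return estimate

print(anytime_pi(0.05))  # prints an approximation of pi
```

Interrupting the loop earlier simply yields a coarser estimate, which is exactly the property a deadline-bound agent needs.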
Result Agent can integrate in the real world: • driving a car • bidding in an auction • interacting with a user or other agents
Distributedness Programs have common data structures and algorithms Multiagent systems model distributed systems; agents are independent entities and may: • be programmed by different people. • function according to different principles. • be added and removed during operation. • be unknown to other agents in the system.
Techniques for distributed agent systems Agents run on platforms: • runtime environment/interfaces • communication languages • support for mobility
Result Agent system reflects structure of the real system: • controlled by their owners • local decision making with local information • fault tolerant: no central authority
Summary Agents = situated software: • reacts to environment • under real time constraints • in distributed environment
From Agents to Intelligent Agents People understand agents to have intentions: John studied because he wanted to get a diploma. and also: The system is asking for a filename because it wants to save the data. Modeling intentions: reasoning + intelligence!
Situated Intelligence Agent interacts with its environment: • observe effects of actions • discovery • interaction with a user → particular software architectures
Behaviors Simplest form of situated intelligence: feedback control • Thermostat • Robot following a wall • Backup every new file
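The thermostat example is the canonical feedback-control behavior; a minimal sketch, with a setpoint and hysteresis band chosen arbitrarily for illustration:

```python
# Feedback control: sense the temperature, act to push it toward a setpoint.
# Setpoint and hysteresis values are hypothetical.

def thermostat(temperature, setpoint=20.0, hysteresis=1.0):
    if temperature < setpoint - hysteresis:
        return "heater on"
    if temperature > setpoint + hysteresis:
        return "heater off"
    return "no change"

print(thermostat(17.5))  # -> heater on
print(thermostat(22.0))  # -> heater off
```

The hysteresis band keeps the heater from oscillating around the setpoint, which is the usual refinement over naive on/off control.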
Layers... Behaviors should adapt themselves: • people leaving the house • robot hitting the end of the wall • backup unit broken down → reasoning layer
[Diagram: planning/reasoning layer on top of behaviors, with behaviors interacting with the world]
Communication/cooperation Agents need to be instructed Multiple agents need to cooperate: • heating in different rooms • robots running into each other • several agents backing up the same file → cooperation layer
[Diagram: cooperation layer above planning/reasoning and behaviors; the cooperation layer connects to other agents, the behaviors to the world]
Importance of reasoning/planning layer Behaviors operate at level of sensors/effectors: Goto position (35.73,76.14) Communication is symbolic: Go to the corner of the room! reasoning layer translates between them!
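A hypothetical sketch of that translation step, reusing the slide's example coordinates: a symbolic instruction is grounded into a sensor/effector-level target via an assumed landmark map.

```python
# The reasoning layer translates symbolic commands into effector-level
# targets. The landmark map is an assumption for illustration.

LANDMARKS = {"corner": (35.73, 76.14), "door": (0.0, 40.0)}

def translate(command):
    for name, position in LANDMARKS.items():
        if name in command:
            return ("goto", position)
    raise ValueError("unknown landmark in: " + command)

print(translate("Go to the corner of the room!"))  # -> ('goto', (35.73, 76.14))
```

Real systems ground symbols with perception and maps rather than a lookup table, but the layering is the same: symbols above, coordinates below.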
Intelligent Agents Intelligence has (at least) 4 dimensions: • rationality: reasoning/planning layer • symbolic communication about beliefs, goals, etc. • adaptivity • learning
Rational Agents Programs/Algorithms = always do the same thing rm -r * wipes out the operating system Rational agents = do the right thing rm -r * will keep essential files
Rationality: goals File manager: • satisfy user's wish • keep a backup of all major file versions • ... • keep one version of all essential operating system files → action serves to satisfy the goals!
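A toy version of the goal-filtered file manager from the two slides above; the goal set and the list of essential files are hypothetical.

```python
# Rational agent sketch: actions are checked against explicit goals,
# so a blanket delete spares essential files.

GOALS = {"keep_essential_files"}
ESSENTIAL = {"/boot/kernel", "/etc/passwd"}  # assumed list

def delete(path):
    if "keep_essential_files" in GOALS and path in ESSENTIAL:
        return "refused: " + path + " is essential"
    return "deleted: " + path

print(delete("/tmp/draft.txt"))  # -> deleted: /tmp/draft.txt
print(delete("/etc/passwd"))     # -> refused: /etc/passwd is essential
```

The difference from a plain program is that refusal is derived from a goal, not hard-coded into the delete routine itself.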
Complex behavior, intelligence: adapt behavior to changing conditions Negotiation/self-interest requires explicit goals! Learning and using new knowledge requires explicit structures
Techniques for implementing rationality • symbolic reasoning • planning • constraint satisfaction → knowledge systems!
Communicating agents Programs/objects → procedure call: • predefined set of possibilities Agents → communication language: • no predefined set of messages or entities Communication is about: • beliefs, when passing information • goals and plans, when cooperating and negotiating
Agent Communication Languages Language = • syntax: predefined set of message types • semantics: common ontologies: sets of symbols and meanings Examples of languages: KQML, FIPA ACL
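For concreteness, here is a KQML message of the kind such languages define, shown as a Python string. The performative `ask-one` and the parameter keywords are standard KQML; the sender, receiver, ontology, and content values are hypothetical.

```python
# A KQML ask-one message: performative plus parameterized content.
# Agent names, ontology, and content are made up for illustration.

kqml_message = """(ask-one
  :sender   buyer-agent
  :receiver phone-shop
  :language Prolog
  :ontology phones
  :content  "price(nokia_3310, P)")"""

print(kqml_message.splitlines()[0])  # -> (ask-one
```

The syntax fixes the message types (performatives); the ontology named in the message is what gives the content symbols a shared meaning.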
Needed for Coordination, cooperation and negotiation among agents Communicate about intentions, self-interest ACL provides a higher abstraction layer that allows heterogeneous agents to communicate Add/remove agents in a running multiagent system
Adaptivity/Learning Adapt to user: • by explicit but highly flexible customization • automatically by observing behavior Learn from the environment: • know objects and operations • continuously improve behavior
Techniques for adaptive/learning agents Knowledge systems: explicit representation of goals, operators, plans easy to modify Automatic adaptation by machine learning or case-based reasoning Information gathering/machine learning techniques for learning about the environment Reinforcement learning, genetic algorithms for learning behaviors
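A toy illustration of learning a behavior from reward, in the spirit of the last bullet; the two candidate actions and their rewards are an assumed stand-in environment.

```python
import random

# Toy value learning: the agent estimates each action's value from
# observed rewards and keeps the best one. Environment is hypothetical.

rewards = {"greet_user": 1.0, "ignore_user": -1.0}
q = {action: 0.0 for action in rewards}
alpha = 0.5  # learning rate

random.seed(0)
for _ in range(50):
    action = random.choice(list(q))            # explore
    q[action] += alpha * (rewards[action] - q[action])  # update estimate

best = max(q, key=q.get)
print(best)  # -> greet_user
```

This is the simplest possible case (one state, immediate reward); full reinforcement learning extends the same update to sequences of states and delayed rewards.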
Why is this important? Every user is different → requires different agent behavior Impractical to program a different agent for everyone Programmers cannot foresee all aspects of the environment Agent knows its environment better than a programmer
Summary: what are intelligent agents? Agents are a useful metaphor for computer science: • autonomous/proactive: behaviors • embedded: real-time • distributed • intentional: with explicit beliefs, goals, plans • communicative: through a general ACL • self-adaptation/learning
[Diagram: agent typology as overlapping properties (learn, cooperate, autonomous); the overlaps yield collaborative learning agents, collaborative agents, interface agents, and smart agents]
[Diagram: taxonomy of autonomous agents into biological, robotic, and computational agents; computational agents divide into artificial life agents and software agents, the latter including task-specific agents, entertainment agents, and viruses]
Implementing agents... Computers always execute algorithms • how can anyone implement agents? Agents are a metaphor, implementation is limited: • limited sensors → limited adaptivity • limited ontologies → limited communication language • ...
Technologies for Intelligent Agents Methods for simple behaviors: • behaviors • reinforcement learning • distributed CSP Methods for controlling behaviors: • planning • case-based reasoning
Technologies for Intelligent Agents Formalisms for cooperation: • auctions • negotiation • BDI (Belief-Desire-Intention) • ACL/KQML • Ontologies
Technologies for Intelligent Agents Theories of agent systems: • self-interestedness • competition/economies • behavior of agent systems Platforms: • auctions, markets (negotiation, contracts) • multiagent platforms • mobile agent platforms
Agent Communication Languages • Structure: performatives + content language • KQML • Criteria for content languages
Setting Communication among heterogeneous agents: • no common data structures • no common messages • no common ontology but common communication language: • protocol • reference
Structure of an ACL Vocabulary (words): e.g. reference to objects Messages (sentences): e.g. request for an action Distributed Algorithms (conversations): e.g. negotiating task sharing
Levels of ACL Object sharing (Corba, RPC, RMI, Splice): shared objects, procedures, data structures Knowledge sharing (KQML, FIPA ACL): shared facts, rules, constraints, procedures and knowledge Intentional sharing: shared beliefs, plans, goals and intentions Cultural sharing: shared experiences and strategies
Human communication: intentional/cultural sharing Ideal example of a heterogeneous agent system: human society See agents as intentional systems: all actions and communication are motivated by beliefs and intentions Allows modeling agent behavior in a human-understandable way
Problems with intentional sharing BDI model requires modal logics Many modal logics pose unrealistic computational requirements: • all consequences known • combinatorial inference BDI model too general as a basis for agent cooperation
A feasible solution: knowledge sharing ACL = 2 components: • performative: request, inform, etc. • content: a piece of knowledge Allows formulating distributed algorithms in a heterogeneous agent society Basis: human communication/speech acts
Speech acts Language = • content (e.g., read a book) + • speech act (e.g., I want to, I want you to, ...) Reference: • locution: physical utterance • illocution: act of conveying intentions • perlocution: actions that occur as a result