
LOGICAL AGENTS




  1. LOGICAL AGENTS Tuğçe ÜSTÜNER Artificial Intelligence IES 503 ‘In which we introduce a logic that is sufficient for building knowledge-based agents!’

  2. Contents • Introduction • Knowledge-Based Agents • Syntax and Semantics • Entailment • Logical Agents for the Wumpus World • Inference • Propositional Logic • Wumpus World Sentences • Logical Equivalence • Important Equivalences • Validity and Satisfiability • Resolution • Normal Forms • Forward and Backward Chaining • Conclusion

  3. INTRODUCTION • The concern of this chapter is the representation of knowledge and the reasoning processes that bring knowledge to life. • Humans, it seems, know things and do reasoning. Knowledge and reasoning are also important for artificial agents because they enable successful behaviors that would be very hard to achieve otherwise. • The knowledge of problem-solving agents is very specific and inflexible. • Logic will be the primary vehicle for representing knowledge. The knowledge of logical agents is always definite: each proposition is either true or false in the world.

  4. Knowledge-Based Agents • Humans can know “things” and “reason” • Representation: How are the things stored? • Reasoning: How is the knowledge used? • To solve a problem… • To generate more knowledge… • Knowledge and reasoning are important to artificial agents because they enable successful behaviors difficult to achieve otherwise • Useful in partially observable environments • Can benefit from knowledge in very general forms, combining and recombining information

  5. Knowledge-Based Agents • The central component of a knowledge-based agent is a knowledge base (KB) • A set of sentences in a formal language • Sentences are expressed in a knowledge representation language • Two generic functions: • TELL - add new sentences (facts) to the KB • “Tell it what it needs to know” • ASK - query what is known from the KB • “Ask what to do next”
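The TELL/ASK interface can be sketched in a few lines of Python. This is only an illustration, not a standard library API: the class name, the string representation of sentences, and the membership-based ASK are all simplifying assumptions (a real agent would run an inference procedure inside ASK rather than a lookup).

```python
class KnowledgeBase:
    """Minimal sketch of a KB with the two generic functions."""

    def __init__(self):
        self.sentences = set()  # facts stored as plain strings for simplicity

    def tell(self, sentence):
        """TELL: add a new sentence (fact) to the KB."""
        self.sentences.add(sentence)

    def ask(self, query):
        """ASK: query what is known. Here a simple membership check;
        a real agent would run inference over the stored sentences."""
        return query in self.sentences


kb = KnowledgeBase()
kb.tell("the shirt is green")
print(kb.ask("the shirt is green"))    # True
print(kb.ask("the shirt is striped"))  # False
```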

  6. Knowledge-Based Agents • The agent must be able to: • Represent states and actions • Incorporate new percepts • Update internal representations of the world • Deduce hidden properties of the world • Deduce appropriate actions • Architecture: an Inference Engine (domain-independent algorithms) operating on a Knowledge Base (domain-specific content)

  7. Knowledge-Based Agents • Declarative • You can build a knowledge-based agent simply by “TELLing” it what it needs to know • Procedural • Encode desired behaviors directly as program code • Minimizing the role of explicit representation and reasoning can result in a much more efficient system

  8. Syntax and Semantics • Logics are formal languages for representing information such that conclusions can be drawn • Syntax defines the sentences in the language • Semantics defines the “meaning” of sentences • A term is a logical expression that refers to an object • An atomic sentence is formed from a predicate symbol followed by a parenthesized list of terms

  9. Syntax and Semantics • Example: • Syntax: • x+2 ≥ y is a sentence • x2+y > {} is not a sentence • Semantics: • x+2 ≥ y is true iff the number x+2 is no less than the number y • x+2 ≥ y is true in a world where x = 7, y = 1 • x+2 ≥ y is false in a world where x = 0, y = 6

  10. Entailment • Definition: a knowledge base (KB) entails a sentence α (alpha) if and only if α is true in all worlds where KB is true • Notation: KB ╞ α • ‘Entailment is a relationship between sentences that is based on semantics’

  11. Entailment • Example: the KB containing ‘the shirt is green’ and ‘the shirt is striped’ entails ‘the shirt is green or the shirt is striped’ • Example: x+y = 4 entails 4 = x+y • Models: models are formally structured worlds, with respect to which truth can be evaluated • m is a model of a sentence α if α is true in m • M(α) is the set of all models of α • KB ╞ α if and only if M(KB) ⊆ M(α) • Example: KB = the shirt is green and striped; α = the shirt is green
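The definition KB ╞ α iff M(KB) ⊆ M(α) can be checked directly by enumerating models. The sketch below (my own encoding, not from the slides) represents the shirt example with two proposition symbols and sentences as Python predicates over a model:

```python
from itertools import product

symbols = ["green", "striped"]

def models(sentence):
    """Return the set of truth assignments (as tuples) satisfying
    `sentence`, where `sentence` maps a {symbol: bool} dict to bool."""
    result = set()
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if sentence(model):
            result.add(values)
    return result

kb    = lambda m: m["green"] and m["striped"]  # 'the shirt is green and striped'
alpha = lambda m: m["green"] or m["striped"]   # 'the shirt is green or striped'

# Entailment as set inclusion: every model of KB is a model of alpha.
print(models(kb) <= models(alpha))  # True
```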

  12. Wumpus World • Performance Measure • Gold +1000, Death -1000 • Step -1, Use arrow -10 • Environment • Squares adjacent to the Wumpus are smelly • Squares adjacent to a pit are breezy • Glitter iff gold is in the same square • Shooting kills the Wumpus if you are facing it • Shooting uses up the only arrow • Grabbing picks up the gold if in the same square • Releasing drops the gold in the same square • Actuators • Left turn, right turn, forward, grab, release, shoot • Sensors • Breeze, glitter, and smell

  13. Wumpus World • Characterization of the Wumpus World • Observable? • Partial, only local perception • Deterministic? • Yes, outcomes are specified • Episodic? • No, sequential at the level of actions • Static? • Yes, Wumpus and pits do not move • Discrete? • Yes • Single agent? • Yes

  14. Wumpus World • KB = wumpus-world rules + observations

  15. Wumpus World • KB = wumpus-world rules + observations

  16. Wumpus World • KB = wumpus-world rules + observations

  17. Inference • KB ├i α: sentence α can be derived from KB by procedure i (i is an algorithm that derives α from KB) • Soundness: i is sound if whenever KB ├i α, it is also true that KB ╞ α • Completeness: i is complete if whenever KB ╞ α, it is also true that KB ├i α

  18. Propositional Logic • Propositional symbols: A, B, P1, P2, ShirtIsGreen are atomic sentences • If S, S1, S2 are sentences, then ¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2 and S1 ⇔ S2 are also sentences • Propositional models: each model specifies true/false for each proposition symbol

  19. Propositional Logic

  20. Wumpus World Sentences • Propositional symbols: Pi,j means ‘there is a pit in [i,j]’; Bi,j means ‘there is a breeze in [i,j]’ • Sentences: ‘Pits cause breezes in adjacent squares’ • ‘A square is breezy if and only if there is an adjacent pit’
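The breeze biconditional for one square can be written as a predicate over a model, in the same spirit as the entailment sketch above. The coordinates (the corner square [1,1] with neighbors [1,2] and [2,1]) and the symbol names B11, P12, P21 are my illustrative choices:

```python
def breeze_rule(m):
    """B1,1 <=> (P1,2 or P2,1): the square [1,1] is breezy if and
    only if at least one adjacent square contains a pit."""
    return m["B11"] == (m["P12"] or m["P21"])

# Consistent model: no breeze in [1,1], no adjacent pits.
print(breeze_rule({"B11": False, "P12": False, "P21": False}))  # True
# Inconsistent model: a breeze but no adjacent pit violates the rule.
print(breeze_rule({"B11": True, "P12": False, "P21": False}))   # False
```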

  21. Logical Equivalence • Two sentences α and β are logically equivalent, denoted α ≡ β, if they are true in the same models
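“True in the same models” suggests a direct truth-table check. A minimal sketch (the predicate encoding is mine), using one of De Morgan's laws as the example:

```python
from itertools import product

def equivalent(s1, s2, symbols):
    """Two sentences are logically equivalent iff they agree in
    every model over the given symbols."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if s1(m) != s2(m):
            return False
    return True

# De Morgan: not(A and B)  ≡  (not A) or (not B)
lhs = lambda m: not (m["A"] and m["B"])
rhs = lambda m: (not m["A"]) or (not m["B"])
print(equivalent(lhs, rhs, ["A", "B"]))  # True
```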

  22. Important Equivalences

  23. Validity and Satisfiability • A sentence is valid if it is true in all models • A sentence is satisfiable if it is true in some models • A sentence is unsatisfiable if it is true in no models

  24. Validity and Satisfiability • Connects validity and unsatisfiability: α is valid if and only if ¬α is unsatisfiable • Connects inference and unsatisfiability: KB ╞ α if and only if (KB ∧ ¬α) is unsatisfiable
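The first connection on this slide (α is valid iff ¬α is unsatisfiable) can be implemented literally on top of a brute-force satisfiability check. A hedged sketch with the same predicate encoding as above:

```python
from itertools import product

def satisfiable(sentence, symbols):
    """True iff some truth assignment over `symbols` makes `sentence` true."""
    return any(sentence(dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

def valid(sentence, symbols):
    """alpha is valid  <=>  not-alpha is unsatisfiable."""
    return not satisfiable(lambda m: not sentence(m), symbols)

tautology  = lambda m: m["A"] or not m["A"]  # true in all models
contingent = lambda m: m["A"]                # true in some models only

print(valid(tautology, ["A"]))        # True
print(valid(contingent, ["A"]))       # False
print(satisfiable(contingent, ["A"])) # True
```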

  25. Resolution • There are two kinds of proof methods: application of inference rules and model checking. • Application of inference rules: legitimate (sound) generation of new sentences from old. A proof is a sequence of inference rule applications; inference rules can be used as operators in a standard search algorithm. Typically, such algorithms require transformation of sentences into a normal form. • Model checking (KB ├i α): • truth table enumeration (always exponential in n) • backtracking & improved backtracking • heuristic search in model space (sound but incomplete)

  26. Normal Forms • A literal is an atomic sentence (propositional symbol) or the negation of an atomic sentence • A clause is a disjunction of literals • Conjunctive Normal Form (CNF): a conjunction of disjunctions of literals (i.e., a conjunction of clauses)
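These three definitions map onto a small data encoding. The representation below (a literal as a `(symbol, is_positive)` pair, a clause as a set, CNF as a list of clauses) is my own convention for illustration, not a standard library format:

```python
# Example CNF sentence: (A or not B) and (B or C)
cnf = [
    {("A", True), ("B", False)},  # clause: A ∨ ¬B
    {("B", True), ("C", True)},   # clause: B ∨ C
]

def eval_cnf(cnf, model):
    """A CNF sentence is true in a model iff every clause (conjunct)
    contains at least one literal that is true in the model."""
    return all(any(model[sym] == pos for sym, pos in clause)
               for clause in cnf)

print(eval_cnf(cnf, {"A": True, "B": True, "C": False}))  # True
```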

  27. Resolution Algorithm • In mathematical logic, resolution is a rule of inference that yields a refutation-based theorem-proving technique for sentences in propositional logic. • In other words, iteratively applying the resolution rule in a suitable way allows one to tell whether a propositional formula is satisfiable and to prove that a first-order formula is unsatisfiable. • For first-order logic the method can prove unsatisfiability but cannot always establish satisfiability, as is the case for all methods for first-order logic.

  28. Resolution • Example: • Wetness is high and the weather is cloudy. • If the weather is cloudy, it will rain. • If the wetness is high, the weather is hot. • The weather is not hot. • In CNF: Wet; Cloudy; ¬Cloudy ∨ Rain; ¬Wet ∨ Hot; ¬Hot
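The refutation in this example can be sketched with the clause encoding from the Normal Forms slide. This is an illustrative reading of the slide, not its exact derivation: the symbol names (Wet, Cloudy, Rain, Hot) and the two-step resolution sequence are my assumptions. Resolving ¬Wet ∨ Hot with ¬Hot gives ¬Wet, which resolves with Wet to the empty clause, showing the KB is contradictory:

```python
def resolve(c1, c2):
    """Propositional resolution: for each pair of complementary
    literals, the resolvent is the union of the remaining literals.
    Clauses are sets of (symbol, is_positive) pairs."""
    resolvents = []
    for sym, pos in c1:
        if (sym, not pos) in c2:
            resolvents.append((c1 - {(sym, pos)}) | (c2 - {(sym, not pos)}))
    return resolvents

# Resolve (not Wet or Hot) with (not Hot): yields (not Wet).
step1 = resolve({("Wet", False), ("Hot", True)}, {("Hot", False)})
# Resolve (not Wet) with (Wet): yields the empty clause, a contradiction.
step2 = resolve(step1[0], {("Wet", True)})
print(step2 == [set()])  # True: the KB is unsatisfiable
```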

  29. Forward and Backward Chaining • Forward chaining is one of the two main methods of reasoning with inference rules in artificial intelligence and can be described logically. Forward chaining is a popular implementation strategy for expert systems and for business and production rule systems. The opposite of forward chaining is backward chaining. • Forward chaining starts with the available data and uses inference rules to extract more data until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose ‘If’ clause is known to be true. When such a rule is found, the engine can conclude, or infer, the ‘Then’ clause, adding new information to its data. • Inference engines iterate through this process until a goal is reached.

  30. Forward Chaining • Suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules: • If X croaks and eats flies - Then X is a frog • If X chirps and sings - Then X is a canary • If X is a frog - Then X is green • If X is a canary - Then X is yellow

  31. Forward Chaining • Let us illustrate forward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts: • Fritz croaks • Fritz eats flies • Tweety eats flies • Tweety chirps • Tweety is yellow

  32. Forward Chaining • With forward reasoning, the computer can derive that Fritz is green in four steps: 1. Fritz croaks and Fritz eats flies • Based on logic, the computer can derive: 2. Fritz croaks and eats flies • Based on rule 1, the computer can derive: 3. Fritz is a frog • Based on rule 3, the computer can derive: 4. Fritz is green
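The derivation above can be sketched as a naive forward-chaining loop: rules fire whenever all their premises are in the fact set, and the loop repeats until no rule adds anything new. The encoding (facts as `(subject, property)` pairs, rules as premise-set/conclusion pairs) is my own illustration:

```python
rules = [
    ({"croaks", "eats flies"}, "is a frog"),   # rule 1
    ({"chirps", "sings"}, "is a canary"),      # rule 2
    ({"is a frog"}, "is green"),               # rule 3
    ({"is a canary"}, "is yellow"),            # rule 4
]

facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose premises all hold for some
    subject, adding the conclusion, until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        subjects = {s for s, _ in facts}
        for premises, conclusion in rules:
            for subject in subjects:
                if (all((subject, p) in facts for p in premises)
                        and (subject, conclusion) not in facts):
                    facts.add((subject, conclusion))
                    changed = True
    return facts

print(("Fritz", "is green") in forward_chain(facts, rules))  # True
```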

  33. Backward Chaining • The name “forward chaining” comes from the fact that the computer starts with the data and reasons its way to the answer, whereas backward chaining works the other way around. • In a backward-chaining derivation, the rules are used in the reverse order compared to forward chaining. • Because the data determine which rules are selected and used, forward chaining is called data-driven, in contrast to goal-driven backward-chaining inference. • One advantage of forward chaining over backward chaining is that the reception of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.

  34. Backward Chaining • Example: suppose that the goal is to conclude whether Tweety or Fritz is a frog, given information about each of them, and that the rule base contains the following four rules: • If X croaks and eats flies - Then X is a frog • If X chirps and sings - Then X is a canary • If X is a frog - Then X is green • If X is a canary - Then X is yellow • Let us illustrate backward chaining by following the pattern of a computer as it evaluates the rules. Assume the following facts: • Fritz croaks • Fritz eats flies • Tweety eats flies • Tweety chirps • Tweety is yellow

  35. Backward Chaining • With backward reasoning, the computer can answer the question “Who is a frog?” in four steps. In its reasoning, the computer uses a placeholder: 1. ? is a frog • Based on rule 1, the computer can derive: 2. ? croaks and eats flies • Based on logic, the computer can derive: 3. ? croaks and ? eats flies • Based on the facts, the computer can derive: 4. Fritz croaks and Fritz eats flies • This derivation causes the computer to produce Fritz as the answer to the question “Who is a frog?” • The computer has not used any knowledge about Tweety to conclude that Fritz is a frog.
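The goal-driven search above can be sketched as a recursive procedure: to prove a goal for a subject, either find it among the facts or find a rule whose conclusion matches and recursively prove all its premises. Self-contained illustrative code with the same encoding as the forward-chaining sketch (my own, not from the slides):

```python
rules = [
    ({"croaks", "eats flies"}, "is a frog"),
    ({"chirps", "sings"}, "is a canary"),
    ({"is a frog"}, "is green"),
    ({"is a canary"}, "is yellow"),
]

facts = {("Fritz", "croaks"), ("Fritz", "eats flies"),
         ("Tweety", "eats flies"), ("Tweety", "chirps"),
         ("Tweety", "is yellow")}

def backward_chain(subject, goal, facts, rules):
    """Prove `goal` for `subject`: either it is a known fact, or some
    rule concludes it and all of that rule's premises can be proved."""
    if (subject, goal) in facts:
        return True
    return any(conclusion == goal
               and all(backward_chain(subject, p, facts, rules)
                       for p in premises)
               for premises, conclusion in rules)

# "Who is a frog?" - try each known subject against the goal.
frogs = [s for s in ["Fritz", "Tweety"]
         if backward_chain(s, "is a frog", facts, rules)]
print(frogs)  # ['Fritz']
```

Note that, as the slide says, proving "Fritz is a frog" never touches the facts about Tweety.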

  36. Forward & Backward Chaining • FC is data-driven, automatic, unconscious processing • May do lots of work that is irrelevant to the goal • BC is goal-driven, appropriate for problem-solving • Complexity of BC can be much less than linear in the size of the KB

  37. CONCLUSION • Logical agents apply inference to a knowledge base to derive new information and make decisions • The basic concepts of logic are syntax, semantics, entailment, inference, soundness and completeness • The Wumpus world requires the ability to represent partial and negated information and to reason by cases • Resolution is sound and complete for propositional logic • Propositional logic lacks expressive power

  38. THANK YOU FOR LISTENING
