Part V: Communication and Integration
Chapter 23: Multiple Agents • Interacting Agents • Models of Other Agents • A Modal Logic of Knowledge • Additional Readings and Discussion
Chapter 23: Multiple Agents • 23.1 Interacting Agents • Agents' objectives • To predict what another agent will do: we need methods to model the other agent • To affect what another agent will do: we need methods to communicate with the other agent • Focus • Distributed artificial intelligence (DAI)
23.2 Models of Other Agents • 23.2.1 Varieties of Models
23.3 A Modal Logic of Knowledge • 23.3.1 The Modal Operator K • To say that Sam (the name of an agent) knows that block A is on block B, we write: K(Sam, On(A, B))
23.3 A Modal Logic of Knowledge • Modal Operators • The modal operator K is used to construct a formula whose intended meaning is that a certain agent knows a certain proposition, e.g., K(Sam, On(A,B)); in general we write K(α, φ) or Kα(φ). • Knowledge and belief • Whereas an agent can believe a false proposition, it cannot know anything that is false. • The logic of knowledge is therefore simpler than the logic of belief.
23.3 A Modal Logic of Knowledge (Cont'd) • Modal first-order language, using the operator K • Syntax 1. All of the wffs of ordinary first-order predicate calculus are also wffs of the modal language. 2. If φ is a closed wff of the modal language, and if α is a ground term, then K(α, φ) is a wff of the modal language. 3. As usual, if φ and ψ are wffs, then so are any expressions that can be constructed from φ and ψ by the usual propositional connectives.
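The three syntax rules above can be sketched as a small checker. This is my own illustration, not from the text: the tuple representation and the names `free_vars` and `is_wff` are assumptions. Rule 2 is the interesting one — the formula inside K must be closed and the agent term must be ground.

```python
# Modal wffs as nested tuples, e.g. ('K', 'Sam', ('atom', 'On', ('A', 'B'))).
# Variables are strings beginning with '?'; everything else is a constant.

def free_vars(wff):
    """Collect the free variables occurring in a wff."""
    op = wff[0]
    if op == 'atom':                                 # ('atom', pred, (terms...))
        return {t for t in wff[2] if t.startswith('?')}
    if op in ('and', 'or', 'implies'):
        return free_vars(wff[1]) | free_vars(wff[2])
    if op == 'not':
        return free_vars(wff[1])
    if op == 'K':                                    # ('K', agent, phi)
        return free_vars(wff[2])
    raise ValueError(f'unknown operator {op}')

def is_wff(wff):
    """Enforce rule 2: K(alpha, phi) requires ground alpha and closed phi."""
    op = wff[0]
    if op == 'atom':
        return True
    if op in ('and', 'or', 'implies'):
        return is_wff(wff[1]) and is_wff(wff[2])
    if op == 'not':
        return is_wff(wff[1])
    if op == 'K':
        agent, phi = wff[1], wff[2]
        return (not agent.startswith('?')) and is_wff(phi) and not free_vars(phi)
    return False

# K(Sam, On(A, B)) is legal; a free variable inside K (as in the
# quantified-in example on the next slide) is not.
legal = ('K', 'Sam', ('atom', 'On', ('A', 'B')))
illegal = ('K', 'Agent1', ('atom', 'On', ('?x', 'B')))
```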
23.3 A Modal Logic of Knowledge (Cont'd) • As examples, • K[Agent1, K(Agent2, On(A,B))] : Agent1 knows that Agent2 knows that A is on B. • K(Agent1, On(A,B)) ∨ K(Agent1, On(A,C)) : Either Agent1 knows that A is on B, or it knows that A is on C. • K[Agent1, On(A,B) ∨ On(A,C)] : Agent1 knows that either A is on B or A is on C. • K(Agent1, On(A,B)) ∨ K(Agent1, ¬On(A,B)) : Agent1 knows whether or not A is on B. • ¬K(Agent1, On(A,B)) : Agent1 does not know that A is on B. • (∃x)K(Agent1, On(x,B)) : not a legal wff, because the formula inside K is not closed.
23.3 A Modal Logic of Knowledge (Cont'd) • Knowledge Axioms • The ordinary connectives (∧, ∨, ¬, ⊃) have compositional semantics. • The semantics of K is not compositional: the truth value of Kα(φ) does not depend compositionally on α and φ. • From φ ≡ ψ we cannot conclude Kα(φ) ≡ Kα(ψ) for all α, since α might not know that φ is equivalent to ψ. • Axiom schemas • Distribution axiom: [Kα(φ) ∧ Kα(φ ⊃ ψ)] ⊃ Kα(ψ) … (1) ( equivalently, Kα(φ ⊃ ψ) ⊃ [Kα(φ) ⊃ Kα(ψ)] … (2) ) • Knowledge axiom: Kα(φ) ⊃ φ … (3) : an agent cannot possibly know something that is false. • Positive-introspection axiom: Kα(φ) ⊃ Kα(Kα(φ)) … (4)
23.3 A Modal Logic of Knowledge (Cont'd) • Negative-introspection axiom: ¬Kα(φ) ⊃ Kα(¬Kα(φ)) … (5) • Epistemic necessitation: from ├ φ infer Kα(φ) … (6) • Logical omniscience: from φ ├ ψ and Kα(φ), infer Kα(ψ) … (7) ( from ├ (φ ⊃ ψ) infer Kα(φ) ⊃ Kα(ψ) … (8) ) • From logical omniscience, K(α, φ ∧ ψ) ≡ K(α, φ) ∧ K(α, ψ) … (9)
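Two of the schemas above — distribution (1) and the knowledge axiom (3) — can be mechanized as simple forward-chaining rules over a set of formulas. This is a toy sketch of my own (the tuple encoding and the name `close` are assumptions), not an implementation from the text:

```python
# Formulas: atoms are strings; ('implies', p, q) is p ⊃ q;
# ('K', agent, phi) is K_agent(phi).

def close(kb):
    """Saturate kb under the distribution and knowledge axioms."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        new = set()
        for f in kb:
            if f[0] == 'K':
                agent, phi = f[1], f[2]
                # knowledge axiom (3): Kα(φ) ⊃ φ
                new.add(phi)
                # distribution (1): [Kα(φ) ∧ Kα(φ ⊃ ψ)] ⊃ Kα(ψ)
                if phi[0] == 'implies':
                    ante, cons = phi[1], phi[2]
                    if ('K', agent, ante) in kb:
                        new.add(('K', agent, cons))
        if new - kb:
            kb |= new
            changed = True
    return kb

# If A knows p and knows p ⊃ q, the closure contains K(A, q), and by
# the knowledge axiom it also contains the bare facts p and q.
kb = {('K', 'A', ('implies', 'p', 'q')), ('K', 'A', 'p')}
closed = close(kb)
```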
23.3 A Modal Logic of Knowledge (Cont'd) • Reasoning about Other Agents' Knowledge • Our agent can carry out proofs of some statements about the knowledge of other agents using only the axioms of knowledge, epistemic necessitation, and its own reasoning ability (modus ponens, resolution). • e.g., the wise-man puzzle • Setup: among three wise men, at least one has a white spot on his forehead. Each wise man can see the others' foreheads but not his own. Two of them have said, "I don't know whether I have a white spot." • Proof of K(A, White(A)) (where A is the third man): 1. KA[¬White(A) ⊃ KB(¬White(A))] (given) 2. KA[KB(¬White(A) ⊃ White(B))] (given) 3. KA(¬KB(White(B))) (given) 4. ¬White(A) ⊃ KB(¬White(A)) (1, and axiom 3) 5. KB[¬White(A) ⊃ White(B)] (2, and axiom 3)
23.3 A Modal Logic of Knowledge (Cont'd) 6. KB(¬White(A)) ⊃ KB(White(B)) (5, and axiom 2) 7. ¬White(A) ⊃ KB(White(B)) (resolution on the clause forms of 4 and 6) 8. ¬KB(White(B)) ⊃ White(A) (contrapositive of 7) 9. KA[¬KB(White(B)) ⊃ White(A)] (1–5, 8, rule 7) 10. KA(¬KB(White(B))) ⊃ KA(White(A)) (9, and axiom 2) 11. KA(White(A)) (modus ponens using 3 and 10)
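The syntactic proof above can be cross-checked semantically by brute force over possible worlds. The sketch below is my own illustration (the world encoding and function names are assumptions, not from the text): enumerate every assignment of spots with at least one white, filter the worlds by each public announcement "I don't know", and confirm that the third wise man then knows his own spot is white.

```python
from itertools import product

# A world assigns True (white spot) or False to each of A, B, C;
# it is given that at least one spot is white.
worlds = [dict(zip('ABC', w)) for w in product([True, False], repeat=3)
          if any(w)]

def knows_own_spot(agent, world, worlds):
    """An agent knows its own spot iff every world consistent with what
    it sees (the other two foreheads) assigns it the same spot."""
    candidates = [w for w in worlds
                  if all(w[o] == world[o] for o in 'ABC' if o != agent)]
    return all(w[agent] == world[agent] for w in candidates)

# The first two wise men each announce "I don't know": discard every
# world in which that speaker *would* have known.
for speaker in ['B', 'C']:
    worlds = [w for w in worlds if not knows_own_spot(speaker, w, worlds)]

# In every world that survives, A has a white spot and knows it.
remaining = worlds
```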
23.3 A Modal Logic of Knowledge (Cont'd) • Predicting Actions of Other Agents • In order to predict what another agent, A1, will do: • If A1 is not too complex, our agent may assume that A1's actions are controlled by a T-R (teleo-reactive) program. Suppose the conditions in that program are ci, for i = 1, …, k. To predict A1's future actions, our agent needs to reason about how A1 will evaluate these conditions. • It is often appropriate for our agent to take an intentional stance toward A1 and attempt to establish whether or not KA1(ci), for i = 1, …, k.
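The prediction step above can be sketched as follows. This is a minimal illustration of my own (the program, condition names, and `predict_action` are all hypothetical): a T-R program fires the first rule in its ordered list whose condition holds, so our agent predicts the action of the first rule whose condition it can establish A1 knows.

```python
def predict_action(tr_program, attributed_knowledge):
    """tr_program: ordered list of (condition, action) pairs.
    attributed_knowledge: the set of conditions c for which our agent
    has established K_A1(c).  Returns the predicted action, or None if
    no K_A1(c_i) could be established."""
    for condition, action in tr_program:
        if condition in attributed_knowledge:
            return action
    return None

# Hypothetical T-R program attributed to A1; 'T' is the always-true
# final condition conventional in T-R programs.
program = [('holding_block', 'stack'),
           ('block_visible', 'grasp'),
           ('T', 'wander')]
```

For example, if our agent can establish K_A1(block_visible) but not K_A1(holding_block), it predicts that A1 will grasp.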