Learning to Share Meaning in a Multi-Agent System (Part I) Ganesh Padmanabhan
Article • Williams, A.B., "Learning to Share Meaning in a Multi-Agent System", Journal of Autonomous Agents and Multi-Agent Systems, Vol. 8, No. 2, pp. 165–193, March 2004. (Most downloaded article in the journal)
Overview • Introduction (part I) • Approach (part I) • Evaluation (part II) • Related Work (part II) • Conclusions and Future Work (part II) • Discussion
Introduction • One common ontology? Does that work? • If not, what issues do we face when agents have similar views of the world but different vocabularies? • Goal: reconcile diverse ontologies so that agents can communicate effectively when appropriate.
Diverse Ontology Paradigm: Questions Addressed • "How do agents determine if they know the same semantic concepts?" • "How do agents determine if their different semantic concepts actually have the same meaning?" • "How can agents improve their interpretation of semantic concepts by recursively learning missing discriminating attributes?" • "How do these methods affect the group performance at a given collective task?"
Ontologies and Meaning • Operational Definitions Needed • Conceptualization, ontology, universe of discourse, functional basis set, relational basis set, object, class, concept description, meaning, object constant, semantic concept, semantic object, semantic concept set, distributed collective memory
Conceptualization • All objects that an agent presumes to exist and their interrelationships with one another. • Tuple: Universe of Discourse, Functional Basis Set, Relational Basis Set
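In symbols (U, F, and R are my shorthand for the three components, not necessarily the paper's notation):

\[
\text{Conceptualization} = \langle\, U,\; F,\; R \,\rangle
\]

where \(U\) is the universe of discourse, \(F\) the functional basis set, and \(R\) the relational basis set.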
Ontology • Specification of a conceptualization • Mapping of language symbols to an agent’s conceptualization • Terms used to name objects • Functions to interpret objects • Relations in the agent’s world
Object • Anything we can say something about • Concrete or Abstract classes • Primitive or Composite • Fictional or non-fictional
UOD and ontology • “The difference between the UOD and the ontology is that the UOD are objects that exist but until they are placed in an agent’s ontology, the agent does not have a vocabulary to specify objects in the UOD.”
Forming a Conceptualization • The agent's first step in looking at the world. • Declarative Knowledge • Declarative Semantics • An interpretation function maps an object in a conceptualization to language elements
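A minimal formalization of that last point, following the slide's direction of mapping (the symbols are mine, not the paper's):

\[
I : U \to \Sigma
\]

where \(U\) contains the objects in the agent's conceptualization and \(\Sigma\) is the set of language elements (symbols) the agent uses to name them.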
Approach Overview • Assumptions • Agents’ use of supervised inductive learning to learn representations for their ontologies. • Mechanics of discovering similar semantic concepts, translation, and interpretation. • Recursive Semantic Context Rule Learning for improved performance.
Key Assumptions • “Agents live in a closed world represented by distributed collective memory.” • “The identity of the objects in this world are accessible to all agents and can be known by the agents.” • “Agents use a knowledge structure that can be learned using objects in the distributed collective memory.” • “The agents do not have any errors in their perception of the world even though their perceptions may differ.”
Semantic Concept Learning • Individual Learning, i.e. learning one’s own ontology • Group Learning, i.e. one agent learning that another agent knows a particular concept
WWW Example Domain • Web Page = specific semantic object • Groupings of Web Pages = semantic concept or class • Analogous to bookmark organization • Words and HTML tags are taken to be boolean features. • Web page represented by a boolean vector. • Concepts → Concept Vectors → Learner → Semantic Concept Description (rules)
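A minimal sketch of this representation, assuming a small fixed vocabulary of words and HTML tags; the feature list and function name are illustrative, not from the paper:

```python
# Illustrative: represent a web page as a boolean vector over a fixed
# vocabulary of words and HTML tags (the vocabulary here is made up).

FEATURES = ["research", "course", "student", "<title>", "<h1>", "publications"]

def to_boolean_vector(page_tokens):
    """Map a page's tokens (words and HTML tags) to a boolean feature vector."""
    token_set = set(page_tokens)
    return [feature in token_set for feature in FEATURES]

page = ["publications", "research", "<title>", "cv"]
print(to_boolean_vector(page))  # [True, False, False, True, False, True]
```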
Ontology Learning • Supervised inductive learning • Output = Semantic Concept Descriptions (SCDs) • SCDs are rules with a left-hand side (conditions over features) and a right-hand side (the concept label) • Object instances are discriminated by the tokens they contain, sometimes resulting in "…a peculiar learned descriptor vocabulary." • Each rule carries a certainty value
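A hedged sketch of what such a rule might look like as a data structure; the field names and the certainty semantics are assumptions for illustration, not the paper's representation:

```python
# Illustrative SCD rule: IF a page contains the LHS tokens THEN it is an
# instance of the RHS concept, with a learned certainty value attached.

from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticConceptRule:
    lhs_tokens: frozenset   # tokens that must all be present (LHS)
    rhs_concept: str        # concept label the rule concludes (RHS)
    certainty: float        # certainty value learned from training data

    def matches(self, page_tokens):
        return self.lhs_tokens <= set(page_tokens)

rule = SemanticConceptRule(frozenset({"publications", "research"}),
                           "AcademicHomePage", 0.87)
print(rule.matches(["research", "publications", "cv"]))  # True
```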
Locating Similar Semantic Concepts • An agent queries another agent for a concept by showing it examples. • The second agent receives the examples and uses its own conceptualization to determine whether it knows the concept (K), might know it (M), or doesn't know it (D). • In cases K and M, the second agent sends back examples of what it thinks the queried concept is. • The first agent receives those examples and interprets them using its own conceptualization to verify that the two agents are talking about the same concept. • If verified, the querying agent records in its knowledge base that the other agent knows the concept (see the sketch after this list).
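A hedged, self-contained simulation of this exchange; the Agent class, its overlap-based evaluation, and the thresholds are all illustrative stand-ins for the paper's rule-based mechanism:

```python
# Minimal simulation of the locate-similar-concepts exchange.
# Everything here (class, thresholds, overlap measure) is illustrative.

class Agent:
    def __init__(self, name, concepts):
        self.name = name
        self.concepts = concepts      # concept label -> set of example ids
        self.knows_about = {}         # other agent name -> concepts they know

    def evaluate(self, examples):
        """Return ('K'|'M'|'D', best label, own examples) for a query."""
        best_label, best_overlap = None, 0.0
        for label, members in self.concepts.items():
            overlap = len(members & examples) / len(examples)
            if overlap > best_overlap:
                best_label, best_overlap = label, overlap
        if best_overlap >= 0.8:
            return "K", best_label, self.concepts[best_label]
        if best_overlap >= 0.4:
            return "M", best_label, self.concepts[best_label]
        return "D", None, set()

def locate(querier, responder, label):
    verdict, _their_label, returned = responder.evaluate(querier.concepts[label])
    if verdict in ("K", "M"):
        # Querier verifies by interpreting the returned examples itself.
        back, my_label, _ = querier.evaluate(returned)
        if back in ("K", "M") and my_label == label:
            querier.knows_about.setdefault(responder.name, set()).add(label)
            return True
    return False

a = Agent("A", {"AI": {1, 2, 3, 4}})
b = Agent("B", {"ArtificialIntelligence": {2, 3, 4, 5}})
print(locate(a, b, "AI"))   # True: B's concept overlaps enough to verify
print(a.knows_about)        # {'B': {'AI'}}
```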
Concept Similarity Estimation • Even when two agents know the same concept, given a large DCM it is likely that the sets of objects each agent used to define the concept differ completely. • We cannot simply assume that the target functions each agent generated by supervised inductive learning from examples will be the same. • We therefore need other ways to estimate similarity.
Concept Similarity Estimation Function • Input: a sample set of objects representing a concept in another agent • Output: Knows Concept (K), Might Know Concept (M), or Doesn't Know Concept (D) • Flow: set of objects → the agent tries mapping the set to each of its concepts using its description rules → each concept receives an interpretation value → the interpretation value is compared with thresholds to make the K, M, or D determination (sketched below). • The interpretation value for one concept is the proportion of objects in the CBQ that were inferred to be that concept. • Positive interpretation threshold = how often this concept description correctly determined an object in the training set to belong to this concept • Negative Interpretation Threshold
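A minimal sketch of the decision step, assuming the interpretation value is the fraction of queried objects that the agent's rules assign to a concept; the threshold values and the classify function are illustrative:

```python
# Illustrative K/M/D decision from interpretation values.
# classify(obj) stands in for applying the agent's concept description
# rules; the threshold values here are made up for the example.

POSITIVE_THRESHOLD = 0.7   # at/above: agent Knows the concept (K)
NEGATIVE_THRESHOLD = 0.3   # below: agent Doesn't know the concept (D)

def interpretation_value(objects, concept, classify):
    """Fraction of queried objects inferred to be `concept`."""
    hits = sum(1 for obj in objects if classify(obj) == concept)
    return hits / len(objects)

def decide(objects, concept, classify):
    value = interpretation_value(objects, concept, classify)
    if value >= POSITIVE_THRESHOLD:
        return "K"                     # knows the concept
    if value >= NEGATIVE_THRESHOLD:
        return "M"                     # might know the concept
    return "D"                         # doesn't know the concept

# Toy usage: only even-numbered objects classify as "AI" -> value 0.5 -> M.
print(decide(range(10), "AI", lambda o: "AI" if o % 2 == 0 else "other"))
```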
Group Knowledge • Individual Knowledge • Verification
Translating Semantic Concepts • Uses the same algorithm as locating similar concepts in other agents. • Two concepts determined to be the same can be translated regardless of their labels in the two ontologies. • Difference: after verification, the knowledge is stored as "Agent B knows my semantic concept X as Y."
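A small sketch of how the stored translation might look; the table structure and function names are hypothetical:

```python
# Illustrative translation store: after verification, record
# "Agent B knows my semantic concept X as Y" and look it up later.

translations = {}  # (other_agent, my_concept) -> other agent's label

def record_translation(other_agent, my_concept, their_label):
    translations[(other_agent, my_concept)] = their_label

def translate(other_agent, my_concept):
    """Return the other agent's label for my concept, if known."""
    return translations.get((other_agent, my_concept))

record_translation("B", "AI", "ArtificialIntelligence")
print(translate("B", "AI"))  # ArtificialIntelligence
```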