SemTag and Seeker: Bootstrapping the Semantic Web via Automated Semantic Annotation

Presented by: Hussain Sattuwala. Authors: Stephen Dill, Nadav Eiron, David Gibson, Daniel Gruhl, R. Guha, Anant Jhingran, Tapas Kanungo, Sridhar Rajagopalan, Andrew Tomkins, John A. Tomlin, Jason Y. Zien (IBM Almaden Research Center)



Presentation Transcript


  1. SemTag and Seeker: Bootstrapping the Semantic Web via Automated Semantic Annotation. Presented by: Hussain Sattuwala. Authors: Stephen Dill, Nadav Eiron, David Gibson, Daniel Gruhl, R. Guha, Anant Jhingran, Tapas Kanungo, Sridhar Rajagopalan, Andrew Tomkins, John A. Tomlin, Jason Y. Zien. IBM Almaden Research Center. http://www.almaden.ibm.com/webfountain/resources/semtag.pdf

  2. Outline • Motivation • Goal • SemTag • Architecture • Phases • TBD • Results • Methodology • Seeker • Design • Architecture • Environment • Conclusion • Related and Future work.

  3. Motivation • Natural language processing is the most significant obstacle in building a machine-understandable web. • For the Semantic Web to become a reality, we need: • Web services to maintain and provide metadata. • Annotated documents (OWL, RDF, XML, ...).

  4. Annotations • The current practice of annotation for knowledge identification, extraction, and other applications is complex, is time consuming, and needs annotation by experts. • Goal: reduce the burden of text annotation for Knowledge Management.

  5. Goal • To perform automated semantic tagging of large corpora. • To introduce a new disambiguation algorithm to resolve ambiguities in a natural language corpus. • To introduce a platform that different tagging applications can share. [Components: SemTag, the TBD (Taxonomy-Based Disambiguation) algorithm, Seeker.]

  6. SemTag • The goal is to automatically add semantic tags to the existing HTML body of the web. Example: “The Chicago Bulls announced yesterday that Michael Jordan will…” becomes: The <resource ref="http://tap.stanford.edu/BasketballTeam_Bulls">Chicago Bulls</resource> announced yesterday that <resource ref="http://tap.stanford.edu/AthleteJordan_Michael">Michael Jordan</resource> will…
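To make the transformation concrete, here is a minimal Python sketch (not SemTag's actual code) of the tagging output: known entity mentions are wrapped in <resource> tags pointing at TAP URIs. The ENTITIES mapping is an illustrative assumption; SemTag resolves mentions against the TAP KB.

```python
# Minimal sketch (not SemTag's actual code) of the tagging output.
TAP = "http://tap.stanford.edu/"

# Hypothetical label -> TAP URI mapping; SemTag resolves these via the TAP KB.
ENTITIES = {
    "Chicago Bulls": TAP + "BasketballTeam_Bulls",
    "Michael Jordan": TAP + "AthleteJordan_Michael",
}

def annotate(text: str) -> str:
    """Wrap each known mention in a <resource> tag pointing at its TAP URI."""
    for label, uri in ENTITIES.items():
        text = text.replace(label, f'<resource ref="{uri}">{label}</resource>')
    return text

print(annotate("The Chicago Bulls announced yesterday that Michael Jordan will..."))
```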

  7. SemTag • Uses the TAP KB • TAP is a public, broad, shallow knowledge base. • TAP contains lexical and taxonomic information about popular objects like music, movies, sports, etc. Problem: no write access to the original document, so how do you annotate? • Uses the concept of a Label Bureau from PICS (Platform for Internet Content Selection) • An HTTP server that can be queried for annotation information • A separate store of semantic annotation information
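A label bureau is an HTTP service that holds annotations separately from the pages they describe, so the original documents never need modification. The sketch below is hypothetical: the endpoint, query parameter, and JSON response shape are illustrative assumptions, not the paper's actual interface.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical label-bureau endpoint; the real service shape is not specified
# in the slides, so everything below is an illustrative assumption.
BUREAU = "http://bureau.example.org/labels"

def fetch_annotations(page_url: str) -> list:
    """Ask the bureau for the annotations of a page we have no write access to."""
    query = urlencode({"url": page_url})
    with urlopen(f"{BUREAU}?{query}") as resp:
        # e.g. [{"span": "Michael Jordan", "ref": "http://tap.stanford.edu/..."}]
        return json.loads(resp.read())
```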

  8. Example: Annotated Page

  9. SemTag Architecture [Diagram: three stages. Spotting: retrieve documents, tokenize, find contexts. Learning: determine the distribution of terms; automatic and manual training data. Tagging: disambiguate windows, add to DB.]

  10. SemTag Phases • 1. Spotting: • Retrieve documents from Seeker. • Tokenize documents. • Find contexts (10 words + label + 10 words) whose label appears in the TAP taxonomy. • 2. Learning: • Scan a representative sample to determine the distribution of terms at each internal node of the taxonomy.
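A minimal sketch of the spotting step, simplified to single-token labels (real TAP labels can be multi-word): tokenize a document and emit (label, context) pairs with 10 words of context on each side. TAXONOMY_LABELS stands in for the label set drawn from TAP.

```python
# Sketch of spotting, simplified to single-token labels (TAP labels can be
# multi-word in reality). Emits (label, context) pairs: 10 words either side.
TAXONOMY_LABELS = {"jaguar", "bulls", "jordan"}  # stand-in for TAP's label set

def spot(document: str, window: int = 10):
    tokens = document.split()
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,") in TAXONOMY_LABELS:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            yield tok, " ".join(left + [tok] + right)

for label, context in spot("The Bulls announced yesterday that Jordan will retire."):
    print(label, "->", context)
```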

  11. SemTag Phases, cont’d • 3. Tagging • Disambiguate windows (using TBD). • Add to the database. Types of ambiguity: • The same label appears at multiple locations in the TAP ontology. • Some entities have labels that occur in contexts that have no representative in the taxonomy. Training data: • Automatic metadata • Manual metadata

  12. TBD Methodology • Each node has a set of labels. • E.g., the nodes for cats, football, and cars all contain the label “Jaguar.” • Each label occurrence in the text is stored with a window of 20 words, the context. • A spot(l, c) is a label l in a context c. • Each node has an associated similarity function mapping a context to a similarity score. • Higher similarity → more likely to contain a reference.

  13. TBD - Similarity • Generate a 200k-dimensional vector corresponding to each context. • TF-IDF scheme • Each entry of the node vector is the frequency of the term occurring at that node divided by the corpus frequency of the term. • IR algorithm: cosine similarity • Vector product of the sparse spot vector and the dense node vector.
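A sketch of the similarity computation as the slide describes it: the node vector holds each term's frequency at that node divided by its corpus frequency, and Sim is the cosine of the sparse context (spot) vector with the dense node vector. All names and inputs here are stand-ins, not the paper's code.

```python
import math
from collections import Counter

def node_vector(node_term_counts, corpus_freq):
    """Each entry: frequency of the term at this node / its corpus frequency."""
    return {w: c / corpus_freq[w]
            for w, c in node_term_counts.items() if w in corpus_freq}

def cosine_sim(spot_vec, node_vec):
    """Iterate the sparse spot vector so the dot product stays cheap."""
    dot = sum(v * node_vec.get(w, 0.0) for w, v in spot_vec.items())
    ns = math.sqrt(sum(v * v for v in spot_vec.values()))
    nn = math.sqrt(sum(v * v for v in node_vec.values()))
    return dot / (ns * nn) if ns and nn else 0.0

def sim(context_words, node_term_counts, corpus_freq):
    spot_vec = dict(Counter(context_words))  # raw term frequencies in the window
    return cosine_sim(spot_vec, node_vector(node_term_counts, corpus_freq))
```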

  14. TBD - Algorithm • Some internal nodes are very popular: • Associate a measurement M_u^s of how accurate Sim is likely to be at node u. • Also M_u^a: how ambiguous the node is overall (consistency of human judgment). • The TBD algorithm returns 1 or 0 to indicate whether a particular context c is on topic for a node v. • 82% accuracy on 434 million spots.

  15. The TBD Algorithm
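The actual algorithm appears in the paper as a figure; the sketch below is only a hedged reconstruction of its decision shape from the previous slide: combine the node's Sim score with the per-node measurements (how trustworthy Sim is there, and how consistently humans judge the node) and return 1 (on topic) or 0. The thresholds and combination rule are illustrative assumptions.

```python
# Hedged reconstruction of the TBD decision; thresholds are assumptions,
# not values from the paper.
def tbd(sim_score: float, m_sim: float, m_amb: float,
        sim_threshold: float = 0.3, trust_threshold: float = 0.5) -> int:
    """Return 1 if context c is judged on topic for node v, else 0."""
    if m_sim < trust_threshold:   # Sim is unreliable at this node
        return 0
    if m_amb < trust_threshold:   # even humans judge this node inconsistently
        return 0
    return 1 if sim_score >= sim_threshold else 0
```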

  16. SemTag Results • Applied to 264 million pages. • Produced 550 million labels. • Final set of 434 million spots with 82% accuracy.

  17. SemTag Methodology 1. Lexicon generation: • Approximately 90 million total words. • 1.4 million unique words. • Most frequent 200,000 words kept. 2. Similarity functions: • Estimated the distribution of terms corresponding to the 192 most common TAP nodes to derive each f_u.
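A minimal sketch of step 1, lexicon generation, under the numbers above: count word frequencies over a corpus sample and keep the 200,000 most frequent terms as the vector dimensions. The documents are assumed to be plain strings.

```python
from collections import Counter

def build_lexicon(documents, size: int = 200_000):
    """Keep the `size` most frequent words in the sample as vector dimensions."""
    counts = Counter()
    for doc in documents:            # documents are assumed to be plain strings
        counts.update(doc.lower().split())
    return {w for w, _ in counts.most_common(size)}
```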

  18. SemTag Methodology, cont’d 3. Measurement values: • Determined based on 750 relevant human judgments. 4. Full TBD processing: • Applied to 550 million spots. 5. Evaluation: • Compared TBD results with an additional 378 human judgments.

  19. Seeker • A platform used by SemTag and other increasingly sophisticated text-analytics applications. • Provides scalable, extensible knowledge extraction from erratic resources, i.e., the noisy, unreliable content of the web.

  20. Seeker Design Goals • Composability • Modularity • Extensibility • Scalability • Robustness

  21. Seeker Architecture [Diagram: SemTag components run on top of Seeker. Modular, extensible layers: crawls of the web; storage & communication; indexing of tokens; annotators; miners; query processing; all exposed through network-level APIs for scalability and robustness.]

  22. Seeker Design • To achieve modularity and extensibility: • an SOA (service-oriented architecture) was used, in which agents communicate through a set of language-independent, network-level APIs. • To achieve scalability and robustness: • shared infrastructure components.

  23. Infrastructure Components • The Data Store • Central repository for all data storage. • Communication medium. • The Indexer • For indexing sequences of tokens. • The Joiner • Query processing component.

  24. Analysis Agents • Annotators • Perform local processing on each web page and write the results back to the store in the form of an annotation. • Miners • Perform intermediate processing. • Look at the results of spots on many pages in order to disambiguate them.
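A hedged sketch of the annotator/miner split around a toy in-memory data store (the store API here is an illustrative assumption, not Seeker's actual interface): annotators make one local pass per page and write annotations back; miners then read annotations from many pages at once for global steps such as disambiguation.

```python
# Toy in-memory stand-in for Seeker's data store; the API is an assumption.
class Store:
    def __init__(self, pages):
        self.pages = pages           # page_id -> text
        self.annotations = {}        # page_id -> list of annotations

    def write(self, page_id, ann):
        self.annotations.setdefault(page_id, []).append(ann)

def annotator(store):
    """Local, per-page processing; results go back to the store as annotations."""
    for page_id, text in store.pages.items():
        store.write(page_id, {"type": "spot", "tokens": len(text.split())})

def miner(store):
    """Intermediate processing: examine annotations across many pages at once."""
    all_spots = [a for anns in store.annotations.values() for a in anns]
    return sum(a["tokens"] for a in all_spots)  # e.g. a global statistic

store = Store({"p1": "The Bulls announced", "p2": "Jordan will retire"})
annotator(store)
print(miner(store))
```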

  25. Observation • Advantages • Other applications can obtain semantic annotations from the web-available database. • Uses both human and computer judgments to resolve ambiguous data in the TBD algorithm. • Disadvantages • The system requires a large amount of storage space. • Requires a much larger and richer KB to build a web-scale ontology.

  26. Conclusion • Automatic semantic tagging is essential to bootstrap the Semantic Web. • It’s possible to achieve good accuracy with simple disambiguation approaches.

  27. Future Work • Develop more approaches and algorithms for automated tagging. • Make the annotated data public and offer Seeker as a public service.

  28. Related Work • Systems built for the Semantic Web fall into two types: • those that create ontologies (semi-automated) • those that do page annotation. • Examples: Protégé, OntoAnnotate, Annotea, SHOE, … • Some AI approaches were used, but they need a lot of training; the principal tool is wrapping. • Others used NL-understanding techniques, e.g., ALPHA.

  29. Questions?
