
Presentation Transcript


  1. Semantic Web Applications Lecture XIV Dieter Fensel

  2. Today’s lecture

  3. Today’s lecture • Applications for data integration (Piggy Bank, NEPOMUK) • Applications for knowledge management (SWAML) • Applications for Semantic Indexing and Semantic Portals (Watson) • Applications for meta-data annotation and enrichment and semantic content management (DBpedia) • Applications for description, discovery and selection (SearchMonkey)

  4. Applications for Data Integration

  5. Applications for Data Integration • One of the main advantages of semantic technology is the interoperability of the information used • This typically involves many different data sources • Applications for data integration allow cross-source queries and a merged view of the information from the different sources (see the sketch below) • Example applications: • Piggy Bank • NEPOMUK, the Social Semantic Desktop
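The cross-source query idea can be illustrated with a minimal Python/rdflib sketch (not part of the original slides); the two Turtle snippets, the example.org URIs and the ex:authorOf property are invented purely for illustration:

    # Merge two small RDF sources and run one query across the merged graph.
    from rdflib import Graph

    # Hypothetical data from two different sources describing the same resource.
    SOURCE_A = """
    @prefix ex:   <http://example.org/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    ex:alice foaf:name "Alice" .
    """
    SOURCE_B = """
    @prefix ex: <http://example.org/> .
    @prefix dc: <http://purl.org/dc/elements/1.1/> .
    ex:alice ex:authorOf ex:report42 .
    ex:report42 dc:title "Quarterly Report" .
    """

    merged = Graph()
    merged.parse(data=SOURCE_A, format="turtle")
    merged.parse(data=SOURCE_B, format="turtle")   # merged view of both sources

    # Cross-source query: the name comes from source A, the title from source B.
    QUERY = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX dc:   <http://purl.org/dc/elements/1.1/>
    PREFIX ex:   <http://example.org/>
    SELECT ?name ?title WHERE {
        ?person foaf:name ?name ;
                ex:authorOf ?doc .
        ?doc dc:title ?title .
    }
    """
    for name, title in merged.query(QUERY):
        print(f"{name} wrote {title}")

Because both snippets use the same URI for ex:alice, the merged graph can answer a question neither source could answer on its own.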

  6. Piggy Bank - What is it? • Firefox extension • Transforms the browser into a mashup platform • Allows users to search and exchange the collected information • Developed as part of the SIMILE project • Current version: 3.1 *) *) Source: http://simile.mit.edu/wiki/Piggy_Bank

  7. Piggy Bank – How does it work? • Piggy Bank uses RDF • If a Web page links to RDF, information is simply retrieved • Otherwise, information is extracted from the raw content • RDF information is stored locally • Information can now be searched, tagged, browsed, etc.
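A minimal Python/rdflib sketch of the same retrieve-and-store pattern (this is not Piggy Bank's actual code; the URL and file names are placeholders):

    # Retrieve RDF linked from a page and keep it in a local store (illustrative).
    from rdflib import Graph

    local_store = Graph()

    # If a page links to RDF (e.g. <link rel="alternate" type="application/rdf+xml">),
    # the linked document can simply be fetched and parsed.
    # The URL below is a placeholder, not a real Piggy Bank source.
    local_store.parse("http://example.org/page-metadata.rdf")

    # Persist the collected triples locally so they can later be searched,
    # tagged and browsed.
    local_store.serialize(destination="collected.rdf", format="xml")
    print(len(local_store), "triples collected")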

  8. Piggy Bank – Features at a glance • Collect data (different plugins, so-called screen scrapers, are available for information retrieval) • Save data for further use • Tag data to add additional information for more efficient use • Browse and search through the stored information • Share the collected data by publishing it to a Semantic Bank server

  9. Piggy Bank – Architecture overview • Firefox 2.0 as the application platform • Chrome additions, e.g. menu commands, toolbars, etc. • XPCOM components bridging the chrome part and the Java part • Java backend for managing the collected information

  10. NEPOMUK – What is it? • NEPOMUK, the Social Semantic Desktop • NEPOMUK is an acronym for Networked Environment for Personal Ontology-based Management of Unified Knowledge • It is a set of methods, tools and data structures to extend the personal computer into a collaborative environment for personal information management and knowledge exchange *) *) Source: http://nepomuk.semanticdesktop.org/xwiki/bin/view/Main1/

  11. NEPOMUK - Aspects • Desktop Aspect – tools for annotating and linking information on the local desktop • Social Aspect – tools for building social relations and exchanging knowledge • Community Uptake – building a community around the Social Semantic Desktop in order to exploit its full potential

  12. NEPOMUK – Projects on Top • SemanticDesktop.org (developer and user community on the topic of the "Social Semantic Desktop") • NEPOMUK KDE (creating a semantic KDE environment) • NEPOMUK Eclipse (enabling a semantic P2P Eclipse workbench) • NEPOMUK Mozilla (annotating Web data and emails)

  13. NEPOMUK – Ontologies used (excerpt) • NAO – NEPOMUK Annotation Ontology, for annotating resources • NIE – NEPOMUK Information Element, a set of ontologies for describing information elements • NFO – NEPOMUK File Ontology, for describing files and other desktop resources • NCO – NEPOMUK Contact Ontology, for describing contact information • NMO – NEPOMUK Message Ontology, for describing emails and instant messages • PIMO – Personal Information Model Ontology, for describing personal information
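As a rough illustration of how these vocabularies fit together, the Python/rdflib sketch below tags a desktop file with NFO and NAO terms; the namespace URIs and property names are quoted from the published NEPOMUK ontologies to the best of recollection and should be treated as illustrative rather than normative:

    # Describe and tag a desktop file with NFO/NAO-style terms (illustrative).
    from rdflib import Graph, Namespace, URIRef, Literal, RDF

    # Namespace URIs as published by the NEPOMUK project (treat as illustrative).
    NFO = Namespace("http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#")
    NAO = Namespace("http://www.semanticdesktop.org/ontologies/2007/08/15/nao#")

    g = Graph()
    doc = URIRef("file:///home/alice/thesis.pdf")   # placeholder desktop resource
    tag = URIRef("urn:tag:semantic-web")            # placeholder tag URI

    g.add((doc, RDF.type, NFO.FileDataObject))      # the file is a desktop resource
    g.add((doc, NFO.fileName, Literal("thesis.pdf")))
    g.add((tag, RDF.type, NAO.Tag))                 # an annotation tag ...
    g.add((tag, NAO.prefLabel, Literal("Semantic Web")))
    g.add((doc, NAO.hasTag, tag))                   # ... attached to the file

    print(g.serialize(format="turtle"))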

  14. Applications for Knowledge Management

  15. Applications for Knowledge Management • Simply storing or organizing information is not enough to turn information into knowledge • Knowledge is applied information • Unless people are able to apply information to a task, that information is useless • Knowledge is frequently collective • Example application: SWAML

  16. SWAML – What is it? • Mailing lists store vast knowledge capital • Major drawbacks: hard to query, unstructured, difficult to work with • SWAML therefore generates RDF from mailing list archives • Developed by the CTIC Foundation and the WESO research group at the University of Oviedo • Current version: 0.1.0

  17. SWAML – How does it work? • mbox files as data source • The SWAML core produces RDF data; the SIOC ontology is used • Enrichment of the stored data with FOAF, using Sindice (a Semantic Web index) as source of information • Access and use the stored semantic data via the Buxon browser

  18. SWAML – The SIOC Ontology • SIOC is an acronym for Semantically-Interlinked Online Communities • Main objectives: • structure information from community-based sites • link information across community-based sites • Consists of several classes and properties to describe community sites (weblogs, message boards, etc.) *) *) Source: http://rdfs.org/sioc/spec/
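A minimal sketch of the mbox-to-SIOC conversion described on the two previous slides, written in Python with rdflib (this is not SWAML's actual code; the mbox path, the example.org URIs and the choice of properties are simplifying assumptions):

    # Convert a mailing-list mbox archive into SIOC-flavoured RDF (illustrative).
    import mailbox
    from rdflib import Graph, Namespace, URIRef, Literal, RDF
    from rdflib.namespace import DCTERMS

    SIOC = Namespace("http://rdfs.org/sioc/ns#")
    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    g = Graph()
    forum = URIRef("http://example.org/lists/semantic-web")   # placeholder list URI
    g.add((forum, RDF.type, SIOC.Forum))

    # "archive.mbox" is a placeholder path to a downloaded mailing-list archive.
    for i, msg in enumerate(mailbox.mbox("archive.mbox")):
        post = URIRef(f"http://example.org/lists/semantic-web/post/{i}")
        author = URIRef(f"http://example.org/lists/semantic-web/sender/{i}")
        g.add((post, RDF.type, SIOC.Post))
        g.add((post, SIOC.has_container, forum))
        g.add((post, DCTERMS.title, Literal(msg["subject"] or "(no subject)")))
        g.add((post, SIOC.has_creator, author))
        g.add((author, RDF.type, SIOC.UserAccount))
        g.add((author, FOAF.name, Literal(msg["from"] or "unknown")))

    g.serialize(destination="list.rdf", format="xml")

SWAML additionally enriches the authors with FOAF data found through Sindice; that step is omitted here.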

  19. Applications for Semantic Indexing and Semantic Portals

  20. Applications for Semantic Indexing and Semantic Portals • The Web already offers topic-specific portals and generic structured directories like Yahoo! or DMOZ • With semantic technologies such portals could: • use deeper categorization based on ontologies • integrate indexed sources from many locations and communities • provide differently structured views on the underlying information • Example application: Watson

  21. Watson – What is it? • Watson is a gateway for the Semantic Web • Provides an efficient access point to online ontologies and semantic data • Is developed at the Knowledge Media Institute of the Open University in Milton Keynes, UK *) *) Source: http://watson.kmi.open.ac.uk/Overview.html

  22. Watson – How does it work? • Watson collects available semantic content on the Web • Analyzes it to extract useful metadata and indexes it • Implements efficient query facilities to access the data *) *) Source: http://watson.kmi.open.ac.uk/Overview.html

  23. Watson – Features at a Glance • Attempts to provide high-quality semantic data by ranking the available data • Efficient exploration of implicit and explicit relations between ontologies • Selects only relevant ontology modules by extracting them from the whole ontology • Different interfaces for querying and navigation, as well as different levels of formalization

  24. Watson – An example • Screenshot: searching for "movie" and "director", with the resulting ontologies listed

  25. Applications for meta-data annotation and enrichment and semantic content management

  26. Applications for meta-data annotation and enrichment and semantic content management • Applications that focus on adding, generating and managing meta-data for existing information • Often collaborative applications such as wikis with semantic capabilities • Example applications: Semantic MediaWiki, DBpedia

  27. DBpedia – What is it? • Approach to extract structured information from Wikipedia • Huge knowledge database consisting of more than 274 million RDF triples • Allows advanced queries against the stored information • Is maintained by Freie Universität Berlin and Universität Leipzig *) *) Source: http://wiki.dbpedia.org/About

  28. DBpedia – How does it work? • Wikipedia contains structured information like infoboxes, categorizations, etc. • DBpedia extracts these kinds of structured information and transforms them into RDF statements; this is done by the DBpedia Information Extraction Framework • Provides a SPARQL endpoint to access and query the data

  29. The DBpedia Ontology • The DBpedia Ontology is used to extract data from infoboxes • Consists of more than 170 classes and 940 properties • Manual mappings from infoboxes to the ontology define fine-grained rules for parsing infobox values • Does not cover all Wikipedia infoboxes and infobox properties

  30. DBpedia – A query example • A SPARQL query that finds people who were born in Innsbruck before 1900 (see the sketch below) • Such a search is virtually impossible with a regular keyword search mechanism
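A sketch of such a query in Python with SPARQLWrapper against the public DBpedia endpoint; the property names (dbo:birthPlace, dbo:birthDate) follow the current DBpedia ontology and are assumptions here, since the slide itself does not show the query text:

    # People born in Innsbruck before 1900, via DBpedia's public SPARQL endpoint.
    # Property names follow the current DBpedia ontology (an assumption here).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX dbr:  <http://dbpedia.org/resource/>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

        SELECT ?name ?birth WHERE {
            ?person dbo:birthPlace dbr:Innsbruck ;
                    dbo:birthDate  ?birth ;
                    foaf:name      ?name .
            FILTER (?birth < "1900-01-01"^^xsd:date)
        }
        ORDER BY ?birth
        LIMIT 20
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["birth"]["value"], row["name"]["value"])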

  31. Applications for description, discovery and selection

  32. Applications for description, discovery and selection • A category of applications that are closely related to semantic indexing and knowledge management • Applications mainly aimed at helping users locate a resource, product or service meeting their needs • Example application: SearchMonkey

  33. SearchMonkey – What is it? • SearchMonkey is a framework for creating small applications that enhance Yahoo! Search results • Additional data, structure, images and links may be added to search results • Yahoo! provides the meta-data *) *) Source: http://developer.yahoo.com/searchmonkey/smguide/index.html

  34. SearchMonkey – An example application • IMDB Infobar • Enhances searches for imdb.com/name and imdb.com/title pages • Adds information about the searched movie and links to the search result • May be added individually to enhance one's own search results

  35. SearchMonkey – How does it work? • Applications use two types of data services: custom ones and ones provided by Yahoo! • Yahoo! data services include: • indexed Web data • indexed Semantic Web data • cached 3rd-party data feeds • Custom data services provide additional, individual data • The SearchMonkey application processes the provided data and presents it *) *) Source: http://developer.yahoo.com/searchmonkey/smguide/data.html

  36. SearchMonkey – Ontologies used • Common vocabularies used: Friend of a Friend (foaf), Dublin Core (dc), VCard (vcard), VCalendar (vcal), etc. • SearchMonkey specific: • searchmonkey-action.owl: for performing actions, e.g. comparing prices of items • searchmonkey-commerce.owl: for displaying various information collected about businesses • searchmonkey-feed.owl: for displaying information from a feed • searchmonkey-job.owl: for displaying information found in job descriptions or recruitment postings • searchmonkey-media.owl: for displaying information about different media types • searchmonkey-product.owl: for displaying information about products or manufacturers • searchmonkey-resume.owl: for displaying information from a CV • SearchMonkey does not support reasoning over OWL data

  37. References • http://www.w3.org/2001/sw/Europe/reports/chosen_demos_rationale_report/hp-applications-selection.html • http://dbpedia.org/About • http://watson.kmi.open.ac.uk/Overview.html • http://semanticweb.org/wiki/Main_Page • http://simile.mit.edu/wiki/Piggy_Bank • http://swaml.berlios.de/ • http://developer.berlios.de/projects/swaml/ • http://rdfs.org/sioc/spec/ • http://developer.yahoo.com/searchmonkey/

  38. Next Lecture

  39. Questions? Lecture XIV Dieter Fensel
