DBpedia explained

DBpedia
Latest release version: DBpedia 2016-10
Latest release date: 4 July 2017
License: GNU General Public License

DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web using OpenLink Virtuoso. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets.

The project was heralded as "one of the more famous pieces" of the decentralized Linked Data effort by Tim Berners-Lee, the inventor of the World Wide Web.

Background

The project was started by researchers at the Free University of Berlin and Leipzig University[1] in collaboration with OpenLink Software, and is now maintained by researchers at the University of Mannheim and Leipzig University.[2] The first publicly available dataset was published in 2007.[1] The data is made available under free licenses (CC BY-SA), allowing others to reuse the dataset; it does not, however, use an open data license to waive the sui generis database rights.

Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables (the pull-out panels that appear in the top right of the default view of many Wikipedia articles, or at the start of the mobile versions), categorization information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.
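The extraction step described above can be pictured as turning an infobox's key/value pairs into subject–predicate–object triples. A minimal sketch in Python; the field names, values, and exact URI patterns below are illustrative assumptions, not DBpedia's actual extraction code:

```python
# Sketch: turning infobox-style key/value pairs into RDF-like triples.
# The infobox fields and URI patterns here are illustrative only.

def infobox_to_triples(page_title, infobox):
    """Emit (subject, predicate, object) triples for one article's infobox."""
    subject = "http://dbpedia.org/resource/" + page_title.replace(" ", "_")
    triples = []
    for field, value in infobox.items():
        predicate = "http://dbpedia.org/property/" + field
        triples.append((subject, predicate, value))
    return triples

# Example: a simplified, hypothetical infobox for the article "Berlin".
for triple in infobox_to_triples("Berlin", {"country": "Germany",
                                            "population": "3769495"}):
    print(triple)
```

Real extraction also handles template syntax, data types, and multilingual editions, but the uniform triple output is what makes the dataset queryable.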

Dataset

The 2016-04 release of the DBpedia data set describes 6.0 million entities, out of which 5.2 million are classified in a consistent ontology, including 1.5 million persons, 810,000 places, 135,000 music albums, 106,000 films, 20,000 video games, 275,000 organizations, 301,000 species and 5,000 diseases.[3] DBpedia uses the Resource Description Framework (RDF) to represent extracted information and consists of 9.5 billion RDF triples, of which 1.3 billion were extracted from the English edition of Wikipedia and 5.0 billion from other language editions.

From this data set, information spread across multiple pages can be combined. For example, book authorship can be assembled from the pages about the work and about its author.

One of the challenges in extracting information from Wikipedia is that the same concept can be expressed using different parameters in infoboxes and other templates; for example, different infobox parameters are used across articles to record a person's place of birth. Because of this, a query about where people were born would have to search for every such property in order to get complete results. The DBpedia Mapping Language was therefore developed to map these synonymous properties onto a single ontology property. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions.
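The idea behind such a mapping can be sketched as a lookup table from raw template parameters to one ontology property. The target property dbo:birthPlace exists in the DBpedia ontology, but the parameter names below are hypothetical examples of synonyms, and real mappings are declared in the DBpedia Mapping Language rather than in code:

```python
# Sketch: normalizing synonymous infobox parameters to one ontology property.
# The raw parameter names are hypothetical; dbo:birthPlace is a real
# DBpedia ontology property.

PROPERTY_MAP = {
    "birthplace":   "http://dbpedia.org/ontology/birthPlace",
    "placeofbirth": "http://dbpedia.org/ontology/birthPlace",
    "born_in":      "http://dbpedia.org/ontology/birthPlace",
}

def normalize(raw_property):
    """Map a raw infobox parameter to its ontology property, if known."""
    return PROPERTY_MAP.get(raw_property.lower())

print(normalize("birthplace"))
print(normalize("PlaceOfBirth"))
```

After normalization, a single query against dbo:birthPlace covers articles that used any of the synonymous parameters.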

Version 2014 was released in September 2014.[4] A main change since previous versions was the way abstract texts were extracted. Specifically, running a local mirror of Wikipedia and retrieving rendered abstracts from it made extracted texts considerably cleaner. Also, a new data set extracted from Wikimedia Commons was introduced.

As of June 2021, DBPedia contains over 850 million triples.[5]

Examples

DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across multiple Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL.

For example, suppose one is interested in the Japanese shōjo manga series Tokyo Mew Mew and wants to find the genres of other works by its illustrator, Mia Ikumi. DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, on Mia Ikumi and on her other works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list the related genres:

PREFIX dbprop: <http://dbpedia.org/property/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?WORK ?genre WHERE {
  db:Tokyo_Mew_Mew dbprop:author ?who .
  ?WORK dbprop:author ?who .
  ?WORK dbprop:genre ?genre .
}
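A query like the one above can be sent over HTTP to DBpedia's public SPARQL endpoint at https://dbpedia.org/sparql. A minimal sketch using only Python's standard library; the endpoint URL is the standard public one, but its availability and the exact result shape should be checked before relying on it:

```python
from urllib.parse import urlencode

# Sketch: building a GET request for DBpedia's public SPARQL endpoint.
# Endpoint availability is not guaranteed; result format follows the
# SPARQL 1.1 JSON results convention.

ENDPOINT = "https://dbpedia.org/sparql"

QUERY = """
PREFIX dbprop: <http://dbpedia.org/property/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?WORK ?genre WHERE {
  db:Tokyo_Mew_Mew dbprop:author ?who .
  ?WORK dbprop:author ?who .
  ?WORK dbprop:genre ?genre .
}
"""

def build_request_url(query, fmt="application/sparql-results+json"):
    """Encode the query as a GET request URL for the endpoint."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": fmt})

url = build_request_url(QUERY)
print(url[:60])
# To actually run it (requires network access):
#   import json, urllib.request
#   results = json.load(urllib.request.urlopen(url))
#   for row in results["results"]["bindings"]:
#       print(row["WORK"]["value"], row["genre"]["value"])
```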

Use cases

DBpedia has a broad scope of entities covering different areas of human knowledge. This makes it a natural hub for connecting datasets, where external datasets can link to its concepts.[6] The DBpedia dataset is interlinked at the RDF level with various other Open Data datasets on the Web, enabling applications to enrich DBpedia data with data from these datasets. There are more than 45 million interlinks between DBpedia and external datasets, including Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, the CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data. The Thomson Reuters initiative OpenCalais, the Linked Open Data project of The New York Times, the Zemanta API[7] and DBpedia Spotlight also include links to DBpedia. The BBC uses DBpedia to help organize its content. Faviki uses DBpedia for semantic tagging. Samsung also includes DBpedia in its "Knowledge Sharing Platform".
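Interlinks of this kind are typically expressed as owl:sameAs triples, which let an application merge facts recorded about one real-world entity in several datasets. A small self-contained sketch; the triples below are a toy in-memory graph with illustrative values, not actual DBpedia data:

```python
# Sketch: following owl:sameAs interlinks to merge facts about one entity
# from several datasets. The triples and values here are illustrative.

triples = [
    ("http://dbpedia.org/resource/Berlin", "owl:sameAs",
     "http://sws.geonames.org/2950159/"),
    ("http://dbpedia.org/resource/Berlin", "dbo:country",
     "http://dbpedia.org/resource/Germany"),
    ("http://sws.geonames.org/2950159/", "geo:population", "3769495"),
]

def facts_about(uri, triples):
    """Collect facts about a URI and everything declared sameAs it."""
    same = {uri}
    changed = True
    while changed:  # follow sameAs links until no new aliases appear
        changed = False
        for s, p, o in triples:
            if p == "owl:sameAs" and s in same and o not in same:
                same.add(o)
                changed = True
    return [(s, p, o) for s, p, o in triples if s in same and p != "owl:sameAs"]

for fact in facts_about("http://dbpedia.org/resource/Berlin", triples):
    print(fact)
```

Here the population fact attached to the GeoNames URI becomes reachable from the DBpedia URI, which is exactly the enrichment the interlinks enable.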

Such a rich source of structured cross-domain knowledge is fertile ground for artificial intelligence systems. DBpedia was used as one of the knowledge sources in IBM Watson's Jeopardy!-winning system.[8]

Amazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications.

Data about creators from DBpedia can be used to enrich observations of artwork sales.[9]

The crowdsourcing software company Ushahidi built a prototype of its software that leveraged DBpedia to perform semantic annotations on citizen-generated reports. The prototype incorporated the "YODIE" (Yet another Open Data Information Extraction system) service[10] developed by the University of Sheffield, which uses DBpedia to perform the annotations. The goal for Ushahidi was to improve the speed and ease with which incoming reports could be validated and managed.[11]

DBpedia Spotlight

DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text. This allows linking unstructured information sources to the Linked Open Data cloud through DBpedia. DBpedia Spotlight performs named entity extraction, including entity detection and name resolution (in other words, disambiguation). It can also be used for named entity recognition and other information extraction tasks. DBpedia Spotlight aims to be customizable for many use cases: instead of focusing on a few entity types, the project strives to support the annotation of all 3.5 million entities and concepts from more than 320 classes in DBpedia. The project started in June 2010 at the Web Based Systems Group at the Free University of Berlin.

DBpedia Spotlight is publicly available as a web service for testing and a Java/Scala API licensed via the Apache License. The DBpedia Spotlight distribution includes a jQuery plugin that allows developers to annotate pages anywhere on the Web by adding one line to their page.[12] Clients are also available in Java or PHP.[13] The tool handles various languages through its demo page[14] and web services. Internationalization is supported for any language that has a Wikipedia edition.[15]
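The web service mentioned above can be invoked over plain HTTP. A minimal sketch using only Python's standard library; the endpoint URL and parameter names follow Spotlight's public REST interface, but both should be verified against the current documentation before use:

```python
from urllib.parse import urlencode

# Sketch: building a request for the DBpedia Spotlight annotation service.
# Endpoint and parameter names are assumed from Spotlight's public REST
# interface; check the current docs before relying on them.

SPOTLIGHT = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate_url(text, confidence=0.5):
    """Build a GET URL asking Spotlight to annotate `text` with DBpedia URIs."""
    return SPOTLIGHT + "?" + urlencode({"text": text, "confidence": confidence})

url = annotate_url("Berlin is the capital of Germany.")
print(url[:45])
# Send with an Accept: application/json header, e.g. via
# urllib.request.urlopen(urllib.request.Request(url,
#     headers={"Accept": "application/json"}))
```

The `confidence` parameter trades annotation coverage against precision; the response lists surface forms in the text together with the DBpedia resources they were disambiguated to.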

Archivo ontology database

Since 2020, the DBpedia project has provided Archivo, a regularly updated database of web-accessible ontologies written in the OWL ontology language. Archivo also rates the ontologies it scrapes on a four-star scheme based on accessibility, quality, and related fitness-for-use criteria; for instance, SHACL compliance is evaluated for graph-based data where appropriate. Ontologies should also contain metadata about their characteristics and specify a public license describing their terms of use. The Archivo database contains 1,368 entries.

History

DBpedia was initiated in 2007 by Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak and Zachary Ives.[1]

Notes and References

  1. "DBpedia: A Nucleus for a Web of Open Data". Available at https://link.springer.com/chapter/10.1007/978-3-540-76298-0_52, https://dl.acm.org/doi/10.5555/1785162.1785216, or https://www.cis.upenn.edu/~zives/research/dbpedia.pdf
  2. "Home". Retrieved March 2024.
  3. "YEAH! We did it again ;) – New 2016-04 DBpedia release". DBpedia, 19 October 2016. Retrieved 9 January 2019.
  4. "Changelog". DBpedia, September 2014. Retrieved 9 September 2014.
  5. Holze, Julia. "Announcement: DBpedia Snapshot 2021-06 Release". DBpedia Association, 23 July 2021. Retrieved 28 July 2021.
  6. Curry, E.; Freitas, A.; O'Riáin, S. "The Role of Community-Driven Data Curation for Enterprises". In D. Wood (ed.), Linking Enterprise Data. Boston, MA: Springer US, 2010, pp. 25–47.
  7. "Zemanta API". dev.zemanta.com. Retrieved 26 July 2021.
  8. Ferrucci, David; Brown, Eric; Chu-Carroll, Jennifer; Fan, James; Gondek, David; Kalyanpur, Aditya A.; Lally, Adam; Murdock, J. William; Nyberg, Eric; Prager, John; Schlaefer, Nico; Welty, Chris. "Building Watson: An Overview of the DeepQA Project". AI Magazine, Fall 2010. Association for the Advancement of Artificial Intelligence (AAAI).
  9. Filipiak, Dominik; Filipowska, Agata. "DBpedia in the Art Market". Business Information Systems Workshops, Lecture Notes in Business Information Processing 228, pp. 321–331, 2 December 2015. doi:10.1007/978-3-319-26762-3_28. ISBN 978-3-319-26761-6.
  10. "applications/yodie". GATE.ac.uk. Retrieved 11 May 2020.
  11. "ushahidi/platform-comrades". GitHub, 30 June 2019. Retrieved 9 March 2020.
  12. Mendes, Pablo. "DBpedia Spotlight jQuery Plugin". jQuery Plugins, 3 April 2011. Retrieved 15 September 2011. Archived at https://web.archive.org/web/20110403064338/http://plugins.jquery.com/project/dbpedia-spotlight
  13. DiCiuccio, Rob. "PHP Client for DBpedia Spotlight". GitHub. Retrieved 25 September 2016.
  14. "Demo of DBpedia Spotlight". Retrieved 8 September 2013.
  15. "Internationalization of DBpedia Spotlight". GitHub. Retrieved 8 September 2013.