-
A Case Study of Linked Enterprise Data
,
Bo Hu and Glenn Svensson
,
129-144
,
[OpenAccess]
,
[Publisher]
Even though their adoption in the enterprise environment lags behind that in the public domain, semantic (web) technologies, and more recently the linked data initiative, have started to penetrate the business domain, with more and more people recognising the benefits of such technologies. An evident advantage of leveraging semantic technologies is the integration of distributed data sets, which can return great value to companies. Enterprise data, however, present significantly different characteristics from public data on the Internet. These differences are evident from both technical and managerial perspectives. This paper reports a pilot study, carried out in an international organisation, aiming to provide a collaborative workspace for fast and low-overhead data sharing and integration. We believe that the design considerations, study outcomes, and lessons learnt can help in deciding whether and how one should adopt semantic technologies in similar contexts.
-
Enterprise Data Classification using Semantic Web Technologies
,
David Ben-David, Tamar Domany and Abigail Tarem
,
66-81
,
[OpenAccess]
,
[Publisher]
Organizations today collect and store large amounts of data in various formats and locations. However, they are sometimes required to locate all instances of a certain type of data. Good data classification allows marking enterprise data in a way that enables quick and efficient retrieval of information when needed. We introduce a generic, automatic classification method that exploits Semantic Web technologies to assist in several phases of the classification process: defining the classification requirements, performing the classification, and representing the results. Using Semantic Web technologies enables flexible and extensible configuration, centralized management and uniform results. This approach creates general and maintainable classifications, and enables applying semantic queries, rule languages and inference to the results.
-
Experience of Using OWL Ontologies for Automated Inference of Routine Pre-Operative Screening Tests
,
Matt-Mouley Bouamrane, Alan L. Rector and Martin Hurrell
,
50-65
,
[OpenAccess]
,
[Publisher]
We describe our experience of designing and implementing a knowledge-based pre-operative assessment decision support system. We developed the system using semantic web technology, including modular ontologies developed in the OWL Web Ontology Language, the OWL Java Application Programming Interface and an automated logic reasoner. Using ontologies at the core of the system's architecture makes it possible to efficiently manage a vast repository of pre-operative assessment domain knowledge, including classification of surgical procedures, classification of morbidities, and guidelines for routine pre-operative screening tests. Logical inference on the domain knowledge, according to an individual patient's medical context (medical history combined with planned surgical procedure), makes it possible to generate personalised patient reports, consisting of a risk assessment and clinical recommendations, including relevant pre-operative screening tests.
-
I18n of Semantic Web Applications
,
Sören Auer, Matthias Weidl, Jens Lehmann, Amrapali Zaveri and Key-Sun Choi
,
1-16
,
[OpenAccess]
,
[Publisher]
Recently, the use of semantic technologies has gained considerable traction. With the increased use of these technologies, their maturation not only in terms of performance and robustness but also with regard to support for non-Latin-based languages and regional differences is of paramount importance. In this paper, we provide a comprehensive review of the current state of the internationalization (I18n) of Semantic Web technologies. Since resource identifiers play a crucial role for the Semantic Web, the internationalization of resource identifiers is of high importance. It turns out that the prevalent resource identification mechanism on the Semantic Web, i.e. URIs, is not sufficient for an efficient internationalization of knowledge bases. Fortunately, with IRIs a standard for international resource identifiers is available, but its support needs much more penetration and homogenization in the various semantic web technology stacks. In addition, we review various RDF serializations with regard to their support for internationalized knowledge bases. The paper also contains an in-depth review of popular semantic web tools and APIs with regard to their support for internationalization.
-
ISReal: An Open Platform for Semantic-Based 3D Simulations in the 3D Internet
,
Patrick Kapahnke, Pascal Liedtke, Stefan Nesbigall, Stefan Warwas and Matthias Klusch
,
161-176
,
[OpenAccess]
,
[Publisher]
We present the first open and cross-disciplinary 3D Internet research platform, called ISReal, for intelligent 3D simulation of realities. Its core innovation is the comprehensively integrated application of semantic Web technologies, semantic services, intelligent agents, verification and 3D graphics for this purpose. In this paper, we focus on the interplay between its components for semantic XML3D scene query processing and semantic 3D animation service handling, as well as the semantic-based perception and action planning with coupled semantic service composition by agent-controlled avatars in a virtual world. We demonstrate the use of the implemented platform for semantic-based 3D simulations in a small virtual world example with an intelligent user avatar and discuss results of the platform performance evaluation.
-
Knowledge Engineering for Historians on the Example of the Catalogus Professorum Lipsiensis
,
Thomas Riechert, Ulf Morgenstern, Sören Auer, Sebastian Tramp and Michael Martin
,
225-240
,
[OpenAccess]
,
[Publisher]
Although the Internet, as a ubiquitous medium for communication, publication and research, has already significantly influenced the way historians work, the capabilities of the Web as a direct medium for collaboration in historical research have not been much explored. We report on the application of an adaptive, semantics-based knowledge engineering approach for the development of a prosopographical knowledge base on the Web - the Catalogus Professorum Lipsiensis. In order to enable historians to collect, structure and publish prosopographical knowledge, an ontology was developed and knowledge engineering facilities based on the semantic data wiki OntoWiki were implemented. The resulting knowledge base contains information about more than 14,000 entities and is tightly interlinked with the emerging Web of Data. For access and exploration by other historians, a number of access interfaces were developed, such as a visual SPARQL query builder, a relationship finder and a Linked Data interface. The approach is transferable to other prosopographical research projects and historical research in general, thus improving collaboration in historical research communities and facilitating the reusability of historical research results.
-
Linkage of Heterogeneous Knowledge Resources within In-store Dialogue Interaction
,
Sabine Janzen, Tobias Kowatsch, Wolfgang Maass and Andreas Filler
,
145-160
,
[OpenAccess]
,
[Publisher]
Dialogue interaction between customers and products improves the presentation of relevant product information in in-store shopping situations. Thus, the information needs of customers can be addressed more intuitively. In this article, we describe how access to product information can be improved through the dynamic linkage of heterogeneous knowledge representations. We therefore introduce a conceptual model of dialogue interaction based on multiple knowledge resources for in-store shopping situations and empirically test its utility with end-users.
-
Mapping Master: a Flexible Approach for Mapping Spreadsheets to OWL
,
Martin J. O'Connor, Christian Halaschek-Wiener and Mark A. Musen
,
194-208
,
[OpenAccess]
,
[Publisher]
We describe a mapping language for converting data contained in spreadsheets into the Web Ontology Language (OWL). The developed language, called M2, overcomes shortcomings of existing mapping techniques, including their restriction to well-formed spreadsheets reminiscent of a single relational database table and their verbose syntax for expressing mapping rules when transforming spreadsheet contents into OWL. The M2 language provides expressive, yet concise mechanisms to create both individual and class axioms when generating OWL ontologies. We additionally present an implementation of the mapping approach, Mapping Master, which is available as a plug-in for the Protégé ontology editor.
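The core idea of the abstract above - turning each spreadsheet row into OWL-style individual and property assertions - can be illustrated with a small sketch. The M2 rule syntax itself is not reproduced here; the column names, class name and generated property names below are hypothetical stand-ins.

```python
# Hypothetical sketch: map spreadsheet rows to OWL-style triples.
# One rdf:type assertion per row, plus one property assertion per cell.

def map_rows_to_triples(rows, header, subject_col, class_name):
    """Turn each row into an individual of `class_name` with one
    property assertion per non-subject, non-empty cell."""
    triples = []
    s_idx = header.index(subject_col)
    for row in rows:
        subject = row[s_idx].replace(" ", "_")
        triples.append((subject, "rdf:type", class_name))
        for col, value in zip(header, row):
            if col != subject_col and value:
                # derive a property name from the column header
                triples.append((subject, "has" + col.capitalize(), value))
    return triples

header = ["name", "country", "founded"]
rows = [["Stanford", "USA", "1885"],
        ["Leipzig", "Germany", "1409"]]
triples = map_rows_to_triples(rows, header, "name", "University")
for t in triples:
    print(t)
```

A real mapping language additionally has to cope with merged cells, ranges and class-level axioms, which is precisely the ground M2 aims to cover declaratively.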
-
ORE - A Tool for Repairing and Enriching Knowledge Bases
,
Jens Lehmann and Lorenz Bühmann
,
177-193
,
[OpenAccess]
,
[Publisher]
While the number and size of Semantic Web knowledge bases increase, their maintenance and quality assurance are still difficult. In this article, we present ORE, a tool for repairing and enriching OWL ontologies. State-of-the-art methods in ontology debugging and supervised machine learning form the basis of ORE and are adapted or extended so as to work well in practice. ORE supports the detection of a variety of ontology modelling problems and guides the user through the process of resolving them. Furthermore, the tool allows the user to extend an ontology through (semi-)automatic supervised learning. A wizard-like process helps the user to resolve potential issues after axioms are added.
-
Semantic MediaWiki in Operation: Experiences with Building a Semantic Portal
,
Daniel M. Herzig and Basil Ell
,
114-128
,
[OpenAccess]
,
[Publisher]
Wikis allow users to collaboratively create and maintain content. Semantic wikis, which provide the additional means to annotate the content semantically and thereby allow users to structure it, are experiencing an enormous increase in popularity, because structured data is more usable and thus more valuable than unstructured data. As an illustration of leveraging the advantages of semantic wikis for semantic portals, we report on the experience of building the AIFB portal based on Semantic MediaWiki. We discuss the design, in particular how free, wiki-style semantic annotations and guided input along a predefined schema can be combined to create a flexible, extensible, and structured knowledge representation. How this structured data evolved over time and its flexibility regarding changes are subsequently discussed and illustrated by statistics based on actual operational data of the portal. Further, the features exploiting the structured data and the benefits they provide are presented. Since all benefits have their costs, we conducted a performance study of Semantic MediaWiki and compare it to MediaWiki, the non-semantic base platform. Finally, we show how existing caching techniques can be applied to increase performance.
-
Semantic Techniques for Enabling Knowledge Reuse in Conceptual Modelling
,
Jorge Gracia del Río, Jochem Liem, Esther Lozano, Óscar Corcho, Michal Trna, Asunción Gómez-Pérez and Bert Bredeweg
,
82-97
,
[OpenAccess]
,
[Publisher]
Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity to incorporate knowledge from other existing models or ontologies that might enrich the modelling process. We propose to apply Semantic Web techniques to the context of conceptual modelling (more particularly to the domain of qualitative reasoning) to smoothly interconnect conceptual models created by different users, thus facilitating the global sharing of scientific data contained in such models and creating new learning opportunities for people who start modelling. This paper describes how semantic grounding techniques can be used during the creation of qualitative reasoning models to bridge the gap between imprecise user terminology and a well-defined external common vocabulary. We also explore the application of ontology matching techniques between models, which can provide valuable feedback during the model construction process.
-
Semantic Technologies for Enterprise Cloud Management
,
Peter Haase, Tobias Mathäß, Michael Schmidt, Andreas Eberhart and Ulrich Walther
,
98-113
,
[OpenAccess]
,
[Publisher]
Enterprise clouds apply the paradigm of cloud computing to enterprise IT infrastructures, with the goal of providing easy, flexible, and scalable access to both computing resources and IT services. Realizing the vision of the fully automated enterprise cloud involves addressing a range of technological challenges. In this paper, we focus on the challenges related to intelligent information management in enterprise clouds and discuss how semantic technologies can help to address them. In particular, we address the topics of data integration, collaborative documentation and annotation, and intelligent information access and analytics, and present solutions that are implemented in the newest addition to our eCloudManager product suite: the Intelligence Edition.
-
Social dynamics in conferences: analyses of data from the Live Social Semantics application
,
Alain Barrat, Ciro Cattuto, Martin Szomszor, Wouter Van den Broeck and Harith Alani
,
17-33
,
[OpenAccess]
,
[Publisher]
The popularity and spread of online social networking in recent years have given great momentum to the study of the dynamics and patterns of social interactions. However, these studies have often been confined to the online world, neglecting its interdependencies with the offline world. This is mainly due to the lack of real data that spans this divide. The Live Social Semantics application is a novel platform that dissolves this divide by collecting and integrating data about people from (a) their online social networks and tagging activities on popular social networking sites, (b) their publications and co-authorship networks from semantic repositories, and (c) their real-world face-to-face contacts with other attendees, collected via a network of wearable active sensors. This paper investigates the data collected by this application during its deployment at three major conferences, where it was used by more than 400 people. Our analyses show the robustness of the patterns of contacts at various conferences, and the influence of various personal properties (e.g. seniority, conference attendance) on social networking patterns.
-
Time-Oriented Question Answering from Clinical Narratives Using Semantic-Web Techniques
,
Cui Tao, Harold R. Solbrig, Deepak K. Sharma, Wei-Qi Wei, Guergana K. Savova and Christopher G. Chute
,
241-256
,
[OpenAccess]
,
[Publisher]
The ability to answer time-oriented questions based on clinical narratives is essential to clinical research. The temporal dimension in medical data analysis enables clinical research in many areas, such as disease progression, individualized treatment, and decision support. The Semantic Web provides a suitable environment in which to represent the temporal dimension of clinical data and reason about it. In this paper, we introduce a Semantic-Web-based framework, which provides an API for querying temporal information from clinical narratives. The framework is centered on an OWL ontology called CNTRO (Clinical Narrative Temporal Relation Ontology) and contains three major components: a time normalizer, a SWRL-based reasoner, and an OWL-DL-based reasoner. We also discuss how we adopted these three components in the clinical domain, their limitations, as well as extensions that we found necessary or desirable to achieve the purpose of querying time-oriented data from real-world clinical narratives.
-
Using SPARQL to Test for Lattices: application to quality assurance in biomedical ontologies
,
Guo-Qiang Zhang and Olivier Bodenreider
,
273-288
,
[OpenAccess]
,
[Publisher]
We present a scalable, SPARQL-based computational pipeline for testing the lattice-theoretic properties of partial orders represented as RDF triples. The use case for this work is quality assurance in biomedical ontologies, one desirable property of which is conformance to lattice structures. At the core of our pipeline is an algorithm called NuMi, for detecting the Number of Minimal upper bounds of any pair of elements in a given finite partial order. Our technical contribution is the coding of NuMi completely in SPARQL. To show its scalability, we applied NuMi to the entirety of SNOMED CT, the largest clinical ontology (over 300,000 concepts). Our experimental results have been groundbreaking: for the first time, all non-lattice pairs in SNOMED CT have been identified exhaustively from 34 million candidate pairs using over 2.5 billion queries issued to Virtuoso. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. These non-lattice pairs represent target areas for focused curation by domain experts. RDF, SPARQL and related tooling provide an efficient platform for implementing lattice algorithms on large data structures.
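The lattice test described above - a pair of concepts violates the lattice property when it has more than one minimal common ancestor (minimal upper bound) - can be illustrated in plain Python. The paper performs this computation in SPARQL over RDF triples; the toy is-a hierarchy below is hypothetical.

```python
# Toy illustration of the non-lattice test: count the minimal upper
# bounds of a pair of elements in a finite partial order.

def ancestors(node, parents):
    """All upper bounds of `node` (including itself) under the is-a order."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, []))
    return seen

def minimal_upper_bounds(a, b, parents):
    common = ancestors(a, parents) & ancestors(b, parents)
    # keep only common bounds that are not strictly above another common bound
    return {c for c in common
            if not any(x != c and c in ancestors(x, parents) - {x}
                       for x in common)}

# Hypothetical hierarchy: D and E are both minimal above the pair (A, B).
parents = {"A": ["D", "E"], "B": ["D", "E"], "D": ["Top"], "E": ["Top"]}
mubs = minimal_upper_bounds("A", "B", parents)
print(sorted(mubs))  # → ['D', 'E'] : more than one, so (A, B) is a non-lattice pair
```

In a lattice, every pair has exactly one least upper bound; pairs like (A, B) above are the curation targets the abstract mentions.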
-
Using Semantic Web technologies for Clinical Trial Recruitment
,
Paolo Besana, Marc Cuggia, Oussama Zekri, Annabel Bourde and Anita Burgun
,
34-49
,
[OpenAccess]
,
[Publisher]
Clinical trials are fundamental for medical science: they provide the evaluation for new treatments and new diagnostic approaches. One of the most difficult parts of clinical trials is the recruitment of patients: many trials fail due to a lack of participants. Recruitment is done by matching the eligibility criteria of trials to patient conditions. This is usually done manually, but both the large number of active trials and the lack of time available for matching keep the recruitment ratio low. In this paper we present a method, based entirely on standard semantic web technologies and tools, that allows the automatic recruitment of patients to the available clinical trials. We use a domain-specific ontology to represent data from patients' health records and we use SWRL to verify the eligibility of patients for clinical trials.
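The matching idea in the abstract above - eligibility criteria expressed as rules and evaluated automatically against patient records - can be sketched as follows. The actual system uses an OWL ontology plus SWRL rules; the patient fields and criteria here are hypothetical.

```python
# Toy rule-based eligibility check standing in for SWRL evaluation.

def eligible(patient, criteria):
    """A patient matches a trial when every criterion rule holds."""
    return all(rule(patient) for rule in criteria)

# Hypothetical trial criteria, one rule per eligibility condition.
trial_criteria = [
    lambda p: p["age"] >= 18,                     # adults only
    lambda p: "diabetes" in p["conditions"],      # target condition
    lambda p: "insulin" not in p["medications"],  # exclusion criterion
]

patients = [
    {"id": "p1", "age": 54, "conditions": ["diabetes"], "medications": []},
    {"id": "p2", "age": 16, "conditions": ["diabetes"], "medications": []},
]
recruited = [p["id"] for p in patients if eligible(p, trial_criteria)]
print(recruited)  # → ['p1']
```

Encoding the criteria in a shared ontology rather than ad-hoc predicates is what lets the same patient record be matched against many trials automatically.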
-
Will Semantic Web Technologies Work for the Development of ICD-11?
,
Tania Tudorache, Sean M. Falconer, Csongor Nyulas, Natalya Fridman Noy and Mark A. Musen
,
257-272
,
[OpenAccess]
,
[Publisher]
The World Health Organization is beginning to use Semantic Web technologies in the development of the 11th revision of the International Classification of Diseases (ICD-11). Health officials use ICD in all United Nations member countries to compile basic health statistics, to monitor health-related spending, and to inform policy makers. While previous revisions of ICD encoded minimal information about a disease, and were mainly published as books and tabulation lists, the creators of ICD-11 envision that it will become a multi-purpose and coherent classification ready for electronic health records. Most importantly, they plan to have ICD-11 applied to a much broader variety of uses than previous revisions. The new requirements entail significant changes in the way we represent disease information, as well as in the technologies and processes that we use to acquire the new content. In this paper, we describe the previous processes and technologies used for developing ICD. We then describe the requirements for the new development process and present the Semantic Web technologies that we use for ICD-11. We outline the experiences of the domain experts using the software system that we implemented using Semantic Web technologies. We then discuss the benefits and challenges of following this approach and conclude with lessons learned from this experience.
-
dbrec - Music Recommendations Using DBpedia
,
Alexandre Passant
,
209-224
,
[OpenAccess]
,
[Publisher]
This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm.
-
A Feature and Information Theoretic Framework for Semantic Similarity and Relatedness
,
Giuseppe Pirrò and Jérôme Euzenat
,
615-630
,
[OpenAccess]
,
[Publisher]
Semantic similarity and relatedness measures between ontology concepts are useful in many research areas. While similarity only considers subsumption relations to assess how two objects are alike, relatedness takes into account a broader range of relations (e.g., part-of). In this paper, we present a framework which maps the feature-based model of similarity into the information-theoretic domain. A new way of computing IC values directly from an ontology structure is also introduced. This new model, called Extended Information Content (eIC), takes into account the whole set of semantic relations defined in an ontology. The proposed framework makes it possible to rewrite existing similarity measures so that they can be augmented to compute semantic relatedness. Upon this framework, a new measure called FaITH (Feature and Information THeoretic) has been devised. Extensive experimental evaluations confirmed the suitability of the framework.
-
A Self-Policing Policy Language
,
Sebastian Speiser and Rudi Studer
,
730-746
,
[OpenAccess]
,
[Publisher]
Formal policies allow the non-ambiguous definition of situations in which the usage of certain entities is allowed, and enable the automatic evaluation of whether a situation is compliant. This is useful, for example, in applications using data provided via standardized interfaces. The low technical barriers to integrating such data sources are in contrast to the manual evaluation of natural-language policies as they currently exist. Usage situations can themselves be regulated by policies, which can be restricted by the policy of a used entity. Consider for example the Google Maps API, which requires that applications using the API must be available without a fee, i.e. the application's policy must not require a payment. In this paper we present a policy language that can express such constraints on other policies, i.e. a self-policing policy language. We validate our approach by realizing a use case scenario, using a policy engine developed for our language.
-
An Expressive and Efficient Solution to the Service Selection Problem
,
Daniel Izquierdo, Maria-Esther Vidal and Blai Bonet
,
386-401
,
[OpenAccess]
,
[Publisher]
Given the large number of Semantic Web Services that can be created from online sources by using existing annotation tools, expressive formalisms and efficient and scalable approaches to solve the service selection problem are required to make these services widely available to the users. In this paper, we propose a framework that is grounded on logic and the Local-As-View approach for representing instances of the service selection problem. In our approach, Web services are semantically described using LAV mappings in terms of generic concepts from an ontology, user requests correspond to conjunctive queries on the generic concepts and, in addition, the user may specify a set of preferences that are used to rank the possible solutions to the given request. The LAV formulation allows us to cast the service selection problem as a query rewriting problem that must consider the relationships among the concepts in the ontology and the ranks induced by the preferences. Then, building on related work, we devise an encoding of the resulting query rewriting problem as a logical theory whose models are in correspondence with the solutions of the user request, and in presence of preferences, whose best models are in correspondence with the best-ranked solutions. Thus, by exploiting known properties of modern SAT solvers, we provide an efficient and scalable solution to the service selection problem. The approach provides the basis to represent a large number of real-world situations and interesting user requests.
-
AnQL: SPARQLing Up Annotated RDFS
,
Nuno Lopes, Axel Polleres, Umberto Straccia and Antoine Zimmermann
,
518-533
,
[OpenAccess]
,
[Publisher]
Starting from the general framework for annotated RDF(S) which we have presented in previous work, we address the development of a query language - AnQL - that is inspired by SPARQL, including several features of SPARQL 1.1. As a side effect we propose formal definitions of the semantics of these features (subqueries, aggregates, assignment, solution modifiers) which could serve as a basis for the ongoing work in SPARQL 1.1. We demonstrate the value of such a framework by comparing our approach to previously proposed extensions of SPARQL and show that AnQL generalises and extends them.
-
Assessing Trust in Uncertain Information
,
Achille Fokoue, Mudhakar Srivatsa and Robert Young
,
209-224
,
[OpenAccess]
,
[Publisher]
On the Semantic Web, decision makers (humans or software agents alike) are faced with the challenge of examining large volumes of information originating from heterogeneous sources with the goal of ascertaining trust in various pieces of information. While previous work has focused on simple models for review and rating systems, we introduce a new trust model for rich, complex and uncertain information. We present the challenges raised by the new model, and the results of an evaluation of the first prototype implementation under a variety of scenarios.
-
Combining Approximation and Relaxation in Semantic Web Path Queries
,
Alexandra Poulovassilis and Peter T. Wood
,
631-646
,
[OpenAccess]
,
[Publisher]
We develop query relaxation techniques for regular path queries and combine them with query approximation in order to support flexible querying of RDF data when the user lacks knowledge of its full structure or where the structure is irregular. In such circumstances, it is helpful if the querying system can perform both approximate matching and relaxation of the user's query and can rank the answers according to how closely they match the original query. Our framework incorporates both standard notions of approximation based on edit distance and RDFS-based inference rules. The query language we adopt comprises conjunctions of regular path queries, thus including extensions proposed for SPARQL to allow for querying paths using regular expressions. We provide an incremental query evaluation algorithm which runs in polynomial time and returns answers to the user in ranked order.
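The edit-distance side of the approach above - ranking candidate paths through the data by how far they deviate from the path the user asked for - can be sketched as follows. The combination with RDFS-based relaxation rules is not shown, and the edge labels are hypothetical.

```python
# Rank candidate property paths by edit distance to the query path.

def edit_distance(a, b):
    """Classic dynamic-programming edit distance over label sequences."""
    dp = [[max(i, j) if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[-1][-1]

query = ["worksFor", "locatedIn"]
candidates = [["worksFor", "locatedIn"],             # exact match
              ["memberOf", "locatedIn"],             # one substitution
              ["worksFor", "partOf", "locatedIn"]]   # one insertion
ranked = sorted(candidates, key=lambda p: edit_distance(query, p))
print([edit_distance(query, p) for p in ranked])  # → [0, 1, 1]
```

Answers reached via lower-distance paths are returned first, which is the ranked, incremental behaviour the abstract describes.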
-
Compact Representation of Large RDF Data Sets for Publishing and Exchange
,
Javier D. Fernández, Miguel A. Martínez-Prieto and Claudio Gutierrez
,
193-208
,
[OpenAccess]
,
[Publisher]
Increasingly huge RDF data sets are being published on the Web. Currently, they use different syntaxes of RDF, contain high levels of redundancy and have a plain, indivisible structure. All this leads to fuzzy publications, inefficient management, complex processing and lack of scalability. This paper presents a novel RDF representation (HDT) which takes advantage of the structural properties of RDF graphs for splitting and efficiently representing three components of RDF data: Header, Dictionary and Triples structure. On-demand management operations can be implemented on top of the HDT representation. Experiments show that data sets can be compacted in HDT by a factor of more than fifteen compared with the current naive representation, improving parsing and processing while keeping a consistent publication scheme. For exchange, specific compression techniques over HDT improve on current compression solutions.
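The Dictionary/Triples split described above can be illustrated with a minimal sketch: every RDF term is replaced by an integer ID, so the long, repetitive strings are stored once and the triple structure becomes compact integer tuples. HDT's actual bit-level encoding and Header component are not reproduced, and the example triples are hypothetical.

```python
# Minimal dictionary-encoding sketch in the spirit of HDT.

def dictionary_encode(triples):
    """Return a term->id dictionary and the triples as integer tuples."""
    term_ids = {}

    def tid(term):
        if term not in term_ids:
            term_ids[term] = len(term_ids) + 1
        return term_ids[term]

    encoded = [(tid(s), tid(p), tid(o)) for s, p, o in triples]
    return term_ids, encoded

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:knows", "ex:alice"),
]
dictionary, encoded = dictionary_encode(triples)
print(len(dictionary), encoded)  # → 5 [(1, 2, 3), (1, 4, 5), (3, 2, 1)]
```

Because terms like `ex:alice` recur across many triples, the savings grow with data set size, which is where the factor-of-fifteen compaction reported above comes from.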
-
Completeness Guarantees for Incomplete Reasoners
,
Giorgos Stoilos, Bernardo Cuenca Grau and Ian Horrocks
,
747-763
,
[OpenAccess]
,
[Publisher]
We extend our recent work on evaluating incomplete reasoners by introducing strict testing bases. We show how they can be used in practice to identify ontologies and queries where applications can exploit highly scalable incomplete query answering systems while enjoying completeness guarantees normally available only when using computationally intensive reasoning systems.
-
Converting and Annotating Quantitative Data Tables
,
Mark van Assem, Hajo Rijgersberg, Mari Wigham and Jan L. Top
,
16-31
,
[OpenAccess]
,
[Publisher]
Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from, e.g., the surface temperature of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in, e.g., spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f (Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or to the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow us to improve performance on "sloppy" datasets not yet targeted by existing systems.
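A toy version of the disambiguation idea above: the symbol in a header cell like "f (Hz)" is ambiguous on its own, but the unit between parentheses narrows it down when checked against a quantity/unit table. The table below is a hypothetical stand-in for a real units-and-quantities ontology.

```python
# Disambiguate a table header symbol using its unit as context.
import re

# quantity -> (symbol, compatible units); hypothetical mini-ontology
QUANTITIES = {
    "frequency":     ("f", {"Hz", "kHz"}),
    "force":         ("f", {"N"}),
    "luminous flux": ("f", {"lm"}),
}

def interpret_header(cell):
    """Return the quantities whose symbol and unit both fit the cell."""
    m = re.match(r"\s*(\w+)\s*\(\s*([^)]+)\s*\)", cell)
    if not m:
        return []
    symbol, unit = m.group(1), m.group(2).strip()
    return [q for q, (sym, units) in QUANTITIES.items()
            if sym == symbol.lower() and unit in units]

print(interpret_header("f (Hz)"))  # → ['frequency']
print(interpret_header("F (N)"))   # → ['force']
```

Real spreadsheets are far sloppier (missing units, compound units, free-text headers), which is why the paper combines several such strategies rather than a single lookup.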
-
Deciding Agent Orientation on Ontology Mappings
,
Paul Doran, Terry R. Payne, Valentina A. M. Tamma and Ignazio Palmisano
,
161-176
,
[OpenAccess]
,
[Publisher]
Effective communication in open environments relies on the ability of agents to reach a mutual understanding of the exchanged message by reconciling the vocabulary (ontology) used. Various approaches have considered how mutually acceptable mappings between corresponding concepts in the agents' own ontologies may be determined dynamically through argumentation-based negotiation (such as Meaning-based Argumentation, MbA). In this paper we present a novel approach to the dynamic determination of mutually acceptable mappings that allows agents to express a private acceptability threshold over the types of mappings they prefer. We empirically compare this approach with Meaning-based Argumentation and demonstrate that the proposed approach produces larger agreed alignments, thus better enabling agent communication. Furthermore, we compare and evaluate the fitness for purpose of the generated alignments, and we empirically demonstrate that the proposed approach has performance comparable to the MbA approach.
-
Declarative Semantics for the Rule Interchange Format Production Rule Dialect
,
Carlos Viegas Damásio, José Júlio Alferes and João Leite
,
798-813
,
[OpenAccess]
,
[Publisher]
The Rule Interchange Format Production Rule Dialect (RIF-PRD) is a W3C Recommendation to define production rules for the Semantic Web, whose semantics is defined operationally via labeled terminal transition systems. In this paper, we introduce a declarative logical characterization of the full default semantics of RIF-PRD based on Answer Set Programming (ASP), including matching, conflict resolution and acting. Our proposal for the semantics of RIF-PRD enjoys several features. Being based on ASP, it enables a straightforward integration with Logic Programming rule-based technology, namely for reasoning and acting with ontologies. Then, its fully declarative logical character facilitates the investigation of formal properties of RIF-PRD itself. Furthermore, it turns out that our characterization based on ASP is flexible enough that new conflict resolution semantics for RIF-PRD can easily be defined and encoded. Finally, it immediately serves as the declarative specification of an implementation, whose prototype we developed.
-
EL with Default Attributes and Overriding
,
Piero A. Bonatti, Marco Faella and Luigi Sauro
,
64-79
,
[OpenAccess]
,
[Publisher]
Biomedical ontologies and semantic web policy languages based on description logics (DLs) provide fresh motivations for extending DLs with nonmonotonic inferences - a topic that has attracted a significant amount of attention over the years. Despite this, nonmonotonic inferences are not yet supported by existing DL engines. One reason is the high computational complexity of the existing decidable fragments of nonmonotonic DLs. In this paper we identify a fragment of circumscribed EL⊥ that supports attribute inheritance with specificity-based overriding (much like an object-oriented language), and in which reasoning about default attributes is in P.
-
Enabling Ontology-based Access to Streaming Data Sources
,
Jean-Paul Calbimonte,Óscar Corcho and Alasdair J. G. Gray
,
96-111
,
[OpenAccess]
,
[Publisher]
The availability of streaming data sources is progressively increasing thanks to the development of ubiquitous data capturing technologies such as sensor networks. The heterogeneity of these sources introduces the requirement of providing data access in a unified and coherent manner, whilst allowing the user to express their needs at an ontological level. In this paper we describe an ontology-based streaming data access service. Sources link their data content to ontologies through S2O mappings. Users can query the ontology using SPARQLStream, an extension of SPARQL for streaming data. A preliminary implementation of the approach is also presented. With this proposal we expect to set the basis for future efforts in ontology-based streaming data integration.
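The core data-access primitive behind streaming query extensions of this kind is a time window over a stream of timestamped triples. The following is a minimal sketch of that idea, not the paper's implementation; class and predicate names are illustrative assumptions.

```python
from collections import deque

class TripleStreamWindow:
    """A sliding time window over a stream of timestamped RDF-like triples."""

    def __init__(self, width):
        self.width = width      # window width in logical time units
        self.buffer = deque()   # holds (timestamp, (s, p, o)) pairs

    def push(self, timestamp, triple):
        self.buffer.append((timestamp, triple))
        # Evict triples that have fallen out of the window.
        while self.buffer and self.buffer[0][0] <= timestamp - self.width:
            self.buffer.popleft()

    def match(self, pattern):
        """Return triples in the window matching a pattern; None is a wildcard."""
        s, p, o = pattern
        return [t for _, t in self.buffer
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

w = TripleStreamWindow(width=10)
w.push(1, ("sensor1", "hasReading", 20.5))
w.push(5, ("sensor2", "hasReading", 21.0))
w.push(14, ("sensor1", "hasReading", 19.8))   # evicts the t=1 triple
readings = w.match(("sensor1", "hasReading", None))
```

A real streaming engine would evaluate full graph patterns with window clauses over such buffers; the sketch shows only the windowed matching step.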
-
Enhancing the Open-Domain Classification of Named Entity using Linked Open Data
,
Yuan Ni,Lei Zhang,Zhaoming Qiu and Chen Wang
,
566-581
,
[OpenAccess]
,
[Publisher]
Many applications make use of named entity classification. Machine learning is the preferred technique adopted for many named entity classification methods where the choice of features is critical to final performance. Existing approaches explore only the features derived from the characteristic of the named entity itself or its linguistic context. With the development of the Semantic Web, a large number of data sources are published and connected across the Web as Linked Open Data (LOD). LOD provides rich a priori knowledge about entity type information, knowledge that can be a valuable asset when used in connection with named entity classification. In this paper, we explore the use of LOD to enhance named entity classification. Our method extracts information from LOD and builds a type knowledge base which is used to score a (named entity string, type) pair. This score is then injected as one or more features into the existing classifier in order to improve its performance. We conducted a thorough experimental study and report the results, which confirm the effectiveness of our proposed method.
-
EvoPat - Pattern-Based Evolution and Refactoring of RDF Knowledge Bases
,
Christoph Rieß,Norman Heino,Sebastian Tramp and Sören Auer
,
647-662
,
[OpenAccess]
,
[Publisher]
Facilitating the seamless evolution of RDF knowledge bases on the Semantic Web still presents a major challenge. In this work we devise EvoPat - a pattern-based approach for the evolution and refactoring of knowledge bases. The approach is based on the definition of basic evolution patterns, which are represented declaratively and can capture simple evolution and refactoring operations on both the data and schema levels. For more advanced and domain-specific evolution and refactorings, several simple evolution patterns can be combined into a compound one. We performed a comprehensive survey of possible evolution patterns with a combinatorial analysis of all possible before/after combinations, resulting in an extensive catalog of usable evolution patterns. Our approach was implemented as an extension for the OntoWiki semantic collaboration platform and framework.
-
Evolution of DL-Lite Knowledge Bases
,
Diego Calvanese,Evgeny Kharlamov,Werner Nutt and Dmitriy Zheleznyakov
,
112-128
,
[OpenAccess]
,
[Publisher]
We study the problem of evolution for Knowledge Bases (KBs) expressed in Description Logics (DLs) of the DL-Lite family. DL-Lite is at the basis of OWL 2 QL, one of the tractable fragments of OWL 2, the recently proposed revision of the Web Ontology Language. We propose some fundamental principles that KB evolution should respect. We review known model and formula-based approaches for evolution of propositional theories. We exhibit limitations of a number of model-based approaches: besides the fact that they are either not expressible in DL-Lite or hard to compute, they intrinsically ignore the structural properties of KBs, which leads to undesired properties of the KBs resulting from such an evolution. We also examine proposals on update and revision of DL KBs that adopt model-based approaches and discuss their drawbacks. We show that known formula-based approaches are also not appropriate for DL-Lite evolution, either due to the high complexity of computation or because the result of evolution is not expressible in DL-Lite. Building upon the insights gained, we propose two novel formula-based approaches that respect our principles and for which evolution is expressible in DL-Lite. For our approaches we have also developed polynomial-time algorithms to compute the evolution of DL-Lite KBs.
-
Finding the Achilles Heel of the Web of Data: using network analysis for link-recommendation
,
Christophe Guéret,Paul T. Groth,Frank van Harmelen and Stefan Schlobach
,
289-304
,
[OpenAccess]
,
[Publisher]
The Web of Data is increasingly becoming an important infrastructure for such diverse sectors as entertainment, government, e-commerce and science. As a result, the robustness of this Web of Data is now crucial. Prior studies show that the Web of Data is strongly dependent on a small number of central hubs, making it highly vulnerable to single points of failure. In this paper, we present concepts and algorithms to analyse and repair the brittleness of the Web of Data. We apply these on a substantial subset of it, the 2010 Billion Triple Challenge dataset. We first distinguish the physical structure of the Web of Data from its semantic structure. For both of these structures, we then calculate their robustness, taking betweenness centrality as a robustness measure. To the best of our knowledge, this is the first time that such robustness indicators have been calculated for the Web of Data. Finally, we determine which links should be added to the Web of Data in order to improve its robustness most effectively. We are able to determine such links by interpreting the question as a very large optimisation problem and deploying an evolutionary algorithm to solve this problem. We believe that with this work, we offer an effective method to analyse and improve the most important structure that the Semantic Web community has constructed to date.
-
Forgetting Fragments from Evolving Ontologies
,
Heather S. Packer,Nicholas Gibbins and Nicholas R. Jennings
,
582-597
,
[OpenAccess]
,
[Publisher]
Ontologies underpin the semantic web; they define the concepts and their relationships contained in a data source. An increasing number of ontologies are available on-line, but an ontology that combines information from many different sources can grow extremely large. As an ontology grows larger, more resources are required to use it, and its response time becomes slower. Thus, we present and evaluate an on-line approach that forgets fragments from an OWL ontology that are infrequently or no longer used, or are cheap to relearn, in terms of time and resources. In order to evaluate our approach, we situate it in a controlled simulation environment, RoboCup OWLRescue, which is an extension of the widely used RoboCup Rescue platform, which enables agents to build ontologies automatically based on the tasks they are required to perform. We benchmark our approach against other comparable techniques and show that agents using our approach spend less time forgetting concepts from their ontology, allowing them to spend more time deliberating their actions, to achieve a higher average score in the simulation environment.
-
Fusion - Visually Exploring and Eliciting Relationships in Linked Data
,
Samur Araújo,Geert-Jan Houben,Daniel Schwabe and Jan Hidders
,
1-15
,
[OpenAccess]
,
[Publisher]
Building applications over Linked Data often requires a mapping between the application model and the ontology underlying the source dataset in the Linked Data cloud. This mapping can be defined in many ways. For instance, by describing the application model as a view over the source dataset, by giving mappings in the form of dependencies between the two datasets, or by inference rules that infer the application model from the source dataset. Explicitly formulating these mappings demands a comprehensive understanding of the underlying schemas (RDF ontologies) of the source and target datasets. This task can be supported by integrating the process of schema exploration into the mapping process, helping the application designer find the implicit relationships that she wants to map. This paper describes Fusion - a framework for closing the gap between the application model and the underlying ontologies in the Linked Data cloud. Fusion simplifies the definition of mappings by providing a visual user interface that integrates the exploratory process and the mapping process. Its architecture allows the creation of new applications through the extension of existing Linked Data with additional data.
-
How to Reuse a Faceted Classification and Put it on the Semantic Web
,
Bene Rodriguez-Castro,Hugh Glaser and Leslie Carr
,
663-678
,
[OpenAccess]
,
[Publisher]
There are ontology domain concepts that can be represented according to multiple alternative classification criteria. Current ontology modeling guidelines do not explicitly consider this aspect in the representation of such concepts. To assist with this issue, we examined a domain-specific simplified model for facet analysis used in Library Science. This model produces a Faceted Classification Scheme (FCS) which accounts for the multiple alternative classification criteria of the domain concept under scrutiny. A comparative analysis between an FCS and the Normalisation Ontology Design Pattern (ODP) indicates the existence of key similarities between the elements in the generic structure of both knowledge representation models. As a result, a mapping is identified that allows an FCS to be transformed into an OWL DL ontology by applying the Normalisation ODP. Our contribution is illustrated with an existing FCS example in the domain of "Dishwashing Detergent" that benefits from the outcome of this study.
-
Integrated Metamodeling and Diagnosis in OWL 2
,
Birte Glimm,Sebastian Rudolph and Johanna Völker
,
257-272
,
[OpenAccess]
,
[Publisher]
Ontological metamodeling has a variety of applications, yet only very restricted forms are supported by OWL 2 directly. We propose a novel encoding scheme enabling class-based metamodeling inside the domain ontology with full reasoning support through standard OWL 2 reasoning systems. We demonstrate the usefulness of our method by applying it to the OntoClean methodology. En passant, we address performance problems arising from the inconsistency diagnosis strategy originally proposed for OntoClean by introducing an alternative technique where sources of conflicts are indicated by means of marker predicates.
-
JustBench: A Framework for OWL Benchmarking
,
Samantha Bail,Bijan Parsia and Ulrike Sattler
,
32-47
,
[OpenAccess]
,
[Publisher]
Analysing the performance of OWL reasoners on expressive OWL ontologies is an ongoing challenge. In this paper, we present a new approach to performance analysis based on justifications for entailments of OWL ontologies. Justifications are minimal subsets of an ontology that are sufficient for an entailment to hold, and are commonly used to debug OWL ontologies. In JustBench, justifications form the key unit of test, which means that individual justifications are tested for correctness and reasoner performance instead of entire ontologies or random subsets. Justifications are generally small and relatively easy to analyse, which makes them very suitable for transparent analytic micro-benchmarks. Furthermore, the JustBench approach also allows us to isolate reasoner errors and inconsistent behaviour. We present the results of initial experiments using JustBench with FaCT++, HermiT, and Pellet. Finally, we show how JustBench can be used by reasoner developers and ontology engineers seeking to understand and improve the performance characteristics of reasoners and ontologies.
-
Justification Oriented Proofs in OWL
,
Matthew Horridge,Bijan Parsia and Ulrike Sattler
,
354-369
,
[OpenAccess]
,
[Publisher]
Justifications - that is, minimal entailing subsets of an ontology - are currently the dominant form of explanation provided by ontology engineering environments, especially those focused on the Web Ontology Language (OWL). Despite this, there are naturally occurring justifications that can be very difficult to understand. In essence, justifications are merely the premises of a proof and, as such, do not articulate the (often non-obvious) reasoning that connects those premises with the conclusion. This paper presents justification-oriented proofs as a potential solution to this problem.
-
Linked Data Query Processing Strategies
,
Günter Ladwig and Thanh Tran
,
453-469
,
[OpenAccess]
,
[Publisher]
Recently, processing of queries on linked data has gained attention. We identify and systematically discuss three main strategies: a bottom-up strategy that discovers new sources during query processing by following links between sources, a top-down strategy that relies on complete knowledge about the sources to select and process relevant sources, and a mixed strategy that assumes some incomplete knowledge and discovers new sources at run-time. To exploit knowledge discovered at run-time, we propose an additional step, explicitly scheduled during query processing, called correct source ranking. Additionally, we propose the adoption of stream-based query processing to deal with the unpredictable nature of data access in the distributed Linked Data environment. In experiments, we show that our implementation of the mixed strategy leads to early reporting of results and thus more responsive query processing, while not requiring complete knowledge.
-
Linking and Building Ontologies of Linked Data
,
Rahul Parundekar,Craig A. Knoblock and José Luis Ambite
,
598-614
,
[OpenAccess]
,
[Publisher]
The Web of Linked Data is characterized by linking structured data from different sources using equivalence statements, such as owl:sameAs, as well as other types of linked properties. The ontologies behind these sources, however, remain unlinked. This paper describes an extensional approach to generating alignments between these ontologies. Specifically, our algorithm produces equivalence and subsumption relationships between classes from ontologies of different Linked Data sources by exploring the space of hypotheses supported by the existing equivalence statements. We are also able to generate a complementary hierarchy of derived classes within an existing ontology, or to generate new classes for a second source whose ontology is not as refined as the first. We demonstrate our approach empirically using Linked Data sources from the geospatial, genetics, and zoology domains. Our algorithm discovered about 800 equivalences and 29,000 subset relationships in the alignment of five source pairs from these domains. Thus, we are able to model one Linked Data source in terms of another by aligning their ontologies and to understand the semantic relationships between the two sources.
-
Making sense of Twitter
,
David Laniado and Peter Mika
,
470-485
,
[OpenAccess]
,
[Publisher]
Twitter enjoys enormous popularity as a microblogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse, and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real-world entities.
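One way to operationalise "strong identifier" is to ask how consistently a hashtag is used: a tag that always appears in the same context is a better identifier candidate than one used for unrelated topics. The sketch below illustrates this intuition with a normalised-entropy score over co-occurring terms; the specific metric and the sample tweets are assumptions for illustration, not necessarily the paper's metrics.

```python
import math
from collections import Counter

def cooccurrence_entropy(tweets_with_tag):
    """Normalised entropy of terms co-occurring with a hashtag.

    Lower values suggest the tag is used in a consistent context
    and so behaves more like a strong identifier.
    """
    counts = Counter(word for tweet in tweets_with_tag
                     for word in tweet.split())
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

# Invented sample data: a tag used consistently vs. one used for anything.
focused = ["iphone apple launch", "iphone apple store", "iphone apple price"]
diffuse = ["win big prize", "rainy day blues", "cat video lol"]
```

Under this score, the consistently used tag comes out lower than the diffuse one, matching the intuition that it identifies something specific.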
-
Measuring the dynamic bi-directional influence between content and social networks
,
Shenghui Wang and Paul T. Groth
,
814-829
,
[OpenAccess]
,
[Publisher]
The Social Semantic Web has begun to provide connections between users within social networks and the content they produce across the whole of the Social Web. Thus, the Social Semantic Web provides a basis to analyze both the communication behaviour of users together with the content of their communication. However, there is little research combining the tools used to study communication behaviour and communication content, namely social network analysis and content analysis. Furthermore, there is even less work addressing the longitudinal characteristics of such a combination. This paper presents a general framework for measuring the dynamic bi-directional influence between communication content and social networks. We apply this framework in two use cases: online forum discussions and conference publications. The results provide a new perspective on the dynamics involving both social networks and communication content.
-
OWL-POLAR: Semantic Policies for Agent Reasoning
,
Murat Sensoy,Timothy J. Norman,Wamberto Weber Vasconcelos and Katia P. Sycara
,
679-695
,
[OpenAccess]
,
[Publisher]
Policies are declarations of constraints on the behaviour of components within distributed systems, and are often used to capture norms within agent-based systems. A few machine-processable representations for policies have been proposed, but they tend to be either limited in the types of policies that can be expressed or limited by the complexity of the associated reasoning mechanisms. In this paper, we argue for a language that sufficiently expresses the types of policies essential in practical systems, and which enables both policy-governed decision making and policy analysis within the bounds of decidability. We then propose an OWL-based representation of policies that meets these criteria, using a reasoning mechanism that builds on a novel combination of ontology consistency checking and query answering. In this way, agent-based systems can be developed that operate flexibly and effectively in policy-constrained environments.
-
One size does not fit all: Customizing Ontology Alignment Using User Feedback
,
Songyun Duan,Achille Fokoue and Kavitha Srinivas
,
177-192
,
[OpenAccess]
,
[Publisher]
A key problem in ontology alignment is that different ontological features (e.g., lexical, structural or semantic) vary widely in their importance for different ontology comparisons. In this paper, we present a set of principled techniques that exploit user feedback to customize the alignment process for a given pair of ontologies. Specifically, we propose an iterative supervised-learning approach to (i) determine the weights assigned to each alignment strategy and use these weights to combine them for matching ontology entities; and (ii) determine the degree to which the information from such matches should be propagated to their neighbors along different relationships for collective matching. We demonstrate the utility of these techniques with standard benchmark datasets and large, real-world ontologies, showing improvements in F-scores of up to 70% from the weighting mechanism and up to 40% from collective matching, compared to an unweighted linear combination of matching strategies without information propagation.
-
Ontology Alignment for Linked Open Data
,
Prateek Jain,Pascal Hitzler,Amit P. Sheth,Kunal Verma and Peter Z. Yeh
,
402-417
,
[OpenAccess]
,
[Publisher]
The Web of Data currently coming into existence through the Linked Open Data (LOD) effort is a major milestone in realizing the Semantic Web vision. However, the development of applications based on LOD faces difficulties due to the fact that the different LOD datasets are rather loosely connected pieces of information. In particular, links between LOD datasets are almost exclusively on the level of instances, and schema-level information is being ignored. In this paper, we therefore present a system for finding schema-level links between LOD datasets in the sense of ontology alignment. Our system, called BLOOMS, is based on the idea of bootstrapping information already present on the LOD cloud. We also present a comprehensive evaluation which shows that BLOOMS outperforms state-of-the-art ontology alignment systems on LOD datasets. At the same time, BLOOMS is also competitive compared with these other systems on the Ontology Evaluation Alignment Initiative Benchmark datasets.
-
Ontology similarity in the alignment space
,
Jérôme David,Jérôme Euzenat and Ondrej Sváb-Zamazal
,
129-144
,
[OpenAccess]
,
[Publisher]
Measuring similarity between ontologies can be very useful for different purposes, e.g., finding an ontology to replace another, or finding an ontology in which queries can be translated. Classical measures compute similarities or distances in an ontology space by directly comparing the content of ontologies. We introduce a new family of ontology measures computed in an alignment space: they evaluate the similarity between two ontologies with regard to the available alignments between them. We define two sets of such measures relying on the existence of a path between ontologies or on the ontology entities that are preserved by the alignments. The former accounts for known relations between ontologies, while the latter reflects the possibility to perform actions such as instance import or query translation. All these measures have been implemented in the OntoSim library, which has been used in experiments showing that entity-preserving measures are comparable to the best ontology space measures. Moreover, they showed robust behaviour with respect to alteration of the alignment space.
-
Optimising Ontology Classification
,
Birte Glimm,Ian Horrocks,Boris Motik and Giorgos Stoilos
,
225-240
,
[OpenAccess]
,
[Publisher]
Ontology classification - the computation of subsumption hierarchies for classes and properties - is one of the most important tasks for OWL reasoners. Based on the algorithm by Shearer and Horrocks [9], we present a new classification procedure that addresses several open issues of the original algorithm, and that uses several novel optimisations in order to achieve superior performance. We also consider the classification of (object and data) properties. We show that algorithms commonly used to implement that task are incomplete even for relatively weak ontology languages. Furthermore, we show how to reduce the property classification problem into a standard (class) classification problem, which allows reasoners to classify properties using our optimised procedure. We have implemented our algorithms in the OWL HermiT reasoner, and we present the results of a performance evaluation.
-
Optimize First, Buy Later: Analyzing Metrics to Ramp-up Very Large Knowledge Bases
,
Paea LePendu,Natalya Fridman Noy,Clement Jonquet,Paul R. Alexander,Nigam H. Shah and Mark A. Musen
,
486-501
,
[OpenAccess]
,
[Publisher]
As knowledge bases move into the landscape of larger ontologies and have terabytes of related data, we must work on optimizing the performance of our tools. We are easily tempted to buy bigger machines or to fill rooms with armies of little ones to address the scalability problem. Yet, careful analysis and evaluation of the characteristics of our data - using metrics - often leads to dramatic improvements in performance. Firstly, are current scalable systems scalable enough? We found that for large or deep ontologies (some as large as 500,000 classes) it is hard to say, because benchmarks obscure the load-time costs for materialization. Therefore, to expose those costs, we have synthesized a set of more representative ontologies. Secondly, in designing for scalability, how do we manage knowledge over time? By optimizing for data distribution and ontology evolution, we have reduced the population time, including materialization, for the NCBO Resource Index - a knowledge base of 16.4 billion annotations linking 2.4 million terms from 200 ontologies to 3.5 million data elements - from one week to less than one hour for one of the large datasets on the same machine.
-
Optimizing Enterprise-scale OWL 2 RL Reasoning in a Relational Database System
,
Vladimir Kolovski,Zhe Wu and George Eadon
,
436-452
,
[OpenAccess]
,
[Publisher]
OWL 2 RL was standardized as a less expressive but scalable subset of OWL 2 that allows a forward-chaining implementation. However, building an enterprise-scale forward-chaining based inference engine that can 1) take advantage of modern multi-core computer architectures, and 2) efficiently update inference for additions remains a challenge. In this paper, we present an OWL 2 RL inference engine implemented inside the Oracle database system, using novel techniques for parallel processing that can readily scale on multi-core machines and clusters. Additionally, we have added support for efficient incremental maintenance of the inferred graph after triple additions. Finally, to handle the increasing number of owl:sameAs relationships present in Semantic Web datasets, we have provided a hybrid in-memory/disk based approach to efficiently compute compact equivalence closures. We have done extensive testing to evaluate these new techniques; the test results demonstrate that our inference engine is capable of performing efficient inference over ontologies with billions of triples using a modest hardware configuration.
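Forward chaining of the kind OWL 2 RL permits means applying rules to the triple store until no new triples can be derived. The following is a minimal in-memory sketch of that fixpoint computation with just two illustrative RDFS-style rules; it is not the paper's parallel, incremental database implementation, and the example triples are invented.

```python
SUBCLASS, TYPE = "rdfs:subClassOf", "rdf:type"

def forward_chain(triples):
    """Naive fixpoint materialisation with two rules:
    subclass transitivity and type propagation along subclass edges."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in inferred:
            if p == SUBCLASS:
                for s2, p2, o2 in inferred:
                    # subClassOf is transitive
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
                    # members of a subclass belong to the superclass
                    if p2 == TYPE and o2 == s:
                        new.add((s2, TYPE, o))
        if not new <= inferred:     # anything genuinely new?
            inferred |= new
            changed = True
    return inferred

kb = {
    ("Dog", SUBCLASS, "Mammal"),
    ("Mammal", SUBCLASS, "Animal"),
    ("rex", TYPE, "Dog"),
}
closure = forward_chain(kb)
```

A production engine avoids this quadratic re-scanning with indexing and semi-naive evaluation, and must additionally maintain the closure incrementally when triples are added, which is the hard part the paper addresses.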
-
Preference-based Web Service Composition: A Middle Ground Between Execution and Search
,
Shirin Sohrabi and Sheila A. McIlraith
,
713-729
,
[OpenAccess]
,
[Publisher]
Much of the AI-related work on Web Service Composition (WSC) relates it to a planning problem, where the composition is primarily done offline prior to execution. Recent research on WSC has argued convincingly for the importance of optimizing quality of service and user preferences. While some of this optimization can be done offline, many interesting and useful optimizations are data-dependent, and must be done following execution of at least some information-providing services. In this paper, we examine this class of WSC problems, attempting to bridge the gap between offline composition and online information gathering with a view to producing high-quality compositions without excessive data gathering. Our investigation is performed in the context of an existing preference-based Hierarchical Task Network (HTN) WSC system. Our experiments show an improvement in both the quality and speed of finding a composition.
-
Query strategy for sequential ontology debugging
,
Kostyantyn M. Shchekotykhin and Gerhard Friedrich
,
696-712
,
[OpenAccess]
,
[Publisher]
Debugging is an important prerequisite for the widespread application of ontologies, especially in areas that rely upon everyday users to create and maintain knowledge bases, such as the Semantic Web. Most recent approaches use diagnosis methods to identify sources of inconsistency. However, in most debugging cases these methods return many alternative diagnoses, thus placing the burden of fault localization on the user. This paper demonstrates how the target diagnosis can be identified by performing a sequence of observations, that is, by querying an oracle about entailments of the target ontology. We exploit probabilities of typical user errors to formulate information-theoretic concepts for query selection. Our evaluation showed that the suggested method reduces the number of required observations compared to myopic strategies.
-
Representing and Querying Validity Time in RDF and OWL: A Logic-Based Approach
,
Boris Motik
,
550-565
,
[OpenAccess]
,
[Publisher]
RDF(S) and OWL 2 currently support only static ontologies. In practice, however, the truth of statements often changes with time, and Semantic Web applications often need to represent such changes and reason about them. In this paper we present a logic-based approach for representing validity time in RDF and OWL. Unlike existing proposals, our approach is applicable to entailment relations that are not deterministic, such as the Direct Semantics or the RDF-Based Semantics of OWL 2. We also extend SPARQL to temporal RDF graphs and present a query evaluation algorithm. Finally, we present an optimization of our algorithm that is applicable to entailment relations characterized by a set of deterministic rules, such as RDF(S) and OWL 2 RL/RDF entailment.
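The basic representational idea of validity time can be pictured as triples annotated with the interval during which they hold, queried at a time instant. The sketch below is a naive illustration of that picture only; the names and data are invented, and the paper develops a proper logic-based semantics rather than this simple store.

```python
def holds_at(temporal_triples, pattern, instant):
    """Return triples whose validity interval [begin, end) covers
    `instant` and which match `pattern` (None acts as a wildcard)."""
    s, p, o = pattern
    return [(ts, tp, to)
            for (ts, tp, to), (begin, end) in temporal_triples
            if begin <= instant < end
            and (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Invented example: who employed alice, and when.
store = [
    (("alice", "worksFor", "AcmeCorp"), (2005, 2009)),
    (("alice", "worksFor", "Initech"), (2009, 2012)),
]
employers_2010 = holds_at(store, ("alice", "worksFor", None), 2010)
```

A temporal extension of SPARQL adds such time constraints to graph patterns; the interesting work in the paper is making this interact correctly with non-deterministic entailment regimes.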
-
SAOR: Template Rule Optimisations for Distributed Reasoning over 1 Billion Linked Data Triples
,
Aidan Hogan,Jeff Z. Pan,Axel Polleres and Stefan Decker
,
337-353
,
[OpenAccess]
,
[Publisher]
In this paper, we discuss optimisations of rule-based materialisation approaches for reasoning over large static RDF datasets. We generalise and reformalise what we call the "partial-indexing" approach to scalable rule-based materialisation: the approach is based on a separation of terminological data, which has been shown in previous and related works to enable highly scalable and distributable reasoning for specific rulesets; in so doing, we provide some completeness propositions with respect to semi-naive evaluation. We then show how related work on template rules - T-Box-specific dynamic rulesets created by binding the terminological patterns in the static ruleset - can be incorporated and optimised for the partial-indexing approach. We evaluate our methods using LUBM(10) for RDFS, pD* (OWL Horst) and OWL 2 RL, and thereafter demonstrate pragmatic distributed reasoning over 1.12 billion Linked Data statements for a subset of OWL 2 RL/RDF rules we argue to be suitable for Web reasoning.
-
SPARQL Beyond Subgraph Matching
,
Birte Glimm and Markus Krötzsch
,
241-256
,
[OpenAccess]
,
[Publisher]
We extend the Semantic Web query language SPARQL by defining the semantics of SPARQL queries under the entailment regimes of RDF, RDFS, and OWL. The proposed extensions are part of the SPARQL 1.1 Entailment Regimes working draft which is currently being developed as part of the W3C standardization process of SPARQL 1.1. We review the conditions that SPARQL imposes on such extensions, discuss the practical difficulties of this task, and explicate the design choices underlying our proposals. In addition, we include an overview of current implementations and their underlying techniques.
-
SPARQL Query Optimization on Top of DHTs
,
Zoi Kaoudi,Kostis Kyzirakos and Manolis Koubarakis
,
418-435
,
[OpenAccess]
,
[Publisher]
We study the problem of SPARQL query optimization on top of distributed hash tables. Existing works on SPARQL query processing in such environments have never been implemented in a real system, or do not utilize any optimization techniques and thus exhibit poor performance. Our goal in this paper is to propose efficient and scalable algorithms for optimizing SPARQL basic graph pattern queries. We augment a known distributed query processing algorithm with query optimization strategies that improve performance in terms of query response time and bandwidth usage. We implement our techniques in the system Atlas and study their performance experimentally in a local cluster.
-
SameAs Networks and Beyond: Analyzing Deployment Status and Implications of owl:sameAs in Linked Data
,
Li Ding,Joshua Shinavier,Zhenning Shangguan and Deborah L. McGuinness
,
145-160
,
[OpenAccess]
,
[Publisher]
Millions of owl:sameAs statements have been published on the Web of Data. Due to its unique role and heavy usage in Linked Data integration, owl:sameAs has become a topic of increasing interest and debate. This paper provides a quantitative analysis of owl:sameAs deployment status and uses these statistics to focus discussion around its usage in Linked Data.
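Since owl:sameAs is symmetric and transitive, a set of owl:sameAs statements partitions URIs into equivalence classes, and analyses of "sameAs networks" operate over these classes. A small union-find sketch of that computation, on invented links, looks like this:

```python
def sameas_classes(pairs):
    """Group URIs into equivalence classes induced by owl:sameAs pairs,
    using union-find with path halving."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)           # union the two classes

    classes = {}
    for node in parent:
        classes.setdefault(find(node), set()).add(node)
    return list(classes.values())

# Invented owl:sameAs links between URIs from three datasets.
links = [("dbpedia:Paris", "geonames:2988507"),
         ("geonames:2988507", "freebase:paris"),
         ("dbpedia:Berlin", "geonames:2950159")]
groups = sameas_classes(links)
```

Statistics such as the size distribution of these classes and which datasets they span are the kind of deployment measures a quantitative owl:sameAs analysis reports.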
-
Semantic Need: Guiding Metadata Annotations by Questions People #ask
,
Hans-Jörg Happel
,
321-336
,
[OpenAccess]
,
[Publisher]
At its core, the Semantic Web is about the creation, collection and interlinking of metadata on which agents can perform tasks for human users. While many tools and approaches support either the creation or usage of semantic metadata, there is neither a proper notion of metadata need, nor a related theory of guidance as to which metadata should be created. In this paper, we propose to analyze structured queries to help identify missing metadata. We conduct a study on Semantic MediaWiki (SMW), one of the most popular Semantic Web applications to date, analyzing structured "ask"-queries in public SMW instances. Based on that, we describe Semantic Need, an extension for SMW which guides contributors to provide semantic annotations, and summarize feedback from an online survey among 30 experienced SMW users.
-
Semantic Recognition of Ontology Refactoring
,
Gerd Gröner,Fernando Silva Parreiras and Steffen Staab
,
273-288
,
[OpenAccess]
,
[Publisher]
Ontologies are used for sharing information and are often collaboratively developed. They are adapted for different applications and domains, resulting in multiple versions of an ontology caused by changes and refactorings. Quite often, ontology versions (or parts of them) are syntactically very different but semantically equivalent. While there is existing work on detecting syntactical and structural changes in ontologies, there is still a need for analyzing and recognizing ontology changes and refactorings by a semantic comparison of ontology versions. In our approach, we start from a classification of model refactorings found in software engineering and use DL reasoning to identify such refactorings in OWL ontologies.
-
Signal/Collect: Graph Algorithms for the (Semantic) Web
,
Philip Stutz,Abraham Bernstein and William W. Cohen
,
764-780
,
[OpenAccess]
,
[Publisher]
The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: whilst the data is available, the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce - a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model, requiring programmers to shoehorn their problem into the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).
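The abstraction works roughly as follows: vertices hold state, edges "signal" values to their targets, and each vertex "collects" incoming signals into a new state. A minimal, hypothetical synchronous sketch (not the actual framework, which is Scala-based and also supports asynchronous, score-guided scheduling), shown computing single-source shortest paths:

```python
import math

def shortest_paths(edges, source, rounds=10):
    """edges: (u, v, weight) triples; returns distance from source per vertex."""
    vertices = {v for e in edges for v in e[:2]}
    state = {v: (0 if v == source else math.inf) for v in vertices}
    for _ in range(rounds):
        # signal phase: each edge sends its source's state plus the edge weight
        signals = {}
        for u, v, w in edges:
            signals.setdefault(v, []).append(state[u] + w)
        # collect phase: each vertex keeps the minimum of old state and signals
        state = {v: min([state[v]] + signals.get(v, [])) for v in vertices}
    return state

g = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]
print(shortest_paths(g, "a"))  # a: 0, b: 1, c: 3
```

Many graph algorithms (PageRank, label propagation, matching) fit the same two-phase shape by swapping the signal and collect functions.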
-
Summary Models for Routing Keywords to Linked Data Sources
,
Thanh Tran,Lei Zhang and Rudi Studer
,
781-797
,
[OpenAccess]
,
[Publisher]
The proliferation of linked data on the Web paves the way to a new generation of applications that exploit heterogeneous data from different sources. However, because this Web of data is large and continuously evolving, it is non-trivial to identify the relevant linked data sources and to express given information needs as structured queries against these sources. In this work, we allow users to express needs in terms of simple keywords. Given the keywords, we define the problem of finding the relevant sources as one of keyword query routing. As a solution, we present a family of summary models, which compactly represent the Web of linked data and allow relevant sources to be found quickly. The proposed models capture information at different levels, representing summaries of varying granularity. They represent different trade-offs between effectiveness and efficiency. We provide a theoretical analysis of these trade-offs and also verify them in experiments carried out in a real-world setting using more than 150 publicly available datasets.
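The coarsest conceivable summary is an inverted index from terms to the sources mentioning them; routing then intersects posting sets. A hypothetical sketch with invented source names (the paper's models are finer-grained, also summarizing how terms connect within and across sources):

```python
def build_summary(sources):
    """sources: {source_name: set of terms it mentions} -> term -> sources index."""
    index = {}
    for name, terms in sources.items():
        for t in terms:
            index.setdefault(t, set()).add(name)
    return index

def route(index, keywords):
    """Candidate sources that mention every query keyword."""
    postings = [index.get(k, set()) for k in keywords]
    return set.intersection(*postings) if postings else set()

srcs = {
    "dbpedia":   {"berlin", "film", "director"},
    "linkedmdb": {"film", "director", "actor"},
    "geonames":  {"berlin", "population"},
}
idx = build_summary(srcs)
print(sorted(route(idx, ["film", "director"])))  # ['dbpedia', 'linkedmdb']
```

This term-level summary yields candidates only; a structure-aware summary is needed to check whether the keywords are actually connected inside a source.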
-
Supporting Natural Language Processing with Background Knowledge: Coreference Resolution Case
,
Volha Bryl,Claudio Giuliano,Luciano Serafini and Kateryna Tymoshenko
,
80-95
,
[OpenAccess]
,
[Publisher]
Systems based on statistical and machine learning methods have been shown to be extremely effective and scalable for the analysis of large amounts of textual data. However, in recent years it has become evident that one of the most important directions of improvement in natural language processing (NLP) tasks, like word sense disambiguation, coreference resolution, relation extraction, and other tasks related to knowledge extraction, is exploiting semantics. While in the past the unavailability of rich and complete semantic descriptions constituted a serious limitation of their applicability, nowadays the Semantic Web has made available a large amount of logically encoded information (e.g. ontologies, RDF(S) data, linked data, etc.), which constitutes a valuable source of semantics. However, web semantics cannot be easily plugged into machine learning systems. The objective of this paper is therefore to define a reference methodology for combining semantic information available on the web in the form of logical theories with statistical methods for NLP. The major problems that we have to solve to implement our methodology concern (i) the selection of the correct and minimal knowledge among the large amount available on the web, (ii) the representation of uncertain knowledge, and (iii) the resolution and the encoding of the rules that combine knowledge retrieved from Semantic Web sources with semantics in the text. In order to evaluate the appropriateness of our approach, we present an application of the methodology to the problem of intra-document coreference resolution, and we show by means of experiments on a standard dataset how the injection of knowledge leads to improved task performance.
-
Talking about Data: Sharing Richly Structured Information through Blogs and Wikis
,
Edward Benson,Adam Marcus,Fabian Howahl and David R. Karger
,
48-63
,
[OpenAccess]
,
[Publisher]
Several projects have brought rich data semantics to collaborative wikis, but blogging platforms remain primarily limited to text. As blogs comprise a significant portion of the web's content, engagement of the blogging community is crucial to the development of the semantic web. We provide a study of blog content to show a latent need for better data publishing and visualization support in blogging software. We then present DataPress, an extension to the WordPress blogging platform that enables users to publish, share, aggregate, and visualize structured information using the same workflow that they already apply to text-based content. In particular, we aim to preserve those attributes that make blogs such a successful publication medium: one-click access to the information, one-click publishing of it, natural authoring interfaces, and easy copy and paste of information (and visualizations) from other sources. We reflect on how our designs make progress toward these goals with a study of how users who installed DataPress made use of various features.
-
Toponym Resolution in Social Media
,
Neil Ireson and Fabio Ciravegna
,
370-385
,
[OpenAccess]
,
[Publisher]
Increasingly, user-generated content is being utilised as a source of information; however, each individual piece of content tends to contain little information, and such information tends to be informal and imperfect in nature, containing imprecise, subjective, and ambiguous expressions. The content does not have to be interpreted in isolation, though, as it is linked, either explicitly or implicitly, to a network of interrelated content; it may be grouped or tagged with similar content, comments may be added by other users, or it may be related to other content posted at the same time, by the same author, or by members of the author's social network. This paper examines how ambiguous concepts within user-generated content can be assigned a specific/formal meaning by considering the expanding context of the information, i.e. other information contained within directly or indirectly related content, and specifically considers the issue of toponym resolution of locations.
-
Using Reformulation Trees to Optimize Queries over Distributed Heterogeneous Sources
,
Yingjie Li and Jeff Heflin
,
502-517
,
[OpenAccess]
,
[Publisher]
In order to effectively and quickly answer queries in environments with distributed RDF/OWL, we present a query optimization algorithm to identify the potentially relevant Semantic Web data sources using structural query features and a term index. This algorithm is based on the observation that the join selectivity of a pair of query triple patterns is often higher than the overall selectivity of these two patterns treated independently. Given a rule goal tree that expresses the reformulation of a conjunctive query, our algorithm uses a bottom-up approach to estimate the selectivity of each node. It then prioritizes loading of selective nodes and uses the information from these sources to further constrain other nodes. Finally, we use an OWL reasoner to answer queries over the selected sources and their corresponding ontologies. We have evaluated our system using both a synthetic data set and a subset of the real-world Billion Triple Challenge data.
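The observation the optimizer builds on can be made concrete: the join of two triple patterns typically binds fewer values than either pattern matched alone, so prioritizing joined patterns prunes more sources. A hypothetical toy illustration (the invented data and counting helpers are not the paper's algorithm, which estimates selectivity bottom-up over a reformulation tree):

```python
def count_matches(data, p, o=None):
    """Triples with predicate p (and object o, unless o is left as a variable)."""
    return sum(1 for _, pp, oo in data if pp == p and (o is None or oo == o))

def count_join(data, p1, p2):
    """Subjects bound by joining (?s p1 ?o1) with (?s p2 ?o2) on the shared ?s."""
    s1 = {s for s, p, _ in data if p == p1}
    s2 = {s for s, p, _ in data if p == p2}
    return len(s1 & s2)

data = [("a", "type", "Prof"), ("b", "type", "Prof"), ("c", "type", "Student"),
        ("a", "teaches", "cs101"), ("c", "takes", "cs101")]
# each pattern alone matches several triples, but their join binds one subject
print(count_matches(data, "type", "Prof"), count_join(data, "type", "teaches"))  # 2 1
```

Loading only sources relevant to the most selective join node first, then propagating its bindings, is what keeps the number of dereferenced sources small.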
-
Using Semantics for Automating the Authentication of Web APIs
,
Maria Maleshkova,Carlos Pedrinaci,John Domingue,Guillermo Alvaro Rey and Ivan Martinez
,
534-549
,
[OpenAccess]
,
[Publisher]
Recent technology developments in the area of services on the Web are marked by the proliferation of Web applications and APIs. The implementation and evolution of applications based on Web APIs is, however, hampered by the lack of automation that can be achieved with current technologies. Research on semantic Web services is therefore trying to adapt the principles and technologies that were devised for traditional Web services to deal with this new kind of services. In this paper we show that currently more than 80% of the Web APIs require some form of authentication. Therefore authentication plays a major role for Web API invocation and should not be neglected in the context of mashups and composite data applications. We present a thorough analysis carried out over a body of publicly available APIs that determines the most commonly used authentication approaches. In the light of these results, we propose an ontology for the semantic annotation of Web API authentication information and demonstrate how it can be used to create semantic Web API descriptions. We evaluate the applicability of our approach by providing a prototypical implementation, which uses authentication annotations as the basis for automated service invocation.
-
When owl:sameAs Isn’t the Same: An Analysis of Identity in Linked Data
,
Harry Halpin,Patrick J. Hayes,James P. McCusker,Deborah L. McGuinness and Henry S. Thompson
,
305-320
,
[OpenAccess]
,
[Publisher]
In Linked Data, the use of owl:sameAs is ubiquitous in interlinking data-sets. There is however, ongoing discussion about its use, and potential misuse, particularly with regards to interactions with inference. In fact, owl:sameAs can be viewed as encoding only one point on a scale of similarity, one that is often too strong for many of its current uses. We describe how referentially opaque contexts that do not allow inference exist, and then outline some varieties of referentially-opaque alternatives to owl:sameAs. Finally, we report on an empirical experiment over randomly selected owl:sameAs statements from the Web of data. This theoretical apparatus and experiment shed light upon how owl:sameAs is being used (and misused) on the Web of data.
-
4sr - Scalable Decentralized RDFS Backward Chained Reasoning
,
Manuel Salvadores,Gianluca Correndo,Steve Harris,Nicholas Gibbins and Nigel Shadbolt
,
[OpenAccess]
,
[Publisher]
This poster paper presents the design and implementation of an RDFS reasoner based on a backward chaining approach and implemented on a clustered RDF triplestore. The system presented, called 4sr, uses 4store as base infrastructure. In order to achieve a highly scalable system we implemented the reasoning at the lowest level of the quad store, the bind operation. The bind operation in 4sr traverses the quad store indexes matching or expanding the query variables with awareness of the RDFS semantics.
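Backward chaining means no inferred triples are materialized: a bind for instances of a class expands the class downward through rdfs:subClassOf while matching. A hypothetical pure-Python sketch of that idea (4sr itself implements this inside 4store's quad-store bind operation, not in Python):

```python
def subclasses(schema, cls):
    """cls plus everything below it via rdfs:subClassOf (reflexive-transitive)."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(sub for sub, sup in schema if sup == c)
    return seen

def instances_of(data, schema, cls):
    """Match rdf:type triples with the class expanded at bind time."""
    wanted = subclasses(schema, cls)
    return {s for s, p, o in data if p == "rdf:type" and o in wanted}

schema = [("ex:Dog", "ex:Animal"), ("ex:Cat", "ex:Animal"), ("ex:Puppy", "ex:Dog")]
data = [("ex:rex", "rdf:type", "ex:Puppy"), ("ex:tom", "rdf:type", "ex:Cat"),
        ("ex:hq", "rdf:type", "ex:Building")]
print(sorted(instances_of(data, schema, "ex:Animal")))  # ['ex:rex', 'ex:tom']
```

The same expansion pattern applies to rdfs:subPropertyOf, domain and range during bind.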
-
A Graph-based Approach to Indexing Semantic Web Data
,
Xin He and Mark Baker
,
[OpenAccess]
,
[Publisher]
To the best of our knowledge, existing Semantic Web (SW) search systems fail to index RDF graph structures as graphs. They either do not index graph structures and retrieve them by run-time formal queries, or index all raw triples from the back-end repositories. This increases the overhead of indexing for very large RDF documents. Moreover, graph exploration from raw triples can be complicated when blank nodes, RDF collections and containers are involved. This paper provides a means to index SW data as graph structures, which can benefit graph exploration and ranking in SW querying.
-
A Graphical Evaluation Tool for Semantic Web Service Matchmaking
,
Ulrich Lampe,Melanie Siebenhaar,Stefan Schulte and Ralf Steinmetz
,
[OpenAccess]
,
[Publisher]
Semantic matchmaking, i.e., the task of finding matching (Web) services based on semantic information, has been a prominent field of research lately, and a wide range of supporting tools for both research and practice have been published. However, no suitable solution for the visualization of matchmaking results exists so far. In this paper, we present the Matchmaking Visualizer, an application for the visual representation and analysis of semantic matchmaking results. It allows matchmaking approaches for semantic Web services to be compared in a fine-grained manner and thus complements existing evaluation suites that are based on rather coarse-grained information retrieval metrics.
-
A SILK Graphical UI for Defeasible Reasoning, with a Biology Causal Process Example
,
Benjamin N. Grosof,Mark H. Burstein,Mike Dean,Carl Andersen,Brett Benyo,William Ferguson,Daniela Inclezan and Richard Shapiro
,
[OpenAccess]
,
[Publisher]
SILK is an expressive Semantic Web rule language and system equipped with scalable reactive higher-order defaults. We present one of its latest novel features: a graphical user interface (GUI) for knowledge entry, query answering, and justification browsing that supports user specification and understanding of advanced courteous prioritized defeasible reasoning. We illustrate the use of the GUI in an example from college-level biology of modeling and reasoning about hierarchically structured causal processes with interfering multiple causes.
-
A Semantic Web Repository for Managing and Querying Aligned Knowledge
,
James P. McGlothlin and Latifur Khan
,
[OpenAccess]
,
[Publisher]
Ontology alignment is the task of matching concepts and terminology from multiple ontologies. Ontology alignment is especially relevant in the semantic web domain as RDF documents and OWL ontologies are quite heterogeneous, yet often describe related concepts. The end goal for ontology matching is to allow the knowledge sets to interoperate. To achieve this goal, it is necessary for queries to return results that include knowledge, and inferred knowledge, from multiple datasets and terminologies, using the alignment information. Furthermore, ontology alignment is not an exact science, and concept matchings often involve uncertainty. The goal of this paper is to provide a semantic web repository that supports applying alignments to the dataset and reasoning with alignments. Our goal is to provide high performance queries that return results that include inference across alignment matchings, and rank results using certainty information. Our semantic web repository uses distributed inference and probabilistic reasoning to allow datasets to be efficiently updated with ontology alignments. We materialize the inferred, aligned data and make it available in efficient queries.
-
A web-based Evaluation Service for Ontology Matching
,
Jérôme Euzenat,Christian Meilicke,Heiner Stuckenschmidt and Cássia Trojahn dos Santos
,
[OpenAccess]
,
[Publisher]
Evaluation of semantic web technologies at large scale, including ontology matching, is an important topic of semantic web research. This paper presents a web-based evaluation service for automatically executing the evaluation of ontology matching systems. This service is based on the use of a web service interface wrapping the functionality of a matching tool to be evaluated and allows developers to launch evaluations of their tool at any time on their own. Furthermore, the service can be used to visualise and manipulate the evaluation results. The approach allows the execution of the tool on the machine of the tool developer without the need for a runtime environment.
-
Automated Mapping Generation for Converting Databases into Linked Data
,
Simeon Polfliet and Ryutaro Ichise
,
[OpenAccess]
,
[Publisher]
Most of the data on the Web is stored in relational databases. In order to make the Semantic Web grow, we need to provide easy-to-use tools to convert those databases into linked data, so that even people with little knowledge of the semantic web can use them. Some programs able to convert relational databases into RDF files have been developed, but the user still has to manually link the database attribute names to existing ontology properties, and the generated "linked data" is not actually linked with external relevant data. We propose here a method to automatically associate attribute names with existing ontology entities in order to complete the automation of the conversion of databases. We also present a way - rather basic, but with a low error rate - to automatically add links to relevant data from other data sets.
-
Avalanche: Putting the Spirit of the Web back into Semantic Web Querying
,
Cosmin Basca and Abraham Bernstein
,
[OpenAccess]
,
[Publisher]
Traditionally, Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based either on centralized indexing of semantically annotated metadata or on link traversal and URI dereferencing, as in the case of Linked Open Data. They make a number of limiting assumptions, thus breaking the openness principle of the Web. In this demo we present a novel technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently. The technique makes no prior assumptions about data distribution. Specifically, Avalanche can perform "live" queries over the Web of Data. First, it obtains on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner, trying to quickly provide first answers.
-
BRAMBLE: A Web-based Framework for Interactive RDF-Graph Visualisation
,
Nikolas Schmitt,Mathias Niepert and Heiner Stuckenschmidt
,
[OpenAccess]
,
[Publisher]
Most graph visualisation tools for RDF data are desktop applications focused on loading complete ontologies and metadata from a file and allowing users to filter out information if needed. Recently, both scientific and commercial frameworks have started to shift their focus to the web; however, they still rely on plugins such as Java and rarely handle larger collections of RDF statements efficiently. In this abstract we present a framework which visualises RDF graphs in a native browser environment, leveraging both the SVG standard and JavaScript technology to provide a responsive user interface. Graphs can be directly expanded, modified and explored. Users select nodes and edges from a central data repository containing millions of statements. The resulting graph can be shared with other users, retaining full interactivity for collaborative work or presentation purposes.
-
Building Linked Data Applications with Fusion: A Visual Interface for Exploration and Mapping
,
Samur Araújo,Geert-Jan Houben,Daniel Schwabe and Jan Hidders
,
[OpenAccess]
,
[Publisher]
Building applications over Linked Data often requires a mapping between the application model and the ontology underlying the source dataset in the Linked Data cloud. Explicitly formulating these mappings demands a comprehensive understanding of the underlying schemas (RDF ontologies) of the source and target datasets. This task can be supported by integrating the process of schema exploration into the mapping process and help the application designer with finding the implicit relationships that she wants to map. This demo describes Fusion - a framework for closing the gap between the application model and the underlying ontologies in the Linked Data cloud. Fusion simplifies the definition of mappings by providing a visual user interface that integrates the exploratory process and the mapping process. Its architecture allows the creation of new applications through the extension of existing Linked Data sources with additional data.
-
Demo: Enriching Text with RDF/OWL Encoded Senses
,
Delia Rusu,Tadej Stajner,Lorand Dali,Blaz Fortuna and Dunja Mladenic
,
[OpenAccess]
,
[Publisher]
This demo paper describes an extension of the Enrycher text enhancement system, which annotates words in context, from a text fragment, with RDF/OWL encoded senses from WordNet and OpenCyc. The extension is based on a general purpose disambiguation algorithm which takes advantage of the structure and/or content of knowledge resources, reaching state-of-the-art performance when compared to other knowledge-lean word sense disambiguation algorithms.
-
Displaying email-related contextual information using Contextify
,
Gregor Leban and Marko Grobelnik
,
[OpenAccess]
,
[Publisher]
Contextify is a tool for maximizing user productivity by showing email-related contextual information. The contextual information is determined based on the currently selected email and includes related emails, people, attachments and web links. This content is displayed in a sidebar in Microsoft Outlook and in a special dialog that can display an extended context.
-
Efficient processing of large RDF streams using memory management algorithms
,
Vaibhav Khadilkar,Murat Kantarcioglu,Latifur Khan and Bhavani M. Thuraisingham
,
[OpenAccess]
,
[Publisher]
As more RDF streaming applications are being developed, there is a growing need for an efficient mechanism for storing and performing inference over these streams. In this poster, we present a tool that stores these streams in a unified model by combining memory- and disk-based mechanisms. We explore various memory management algorithms and disk-persistence strategies to optimize query performance. Our unified model produces optimized query execution and inference performance for RDF streams that benefit from the advantages of using both memory and disk.
-
Enterprise Data Classification Using Semantic Web Technologies
,
Tamar Domany,Abigail Tarem and David Ben-David
,
[OpenAccess]
,
[Publisher]
Organizations today collect and store large amounts of data in various formats and locations; however, they are sometimes required to locate all instances of a certain type of data. Data classification enables efficient retrieval of information when needed. This work presents a reference implementation for enterprise data classification using Semantic Web technologies. We demonstrate automatic discovery and classification of Personally Identifiable Information (PII) in relational databases, using a classification model in RDF/OWL describing the elements to discover and classify. At the end of the process the results are also stored in RDF, enabling simple navigation between the input model and the findings in different databases. Recorded demo link: https://www.research.ibm.com/haifa/info/demos/piidiscovery_full.htm
-
Extending SMW+ with a Linked Data Integration Framework
,
Christian Becker,Christian Bizer,Michael Erdmann and Mark Greaves
,
[OpenAccess]
,
[Publisher]
In this paper, we present a project which extends a SMW+ semantic wiki with a Linked Data Integration Framework that performs Web data access, vocabulary mapping, identity resolution, and quality evaluation of Linked Data. As a result, a large collection of neurogenomics-relevant data from the Web can be flexibly transformed into a unified ontology, allowing unified querying, navigation, and visualization, as well as support for wiki-style collaboration, crowdsourcing, and commentary on chosen data sets.
-
Generating RDF for Application Testing
,
Daniel Blum and Sara Cohen
,
[OpenAccess]
,
[Publisher]
Application testing is a critical component of application development. Testing of Semantic Web applications requires large RDF datasets conforming to an expected form or schema and, preferably, to an expected data distribution. Finding such datasets often proves impossible, while generating input datasets is often cumbersome. The GRR (Generating Random RDF) system is a convenient, yet powerful, tool for generating random RDF based on a SPARQL-like syntax. In this poster and demo, we show how large datasets can be easily generated using intuitive commands.
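The core idea of schema-driven generation can be sketched in a few lines: a command names a class, a predicate, a target class, and cardinalities, and the generator materializes that many randomly wired triples. This is a hypothetical illustration only; GRR's actual commands use a SPARQL-like syntax, and the command tuple format below is invented:

```python
import random

def generate(commands, seed=0):
    """commands: (cls, pred, target_cls, n_instances, links_each) tuples."""
    rng = random.Random(seed)  # seeded so generated test data is reproducible
    triples = []
    for cls, pred, target, n, links_each in commands:
        subjects = [f"{cls}_{i}" for i in range(n)]
        targets = [f"{target}_{i}" for i in range(n)]
        for s in subjects:
            # each subject links to `links_each` distinct random targets
            for t in rng.sample(targets, links_each):
                triples.append((s, pred, t))
    return triples

# "5 professors each advise 2 of 5 students"
triples = generate([("Professor", "advises", "Student", 5, 2)])
print(len(triples))  # 10
```

Controlling the distribution then amounts to swapping the uniform `sample` for a skewed choice.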
-
HANNE - A Holistic Application for Navigational Knowledge Engineering
,
Sebastian Hellmann,Jörg Unbehauen and Jens Lehmann
,
[OpenAccess]
,
[Publisher]
Although research towards the reduction of the knowledge acquisition bottleneck in ontology engineering is advancing, a central issue remains unsolved: Light-weight processes for collaborative knowledge engineering by a massive user base. In this demo, we present HANNE, a holistic application that implements all necessary prerequisites for Navigational Knowledge Engineering and thus reduces the complexity of creating expressive knowledge by disguising it as navigation. HANNE enables users and domain experts to navigate over knowledge bases by selecting examples. From these examples, formal OWL class expressions are created and refined by a scalable Iterative Machine Learning approach. When saved by users, these class expressions form an expressive OWL ontology, which can be exploited in numerous ways: as navigation suggestions for users, as a hierarchy for browsing, as input for a team of ontology editors.
-
Hybrid Graph based Keyword Query Interpretation on RDF
,
Kaifeng Xu,Junquan Chen,Haofen Wang and Yong Yu
,
[OpenAccess]
,
[Publisher]
Adopting keyword query interface to semantic search on RDF data can help users keep away from learning the SPARQL query syntax and understanding the complex and fast evolving data schema. The existing approaches are divided into two categories: instance-based approaches and schema-based approaches. The instance-based approaches relying on the original RDF graph can generate precise answers but take a long processing time. In contrast, the schema-based approaches relying on the reduced summary graph require much less processing time but cannot always generate correct answers. In this paper, we propose a novel approach based on a hybrid graph which can achieve significant improvements on processing time with a limited accuracy drop compared with instance-based approaches, and meanwhile, can achieve promising accuracy gains at an affordable time cost compared with schema-based approaches.
-
KiWi - A Platform for Semantic Social Software (Demonstration)
,
Thomas Kurz,Sebastian Schaffert,Tobias Bürger,Stephanie Stroka,Rolf Sint,Mihai Radulescu and Szabolcs Grünwald
,
[OpenAccess]
,
[Publisher]
The combination of semantic technologies and social software has become more and more popular in the last few years, as can be seen by the emergence of Semantic Wikis or the popularity of vocabularies such as FOAF or SIOC. The KiWi project is based upon these principles and offers features required for Social Media applications such as versioning, (semantic) tagging, rich text editing, easy linking, rating and commenting, as well as advanced "smart" services such as recommendation, rule-based reasoning, information extraction, intelligent search and querying, a sophisticated social reputation system, vocabulary management, and rich visualization. KiWi can be used both as a platform for building custom Semantic Media applications and as a Semantic Social Index, integrating content and data from a variety of different sources, e.g. wikis, blogs and content management systems in an enterprise intranet. Third-party applications can access the KiWi system using simple-to-use web services. The demo presents the whole functionality of the Open Source development platform KiWi in its final version within one integrated project management scenario. Furthermore, it shows different KiWi-based Social Media projects to illustrate its various fields of application.
-
LDSpider: An open-source crawling framework for the Web of Linked Data
,
Robert Isele,Jürgen Umbrich,Christian Bizer and Andreas Harth
,
[OpenAccess]
,
[Publisher]
The Web of Linked Data is growing and currently consists of several hundred interconnected data sources altogether serving over 25 billion RDF triples to the Web. What has hampered the exploitation of this global dataspace up till now is the lack of an open-source Linked Data crawler which can be employed by Linked Data applications to localize (parts of) the dataspace for further processing. With LDSpider, we are closing this gap in the landscape of publicly available Linked Data tools. LDSpider traverses the Web of Linked Data by following RDF links between data items; it supports different crawling strategies and allows crawled data to be stored either in files or in an RDF store.
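Following RDF links amounts to dereferencing a URI, collecting its triples, and queueing URIs found in object position for the next round. A minimal, hypothetical breadth-first sketch in which the `web` dict stands in for actual HTTP dereferencing (LDSpider itself is a Java framework with pluggable strategies, politeness handling, and content negotiation):

```python
from collections import deque

def crawl(web, seed, max_depth=2):
    """Breadth-first traversal of RDF links up to max_depth hops from seed."""
    crawled, triples = set(), []
    queue = deque([(seed, 0)])
    while queue:
        uri, depth = queue.popleft()
        if uri in crawled or uri not in web or depth > max_depth:
            continue
        crawled.add(uri)
        for s, p, o in web[uri]:  # "dereference" the URI
            triples.append((s, p, o))
            if o.startswith("http://"):  # follow RDF links to other resources
                queue.append((o, depth + 1))
    return crawled, triples

web = {
    "http://a": [("http://a", "knows", "http://b")],
    "http://b": [("http://b", "knows", "http://c"), ("http://b", "name", "Bob")],
    "http://c": [("http://c", "name", "Carol")],
}
crawled, triples = crawl(web, "http://a")
print(sorted(crawled))  # ['http://a', 'http://b', 'http://c']
```

Swapping the queue for a priority structure gives the different crawling strategies (breadth-first vs. load-balanced) the abstract mentions.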
-
Learning Co-reference Relations for FOAF Instances
,
Jennifer Sleeman and Tim Finin
,
[OpenAccess]
,
[Publisher]
FOAF is widely used on the Web to describe people, groups and organizations and their properties. Since FOAF does not require unique IDs, it is often unclear when two FOAF instances are co-referent, i.e., denote the same entity in the world. We describe a prototype system that identifies sets of co-referent FOAF instances using logical constraints (e.g., IFPs), strong heuristics (e.g., FOAF agents described in the same file are not co-referent), and a Support Vector Machine (SVM) generated classifier.
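The logical-constraint part of this pipeline can be illustrated directly: foaf:mbox is an inverse functional property (IFP), so two FOAF instances sharing a mailbox value must denote the same person. A hypothetical sketch with invented blank-node instances (the paper layers heuristics and an SVM classifier on top of such hard constraints):

```python
def coreferent_by_ifp(instances, ifp="foaf:mbox"):
    """Group instance IDs whose values for an inverse functional property overlap."""
    by_value = {}
    for inst_id, props in instances.items():
        for v in props.get(ifp, []):
            by_value.setdefault(v, set()).add(inst_id)
    # any value shared by two or more instances forces co-reference
    return [ids for ids in by_value.values() if len(ids) > 1]

insts = {
    "_:p1": {"foaf:mbox": ["mailto:alice@example.org"]},
    "_:p2": {"foaf:mbox": ["mailto:alice@example.org"]},
    "_:p3": {"foaf:mbox": ["mailto:bob@example.org"]},
}
print(coreferent_by_ifp(insts))  # one co-referent group: _:p1 and _:p2
```

IFP matching gives high precision but low recall, which is why the learned classifier is needed for instances lacking shared IFP values.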
-
Lily-LOM: An Efficient System for Matching Large Ontologies with Non-Partitioned Method
,
Peng Wang
,
[OpenAccess]
,
[Publisher]
Due to their high time and space complexity, most existing ontology matching systems do not scale well to the large ontology matching problem. Moreover, the popular divide-and-conquer matching solution has two disadvantages: first, partitioning an ontology is a complicated process; second, it leads to a loss of semantic information during matching. To avoid these drawbacks, this paper presents an efficient large ontology matching system, Lily-LOM, which uses a non-partitioned method. Lily-LOM is based on two kinds of reduction anchors, i.e. positive and negative reduction anchors, to reduce the time complexity problem. Some empirical strategies for reducing the space complexity are also discussed. The experiments show that Lily-LOM is effective.
-
Linked data from your pocket: The Android RDFContentProvider
,
Jérôme David and Jérôme Euzenat
,
[OpenAccess]
,
[Publisher]
Smartphones are becoming our main personal information repositories. Unfortunately this information is stored in independent silos managed by applications. We have seen this before: in the Palm operating system, applications worked by opening other applications' databases, which were only accessible when their schemas were known. Our goal is to provide support for applications to deliver their data in RDF. This would allow applications to exploit this information in a uniform way, without knowing application schemas beforehand. It would also connect this information to the Semantic Web and the Web of Data through references from and to device information. We present a way to do this in a uniform manner within the Android platform. Moreover, we propose to do it along the linked data principles (provide RDF, describe in ontologies, use URIs, link other sources). We first consider how the integration of RDF could be pushed further within the context of the Android platform. We demonstrate its feasibility through a linked data browser that allows for browsing the phone's information.
-
MoKi: a Wiki-Based Conceptual Modeling Tool
,
Chiara Ghidini,Marco Rospocher and Luciano Serafini
,
[OpenAccess]
,
[Publisher]
The success of wikis for collaborative knowledge construction is triggering the development of a number of tools for collaborative conceptual modeling based on them. In this paper we present a completely revised version of MoKi, a tool for modelling ontologies and business process models in an integrated way.
-
Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies
,
Yefei Peng,Paul W. Munro and Ming Mao
,
[OpenAccess]
,
[Publisher]
An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the networks. The output of one network in response to a stimulus to another network can be interpreted as an analogical mapping. In a similar fashion, the networks can be explicitly trained to map specific items in one domain to specific items in another domain. A representation layer helps the network learn relationship mappings with a direct training method. OMNN is applied to several OAEI benchmark test cases to evaluate its performance on ontology mapping. Results show that the OMNN approach is competitive with the top-performing systems that participated in OAEI 2009.
-
Parallelization Techniques for Semantic Web Reasoning Applications
,
Alexey Cheptsov and Matthias Assel
,
[OpenAccess]
,
[Publisher]
Performance is the most critical aspect of achieving high scalability for Semantic Web reasoning applications, and it considerably limits their application areas. There is still a deep mismatch between the requirements for reasoning on a Web scale and the performance of existing reasoning engines. This performance limitation can be considerably reduced by utilizing large-scale e-Infrastructures such as LarKC - the Large Knowledge Collider - an experimental platform for massive distributed incomplete reasoning, which offers several innovative approaches for removing scalability barriers, in particular by enabling transparent access to HPC systems. Efficient utilization of such resources is facilitated by parallelization, the major element for accomplishing performance and scalability of semantic applications. Here we discuss the application of some emerging parallelization strategies and show the benefits obtained by using systems such as LarKC.
-
Publishing Bibliographic Data on the Semantic Web using BibBase
,
Reynold Xin,Oktie Hassanzadeh,Christian Fritz,Shirin Sohrabi,Yang Yang,Minghua Zhao and Renée J. Miller
,
[OpenAccess]
,
[Publisher]
We present BibBase, a system for publishing and managing bibliographic data available in BibTeX files on the Semantic Web. BibBase uses a powerful yet light-weight approach to transform BibTeX files into rich Linked Data as well as custom HTML and RSS code that can readily be integrated within a user's website. The data can instantly be queried online on the system's SPARQL endpoint. In this demo, we present a brief overview of the features of our system and outline a few challenges in the design and implementation of such a system.
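The core transformation — a parsed BibTeX entry becoming RDF triples — can be sketched as below. The property URIs and the URI-minting scheme here are illustrative placeholders, not BibBase's actual vocabulary:

```python
def bibtex_to_ntriples(key, fields, base="http://example.org/pub/"):
    """Serialize one parsed BibTeX entry as N-Triples lines.

    `key` is the BibTeX citation key; `fields` is its field/value mapping.
    The subject URI is minted from the key (an assumed scheme).
    """
    subject = f"<{base}{key}>"
    triples = []
    for field, value in sorted(fields.items()):
        predicate = f"<http://example.org/bibtex/{field}>"
        # Escape backslashes and quotes per N-Triples literal syntax.
        escaped = value.replace("\\", "\\\\").replace('"', '\\"')
        triples.append(f'{subject} {predicate} "{escaped}" .')
    return triples

entry = {"title": "Publishing Bibliographic Data on the Semantic Web",
         "year": "2010"}
lines = bibtex_to_ntriples("xin2010bibbase", entry)
for line in lines:
    print(line)
```

A real system would additionally resolve authors and venues to their own linked resources rather than plain literals.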
-
RDF On the Go: RDF Storage and Query Processor for Mobile Devices
,
Danh Le Phuoc,Josiane Xavier Parreira,Vinny Reynolds and Manfred Hauswirth
,
[OpenAccess]
,
[Publisher]
We present RDF On the Go, a full-fledged RDF storage and SPARQL query processor for mobile devices. Implemented by adapting the widely used Jena and ARQ Semantic Web toolkit and query engine, it uses Berkeley DB for storing the RDF data, R-Trees for indexing spatial data, and a query processor that supports both standard and spatial queries. By storing and querying RDF data locally at the user's mobile device, RDF On the Go contributes to improving scalability, decreasing transmission costs, and controlling access to the user's personal information. It also enables the development of a next generation of mobile applications. RDF On the Go is available for the Android platform and can be downloaded at http://rdfonthego.googlecode.com/.
-
RDOTE - Transforming Relational Databases into Semantic Web Data
,
Konstantinos N. Vavliakis,Theofanis K. Grollios and Pericles A. Mitkas
,
[OpenAccess]
,
[Publisher]
During the last decade, there has been intense research and development in creating methodologies and tools able to map Relational Databases to the Resource Description Framework. Although some systems have gained wider acceptance in the Semantic Web community, they either require users to learn a declarative language for encoding mappings, or have limited expressivity. We therefore present RDOTE, a framework for easily transporting data residing in Relational Databases into the Semantic Web. RDOTE is available under the GNU/GPL license and provides friendly graphical interfaces, as well as enough expressivity for creating custom RDF dumps.
-
RExplorator - supporting reusable explorations of Semantic Web Linked Data
,
Marcelo Cohen and Daniel Schwabe
,
[OpenAccess]
,
[Publisher]
This demo presents RExplorator, an environment which allows users who are not technically savvy, but who understand the problem domain, to explore data until they understand its structure. They employ a combination of search, query and faceted navigation in a direct-manipulation, query-by-example style interface. In this process, users can reuse solutions previously found by other users, which may accomplish sub-tasks of the problem at hand. It is also possible to create an end-user friendly interface that gives them access to the information. Once a solution has been found, it can be generalized and optionally made available for reuse by other users. This enables the establishment of a social network of users that share solutions for problems in particular domains (repositories) of interest.
-
RightField: Embedding Ontology Term Selection into Spreadsheets for the Annotation of Biological Data
,
Katy Wolstencroft,Matthew Horridge,Stuart Owen,Wolfgang Müller,Finn Bacall,Jacky L. Snoep,Olga Krebs and Carole A. Goble
,
[OpenAccess]
,
[Publisher]
RightField is an open source application that provides a mechanism for embedding ontology annotation support for Life Science data in Microsoft Excel spreadsheets. Individual cells, columns, or rows can be restricted to particular ranges of allowed classes or instances from chosen ontologies. Informaticians, with experience in ontologies and data annotation, prepare RightField-enabled spreadsheets with embedded ontology term selection for use by a wider community of laboratory scientists. The RightField-enabled spreadsheet presents selected ontology terms to the users as a simple drop-down list, enabling scientists to consistently annotate their data without the need to understand the numerous metadata standards and ontologies available to them. The spreadsheets are self-contained and remain "vanilla" Excel so that they can be readily exchanged, processed offline and are usable by regular Excel tooling. The result is semantic annotation by stealth, with an annotation process that is less error-prone, more efficient, and more consistent with community standards. RightField has been developed and deployed for a consortium of some 300 Systems Biologists. RightField is open source under a BSD license and freely available from http://www.sysmo-db.org/RightField.
-
SPARQL Views: A Visual SPARQL Query Builder for Drupal
,
Lin Clark
,
[OpenAccess]
,
[Publisher]
Publishing Linked Data on the Web has become much easier with tools such as Drupal. However, consuming that data and presenting it in a meaningful way is still difficult for both Web developers and for Semantic Web practitioners. We demonstrate a module for Drupal which supports visual query building for SPARQL queries and enables meaningful displays of the query result.
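A visual builder's job ultimately reduces to assembling a SPARQL string from the classes and properties a user picks. A toy version of that assembly step — the schema URIs below are hypothetical, and this is not the Drupal module's actual code path:

```python
def build_sparql(fields, rdf_type, limit=10):
    """Assemble a simple SELECT query from user-chosen property URIs,
    roughly what a visual query builder emits behind the scenes.

    `fields` maps a result-variable name to the property URI it selects.
    """
    vars_ = " ".join(f"?{name}" for name in fields)
    patterns = [f"?s a <{rdf_type}> ."]
    patterns += [f"?s <{uri}> ?{name} ." for name, uri in fields.items()]
    body = "\n  ".join(patterns)
    return f"SELECT {vars_} WHERE {{\n  {body}\n}} LIMIT {limit}"

query = build_sparql(
    {"title": "http://purl.org/dc/terms/title"},
    "http://xmlns.com/foaf/0.1/Document",
)
print(query)
```

The generated string would then be posted to a SPARQL endpoint, with the result bindings feeding the display layer.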
-
STEREO: a SaT-based tool for an optimal solution of the sERvice selEctiOn problem
,
Daniel Izquierdo,Maria-Esther Vidal and Blai Bonet
,
[OpenAccess]
,
[Publisher]
We present STEREO, a system that offers an expressive formalism and implements techniques firmly grounded in logic to solve the Service Selection Problem (SSP). STEREO adopts the Local-As-View (LAV) approach to represent services' functionality as views on ontology concepts, while user requests are expressed as conjunctive queries on these concepts. Additionally, users can describe their preferences, which are used to rank the solutions. We discuss the LAV formulation of SSP; then, we illustrate the encoding of SSP as a logical theory whose models are in correspondence with the problem solutions and, in the presence of preferences, whose best models are in correspondence with the best-ranked solutions. We demonstrate STEREO and the properties of modern SAT solvers that provide an efficient and scalable solution to SSP.
-
SemWebVid - Making Video a First Class Semantic Web Citizen and a First Class Web Bourgeois
,
Thomas Steiner
,
[OpenAccess]
,
[Publisher]
SemWebVid is an online Ajax application that allows for the automatic generation of Resource Description Framework (RDF) video descriptions. These descriptions are based on two pillars: first, on a combination of user-generated metadata such as title, summary, and tags; and second, on closed captions which can be user-generated or auto-generated via speech recognition. The plain-text contents of both pillars are analyzed using multiple Natural Language Processing (NLP) Web services in parallel, whose results are then merged and, where possible, matched back to concepts in the sense of Linking Open Data (LOD). The final result is a deep-linkable RDF description of the video, and a "scroll-along" view of the video as an example of video visualization formats.
-
Semantic Web Technologies for a Smart Energy Grid: Requirements and Challenges
,
Andreas Wagner,Sebastian Speiser and Andreas Harth
,
[OpenAccess]
,
[Publisher]
The Smart Grid aims at making the current energy grid more efficient and eco-friendly. The Smart Grid features an IT-layer, which allows communication between a multitude of stakeholders and will have to be integrated with other "smart" systems (e.g., smart factories or smart cities) to operate effectively. Thus, many participants will be involved and will exchange large volumes of data, leading to a heterogeneous system with ad-hoc data exchange in which centralised coordination and control will be very difficult to achieve. In this paper, we show parallels between requirements for the (Semantic) Web and the Smart Grid. We argue that the communication architecture for the Smart Grid can be built upon existing (Semantic) Web technologies. We point out differences between the existing Web and the Smart Grid, thereby identifying remaining challenges.
-
Semantic-based Complex Event Processing in the AAL Domain
,
Yongchun Xu,Peter Wolf,Nenad Stojanovic and Hans-Jörg Happel
,
[OpenAccess]
,
[Publisher]
Ambient assisted living (AAL) is a new research area focusing on services that support people in their daily life with a particular focus on elderly people. In the AAL domain sensor technologies are used to identify situations that pose a risk to the assisted person (AP) or that indicate the need of proactive assistance. These situations of interest are detected by analyzing sensor data coming from a whole variety of sensors. Considering the need for immediate assistance especially in the case of safety- and health-critical situations, the detection of situations must be achieved in real-time. In this paper we propose to use Complex Event Processing (CEP) based on semantic technologies to detect typical AAL-like situations. In particular, we present how the ETALIS CEP engine can be used to detect situations in real-time and how this can lead to immediate and proper assistance even in critical situations in conjunction with the semantic AAL service platform openAAL.
-
Semantic-based Mobile Mashup Platform
,
Zhipeng Peng,Huajun Chen,Jinghai Rao,Ying Liu,Lei Wang and Jian Chen
,
[OpenAccess]
,
[Publisher]
Mobile devices contain ever more personal data such as GPS location, contacts and music, with which users can create innovative and pragmatic mashup applications for different areas such as social networking, e-commerce, and entertainment. We propose a semantic-based mobile mashup platform which enables users to create mashup applications by simply selecting service nodes, linking them together and configuring some connection parameters. Our platform also offers a recommendation mechanism for linkable services by adding semantic annotations to service descriptions, so that users do not need to read web service specifications in order to find linkable ones. Users can therefore focus on the innovation and practicability of their mashup applications, which we expect will result in the emergence of abundant mobile mashup applications.
-
Semantics for music researchers: How country is my country?
,
Kevin R. Page,Benjamin Fields,Bart J. Nagel,Gianni O'Neill,David De Roure and Tim Crawford
,
[OpenAccess]
,
[Publisher]
The Linking Open Data cloud contains several music-related datasets that hold great potential for enhancing the process of research in the field of Music Information Retrieval (MIR) and which, in turn, can be enriched by MIR results. We demonstrate a system with several related aims: to enable MIR researchers to utilise these datasets through incorporation in their research systems and workflows; to publish MIR research output on the Semantic Web linked to existing datasets (thereby also increasing the size and applicability of the datasets for use in MIR); and to present MIR research output, with cross-referencing to other linked data sources, for manipulation and evaluation by researchers and re-use within the wider Semantic Web. By way of example we gather and publish RDF describing signal collections derived from the country of an artist. Genre analysis over these collections and integration of collection and result metadata enables us to ask: "how country is my country?".
-
Silk - Generating RDF Links while publishing or consuming Linked Data
,
Anja Jentzsch,Robert Isele and Christian Bizer
,
[OpenAccess]
,
[Publisher]
The central idea of the Web of Data is to interlink data items using RDF links. However, in practice most data sources are not sufficiently interlinked with related data sources. The Silk Link Discovery Framework addresses this problem by providing tools to generate links between data items based on user-provided link specifications. It can be used by data publishers to generate links between data sets as well as by Linked Data consumers to augment Web data with additional RDF links. In this poster we present the Silk Link Discovery Framework and report on two usage examples in which we employed Silk to generate links between two data sets about movies as well as to find duplicate persons in a stream of data items that is crawled from the Web.
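A link specification in this style pairs a comparison metric over selected properties with an acceptance threshold. A minimal stand-in using stdlib string similarity — this is an illustration of the idea, not Silk's actual specification language, and the movie URIs are made up:

```python
from difflib import SequenceMatcher

def generate_links(source, target, threshold=0.9):
    """Emit owl:sameAs candidates between two data sets by comparing
    labels, mimicking a user-provided link specification
    (metric = normalized string similarity, acceptance = threshold)."""
    links = []
    for s_uri, s_label in source.items():
        for t_uri, t_label in target.items():
            score = SequenceMatcher(None, s_label.lower(),
                                    t_label.lower()).ratio()
            if score >= threshold:
                links.append((s_uri, "owl:sameAs", t_uri))
    return links

movies_a = {"http://example.org/a/film1": "The Third Man"}
movies_b = {"http://example.org/b/m42": "the third man",
            "http://example.org/b/m43": "The Thin Man"}
links = generate_links(movies_a, movies_b)
print(links)
```

The same machinery, run within a single data set, flags probable duplicates — the second usage example the poster reports.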
-
T2LD: Interpreting and Representing Tables as Linked Data
,
Varish Mulwad,Tim Finin,Zareen Syed and Anupam Joshi
,
[OpenAccess]
,
[Publisher]
We describe a framework and prototype system for interpreting tables and extracting entities and relations from them, and producing a linked data representation of the table's contents. This can be used to annotate the table or to add new facts to the linked data collection.
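Once column headers have been interpreted as properties, producing the linked data representation amounts to mapping each row to triples about one entity. A toy sketch of that final step, with made-up URIs and a deliberately naive URI-minting rule:

```python
def row_to_triples(row, column_props, base="http://example.org/resource/"):
    """Turn one table row into triples: the first cell names the entity,
    the remaining cells become values of the properties assigned to
    their columns during table interpretation."""
    subject = base + row[0].replace(" ", "_")
    return [(subject, column_props[i], row[i]) for i in range(1, len(row))]

# Hypothetical result of interpreting the header row of a country table.
header_props = {1: "http://example.org/prop/capital",
                2: "http://example.org/prop/population"}
triples = row_to_triples(["Finland", "Helsinki", "5500000"], header_props)
print(triples)
```

The hard part the paper addresses — linking "Finland" to an existing entity rather than minting a fresh URI — is precisely what this naive rule skips.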
-
Text Based Similarity Metrics and Delta for Semantic Web Graphs
,
Krishnamurthy Viswanathan and Tim Finin
,
[OpenAccess]
,
[Publisher]
Recognizing that two Semantic Web documents or graphs are similar, and characterizing their differences, is useful in many tasks, including retrieval, updating, version control and knowledge base editing. We describe several text-based similarity metrics that characterize the relation between Semantic Web graphs and evaluate these metrics for three specific cases of similarity: similarity in classes and properties, similarity disregarding differences in base-URIs, and versioning relationship. We apply these techniques to a specific use case: generating a delta between versions of a Semantic Web graph. We have evaluated our system on several tasks using a collection of graphs from the archive of the Swoogle Semantic Web search engine.
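One simple member of this family of text-based metrics treats each graph's serialized triples as a set of tokens and compares the sets; a delta can then be read off as set differences. A Jaccard-style sketch over hypothetical two-version data (not the paper's actual metrics):

```python
def graph_tokens(triples):
    """Flatten a set of (s, p, o) triples into the set of terms in their
    textual form -- a simple text-based view of the graph."""
    return {term for triple in triples for term in triple}

def jaccard_similarity(g1, g2):
    """Jaccard coefficient of the two graphs' token sets."""
    a, b = graph_tokens(g1), graph_tokens(g2)
    return len(a & b) / len(a | b) if a | b else 1.0

def delta(old, new):
    """Naive graph delta: triples to retract and triples to add."""
    return {"retract": old - new, "add": new - old}

v1 = {("ex:doc", "dc:title", "Draft"), ("ex:doc", "dc:creator", "ex:alice")}
v2 = {("ex:doc", "dc:title", "Final"), ("ex:doc", "dc:creator", "ex:alice")}
print(round(jaccard_similarity(v1, v2), 2))
print(delta(v1, v2))
```

Base-URI-insensitive comparison would normalize URIs before tokenizing; a high similarity with a small, one-sided delta is the signature of a versioning relationship.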
-
The Catalogus Professorum Lipsiensis - Semantics-based Collaboration and Exploration for Historians
,
Thomas Riechert,Ulf Morgenstern,Sören Auer,Sebastian Tramp and Michael Martin
,
[OpenAccess]
,
[Publisher]
The World Wide Web (WWW), as a ubiquitous medium for publication and exchange, has already significantly influenced the way historians work: the availability of public catalogs and bibliographies enables efficient research of relevant content for a certain investigation; the increasing digitization of works from historical archives and libraries, in addition, enables historians to directly access historical sources remotely. The capabilities of the WWW as a medium for collaboration, however, are only starting to be explored. Many historical questions are only answerable by combining information from different sources, researchers and organizations. Furthermore, after analyzing original sources, the derived information is often more comprehensive than can be captured by simple keyword indexing. In [3] we report on the application of an adaptive, semantics-based knowledge engineering approach to the development of a prosopographical knowledge base. In this demonstration we showcase the comprehensive prosopographical knowledge base and its potential for applications. In prosopographical research, historians analyze common characteristics of historical groups by studying statistically relevant quantities of individual biographies. Untraceable periods of biographies can be determined on the basis of such analyses, in combination with statistical examinations as well as patterns of relationships between individuals and their activities. In our case, researchers from the Historical Seminar at the University of Leipzig aimed at creating a prosopographical knowledge base about the life and work of professors in the 600-year history of the University of Leipzig, ranging from 1409 to 2009 - the Catalogus Professorum Lipsiensis (CPL).
-
The Polish interface for Linked Open Data
,
Aleksander Pohl
,
[OpenAccess]
,
[Publisher]
This paper describes an application which aims at producing Polish descriptions for the data available as Linked Open Data, the MusicBrainz knowledge base contents in particular.
-
The eCloudManager Intelligence Edition - Semantic Technologies for Enterprise Cloud Management
,
Peter Haase,Tobias Mathäß,Michael Schmidt,Andreas Eberhart and Ulrich Walther
,
[OpenAccess]
,
[Publisher]
Enterprise clouds apply the paradigm of cloud computing to enterprise IT infrastructures, with the goal of providing easy, flexible, and scalable access to both computing resources and IT services. Realizing the vision of the fully automated enterprise cloud involves addressing a range of technological challenges. In this demonstration, we show how semantic technologies can help to address the challenges related to intelligent information management in enterprise clouds. In particular, we address the topics of data integration, collaborative documentation and annotation, and intelligent information access and analytics, and demonstrate solutions that are implemented in the newest addition to our eCloudManager product suite: the Intelligence Edition.
-
Toward Seoul Road Sign Management on LarKC Platform
,
Tony Kyung-il Lee,Stanley Park,Zhisheng Huang and Emanuele Della Valle
,
[OpenAccess]
,
[Publisher]
Geo-semantic technology is regarded as the core technology for supporting interoperability of geospatial data and building an urban computing environment. We semantically integrated LOD's LinkedGeoData and OpenStreetMap with a Korean POI data set, and have been researching the development of an intelligent road sign management system based on the LarKC platform.
-
Towards Linked Data Services
,
Sebastian Speiser and Andreas Harth
,
[OpenAccess]
,
[Publisher]
Large amounts of data reside in sources which are not web-accessible. Wrappers - small software programs that provide uniform access to data - are often used to transform legacy data sources for use on the Semantic Web. Wrappers, as well as links between data from wrapped sources and data that already exists on the Web, are typically created in an ad-hoc fashion. We propose a principled approach to integrating data-providing services with Linked Data. Linked Data Services (LIDS) can be used in various application scenarios to provide uniform access to legacy data and enable automatic interlinkage with existing data sets.
-
Towards Stable Semantic Ontology Measurement
,
Yinglong Ma
,
[OpenAccess]
,
[Publisher]
Stable semantic ontology measurement is crucial for obtaining significant and comparable measurement results. In this paper, we summarize the definition of ontology measurement stability and the preprocessing required for stable semantic ontology measurement from [5]. We also describe two existing ontology metrics and, for each of them, compare their stability from the perspectives of structural and semantic ontology measurement, respectively. The experiments show that some structural ontology measurements may be unusable when we want to compare the measurements of different models, unless pre-processing of the models is performed.
-
Using the Annotation Ontology in Semantic Digital Libraries
,
Leyla Jael García-Castro,Olga L. Giraldo and Alexander García Castro
,
[OpenAccess]
,
[Publisher]
The Living Document Project aims to harness the collective knowledge within communities in digital libraries, making it possible to enhance knowledge discovery and dissemination as well as to facilitate interdisciplinary collaborations amongst readers. Here we present a prototype that allows users to annotate content within digital libraries; the annotation schema is built upon the Annotation Ontology; data is available as RDF, making it possible to publish it as linked data and use SPARQL and SWRL for querying, reasoning, and processing. Our demo illustrates how a social tagging system could be used within the context of digital libraries in life sciences so that users are able to better organize, share, and discover knowledge embedded in research articles. Availability: http://www.biotea.ws/videos/ld_ao/ld_ao.html
-
Visual Reasoning about Ontologies
,
John Howse,Gem Stapleton and Ian Oliver
,
[OpenAccess]
,
[Publisher]
We explore a diagrammatic logic suitable for specifying ontologies using a case study. Diagrammatic reasoning is used to establish consequences of the ontology.
-
Visualizing Populated Ontologies with OntoTrix
,
Benjamin Bach,Gennady Legostaev and Emmanuel Pietriga
,
[OpenAccess]
,
[Publisher]
Most tools for visualizing Semantic Web data structure the representation according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. The visualization of instance-level data poses different but significant challenges, as instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. We present a visualization technique designed to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties, exploiting ontological knowledge to drive the layout of, and navigation in, the visualization.
-
WSML2Reasoner - A Comprehensive Reasoning Framework for the Semantic Web
,
Reto Krummenacher,Daniel Winkler and Adrian Marte
,
[OpenAccess]
,
[Publisher]
The amount of data on the Internet is rapidly growing. Formal languages are used to annotate such data in order to make it machine-understandable; i.e., to allow machines to reason about it, to check consistency, to answer queries, or to infer new facts. Essential for this are formalisms that allow for tractable and efficient reasoning algorithms. Particular care is demanded in efficiently responding to the trade-off between expressivity and usefulness. The updated Web Ontology Language (OWL 2) provides dialects that are restricted in their semantic expressivity to optimize reasoning behavior, e.g., the OWL 2 EL or OWL 2 RL profiles. Such dialects are very important for responding to the aforementioned trade-off. Profiles reflect particular requirements and yield a purposeful balance between expressivity and computational complexity. Support for dialects is given not only in OWL 2, but also in the Rule Interchange Format (RIF) standards. RIF specifies formalisms for knowledge exchange between different rule systems. The same applies to the WSML language, which provides variants for Description Logic and rule-based reasoning. The goal remains the same: formalisms that are expressive enough to be useful, while exhibiting reasoning characteristics that can scale to the size of the Web. Leveraging this is exactly the objective of the WSML2Reasoner framework. In Section 2 we present WSML2Reasoner and our reasoners IRIS and Elly.
-
WebProtégé: Supporting the Creation of ICD-11
,
Sean M. Falconer,Tania Tudorache,Csongor Nyulas,Natalya Fridman Noy and Mark A. Musen
,
[OpenAccess]
,
[Publisher]
WebProtégé is a highly customizable Web interface for browsing and editing ontologies, which provides support for collaboration. We have created a customized version to support the World Health Organization in the collaborative development of the 11th revision of the International Classification of Diseases (ICD-11). Our demo will present this customized version and focus on how content creation and collaboration are supported in WebProtégé for the development of ICD-11.
-
xhRank: Ranking Entities on the Semantic Web
,
Xin He and Mark Baker
,
[OpenAccess]
,
[Publisher]
In general, ranking entities (resources) on the Semantic Web (SW) depends on importance, relevance, and query length. Few existing SW search systems cover all of these aspects. Moreover, many existing efforts simply reuse technologies from conventional Information Retrieval (IR), which are not designed for SW data. This paper proposes a ranking mechanism which includes all three categories of ranking and is tailored to SW data.