
Reasoner Evaluation


Logical Reasoners

This page describes the evaluation we performed to understand the impact of applying different logical reasoners.



OWL Reasoner Selection Criteria

Using the two reviews cited below, we selected reasoners that met the following criteria:

  1. Low response time
  2. Available via the OWLAPI (see the sketch after the references)
  3. Open source

Khamparia A, Pandey B. Comprehensive analysis of semantic web reasoners and tools: a survey. Education and Information Technologies. 2017 Nov 1;22(6):3121-45.

Parsia B, Matentzoglu N, Gonçalves RS, Glimm B, Steigmiller A. The OWL reasoner evaluation (ORE) 2015 competition report. Journal of Automated Reasoning. 2017 Dec 1;59(4):455-82.
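
As a concrete illustration of criterion 2, the sketch below loads an ontology and classifies it with ELK through the OWLAPI. This example is ours, not from the reviews above: the path ontology.owl is a placeholder, and ELK stands in for any reasoner that ships an OWLReasonerFactory.

```java
import java.io.File;
import org.semanticweb.elk.owlapi.ElkReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class ClassifyWithElk {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // "ontology.owl" is a placeholder path.
        OWLOntology ontology =
                manager.loadOntologyFromOntologyDocument(new File("ontology.owl"));
        // Any reasoner meeting criterion 2 exposes an OWLReasonerFactory,
        // so swapping reasoners only changes this one line.
        OWLReasoner reasoner = new ElkReasonerFactory().createReasoner(ontology);
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        System.out.println("Consistent: " + reasoner.isConsistent());
        reasoner.dispose();
    }
}
```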


Eligible Reasoners

Reasoner   Language   Available via OWLTools
ELK        EL         Yes
ELepHant   EL         No
RACER      DL         No
Pellet     DL         Yes
FaCT++     DL         No
Chainsaw   DL         No
Konclude   DL         No
Crack      DL         No
TrOWL      DL+EL      No
MORe       DL+EL      No


Evaluation

  1. Benchmark each reasoner on HPO + imports (a minimal benchmarking sketch follows this list)

    • Run-time
    • Justifications
    • Count of inferred axioms
    • Consistency
  2. For all reasoners that pass the benchmark, run them against the PheKnowLator knowledge graph (see the disjointness sketch below)

    • Including disjointness axioms
    • Excluding disjointness axioms
  3. Clinician evaluation via @jwyrwa

    • Create a spreadsheet of the inferred axioms, organized by reasoner, and mark each axiom as:
      • Correct / Incorrect
      • Definitely clinically relevant / Maybe clinically relevant / Not clinically relevant
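
A minimal sketch of how step 1's run-time, consistency, and inferred-axiom count could be collected through the OWLAPI (assuming OWLAPI 4.x; hp.owl is a placeholder for HPO plus its imports, and ElkReasonerFactory stands in for whichever reasoner is under test). Justifications are omitted because they require a separate explanation library:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.semanticweb.elk.owlapi.ElkReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;
import org.semanticweb.owlapi.util.InferredAxiomGenerator;
import org.semanticweb.owlapi.util.InferredOntologyGenerator;
import org.semanticweb.owlapi.util.InferredSubClassAxiomGenerator;

public class ReasonerBenchmark {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // "hp.owl" is a placeholder for HPO plus its imports closure.
        OWLOntology ontology =
                manager.loadOntologyFromOntologyDocument(new File("hp.owl"));
        // Swap in another OWLReasonerFactory to benchmark a different reasoner.
        OWLReasonerFactory factory = new ElkReasonerFactory();
        OWLReasoner reasoner = factory.createReasoner(ontology);

        // Run-time: wall-clock time to classify the ontology.
        long start = System.nanoTime();
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Consistency.
        boolean consistent = reasoner.isConsistent();

        // Count of inferred axioms: materialize inferred subclass axioms
        // into a fresh ontology and count what was produced.
        List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
        gens.add(new InferredSubClassAxiomGenerator());
        OWLOntology inferred = manager.createOntology();
        new InferredOntologyGenerator(reasoner, gens)
                .fillOntology(manager.getOWLDataFactory(), inferred);

        System.out.printf("time=%dms consistent=%b inferred=%d%n",
                elapsedMs, consistent, inferred.getAxiomCount());
        reasoner.dispose();
    }
}
```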
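
For step 2, one way to produce the "excluding disjointness axioms" condition is to strip all class-disjointness axioms before reasoning. A sketch under the same OWLAPI 4.x assumption; pheknowlator.owl is a placeholder path:

```java
import java.io.File;
import java.util.Set;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.AxiomType;
import org.semanticweb.owlapi.model.OWLDisjointClassesAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class StripDisjointness {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // "pheknowlator.owl" is a placeholder for the PheKnowLator graph.
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new File("pheknowlator.owl"));
        // Collect and drop every class-disjointness axiom; reasoning over
        // the result gives the "excluding disjointness axioms" condition.
        Set<OWLDisjointClassesAxiom> disjoints =
                ontology.getAxioms(AxiomType.DISJOINT_CLASSES);
        manager.removeAxioms(ontology, disjoints);
        System.out.println("Removed " + disjoints.size() + " disjointness axioms");
    }
}
```

Reasoning over the stripped copy versus the original graph then yields the including/excluding comparison described in step 2.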