CAVES-dataset accepted at SIGIR'22
Tornado plots for model sensitivity analysis
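One way to build the data behind a tornado plot (a minimal sketch, not this repository's actual code; the model and bounds below are invented for illustration): vary each input one at a time between its low and high bound, record the output swing, and sort parameters by swing magnitude so the widest bar sits at the top of the chart.

```python
def tornado_data(model, baseline, bounds):
    """One-at-a-time sensitivity analysis for a tornado plot.

    model:    callable taking a dict of parameter values
    baseline: dict of nominal parameter values
    bounds:   dict mapping parameter name -> (low, high)
    Returns [(name, output_at_low, output_at_high)] sorted by
    absolute swing, widest first (the tornado shape).
    """
    rows = []
    for name, (lo, hi) in bounds.items():
        # Perturb one parameter at a time, holding the rest at baseline.
        out_lo = model({**baseline, name: lo})
        out_hi = model({**baseline, name: hi})
        rows.append((name, out_lo, out_hi))
    return sorted(rows, key=lambda r: abs(r[2] - r[1]), reverse=True)

# Toy model (an assumption for the example): profit = price * volume - cost
model = lambda p: p["price"] * p["volume"] - p["cost"]
baseline = {"price": 10, "volume": 100, "cost": 200}
bounds = {"price": (8, 12), "volume": (90, 110), "cost": (150, 250)}
rows = tornado_data(model, baseline, bounds)
# rows[0] is the most influential parameter; feed the low/high pairs
# to a horizontal bar chart to draw the tornado.
```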
Transform the way you work with Boolean logic by composing expressions from discrete propositions. This lets you dynamically generate custom output, such as explanations of the causes behind a result.
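The idea can be sketched in a few lines (a hypothetical `Prop`/`And` API invented for illustration, not the repository's actual interface): each discrete proposition carries a label, so a composed expression can report which propositions caused its result.

```python
class Prop:
    """A named discrete proposition with a truth value."""
    def __init__(self, name, value):
        self.name = name
        self.value = bool(value)

class And:
    """Conjunction of propositions that can explain why it is False."""
    def __init__(self, *props):
        self.props = props

    @property
    def value(self):
        return all(p.value for p in self.props)

    def explain(self):
        if self.value:
            return "holds: all propositions are true"
        # Collect the labels of the propositions that caused the failure.
        failed = [p.name for p in self.props if not p.value]
        return "fails because: " + ", ".join(failed)

rule = And(Prop("age >= 18", True), Prop("has_license", False))
print(rule.value)      # False
print(rule.explain())  # fails because: has_license
```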
A framework for evaluating natural language explanations of neurons.
We introduce XBrainLab, an open-source, user-friendly software package for accelerated interpretation of neural patterns from EEG data based on cutting-edge computational approaches.
Domestic robot example configured for the multi-level explainability framework
ML Pipeline. Detailed documentation of the project is in the README. Click on Actions to see the script.
The mechanisms behind image classification using a pretrained CNN model in high-dimensional spaces 🏞️
Code for ER-Test, accepted to the Findings of EMNLP 2022
[TMLR] "Can You Win Everything with Lottery Ticket?" by Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, Zhangyang Wang
Comprehensible Convolutional Neural Networks via Guided Concept Learning
(WWW'21) ATON - an Outlier Interpretation / Outlier explanation method
TS4NLE converts the explanation of an eXplainable AI (XAI) system into natural language utterances comprehensible to humans.
A project in an AI seminar
Experiments to explain entity resolution systems
Year-wise list of papers in the area of Explainable Artificial Intelligence
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.