A foundational Haxe framework for cross-platform development
Updated Oct 10, 2024 - JavaScript
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Python implementation of two low-light image enhancement techniques via illumination map estimation
Qt-DAB, a general software DAB (DAB+) decoder with a (slight) focus on showing the signal
ProjectFNF is a mostly quality-of-life engine for Friday Night Funkin'. It is easy to understand and super flexible.
InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library based on PaddlePaddle.
Build a scikit-learn model to predict churn using customer telco data.
C# LIME protocol implementation
Multicycles.org aggregates, on one map, more than 300 shared vehicle services such as bikes, scooters, mopeds, and cars. Demo app for the Data Flow API; see https://flow.fluctuo.com
Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification
A port of Friday Night Funkin' v0.2.8 made by rebuilding the code via reverse engineering.
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
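Several of the repositories above build on the same core LIME idea: sample perturbations around one instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. As a rough illustration only (a minimal self-contained sketch with hypothetical helper names, not the code from any of the listed projects), that loop can look like:

```python
import math
import random

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_explain(predict, instance, num_samples=500, kernel_width=0.75, seed=0):
    """Hypothetical LIME-style sketch: perturb `instance`, weight each
    sample by an exponential proximity kernel, then fit a weighted
    linear model whose per-feature coefficients are the explanation."""
    rng = random.Random(seed)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(num_samples):
        z = [x + rng.gauss(0, 1) for x in instance]          # perturbation
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(z, instance)))
        w.append(math.exp(-(dist ** 2) / (kernel_width ** 2)))  # proximity
        X.append(z)
        y.append(predict(z))                                  # black-box query
    # Weighted least squares via the normal equations (with intercept).
    A = [[1.0] + row for row in X]
    k = d + 1
    AtWA = [[sum(w[n] * A[n][i] * A[n][j] for n in range(num_samples))
             for j in range(k)] for i in range(k)]
    AtWy = [sum(w[n] * A[n][i] * y[n] for n in range(num_samples))
            for i in range(k)]
    coef = solve(AtWA, AtWy)
    return coef[1:]  # per-feature local weights (intercept dropped)
```

For an exactly linear black box such as `lambda z: 3.0 * z[0] - 2.0 * z[1]`, the fitted surrogate recovers the true coefficients; for a nonlinear model the coefficients are only a local approximation, which is precisely the point of LIME.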
Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
Local explanations with uncertainty 💐!