LIME (local interpretable model-agnostic explanations) is an explainability method used to inspect machine learning models. Also include a tag for Python, R, etc., depending on which LIME implementation you are using.
LIME (local interpretable model-agnostic explanations) is an explainability method used to inspect machine learning models and debug their predictions. It was originally proposed in "Why Should I Trust You?": Explaining the Predictions of Any Classifier (Ribeiro et al., NAACL 2016) as a model-agnostic way to explain individual predictions, with demonstrations on natural language processing tasks. The approach has since been implemented in several software packages, and it inspired later "explainable machine learning" methods such as SHAP.
Related Concepts
- machine-learning
- shap
- partial-dependence-plots
LIME Implementations
Implementations of this approach exist in several software packages; a short usage sketch with the Python lime package appears after the list below.
Python
lime
: https://github.com/marcotcr/lime
eli5
: https://eli5.readthedocs.io/en/latest/
shap
: https://shap.readthedocs.io/en/latest/index.html
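As a minimal sketch of how the lime package can be used (the scikit-learn dataset and random forest here are illustrative choices, not requirements of the package), the following explains a single prediction of a tabular classifier:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Illustrative data and black-box model; LIME only needs a predict_proba function.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Build an explainer on the training data so LIME can perturb features sensibly.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one test instance: LIME fits a local, interpretable surrogate model
# on perturbed samples around the instance and reports the most influential features.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

The printed list pairs each of the top features with its weight in the local surrogate model fitted around the chosen instance.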
R
lime
: https://github.com/thomasp85/lime
Further Reading
- Marco Tulio Ribeiro, LIME - Local Interpretable Model-Agnostic Explanations
- Christoph Molnar, "Interpretable Machine Learning", chapter 9: Local Surrogate (LIME)
- Przemyslaw Biecek and Tomasz Burzykowski, "Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models," chapter 9: Local Interpretable Model-agnostic Explanations (LIME)
- C3.ai: What is Local Interpretable Model Agnostic Explanations (LIME)?