localModel

LIME-Based Explanations with Interpretable Inputs Based on Ceteris Paribus Profiles

v0.5 · Sep 14, 2021 · GPL

Description

Local explanations of machine learning models describe how features contributed to a single prediction. This package implements an explanation method based on LIME (Local Interpretable Model-agnostic Explanations; see Tulio Ribeiro, Singh, Guestrin (2016) <doi:10.1145/2939672.2939778>) in which interpretable inputs are created based on the local rather than global behaviour of each original feature.
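The workflow above can be sketched as follows. This is a minimal example assuming the package's main entry point is `individual_surrogate_model()` applied to a DALEX explainer, as described in the package documentation; argument values (`size`, `seed`) and the `apartments` demo data are illustrative, so verify the exact signatures against the reference manual.

```r
library(DALEX)       # provides explain() and the apartments demo data
library(localModel)  # provides individual_surrogate_model()

# Fit any predictive model; a linear model keeps the example simple
model <- lm(m2.price ~ ., data = apartments)

# Wrap the model in a DALEX explainer (model-agnostic interface)
explainer <- explain(model,
                     data = apartments[, -1],
                     y = apartments$m2.price)

# Build a LIME-style local surrogate around one observation;
# interpretable inputs are derived from ceteris paribus profiles
local_expl <- individual_surrogate_model(explainer,
                                         apartments[5, -1],
                                         size = 500,  # sampled points (assumed)
                                         seed = 17)   # reproducibility (assumed)

# Inspect which interpretable inputs drive this single prediction
plot(local_expl)
```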

Downloads

Last 30 days: 2.6K (rank 2649th)
Last 90 days: 8K
Last year: 25.7K
Trend: -4.3% (30 days vs. prior 30 days)

CRAN Check Status

All 14 flavors OK:

Flavor Status
r-devel-linux-x86_64-debian-clang OK
r-devel-linux-x86_64-debian-gcc OK
r-devel-linux-x86_64-fedora-clang OK
r-devel-linux-x86_64-fedora-gcc OK
r-devel-macos-arm64 OK
r-devel-windows-x86_64 OK
r-oldrel-macos-arm64 OK
r-oldrel-macos-x86_64 OK
r-oldrel-windows-x86_64 OK
r-patched-linux-x86_64 OK
r-release-linux-x86_64 OK
r-release-macos-arm64 OK
r-release-macos-x86_64 OK
r-release-windows-x86_64 OK

Check History

14 OK · 0 NOTE · 0 WARNING · 0 ERROR · 0 FAILURE (Mar 10, 2026)

Reverse Dependencies (1)

suggests

Dependency Network

Dependencies: glmnet, DALEX, ggplot2, partykit, ingredients
Reverse dependencies: DALEXtra

Version History

new 0.5 · Mar 10, 2026
updated 0.5 ← 0.3.12 · Sep 13, 2021
updated 0.3.12 ← 0.3.11 · Dec 17, 2019
new 0.3.11 · Apr 13, 2019