
shap.plots.force — SHAP latest documentation
The base value should be the value of explainer.expected_value. However, it is recommended to pass in a SHAP Explanation object instead, in which case shap_values is not necessary.
Image examples — SHAP latest documentation
Image examples These examples explain machine learning models applied to image data. They are all generated from Jupyter notebooks available on GitHub.
Explaining quantitative measures of fairness — SHAP latest documentation
By using SHAP (a popular explainable AI tool) we can decompose measures of fairness and allocate responsibility for any observed disparity among each of the model’s input features.
Be careful when interpreting predictive models in search of causal insights
SHAP and other interpretability tools can be useful for causal inference, and SHAP is integrated into many causal inference packages, but those use cases are explicitly causal in nature.
Release notes — SHAP latest documentation
Nov 11, 2025 · This release incorporates many changes that were originally contributed by the SHAP community via @dsgibbons 's Community Fork, which has now been merged into the main shap repository.
scatter plot — SHAP latest documentation
The y-axis is the SHAP value for that feature (stored in explanation.values), which represents how much knowing that feature’s value changes the output of the model for that sample’s prediction.
shap.plots.waterfall — SHAP latest documentation
The waterfall plot is designed to visually display how the SHAP values (evidence) of each feature move the model output from our prior expectation under the background data distribution to the final model prediction.
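The additivity behind the waterfall plot can be sketched in a few lines of numpy. For a linear model with independent features, the exact SHAP value of feature i is w_i * (x_i - mean_i), so walking from the expected value through each feature's SHAP value lands exactly on the prediction (the weights, intercept, and data here are made-up illustration values, not anything from the SHAP docs):

```python
import numpy as np

# Hypothetical linear model f(x) = b + w.x and a synthetic background set.
rng = np.random.default_rng(0)
w, b = np.array([0.5, -2.0, 1.5]), 4.0
background = rng.normal(size=(100, 3))
x = np.array([1.0, 0.2, -0.5])

mean = background.mean(axis=0)
base_value = b + w @ mean            # E[f(X)]: the prior expectation
shap_values = w * (x - mean)         # exact SHAP values for a linear model

# The waterfall walks from base_value to f(x), one feature at a time.
running = base_value
for i, phi in enumerate(shap_values):
    running += phi
    print(f"after feature {i}: {running:+.4f}")

# The walk ends at the model's actual prediction.
assert np.isclose(running, b + w @ x)
```

This additivity (base value plus the sum of SHAP values equals the prediction) is what the waterfall plot visualizes bar by bar.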
Iris classification with scikit-learn — SHAP latest documentation
Using 120 background data samples could cause slower run times. Consider using shap.sample(data, K) or shap.kmeans(data, K) to summarize the background as K samples.
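The warning above is about shrinking the background set before explaining. A minimal sketch of the random-subset option, in the spirit of shap.sample(data, K) (summarize_background is a hypothetical helper, not the library's implementation):

```python
import numpy as np

def summarize_background(data, k, seed=None):
    """Down-sample a background dataset to k rows by random choice
    without replacement, in the spirit of shap.sample(data, k)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=k, replace=False)
    return data[idx]

# 120 background samples summarized down to 10 rows.
data = np.random.default_rng(1).normal(size=(120, 4))
summary = summarize_background(data, 10, seed=2)
print(summary.shape)  # (10, 4)
```

shap.kmeans(data, K) takes the alternative route of clustering the background into K representative centroids rather than sampling rows.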
shap.KernelExplainer — SHAP latest documentation
Uses the Kernel SHAP method to explain the output of any function. Kernel SHAP is a method that uses a special weighted linear regression to compute the importance of each feature.
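The weighted linear regression behind Kernel SHAP can be sketched in pure numpy for a small number of features: enumerate every coalition, evaluate the model with coalition features taken from the sample and the rest from a background point, weight each coalition by the Shapley kernel, and fit a weighted least-squares regression (this is an exhaustive illustration under one background sample, not the library's optimized implementation):

```python
import itertools
import math
import numpy as np

def kernel_shap(f, x, background):
    """Approximate SHAP values for one sample x by Kernel SHAP's
    weighted linear regression, enumerating all coalitions
    (practical only for a handful of features)."""
    M = len(x)
    masks, weights, outputs = [], [], []
    for size in range(M + 1):
        for subset in itertools.combinations(range(M), size):
            z = np.zeros(M)
            z[list(subset)] = 1.0
            # Shapley kernel weight; the empty and full coalitions get a
            # large weight to softly enforce the additivity constraints.
            if size == 0 or size == M:
                w = 1e6
            else:
                w = (M - 1) / (math.comb(M, size) * size * (M - size))
            # Coalition features take x's values; the rest are filled in
            # from the background sample.
            mixed = np.where(z == 1, x, background)
            masks.append(z)
            weights.append(w)
            outputs.append(f(mixed))
    Z = np.column_stack([np.ones(len(masks)), np.array(masks)])
    W = np.diag(weights)
    # Weighted least squares: phi = (Z^T W Z)^-1 Z^T W y
    phi = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ np.array(outputs))
    return phi[0], phi[1:]  # (base value, per-feature SHAP values)

# For a linear model the regression recovers the exact attributions.
base, phi = kernel_shap(lambda v: 2 * v[0] + 3 * v[1] - v[2],
                        np.array([1.0, 2.0, 3.0]), np.zeros(3))
print(base, phi)
```

For the linear model above, the recovered SHAP values are w_i * x_i relative to the zero background, and base + sum(phi) reproduces the model output, which is the property the regression's constraints enforce.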
shap.plots.scatter — SHAP latest documentation
Plots the value of the feature on the x-axis and the SHAP value of the same feature on the y-axis. This shows how the model depends on the given feature, and is like a richer extension of classical partial dependence plots.