Below are digital resources that complement the book Practical Machine Learning with R: Tutorials and Case Studies.
A comprehensive tutorial by Conor O’Sullivan in Towards Data Science. It describes the intuition and maths behind Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots.
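To make the core idea concrete before reading the tutorial: an ICE curve traces one observation's prediction as a single feature is swept over a grid, and the PDP is simply the pointwise average of all ICE curves. A minimal sketch, using a made-up toy model rather than anything from the article (the tutorial's own examples use different data):

```python
import numpy as np

def toy_model(X):
    # Hypothetical model: nonlinear in feature 0, with an
    # interaction between features 0 and 1.
    return np.sin(X[:, 0]) + X[:, 0] * X[:, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))      # 50 observations, 2 features
grid = np.linspace(-2, 2, 21)     # grid of values for feature 0

# ICE: for each observation, vary feature 0 over the grid while
# holding its other feature values fixed.
ice = np.empty((len(X), len(grid)))
for i, row in enumerate(X):
    X_rep = np.tile(row, (len(grid), 1))
    X_rep[:, 0] = grid
    ice[i] = toy_model(X_rep)

# PDP: the pointwise average of all ICE curves.
pdp = ice.mean(axis=0)
```

Because the interaction term averages out, the PDP here equals sin(grid) plus grid times the mean of feature 1, while individual ICE curves can look quite different, which is exactly why the tutorial recommends looking at both.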
A YouTube video by Davnsh Senthi. In the video, he explains the underlying idea of Permutation-Based Feature Importance step by step with an example.
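The procedure the video walks through can be sketched in a few lines (a language-agnostic illustration, not the video's own code): shuffle one feature column, re-score the model, and take the drop in performance as that feature's importance.

```python
import numpy as np

def accuracy(y_true, y_pred):
    return np.mean(y_true == y_pred)

def permutation_importance(model_predict, X, y, n_repeats=10, seed=0):
    """Importance of each feature = average drop in accuracy
    after randomly shuffling that feature's column."""
    rng = np.random.default_rng(seed)
    baseline = accuracy(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break the feature/target link
            drops.append(baseline - accuracy(y, model_predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: the "model" uses only feature 0, so shuffling
# feature 1 should not change accuracy at all.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y)
```

Shuffling is repeated several times because a single permutation can be lucky or unlucky; averaging the drops stabilizes the estimate.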
This blog post from the AI blog of Carsten Lange shows how you can create a simplified SHAP value approximation in R.
The post explains the code, discusses its drawbacks, and presents alternatives that are available as R packages. The related R script is also provided.
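The post's R code is not reproduced here, but one common simplified SHAP approximation it relates to is Monte Carlo sampling over feature permutations. A sketch under that assumption (toy model and background data are made up for illustration):

```python
import numpy as np

def sample_shap(f, x, X_background, j, n_samples=2000, seed=0):
    """Monte Carlo estimate of the Shapley value of feature j for instance x.

    Each sample draws a background row and a random feature ordering,
    then builds two hybrid instances that differ only in feature j.
    The Shapley value is the average prediction gap between them.
    """
    rng = np.random.default_rng(seed)
    p = len(x)
    total = 0.0
    for _ in range(n_samples):
        z = X_background[rng.integers(len(X_background))]
        perm = rng.permutation(p)
        pos = np.where(perm == j)[0][0]
        x_with = z.copy()
        x_with[perm[: pos + 1]] = x[perm[: pos + 1]]  # x's values up to and incl. j
        x_without = x_with.copy()
        x_without[j] = z[j]                           # revert feature j to background
        total += f(x_with) - f(x_without)
    return total / n_samples

# Toy check against a linear model, where the exact Shapley value of
# feature j is coef[j] * (x[j] - mean(background[:, j])).
coef = np.array([2.0, -1.0, 0.5])
f = lambda v: float(v @ coef)
rng = np.random.default_rng(1)
Xb = rng.normal(size=(100, 3))
x = np.array([1.0, 1.0, 1.0])
phi0 = sample_shap(f, x, Xb, j=0)
```

The linear case is a useful sanity check precisely because the exact Shapley value is known in closed form there; for nonlinear models the estimate only converges as the sample count grows, which is the main drawback of sampling-based approximations.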
An article in Towards Data Science by Giorgio Visani. The article explains visually and intuitively how LIME works.
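The mechanism the article visualizes can be condensed into a small sketch (my own illustration, not the article's code, and not the lime R package): perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation.

```python
import numpy as np

def lime_explain(f, x, n_samples=5000, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate to f around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.3, size=(n_samples, len(x)))  # perturbed samples
    y = f(Z)
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width**2)                 # proximity kernel weights
    A = np.hstack([np.ones((n_samples, 1)), Z])  # intercept + features
    W = np.sqrt(w)[:, None]
    coefs, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coefs[1:]                           # per-feature local slopes

# Toy check: near x = (1, 0), f(z) = z0^2 + 3*z1 behaves like a
# linear function with gradient roughly (2, 3).
f = lambda Z: Z[:, 0] ** 2 + 3 * Z[:, 1]
x = np.array([1.0, 0.0])
local_coefs = lime_explain(f, x)
```

The recovered slopes approximate the model's local gradient, which is the article's central intuition: LIME does not explain the model globally, only a linear neighborhood around one prediction.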
A blog post that examines the iml package and assesses the machine learning interpretability functionality it provides. With interpretability growing in importance, several R packages designed for it are gaining in popularity; the author had previously assessed lime for model-agnostic local interpretability and DALEX for both local and global explanation plots. This post helps you determine whether iml should become part of your preferred machine learning toolbox.
SHAP values are not immediately comprehensible. But, starting from them, it is possible to express a model's choices in terms of impact on probability (a concept far more understandable for humans).
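For classifiers, SHAP contributions are often reported on the log-odds scale. A tiny sketch of the translation into probability, using made-up numbers rather than any model from the book:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values, both on the log-odds scale: a base value
# (average log-odds over the data) and one feature's SHAP contribution.
base_log_odds = -0.4
shap_contrib = 1.1

p_before = sigmoid(base_log_odds)                # probability at the base value
p_after = sigmoid(base_log_odds + shap_contrib)  # after adding the contribution
impact_on_probability = p_after - p_before
```

Note that this translation is only exact for one contribution added at the base value: unlike log-odds SHAP values, probability impacts are not additive across features, because the sigmoid is nonlinear.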
This blog post from the AI blog of Carsten Lange relates to the interactive section in this chapter. The post explains how the SHAP values for the Random Forest vaccination model are created with the DALEX R package, and it provides the source code.
A detailed video about LIME from the DeepFindr video series. The level is slightly higher than that of this book, but the details of LIME are covered with examples.