Digital Resources for Chapter 11: Interpretation of Machine Learning Results

Below are digital resources that complement the book Practical Machine Learning with R: Tutorials and Case Studies.



The Ultimate Guide to PDPs and ICE Plots


A comprehensive tutorial by Conor O’Sullivan in Towards Data Science. It explains the intuition and the mathematics behind Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots.
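The core idea the tutorial covers can be sketched in a few lines. Although the book works in R, the following is a minimal, language-agnostic illustration in Python (the toy `model_predict` function and data are made up for illustration, not taken from the tutorial): an ICE curve varies one feature over a grid for a single observation while holding the others fixed, and the PDP is the pointwise average of all ICE curves.

```python
import numpy as np

# Toy stand-in model; in practice this would be a fitted model's predict function.
def model_predict(X):
    return 2.0 * X[:, 0] + np.sin(X[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 2))

def ice_and_pdp(predict, X, feature, grid):
    """ICE: one curve per observation, varying `feature` over `grid`
    while holding the other features at their observed values.
    PDP: the pointwise average of the ICE curves."""
    ice = np.empty((X.shape[0], len(grid)))
    for j, v in enumerate(grid):
        Xmod = X.copy()
        Xmod[:, feature] = v           # set the feature to the grid value
        ice[:, j] = predict(Xmod)      # predict for every observation
    return ice, ice.mean(axis=0)       # ICE curves, PDP curve

grid = np.linspace(-2, 2, 21)
ice, pdp = ice_and_pdp(model_predict, X, feature=0, grid=grid)
```

Because the toy model is additive in feature 0 with slope 2, the resulting PDP is a straight line with that slope; R packages such as `pdp` and `iml` perform exactly this bookkeeping for real fitted models.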






Machine Learning Made Simple: Permutation Based Feature Importance


A YouTube video by Davnsh Senthi. In the video, he explains the underlying idea of Permutation Based Feature Importance step by step with an example.
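The idea the video walks through can be sketched directly: shuffle one feature column to break its association with the target, re-score the model, and treat the resulting increase in error as that feature's importance. The model and data below are made-up stand-ins for illustration, not taken from the video.

```python
import numpy as np

# Toy data: y depends strongly on feature 0, weakly on feature 1, not on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "model"; any fitted model's predict function works the same way.
def predict(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Importance of a feature = average increase in error after shuffling
    that feature's column, which severs its link to the target."""
    rng = np.random.default_rng(seed)
    baseline = mse(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xperm = X.copy()
            rng.shuffle(Xperm[:, j])   # shuffle one column only
            scores.append(mse(y, predict(Xperm)) - baseline)
        importances.append(float(np.mean(scores)))
    return importances

imp = permutation_importance(predict, X, y)
```

The importances come out ordered as expected: feature 0 largest, feature 1 small, feature 2 (unused by the model) zero. Note that the technique measures importance to *this model's* error, not causal importance in the data.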






How to Create Your Own SHAP Algorithm in R


This blog post from Carsten Lange's AI blog shows how you can build a simplified SHAP value approximation in R.

The post explains the code, discusses drawbacks, and points to alternatives that are available as R packages. The related R script is also provided.
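As a rough illustration of what such an approximation involves (the post's own code and method may differ), the classic sampling approach estimates a Shapley value by averaging, over random feature orderings and random background rows, how much the prediction changes when the feature of interest switches from a background value to the explained instance's value. A minimal Python sketch with a made-up additive model:

```python
import numpy as np

rng = np.random.default_rng(0)
X_bg = rng.normal(size=(200, 3))  # background data the expectations average over

# Toy additive model; any predict function can be plugged in.
def predict(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1]

def shap_sample(predict, X_bg, x, n_samples=2000, seed=1):
    """Monte Carlo Shapley approximation: walk through a random feature
    ordering, revealing x's features one at a time on top of a random
    background row, and credit each feature with the prediction change."""
    rng = np.random.default_rng(seed)
    p = len(x)
    phi = np.zeros(p)
    for _ in range(n_samples):
        order = rng.permutation(p)
        z = X_bg[rng.integers(len(X_bg))].copy()   # random background row
        for j in order:
            before = predict(z[None, :])[0]
            z[j] = x[j]                            # reveal feature j
            after = predict(z[None, :])[0]
            phi[j] += after - before
    return phi / n_samples

x = np.array([1.0, 1.0, 0.0])
phi = shap_sample(predict, X_bg, x)
```

For this additive model the estimates approach 2·(x₀ − mean background x₀) and −1·(x₁ − mean background x₁), and the contribution of the unused third feature is exactly zero. The drawback the post alludes to is visible here: the sampling loop is slow, which is why optimized packages exist.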






LIME: Explain Machine Learning Predictions: Intuition and Geometrical Interpretation


An article in Towards Data Science by Giorgio Visani. The article explains visually and intuitively how LIME works.
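The geometric intuition LIME builds on can be condensed into a short sketch (Python, with a made-up black-box model; the article itself contains no code): sample points around the instance to be explained, weight them by proximity, and fit a weighted linear surrogate whose slopes serve as the local explanation.

```python
import numpy as np

# Black-box model with a nonlinear decision surface (stand-in for any classifier).
def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1])))

def lime_explain(predict_proba, x, n_samples=5000, width=0.5, seed=0):
    """LIME in miniature: perturb around x, weight samples by proximity,
    fit a weighted linear model to the black box's outputs, and read off
    the coefficients as the local explanation."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, len(x)))     # local samples
    y = predict_proba(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))  # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), Z])                   # intercept + features
    # Weighted least squares: solve (A' W A) coef = A' W y
    AW = A * w[:, None]
    coef = np.linalg.solve(AW.T @ A, AW.T @ y)
    return coef[1:]  # local slopes, one per feature

coef = lime_explain(predict_proba, x=np.array([1.0, 0.0]))
```

At the point (1, 0), the surrogate's slopes recover the local behavior of the black box: increasing the first feature raises the predicted probability, increasing the second lowers it, even though the global surface is nonlinear.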






Interpreting Machine Learning Models with the iml Package


With machine learning interpretability growing in importance, several R packages designed to provide this capability are gaining in popularity. This blog post examines the iml package and assesses its interpretability functionality; the author's earlier posts covered lime for model-agnostic local interpretability and DALEX for both local and global explanation plots. The post helps you decide whether iml should become part of your preferred machine learning toolbox.






Black-Box models are actually more explainable than a Logistic Regression


An article in Towards Data Science. It argues that SHAP values are hard to interpret on their own, but that, starting from them, it is possible to express a model's choices in terms of their impact on probability, a concept far more understandable for humans.
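One simple way to see why such a translation is possible (a sketch with made-up numbers, not necessarily the article's exact method): for many classifiers, SHAP values live in log-odds space, where contributions add up, and each contribution can then be converted into a change in probability through the sigmoid function.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Hypothetical SHAP decomposition in log-odds space: a baseline plus one
# contribution per feature (values invented for illustration).
base_log_odds = -1.0
shap_log_odds = {"age": 0.8, "income": -0.3, "region": 0.1}

# Log-odds contributions add up to the model's output for this instance ...
logit = base_log_odds + sum(shap_log_odds.values())
prob = sigmoid(logit)

# ... but a fixed log-odds step moves the probability by different amounts
# depending on where you start, so each feature's "impact on probability"
# is measured by removing its contribution and comparing probabilities.
impact = {
    name: prob - sigmoid(logit - phi)
    for name, phi in shap_log_odds.items()
}
```

Here "age" contributes +0.8 in log-odds, which near a 40% baseline translates to roughly a 17-percentage-point increase in predicted probability; a statement in those terms is far easier to communicate than the raw log-odds value.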






How to Interpret SHAP Values from a Vaccination Behavior Model


This blog post from Carsten Lange's AI blog accompanies the interactive section in this chapter. It explains how the SHAP values for the Random Forest vaccination model are created with the DALEX R package, and it provides the source code.






Explainable AI explained! #3 LIME


A detailed video about LIME from the DeepFindr video series. It is slightly more advanced than this book, but it covers the details of LIME with examples.