Top 5 Resources for Learning Shapley Values for Machine Learning


Shapley values are an attribution method from cooperative game theory, developed by the economist Lloyd Shapley. They have recently gained attention as a powerful way to explain the predictions of machine learning models: the approach is widely used and satisfies several desirable theoretical properties. In the lending industry, for instance, Shapley values are applied to ML models to explain why an applicant was refused a loan. This article introduces the reader to some of the best resources for learning about Shapley values in machine learning.
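For reference (stated here from the game-theoretic definition rather than from any one resource below), the Shapley value of a player i, in this setting a feature, averages that feature's marginal contribution over all coalitions S of the remaining features, where N is the full feature set and v is a value function mapping a feature subset to a payoff such as the model's prediction:

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \,\Bigl(v(S \cup \{i\}) - v(S)\Bigr)
\]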

Tutorials

Kaggle’s Machine Learning Explainability Course

Kaggle has a tutorial on SHAP values. SHAP values (short for SHapley Additive exPlanations) break down a prediction to show the impact of each feature. The tutorial explains how SHAP values work and how to interpret them, and shows how to compute them in code. Finally, it gives the learner a practice problem that can be solved by applying SHAP values.
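To illustrate the kind of workflow the tutorial walks through, here is a minimal sketch of computing SHAP values for a single prediction with the shap package; the dataset and model below are illustrative stand-ins, not the ones used in the Kaggle course.

```python
# Minimal sketch: SHAP values for one prediction of a tree-based model.
# The dataset and model are illustrative stand-ins, not the Kaggle course's.
import shap
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, data.target)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain the first row only

# Each entry is one feature's contribution relative to the expected model output
print(explainer.expected_value)
print(shap_values)
```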


Find the free tutorial here.

SHAP Python Tutorial on GitHub

The SHAP Python tutorial on GitHub is a practical introduction to explaining machine learning models with Shapley values. It is designed to help readers build a solid understanding of how to compute and interpret Shapley-based explanations of machine learning models. The main topics covered in the tutorial are:

  • An introduction to explainable AI with Shapley values
  • Being careful when interpreting predictive models in search of causal insights
  • Explaining quantitative measures of fairness

The tutorial shows how Shapley values are applied to text, tabular, genomics, and image examples. It is a living document and serves as an introduction to the shap Python package.
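To give a flavour of the package the tutorial documents, here is a minimal sketch of its high-level API on tabular data; the particular dataset and model are illustrative choices, not taken from the tutorial itself.

```python
# Minimal sketch of the shap package's unified Explainer API on tabular data.
# The dataset and model choices are illustrative, not taken from the tutorial.
import shap
import xgboost

# A demo dataset bundled with shap (adult census income)
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y.astype(int))

# shap.Explainer selects an appropriate algorithm for the given model
explainer = shap.Explainer(model)
shap_values = explainer(X)

# One local explanation and one global summary
shap.plots.waterfall(shap_values[0])
shap.plots.beeswarm(shap_values)
```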

Find the free tutorial here.

Video Talk by Fiddler AI

Fiddler AI released a how-to video series whose first video is devoted to Shapley values: their axioms, their challenges, and how they apply to the explainability of ML models. The 12-minute talk, uploaded to YouTube, was given by Dr Ankur Taly, head of data science at Fiddler Labs. An 82-slide PPT deck accompanies the talk.
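For quick reference, the axioms the talk builds on are the standard Shapley axioms from cooperative game theory (stated here from the theory itself, not from the talk's slides):

\[
\begin{aligned}
&\text{Efficiency:} && \textstyle\sum_{i \in N} \phi_i(v) = v(N) - v(\varnothing)\\
&\text{Symmetry:} && v(S \cup \{i\}) = v(S \cup \{j\}) \ \text{for all } S \subseteq N \setminus \{i, j\} \ \Rightarrow\ \phi_i(v) = \phi_j(v)\\
&\text{Null player:} && v(S \cup \{i\}) = v(S) \ \text{for all } S \subseteq N \setminus \{i\} \ \Rightarrow\ \phi_i(v) = 0\\
&\text{Linearity:} && \phi_i(v + w) = \phi_i(v) + \phi_i(w)
\end{aligned}
\]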

Books

Interpretable Machine Learning: A Guide to Making Black Box Models Explainable

By Christoph Molnar, 2020

This book by Christoph Molnar aims to make machine learning models and their decisions interpretable. Its chapters focus on model-agnostic methods for interpreting black-box models, such as feature importance and accumulated local effects, and on explaining individual predictions with Shapley values. Dedicated sections cover KernelSHAP, TreeSHAP, SHAP feature importance, SHAP dependence plots, SHAP summary plots, and the advantages and disadvantages of SHAP.
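Since the book singles out KernelSHAP as the model-agnostic estimator, here is a minimal sketch of it in the shap package; the model, background-sample size, and number of sampling iterations are illustrative choices, not the book's.

```python
# Minimal sketch of model-agnostic KernelSHAP via the shap package.
# Model, background size, and nsamples are illustrative choices, not the book's.
import shap
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
model = SVC(probability=True).fit(X, y)  # any black box with a prediction function

# KernelExplainer needs a background dataset to simulate "absent" features
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)

# KernelSHAP is sampling-based (and slow), so explain only a few rows here
shap_values = explainer.shap_values(X[:5], nsamples=200)

# Global summary of the estimated contributions
shap.summary_plot(shap_values, X[:5], feature_names=load_iris().feature_names)
```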

Find the free eBook version here.

Explainable AI with Python

By Leonida Gianfagna and Antonio Di Cecco, 2021

This book provides a comprehensive overview of the concepts and techniques currently available for making machine learning systems more explainable. The chapter on model-agnostic methods for XAI covers local explanations with SHAP, KernelSHAP, and TreeSHAP. Explainable AI with Python is published by Springer, and the eBook version can be purchased from the Springer Shop.

Find the book here.




