
How to Explain Black-Box Machine Learning Models

Learn about the importance of model interpretation.
Feb 2023

Interpretable machine learning is needed because a machine learning model on its own is an incomplete solution. We turn to linear algebra, calculus, and statistics precisely because we don't fully understand the problem space we're trying to solve.

One of the most significant issues is that, because our machine learning solutions achieve high accuracy, we tend to grow so confident that we believe we fully understand the problem. We are then misled into thinking our solution covers the entire problem space. By explaining a model's decisions, we can uncover the gaps in our understanding of the problem.

Black-box machine learning models are often thought to be impenetrable. However, a lot can be learned about the reasoning behind their predictions from their inputs and outputs alone. In this session, we will cover the importance of model interpretation and explain various methods and their classifications, including feature importance, feature summary, and local explanations.
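One way to probe a model using inputs and outputs alone is permutation feature importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn's `permutation_importance`; the breast cancer dataset and the random forest are illustrative assumptions, not part of the session.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box: only its predictions are used below.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test accuracy;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Because the method only calls `predict`, it works for any model, which is what makes it model-agnostic.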

Key takeaways

  • The importance of model interpretability

  • The difference between global and local, and model-agnostic and model-specific interpretation methods

  • A deep dive into a variety of interpretability methods, such as feature importance methods, feature summary methods, and local explanations
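To make the global/local distinction concrete, the sketch below builds a crude local explanation for a single prediction: nudge each feature of one instance and record how the predicted probability shifts. The model and dataset are illustrative assumptions, and this finite-difference sensitivity is a simplification of methods like LIME or SHAP, not the session's method.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=5000)).fit(X, y)

instance = X[0:1]  # the single prediction we want to explain
base = model.predict_proba(instance)[0, 1]

# Perturb each feature by one standard deviation and measure the change
# in the positive-class probability for this one instance.
effects = {}
for j in range(X.shape[1]):
    bumped = instance.copy()
    bumped[0, j] += X[:, j].std()
    effects[j] = model.predict_proba(bumped)[0, 1] - base

# Features with the largest absolute effect dominate this local prediction.
top = sorted(effects, key=lambda j: abs(effects[j]), reverse=True)[:3]
print([(j, round(effects[j], 3)) for j in top])
```

A global method (like the permutation importance above) summarizes the model over the whole dataset; this local view can rank features differently for each individual prediction.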


