
How to Explain Black-Box Machine Learning Models

Learn about the importance of model interpretation.
Feb 2023

Interpretable machine learning is needed because machine learning on its own is an incomplete solution. The complex problems we solve with machine learning demand linear algebra, calculus, and statistics precisely because we don't fully understand the problem space we're working in.

One of the most significant issues is that, given the high accuracy of our machine learning solutions, our confidence grows to the point where we believe we fully understand the problem. We are then misled into thinking our solution covers the entire problem space. By explaining a model's decisions, we can uncover and close gaps in our understanding of the problem.

Black-box machine learning models are thought to be impenetrable. However, with inputs and outputs alone, a lot can be learned about the reasoning behind their predictions. In this session, we will cover the importance of model interpretation and explain various methods and their classifications, including feature importance, feature summary, and local explanations.
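To make this concrete, here is a minimal sketch of permutation feature importance, a model-agnostic, global interpretation method that needs nothing beyond the model's inputs and outputs. It uses scikit-learn; the synthetic dataset and random-forest model are illustrative stand-ins, not the ones used in the session.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative stand-in data and model (not from the session).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the model as a black box: we only need its predictions.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_importance:.3f}")
```

Because the method only perturbs inputs and observes outputs, it works identically for any model, from linear regression to a deep neural network.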

Key takeaways

  • The importance of model interpretability

  • The difference between global and local, and model-agnostic and model-specific interpretation methods

  • A deep dive into a variety of interpretability methods, such as feature importance methods, feature summary methods, and local explanations (a sketch of a local explanation follows this list)
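Where global methods like permutation importance summarize the model's overall behavior, local explanations account for a single prediction. Below is a minimal sketch using LIME, one common choice of local, model-agnostic explainer (it requires the third-party `lime` package and is not necessarily the tool used in the session); it reuses the `model`, `X_train`, and `X_test` from the sketch above.

```python
from lime.lime_tabular import LimeTabularExplainer

# Build an explainer from the training data distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature {i}" for i in range(X_train.shape[1])],
    mode="classification",
)

# Explain one prediction: LIME fits a simple, interpretable surrogate model
# around this instance using only the black box's output probabilities.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)

# Each entry pairs a feature condition with its weight in this prediction.
print(explanation.as_list())
```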
