Interpretable machine learning is needed because machine learning by itself is an incomplete solution. The complex problems we solve with machine learning require linear algebra, calculus, and statistics precisely because we don't fully understand the problem space we're trying to solve.
One of the most significant issues is that, given the high accuracy of our machine learning solutions, we tend to become so confident that we believe we fully understand the problem. We are then misled into thinking our solution covers all of the problem space. By explaining a model's decisions, we can close gaps in our understanding of the problem.
Black-box machine learning models are thought to be impenetrable. However, with inputs and outputs alone, a lot can be learned about the reasoning behind their predictions. In this session, we will cover the importance of model interpretation and explain various methods and their classifications, including feature importance, feature summary, and local explanations.
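To give a flavor of how much can be learned from inputs and outputs alone, here is a minimal sketch (not material from the session itself) of permutation feature importance, a global, model-agnostic method, using scikit-learn's `permutation_importance`. The dataset, model, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of a global, model-agnostic interpretation method:
# permutation feature importance, which needs only the model's inputs
# and outputs. Dataset and model choices here are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box: we never inspect its internals.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score degrades;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, mean, std in sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1], reverse=True,
):
    print(f"{name:>6}: {mean:.3f} +/- {std:.3f}")
```

Because the method only permutes inputs and re-scores outputs, it works the same way for any model, which is what makes it model-agnostic.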
Key Takeaways:
The importance of model interpretability
The difference between global and local, and model-agnostic and model-specific interpretation methods
A deep dive into a variety of interpretability methods, such as feature importance methods, feature summary methods, and local explanations (a simple local-explanation sketch follows below)
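To make the contrast with the global method above concrete, the sketch below shows a crude, hand-rolled local explanation for a single prediction; it is not LIME, SHAP, or any other specific technique from the session, and the dataset and model are again illustrative assumptions.

```python
# A minimal sketch of a local explanation for one prediction: replace each
# feature of a single instance with its training-set mean and record how much
# the prediction moves. A crude occlusion-style attribution, shown only to
# illustrate "local" vs "global"; dataset and model are assumptions.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

instance = X_test.iloc[[0]]              # the single prediction to explain
baseline_pred = model.predict(instance)[0]
feature_means = X_train.mean()

contributions = {}
for feature in X.columns:
    perturbed = instance.copy()
    perturbed[feature] = feature_means[feature]   # "remove" this feature's information
    contributions[feature] = baseline_pred - model.predict(perturbed)[0]

print(f"Prediction to explain: {baseline_pred:.1f}")
for feature, delta in sorted(contributions.items(), key=lambda t: abs(t[1]), reverse=True):
    print(f"{feature:>6}: {delta:+.2f}")
```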
Climate & Agronomic Data Scientist at Syngenta