
Responsible AI: Evaluating Machine Learning Models in Python


In many situations, you need to ensure that your machine learning models are fair, interpretable, and reliable. Unfortunately, it's often not clear how to measure these properties. In this live training, Ruth shows you how to debug your machine learning models to evaluate them, using a mix of standard Python and Microsoft's open-source Responsible AI Toolbox.

Key Takeaways:

  • Learn how to use the interactive Responsible AI dashboard to debug and mitigate model issues faster
  • Learn how to identify issues in AI models related to fairness, interpretability, and reliability
  • Learn how to debug your machine learning models to find predictive-performance and data-bias issues
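Before reaching for the dashboard, the fairness side of this can be approximated in standard Python: compare a model's predictive performance across sensitive groups. A minimal sketch with hypothetical predictions (the group labels, data, and the idea of using an accuracy gap as a signal are illustrative assumptions, not part of the Responsible AI Toolbox API):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each sensitive group."""
    hits = defaultdict(int)    # correct predictions per group
    totals = defaultdict(int)  # total examples per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth == pred:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical outputs from a binary classifier
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)  # → {'a': 1.0, 'b': 0.5}
print(gap)        # a large gap between groups hints at a fairness issue
```

The Responsible AI dashboard automates and extends this kind of check (error analysis, explanations, counterfactuals) with an interactive UI, but a simple per-group comparison like this is a useful first sanity check.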

To code along with this live training, you need Miniconda and Visual Studio Code installed.

Open this GitHub repository to code along:

Link to Slides

Ruth Yakubu

Principal Cloud Advocate at Microsoft
