In many situations, you need to ensure that your machine learning models are fair, interpretable, and reliable. Unfortunately, it's often unclear how to measure these properties. In this live training, Ruth shows you how to debug your machine learning models to evaluate their fairness, interpretability, and reliability. You'll use a mix of standard Python and Microsoft's open-source Responsible AI Toolbox.
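As a flavor of the kind of check covered in the session, here is a minimal sketch in plain Python of one common fairness measurement, demographic parity (comparing selection rates across groups). The group names and predictions are made up for illustration; the training itself uses the Responsible AI Toolbox, which computes metrics like this for you.

```python
# Hypothetical illustration: compare selection rates across two groups
# (demographic parity), one of the fairness checks tooling like the
# Responsible AI Toolbox automates. Data here is made up.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

# Toy model outputs, split by a sensitive attribute (e.g. group A vs. group B).
preds_group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # 5/8 selected
preds_group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # 2/8 selected

gap = abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))
print(f"Demographic parity difference: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

A large gap like this flags the model for closer inspection; it does not by itself say why the disparity exists.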
Presenter Bio
Ruth Yakubu, Principal Cloud Advocate at Microsoft
Ruth is an expert in data analytics, cloud data platforms, and responsible AI. She's on a mission to raise awareness of tools for debugging machine learning models, reducing their potential harms, and exposing responsible AI issues. As a programmer turned Principal Cloud Advocate at Microsoft, Ruth helps people make better use of their data in Azure.