Understanding Machine Learning Metrics
Key Takeaways:
- Learn how to measure model performance across regression, classification, clustering, NLP, and other model types.
- Understand which metrics are most appropriate for which machine learning task and why.
- Discover best practices for evaluating, interpreting, and communicating model results.
Description
Choosing the right metric is critical to interpreting machine learning results, guiding decisions, and ensuring models deliver real-world impact. But with so many metrics across regression, classification, clustering, and NLP, it’s easy to misinterpret results or pick the wrong measure altogether. This session equips you with a clear framework for evaluating models across tasks and domains.
In this presentation, Wojtek Kuberski and Santiago Viquez, the authors of The Little Book of ML Metrics, will take you through a deep dive into model performance metrics. You’ll learn how to measure and compare results for different types of machine learning models, from regression errors and classification AUCs to clustering indices and NLP perplexity. You’ll also pick up best practices to avoid common pitfalls and ensure your evaluations lead to actionable insights.
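To give a concrete sense of the metric families the session covers, here is a minimal sketch (not taken from the presentation; it assumes scikit-learn and NumPy, and the toy data is invented purely for illustration) that computes one metric from each family:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.metrics import roc_auc_score, silhouette_score

# Regression errors: MAE and RMSE on toy predictions.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.0, 6.5])
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))

# Classification AUC: area under the ROC curve from predicted scores.
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc_score(labels, scores)

# Clustering index: silhouette score for a toy two-cluster assignment.
X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
cluster_ids = np.array([0, 0, 1, 1])
sil = silhouette_score(X, cluster_ids)

# NLP perplexity: the exponentiated average negative log-likelihood
# of the true tokens under a (toy) language model's probabilities.
token_probs = np.array([0.2, 0.5, 0.1, 0.4])  # P(true token) at each step
perplexity = np.exp(-np.mean(np.log(token_probs)))

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  AUC={auc:.2f}  "
      f"silhouette={sil:.2f}  perplexity={perplexity:.2f}")

Note how each metric answers a different question: the regression errors measure how far off predictions are in the target's own units, AUC measures how well scores rank positives above negatives, the silhouette score measures cluster separation, and perplexity measures how surprised a language model is by the true text.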
Presenter Bios

Wojtek runs Product at data quality platform Soda. Previously he was CTO and Co-founder at NannyML (acquired by Soda). Wojtek is a serial entrepreneur, having previously founded machine learning solutions provider Prophecy Labs. He is co-author of "The Little Book of ML Metrics".

Santiago runs Developer Relations at data quality platform Soda. He is an experienced data scientist with stints at Walmart and UPS. Santiago is co-author of "The Little Book of ML Metrics".