
Model Validation in Python

Learn the basics of model validation, explore validation techniques, and begin creating validated, high-performing models.

4 Hours · 15 Videos · 47 Exercises · 12,587 Learners
3700 XP



Course Description

Machine learning models are easier to implement now than ever before. But without proper validation, the results of running new data through a model might not be as accurate as expected. Model validation allows analysts to confidently answer the question: how good is your model? We will answer this question for classification models using the complete set of tic-tac-toe endgame scenarios, and for regression models using FiveThirtyEight's ultimate Halloween candy power ranking dataset. In this course, we will cover the basics of model validation, discuss various validation techniques, and begin to develop tools for creating validated, high-performing models.

  1.

    Basic Modeling in scikit-learn


    Before we can validate models, we need an understanding of how to create and work with them. This chapter provides an introduction to running regression and classification models in scikit-learn. We will use this model-building foundation throughout the remaining chapters.

    Play Chapter Now
    Introduction to model validation
    50 xp
    Modeling steps
    50 xp
    Seen vs. unseen data
    100 xp
    Regression models
    50 xp
    Set parameters and fit a model
    100 xp
    Feature importances
    100 xp
    Classification models
    50 xp
    Classification predictions
    100 xp
    Reusing model parameters
    100 xp
    Random forest classifier
    100 xp
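The fit-and-predict workflow this chapter introduces can be sketched in a few lines of scikit-learn. This is a minimal illustration using synthetic data, not the course's tic-tac-toe or candy datasets:

```python
# Minimal scikit-learn modeling sketch: create, fit, and use a classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data (the course uses real datasets instead).
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# Set parameters, then fit the model.
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)

# Generate predictions and a simple accuracy score.
predictions = model.predict(X)
accuracy = model.score(X, y)
```

Note that scoring on the same data used for fitting overstates performance; separating seen from unseen data is exactly what the next chapter addresses.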
  2.

    Validation Basics

    This chapter focuses on the basics of model validation. From splitting data into training, validation, and testing datasets, to creating an understanding of the bias-variance tradeoff, we build the foundation for the techniques of K-Fold and Leave-One-Out validation practiced in chapter three.

    Play Chapter Now
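The train/test split at the heart of this chapter looks like the following in scikit-learn. This is a hedged sketch on synthetic regression data, assuming a simple holdout validation setup:

```python
# Holdout validation sketch: split, fit on train, score on test.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic regression data as a stand-in for a real dataset.
X, y = make_regression(n_samples=100, n_features=3, noise=10, random_state=0)

# Hold out 30% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LinearRegression().fit(X_train, y_train)

# Comparing train vs. test error hints at over- or underfitting
# (the bias-variance tradeoff discussed in this chapter).
train_error = mean_absolute_error(y_train, model.predict(X_train))
test_error = mean_absolute_error(y_test, model.predict(X_test))
```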
  3.

    Cross Validation

    Holdout sets are a great start to model validation. However, using a single train and test set is often not enough. Cross-validation is considered the gold standard when it comes to validating model performance and is almost always used when tuning model hyperparameters. This chapter focuses on performing cross-validation to validate model performance.

    Play Chapter Now
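K-Fold cross-validation, as practiced in this chapter, can be sketched with `cross_val_score`. Again this uses synthetic data purely for illustration:

```python
# Cross-validation sketch: score a model on 5 different train/test folds.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=150, random_state=1)

# 5-fold CV: each observation is used for testing exactly once.
cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(
    RandomForestClassifier(random_state=1), X, y, cv=cv
)

# The mean of the fold scores is a more robust performance estimate
# than a single holdout score.
mean_score = scores.mean()
```

Leave-One-Out validation is the limiting case where `n_splits` equals the number of observations (scikit-learn provides `LeaveOneOut` for this).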
  4.

    Selecting the Best Model with Hyperparameter Tuning

    The first three chapters focused on model validation techniques. In chapter four, we apply these techniques, specifically cross-validation, while learning about hyperparameter tuning. After all, model validation makes tuning possible and helps us select the overall best model.

    Play Chapter Now
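Combining cross-validation with hyperparameter tuning typically looks like a grid search in scikit-learn. This is a small sketch with an assumed, illustrative parameter grid:

```python
# Hyperparameter tuning sketch: cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=150, random_state=2)

# A tiny, illustrative grid of hyperparameter candidates.
param_grid = {"n_estimators": [10, 50], "max_depth": [2, None]}

# Each candidate is scored with 3-fold cross-validation;
# the best-scoring combination is kept.
search = GridSearchCV(
    RandomForestClassifier(random_state=2), param_grid, cv=3
)
search.fit(X, y)

best_params = search.best_params_
best_score = search.best_score_
```

`search.best_estimator_` is then refit on all of the data and ready to use for prediction.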

In the following tracks

Machine Learning Scientist


Chester Ismay, Becca Robins

Kasey Jones

Research Data Scientist

Kasey Jones is a research data scientist at RTI International. His work focuses primarily on agent-based model simulations and natural language processing. He also enjoys creating unique visualizations using D3, and building R Shiny and Python Dash dashboards. Outside of RTI, he spends his time working through LeetCode problems, playing chess, and traveling all over the world.

What do other learners have to say?

I've used other sites—Coursera, Udacity, things like that—but DataCamp's been the one that I've stuck with.

Devon Edwards Joseph
Lloyds Banking Group

DataCamp is the top resource I recommend for learning data science.

Louis Maiden
Harvard Business School

DataCamp is by far my favorite website to learn from.

Ronald Bowers
Decision Science Analytics, USAA