Machine learning models are easier to implement now than ever before. Without proper validation, however, the results of running new data through a model might not be as accurate as expected. Model validation allows analysts to confidently answer the question, "How good is your model?" We will answer this question for classification models using the complete set of tic-tac-toe endgame scenarios, and for regression models using FiveThirtyEight's ultimate Halloween candy power ranking dataset. In this course, we will cover the basics of model validation, discuss various validation techniques, and begin to develop tools for creating validated and high-performing models.
Basic Modeling in scikit-learn
Before we can validate models, we need an understanding of how to create and work with them. This chapter provides an introduction to running regression and classification models in scikit-learn. We will use this model building foundation throughout the remaining chapters.
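The chapter builds on scikit-learn's fit/predict pattern, which every model in the library shares. As a minimal sketch (using randomly generated toy data rather than the course's tic-tac-toe or candy datasets), creating and running a regression model looks like this:

```python
# Minimal fit/predict pattern in scikit-learn (toy data, not the course datasets)
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
y = 2 * X[:, 0] + rng.normal(size=100)   # target driven by the first feature, plus noise

model = RandomForestRegressor(n_estimators=50, random_state=42)
model.fit(X, y)                          # learn patterns from the data
predictions = model.predict(X)           # one prediction per input sample
print(predictions.shape)
```

Classification models follow the same two-step pattern; only the estimator class (e.g. `RandomForestClassifier`) and the target type change.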
This chapter focuses on the basics of model validation. From splitting data into training, validation, and testing datasets, to creating an understanding of the bias-variance tradeoff, we build the foundation for the techniques of K-Fold and Leave-One-Out validation practiced in chapter three.

- Creating train, test, and validation datasets (50 xp)
- Create one holdout set (100 xp)
- Create two holdout sets (100 xp)
- Why use holdout sets (50 xp)
- Accuracy metrics: regression models (50 xp)
- Mean absolute error (100 xp)
- Mean squared error (100 xp)
- Performance on data subsets (100 xp)
- Classification metrics (50 xp)
- Confusion matrices (100 xp)
- Confusion matrices, again (100 xp)
- Precision vs. recall (100 xp)
- The bias-variance tradeoff (50 xp)
- Error due to under/over-fitting (100 xp)
- Am I underfitting? (100 xp)
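The core workflow of this chapter — hold out a portion of the data, fit on the rest, and score the predictions with a regression metric — can be sketched as follows (toy data and a linear model stand in for the course's datasets and estimators):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)

# Hold out 20% of the data as a test set; the model never sees it during fitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
preds = model.predict(X_test)

mae = mean_absolute_error(y_test, preds)  # average absolute error
mse = mean_squared_error(y_test, preds)   # squares errors, so large misses cost more
```

Splitting twice (first into train/temp, then temp into validation/test) yields the two holdout sets used when both model selection and final evaluation are needed.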
Holdout sets are a great start to model validation. However, using a single train and test set is often not enough. Cross-validation is considered the gold standard when it comes to validating model performance and is almost always used when tuning model hyperparameters. This chapter focuses on performing cross-validation to validate model performance.

- The problems with holdout sets (50 xp)
- Two samples (100 xp)
- Potential problems (50 xp)
- Cross-validation (50 xp)
- scikit-learn's KFold() (100 xp)
- Using KFold indices (100 xp)
- sklearn's cross_val_score() (50 xp)
- scikit-learn's methods (100 xp)
- Implement cross_val_score() (100 xp)
- Leave-one-out-cross-validation (LOOCV) (50 xp)
- When to use LOOCV (50 xp)
- Leave-one-out-cross-validation (100 xp)
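The two tools this chapter is built around, `KFold()` and `cross_val_score()`, can be combined in a few lines. This sketch uses toy data; LOOCV is shown with `LeaveOneOut`, scored with mean absolute error since R² is undefined on single-sample test folds:

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X[:, 0] + rng.normal(scale=0.1, size=50)

# 5-fold cross-validation: each fold serves as the test set exactly once
kf = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LinearRegression(), X, y, cv=kf)  # R^2 per fold by default
print(scores.mean())

# LOOCV: one test sample per fold, so there are as many folds as samples
loo_scores = cross_val_score(
    LinearRegression(), X, y, cv=LeaveOneOut(),
    scoring="neg_mean_absolute_error",  # sklearn convention: higher is better
)
```

Averaging the fold scores gives a more stable performance estimate than any single holdout split, at the cost of fitting the model once per fold.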
Selecting the Best Model with Hyperparameter Tuning
The first three chapters focused on model validation techniques. In chapter 4 we apply these techniques, specifically cross-validation, while learning about hyperparameter tuning. After all, model validation makes tuning possible and helps us select the overall best model.

- Introduction to hyperparameter tuning (50 xp)
- Creating Hyperparameters (100 xp)
- Running a model using ranges (100 xp)
- RandomizedSearchCV (50 xp)
- Preparing for RandomizedSearch (100 xp)
- Implementing RandomizedSearchCV (100 xp)
- Selecting your final model (50 xp)
- Best classification accuracy (50 xp)
- Selecting the best precision model (100 xp)
- Course completed! (50 xp)
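The chapter's central tool, `RandomizedSearchCV`, ties tuning and validation together: it samples hyperparameter combinations from the ranges you define and scores each one with cross-validation. A minimal sketch on toy data (the parameter ranges here are illustrative, not the course's):

```python
import numpy as np
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.3, size=100)

# Hyperparameter ranges to sample from: a distribution and a discrete list
param_dist = {
    "n_estimators": randint(10, 100),
    "max_depth": [2, 4, 6, None],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=2),
    param_distributions=param_dist,
    n_iter=5,        # try 5 randomly sampled combinations
    cv=3,            # validate each combination with 3-fold cross-validation
    random_state=2,
)
search.fit(X, y)
print(search.best_params_)   # the combination with the best mean CV score
```

After fitting, `search.best_estimator_` is refit on the full dataset with the winning hyperparameters, ready to be evaluated on a final holdout set.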
In the following tracks: Machine Learning Scientist
Prerequisites: Supervised Learning with scikit-learn
Research Data Scientist
Kasey Jones is a research data scientist at RTI International. His work focuses primarily on agent-based model simulations and natural language processing analysis. He also enjoys creating unique visualizations using D3, and building R Shiny and Python Dash dashboards. Outside of RTI he spends his time working through LeetCode problems, playing chess, and traveling all over the world.