Machine Learning with Tree-Based Models in R

Learn how to use tree-based models and ensembles to make classification and regression predictions with tidymodels.
4 Hours, 16 Videos, 58 Exercises

Course Description

Tree-based machine learning models can reveal complex non-linear relationships in data and often dominate machine learning competitions. In this course, you'll use the tidymodels package to explore and build different tree-based models—from simple decision trees to complex random forests. You’ll also learn to use boosted trees, a powerful machine learning technique that uses ensemble learning to build high-performing predictive models. Along the way, you'll work with health and credit risk data to predict the incidence of diabetes and customer churn.
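
Below is a minimal sketch of the kind of tidymodels workflow the first chapter walks through. It assumes a hypothetical data frame called diabetes with a factor outcome column called outcome; the column names, split proportion, and rpart engine choice are illustrative, not the course's exact exercises.

library(tidymodels)

# Hypothetical data: a data frame called diabetes with a factor column
# called outcome and numeric predictors; all names are illustrative.
set.seed(123)
diabetes_split <- initial_split(diabetes, prop = 0.8, strata = outcome)
diabetes_train <- training(diabetes_split)
diabetes_test  <- testing(diabetes_split)

# Specify a classification tree with the rpart engine
tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("classification")

# Fit on the training data and predict on the held-out test set
tree_fit <- fit(tree_spec, outcome ~ ., data = diabetes_train)
test_preds <- predict(tree_fit, diabetes_test) %>%
  bind_cols(diabetes_test)

# Judge the predictions with a performance metric
accuracy(test_preds, truth = outcome, estimate = .pred_class)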

  1. Classification Trees (Free)

    Ready to build a real machine learning pipeline? Complete step-by-step exercises to learn how to create decision trees, split your data, and predict which patients are most likely to suffer from diabetes. Last but not least, you'll use performance metrics to assess your models and judge your predictions.

  2. Regression Trees and Cross-Validation

    Ready for some candy? Use a chocolate rating dataset to build regression trees and assess their performance with suitable error measures. You'll overcome the uncertainty of relying on a single train/test split by applying sweet techniques like cross-validation (sketched after this chapter list), and then dive even deeper by mastering the bias-variance tradeoff.

  3. Hyperparameters and Ensemble Models

    Time to get serious about tuning your hyperparameters and interpreting receiver operating characteristic (ROC) curves. In this chapter, you'll leverage the wisdom of the crowd with ensemble models such as bagging and random forests, and build ensembles that forecast which credit card customers are most likely to churn (a tuning sketch follows the chapter list).

  4. Boosted Trees

    Ready for the high society of tree-based models? Apply gradient boosting to create powerful ensembles that typically outperform the single trees you built earlier in the course. Learn how to fine-tune them and how to compare different models to pick a winner for production (see the boosting sketch after the chapter list).
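
For the cross-validation ideas in the regression chapter, here is a minimal sketch under similar assumptions: a hypothetical data frame called chocolate with a numeric rating column; the number of folds and the error measures are illustrative.

library(tidymodels)

# Hypothetical data: a data frame called chocolate with a numeric rating column.
set.seed(123)
chocolate_folds <- vfold_cv(chocolate, v = 5)

# A regression tree specification
reg_tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("regression")

# Fit and assess the tree on every fold, then summarise the error measures
cv_results <- fit_resamples(
  reg_tree_spec,
  rating ~ .,
  resamples = chocolate_folds,
  metrics = metric_set(rmse, mae)
)
collect_metrics(cv_results)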
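
And for the tuning and boosting chapters, a sketch of tuning a random forest and a gradient-boosted tree on resamples and comparing them by ROC AUC; the churn data frame, its churned outcome column, and the tuning grids are again hypothetical.

library(tidymodels)

# Hypothetical data: a data frame called churn with a factor column called churned.
set.seed(123)
churn_split <- initial_split(churn, prop = 0.8, strata = churned)
churn_train <- training(churn_split)
churn_folds <- vfold_cv(churn_train, v = 5, strata = churned)

# Random forest with tunable hyperparameters
rf_spec <- rand_forest(mtry = tune(), min_n = tune(), trees = 500) %>%
  set_engine("ranger") %>%
  set_mode("classification")

rf_res <- tune_grid(
  rf_spec,
  churned ~ .,
  resamples = churn_folds,
  grid = grid_random(mtry(range = c(2, 8)), min_n(range = c(2, 20)), size = 10),
  metrics = metric_set(roc_auc)
)

# Gradient-boosted trees with a tunable learning rate and depth
boost_spec <- boost_tree(trees = 500, learn_rate = tune(), tree_depth = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("classification")

boost_res <- tune_grid(
  boost_spec,
  churned ~ .,
  resamples = churn_folds,
  grid = 10,
  metrics = metric_set(roc_auc)
)

# Compare the tuned models by cross-validated ROC AUC and pick a winner
show_best(rf_res, metric = "roc_auc")
show_best(boost_res, metric = "roc_auc")

# Finalize the better specification and refit it on the full training set
final_boost <- finalize_model(boost_spec, select_best(boost_res, metric = "roc_auc"))
final_fit <- fit(final_boost, churned ~ ., data = churn_train)
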
Datasets
Chocolate ratings, Diabetes risk, Bank customer churn
Collaborators
Maggie Matsui, James Chapman, Justin Saddlemyer

Sandro Raabe

Data Scientist
Sandro is an aspiring Data Scientist, mathematician, teacher, and developer. He strongly believes that anyone, not only professionals, can create data applications using R's open interfaces. Having completed his studies in Germany, Oxford, Sydney, Pretoria, and online, he has gained professional experience in the finance and healthcare sectors, providing companies with data-driven insights to solve significant problems. As an active contributor to the open-source community, he created vistime, an R package for generating timeline plots.

What do other learners have to say?

I've used other sites—Coursera, Udacity, things like that—but DataCamp's been the one that I've stuck with.

Devon Edwards Joseph
Lloyds Banking Group

DataCamp is the top resource I recommend for learning data science.

Louis Maiden
Harvard Business School

DataCamp is by far my favorite website to learn from.

Ronald Bowers
Decision Science Analytics, USAA