Tree-based machine learning models can reveal complex non-linear relationships in data and often dominate machine learning competitions. In this course, you'll use the tidymodels package to explore and build different tree-based models—from simple decision trees to complex random forests. You’ll also learn to use boosted trees, a powerful machine learning technique that uses ensemble learning to build high-performing predictive models. Along the way, you'll work with health and credit risk data to predict the incidence of diabetes and customer churn.
Ready to build a real machine learning pipeline? Complete step-by-step exercises to learn how to create decision trees, split your data, and predict which patients are most likely to suffer from diabetes. Last but not least, you'll build performance measures to assess your models and judge your predictions.

- Welcome to the course! (50 xp)
- Why tree-based methods? (100 xp)
- Specify that tree (100 xp)
- Train that model (100 xp)
- How to grow your tree (50 xp)
- Train/test split (100 xp)
- Avoiding class imbalances (100 xp)
- From zero to hero (100 xp)
- Predict and evaluate (50 xp)
- Make predictions (100 xp)
- Crack the matrix (100 xp)
- Are you predicting correctly? (100 xp)
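The pipeline above can be sketched with tidymodels. This is a minimal illustration, not the course's exact code: it uses the built-in iris data as a stand-in for the course's diabetes dataset, and assumes the rpart engine is installed.

```r
library(tidymodels)

# Stratified train/test split (avoids class imbalance between the sets);
# iris stands in for the course's diabetes data
set.seed(42)
split <- initial_split(iris, prop = 0.8, strata = Species)
train <- training(split)
test  <- testing(split)

# Specify a classification tree, then train it
tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("classification")

tree_fit <- fit(tree_spec, Species ~ ., data = train)

# Predict on the test set and build a confusion matrix
preds <- predict(tree_fit, test) %>% bind_cols(test)
conf_mat(preds, truth = Species, estimate = .pred_class)
```

The `strata` argument keeps the outcome's class proportions roughly equal in the training and test sets, which is the point of the "Avoiding class imbalances" exercise.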
Regression Trees and Cross-Validation
Ready for some candy? Use a chocolate rating dataset to build regression trees and assess their performance using suitable error measures. You'll overcome the statistical insecurities of single train/test splits by applying sweet techniques like cross-validation and then dive even deeper by mastering the bias-variance tradeoff.

- Continuous outcomes (50 xp)
- Train a regression tree (100 xp)
- Predict new values (100 xp)
- Inspect model output (50 xp)
- Performance metrics for regression trees (50 xp)
- In-sample performance (100 xp)
- Out-of-sample performance (100 xp)
- Bigger mistakes, bigger penalty (100 xp)
- Cross-validation (50 xp)
- Create the folds (100 xp)
- Fit the folds (100 xp)
- Evaluate the folds (100 xp)
- Bias-variance tradeoff (50 xp)
- Call things by their names (100 xp)
- Adjust model complexity (100 xp)
- In-sample and out-of-sample performance (100 xp)
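A hedged sketch of the regression-tree-plus-cross-validation workflow, using the built-in mtcars data in place of the course's chocolate ratings. The fold/fit/evaluate steps map onto `vfold_cv()`, `fit_resamples()`, and `collect_metrics()`; RMSE penalizes big mistakes more heavily than MAE, which is the point of the "Bigger mistakes, bigger penalty" exercise.

```r
library(tidymodels)

# Regression tree specification (rpart engine assumed)
tree_spec <- decision_tree() %>%
  set_engine("rpart") %>%
  set_mode("regression")

# Create the folds: 5-fold cross-validation
set.seed(1)
folds <- vfold_cv(mtcars, v = 5)

# Fit the folds: one model per training fold, evaluated on the held-out fold
cv_results <- fit_resamples(
  tree_spec,
  mpg ~ .,
  resamples = folds,
  metrics = metric_set(rmse, mae)
)

# Evaluate the folds: out-of-sample error averaged across all folds
collect_metrics(cv_results)
```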
Hyperparameters and Ensemble Models
Time to get serious with tuning your hyperparameters and interpreting receiver operating characteristic (ROC) curves. In this chapter, you'll leverage the wisdom of the crowd with ensemble models like bagged trees and random forests, and build ensembles that forecast which credit card customers are most likely to churn.

- Tuning hyperparameters (50 xp)
- Generate a tuning grid (100 xp)
- Tune along the grid (100 xp)
- Pick the winner (100 xp)
- More model measures (50 xp)
- Calculate specificity (100 xp)
- Draw the ROC curve (100 xp)
- Area under the ROC curve (100 xp)
- Bagged trees (50 xp)
- Create bagged trees (100 xp)
- In-sample ROC and AUC (100 xp)
- Check for overfitting (100 xp)
- Random forest (50 xp)
- Bagged trees vs. random forest (50 xp)
- Variable importance (100 xp)
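The grid-tuning workflow above can be sketched as follows. This is an illustrative setup, not the course's churn exercise: it tunes a random forest's `mtry` on iris (which has four predictors, hence the 1 to 4 range) and assumes the ranger engine is installed.

```r
library(tidymodels)

# Random forest with a tunable hyperparameter (ranger engine assumed);
# importance = "impurity" enables variable-importance scores later
rf_spec <- rand_forest(mtry = tune(), trees = 500) %>%
  set_engine("ranger", importance = "impurity") %>%
  set_mode("classification")

# Generate a tuning grid and resamples to tune on
set.seed(2)
folds <- vfold_cv(iris, v = 5, strata = Species)
grid  <- grid_regular(mtry(range = c(1, 4)), levels = 4)

# Tune along the grid, scoring each candidate by AUC
tuned <- tune_grid(
  rf_spec,
  Species ~ .,
  resamples = folds,
  grid = grid,
  metrics = metric_set(roc_auc)
)

# Pick the winner: the mtry value with the best cross-validated AUC
select_best(tuned, metric = "roc_auc")
```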
Ready for the high society of tree-based models? Apply gradient boosting to create powerful ensembles that can outperform anything you have built so far. Learn how to fine-tune them and how to compare different models to pick a winner for production.

- Introduction to boosting (50 xp)
- Bagging vs. boosting (50 xp)
- Specify a boosted ensemble (100 xp)
- Gradient boosting (50 xp)
- Train a boosted ensemble (100 xp)
- Evaluate the ensemble (100 xp)
- Compare to a single classifier (100 xp)
- Optimize the boosted ensemble (50 xp)
- Tuning preparation (100 xp)
- The actual tuning (100 xp)
- Finalize the model (100 xp)
- Model comparison (50 xp)
- Compare AUC (100 xp)
- Plot ROC curves (100 xp)
- Wrap-up (50 xp)
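The boost-then-compare idea can be sketched like this. It is a minimal illustration assuming the xgboost and rpart engines are installed, with iris standing in for the course's data: train a boosted ensemble and a single tree on the same split, then compare their out-of-sample AUC.

```r
library(tidymodels)

# Specify a boosted ensemble (xgboost engine assumed)
boost_spec <- boost_tree(trees = 100, learn_rate = 0.1) %>%
  set_engine("xgboost") %>%
  set_mode("classification")

set.seed(7)
split <- initial_split(iris, strata = Species)
train <- training(split)
test  <- testing(split)

# Train the boosted ensemble and a single classifier on the same data
boost_fit <- fit(boost_spec, Species ~ ., data = train)
tree_fit  <- fit(decision_tree() %>%
                   set_engine("rpart") %>%
                   set_mode("classification"),
                 Species ~ ., data = train)

# Compare AUC on the held-out test set
auc_of <- function(f) {
  predict(f, test, type = "prob") %>%
    bind_cols(test) %>%
    roc_auc(truth = Species, .pred_setosa:.pred_virginica)
}
auc_of(boost_fit)
auc_of(tree_fit)
```

For a proper comparison you would also plot both ROC curves with `roc_curve()` and `autoplot()`, as the "Plot ROC curves" exercise does.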
In the following tracks

- Machine Learning Fundamentals in R
- Machine Learning Scientist with R
- Supervised Machine Learning in R
Prerequisites: Modeling with tidymodels in R
Sandro Raabe
Sandro is an aspiring Data Scientist, mathematician, teacher, and developer. He strongly believes that anyone, not only professionals, can create data applications using R's open interfaces. Having completed his studies in Germany, Oxford, Sydney, Pretoria, and online, he has gained professional experience in the finance and healthcare sectors, providing companies with data-driven insights to solve significant problems. As an active contributor to the open-source community, he created vistime, an R package for generating timeline plots.