Ensemble Methods in Python
Learn how to build advanced and effective machine learning models in Python using ensemble techniques such as bagging, boosting, and stacking.
4 hours · 15 videos · 52 exercises · 10,127 learners · Statement of Accomplishment
Course Description
Continue your machine learning journey by diving into the wonderful world of ensemble learning methods! Ensembles are an exciting class of machine learning techniques that combine multiple individual models to boost performance and solve complex problems at scale across different industries. Ensemble techniques also regularly win online machine learning competitions!
In this course, you'll learn all about these advanced ensemble techniques, including bagging, boosting, and stacking. You'll apply them to real-world datasets using cutting-edge Python machine learning libraries such as scikit-learn, XGBoost, CatBoost, and mlxtend.
In the following Tracks
Supervised Machine Learning in Python
1. Combining Multiple Models
Free

Do you struggle to determine which of the models you built is best for your problem? Give up on choosing, and use them all instead! In this chapter, you'll learn how to combine multiple models into one using voting and averaging. You'll use these techniques to predict the ratings of apps on the Google Play Store, whether or not a Pokémon is legendary, and which characters are going to die in Game of Thrones!
- Introduction to ensemble methods (50 xp)
- Exploring Google apps data (50 xp)
- Predicting the rating of an app (100 xp)
- Voting (50 xp)
- Choosing the best model (100 xp)
- Assembling your first ensemble (100 xp)
- Evaluating your ensemble (100 xp)
- Averaging (50 xp)
- Journey to Westeros (50 xp)
- Predicting GoT deaths (100 xp)
- Soft vs. hard voting (100 xp)
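To give a feel for what the chapter covers, here is a minimal sketch of hard voting with scikit-learn's VotingClassifier. It uses a synthetic dataset in place of the course's Google Play Store data, and the estimator names ("lr", "knn", "tree") are illustrative choices, not the course's exact setup.

```python
# Hard voting: each model casts a vote, and the majority class wins.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Combine three individual classifiers by majority ("hard") vote
voting = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=42)),
    ],
    voting="hard",
)
voting.fit(X_train, y_train)
print(round(voting.score(X_test, y_test), 2))
```

Switching `voting="hard"` to `voting="soft"` averages the models' predicted probabilities instead of their class votes, which is the soft-vs-hard distinction the chapter's final exercise explores.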
2. Bagging
Bagging is the ensemble method behind powerful machine learning algorithms such as random forests. In this chapter, you'll learn the theory behind this technique and build your own bagging models using scikit-learn.
- The strength of “weak” models (50 xp)
- Restricted and unrestricted decision trees (100 xp)
- "Weak" decision tree (50 xp)
- Bootstrap aggregating (50 xp)
- Training with bootstrapping (100 xp)
- A first attempt at bagging (100 xp)
- BaggingClassifier: nuts and bolts (50 xp)
- Bagging: the scikit-learn way (100 xp)
- Checking the out-of-bag score (100 xp)
- Bagging parameters: tips and tricks (50 xp)
- Exploring the UCI SECOM data (50 xp)
- A more complex bagging model (100 xp)
- Tuning bagging hyperparameters (100 xp)
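As a rough preview of the scikit-learn workflow taught here, the sketch below trains a BaggingClassifier over "weak" (depth-restricted) decision trees on a synthetic dataset, and checks the out-of-bag score. The dataset and hyperparameter values are illustrative, not the course's.

```python
# Bagging: train many trees on bootstrap samples and aggregate their votes.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset
X, y = make_classification(n_samples=500, random_state=0)

bagging = BaggingClassifier(
    DecisionTreeClassifier(max_depth=3),  # a restricted, "weak" tree
    n_estimators=50,   # number of bootstrap-trained trees
    oob_score=True,    # evaluate on out-of-bag samples, no test split needed
    random_state=0,
)
bagging.fit(X, y)

# Each sample is scored only by trees that never saw it during training
print(round(bagging.oob_score_, 2))
```

Because each tree sees only a bootstrap sample, roughly a third of the rows are "out of bag" for any given tree, giving a built-in validation estimate for free.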
3. Boosting
Boosting is a class of ensemble learning algorithms that includes award-winning models such as AdaBoost. In this chapter, you'll learn about AdaBoost and use it to predict the revenue of award-winning movies! You'll also learn about gradient boosting algorithms such as CatBoost and XGBoost.
- The effectiveness of gradual learning (50 xp)
- Introducing the movie database (50 xp)
- Exploring movie features (50 xp)
- Predicting movie revenue (100 xp)
- Boosting for predicted revenue (100 xp)
- Adaptive boosting: award winning model (50 xp)
- Your first AdaBoost model (100 xp)
- Tree-based AdaBoost regression (100 xp)
- Making the most of AdaBoost (100 xp)
- Gradient boosting (50 xp)
- Revisiting Google app reviews (50 xp)
- Sentiment analysis with GBM (100 xp)
- Gradient boosting flavors (50 xp)
- Movie revenue prediction with CatBoost (100 xp)
- Boosting contest: Light vs Extreme (100 xp)
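The chapter's tree-based AdaBoost regression can be sketched along these lines with scikit-learn: learners are fit sequentially, each one focusing on the examples the previous round got most wrong. Synthetic data stands in for the movie dataset, and the hyperparameters are illustrative.

```python
# AdaBoost regression: fit learners one after another, re-weighting
# the training examples with the largest errors at each round.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for a real regression dataset (e.g. movie revenue)
X, y = make_regression(n_samples=400, noise=10, random_state=1)

ada = AdaBoostRegressor(
    DecisionTreeRegressor(max_depth=3),  # shallow tree as the base learner
    n_estimators=100,    # number of boosting rounds
    learning_rate=0.1,   # shrink each learner's contribution
    random_state=1,
)
ada.fit(X, y)
print(round(ada.score(X, y), 2))  # R² on the training data
```

Gradient boosting flavors such as XGBoost, LightGBM, and CatBoost follow the same sequential idea, but fit each new learner to the gradient of the loss rather than re-weighting samples.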
4. Stacking
Get ready to see how things stack up! In this final chapter, you'll learn about the stacking ensemble method and how to implement it using scikit-learn as well as the mlxtend library. You'll apply stacking to predict the edibility of North American mushrooms, and revisit the ratings of Google apps with this more advanced approach.
- The intuition behind stacking (50 xp)
- Exploring the mushroom dataset (50 xp)
- Predicting mushroom edibility (100 xp)
- K-nearest neighbors for mushrooms (100 xp)
- Build your first stacked ensemble (50 xp)
- Applying stacking to predict app ratings (100 xp)
- Building the stacking classifier (100 xp)
- Stacked predictions for app ratings (100 xp)
- Let's mlxtend it! (50 xp)
- A first attempt with mlxtend (100 xp)
- Back to regression with stacking (100 xp)
- Mushrooms: a matter of life or death (100 xp)
- Ensembling it all together (50 xp)
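The scikit-learn side of this chapter can be sketched with StackingClassifier: first-layer models make predictions, and a second-layer "meta-learner" learns how to combine them. Again, a synthetic dataset and these particular estimators are illustrative stand-ins, not the course's exact setup.

```python
# Stacking: a meta-learner is trained on the first-layer models' predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset (e.g. mushroom edibility)
X, y = make_classification(n_samples=500, random_state=7)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=7)),
    ],
    final_estimator=LogisticRegression(),  # the second-layer meta-learner
    cv=5,  # out-of-fold predictions are used to train the meta-learner
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))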
Datasets
- App ratings
- App reviews
- Game of Thrones
- Pokémon
- SECOM (Semiconductor Manufacturing)
- TMDb (The Movie Database)

Collaborators
Román de las Heras
Data Scientist at Appodeal
Román de las Heras is a seasoned Data Scientist, currently working at Appodeal. He studied Systems Engineering while simultaneously pursuing a degree in Mathematics with Computer Science at the National Autonomous University of Honduras (UNAH). The ensemble of these two careers drove him into data science. His daily work includes developing ML models, building recommendation engines and ranking systems, mentoring junior team members in the field, and doing ad-hoc data analyses to present results and insights. He is also a passionate and experienced educator, as well as a strong believer in "learning by doing".