Extreme Gradient Boosting with XGBoost
Learn the fundamentals of gradient boosting and build state-of-the-art machine learning models using XGBoost to solve classification and regression problems.
Course Description
Do you know the basics of supervised learning and want to use state-of-the-art models on real-world datasets? Gradient boosting is currently one of the most popular techniques for efficient modeling of tabular datasets of all sizes. XGBoost is a very fast, scalable implementation of gradient boosting; models built with XGBoost regularly win online data science competitions and are used at scale across different industries. In this course, you'll learn how to use this powerful library alongside pandas and scikit-learn to build and tune supervised learning models. You'll work with real-world datasets to solve classification and regression problems.
1. Classification with XGBoost (Free)

This chapter introduces the fundamental idea behind XGBoost: boosted learners. Once you understand how XGBoost works, you'll apply it to a common classification problem found in industry: predicting whether a customer will churn, that is, stop being a customer at some point in the future. (A minimal fit/predict sketch follows the lesson list.)

Lessons:
- Welcome to the course! (50 xp)
- Which of these is a classification problem? (50 xp)
- Which of these is a binary classification problem? (50 xp)
- Introducing XGBoost (50 xp)
- XGBoost: Fit/Predict (100 xp)
- What is a decision tree? (50 xp)
- Decision trees (100 xp)
- What is Boosting? (50 xp)
- Measuring accuracy (100 xp)
- Measuring AUC (100 xp)
- When should I use XGBoost? (50 xp)
- Using XGBoost (50 xp)
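As a preview of the fit/predict pattern this chapter covers, here is a minimal sketch using XGBoost's scikit-learn-compatible XGBClassifier. The course's churn dataset isn't reproduced on this page, so a synthetic binary-classification dataset stands in for it, and the hyperparameter values are purely illustrative.

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a churn dataset: 20 features, binary target.
X, y = make_classification(n_samples=1000, n_features=20, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123
)

# XGBClassifier follows the familiar scikit-learn fit/predict API.
clf = xgb.XGBClassifier(objective="binary:logistic", n_estimators=10,
                        random_state=123)
clf.fit(X_train, y_train)

# Evaluate with the two metrics the chapter covers: accuracy and AUC.
preds = clf.predict(X_test)
probs = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, preds))
print("AUC:", roc_auc_score(y_test, probs))
```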
2. Regression with XGBoost

After a brief review of supervised regression, you'll apply XGBoost to the regression task of predicting house prices in Ames, Iowa. You'll learn about the two kinds of base learners that XGBoost can use as its weak learners, and review how to evaluate the quality of your regression models. (A sketch contrasting the two base learners follows the lesson list.)

Lessons:
- Regression review (50 xp)
- Which of these is a regression problem? (50 xp)
- Objective (loss) functions and base learners (50 xp)
- Decision trees as base learners (100 xp)
- Linear base learners (100 xp)
- Evaluating model quality (100 xp)
- Regularization and base learners in XGBoost (50 xp)
- Using regularization in XGBoost (100 xp)
- Visualizing individual XGBoost trees (100 xp)
- Visualizing feature importances: What features are most important in my dataset? (100 xp)
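A short sketch of the chapter's central contrast: swapping the booster parameter switches XGBoost between tree and linear base learners. The Ames housing data isn't bundled here, so scikit-learn's built-in diabetes dataset stands in as the regression target.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Built-in dataset as a stand-in for the course's house-price data.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)

# "gbtree" uses decision trees as base learners; "gblinear" uses
# regularized linear models instead.
for booster in ("gbtree", "gblinear"):
    reg = xgb.XGBRegressor(objective="reg:squarederror", booster=booster,
                           n_estimators=10, random_state=123)
    reg.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, reg.predict(X_test)))
    print(f"{booster} RMSE: {rmse:.2f}")
```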
3. Fine-tuning your XGBoost model

This chapter will teach you how to make your XGBoost models as performant as possible. You'll learn about the variety of parameters that can be adjusted to alter the behavior of XGBoost and how to tune them efficiently so that you can supercharge the performance of your models. (A tuning sketch follows the lesson list.)

Lessons:
- Why tune your model? (50 xp)
- When is tuning your model a bad idea? (50 xp)
- Tuning the number of boosting rounds (100 xp)
- Automated boosting round selection using early_stopping (100 xp)
- Overview of XGBoost's hyperparameters (50 xp)
- Tuning eta (100 xp)
- Tuning max_depth (100 xp)
- Tuning colsample_bytree (100 xp)
- Review of grid search and random search (50 xp)
- Grid search with XGBoost (100 xp)
- Random search with XGBoost (100 xp)
- Limits of grid search and random search (50 xp)
- When should you use grid search and random search? (50 xp)
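A sketch of the tuning workflow this chapter builds toward: a grid search over eta (exposed as learning_rate in the scikit-learn API), max_depth, and colsample_bytree, followed by early stopping with XGBoost's native cv API. The dataset and grid values here are placeholders, not the course's.

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=123)

# An illustrative grid over the hyperparameters the chapter tunes.
param_grid = {
    "learning_rate": [0.01, 0.1, 0.3],   # eta
    "max_depth": [3, 5, 7],
    "colsample_bytree": [0.5, 0.8, 1.0],
}
grid = GridSearchCV(xgb.XGBClassifier(objective="binary:logistic",
                                      n_estimators=50, random_state=123),
                    param_grid, scoring="roc_auc", cv=3)
grid.fit(X, y)
print("best params:", grid.best_params_)

# Automated boosting-round selection: xgb.cv stops adding rounds once
# the cross-validated AUC fails to improve for 5 consecutive rounds.
cv_results = xgb.cv(params={"objective": "binary:logistic"},
                    dtrain=xgb.DMatrix(X, label=y),
                    num_boost_round=200, nfold=3, metrics="auc",
                    early_stopping_rounds=5, seed=123)
print("boosting rounds kept:", len(cv_results))
```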
4. Using XGBoost in pipelines

Take your XGBoost skills to the next level by incorporating your models into two end-to-end machine learning pipelines. You'll learn how to tune the most important XGBoost hyperparameters efficiently within a pipeline, and get an introduction to some more advanced preprocessing techniques. (A pipeline sketch follows the lesson list.)

Lessons:
- Review of pipelines using sklearn (50 xp)
- Exploratory data analysis (50 xp)
- Encoding categorical columns I: LabelEncoder (100 xp)
- Encoding categorical columns II: OneHotEncoder (100 xp)
- Encoding categorical columns III: DictVectorizer (100 xp)
- Preprocessing within a pipeline (100 xp)
- Incorporating XGBoost into pipelines (50 xp)
- Cross-validating your XGBoost model (100 xp)
- Kidney disease case study I: Categorical Imputer (100 xp)
- Kidney disease case study II: Feature Union (100 xp)
- Kidney disease case study III: Full pipeline (100 xp)
- Tuning XGBoost hyperparameters (50 xp)
- Bringing it all together (100 xp)
- Final Thoughts (50 xp)
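A sketch of the end-to-end pattern: preprocessing and an XGBoost model chained in a single scikit-learn Pipeline that can be cross-validated as one unit, so the encoding is refit on each fold. The course itself uses DictVectorizer and FeatureUnion; this sketch swaps in a ColumnTransformer with OneHotEncoder for brevity, and the toy DataFrame is invented for illustration.

```python
import pandas as pd
import xgboost as xgb
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical toy data: one categorical and one numeric feature.
df = pd.DataFrame({
    "city": ["NYC", "LA", "NYC", "SF", "LA", "SF"] * 20,
    "income": [50, 60, 55, 80, 65, 90] * 20,
    "churn": [0, 1, 0, 1, 0, 1] * 20,
})
X, y = df[["city", "income"]], df["churn"]

# One-hot encode the categorical column; pass the numeric one through.
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["city"])],
    remainder="passthrough",
)

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", xgb.XGBClassifier(objective="binary:logistic",
                                random_state=123)),
])

# Cross-validating the whole pipeline keeps preprocessing inside each fold.
scores = cross_val_score(pipeline, X, y, cv=3, scoring="roc_auc")
print("mean AUC:", scores.mean())
```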
In the following tracks:
- Machine Learning Scientist
Sergey Fogelson
VP of Analytics and Measurement Sciences, Viacom
Sergey loves applying his quantitative skills to large-scale, data-intensive problems and mentoring junior colleagues. An avid learner, he is always refining his programming chops and applying state-of-the-art analytical and statistical methods to hard data problems. He began his career as an academic at Dartmouth College in Hanover, New Hampshire, where he researched the neural bases of visual category learning and obtained his Ph.D. in Cognitive Neuroscience.
After leaving academia, Sergey got into the rapidly growing startup scene in the NYC metro area, where he has worked as a data scientist in digital advertising, cybersecurity, finance, and media. He is heavily involved in the NYC-area teaching community, has taught courses at various bootcamps, and has been a volunteer computer science teacher through TEALSK12. When Sergey is not working or teaching, he is probably hiking. (He thru-hiked the Appalachian Trail before graduate school.)
What do other learners have to say?
I've used other sites—Coursera, Udacity, things like that—but DataCamp's been the one that I've stuck with.
Devon Edwards Joseph
Lloyds Banking Group
DataCamp is the top resource I recommend for learning data science.
Louis Maiden
Harvard Business School
DataCamp is by far my favorite website to learn from.
Ronald Bowers
Decision Science Analytics, USAA
Join over 9 million learners and start Extreme Gradient Boosting with XGBoost today!