Prepare for Your Machine Learning Interview

Have you ever wondered how to properly prepare for a machine learning interview? In this course, you will prepare answers to 15 common Machine Learning (ML) interview questions in Python for a data scientist role.
These questions will revolve around seven important topics: data preprocessing, data visualization, supervised learning, unsupervised learning, model ensembling, model selection, and model evaluation.
Refresh Your Machine Learning Knowledge

You'll start by working on data preprocessing and data visualization questions. After performing all the preprocessing steps, you'll create a predictive ML model to hone your practical skills.
Next, you’ll cover some supervised learning techniques before moving on to unsupervised learning. Depending on the role, you’ll likely cover both topics in your machine learning interview.
Finally, you'll cover model selection and evaluation, learning how to assess performance for model generalization and exploring various techniques as you build an ensemble model.
Practice Answers to the Most Common Machine Learning Interview Questions

By the end of the course, you will possess both the required theoretical background and the ability to develop Python code to successfully answer these 15 questions.
The coding examples will be based mainly on the scikit-learn package, given its ease of use and its coverage of the most important machine learning techniques in Python.
The course does not teach machine learning fundamentals, as these are covered in the course's prerequisites.
Data Pre-processing and Visualization (Free)
In the first chapter of this course, you'll perform all the preprocessing steps required to create a predictive machine learning model, including handling missing values and outliers and normalizing your dataset.

- Handling missing data (50 xp)
- The hunt for missing values (100 xp)
- Simple imputation (100 xp)
- Iterative imputation (100 xp)
- Data distributions and transformations (50 xp)
- Training vs test set distributions and transformations (50 xp)
- Train/test distributions (100 xp)
- Log and power transformations (100 xp)
- Data outliers and scaling (50 xp)
- Outlier detection (100 xp)
- Handling outliers (100 xp)
- Z-score standardization (100 xp)
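As a taste of these preprocessing steps, here is a minimal scikit-learn sketch of mean imputation followed by z-score standardization. The small array is illustrative data invented for this example, not course data:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Toy feature matrix with missing values (illustrative only)
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan],
              [5.0, 8.0]])

# Simple imputation: replace each missing value with its column mean
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)

# Z-score standardization: rescale each column to mean 0, unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_imputed)

print(X_scaled.mean(axis=0))  # columns are now centered near 0
```

The same pattern extends to the chapter's other steps, e.g. swapping in `IterativeImputer` for iterative imputation.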
Supervised Learning

In the second chapter of this course, you'll practice several aspects of supervised machine learning, such as selecting the optimal feature subset, regularization to avoid model overfitting, feature engineering, and ensemble models to address the so-called bias-variance trade-off.

- Regression: feature selection (50 xp)
- Best feature subset (50 xp)
- Filter and wrapper methods (100 xp)
- Feature selection through feature importance (100 xp)
- Regression: regularization (50 xp)
- Avoiding overfitting (50 xp)
- Lasso regularization (100 xp)
- Ridge regularization (100 xp)
- Classification: feature engineering (50 xp)
- Classification model features (50 xp)
- Logistic regression baseline classifier (100 xp)
- Ensemble methods (50 xp)
- Bootstrap aggregation (bagging) (100 xp)
- Boosting (100 xp)
- XGBoost (100 xp)
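Two of this chapter's ideas can be sketched in a few lines of scikit-learn: Lasso regularization, whose L1 penalty drives uninformative coefficients to exactly zero (a form of feature selection), and a bagging ensemble that averages trees fit on bootstrap samples. The synthetic regression data is illustrative only:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.ensemble import BaggingRegressor

# Synthetic data: 10 features, only 3 of which are informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=0.1, random_state=42)

# Lasso: the L1 penalty shrinks uninformative coefficients to zero
lasso = Lasso(alpha=1.0).fit(X, y)
n_selected = int(np.sum(lasso.coef_ != 0))
print(f"Lasso kept {n_selected} of {X.shape[1]} features")

# Bagging: average many trees fit on bootstrap samples to reduce variance
bagger = BaggingRegressor(n_estimators=50, random_state=42).fit(X, y)
print(f"Bagging R^2 on training data: {bagger.score(X, y):.2f}")
```

Ridge regularization follows the same pattern with `Ridge`, whose L2 penalty shrinks coefficients without zeroing them out.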
Unsupervised Learning

In the third chapter of this course, you'll use unsupervised learning to apply feature extraction and visualization techniques for dimensionality reduction, and clustering methods to select not only an appropriate clustering algorithm but also the optimal number of clusters for a dataset.

- Dimensionality reduction: feature extraction (50 xp)
- The curse of dimensionality (50 xp)
- Principal component analysis (100 xp)
- Singular value decomposition (100 xp)
- Dimensionality reduction: visualization techniques (50 xp)
- Reducing high-dimensional data (50 xp)
- Visualizing separation of classes with PCA (100 xp)
- Visualizing PCs with a scree plot (100 xp)
- Clustering analysis: selecting the right clustering algorithm (50 xp)
- Clustering algorithms (50 xp)
- K-means clustering (100 xp)
- Hierarchical agglomerative clustering (100 xp)
- Clustering analysis: choosing the optimal number of clusters (50 xp)
- What is the optimal k? (50 xp)
- Silhouette method (100 xp)
- Elbow method (100 xp)
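This workflow can be sketched end to end with scikit-learn: PCA projects the data down to two dimensions, then k-means is scored with the silhouette method across candidate cluster counts. The blob data is synthetic and illustrative only:

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic 8-dimensional data with 4 underlying clusters
X, _ = make_blobs(n_samples=300, centers=4, n_features=8, random_state=42)

# Project onto the first 2 principal components for visualization/clustering
X_2d = PCA(n_components=2).fit_transform(X)

# Silhouette method: score several k; higher means better-separated clusters
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X_2d)
    scores[k] = silhouette_score(X_2d, labels)

best_k = max(scores, key=scores.get)
print(f"Best k by silhouette: {best_k}")
```

The elbow method is analogous, but plots k-means inertia against k and looks for the bend rather than the maximum.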
Model Selection and Evaluation
In the fourth and final chapter of this course, you'll really step it up and apply bootstrapping and cross-validation to evaluate performance for model generalization, use resampling techniques on imbalanced classes, detect and remove multicollinearity, and build an ensemble model.

- Model generalization: bootstrapping and cross-validation (50 xp)
- Validating model performance (50 xp)
- Decision tree (100 xp)
- A forest of decision trees (100 xp)
- Model evaluation: imbalanced classification models (50 xp)
- X-ray weapon detection (50 xp)
- Imbalanced class metrics (100 xp)
- Resampling techniques (100 xp)
- Model selection: regression models (50 xp)
- Addressing multicollinearity (50 xp)
- Multicollinearity techniques - feature engineering (100 xp)
- Multicollinearity techniques - PCA (100 xp)
- Model selection: ensemble models (50 xp)
- Random forest vs gradient boosting (50 xp)
- Random forest ensemble (100 xp)
- Gradient boosting ensemble (100 xp)
- Wrap-Up (50 xp)
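The chapter's core comparison, random forest vs gradient boosting evaluated with cross-validation, might look like the following scikit-learn sketch. The classification data is synthetic and illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary classification problem (illustrative only)
X, y = make_classification(n_samples=400, n_features=12, n_informative=5,
                           random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
gb = GradientBoostingClassifier(n_estimators=100, random_state=42)

# Mean accuracy across 5 held-out folds estimates generalization performance
rf_acc = cross_val_score(rf, X, y, cv=5).mean()
gb_acc = cross_val_score(gb, X, y, cv=5).mean()
print(f"Random forest: {rf_acc:.3f}, gradient boosting: {gb_acc:.3f}")
```

For the imbalanced-classification lessons, accuracy would be swapped for metrics such as precision, recall, or F1 via the `scoring` parameter of `cross_val_score`.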
Lisa Stuart is a Data Scientist with a wealth of industry experience. She is currently on the LSLPG Data Science Team at Thermo Fisher Scientific where she and her team build solutions to support the company motto to 'make the world healthier, cleaner and safer.' Prior to that, she built predictive models for targeted marketing at Costco and Expedia and managed dashboards for process automation. At Starbucks, she managed a team of data scientists to build a predictive model on geopolitical stability of countries around the world to make informed decisions on expansion and supply routes. As part of the DSP Big Data Analytics Team at Amazon, she and her team used statistical analysis and machine learning to improve processes around successful and on-time delivery for each and every Amazon order. In her free time, you'll find her at the dog park hanging out with her beloved dogs Blaze, Stella and Kona.