Dive into the world of machine learning and discover how to design, train, and deploy end-to-end models with this comprehensive course. Through engaging, real-world examples and hands-on exercises, you'll learn to tackle complex data problems and build powerful ML models. By the end of this course, you'll be equipped with the skills needed to create, monitor, and maintain high-performing models that deliver actionable insights.
Start by learning the essentials of exploratory data analysis (EDA) and data preparation: you'll clean and preprocess your data, ensuring it's ready for model training. Next, master the art of feature engineering and selection to optimize your models for real-world challenges; learn how to use the Boruta library for feature selection, log experiments with MLflow, and fine-tune your models using k-fold cross-validation. Uncover the secrets of effective error metrics and diagnose overfitting, setting your models up for success.
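As a taste of the cross-validation technique covered in the course, here is a minimal sketch using scikit-learn with synthetic data (the data and model here are illustrative assumptions, not the course's own exercises):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: train on 4 folds, score on the held-out fold,
# and rotate so every fold is used for evaluation exactly once
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean accuracy across folds: {scores.mean():.3f}")
```

Averaging across folds gives a more stable performance estimate than a single train/test split, which is why the course pairs it with model fine-tuning.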
You'll also explore the importance of feature stores and model registries in end-to-end ML frameworks. Learn how to deploy and monitor your model's performance over time using Docker and AWS. Understand the concept of data drift and how to detect it using statistical tests. Implement feedback loops, retraining, and labeling strategies to maintain your models' performance in the face of ever-changing data.
Transform your machine learning expertise with this comprehensive, hands-on course and become an end-to-end ML pro!
Design and Exploration
In this initial chapter, you will engage in the foundational stages of any machine learning project: designing an end-to-end machine learning use case, exploratory data analysis, and data preparation. By the end of the chapter, you will have a solid understanding of the early stages of a machine learning project, from conceptualizing a use case to preparing the data for further processing and model training.

- Designing an End-to-End Machine Learning Use Case (50 xp)
- Machine learning lifecycle phase definitions (50 xp)
- Machine learning lifecycle (100 xp)
- Exploratory Data Analysis (50 xp)
- Visualizing your data (100 xp)
- Finding class imbalance (100 xp)
- Goals of EDA (100 xp)
- Data preparation (50 xp)
- Data preparation functions (100 xp)
- Advanced imputation (100 xp)
- Cleaning your dataset (100 xp)
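A quick illustration of the kind of class-imbalance check covered in this chapter, using pandas (the toy labels here are an assumption; the course exercises use their own datasets):

```python
import pandas as pd

# Toy binary labels for illustration: 8 negatives, 2 positives
df = pd.DataFrame({"churned": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]})

# Relative class frequencies reveal imbalance at a glance
class_share = df["churned"].value_counts(normalize=True)
print(class_share)
# An 80/20 split like this often calls for resampling or class weights
```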
Model Training and Evaluation
This chapter will delve deep into the essential processes of model training and evaluation. It comprises four comprehensive lessons, focusing on various aspects of feature engineering, model training, logging experiments, and model evaluation.

- Feature engineering and selection (50 xp)
- Feature engineering: classification (100 xp)
- Normalization and standardization (100 xp)
- Feature selection (100 xp)
- Model training (50 xp)
- Applying Occam's razor (50 xp)
- Model types (50 xp)
- Training a model (100 xp)
- Logging experiments on MLflow (50 xp)
- Ordering MLflow steps (100 xp)
- MLflow functionality (50 xp)
- MLflow for logging and retrieving data (100 xp)
- Model evaluation and visualization (50 xp)
- K-fold cross-validation (100 xp)
- Confusion matrix interpretation (50 xp)
- Understanding confusion matrices (50 xp)
- Evaluating a model (100 xp)
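For a flavor of the confusion-matrix evaluation covered here, a minimal scikit-learn sketch (synthetic data and a logistic regression are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic data for illustration only
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rows are true classes, columns are predicted classes:
# [[true negatives, false positives],
#  [false negatives, true positives]]
cm = confusion_matrix(y_test, model.predict(X_test))
print(cm)
```

Reading the off-diagonal cells tells you which kind of error the model makes most, which is often more actionable than a single accuracy number.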
This chapter delves into the essential elements of model deployment, a crucial phase in the machine learning lifecycle. Starting with testing, the chapter then progresses to architectural components, with a focus on feature stores and model registries. Subsequently, we will dive into the realm of packaging and containerization. The chapter concludes with an overview of continuous integration and continuous deployment (CI/CD).

- Testing a model (50 xp)
- Reasons for tests (50 xp)
- Writing unit tests (100 xp)
- Architectural components in end-to-end machine learning frameworks (50 xp)
- Feature stores vs model registries (100 xp)
- Defining features for a feature store (100 xp)
- Feature store using Feast (100 xp)
- Packaging and containerization (50 xp)
- Containerization steps (100 xp)
- Containerization using Docker (50 xp)
- Inference using Docker (100 xp)
- Continuous integration and continuous deployment (CI/CD) (50 xp)
- CI/CD principles (50 xp)
- Deploying a model using AWS EB (100 xp)
- Deployment: bringing it all together (100 xp)
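As a small preview of the unit-testing lesson, here is a sketch of testing a preprocessing helper (the function and test are illustrative assumptions, not code from the course):

```python
import numpy as np

def impute_missing_with_mean(values):
    """Replace NaNs with the mean of the observed values."""
    arr = np.asarray(values, dtype=float)
    arr[np.isnan(arr)] = np.nanmean(arr)
    return arr

def test_impute_missing_with_mean():
    # Known input with one gap: the mean of [1.0, 3.0] is 2.0
    result = impute_missing_with_mean([1.0, np.nan, 3.0])
    assert result[1] == 2.0, "NaN should be replaced with the mean"
    assert not np.isnan(result).any()

test_impute_missing_with_mean()
print("All tests passed")
```

Tests like this are typically collected by a runner such as pytest in CI, so a broken preprocessing step is caught before deployment rather than in production.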
In the final chapter, you will navigate the intricacies of model monitoring, a critical phase in the machine learning lifecycle.

- Monitoring and visualization (50 xp)
- Monitoring a deployed model (50 xp)
- Visualizing a deployed model's output over time (100 xp)
- Data drift (50 xp)
- Techniques for detecting and correcting data drift (50 xp)
- Detecting data drift using the Kolmogorov-Smirnov test (100 xp)
- Feedback loop, re-training, and labeling (50 xp)
- Conceptualizing the feedback loop (50 xp)
- Dangers of feedback loops (50 xp)
- Feedback loops (100 xp)
- Serving the model (50 xp)
- Model monitoring case study (100 xp)
- Wrap-up (50 xp)
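The retraining decision that closes the feedback loop can be as simple as comparing live performance against a deployment-time baseline. A hypothetical sketch (the function name, threshold, and numbers are all illustrative assumptions):

```python
def needs_retraining(baseline_accuracy, live_accuracy, tolerance=0.05):
    """Flag retraining when live accuracy drops more than `tolerance`
    below the accuracy logged when the model was deployed."""
    return (baseline_accuracy - live_accuracy) > tolerance

print(needs_retraining(0.92, 0.90))  # small dip within tolerance
print(needs_retraining(0.92, 0.80))  # large drop, trigger retraining
```

Real monitoring systems typically combine several such signals (accuracy, drift statistics, label delay) before triggering a retrain, as the course's case study explores.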
In the following tracks: Machine Learning Engineer
Joshua Stapleton
Machine Learning Engineer
Joshua Stapleton is a machine learning engineer and consultant with years of experience in the healthcare, defense, and education sectors. He currently works with a number of international companies and groups in a variety of capacities. He also works with AIExplained, a popular AI YouTuber, and is pursuing his Master's in Machine Learning at Imperial College London.