Introduction to Deep Learning with PyTorch
4 hr · 3,900 XP

Course Description
Understanding the power of Deep Learning
Deep learning is everywhere: in smartphone cameras, voice assistants, and self-driving cars. It has even helped discover protein structures and beat humans at the game of Go. Discover this powerful technology and learn how to leverage it using PyTorch, one of the most popular deep learning libraries.

Train your first neural network
First, tackle the difference between deep learning and "classic" machine learning. You will learn about the training process of a neural network and how to write a training loop. To do so, you will create loss functions for regression and classification problems and leverage PyTorch to calculate their derivatives.

Evaluate and improve your model
In the second half, learn about the different hyperparameters you can adjust to improve your model. After learning about the different components of a neural network, you will be able to create larger and more complex architectures. To measure your model's performance, you will leverage TorchMetrics, a PyTorch library for model evaluation.

Upon completion, you will be able to leverage PyTorch to solve classification and regression problems on both tabular and image data using deep learning, a vital capability for data professionals looking to advance their careers.
Chapter 1: Introduction to PyTorch, a Deep Learning Library
Self-driving cars, smartphones, search engines... Deep learning is now everywhere. Before you begin building complex models, you will become familiar with PyTorch, a deep learning framework. You will learn how to manipulate tensors, create PyTorch data structures, and build your first neural network in PyTorch with linear layers.
- Introduction to deep learning with PyTorch (50 XP)
- Getting started with PyTorch tensors (100 XP)
- Checking and adding tensors (100 XP)
- Neural networks and layers (50 XP)
- Linear layer network (100 XP)
- Understanding weights (50 XP)
- Hidden layers and parameters (50 XP)
- Your first neural network (100 XP)
- Stacking linear layers (100 XP)
- Counting the number of parameters (100 XP)
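The chapter's core moves, from tensor creation to stacking linear layers and counting parameters, can be sketched as follows. The layer sizes and input values are illustrative, not taken from the course exercises.

```python
import torch
import torch.nn as nn

# Create a tensor from a Python list and inspect its properties
t = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.shape)   # torch.Size([2, 3])
print(t.dtype)   # torch.float32

# Stack linear layers with nn.Sequential: 3 inputs -> 8 hidden -> 2 outputs
model = nn.Sequential(
    nn.Linear(3, 8),
    nn.Linear(8, 2),
)

# Forward pass: each row of t is one sample
output = model(t)
print(output.shape)  # torch.Size([2, 2])

# Count the learnable parameters (weights and biases)
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # (3*8 + 8) + (8*2 + 2) = 50
```

Each `nn.Linear(in, out)` layer holds `in * out` weights plus `out` biases, which is where the parameter count comes from.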
Chapter 2: Neural Network Architecture and Hyperparameters
To train a neural network in PyTorch, you will first need to understand additional components, such as activation and loss functions. You will then realize that training a network requires minimizing that loss function, which is done by calculating gradients. You will learn how to use these gradients to update your model's parameters.
- Discovering activation functions (50 XP)
- Activate your understanding! (50 XP)
- The sigmoid and softmax functions (100 XP)
- Running a forward pass (50 XP)
- Building a binary classifier in PyTorch (100 XP)
- From regression to multi-class classification (100 XP)
- Using loss functions to assess model predictions (50 XP)
- Creating one-hot encoded labels (100 XP)
- Calculating cross entropy loss (100 XP)
- Using derivatives to update model parameters (50 XP)
- Accessing the model parameters (100 XP)
- Updating the weights manually (100 XP)
- Using the PyTorch optimizer (100 XP)
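A minimal sketch of the chapter's ingredients: sigmoid and softmax activations, cross-entropy loss, and gradient-based updates via an optimizer. Tensor shapes and values are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sigmoid squashes a raw score into (0, 1) for binary classification
score = torch.tensor([[0.8]])
prob = nn.Sigmoid()(score)

# Softmax turns a vector of scores into a probability distribution
logits = torch.tensor([[1.0, 2.0, 0.5]])
probs = nn.Softmax(dim=-1)(logits)  # entries sum to 1

# Cross-entropy loss for a 3-class problem; the true label is class index 1
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, torch.tensor([1]))

# Instead of updating weights manually, let an optimizer do it
model = nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

pred = model(torch.randn(2, 4))
loss = criterion(pred, torch.tensor([0, 2]))
loss.backward()        # compute gradients (derivatives of the loss)
optimizer.step()       # update each weight: w -= lr * grad
optimizer.zero_grad()  # clear gradients before the next pass
```

Note that `nn.CrossEntropyLoss` applies softmax internally, so the model outputs raw logits rather than probabilities.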
Chapter 3: Training a Neural Network with PyTorch
Now that you've learned the key components of a neural network, you'll train one using a training loop. You'll explore potential issues like vanishing gradients and learn strategies to address them, such as alternative activation functions and tuning learning rate and momentum.
- A deeper dive into loading data (50 XP)
- Using TensorDataset (100 XP)
- Using DataLoader (100 XP)
- Writing our first training loop (50 XP)
- Using the MSELoss (100 XP)
- Writing a training loop (100 XP)
- ReLU activation functions (50 XP)
- Implementing ReLU (100 XP)
- Implementing leaky ReLU (100 XP)
- Understanding activation functions (50 XP)
- Learning rate and momentum (50 XP)
- Experimenting with learning rate (100 XP)
- Experimenting with momentum (100 XP)
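The pieces above fit together in a standard training loop. This sketch uses randomly generated regression data and illustrative layer sizes, learning rate, and momentum, not the course's own values.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Toy regression data: 64 samples, 5 features, 1 target each
features = torch.randn(64, 5)
targets = torch.randn(64, 1)
dataset = TensorDataset(features, targets)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Leaky ReLU keeps a small gradient for negative inputs,
# which helps against vanishing gradients
model = nn.Sequential(
    nn.Linear(5, 10),
    nn.LeakyReLU(negative_slope=0.05),
    nn.Linear(10, 1),
)
criterion = nn.MSELoss()
# lr sets the step size; momentum smooths updates across batches
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# The training loop: forward pass, loss, backward pass, update
for epoch in range(3):
    for batch_features, batch_targets in loader:
        optimizer.zero_grad()
        predictions = model(batch_features)
        loss = criterion(predictions, batch_targets)
        loss.backward()
        optimizer.step()
```

The same loop structure applies to classification; only the loss function and final layer change.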
Chapter 4: Evaluating and Improving Models
Training a deep learning model is an art. To make sure your model trains correctly, you need to track metrics such as the loss and the accuracy during training. You will learn how to calculate these metrics and how to reduce overfitting.
- Layer initialization and transfer learning (50 XP)
- Fine-tuning process (100 XP)
- Freeze layers of a model (100 XP)
- Layer initialization (100 XP)
- Evaluating model performance (50 XP)
- Writing the evaluation loop (100 XP)
- Calculating accuracy using torchmetrics (100 XP)
- Fighting overfitting (50 XP)
- Experimenting with dropout (100 XP)
- Understanding overfitting (50 XP)
- Improving model performance (50 XP)
- Implementing random search (100 XP)
- Wrap-up video (50 XP)
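Freezing layers, dropout, and an evaluation loop can be sketched as below. Accuracy is computed here with plain tensor operations for self-containment; the course itself uses TorchMetrics for this. Layer sizes and data are illustrative.

```python
import torch
import torch.nn as nn

# A small classifier with dropout to reduce overfitting
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during training
    nn.Linear(16, 3),
)

# Fine-tuning: freeze the first linear layer so only later layers train
for param in model[0].parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # 51 195: only the last layer's weights still train

# Evaluation loop: eval() disables dropout; no_grad() skips gradient tracking
model.eval()
with torch.no_grad():
    features = torch.randn(10, 8)
    labels = torch.randint(0, 3, (10,))
    preds = model(features).argmax(dim=1)
    accuracy = (preds == labels).float().mean()
```

Remember to call `model.train()` again before resuming training, so dropout is re-enabled.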
In other tracks: Machine Learning Scientist

Collaborators





Senior Data Science and AI Content Developer, DataCamp
Jasmin is a Senior Content Developer at DataCamp. After ten years as a global marketing manager in the music industry, she changed careers to follow her curiosity for data. Her passion is value exchange and making data science and AI accessible to all.

Senior Machine Learning Engineer
Thomas is passionate about AI, the environment, and education, and is always looking for new challenges. He specializes in computer vision, machine learning model training and deployment (cloud and edge), and data pipelines.