Introduction to Deep Learning with PyTorch
Learn the power of deep learning in PyTorch. Build your first neural network, adjust hyperparameters, and tackle classification and regression problems.
4 Hours · 16 Videos · 53 Exercises
Introduction to Deep Learning with PyTorch
Deep learning is everywhere: in smartphone cameras, voice assistants, and self-driving cars. It has even helped discover protein structures and beat humans at the game of Go. In this course, you will discover this powerful technology and learn how to leverage it using PyTorch, one of the most popular deep learning libraries.
Train your first neural network
First, this course covers the difference between deep learning and "classic" machine learning and introduces neural networks. You will learn about the training process of a neural network and how to write a training loop. To do so, you will create loss functions for regression and classification problems and leverage PyTorch to calculate their derivatives.
Evaluate and improve your model
In the second half of this course, you will learn about the different hyperparameters you can adjust to improve your model. After learning about the different components of a neural network, you will be able to create larger and more complex architectures. To measure your model's performance, you will leverage TorchMetrics, a PyTorch library for model evaluation. By the end of this course, you will be able to leverage PyTorch to solve classification and regression problems on both tabular and image data using deep learning.
Introduction to PyTorch, a Deep Learning library
Self-driving cars, smartphones, search engines... Deep learning is now everywhere. Before you begin building complex models, you will become familiar with PyTorch, a deep learning framework. You will learn how to manipulate tensors, create PyTorch data structures, and build your first neural network in PyTorch.

- Introduction to deep learning with PyTorch (50 xp)
- Machine learning vs. deep learning (100 xp)
- Creating tensors and accessing attributes (100 xp)
- Creating tensors from NumPy arrays (100 xp)
- Creating our first neural network (50 xp)
- Your first neural network (100 xp)
- Stacking linear layers (100 xp)
- Discovering activation functions (50 xp)
- Activate your understanding! (50 xp)
- Using the sigmoid and softmax functions (100 xp)
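The chapter's core ideas, creating a tensor, stacking linear layers, and applying softmax, can be sketched in a few lines (the layer sizes here are illustrative, not the course's exact exercises):

```python
import torch
import torch.nn as nn

# Create a tensor from a Python list and inspect its attributes
t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(t.shape, t.dtype)  # torch.Size([2, 2]) torch.float32

# Stack linear layers into a first neural network with nn.Sequential
model = nn.Sequential(
    nn.Linear(2, 4),    # 2 input features -> 4 hidden units
    nn.Linear(4, 3),    # 4 hidden units -> 3 outputs
    nn.Softmax(dim=-1), # turn raw outputs into class probabilities
)

probs = model(t)
print(probs.shape)       # torch.Size([2, 3])
print(probs.sum(dim=-1)) # each row of probabilities sums to 1
```

Softmax is used here for multi-class outputs; for a single binary output, the course's sigmoid function plays the analogous role.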
Training Our First Neural Network with PyTorch
To train a neural network in PyTorch, you will first need to understand the job of a loss function. You will then realize that training a network requires minimizing that loss function, which is done by calculating gradients. You will learn how to use these gradients to update your model's parameters, and finally, you will write your first training loop.

- Running a forward pass (50 xp)
- Building a binary classifier in PyTorch (100 xp)
- From regression to multi-class classification (100 xp)
- Using loss functions to assess model predictions (50 xp)
- Creating one-hot encoded labels (100 xp)
- Calculating cross entropy loss (100 xp)
- Using derivatives to update model parameters (50 xp)
- Estimating a sample (100 xp)
- Accessing the model parameters (100 xp)
- Updating the weights manually (100 xp)
- Using the PyTorch optimizer (100 xp)
- Writing our first training loop (50 xp)
- Using the MSELoss (100 xp)
- Writing a training loop (100 xp)
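The loop described above, forward pass, loss, gradients, parameter update, can be sketched on a toy regression problem (the data and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Toy regression data: y = 2x + 1
X = torch.randn(64, 1)
y = 2 * X + 1

model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()          # reset gradients from the previous step
    loss = criterion(model(X), y)  # forward pass + loss
    loss.backward()                # compute gradients of the loss
    optimizer.step()               # update the model's parameters

print(loss.item())  # close to 0 once the line is fit
```

The same four steps apply to classification; only the loss function changes (for example, cross entropy instead of MSE).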
Neural network architecture and hyperparameters
Hyperparameters are parameters, often chosen by the user, that control model training. The type of activation function, the number of layers in the model, and the learning rate are all hyperparameters of neural network training. Together, we will discover the most critical hyperparameters of a neural network and how to modify them.

- Discovering activation functions between layers (50 xp)
- Implementing ReLU (100 xp)
- Implementing leaky ReLU (100 xp)
- Understanding activation functions (50 xp)
- A deeper dive into neural network architecture (50 xp)
- Counting the number of parameters (100 xp)
- Manipulating the capacity of a network (100 xp)
- Learning rate and momentum (50 xp)
- Experimenting with learning rate (100 xp)
- Experimenting with momentum (100 xp)
- Layer initialization and transfer learning (50 xp)
- Fine-tuning process (100 xp)
- Freeze layers of a model (100 xp)
- Layer initialization (100 xp)
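Three of the chapter's topics, ReLU variants, counting parameters, and freezing layers for fine-tuning, can be sketched like this (the network sizes and slope value are illustrative):

```python
import torch
import torch.nn as nn

# ReLU zeroes out negatives; leaky ReLU keeps a small slope for them
relu = nn.ReLU()
leaky = nn.LeakyReLU(negative_slope=0.05)
x = torch.tensor([-2.0, 0.0, 3.0])
print(relu(x))   # tensor([0., 0., 3.])
print(leaky(x))  # tensor([-0.1000, 0.0000, 3.0000])

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# Count trainable parameters: (8*4 + 4) + (4*2 + 2) = 46
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total)  # 46

# Freeze the first layer, as you would when fine-tuning a pretrained model
for param in model[0].parameters():
    param.requires_grad = False
```

Counting parameters this way is a quick check on a network's capacity before you make it deeper or wider.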
Training, Evaluating and Iterating
Training a deep learning model is an art, and to make sure our model is trained correctly, we need to keep track of certain metrics during training, such as the loss or the accuracy. We will learn how to calculate such metrics and how to reduce overfitting, using an image dataset as an example.

- Manipulating image data (50 xp)
- Using the flatten layer (100 xp)
- Using the normalization transform (100 xp)
- Putting it all together (100 xp)
- Evaluating models (50 xp)
- Writing the evaluation loop (100 xp)
- Calculating accuracy using torchmetrics (100 xp)
- Fighting overfitting (50 xp)
- Using data augmentation (100 xp)
- Experimenting with dropout (100 xp)
- Understanding overfitting (50 xp)
- Improving performances of the model (50 xp)
- Implementing random search (100 xp)
- Creating the best model (100 xp)
- Wrap up video (50 xp)
Maham Khan
Senior Data Science Content Developer at DataCamp
Hi, I am a Data Scientist and Senior Content Developer at DataCamp, on a mission to make data skills accessible for everyone. Most recently, I worked at the World Bank, creating toolkits and exploring experimental applications of data science for urban analytics, disaster risk management, and climate change mitigation. I have a background in Experimental Psychology and Philosophy from the University of Oxford, and in Urban Data Science from NYU.