Discover Deep Learning with PyTorch

Neural networks have been at the forefront of artificial intelligence research in recent years and have provided solutions to many difficult problems, such as image classification, language translation, and game playing (AlphaGo).
PyTorch is one of the leading deep learning frameworks, being both powerful and easy to use. In this course, you will use PyTorch to first learn the basic concepts of neural networks before building your first neural network to recognize handwritten digits from the MNIST dataset.
Explore Deep Learning Models

You’ll start with an introduction to PyTorch, exploring the PyTorch library and its applications for neural networks and deep learning. Next, you’ll cover artificial neural networks and learn how to train them using real data.
Learn to Use Neural Networks

As you progress through the course, you’ll learn about convolutional neural networks and use them to build more powerful models that give more accurate results. You will evaluate the results and use different techniques to improve them. You'll also cover concepts such as regularization and transfer learning.
After completing the course, you’ll have the confidence to delve deeper into neural networks and take your knowledge further.
Introduction to PyTorch (Free)
In this first chapter, we introduce the basic concepts of neural networks and deep learning using the PyTorch library.

- Introduction to PyTorch (50 xp)
- Creating tensors in PyTorch (100 xp)
- Matrix multiplication (100 xp)
- Forward propagation (50 xp)
- Forward pass (100 xp)
- Backpropagation by auto-differentiation (50 xp)
- Backpropagation by hand (50 xp)
- Backpropagation using PyTorch (100 xp)
- Calculating gradients in PyTorch (100 xp)
- Introduction to Neural Networks (50 xp)
- Your first neural network (100 xp)
- Your first PyTorch neural network (100 xp)
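The chapter's core ideas — tensors, matrix multiplication, and gradient calculation via autograd — can be sketched in a few lines of PyTorch. This is a minimal illustration, not the course's own exercise code:

```python
import torch

# Create two tensors and multiply them as matrices.
a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])
c = torch.matmul(a, b)  # [[19., 22.], [43., 50.]]

# Auto-differentiation: PyTorch tracks operations on tensors
# that have requires_grad=True and computes gradients for us.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x   # y = x^2 + 3x
y.backward()         # populates x.grad with dy/dx = 2x + 3 = 7 at x = 2
```

Calling `backward()` on a scalar result is the same mechanism the course uses to train networks: the loss is a scalar, and its gradients with respect to every parameter are computed automatically.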
Artificial Neural Networks
In this second chapter, we delve deeper into artificial neural networks, learning how to train them with real datasets.

- Activation functions (50 xp)
- Neural networks (100 xp)
- ReLU activation (100 xp)
- ReLU activation again (100 xp)
- Loss functions (50 xp)
- Calculating loss function by hand (50 xp)
- Calculating loss function in PyTorch (100 xp)
- Loss function of random scores (100 xp)
- Preparing a dataset in PyTorch (50 xp)
- Preparing MNIST dataset (100 xp)
- Inspecting the dataloaders (100 xp)
- Training neural networks (50 xp)
- Building a neural network - again (100 xp)
- Training a neural network (100 xp)
- Using the network to make predictions (100 xp)
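The pieces this chapter covers — a network with a ReLU activation, a loss function, and one training step — fit together roughly like this. The layer sizes and the random batch are placeholders standing in for real MNIST data, not the course's exact setup:

```python
import torch
import torch.nn as nn

# A small fully connected network for flattened 28x28 images,
# with a ReLU activation between the two linear layers.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 200)
        self.fc2 = nn.Linear(200, 10)   # 10 digit classes

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
criterion = nn.CrossEntropyLoss()                    # loss function
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

# One training step on a random batch (stand-in for a real dataloader).
inputs = torch.randn(32, 28 * 28)
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
outputs = net(inputs)            # forward pass, shape (32, 10)
loss = criterion(outputs, labels)
loss.backward()                  # backpropagation
optimizer.step()                 # update the weights
```

In practice, this step runs inside a loop over batches produced by a `DataLoader`, and predictions are read off with `outputs.argmax(dim=1)`.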
Convolutional Neural Networks (CNNs)
In this third chapter, we introduce convolutional neural networks, learning how to train them and how to use them to make predictions.

- Convolution operator (50 xp)
- Convolution operator - OOP way (100 xp)
- Convolution operator - Functional way (100 xp)
- Pooling operators (50 xp)
- Max-pooling operator (100 xp)
- Average-pooling operator (100 xp)
- Convolutional Neural Networks (50 xp)
- Your first CNN - __init__ method (100 xp)
- Your first CNN - forward() method (100 xp)
- Training Convolutional Neural Networks (50 xp)
- Training CNNs (100 xp)
- Using CNNs to make predictions (100 xp)
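A minimal CNN showing the pattern the chapter builds up — layers declared in `__init__`, convolution and max-pooling applied in `forward()`. The channel counts and kernel size here are illustrative choices, not the course's specific architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # One conv layer (OOP way) plus a linear classifier.
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 14 * 14, 10)

    def forward(self, x):
        x = F.relu(self.conv(x))    # (N, 8, 28, 28)
        x = F.max_pool2d(x, 2)      # functional max-pooling -> (N, 8, 14, 14)
        x = x.view(x.size(0), -1)   # flatten before the linear layer
        return self.fc(x)

model = SimpleCNN()
images = torch.randn(4, 1, 28, 28)  # fake batch of grayscale images
logits = model(images)              # shape (4, 10)
```

Note the two styles the chapter contrasts: `nn.Conv2d` is the object-oriented operator, while `F.max_pool2d` is its functional counterpart.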
Using Convolutional Neural Networks
In this last chapter, we learn how to make neural networks work well in practice, using concepts like regularization, batch-normalization, and transfer learning.

- The sequential module (50 xp)
- Sequential module - init method (100 xp)
- Sequential module - forward() method (100 xp)
- The problem of overfitting (50 xp)
- Validation set (100 xp)
- Detecting overfitting (50 xp)
- Regularization techniques (50 xp)
- L2-regularization (100 xp)
- Dropout (100 xp)
- Batch-normalization (100 xp)
- Transfer learning (50 xp)
- Finetuning a CNN (100 xp)
- Torchvision module (100 xp)
- Congratulations! (50 xp)
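Several of these ideas combine naturally in one snippet: a network declared with the sequential module, regularized with dropout and batch-normalization, plus the freezing step used in transfer learning. A sketch with arbitrary layer sizes, not the course's exercise code:

```python
import torch
import torch.nn as nn

# nn.Sequential chains layers; BatchNorm1d and Dropout regularize them.
classifier = nn.Sequential(
    nn.Linear(28 * 28, 200),
    nn.BatchNorm1d(200),   # batch-normalization
    nn.ReLU(),
    nn.Dropout(p=0.5),     # dropout regularization
    nn.Linear(200, 10),
)

# Transfer-learning pattern: freeze an (imagined pretrained) layer so only
# the remaining layers are updated during finetuning.
for param in classifier[0].parameters():
    param.requires_grad = False

out = classifier(torch.randn(16, 28 * 28))  # shape (16, 10)
```

In a real transfer-learning setup, the frozen layers come from a pretrained model (e.g. one loaded via the `torchvision.models` module), and L2-regularization is typically added through the optimizer's `weight_decay` argument.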
Ismail Elezi
Researcher and PhD Student at Ca' Foscari University of Venice
I am a third-year PhD student in deep learning, supervised by Professor Marcello Pelillo at Ca' Foscari University of Venice. During my PhD, I did an exchange at the ZHAW Datalab (Switzerland), working with Professor Thilo Stadelmann. From January on, I will be visiting Professor Laura Leal-Taixe's lab at the Technical University of Munich.