Building a Transformer with PyTorch

Learn how to build a Transformer model using PyTorch, a powerful tool in modern machine learning.
Updated Aug 2023  · 26 min read

The aim of this tutorial is to provide a comprehensive understanding of how to construct a Transformer model using PyTorch. Transformers are among the most powerful models in modern machine learning. They have revolutionized the field, particularly in Natural Language Processing (NLP) tasks such as language translation and text summarization. In these tasks, Transformers have largely replaced Long Short-Term Memory (LSTM) networks thanks to their ability to handle long-range dependencies and to parallelize computation.

The tool utilized in this guide to build the Transformer is PyTorch, a popular open-source machine learning library known for its simplicity, versatility, and efficiency. With a dynamic computation graph and extensive libraries, PyTorch has become a go-to for researchers and developers in the realm of machine learning and artificial intelligence.

For those unfamiliar with PyTorch, DataCamp's Introduction to Deep Learning with PyTorch course is recommended for a solid grounding.

Background and Theory

First introduced in the paper Attention is All You Need by Vaswani et al., Transformers have since become a cornerstone of many NLP tasks due to their unique design and effectiveness.

At the heart of Transformers is the attention mechanism, specifically 'self-attention,' which allows the model to weigh and prioritize different parts of the input data. It is fundamentally a weighting scheme: when producing an output, the model assigns each word or feature in the input sequence a 'weight' that signifies its importance. This is what enables Transformers to manage long-range dependencies in data.

For instance, in a sentence translation task, while translating a particular word, the model might assign higher attention weights to words that are grammatically or semantically related to the target word. This process allows the Transformer to capture dependencies between words or features, regardless of their distance from each other in the sequence.
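
To make this concrete, scaled dot-product attention can be expressed in a few lines of PyTorch. The following is a minimal, self-contained sketch using toy tensors (the sizes are arbitrary and unrelated to the model built later in this tutorial):

import math
import torch

# Toy example: a sequence of 4 tokens, each represented by an 8-dimensional vector
Q = torch.randn(4, 8)  # queries
K = torch.randn(4, 8)  # keys
V = torch.randn(4, 8)  # values

scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))  # (4, 4) pairwise similarity scores
weights = torch.softmax(scores, dim=-1)                   # each row sums to 1
output = weights @ V                                      # weighted combination of the values

print(weights.sum(dim=-1))  # tensor of ones: each query distributes its attention over all keys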

Transformers' impact in the field of NLP cannot be overstated. They have outperformed traditional models in many tasks, demonstrating superior capacity to comprehend and generate human language in a more nuanced way.

For a deeper understanding of NLP, DataCamp's Introduction to Natural Language Processing in Python course is a recommended resource.

Setting up PyTorch

Before diving into building a Transformer, it is essential to set up the working environment correctly. First and foremost, PyTorch needs to be installed. PyTorch (version 2.0.1 was the current stable release at the time of writing) can be installed through either the pip or conda package manager.

For pip, use the command:

pip3 install torch torchvision torchaudio

For conda, use the command:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

Note that the conda command above installs a CUDA-enabled build; for a CPU-only installation, refer to the PyTorch documentation.
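
To confirm that the installation works and to check whether a GPU is visible to PyTorch, a quick sanity check such as the following can be run (the output will vary depending on your machine):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is detected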

Additionally, it is beneficial to have a basic understanding of deep learning concepts, as these will be fundamental to understanding the operation of Transformers. For those who need a refresher, the DataCamp course Deep Learning in Python is a valuable resource that covers key concepts in deep learning.

Building the Transformer Model with PyTorch

To build the Transformer model, the following steps are necessary:

  1. Importing the libraries and modules
  2. Defining the basic building blocks - Multi-head Attention, Position-Wise Feed-Forward Networks, Positional Encoding
  3. Building the Encoder block
  4. Building the Decoder block
  5. Combining the Encoder and Decoder layers to create the complete Transformer network

1. Importing the necessary libraries and modules

We’ll start with importing the PyTorch library for core functionality, the neural network module for creating neural networks, the optimization module for training networks, and the data utility functions for handling data. Additionally, we’ll import the standard Python math module for mathematical operations and the copy module for creating copies of complex objects.

These tools set the foundation for defining the model's architecture, managing data, and establishing the training process.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import math
import copy

2. Defining the basic building blocks: Multi-Head Attention, Position-wise Feed-Forward Networks, Positional Encoding

Multi-head Attention

The Multi-Head Attention mechanism computes the attention between each pair of positions in a sequence. It consists of multiple “attention heads” that capture different aspects of the input sequence.

To know more about Multi-Head Attention, check out this Attention mechanisms section of the Large Language Models (LLMs) Concepts course.

Figure 1. Multi-Head Attention (source: image created by author)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        # Ensure that the model dimension (d_model) is divisible by the number of heads
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        
        # Initialize dimensions
        self.d_model = d_model # Model's dimension
        self.num_heads = num_heads # Number of attention heads
        self.d_k = d_model // num_heads # Dimension of each head's key, query, and value
        
        # Linear layers for transforming inputs
        self.W_q = nn.Linear(d_model, d_model) # Query transformation
        self.W_k = nn.Linear(d_model, d_model) # Key transformation
        self.W_v = nn.Linear(d_model, d_model) # Value transformation
        self.W_o = nn.Linear(d_model, d_model) # Output transformation
        
    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        # Calculate attention scores
        attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
        
        # Apply mask if provided (useful for preventing attention to certain parts like padding)
        if mask is not None:
            attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
        
        # Softmax is applied to obtain attention probabilities
        attn_probs = torch.softmax(attn_scores, dim=-1)
        
        # Multiply by values to obtain the final output
        output = torch.matmul(attn_probs, V)
        return output
        
    def split_heads(self, x):
        # Reshape the input to have num_heads for multi-head attention
        batch_size, seq_length, d_model = x.size()
        return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)
        
    def combine_heads(self, x):
        # Combine the multiple heads back to original shape
        batch_size, _, seq_length, d_k = x.size()
        return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)
        
    def forward(self, Q, K, V, mask=None):
        # Apply linear transformations and split heads
        Q = self.split_heads(self.W_q(Q))
        K = self.split_heads(self.W_k(K))
        V = self.split_heads(self.W_v(V))
        
        # Perform scaled dot-product attention
        attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
        
        # Combine heads and apply output transformation
        output = self.W_o(self.combine_heads(attn_output))
        return output

Class Definition and Initialization:

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):

The class is defined as a subclass of PyTorch's nn.Module.

  1. d_model: Dimensionality of the input.
  2. num_heads: The number of attention heads to split the input into.

The initialization checks if d_model is divisible by num_heads, and then defines the transformation weights for query, key, value, and output.

Scaled Dot-Product Attention:

def scaled_dot_product_attention(self, Q, K, V, mask=None):
  1. Calculating Attention Scores: attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k). Here, the attention scores are calculated by taking the dot product of queries (Q) and keys (K), and then scaling by the square root of the key dimension (d_k).
  2. Applying Mask: If a mask is provided, it is applied to the attention scores to mask out specific values.
  3. Calculating Attention Weights: The attention scores are passed through a softmax function to convert them into probabilities that sum to 1.
  4. Calculating Output: The final output of the attention is calculated by multiplying the attention weights by the values (V).

Splitting Heads:

def split_heads(self, x):

This method reshapes the input x into the shape (batch_size, num_heads, seq_length, d_k). It enables the model to process multiple attention heads concurrently, allowing for parallel computation.

Combining Heads:

def combine_heads(self, x):

After applying attention to each head separately, this method combines the results back into a single tensor of shape (batch_size, seq_length, d_model). This prepares the result for further processing.

Forward Method:

def forward(self, Q, K, V, mask=None):

The forward method is where the actual computation happens:

  1. Apply Linear Transformations: The queries (Q), keys (K), and values (V) are first passed through linear transformations using the weights defined in the initialization.
  2. Split Heads: The transformed Q, K, V are split into multiple heads using the split_heads method.
  3. Apply Scaled Dot-Product Attention: The scaled_dot_product_attention method is called on the split heads.
  4. Combine Heads: The results from each head are combined back into a single tensor using the combine_heads method.
  5. Apply Output Transformation: Finally, the combined tensor is passed through an output linear transformation.

In summary, the MultiHeadAttention class encapsulates the multi-head attention mechanism commonly used in transformer models. It takes care of splitting the input into multiple attention heads, applying attention to each head, and then combining the results. By doing so, the model can capture various relationships in the input data at different scales, improving the expressive ability of the model.
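
As a quick shape check, the layer can be applied to a random batch; for self-attention the same tensor is passed as queries, keys, and values. This is an illustrative usage sketch with arbitrarily chosen sizes, not part of the model built below:

mha = MultiHeadAttention(d_model=512, num_heads=8)
x = torch.randn(2, 10, 512)   # (batch_size, seq_length, d_model)
out = mha(x, x, x)            # self-attention: Q = K = V = x
print(out.shape)              # torch.Size([2, 10, 512])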

Position-wise Feed-Forward Networks

class PositionWiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super(PositionWiseFeedForward, self).__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

Class Definition:

class PositionWiseFeedForward(nn.Module):

The class is a subclass of PyTorch's nn.Module, which means it will inherit all functionalities required to work with neural network layers.

Initialization:

def __init__(self, d_model, d_ff):
    super(PositionWiseFeedForward, self).__init__()
    self.fc1 = nn.Linear(d_model, d_ff)
    self.fc2 = nn.Linear(d_ff, d_model)
    self.relu = nn.ReLU()
  1. d_model: Dimensionality of the model's input and output.
  2. d_ff: Dimensionality of the inner layer in the feed-forward network.
  3. self.fc1 and self.fc2: Two fully connected (linear) layers with input and output dimensions as defined by d_model and d_ff.
  4. self.relu: ReLU (Rectified Linear Unit) activation function, which introduces non-linearity between the two linear layers.

Forward Method:

def forward(self, x):
    return self.fc2(self.relu(self.fc1(x)))
  1. x: The input to the feed-forward network.
  2. self.fc1(x): The input is first passed through the first linear layer (fc1).
  3. self.relu(...): The output of fc1 is then passed through a ReLU activation function. ReLU replaces all negative values with zeros, introducing non-linearity into the model.
  4. self.fc2(...): The activated output is then passed through the second linear layer (fc2), producing the final output.

In summary, the PositionWiseFeedForward class defines a position-wise feed-forward neural network that consists of two linear layers with a ReLU activation function in between. In the context of transformer models, this feed-forward network is applied to each position separately and identically. It helps in transforming the features learned by the attention mechanisms within the transformer, acting as an additional processing step for the attention outputs.
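
A brief, illustrative shape check confirms that the feed-forward block preserves the (batch_size, seq_length, d_model) shape, since nn.Linear acts on the last dimension independently for every position:

ffn = PositionWiseFeedForward(d_model=512, d_ff=2048)
x = torch.randn(2, 10, 512)   # (batch_size, seq_length, d_model)
print(ffn(x).shape)           # torch.Size([2, 10, 512])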

Positional Encoding

Positional Encoding is used to inject the position information of each token in the input sequence. It uses sine and cosine functions of different frequencies to generate the positional encoding.

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length):
        super(PositionalEncoding, self).__init__()
        
        pe = torch.zeros(max_seq_length, d_model)
        position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
        
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        
        self.register_buffer('pe', pe.unsqueeze(0))
        
    def forward(self, x):
        return x + self.pe[:, :x.size(1)]

Class Definition:

class PositionalEncoding(nn.Module):

The class is defined as a subclass of PyTorch's nn.Module, allowing it to be used as a standard PyTorch layer.

Initialization:

def __init__(self, d_model, max_seq_length):
    super(PositionalEncoding, self).__init__()
    
    pe = torch.zeros(max_seq_length, d_model)
    position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
    
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    
    self.register_buffer('pe', pe.unsqueeze(0))
  1. d_model: The dimension of the model's input.
  2. max_seq_length: The maximum length of the sequence for which positional encodings are pre-computed.
  3. pe: A tensor filled with zeros, which will be populated with positional encodings.
  4. position: A tensor containing the position indices for each position in the sequence.
  5. div_term: A term used to scale the position indices in a specific way.
  6. The sine function is applied to the even indices and the cosine function to the odd indices of pe.
  7. Finally, pe is registered as a buffer, which means it will be part of the module's state but will not be considered a trainable parameter.

Forward Method:

def forward(self, x):
    return x + self.pe[:, :x.size(1)]

The forward method simply adds the positional encodings to the input x.

It uses the first x.size(1) elements of pe to ensure that the positional encodings match the actual sequence length of x.

Summary

The PositionalEncoding class adds information about the position of tokens within the sequence. Since the transformer model lacks inherent knowledge of the order of tokens (due to its self-attention mechanism), this class helps the model to consider the position of tokens in the sequence. The sinusoidal functions used are chosen to allow the model to easily learn to attend to relative positions, as they produce a unique and smooth encoding for each position in the sequence.
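
Because the encoding depends only on position, two different examples of the same length receive exactly the same additive pattern. A small illustrative check (with arbitrary sizes) makes this visible:

pe = PositionalEncoding(d_model=512, max_seq_length=100)
x = torch.zeros(2, 10, 512)                     # dummy embeddings
encoded = pe(x)
print(encoded.shape)                            # torch.Size([2, 10, 512])
print(torch.allclose(encoded[0], encoded[1]))   # True: same positions, same encoding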

3. Building the Encoder Blocks

Figure 2. The Encoder part of the transformer network (Source: image from the original paper)

class EncoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)
        
    def forward(self, x, mask):
        attn_output = self.self_attn(x, x, x, mask)
        x = self.norm1(x + self.dropout(attn_output))
        ff_output = self.feed_forward(x)
        x = self.norm2(x + self.dropout(ff_output))
        return x

Class Definition:

class EncoderLayer(nn.Module):

The class is defined as a subclass of PyTorch's nn.Module, which means it can be used as a building block for neural networks in PyTorch.

Initialization:

def __init__(self, d_model, num_heads, d_ff, dropout):
    super(EncoderLayer, self).__init__()
    self.self_attn = MultiHeadAttention(d_model, num_heads)
    self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
    self.norm1 = nn.LayerNorm(d_model)
    self.norm2 = nn.LayerNorm(d_model)
    self.dropout = nn.Dropout(dropout)

Parameters:

  1. d_model: The dimensionality of the input.
  2. num_heads: The number of attention heads in the multi-head attention.
  3. d_ff: The dimensionality of the inner layer in the position-wise feed-forward network.
  4. dropout: The dropout rate used for regularization.

Components:

  1. self.self_attn: Multi-head attention mechanism.
  2. self.feed_forward: Position-wise feed-forward neural network.
  3. self.norm1 and self.norm2: Layer normalization layers, which stabilize training by normalizing activations across the feature dimension.
  4. self.dropout: Dropout layer, used to prevent overfitting by randomly setting some activations to zero during training.

Forward Method:

def forward(self, x, mask):
    attn_output = self.self_attn(x, x, x, mask)
    x = self.norm1(x + self.dropout(attn_output))
    ff_output = self.feed_forward(x)
    x = self.norm2(x + self.dropout(ff_output))
    return x

Input:

  1. x: The input to the encoder layer.
  2. mask: Optional mask to ignore certain parts of the input.

Processing Steps:

  1. Self-Attention: The input x is passed through the multi-head self-attention mechanism.
  2. Add & Normalize (after Attention): The attention output is added to the original input (residual connection), followed by dropout and normalization using norm1.
  3. Feed-Forward Network: The output from the previous step is passed through the position-wise feed-forward network.
  4. Add & Normalize (after Feed-Forward): Similar to step 2, the feed-forward output is added to the input of this stage (residual connection), followed by dropout and normalization using norm2.
  5. Output: The processed tensor is returned as the output of the encoder layer.

Summary:

The EncoderLayer class defines a single layer of the transformer's encoder. It encapsulates a multi-head self-attention mechanism followed by a position-wise feed-forward neural network, with residual connections, layer normalization, and dropout applied as appropriate. These components together allow the encoder to capture complex relationships in the input data and transform them into a useful representation for downstream tasks. Typically, multiple such encoder layers are stacked to form the complete encoder part of a transformer model.
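
As a small usage sketch (with illustrative hyperparameters), a single encoder layer maps a (batch_size, seq_length, d_model) tensor to another tensor of the same shape:

enc_layer = EncoderLayer(d_model=512, num_heads=8, d_ff=2048, dropout=0.1)
x = torch.randn(2, 10, 512)
out = enc_layer(x, mask=None)   # no padding mask in this toy example
print(out.shape)                # torch.Size([2, 10, 512])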

4. Building the Decoder Blocks

Figure 3. The Decoder part of the Transformer network (Source: image from the original paper)

class DecoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super(DecoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.cross_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)
        
    def forward(self, x, enc_output, src_mask, tgt_mask):
        attn_output = self.self_attn(x, x, x, tgt_mask)
        x = self.norm1(x + self.dropout(attn_output))
        attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
        x = self.norm2(x + self.dropout(attn_output))
        ff_output = self.feed_forward(x)
        x = self.norm3(x + self.dropout(ff_output))
        return x

Class Definition:

class DecoderLayer(nn.Module):

Initialization:

def __init__(self, d_model, num_heads, d_ff, dropout):
    super(DecoderLayer, self).__init__()
    self.self_attn = MultiHeadAttention(d_model, num_heads)
    self.cross_attn = MultiHeadAttention(d_model, num_heads)
    self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
    self.norm1 = nn.LayerNorm(d_model)
    self.norm2 = nn.LayerNorm(d_model)
    self.norm3 = nn.LayerNorm(d_model)
    self.dropout = nn.Dropout(dropout)

Parameters:

  1. d_model: The dimensionality of the input.
  2. num_heads: The number of attention heads in the multi-head attention.
  3. d_ff: The dimensionality of the inner layer in the feed-forward network.
  4. dropout: The dropout rate for regularization.

Components:

  1. self.self_attn: Multi-head self-attention mechanism for the target sequence.
  2. self.cross_attn: Multi-head attention mechanism that attends to the encoder's output.
  3. self.feed_forward: Position-wise feed-forward neural network.
  4. self.norm1, self.norm2, self.norm3: Layer normalization components.
  5. self.dropout: Dropout layer for regularization.

Forward Method:

def forward(self, x, enc_output, src_mask, tgt_mask):
    attn_output = self.self_attn(x, x, x, tgt_mask)
    x = self.norm1(x + self.dropout(attn_output))
    attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
    x = self.norm2(x + self.dropout(attn_output))
    ff_output = self.feed_forward(x)
    x = self.norm3(x + self.dropout(ff_output))
    return x

Input:

  1. x: The input to the decoder layer.
  2. enc_output: The output from the corresponding encoder (used in the cross-attention step).
  3. src_mask: Source mask to ignore certain parts of the encoder's output.
  4. tgt_mask: Target mask to ignore certain parts of the decoder's input.

Processing Steps:

  1. Self-Attention on Target Sequence: The input x is processed through a self-attention mechanism.
  2. Add & Normalize (after Self-Attention): The output from self-attention is added to the original x, followed by dropout and normalization using norm1.
  3. Cross-Attention with Encoder Output: The normalized output from the previous step is processed through a cross-attention mechanism that attends to the encoder's output enc_output.
  4. Add & Normalize (after Cross-Attention): The output from cross-attention is added to the input of this stage, followed by dropout and normalization using norm2.
  5. Feed-Forward Network: The output from the previous step is passed through the feed-forward network.
  6. Add & Normalize (after Feed-Forward): The feed-forward output is added to the input of this stage, followed by dropout and normalization using norm3.
  7. Output: The processed tensor is returned as the output of the decoder layer.

Summary:

The DecoderLayer class defines a single layer of the transformer's decoder. It consists of a multi-head self-attention mechanism, a multi-head cross-attention mechanism (that attends to the encoder's output), a position-wise feed-forward neural network, and the corresponding residual connections, layer normalization, and dropout layers. This combination enables the decoder to generate meaningful outputs based on the encoder's representations, taking into account both the target sequence and the source sequence. As with the encoder, multiple decoder layers are typically stacked to form the complete decoder part of a transformer model.
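
A comparable sketch for the decoder layer (again with illustrative sizes and no masks) shows that it consumes both the target representation and the encoder output, and returns a tensor shaped like its target input:

dec_layer = DecoderLayer(d_model=512, num_heads=8, d_ff=2048, dropout=0.1)
tgt = torch.randn(2, 10, 512)        # decoder input
enc_out = torch.randn(2, 12, 512)    # encoder output (the source length may differ)
out = dec_layer(tgt, enc_out, src_mask=None, tgt_mask=None)
print(out.shape)                     # torch.Size([2, 10, 512])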

Next, the Encoder and Decoder blocks are brought together to construct the comprehensive Transformer model.

5. Combining the Encoder and Decoder layers to create the complete Transformer network

Figure 4. The Transformer Network (Source: image from the original paper)

class Transformer(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout):
        super(Transformer, self).__init__()
        self.encoder_embedding = nn.Embedding(src_vocab_size, d_model)
        self.decoder_embedding = nn.Embedding(tgt_vocab_size, d_model)
        self.positional_encoding = PositionalEncoding(d_model, max_seq_length)

        self.encoder_layers = nn.ModuleList([EncoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
        self.decoder_layers = nn.ModuleList([DecoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])

        self.fc = nn.Linear(d_model, tgt_vocab_size)
        self.dropout = nn.Dropout(dropout)

    def generate_mask(self, src, tgt):
        src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
        tgt_mask = (tgt != 0).unsqueeze(1).unsqueeze(3)
        seq_length = tgt.size(1)
        nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
        tgt_mask = tgt_mask & nopeak_mask
        return src_mask, tgt_mask

    def forward(self, src, tgt):
        src_mask, tgt_mask = self.generate_mask(src, tgt)
        src_embedded = self.dropout(self.positional_encoding(self.encoder_embedding(src)))
        tgt_embedded = self.dropout(self.positional_encoding(self.decoder_embedding(tgt)))

        enc_output = src_embedded
        for enc_layer in self.encoder_layers:
            enc_output = enc_layer(enc_output, src_mask)

        dec_output = tgt_embedded
        for dec_layer in self.decoder_layers:
            dec_output = dec_layer(dec_output, enc_output, src_mask, tgt_mask)

        output = self.fc(dec_output)
        return output

Class Definition:

class Transformer(nn.Module):

Initialization:

def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout):

The constructor takes the following parameters:

  1. src_vocab_size: Source vocabulary size.
  2. tgt_vocab_size: Target vocabulary size.
  3. d_model: The dimensionality of the model's embeddings.
  4. num_heads: Number of attention heads in the multi-head attention mechanism.
  5. num_layers: Number of layers for both the encoder and the decoder.
  6. d_ff: Dimensionality of the inner layer in the feed-forward network.
  7. max_seq_length: Maximum sequence length for positional encoding.
  8. dropout: Dropout rate for regularization.

And it defines the following components:

  1. self.encoder_embedding: Embedding layer for the source sequence.
  2. self.decoder_embedding: Embedding layer for the target sequence.
  3. self.positional_encoding: Positional encoding component.
  4. self.encoder_layers: A list of encoder layers.
  5. self.decoder_layers: A list of decoder layers.
  6. self.fc: Final fully connected (linear) layer mapping to target vocabulary size.
  7. self.dropout: Dropout layer.

Generate Mask Method:

def generate_mask(self, src, tgt):

This method is used to create masks for the source and target sequences, ensuring that padding tokens are ignored and that future tokens are not visible during training for the target sequence.
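
The interplay of the padding mask and the "no-peek" (subsequent) mask can be illustrated on a tiny standalone example that mirrors the logic above, where token id 0 is treated as padding:

tgt = torch.tensor([[5, 7, 9, 0]])                    # one sequence whose last token is padding
pad_mask = (tgt != 0).unsqueeze(1).unsqueeze(3)       # (1, 1, 4, 1)
seq_length = tgt.size(1)
nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
tgt_mask = pad_mask & nopeak_mask                     # broadcasts to (1, 1, 4, 4)
print(tgt_mask[0, 0].int())
# Lower-triangular pattern with the padded position's row zeroed out:
# tensor([[1, 0, 0, 0],
#         [1, 1, 0, 0],
#         [1, 1, 1, 0],
#         [0, 0, 0, 0]], dtype=torch.int32)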

Forward Method:

def forward(self, src, tgt):

This method defines the forward pass for the Transformer, taking source and target sequences and producing the output predictions.

  1. Input Embedding and Positional Encoding: The source and target sequences are first embedded using their respective embedding layers and then added to their positional encodings.
  2. Encoder Layers: The source sequence is passed through the encoder layers, with the final encoder output representing the processed source sequence.
  3. Decoder Layers: The target sequence and the encoder's output are passed through the decoder layers, resulting in the decoder's output.
  4. Final Linear Layer: The decoder's output is mapped to the target vocabulary size using a fully connected (linear) layer.

Output:

The final output is a tensor representing the model's predictions for the target sequence.

Summary:

The Transformer class brings together the various components of a Transformer model, including the embeddings, positional encoding, encoder layers, and decoder layers. It provides a convenient interface for training and inference, encapsulating the complexities of multi-head attention, feed-forward networks, and layer normalization.

This implementation follows the standard Transformer architecture, making it suitable for sequence-to-sequence tasks like machine translation, text summarization, etc. The inclusion of masking ensures that the model adheres to the causal dependencies within sequences, ignoring padding tokens and preventing information leakage from future tokens.

These sequential steps empower the Transformer model to efficiently process input sequences and produce corresponding output sequences.

Training the PyTorch Transformer Model

Sample data preparation

For illustrative purposes, a dummy dataset will be crafted in this example. However, in a practical scenario, a more substantial dataset would be employed, and the process would involve text preprocessing along with the creation of vocabulary mappings for both the source and target languages.

src_vocab_size = 5000
tgt_vocab_size = 5000
d_model = 512
num_heads = 8
num_layers = 6
d_ff = 2048
max_seq_length = 100
dropout = 0.1

transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout)

# Generate random sample data
src_data = torch.randint(1, src_vocab_size, (64, max_seq_length))  # (batch_size, seq_length)
tgt_data = torch.randint(1, tgt_vocab_size, (64, max_seq_length))  # (batch_size, seq_length)

Hyperparameters:

These values define the architecture and behavior of the transformer model:

  1. src_vocab_size, tgt_vocab_size: Vocabulary sizes for source and target sequences, both set to 5000.
  2. d_model: Dimensionality of the model's embeddings, set to 512.
  3. num_heads: Number of attention heads in the multi-head attention mechanism, set to 8.
  4. num_layers: Number of layers for both the encoder and the decoder, set to 6.
  5. d_ff: Dimensionality of the inner layer in the feed-forward network, set to 2048.
  6. max_seq_length: Maximum sequence length for positional encoding, set to 100.
  7. dropout: Dropout rate for regularization, set to 0.1.

Creating a Transformer Instance:

transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout)

This line creates an instance of the Transformer class, initializing it with the given hyperparameters. The instance will have the architecture and behavior defined by these hyperparameters.
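
With these hyperparameters the model is already fairly large. A quick, illustrative way to see this is to count its trainable parameters:

num_params = sum(p.numel() for p in transformer.parameters() if p.requires_grad)
print(f"Trainable parameters: {num_params:,}")   # on the order of tens of millions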

Generating Random Sample Data:

The following lines generate random source and target sequences:

  1. src_data: Random integers from 1 to src_vocab_size - 1 (the upper bound of torch.randint is exclusive), representing a batch of source sequences with shape (64, max_seq_length).
  2. tgt_data: Random integers from 1 to tgt_vocab_size - 1, representing a batch of target sequences with shape (64, max_seq_length).
  3. Sampling starts at 1 rather than 0 because index 0 is reserved for padding in the masks and the loss function. These random sequences simulate a batch of 64 examples with sequences of length 100 and can be fed directly to the transformer model.

Summary:

The code snippet demonstrates how to initialize a transformer model and generate random source and target sequences that can be fed into the model. The chosen hyperparameters determine the specific structure and properties of the transformer. This setup could be part of a larger script where the model is trained and evaluated on actual sequence-to-sequence tasks, such as machine translation or text summarization.

Training the Model

Next, the model will be trained utilizing the aforementioned sample data. However, in a real-world scenario, a significantly larger dataset would be employed, which would typically be partitioned into distinct sets for training and validation purposes.

criterion = nn.CrossEntropyLoss(ignore_index=0)
optimizer = optim.Adam(transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)

transformer.train()

for epoch in range(100):
    optimizer.zero_grad()
    output = transformer(src_data, tgt_data[:, :-1])
    loss = criterion(output.contiguous().view(-1, tgt_vocab_size), tgt_data[:, 1:].contiguous().view(-1))
    loss.backward()
    optimizer.step()
    print(f"Epoch: {epoch+1}, Loss: {loss.item()}")

Loss Function and Optimizer:

  1. criterion = nn.CrossEntropyLoss(ignore_index=0): Defines the loss function as cross-entropy loss. The ignore_index argument is set to 0, meaning the loss will not consider targets with an index of 0 (typically reserved for padding tokens).
  2. optimizer = optim.Adam(...): Defines the optimizer as Adam with a learning rate of 0.0001 and specific beta values.

Model Training Mode:

  1. transformer.train(): Sets the transformer model to training mode, enabling behaviors like dropout that only apply during training.

Training Loop:

The code snippet trains the model for 100 epochs using a typical training loop:

  1. for epoch in range(100): Iterates over 100 training epochs.
  2. optimizer.zero_grad(): Clears the gradients from the previous iteration.
  3. output = transformer(src_data, tgt_data[:, :-1]): Passes the source data and the target data (excluding the last token in each sequence) through the transformer. This is common in sequence-to-sequence tasks where the target is shifted by one token.
  4. loss = criterion(...): Computes the loss between the model's predictions and the target data (excluding the first token in each sequence). The loss is calculated by reshaping the data into one-dimensional tensors and using the cross-entropy loss function.
  5. loss.backward(): Computes the gradients of the loss with respect to the model's parameters.
  6. optimizer.step(): Updates the model's parameters using the computed gradients.
  7. print(f"Epoch: {epoch+1}, Loss: {loss.item()}"): Prints the current epoch number and the loss value for that epoch.

Summary:

This code snippet trains the transformer model on randomly generated source and target sequences for 100 epochs. It uses the Adam optimizer and the cross-entropy loss function. The loss is printed for each epoch, allowing you to monitor the training progress. In a real-world scenario, you would replace the random source and target sequences with actual data from your task, such as machine translation.
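
In a real training run you would typically also checkpoint the model so that training can be resumed or the learned weights reused later. A minimal sketch (the file name here is arbitrary) could look like this:

# Save the model and optimizer state (path chosen purely for illustration)
torch.save({
    'model_state_dict': transformer.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'transformer_checkpoint.pt')

# Later: restore the weights before resuming training or running inference
checkpoint = torch.load('transformer_checkpoint.pt')
transformer.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])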

Transformer Model Performance Evaluation

After training the model, its performance can be evaluated on a validation dataset or test dataset. The following is an example of how this could be done:

transformer.eval()

# Generate random sample validation data
val_src_data = torch.randint(1, src_vocab_size, (64, max_seq_length))  # (batch_size, seq_length)
val_tgt_data = torch.randint(1, tgt_vocab_size, (64, max_seq_length))  # (batch_size, seq_length)

with torch.no_grad():
    val_output = transformer(val_src_data, val_tgt_data[:, :-1])
    val_loss = criterion(val_output.contiguous().view(-1, tgt_vocab_size), val_tgt_data[:, 1:].contiguous().view(-1))
    print(f"Validation Loss: {val_loss.item()}")

Evaluation Mode:

  1. transformer.eval(): Puts the transformer model in evaluation mode. This is important because it turns off certain behaviors like dropout that are only used during training.

Generate Random Validation Data:

  1. val_src_data: Random integers between 1 and src_vocab_size, representing a batch of validation source sequences with shape (64, max_seq_length).
  2. val_tgt_data: Random integers between 1 and tgt_vocab_size, representing a batch of validation target sequences with shape (64, max_seq_length).

Validation Loop:

  1. with torch.no_grad(): Disables gradient computation, as we don't need to compute gradients during validation. This can reduce memory consumption and speed up computations.
  2. val_output = transformer(val_src_data, val_tgt_data[:, :-1]): Passes the validation source data and the validation target data (excluding the last token in each sequence) through the transformer.
  3. val_loss = criterion(...): Computes the loss between the model's predictions and the validation target data (excluding the first token in each sequence). The loss is calculated by reshaping the data into one-dimensional tensors and using the previously defined cross-entropy loss function.
  4. print(f"Validation Loss: {val_loss.item()}"): Prints the validation loss value.

Summary:

This code snippet evaluates the transformer model on a randomly generated validation dataset, computes the validation loss, and prints it. In a real-world scenario, the random validation data should be replaced with actual validation data from the task you are working on. The validation loss can give you an indication of how well your model is performing on unseen data, which is a critical measure of the model's generalization ability.
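
Note that a teacher-forced validation loss is not the same as actually generating an output sequence. At inference time the target is produced one token at a time, feeding the model's own predictions back in. A minimal greedy-decoding sketch is shown below; it assumes, purely for illustration, that token id 1 acts as a start-of-sequence marker, since the dummy data defines no real vocabulary:

def greedy_decode(model, src, start_token=1, max_len=20):
    model.eval()
    # Start the target sequence with the assumed start-of-sequence token
    ys = torch.full((src.size(0), 1), start_token, dtype=torch.long)
    with torch.no_grad():
        for _ in range(max_len - 1):
            out = model(src, ys)                       # (batch_size, current_length, tgt_vocab_size)
            next_token = out[:, -1, :].argmax(dim=-1, keepdim=True)
            ys = torch.cat([ys, next_token], dim=1)    # append the most likely next token
    return ys

generated = greedy_decode(transformer, val_src_data[:1])
print(generated.shape)  # torch.Size([1, 20])

This sketch re-runs the encoder at every step, which is wasteful but keeps the code short; production implementations cache the encoder output and often use beam search instead of greedy decoding.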

For further details about Transformers and Hugging Face, our tutorial, An Introduction to Using Transformers and Hugging Face, is useful.

Conclusion and Further Resources

In conclusion, this tutorial demonstrated how to construct a Transformer model using PyTorch, one of the most versatile tools for deep learning. With their capacity for parallelization and the ability to capture long-term dependencies in data, Transformers have immense potential in various fields, especially NLP tasks like translation, summarization, and sentiment analysis.

For those eager to deepen their understanding of advanced deep learning concepts and techniques, consider exploring the course Advanced Deep Learning with Keras on DataCamp. You can also read about building a simple neural network with PyTorch in a separate tutorial.
