Fine-Tuning with Llama 3

2 hr · 1,700 XP


Course Description

Fine-tuning the Llama model

This course provides a comprehensive guide to preparing and working with Llama models. Through hands-on examples and practical exercises, you'll learn how to configure various Llama fine-tuning tasks.

Prepare datasets for fine-tuning

Start by exploring dataset preparation techniques, including loading, splitting, and saving datasets using the Hugging Face Datasets library, ensuring high-quality data for your Llama projects.
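
For example, a typical preparation step with the Datasets library looks roughly like this (a minimal sketch; the dataset name and output paths are placeholders, not necessarily those used in the course):

    from datasets import load_dataset

    # Load a dataset from the Hugging Face Hub (placeholder name).
    dataset = load_dataset("yahma/alpaca-cleaned", split="train")

    # Hold out 10% of the examples for evaluation.
    splits = dataset.train_test_split(test_size=0.1, seed=42)

    # Save the prepared splits so fine-tuning jobs can reload them later.
    splits["train"].save_to_disk("data/train")
    splits["test"].save_to_disk("data/eval")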

Work with fine-tuning frameworks

Explore fine-tuning workflows using cutting-edge libraries such as TorchTune and Hugging Face’s SFTTrainer. You'll learn how to configure fine-tuning recipes, set up training arguments, and use efficient techniques like LoRA (Low-Rank Adaptation) and quantization with BitsAndBytes to optimize resource usage. By combining techniques learned throughout the course, you’ll be able to customize Llama models to suit your projects' needs in an efficient way.
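
As a rough sketch of how these pieces fit together (the model ID, hyperparameters, and paths are illustrative, and exact argument names vary across trl versions):

    import torch
    from datasets import load_from_disk
    from peft import LoraConfig
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from trl import SFTConfig, SFTTrainer

    # Load the base model in 4-bit precision via BitsAndBytes to cut memory use.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.2-1B",  # illustrative model ID
        quantization_config=BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        ),
    )

    # LoRA trains small low-rank adapter matrices instead of all model weights.
    trainer = SFTTrainer(
        model=model,
        train_dataset=load_from_disk("data/train"),
        args=SFTConfig(output_dir="llama3-sft", max_steps=100),
        peft_config=LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
    )
    trainer.train()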
For Business

Training 2 or more people?

Get your team access to the full DataCamp platform, including all the features.
DataCamp for Business. For a bespoke solution, book a demo.
  1. Preparing for Llama fine-tuning

    Free

    Explore options for fine-tuning Llama 3 models and dive into TorchTune, a library built to simplify fine-tuning. This chapter guides you through data preparation, TorchTune's recipe-based system, and efficient task configuration, providing the key steps to launch your first fine-tuning task. A sketch of the TorchTune commands involved follows the lesson list below.

    The Llama fine-tuning libraries
    50 XP
    Listing TorchTune recipes
    50 XP
    Running a TorchTune task
    50 XP
    Preprocessing data for fine-tuning
    50 XP
    Filtering datasets for evaluation
    100 XP
    Creating training samples
    100 XP
    Saving preprocessed datasets
    100 XP
    Fine-tuning with TorchTune
    50 XP
    Defining custom recipes
    100 XP
    Saving custom recipes
    100 XP
    Running custom fine-tuning
    50 XP
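
    As referenced above, TorchTune is driven from the command line; here is a minimal sketch of the workflow, wrapped in Python (the recipe and config names are examples from TorchTune's catalogue and may differ from those used in the course):

        import subprocess

        # List the fine-tuning recipes and configs that ship with TorchTune.
        subprocess.run(["tune", "ls"], check=True)

        # Copy a built-in config as the starting point for a custom recipe.
        subprocess.run(
            ["tune", "cp", "llama3/8B_lora_single_device", "custom_config.yaml"],
            check=True,
        )

        # Launch a single-device LoRA fine-tuning run with the custom config.
        subprocess.run(
            ["tune", "run", "lora_finetune_single_device",
             "--config", "custom_config.yaml"],
            check=True,
        )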
  2. Fine-tuning with SFTTrainer on Hugging Face

    Learn how fine-tuning can significantly improve the performance of smaller models on specific tasks. Start by fine-tuning smaller Llama models to enhance their task-specific capabilities. Next, discover parameter-efficient fine-tuning techniques such as LoRA (illustrated in the sketch below), and explore quantization to load and use even larger models.

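    To make "parameter-efficient" concrete: wrapping a model in LoRA adapters leaves only a small fraction of its weights trainable, which you can check with peft (the model ID and LoRA settings below are illustrative):

        from peft import LoraConfig, get_peft_model
        from transformers import AutoModelForCausalLM

        base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

        # Attach LoRA adapters; only the adapter weights require gradients.
        lora_model = get_peft_model(
            base, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
        )

        # Prints trainable vs. total parameters: typically well under 1% trainable.
        lora_model.print_trainable_parameters()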

Collaborators

James Chapman

Prerequisites

Working with Llama 3
Francesca Donadoni

AI Curriculum Manager at DataCamp

Francesca is an AI Curriculum Manager at DataCamp, where she works to create courses, content, and solutions for AI learning. She has a keen interest in inclusive and accessible AI technologies. Before joining DataCamp, she earned a PhD from University College London and held Data Scientist and Machine Learning Engineer roles across the healthcare sector and at multiple startups.

Join over 18 million learners and start Fine-Tuning with Llama 3 today!
