
Course Description
Fine-tuning the Llama model
This course provides a comprehensive guide to preparing and working with Llama models. Through hands-on examples and practical exercises, you'll learn how to configure various Llama fine-tuning tasks.

Prepare datasets for fine-tuning

Start by exploring dataset preparation techniques, including loading, splitting, and saving datasets with the Hugging Face Datasets library, ensuring high-quality data for your Llama projects.
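As a taste of that workflow, here is a minimal sketch using the Hugging Face Datasets library; the dataset ID and the "response" column name are placeholders, not the course's actual data.

```python
from datasets import load_dataset

# Load a dataset from the Hugging Face Hub; "username/customer-support-qa"
# is a placeholder ID, not a dataset used in the course.
dataset = load_dataset("username/customer-support-qa", split="train")

# Drop rows with an empty response; the "response" column is assumed.
dataset = dataset.filter(lambda row: len(row["response"]) > 0)

# Split into training and evaluation subsets.
splits = dataset.train_test_split(test_size=0.2, seed=42)

# Save the preprocessed splits to disk for reuse across fine-tuning runs.
splits["train"].save_to_disk("data/train")
splits["test"].save_to_disk("data/eval")
```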
Work with fine-tuning frameworks

Explore fine-tuning workflows using cutting-edge libraries such as TorchTune and Hugging Face's SFTTrainer. You'll learn how to configure fine-tuning recipes, set up training arguments, and use efficient techniques like LoRA (Low-Rank Adaptation) and quantization with BitsAndBytes to optimize resource usage. By combining techniques learned throughout the course, you'll be able to customize Llama models to suit your projects' needs efficiently.
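To make those efficiency techniques concrete, here is a minimal sketch that loads a Llama model in 8-bit with BitsAndBytes and attaches LoRA adapters with the peft library. The hyperparameters and target modules are illustrative choices, not the course's exact settings, and the Llama 3 checkpoint is gated on the Hub.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantization config: load the base model's weights in 8-bit to cut memory use.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# "meta-llama/Meta-Llama-3-8B" is the public Hub ID for Llama 3 8B (gated access).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA: train small low-rank adapter matrices instead of the full weights.
# These hyperparameters are illustrative, not the course's settings.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```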
1. Preparing for Llama fine-tuning (Free)
Explore options for fine-tuning Llama 3 models and dive into TorchTune, a library built to simplify fine-tuning. This chapter guides you through data preparation, TorchTune's recipe-based system, and efficient task configuration, providing the key steps to launch your first fine-tuning task. A sketch of the TorchTune command-line workflow follows the lesson list below.
The Llama fine-tuning libraries (50 xp)
Listing TorchTune recipes (50 xp)
Running a TorchTune task (50 xp)
Preprocessing data for fine-tuning (50 xp)
Filtering datasets for evaluation (100 xp)
Creating training samples (100 xp)
Saving preprocessed datasets (100 xp)
Fine-tuning with TorchTune (50 xp)
Defining custom recipes (100 xp)
Saving custom recipes (100 xp)
Running custom fine-tuning (50 xp)
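TorchTune is driven from the command line; the sketch below wraps its `tune` commands in Python subprocess calls. The config name is illustrative and should match one of the entries printed by `tune ls`.

```python
import subprocess

# List the fine-tuning recipes and configs that ship with TorchTune.
subprocess.run(["tune", "ls"], check=True)

# Copy a built-in config so it can be customized locally.
subprocess.run(
    ["tune", "cp", "llama3/8B_lora_single_device", "my_custom_config.yaml"],
    check=True,
)

# Launch a LoRA fine-tuning run with the customized config.
subprocess.run(
    ["tune", "run", "lora_finetune_single_device",
     "--config", "my_custom_config.yaml"],
    check=True,
)
```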
2. Fine-tuning with SFTTrainer on Hugging Face
Learn how fine-tuning can significantly improve the performance of smaller models for specific tasks. Start with fine-tuning smaller Llama models to enhance their task-specific capabilities. Next, discover parameter-efficient fine-tuning techniques such as LoRA, and explore quantization to load and use even larger models. A short training-and-evaluation sketch follows the lesson list below.
Model fine-tuning with Hugging Face (50 xp)
Setting up Llama training arguments (100 xp)
Fine-tuning Llama for customer service QA (100 xp)
Evaluate generated text using ROUGE (100 xp)
Efficient fine-tuning with LoRA (50 xp)
Using LoRA adapters (100 xp)
LoRA fine-tuning Llama for customer service (100 xp)
Making models smaller with quantization (50 xp)
Loading 8-bit models (100 xp)
Speeding up inference in quantized models (100 xp)
Congratulations! (50 xp)
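Here is a minimal sketch of that chapter's workflow with TRL's SFTTrainer and ROUGE scoring via the evaluate library. The training arguments are illustrative defaults, the dataset path follows the earlier dataset sketch and is assumed to contain a "text" column, and the prediction/reference strings are placeholders for real model outputs.

```python
from datasets import load_from_disk
from trl import SFTConfig, SFTTrainer
import evaluate

# Reload the split saved in the dataset sketch above; SFTTrainer expects
# a text column (named "text" by default), which is assumed here.
train_ds = load_from_disk("data/train")

# Illustrative training arguments, not the course's exact settings.
args = SFTConfig(
    output_dir="llama-customer-service",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    max_steps=100,
    learning_rate=2e-4,
)

# Passing a Hub ID lets SFTTrainer load the model itself; Llama 3
# checkpoints are gated, so any causal LM ID you can access works.
trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",
    train_dataset=train_ds,
    args=args,
)
trainer.train()

# Score generated answers against references with ROUGE; the two lists
# below are placeholders for real model outputs and gold responses.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the parcel ships within two days"],
    references=["your parcel will ship within two days"],
)
print(scores["rouge1"], scores["rougeL"])
```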

Prerequisites

Working with Llama 3

AI Curriculum Manager at DataCamp
Francesca is an AI Curriculum Manager at DataCamp, where she works to create courses, content, and solutions for AI learning. She has a keen interest in inclusive and accessible AI technologies. Before joining DataCamp, she earned a PhD from University College London and held Data Scientist and Machine Learning Engineer roles across the healthcare sector and at multiple startups.
Join over 18 million learners and start Fine-Tuning with Llama 3 today!