Course Description
Fine-tuning the Llama model
This course provides a comprehensive guide to preparing and working with Llama models. Through hands-on examples and practical exercises, you'll learn how to configure various Llama fine-tuning tasks.

Prepare datasets for fine-tuning
Start by exploring dataset preparation techniques, including loading, splitting, and saving datasets using the Hugging Face Datasets library, ensuring high-quality data for your Llama projects.
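As a rough illustration of that workflow, the sketch below loads a public instruction-style dataset, splits it, and saves the splits with the Hugging Face Datasets library; the dataset name and output paths are placeholders rather than the course's own materials.

```python
# Minimal sketch of the dataset-preparation workflow (dataset name and paths
# are illustrative placeholders; swap in your own data).
from datasets import load_dataset

# Load an instruction-style dataset from the Hugging Face Hub
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

# Split into training and evaluation subsets
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]

# Persist the prepared splits to disk for reuse in fine-tuning runs
train_ds.save_to_disk("data/train")
eval_ds.save_to_disk("data/eval")
```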
Work with fine-tuning frameworks

Explore fine-tuning workflows using cutting-edge libraries such as TorchTune and Hugging Face's SFTTrainer. You'll learn how to configure fine-tuning recipes, set up training arguments, and use efficient techniques like LoRA (Low-Rank Adaptation) and quantization with BitsAndBytes to optimize resource usage. By combining the techniques learned throughout the course, you'll be able to customize Llama models to suit your projects' needs efficiently.

Prerequisites
Working with Llama 3

1
Preparing for Llama fine-tuning
Explore options for fine-tuning Llama 3 models and dive into TorchTune, a library built to simplify fine-tuning. This chapter guides you through data preparation, TorchTune's recipe-based system, and efficient task configuration, providing the key steps to launch your first fine-tuning task.
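As a hedged sketch of what a TorchTune-based setup can look like (not the course's exact code): TorchTune provides model builders that attach LoRA adapters to Llama 3, while a fine-tuning run is normally launched from a recipe config via the tune CLI. The builder and config names below follow TorchTune's documentation, but check them against your installed version.

```python
# Rough sketch only: TorchTune's LoRA model builder for Llama 3 8B.
# Builder and config names follow TorchTune's docs; verify against your version.
from torchtune.models.llama3 import lora_llama3_8b

# Build Llama 3 8B with LoRA adapters on the attention projections
# (rank and alpha values here are illustrative).
model = lora_llama3_8b(
    lora_attn_modules=["q_proj", "v_proj"],
    lora_rank=8,
    lora_alpha=16,
)

# In practice, a fine-tuning task is launched from a recipe, e.g.:
#   tune run lora_finetune_single_device --config llama3/8B_lora_single_device
```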
2
Fine-tuning with SFTTrainer on Hugging Face
Learn how fine-tuning can significantly improve the performance of smaller models for specific tasks. Start with fine-tuning smaller Llama models to enhance their task-specific capabilities. Next, discover parameter-efficient fine-tuning techniques such as LoRA, and explore quantization to load and use even larger models.
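To make that concrete, here is a minimal, hedged sketch of the kind of setup the chapter describes, combining 4-bit quantization (BitsAndBytes) with LoRA under SFTTrainer. The model and dataset identifiers are placeholders, and exact argument names differ between TRL versions.

```python
# Minimal sketch: quantized LoRA fine-tuning with SFTTrainer.
# Model/dataset names are placeholders; argument names vary across TRL versions.
import torch
from datasets import load_from_disk
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

model_id = "meta-llama/Llama-3.2-1B"  # placeholder: any smaller Llama checkpoint

# Load the base model in 4-bit to cut memory usage (BitsAndBytes quantization)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# Parameter-efficient fine-tuning: train small LoRA adapters, not all weights
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

train_ds = load_from_disk("data/train")  # splits saved earlier with Datasets

trainer = SFTTrainer(
    model=model,
    train_dataset=train_ds,
    peft_config=lora_config,
    args=SFTConfig(output_dir="llama-sft", max_seq_length=512, num_train_epochs=1),
)
trainer.train()
```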