Fine-Tuning Your Own Llama 3 Model
Maxime Labonne, a leading researcher in generative AI, shows you how to fine-tune the Llama 3 LLM using Python and the Hugging Face platform.
Aug 27, 2024
Related
tutorial
Fine-Tuning Llama 3 and Using It Locally: A Step-by-Step Guide
We'll fine-tune Llama 3 on a dataset of patient-doctor conversations, creating a model tailored for medical dialogue. After merging, converting, and quantizing the model, it will be ready for private local use via the Jan application.
Abid Ali Awan
19 min
tutorial
Fine-tuning Llama 3.2 and Using It Locally: A Step-by-Step Guide
Learn how to access Llama 3.2 lightweight and vision models on Kaggle, fine-tune the model on a custom dataset using free GPUs, merge and export the model to the Hugging Face Hub, and convert the fine-tuned model to GGUF format so it can be used locally with the Jan application.
Abid Ali Awan
14 min
tutorial
Fine-Tuning Llama 3.1 for Text Classification
Get started with the new Llama models and customize Llama-3.1-8B-It to predict various mental health disorders from text.
Abid Ali Awan
13 min
tutorial
Fine-Tuning LLaMA 2: A Step-by-Step Guide to Customizing the Large Language Model
Learn how to fine-tune Llama-2 on Colab using new techniques that overcome memory and compute limitations, making open-source large language models more accessible.
Abid Ali Awan
12 min
tutorial
Unsloth Guide: Optimize and Speed Up LLM Fine-Tuning
Fine-tune the Llama 3.1 model with Unsloth to solve specialized algebra problems with high accuracy and detailed results.
Abid Ali Awan
11 min
code-along
Fine-Tuning Your Own Llama 2 Model
In this session, we take a step-by-step approach to fine-tune a Llama 2 model on a custom dataset.
Maxime Labonne