Fine-Tuning Your Own Llama 2 Model
In this session, we take a step-by-step approach to fine-tuning a Llama 2 model on a custom dataset.
November 17, 2023
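To give a rough idea of what the session covers, here is a minimal sketch of fine-tuning Llama 2 on a custom dataset, assuming a QLoRA-style recipe with Hugging Face transformers, peft, and trl (the SFTTrainer API roughly as of late 2023; newer trl releases have changed this interface). The model ID, example dataset, and hyperparameters are illustrative assumptions, not the session's exact configuration.

```python
# Sketch: QLoRA fine-tuning of Llama 2 on a small instruction dataset.
# Assumes access to the gated meta-llama/Llama-2-7b-hf weights and a single GPU.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"  # gated model; requires approved access
# Example dataset with a "text" column; swap in your own custom dataset
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Load the base model in 4-bit so it fits on a single consumer or Colab GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapters: train a small set of extra weights instead of the full model
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="llama2-finetuned",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
trainer.model.save_pretrained("llama2-finetuned")  # saves only the LoRA adapter weights
```

After training, the adapter can be merged back into the base model for inference; the session walks through these steps in detail.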
Related
tutorial
Fine-Tuning LLaMA 2: A Step-by-Step Guide to Customizing the Large Language Model
Learn how to fine-tune Llama-2 on Colab, using new techniques to overcome memory and compute limitations and make open-source large language models more accessible.
Abid Ali Awan
12 min
tutorial
Fine-Tuning Llama 3 and Using It Locally: A Step-by-Step Guide
We'll fine-tune Llama 3 on a dataset of patient-doctor conversations, creating a model tailored for medical dialogue. Once merged, converted, and quantized, the model will be ready for private local use via the Jan application.
Abid Ali Awan
19 min
tutorial
Fine-tuning Llama 3.2 and Using It Locally: A Step-by-Step Guide
Learn how to access the Llama 3.2 lightweight and vision models on Kaggle, fine-tune a model on a custom dataset using free GPUs, merge and export it to the Hugging Face Hub, and convert the fine-tuned model to GGUF format so it can be used locally with the Jan application.
Abid Ali Awan
14 min
tutorial
Fine-Tuning Llama 3.1 for Text Classification
Get started with the new Llama models and customize Llama-3.1-8B-It to predict various mental health disorders from text.
Abid Ali Awan
13 min
tutorial
LLaMA-Factory WebUI Beginner's Guide: Fine-Tuning LLMs
Learn how to fine-tune LLMs on custom datasets, evaluate performance, and seamlessly export and serve models using LLaMA-Factory's low/no-code framework.
Abid Ali Awan
12 min
code-along
Fine-Tuning Your Own Llama 3 Model
Maxime, one of the world's leading thinkers in generative AI research, shows you how to fine-tune the Llama 3 LLM using Python and the Hugging Face platform.
Maxime Labonne