Working with Llama 3
Explore the latest techniques for running the Llama LLM locally, fine-tuning it, and integrating it within your stack.
Start the course for free · 4 hours · 14 videos · 43 exercises
Course Description
Learn to Use the Llama Large Language Model (LLM)
What is the Llama LLM, and how can you use it to enhance your projects? This course will teach you about the architecture of Llama and its applications. It will also provide you with the techniques required to fine-tune and deploy the model for specific tasks, and to optimize its performance.

Understand the Llama Model and Its Applications
You’ll start with an introduction to the foundational concepts of Llama, learning how to interact with Llama models and exploring their general use cases. You'll also gain hands-on experience setting up, running, and performing inference using the llama-cpp-python library.
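To give a sense of what that looks like in practice, here is a minimal sketch of loading a quantized Llama model and running inference with llama-cpp-python; the GGUF file name is an assumption, not a file provided with the course.

```python
# Minimal local inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is an assumption: any quantized Llama GGUF file works here.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=2048,     # context window in tokens
    verbose=False,
)

# Plain text completion: the model continues the prompt.
output = llm("Q: What is retrieval augmented generation? A:", max_tokens=64)
print(output["choices"][0]["text"])
```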
Learn to Fine-Tune and Deploy Llama Models

You'll explore dataset preprocessing, model fine-tuning with Hugging Face, and advanced optimization techniques for efficient performance. To wrap up the course, you'll implement a RAG system using Llama and LangChain. Throughout the course, you'll work through practical examples, including building a customer service bot, to reinforce your understanding of these concepts.
This is an ideal introduction to Llama for developers and AI practitioners. It explores the foundations of this powerful open-source LLM and how to apply it in real-world scenarios.
In the following tracks: Associate AI Engineer for Data Scientists
1. Understanding LLMs and Llama
Free chapter. The field of large language models has exploded, and Llama is a standout. With Llama 3, possibilities have soared. Explore how it was built, learn to use it with llama-cpp-python, and understand how to craft precise prompts to control the model's behavior.
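As a sketch of the kind of prompt control covered here, a system message in llama-cpp-python's chat completion API steers the model's behavior; the model path is again an assumption.

```python
# Steering behavior with a system prompt via the OpenAI-style chat API
# exposed by llama-cpp-python; the model path is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048, verbose=False)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
        {"role": "user", "content": "What makes Llama 3 different from earlier Llama models?"},
    ],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```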
2. Using Llama Locally
Language models are often useful as agents, and in this chapter you'll explore how to leverage llama-cpp-python's capabilities for local text generation and for creating agents with personalities. You'll also learn how decoding parameters affect output quality. Finally, you'll build specialized inference classes for diverse text generation tasks.
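For illustration, here is a small sketch of how decoding parameters change the character of the output; the parameter values and model path are assumptions rather than course-prescribed settings.

```python
# Comparing conservative and creative decoding settings in llama-cpp-python.
# Model path and parameter values are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048, verbose=False)
prompt = "Write a one-line slogan for a second-hand bookshop:"

# Low temperature keeps sampling close to the most likely tokens (safe, repeatable).
safe = llm(prompt, max_tokens=32, temperature=0.1, top_p=0.9)

# Higher temperature and a larger top_k widen the sampling pool (more creative, less predictable).
creative = llm(prompt, max_tokens=32, temperature=1.2, top_k=100)

print(safe["choices"][0]["text"])
print(creative["choices"][0]["text"])
```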
- Performing inference with Llama (50 xp)
- Creating a JSON inventory list (100 xp)
- Generating answers with a JSON schema (100 xp)
- Tuning inference parameters (50 xp)
- Making safe responses (100 xp)
- Making a creative chatbot (100 xp)
- Creating an LLM inference class (50 xp)
- Personal shopping agent (100 xp)
- Multi-agent conversations (100 xp)
- Improving the Agent class (100 xp)
3. Fine-Tuning Llama for Customer Service Using Hugging Face and the Bitext Dataset
Language models are powerful, and you can unlock their full potential with the right techniques. Learn how fine-tuning smaller Llama models can significantly improve their performance on specific tasks. Then discover parameter-efficient fine-tuning techniques such as LoRA, and explore quantization to load and use even larger models.
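As a rough sketch of the LoRA approach discussed here (using Hugging Face transformers and peft; the checkpoint name and hyperparameters are assumptions):

```python
# Parameter-efficient fine-tuning sketch: wrap a Llama checkpoint with LoRA
# adapters so only a tiny fraction of weights is trained.
# Model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint; requires Hub access approval

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Training then proceeds with the usual Hugging Face Trainer, but gradients flow only through the adapter weights.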
- Preprocessing data for fine-tuning (50 xp)
- Filtering datasets for evaluation (100 xp)
- Creating training samples (100 xp)
- Model fine-tuning with Hugging Face (50 xp)
- Setting up Llama training arguments (100 xp)
- Fine-tuning Llama for customer service QA (100 xp)
- Evaluate generated text using ROUGE (100 xp)
- Efficient fine-tuning with LoRA (50 xp)
- Using LoRA adapters (100 xp)
- LoRA fine-tuning Llama for customer service (100 xp)
4. Creating a Customer Service Chatbot with Llama and LangChain
LLMs work best when they solve a real-world problem, such as creating a customer service chatbot using Llama and LangChain. Explore how to customize LangChain, integrate fine-tuned models, and craft prompt templates, using RAG to enhance your chatbot's intelligence and accuracy. This chapter equips you with the technical skills to develop responsive and specialized chatbots.
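Here is a rough sketch of the RAG pattern this chapter builds toward, assuming langchain, langchain-community, faiss-cpu, and sentence-transformers are installed; the document texts, model path, and embedding model are all illustrative.

```python
# RAG sketch: retrieve relevant documents from a vector store, then ground the
# Llama model's answer in them. Paths, texts, and model names are assumptions.
from langchain_community.llms import LlamaCpp
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Index a few support documents in a FAISS vector store.
docs = [
    "Refunds are issued within 5 business days of receiving the returned item.",
    "Orders can be cancelled free of charge any time before they ship.",
]
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
retriever = FAISS.from_texts(docs, embeddings).as_retriever()

# 2. Load the local Llama model through LangChain's llama.cpp wrapper.
llm = LlamaCpp(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# 3. Compose prompt, model, and output parsing into a chain.
prompt = PromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
chain = prompt | llm | StrOutputParser()

# 4. Retrieve context for a question and generate a grounded answer.
question = "How long do refunds take?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
print(chain.invoke({"context": context, "question": question}))
```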
- Getting started with LangChain (50 xp)
- Creating a LangChain pipeline (100 xp)
- Adding custom template to LangChain pipeline (100 xp)
- Customizing LangChain for specific use-cases (50 xp)
- Using a customized Hugging Face model in LangChain (100 xp)
- Closed question-answering with LangChain (100 xp)
- Document retrieval with Llama (50 xp)
- Preparing documents for retrieval (100 xp)
- Creating retrieval function (100 xp)
- Building a Retrieval Augmented Generation system (50 xp)
- Creating a RAG pipeline (100 xp)
- Extract retrieved documents from a RAG chain (100 xp)
- Extract LLM response from a RAG chain (100 xp)
- Recap: Working with Llama 3 (50 xp)
Collaborators

Imtihan Ahmed, Machine Learning Engineer

Prerequisites

Introduction to LLMs in Python
Join over 14 million learners and start Working with Llama 3 today!