What is RAFT? Combining RAG and Fine-Tuning To Adapt LLMs To Specialized Domains
RAFT combines Retrieval-Augmented Generation (RAG) and fine-tuning to boost large language models' performance in specialized domains
May 15, 2024 · 11 min read
Related
Blog
What is Retrieval Augmented Generation (RAG)?
Learn how Retrieval Augmented Generation (RAG) enhances large language models by integrating external data sources.
Natassha Selvaraj
6 min
Blog
Advanced RAG Techniques
Learn advanced RAG methods such as dense retrieval, reranking, and multi-step reasoning to tackle issues like hallucination and ambiguity.
Stanislav Karzhev
12 min
Tutorial
RAG vs Fine-Tuning: A Comprehensive Tutorial with Practical Examples
Learn the differences between RAG and Fine-Tuning techniques for customizing model performance and reducing hallucinations in LLMs.
Abid Ali Awan
13 min
Tutorial
Boost LLM Accuracy with Retrieval Augmented Generation (RAG) and Reranking
Discover how to combine the strengths of LLMs with effective information retrieval mechanisms. Implement a reranking approach and incorporate it into your own LLM pipeline.
Iván Palomares Carrascosa
11 min
Tutorial
Fine-Tuning LLMs: A Guide With Examples
Learn how fine-tuning large language models (LLMs) improves their performance in tasks like language translation, sentiment analysis, and text generation.
Josep Ferrer
11 min
Tutorial
Speculative RAG Implementation With Transformers
Learn Speculative RAG, a technique that improves RAG through a two-step drafting and verification process, and apply your skills with a hands-on implementation using Hugging Face Transformers.
Bhavishya Pandit
8 min