
Retrieval Augmented Generation with LlamaIndex

In this session you'll learn how to get started with Chroma and perform Q&A on some documents using Llama 2, the RAG technique, and LlamaIndex.
Dec 2023

Large language models (LLMs) like Llama 2 are the must-have technology of the year. Unfortunately, LLMs can't accurately answer questions about your business because they lack enough domain knowledge. The solution is to combine the LLM with a vector database like Chroma—a technique known as retrieval augmented generation (RAG). Beyond this, incorporating AI into products is best done with an AI application framework, like LlamaIndex.

Key Takeaways:

  • Learn how to store text in the Chroma vector database.
  • Learn how to use retrieval augmented generation to combine Llama 2 and Chroma.
  • Learn how to develop AI applications using LlamaIndex.
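
The takeaways above all build on one core loop: store documents as vectors, retrieve the most similar ones for a question, and feed them to the LLM as context. The session's stack (Chroma, LlamaIndex, Llama 2) needs installed packages and model weights, so here is a dependency-free sketch of that retrieval step. `ToyVectorStore`, `embed`, and the sample documents are illustrative stand-ins, not the Chroma or LlamaIndex APIs: a real pipeline would use a learned embedding model instead of word counts.

```python
import math
from collections import Counter

def embed(text):
    """Crude stand-in for an embedding model: a word-count vector."""
    words = (w.strip(".,?!").lower() for w in text.split())
    return Counter(w for w in words if w)

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """In-memory store mimicking Chroma's add/query flow."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def query(self, question, top_k=1):
        q = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(d[0], q), reverse=True)
        return [text for _, text in ranked[:top_k]]

store = ToyVectorStore()
store.add("Our refund policy allows returns within 30 days of purchase.")
store.add("Support is available by email on weekdays from 9am to 5pm.")

question = "How many days do I have to return a purchase?"
context = store.query(question)[0]
# The retrieved context is prepended to the prompt sent to the LLM
# (Llama 2 in the session), grounding the answer in your own documents.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

In the session, LlamaIndex wraps these steps (chunking, embedding, storage in Chroma, and prompt assembly) behind a query-engine interface, so you do not write the retrieval loop by hand.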

Additional Resources

[COURSE] Dan's course: Introduction to Deep Learning in Python

[CODE-ALONG SERIES] Become a Generative AI developer

[SKILL TRACK] OpenAI Fundamentals

[SKILL TRACK] Deep Learning in Python

[BLOG] The Top 5 Vector Databases

[TUTORIAL] Mastering Vector Databases with Pinecone: A Comprehensive Guide

Related

blog

What is Retrieval Augmented Generation (RAG)?

Explore Retrieval Augmented Generation (RAG): integrating LLMs with data search for nuanced AI responses. Understand its applications and impact.

Natassha Selvaraj

8 min

tutorial

Boost LLM Accuracy with Retrieval Augmented Generation (RAG) and Reranking

Discover the strengths of LLMs with effective information retrieval mechanisms. Implement a reranking approach and incorporate it into your own LLM pipeline.

Iván Palomares Carrascosa

11 min

tutorial

How to Run Llama 3 Locally: A Complete Guide

Run Llama 3 locally with GPT4All and Ollama, and integrate it into VSCode. Then, build a Q&A retrieval system using LangChain, Chroma DB, and Ollama.

Abid Ali Awan

15 min

code-along

Retrieval Augmented Generation with GPT and Milvus

In this session you'll learn how to get started with Milvus and perform Q&A on some documents using GPT and the RAG technique.

Yujian Tang

code-along

Evaluating LLM Responses

In this session, we cover the different evaluations that are useful for reducing hallucination and improving retrieval quality of LLMs.

Josh Reini

code-along

Retrieval Augmented Generation with the OpenAI API & Pinecone

Build a movie recommender system using GPT and learn key techniques to minimize hallucinations and ensure factual answers.

Vincent Vankrunkelsven
