Large language models (LLMs) like GPT are the must-have technology of the year. Unfortunately, LLMs can't accurately answer questions about your business because they lack the necessary domain knowledge. The solution is to combine the LLM with a vector database like Milvus—a technique known as retrieval augmented generation (RAG).
In this session you'll learn how to get started with Milvus and perform Q&A on some documents using GPT and the RAG technique.
Key Takeaways:
- Learn how to store text in the Milvus vector database.
- Learn how to use retrieval augmented generation to combine Milvus and GPT (see the sketch after this list).
- Learn how to securely ask questions about business documents.
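To give a feel for what the session covers, here is a minimal sketch of the store-and-retrieve workflow using the `pymilvus` and `openai` Python packages. It assumes a local Milvus Lite database file, an OpenAI API key in the `OPENAI_API_KEY` environment variable, and illustrative choices for the collection name, models, and sample documents—the session's own examples may differ.

```python
# Minimal RAG sketch: store text in Milvus, then answer a question with GPT.
# Assumes: pymilvus (with Milvus Lite support), openai>=1.0, OPENAI_API_KEY set.
from openai import OpenAI
from pymilvus import MilvusClient

openai_client = OpenAI()
milvus_client = MilvusClient("milvus_demo.db")  # local Milvus Lite file; use a server URI in production

EMBED_MODEL = "text-embedding-ada-002"  # produces 1536-dimensional vectors
COLLECTION = "business_docs"            # illustrative collection name


def embed(text: str) -> list[float]:
    """Turn a piece of text into an embedding vector."""
    return openai_client.embeddings.create(model=EMBED_MODEL, input=text).data[0].embedding


# 1. Store text in Milvus: create a collection sized to the embedding
#    dimension and insert each document together with its vector.
milvus_client.create_collection(collection_name=COLLECTION, dimension=1536)
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm CET, Monday through Friday.",
]
milvus_client.insert(
    collection_name=COLLECTION,
    data=[{"id": i, "vector": embed(doc), "text": doc} for i, doc in enumerate(documents)],
)

# 2. Retrieval augmented generation: embed the question, retrieve the most
#    similar documents from Milvus, and pass them to GPT as context.
question = "How long do customers have to return a product?"
hits = milvus_client.search(
    collection_name=COLLECTION,
    data=[embed(question)],
    limit=2,
    output_fields=["text"],
)
context = "\n".join(hit["entity"]["text"] for hit in hits[0])

answer = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Because the question is answered only from the retrieved documents, the model is grounded in your own data rather than whatever it memorized during training.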
Notes:
- Please have Python version 3.9, 3.10, or 3.11 installed.
- Download and install Visual Studio Code on your local machine. Get started here.
- How to get LlamaIndex set up and running with Milvus (a minimal setup sketch follows these notes).
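As a preview of the LlamaIndex setup, the sketch below wires LlamaIndex's `MilvusVectorStore` into a standard indexing-and-query flow. It assumes the `llama-index-vector-stores-milvus` package, an `OPENAI_API_KEY` for the default embeddings and LLM, and a hypothetical `./docs` folder of files to index; exact import paths depend on your LlamaIndex version.

```python
# Minimal LlamaIndex + Milvus sketch.
# Assumes: llama-index, llama-index-vector-stores-milvus, OPENAI_API_KEY set.
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.milvus import MilvusVectorStore

# Point the vector store at a Milvus instance (here a local Milvus Lite file).
vector_store = MilvusVectorStore(uri="./milvus_demo.db", dim=1536, overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Load documents from disk, embed them, and persist the vectors in Milvus.
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Ask a question: LlamaIndex retrieves relevant chunks from Milvus and sends them to the LLM.
query_engine = index.as_query_engine()
print(query_engine.query("What does our refund policy say?"))
```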
Additional Resources:
[TUTORIAL] How to Build LLM Applications with LangChain
[TUTORIAL] Using ChatGPT to Moderate ChatGPT Responses
[CODE ALONG] An Introduction to GPT & Whisper with the OpenAI API in Python