Project: Building RAG Chatbots for Technical Documentation
    You're working for a well-known car manufacturer that is looking at implementing LLMs into vehicles to provide guidance to drivers. You've been asked to experiment with integrating car manuals with an LLM to create a context-aware chatbot. They hope that this context-aware LLM can be hooked up to text-to-speech software to read the model's response aloud.

    As a proof of concept, you'll integrate several pages from a car manual containing warning messages, their meanings, and recommended actions. This particular manual, stored as the HTML file mg-zs-warning-messages.html, is from an MG ZS, a compact SUV. Armed with your newfound knowledge of LLMs and LangChain, you'll implement Retrieval Augmented Generation (RAG) to create the context-aware chatbot.

    Before you start

    In order to complete the project you will need to create a developer account with OpenAI and store your API key as a secure environment variable. Instructions for these steps are outlined below.

    Create a developer account with OpenAI

    1. Go to the API signup page.

    2. Create your account (you'll need to provide your email address and your phone number).

    3. Go to the API keys page.

    4. Create a new secret key.

    5. Take a copy of it. (If you lose it, delete the key and create a new one.)

    Add a payment method

    OpenAI sometimes provides free credits for the API, but this can vary depending on geography. You may need to add debit/credit card details.

    This project should cost less than 1 US cent with GPT-3.5-Turbo (but if you rerun tasks, you will be charged every time).

    1. Go to the Payment Methods page.

    2. Click Add payment method.

    3. Fill in your card details.

    Add an environmental variable with your OpenAI key

    1. In the workbook, click on "Environment" in the top toolbar and select "Environment variables".

    2. Click "Add" to add environment variables.

    3. In the "Name" field, type "OPENAI_API_KEY". In the "Value" field, paste in your secret key.

    4. Click "Create". In the pop-up window that appears, click "Connect", then wait 5-10 seconds for the kernel to restart, or restart it manually from the Run menu.
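Once the variable is set, your code can read it with `os.environ`. As a sketch, a small helper (the `get_openai_key` function is illustrative, not part of the project template) can fail fast with a clear message if the key is missing:

```python
import os

def get_openai_key() -> str:
    """Read the OpenAI API key, raising a helpful error if it isn't set."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; add it under Environment > "
            "Environment variables and restart the kernel."
        )
    return key
```

This avoids the less informative `KeyError` you'd get from indexing `os.environ` directly when the variable is absent.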

    Update to Python 3.10

    Due to how frequently the libraries required for this project are updated, you'll need to update your environment to Python 3.10:

    1. In the workbook, click on "Environment" in the top toolbar and select "Session details".

    2. In the workbook language dropdown, select "Python 3.10".

    3. Click "Confirm" and hit "Done" once the session is ready.
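After the restart, you can confirm from a code cell that the session really is on the new interpreter:

```python
import sys

# Should show (3, 10) or higher once the session has switched
print(sys.version_info[:2])
```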

    # Set your API key to a variable
    import os
    openai_api_key = os.environ["OPENAI_API_KEY"]
    
    # Import the required packages
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.document_loaders import UnstructuredHTMLLoader
    from langchain_community.vectorstores import Chroma
    from langchain_core.prompts import PromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    
    # Load the HTML as a LangChain document loader
    loader = UnstructuredHTMLLoader(file_path="data/mg-zs-warning-messages.html")
    car_docs = loader.load()
    
    # Initialize RecursiveCharacterTextSplitter to make chunks of HTML text
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    
    # Split the car manual HTML into chunks
    splits = text_splitter.split_documents(car_docs)
    
    # Initialize Chroma vectorstore with documents as splits and using OpenAIEmbeddings
    vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings(openai_api_key=openai_api_key))
    
    # Setup vectorstore as retriever
    retriever = vectorstore.as_retriever()
    
    # Define RAG prompt
    prompt = PromptTemplate(input_variables=['question', 'context'], template="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: {question} \nContext: {context} \nAnswer:")
    
    # Initialize chat-based LLM with 0 temperature and using GPT-3.5 Turbo
    model = ChatOpenAI(openai_api_key=openai_api_key, model_name="gpt-3.5-turbo", temperature=0)
    
    # Set up the RAG chain with LCEL: the retriever fills the {context} slot
    # (retrieved documents joined into a single string) while the raw query
    # passes through unchanged as {question}
    def format_docs(docs):
        return "\n\n".join(doc.page_content for doc in docs)
    
    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | model
    )
    
    # Initialize query
    query = "The Gasoline Particulate Filter Full warning has appeared. What does this mean and what should I do about it?"
    
    # Invoke the query
    answer = rag_chain.invoke(query).content
    print(answer)
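The chunk_size=1000 and chunk_overlap=200 settings above control how the splitter windows the text. As a rough illustration of those semantics (a naive character window, not LangChain's actual recursive algorithm, which prefers splitting on separators like paragraphs and sentences first):

```python
def chunk_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Naive sliding-window chunker illustrating size/overlap semantics."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 2500, chunk_size=1000, chunk_overlap=200)
print(len(chunks), [len(c) for c in chunks])  # → 4 [1000, 1000, 900, 100]
```

The overlap means the last 200 characters of each chunk reappear at the start of the next, so a warning message that straddles a chunk boundary is still likely to be retrieved intact.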