
GPTCache Tutorial: Enhancing Efficiency in LLM Applications

Learn how GPTCache retrieves cached results instead of generating new responses from scratch.
Updated Mar 2024  · 8 min read

GPTCache is an open-source framework for large language model (LLM) applications like ChatGPT. It stores previously generated LLM responses so that, when a similar query comes in, the application can check the cache for a relevant answer instead of calling the LLM again, saving you time and money.

This guide explores how GPTCache works and how you can use it effectively in your projects.

What is GPTCache?

GPTCache is a caching system designed to improve the performance and efficiency of large language models (LLMs) like GPT-3. It stores previously generated responses, along with the queries that produced them, to save time and effort.

When a similar query comes up again, the application can pull the cached response instead of having the LLM generate a new one from scratch.

Unlike conventional caches, GPTCache performs semantic caching: it captures the intent of a query rather than its literal wording. As a result, semantically similar queries can be served from previously stored results, which reduces the server's workload and improves cache hit rates.

Benefits of Using GPTCache

The main idea behind GPTCache is to store and reuse the responses an LLM has already generated. Doing so has several benefits:

Cost savings on LLM API calls

Most LLM providers charge a fee per request based on the number of tokens processed. This is where GPTCache comes in handy: it minimizes the number of LLM API calls by serving previously generated responses for similar queries, which directly reduces your API spend.

Improved response time and efficiency

Retrieving a response from the cache is substantially faster than generating it from scratch by querying the LLM. This boosts speed and improves response times, while also reducing the load on the LLM service and freeing capacity for other tasks.

Enhanced user experience through faster application performance

Suppose you're using an LLM to research questions for your content, and every answer takes ages to arrive. Why? Because most LLM services enforce request limits within set periods. Exceeding these limits blocks further requests until the limit resets, which causes service interruptions.

ChatGPT can reach its response-generating limit

To avoid these issues, GPTCache caches previous answers to similar questions. When you ask something, it quickly checks its store and delivers the cached information in a flash. As a result, you get your response in less time than usual.

Simply put, by leveraging cached responses, GPTCache ensures LLM-based applications become responsive and efficient—just like you'd expect from any modern tool.

Setting Up GPTCache

Here’s how you can install GPTCache directly:

Installation and configuration

Install the GPTCache package using this code.

pip install -q gptcache

Next, import GPTCache into your application.

from gptcache import cache

cache.init()  # keep the default configuration (exact-match cache)

That’s it, and you’re done!

Integration with LLMs

You can integrate GPTCache with LLMs through its LLM Adapter. At the time of writing, it is compatible with two large language model adapters:

  • OpenAI
  • LangChain

Here’s how you can integrate it with both adapters:

GPTCache with OpenAI ChatGPT API

To integrate GPTCache with OpenAI, initialize the cache and import openai from gptcache.adapter.

from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()

Before you run the example code, make sure the OPENAI_API_KEY environment variable is set. You can check it by executing echo $OPENAI_API_KEY.

If it is not already set, you can set it using export OPENAI_API_KEY=YOUR_API_KEY on Unix/Linux/macOS systems or set OPENAI_API_KEY=YOUR_API_KEY on Windows systems.
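If you prefer to configure the key from Python instead of the shell, here is a minimal sketch that sets the environment variable before initializing the cache. It assumes cache.set_openai_key() reads the key from the OPENAI_API_KEY environment variable, which is how I understand the current GPTCache behavior; replace the placeholder with your real key.

import os

from gptcache import cache
from gptcache.adapter import openai

# Assumption: set_openai_key() picks up OPENAI_API_KEY from the environment.
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

cache.init()
cache.set_openai_key()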

Then, if you ask ChatGPT the same question twice, it will retrieve the answer to the second one from the cache instead of asking ChatGPT again.

Here's example code that asks the same question twice and serves the second answer from the cache:

import time


def response_text(openai_resp):
    return openai_resp['choices'][0]['message']['content']

print("Cache loading.....")

# To use GPTCache, that's all you need
# -------------------------------------------------
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
# -------------------------------------------------

question = "what's github"
for _ in range(2):
    start_time = time.time()
    response = openai.ChatCompletion.create(
      model='gpt-3.5-turbo',
      messages=[
        {
            'role': 'user',
            'content': question
        }
      ],
    )
    print(f'Question: {question}')
    print("Time consuming: {:.2f}s".format(time.time() - start_time))
    print(f'Answer: {response_text(response)}\n')

Here’s what you will see in the output:

The second time, the answer comes from the cache, and the call takes nearly 0 seconds.

GPTCache with LangChain

If you want to use a different LLM, try the LangChain adapter. Here's how you can integrate GPTCache with LangChain:

from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", n=2, best_of=2)
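The snippet above only imports the cache setter and configures the LLM; to actually route LangChain calls through GPTCache, you pass a GPTCache wrapper and an initialization function to set_llm_cache. The sketch below follows that pattern. The langchain_community.cache import path and the map-style data manager are assumptions based on recent LangChain and GPTCache releases, so verify them against your installed versions.

import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache


def init_gptcache(cache_obj: Cache, llm: str):
    # Keep a separate on-disk cache per model name.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )


set_llm_cache(GPTCache(init_gptcache))

# Repeated llm.invoke(...) calls with the same prompt are now served from GPTCache.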

Learn how to build LLM applications with LangChain.

Using GPTCache in Your Projects

Let's look at how GPTCache can support your projects.

Basic operations

Traditional caches can be ineffective for LLM applications because of the inherent complexity and variability of LLM queries, which results in a low cache hit rate.

To overcome this limitation, GPTCache adopts semantic caching strategies. Semantic caching matches new queries against similar or related stored ones, increasing the probability of cache hits and enhancing overall caching efficiency.

GPTCache leverages embedding algorithms to convert queries into numerical representations called embeddings. These embeddings are stored in a vector store, enabling efficient similarity searches. This process allows GPTCache to identify and retrieve similar or related queries from the cache storage.

With its modular design, you can customize semantic cache implementations according to your requirements.
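As a sketch of that modularity, the example below wires together an embedding model, a scalar store plus vector store, and a distance-based similarity evaluation. The specific components (Onnx embeddings, SQLite, FAISS, SearchDistanceEvaluation) are choices documented in GPTCache, but treat the exact module paths and parameters as assumptions to check against your installed version.

from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Embedding model that converts queries into vectors.
onnx = Onnx()

# Scalar storage (SQLite) for prompts/responses, vector storage (FAISS) for embeddings.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
)

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()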

However, false cache hits and cache misses can sometimes occur in a semantic cache. To monitor this, GPTCache provides three performance metrics:

  • Hit ratio measures a cache's success rate in fulfilling requests. Higher values indicate better performance.
  • Latency indicates the time taken to retrieve data from the cache, where lower is better.
  • Recall shows the proportion of correctly served cache queries. Higher percentages reflect better accuracy.

Advanced features

All basic data elements like the initial queries, prompts, responses, and access timestamps are stored in a 'data manager.' GPTCache currently supports the following cache storage options:

  • SQLite
  • MySQL
  • PostgreSQL

It doesn't support NoSQL databases yet, but support is planned.
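The scalar store is selected when you build the data manager. As a rough sketch, the snippet below chooses SQLite and MySQL backends; the sql_url connection-string parameter and the example dimension value are assumptions, so check the storage documentation for your GPTCache version.

from gptcache.manager import CacheBase, VectorBase, get_data_manager

# Default: a local SQLite file for scalar data.
sqlite_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=128),  # dimension must match your embedding model
)

# Assumption: MySQL (or PostgreSQL) is selected the same way, via a connection URL.
mysql_manager = get_data_manager(
    CacheBase("mysql", sql_url="mysql+pymysql://user:password@localhost:3306/gptcache"),
    VectorBase("faiss", dimension=128),
)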

Using the eviction policies

GPTCache can also remove data from the cache storage once it reaches a specified limit or count. To manage the cache size, you can implement either a Least Recently Used (LRU) eviction policy or a First In, First Out (FIFO) approach (see the sketch after this list).

  • LRU eviction policy evicts the least recently accessed items.
  • Meanwhile, the FIFO eviction policy discards the cached items that have been present for the longest duration.
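As I understand it, the eviction behavior is configured on the data manager. The sketch below assumes get_data_manager accepts max_size and eviction parameters (with "LRU" and "FIFO" as valid policies); verify these names against your GPTCache version.

from gptcache import cache
from gptcache.manager import CacheBase, VectorBase, get_data_manager

data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=128),
    max_size=1000,    # start evicting once the cache holds 1,000 entries
    eviction="LRU",   # or "FIFO" to drop the oldest entries first
)

cache.init(data_manager=data_manager)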

Evaluating response performance

GPTCache uses an ‘evaluation’ function to assess whether a cached response addresses a user query. To do so, it takes three inputs:

  • user's request for data
  • cached data being evaluated
  • user-defined parameters (if any)

You can also use two other functions (see the sketch after this list):

  • ‘log_time_func’ lets you record and report the duration of intensive tasks like generating embeddings or performing cache searches, which helps you monitor performance characteristics.
  • With ‘similarity_threshold’, you can define how close two embedding vectors (high-dimensional representations of text data) must be to count as a match.
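Both knobs are typically passed through GPTCache's Config object when initializing the cache. The sketch below reflects my understanding of the Config API (a log_time_func callback receiving a step name and duration, and similarity_threshold as a float), so treat the parameter names as assumptions to verify.

from gptcache import cache
from gptcache.config import Config


def log_time(func_name, delta_time):
    # Called by GPTCache with the name of the timed step and how long it took.
    print(f"{func_name} took {delta_time:.4f}s")


cache.init(
    config=Config(
        log_time_func=log_time,
        similarity_threshold=0.8,  # how close two embeddings must be to count as a hit
    )
)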

GPTCache Best Practices and Troubleshooting

Now that you know how GPTCache functions, here are some best practices and tips to ensure you reap its benefits.

Optimizing GPTCache performance

There are several steps you can take to optimize the performance of GPTCache, as outlined below.

1. Clarify your prompts

How you prompt your LLM impacts how well GPTCache works. So, keep your phrasing consistent to increase your chances of hitting the cache.

For example, use consistent phrasing like "I can't log in to my account." This way, GPTCache recognizes similar issues, such as "Forgot my password" or "Account login problems," more efficiently.

2. Use the built-in tracking metrics

Monitor built-in metrics like hit ratio, recall, and latency to analyze your cache’s performance. A higher hit ratio indicates that the cache more effectively serves requested content from stored data, helping you understand its effectiveness.
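GPTCache's timing hooks (such as the log_time_func shown earlier) cover per-step latency, but you can also track hit ratio and latency at the application level. The helper below is a hypothetical illustration rather than a GPTCache API: it logs each call's latency and flags near-instant responses as likely cache hits, with the threshold being an arbitrary assumption.

import time

from gptcache.adapter import openai

# Hypothetical application-level tracker (not a GPTCache API).
stats = {"requests": 0, "likely_hits": 0, "latencies": []}


def tracked_chat(question, hit_threshold_s=0.5):
    start = time.time()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    elapsed = time.time() - start
    stats["requests"] += 1
    stats["latencies"].append(elapsed)
    if elapsed < hit_threshold_s:  # heuristic: cached answers return almost instantly
        stats["likely_hits"] += 1
    return response

# Hit ratio: stats["likely_hits"] / stats["requests"]
# Average latency: sum(stats["latencies"]) / len(stats["latencies"])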

3. Scaling GPTCache for LLM applications with large user bases

To scale GPTCache for larger LLM applications, implement a shared cache approach that utilizes the same cache for user groups with similar profiles. Create user profiles and classify them to identify similar user groups.

Leveraging a shared cache for users of the same profile group yields good returns regarding cache efficiency and scalability.

This is because users within the same profile group tend to have related queries that can benefit from cached responses. However, you must employ the right user profiling and classification techniques to group users accurately and maximize the benefits of shared caching.
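One way to sketch this with GPTCache is to keep a separate Cache instance per profile group and pass it to the adapter on each call. The cache_obj keyword and the example group names are assumptions about the adapter interface, so confirm them against your GPTCache version.

from gptcache import Cache
from gptcache.adapter import openai

# Hypothetical profile groups; in practice these come from your user classification step.
group_caches = {}
for group in ("billing", "technical_support"):
    group_cache = Cache()
    group_cache.init()  # default (exact-match) setup for brevity
    group_caches[group] = group_cache


def ask(question, group):
    # Route the request through the shared cache for this user's profile group.
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        cache_obj=group_caches[group],  # assumed adapter keyword for a non-global cache
    )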

Troubleshooting common GPTCache issues

If you’re struggling with GPTCache, there are several steps you can take to troubleshoot the issues.

1. Cache invalidation

GPTCache relies on up-to-date cache responses. If the underlying LLM's responses or the user's intent changes over time, the cached responses may become inaccurate or irrelevant.

To avoid this, set expiration times for cached entries based on the expected update frequency of the LLM and regularly refresh the cache.

2. Over-reliance on cached responses

While GPTCache can improve efficiency, over-reliance on cached responses can lead to inaccurate information if the cache is not invalidated properly.

To mitigate this, make sure your application occasionally retrieves fresh responses from the LLM, even for similar queries. This maintains the accuracy and quality of the responses when dealing with critical or time-sensitive information.
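As I understand the gptcache OpenAI adapter, it accepts a cache_skip flag that bypasses the cache for a single call, which gives you a simple way to refresh answers periodically. The refresh rate below is an arbitrary assumption for illustration.

import random

from gptcache.adapter import openai


def ask_with_refresh(question, refresh_rate=0.1):
    # Bypass the cache for roughly 10% of requests so stale answers get refreshed.
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        cache_skip=random.random() < refresh_rate,  # assumed adapter flag
    )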

3. Ignoring cache quality

The quality and relevance of the cached response impact the user experience. So, you should use evaluation metrics to assess the quality of cached responses before serving them to users.

By understanding these potential pitfalls and their solutions, you can ensure that GPTCache effectively improves the performance and cost-efficiency of your LLM-powered applications—without compromising accuracy or user experience.

Wrap-up

GPTCache is a powerful tool for optimizing the performance and cost-efficiency of LLM applications. Proper configuration, monitoring, and cache evaluation strategies are required to ensure you get accurate and relevant responses.


FAQs

How do you initialize the cache to run GPTCache and import the OpenAI API?

To initialize the cache and import the OpenAI API, import openai from gptcache.adapter and call cache.init(), as shown in the setup section above. By default, the data manager is set up for exact-match caching. Here's the import:

from gptcache.adapter import openai

What happens if you ask ChatGPT the same question twice?

GPTCache stores the previous responses in the cache and retrieves the answer from the cache instead of making a new request to the API. So, the answer to the second question will be obtained from the cache without requesting ChatGPT again.


Author
Laiba Siddiqui

I'm a content strategist who loves simplifying complex topics. I’ve helped companies like Splunk, Hackernoon, and Tiiny Host create engaging and informative content for their audiences.
