At the OpenAI Dev Day, the company made several announcements, including a new Assistants API. See the articles on GPT-4 Turbo and GPTs and the GPT Store for details of the other announcements.
The Assistants API extends the existing OpenAI API to make it easier for software developers to build AI assistants, like chatbots.
Four new features were announced:
- "Threads" to help manage longer conversations
- "Retrieval" to help store text
- Built-in code interpretation
- Improvements to the function-calling feature
Assistants API Key Features
Here, we'll cover each of these features in more detail.
Easier conversation management with threads
While some tasks can be performed by sending a single prompt to the API and getting a single response back, chatbots require a longer conversation (or "thread"). Until now, the onus was on the developer to keep track of the previous conversation state and decide which of the previous messages to send to the API. As conversations grow in length, this requires sending more and more text in each API call, which slows down performance.
Additionally, once the conversation exceeds the "context window" (the amount of text that GPT can consider at once), decisions need to be made: do you discard older messages, try to summarize them, or store them and decide later which ones are relevant enough to include? It can quickly get fiddly and slow down application development.
The new threading tools turn the OpenAI API from a "stateless" service (no memory between calls) into a "stateful" one. Previous messages can be stored on OpenAI's side, so developers no longer have to handle these management issues themselves.
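To make this concrete, here is a minimal sketch of what thread-based conversation management looks like in the beta Assistants API, using the official `openai` Python package (v1+). The assistant name, instructions, and prompt are placeholder values; check OpenAI's documentation for the current interface.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an assistant once; its configuration is stored on OpenAI's side
assistant = client.beta.assistants.create(
    name="Product Helper",  # placeholder name
    instructions="You answer questions about our products.",
    model="gpt-4-1106-preview",
)

# A thread holds the conversation state, so you never resend the history
thread = client.beta.threads.create()

# Append a user message to the thread
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Which plan is best for a small team?",
)

# A run asks the assistant to process the thread and append its reply
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Runs are asynchronous, so poll until this one finishes
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message on the thread is now the assistant's answer
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

Because the thread lives server-side, a follow-up question only needs the thread ID rather than the full message history.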
Retrieval tools let you store additional text
The OpenAI presentation also mentioned retrieval tools for storing text. This was billed as a second feature, though since threading appears to build on it, it's unclear how separate the two really are.
Technical details were sparse during the presentation, though it is possible to take a guess at how this works. Currently, incorporating generative AI into applications requires two different technologies: a large language model (LLM) like GPT, and a vector database.
Vector databases store text (or images or other unstructured data types) as numeric vectors via a process known as embedding. You can later retrieve these pieces of text to include in prompts. A typical use case is keeping a store of facts related to the topic you want to discuss.
For example, if you are creating a chatbot to help answer questions about your company's products, you can store the product information in the vector database and use it to ensure that the LLM gives accurate factual responses.
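To illustrate the underlying idea, the sketch below embeds a handful of product facts with OpenAI's embeddings endpoint and picks the closest match to a user question using cosine similarity. This is a stand-in for what a vector database does at scale; the facts, question, and `embed` helper are purely illustrative.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# A toy "knowledge base" of product facts (illustrative only)
facts = [
    "The Pro plan includes 24/7 support.",
    "The Basic plan allows up to three users.",
    "All plans come with a 30-day free trial.",
]

def embed(texts):
    """Convert a list of strings into embedding vectors."""
    response = client.embeddings.create(
        model="text-embedding-ada-002", input=texts
    )
    return np.array([item.embedding for item in response.data])

fact_vectors = embed(facts)

# Embed the user's question and find the most similar stored fact
question = "How long is the free trial?"
q_vector = embed([question])[0]
scores = fact_vectors @ q_vector / (
    np.linalg.norm(fact_vectors, axis=1) * np.linalg.norm(q_vector)
)
best_fact = facts[int(np.argmax(scores))]

# The retrieved fact can then be injected into the LLM prompt
print(f"Answer using this context: {best_fact}")
```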
The retrieval features mentioned in the presentation hint that the Assistants API might give you API access to a vector database. This raises interesting possibilities beyond longer conversations, since it would mean you don't need a separate vector database such as Pinecone, Milvus, or Weaviate to store your text.
This is speculation at this point, and we shall have to wait for concrete details of the Assistants API to become available.
Code interpretation is built in
ChatGPT has an Advanced Data Analysis tool that allows GPT to generate and execute Python code based on natural language instructions. While details in the OpenAI presentation were somewhat vague, it was hinted that this feature will be built into the Assistants API, so you can give prompts that make GPT run Python code.
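Based on the launch documentation, enabling this appears to be a matter of switching on the `code_interpreter` tool when creating an assistant. A brief sketch, with placeholder names and instructions:

```python
from openai import OpenAI

client = OpenAI()

# Enabling the built-in code interpreter is a one-line tool declaration
assistant = client.beta.assistants.create(
    name="Data Analyst",  # placeholder name
    instructions="Write and run Python code to answer data questions.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Prompts on this assistant's threads can now trigger Python execution,
# e.g. "Load the attached CSV and plot monthly revenue."
```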
Improved function calling makes it easier to interact with other software
The function calling feature of the API allows you to write a natural language instruction and have GPT return a JSON string that represents a call to a function that you have defined. This is important for AI agents, which are designed to perform tasks based on a natural language input.
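As a quick illustration of the existing mechanism, here is a sketch using the `openai` Python package (v1+). The `get_weather` function and its schema are hypothetical, and your code, not OpenAI, is responsible for actually executing the call.

```python
from openai import OpenAI

client = OpenAI()

# Describe a function the model is allowed to "call" (hypothetical example)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# The model replies with a JSON description of the call, not by running it
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # "get_weather"
print(tool_call.function.arguments)  # '{"city": "Paris"}'
```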
The announcement describes two improvements to this feature. Firstly, there is a "JSON mode", where the response is guaranteed to be valid JSON and should adhere more closely to the specified function signature. Until now, there was a chance that the response would be invalid JSON, requiring thorough error-detection code in your software. Secondly, the model can now return several function calls in a single response rather than one at a time.
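For example, JSON mode is requested via the `response_format` parameter on the chat completions endpoint; note that the prompt itself must mention JSON, or the API rejects the request. A minimal sketch:

```python
from openai import OpenAI

client = OpenAI()

# JSON mode guarantees the reply parses as valid JSON
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "user",
            "content": "List three capital cities as JSON with keys "
                       "'country' and 'capital'.",
        }
    ],
)
print(response.choices[0].message.content)  # a valid JSON string
```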
The new functionality should simplify the creation of AI agents and make it easier to build natural language interfaces to software.
Summary
While the generative AI revolution in 2023 has been astounding, substantial software engineering skill has been needed to incorporate generative AI into other pieces of software. The Assistants API promises to reduce that barrier to entry somewhat, enabling more products to incorporate the technology faster.
Keep Learning
DataCamp has several courses to teach you how to use the OpenAI API. Start with Working with the OpenAI API and move on to Introduction to Embeddings with the OpenAI API.
You can also learn about the function calling API in the OpenAI Function Calling Tutorial.