
Everything We Know About GPT-5

Predicting what the next evolution in OpenAI's AI technology might look like and what advancements the GPT-5 model might have.
Feb 2024  · 10 min read

It’s already been more than a year since ChatGPT was first launched and opened to the public. It initially astounded us all with its ability to understand and generate natural language.

However, the steady march of AI innovation means that OpenAI cannot keep all the limelight to itself. From the launch of Google’s Bard to the announcement of its cutting-edge new model Gemini, the entrance of new competitors such as Anthropic, and the strong open-source movement boosted by Meta’s LLaMA, OpenAI will have to move quickly if it wants to keep its lead in the AI field.

Today, as we stand on the cusp of another technological milestone, expectations surrounding GPT-5 continue to grow, fueled mainly by our imagination and the speculation circulating within the tech community.

This article tries to shed some light on what we might expect from GPT-5, drawing ideas from its predecessors like GPT-4 and the trajectory of the main advancements in the AI field.

It is important to consider that much of what is discussed herein is based on predictions, painting a picture of a future that is both exciting and, as of yet, extremely uncertain.

So, let’s try to uncover some truth about what is yet to come with GPT-5.

What is GPT-5?

Generative Pre-trained Transformer, or GPT, is a series of large language models (LLMs) developed by OpenAI that has significantly influenced both the ML and AI fields.

GPT, at its core, is designed to understand and generate human-like text based on the input it receives. These models are trained on vast datasets of text. The GPT family of models has been instrumental in popularizing LLM-based applications, setting new benchmarks for what is possible in natural language processing, generation, and beyond.

GPT-5 represents the next iteration in the GPT series. Some of you might be wondering what the next iteration means. Let's look at the history of GPT models so far: 


In 2018, OpenAI introduced the concept of generative pre-training with GPT-1, using a transformer architecture to enhance natural language understanding. This model, detailed in their paper "Improving Language Understanding by Generative Pre-Training," served as a proof-of-concept and was not publicly released.


A year later, OpenAI released GPT-2, showcasing significant improvements in text generation. GPT-2 was capable of generating short passages of text, marking a notable advancement from its predecessor. It was publicly available, allowing for broader experimentation in the machine learning community.


With the release of GPT-3 in 2020, OpenAI scaled up its model significantly, boasting over 100 times more parameters than GPT-2. This expansion enabled GPT-3 to produce much longer and more coherent text, performing impressively across various tasks. The introduction of ChatGPT, a conversation-focused iteration within the GPT-3.5 series, demonstrated the model's remarkable ability to generate human-like text, achieving rapid adoption and reaching 100 million users in just two months.


GPT-4, the latest iteration in the series, further refines the capabilities introduced by its predecessors. With an even larger dataset and more parameters, GPT-4 improves upon the natural language understanding and generation capabilities of GPT-3. It exhibits enhanced performance in generating coherent, contextually relevant text over extended passages and shows better understanding in complex conversation scenarios.

GPT-4's advancements include a more nuanced understanding of context, improved factuality, and a reduction in generating biased or harmful content. Its adoption spans various applications, from advanced conversational agents to sophisticated content creation tools, highlighting its versatility and the ongoing evolution of AI-driven natural language processing technologies. 

In November 2023, OpenAI unveiled GPT-4 Turbo with Vision, which added a larger 128k-token context window, an updated knowledge cutoff, and lower pricing. You can learn more about the evolution of the GPT family in our previous article regarding GPT-4.


So, GPT-5 likely represents the next version of the Generative Pre-trained Transformer.

Although information about the potential next iteration is scarce, we know that GPT-4 presented significant improvements over its predecessors, particularly in its capacity for logical reasoning. Even though it remains unaware of events beyond April 2023, GPT-4 still boasts a more extensive general knowledge base and a deeper understanding of our world. So, everything so far indicates that GPT-5 will follow the same trend and improve the current GPT-4 model.

An image created with DALLE-3 in GPT-4 with the prompt ‘the evolution of the GPT models’


When Will GPT-5 Be Released?

In a January 2024 conversation with Bill Gates, Sam Altman confirmed that work on GPT-5 had begun, without giving any clue as to when it might be released.

We can consider what happened with GPT-4 to try to predict what might happen with GPT-5’s launch. Despite OpenAI releasing GPT-4 only a few months after ChatGPT, we know that the development cycle of GPT-4, including training, development, and testing, took over two years.

Therefore, if GPT-5 follows a similar schedule, its launch could potentially extend to the end of 2025. Even though this new launch seems far away, this does not necessarily mean that OpenAI won’t continue to improve GPT-4.

OpenAI is most likely to keep improving GPT-4, and we might see the introduction of an intermediary update, GPT-4.5, as we already saw with GPT-3.5.

What Features Can We Expect From GPT-5?

With GPT-5's release possibly a year or two in the future, most predictions about its advancements are based on current trends shaped by Google and open-source AI initiatives. These developments give us valuable insights into the future direction of the industry.

However, there are some first clues coming directly from the OpenAI core team. During the interview with Gates, Altman highlighted that OpenAI's efforts would concentrate on enhancing reasoning abilities and incorporating video processing capabilities.

So, let’s try to make a little sense of it all and discuss some key enhancements expected from GPT-5.

Parameter size

While the exact parameter size of GPT-4 remains under wraps, there’s an ongoing trend toward more complex and capable models. Most sources indicate the number might be around 1.5 trillion parameters.

Image by Author. GPT family number of parameters evolution.


If this trajectory continues, GPT-5 could redefine the limits of current LLMs, offering an unprecedented size.


Multimodal capabilities

Given that the existing GPT-4 model already supports speech and image functionalities, the integration of video processing emerges as a natural progression for GPT-5. We’ve already seen Google start to experiment with this feature in its Gemini model, so it’s only a matter of time before competition forces OpenAI to innovate as well.

Therefore, GPT-5 could improve current GPT-4 multimodal capabilities and add new features like video integration, generating a pivotal shift in how we interact with AI, enabling more natural and versatile forms of communication.

From Chatbot to Agent

The transition from chatbots to fully autonomous agents is another exciting frontier. Imagine if you could assign menial tasks or jobs to a GPT-powered app. This could actually become a reality if OpenAI keeps integrating third-party services. We’ve already seen the introduction of Custom GPTs, and this will likely continue to develop.

This new feature would allow GPT-5 to connect to various services and perform actions in the world seamlessly, acting on behalf of users to accomplish tasks without direct human oversight. For instance, we could ask an autonomous agent to buy our groceries based on our own dietary preferences.
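To make the grocery example concrete, here is a minimal sketch of the tool-calling pattern such an agent could build on. The schema mirrors the JSON-schema style used by OpenAI's function-calling API, but `order_groceries`, the tool registry, and the simulated model output are all hypothetical stand-ins; no real service is contacted.

```python
import json

# A tool definition in the JSON-schema style used by OpenAI's
# function-calling API. `order_groceries` is a hypothetical stand-in
# for a real grocery-service integration.
GROCERY_TOOL = {
    "type": "function",
    "function": {
        "name": "order_groceries",
        "description": "Order groceries matching the user's dietary preferences.",
        "parameters": {
            "type": "object",
            "properties": {
                "items": {"type": "array", "items": {"type": "string"}},
                "diet": {"type": "string", "enum": ["omnivore", "vegetarian", "vegan"]},
            },
            "required": ["items", "diet"],
        },
    },
}

def order_groceries(items: list, diet: str) -> dict:
    """Placeholder for a call to a real third-party grocery API."""
    return {"status": "ordered", "items": items, "diet": diet}

# Local registry mapping tool names to Python callables.
TOOLS = {"order_groceries": order_groceries}

def dispatch(tool_call: dict) -> dict:
    """Execute a tool call as it would arrive from the model:
    a function name plus JSON-encoded arguments."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated model output asking the agent to act on the user's behalf.
call = {
    "name": "order_groceries",
    "arguments": json.dumps({"items": ["oat milk", "lentils"], "diet": "vegan"}),
}
print(dispatch(call))
```

In a real agent, the model would emit the tool call itself and the loop would feed the result back to the model; the local dispatcher above is the piece that connects the model's intent to an actual service.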

Better accuracy

With each iteration, the accuracy of GPT models has improved, making them more reliable in understanding context and generating appropriate responses. A next-generation GPT model would likely mean an increase in training dataset size and variety.

OpenAI reported that GPT-4 is 40% more likely to produce factual responses than its predecessor GPT-3.5, so GPT-5 is expected to continue this trend, reducing errors and enhancing the fidelity of its interactions.

Increased context windows

One of the limitations of current models is the size of the context window they can consider for generating responses. Given that GPT-5 might be trained with a larger amount of data, it is anticipated to have an expanded context window, allowing it to understand and reference larger portions of text, leading to more coherent and contextually relevant outputs.

Cost-effective use of the OpenAI API

As newer models emerge, we can also anticipate a reduction in the cost of using the OpenAI API, making technologies like GPT-4 and GPT-3.5 more accessible. A launch of GPT-5 could mean that GPT-4 becomes cheaper to use.

This democratization of access could spur a wave of innovation, enabling a broader range of developers and organizations to integrate advanced AI into their applications.

As access becomes cheaper, a broader range of developers could put GPT models to work on complex tasks like coding or research. If you haven’t tried OpenAI’s API yet, I strongly recommend following DataCamp’s guide to the OpenAI API to get a taste of it.
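To see what "cost-effective" means in practice, the sketch below estimates per-request cost from token counts. The per-1,000-token prices are illustrative assumptions (roughly in line with early-2024 list prices); always check OpenAI's pricing page for current numbers.

```python
# Hypothetical per-1,000-token prices in USD, for illustration only.
PRICES = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one API request for a given model."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Compare the same request across models.
for model in PRICES:
    cost = request_cost(model, input_tokens=2_000, output_tokens=500)
    print(f"{model}: ${cost:.4f}")
```

Arithmetic like this is why price cuts matter: at these assumed rates the same request differs by more than an order of magnitude between models, so a cheaper GPT-4 tier after a GPT-5 launch would directly widen who can afford to build on it.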


Conclusion

While we eagerly await concrete details about GPT-5, it's crucial to remember that the discussion here is rooted in speculation, drawing on historical facts, general AI trends, and the few clues OpenAI’s team has shared.

History suggests that we may see incremental updates, such as a GPT-4.5, before GPT-5 arrives.

Regardless of the timeline, the evolution of the GPT series continues to captivate the imagination, promising a future where AI's potential is limited only by our ability to envision its applications.

If you’re eager to get started exploring all that GPT models have to offer, start with our Introduction to ChatGPT course or, if you’re already familiar with the model, our webinar on Using ChatGPT’s Advanced Data Analysis.

Josep Ferrer

Josep is a Data Scientist and Project Manager at the Catalan Tourist Board, using data to improve the experience of tourists in Catalonia. His expertise includes the management of data storage and processing, coupled with advanced analytics and the effective communication of data insights.

He is also a dedicated educator, teaching the Big Data Master's program at the University of Navarra, and regularly contributing insightful articles on data science to Medium and KDnuggets.

He holds a BS in Engineering Physics from the Polytechnic University of Catalonia as well as an MS in Intelligent Interactive Systems from Pompeu Fabra University.

Currently, he is passionately committed to making data-related technologies more accessible to a wider audience through the Medium publication ForCode'Sake.

