Data Trends & Predictions 2025 with DataCamp's CEO & COO, Jonathan Cornelissen & Martijn Theuwissen

As the Co-founder & CEO of DataCamp, Jonathan helped grow DataCamp to upskill over 10M+ learners and 2,800+ teams and enterprise clients. He is interested in everything related to data science, education, and entrepreneurship. He holds a Ph.D. in financial econometrics and was the original author of an R package for quantitative finance.
As the COO and co-founder of DataCamp, Martijn helps DataCamp’s enterprise clients with their data and digital transformation strategies, enabling them to make the most of DataCamp for Business’s offering, and helping them transform how their workforce uses data.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
There's definitely a bet all the big players are making. They're building more compute and so they will be scaling through more compute next year. And then the question is, OK, is there more data? And this is where it gets more nuanced. And I think it's a spectrum. It's not black and white.
I'm very bullish on AR glasses combined with AI. I think there's so many use cases that will be unlocked. It feels like we're very close to the inflection point. Meta has these glasses, and they're building the most powerful AI models. Just think about those AI models being able to hear and see exactly what you as a human are seeing and what you can do with that. I think that's gonna be incredibly exciting and powerful. We're probably at the inflection point where in one year there's gonna be so many new use cases that are really powerful and impactful to people's lives.
Key Takeaways
AI models are moving toward more complex reasoning, which may increase response times for certain tasks. Optimize use cases by aligning low-latency requirements with faster models and reserving reasoning-heavy tasks for high-value scenarios.
As scaling laws hit limits with natural text, synthetic data is expected to gain traction, especially for code and video. Explore tools and frameworks to generate high-quality synthetic datasets for niche use cases.
With video generation AI maturing for sub-10-second clips, industries like social media, marketing, and advertising are poised to benefit. Consider integrating short-form AI-generated video content into your strategies for engaging audiences.
Transcript
Richie Cotton: I think this year, a lot of the generative AI hype, has been centered around OpenAI and Google. So do you think they're going to continue to be leaders in 2025?
Martijn Theuwissen: Yeah. So as you said, like, I think at the top of the leaderboards it's mainly LLMs from OpenAI and Google, probably also a little bit Anthropic, but they've been having a little bit of a struggle in recent months. So you could say an OpenAI and Google duopoly for 2024. Now, I think for 2025, these tech leads are going to be hard to sustain.
I think we're going to see a new challenger at the top of the leaderboard. So you have, for example, xAI's Grok, the one from Elon Musk, which is climbing the leaderboard rankings. Also, they're doubling the size of their GPU cluster. So if the scaling laws hold, they're going to have the infrastructure capabilities to compete at that very top level.
So I think that's a possible contender, one we're going to see up there by the end of 2025. And then besides Grok you also have the Llama models, so Meta's Llama models. I think the past year has shown that they can be a very consistent player. They regularly bring out new models that are on the heels of these top models.
And even, I think, if you, for example, compare Mark Zuckerberg's talk with those of the other CEOs of the Magnificent Seven, you see that AI is, for Meta, this very, very important thing. And that tells me something. So, I think it's very reasonable to make a bet that they will...
And yeah, maybe a quick shameless plug here: DataCamp has some great tracks on Llama as well. So if you want to learn more about it, definitely check those out. It's actually a very accessible model as well. And then there are the outliers, like, okay, how is Amazon's Nova model going to do, which got announced in the second part of the year? NVIDIA brought out its own model, and the Chinese players, the Qwen model of Alibaba.
So, I think if we're going to look back 12 months from now on how the leaderboard looks compared to today, where it's mainly focused on OpenAI and Google, I think we're going to see some very big differences, including some open source at the top there.
Richie Cotton: That's cool. There's so much competition. I mean, you listed a lot of models there, and it seems like there's just so many people just want that glory of having the best model out there that, that, yeah, competition's hot, and those tech leads are hard to sustain. So, yeah, we might see some, some real challenges this year.
Jonathan Cornelissen: I think the good news about all of this is that price will continue to go down as the quality goes up. So I think it's gonna be amazing for consumers and for businesses built around this stuff, especially now that we have Amazon in the game, going to keep pushing the price down, as well as the open source players like Meta.
Richie Cotton: Yeah, definitely. So I mean, something like that, the cost per token has already sort of dramatically dropped over the last couple of years. And yeah Amazon is known for sort of low cost approaches to things. So that will be interesting to see if the prices drop even further. yeah.
Okay. So, the prediction then, so we can remember for next year, is we're gonna have another challenger at the top of the LLM leaderboards to challenge OpenAI and Google by the end of the year. Nice. Let's see how it pans out. So, in December, we've seen some big breakthroughs in LLM reasoning.
So, do you think that well, actually, rather than be putting words in your mouth, just tell me what are your predictions around AI reasoning for 2025?
Jonathan Cornelissen: Yeah, this is an interesting one, because when we were preparing and thinking about our predictions for next year, the prediction we were going to make was that we're going to see a breakthrough in reasoning. And since then, that actually already happened. So maybe we should talk about what happened for a second and then how we can adjust the prediction, because the reality was ahead of our recording timing. So, for those who don't know, there's all these benchmarks for AI models, and a consistent criticism of a lot of these benchmarks is that, well, models are essentially kind of a compression mechanism, and they just remember what the answers are, and maybe they generalize a little bit.
And so in this context, you have a benchmark called the ARC Prize, developed by François Chollet and one other person, I think. And the idea behind this benchmark is that they're really novel types of problems. So some of them are fairly easy for humans to solve, but they've historically been really hard to solve for AI models. This was the one benchmark that was kind of unbeaten. Most recently, OpenAI announced a model where they essentially broke the records on the ARC benchmark; I think they're at 87.5%. It took them a while, it took them a lot of money, but they were able to, for the first time, make significant progress on that benchmark and kind of reach human-level reasoning capabilities, in some ways.
So we were going to make the prediction that there's a significant breakthrough in reasoning. That just happened, basically. So I was thinking about how we can adjust the prediction: what kind of logically follows from what happened recently? And I would adjust the prediction to: we're going to see a scientific breakthrough as a result of the ability of these models to reason.
And my bet would be that it's somewhere in a space where there's a very large search space, something like biology or pharmaceuticals. Because these reasoning models, while expensive, there are definitely use cases where you can pay a lot of money for them to think through things and search through potential solutions.
Maybe that's more hope than a well-founded prediction, but I think there's going to be really interesting things happening in this space.
Richie Cotton: Yeah, absolutely. Those recent breakthroughs have been pretty amazing. It does look like we're going from models that can just memorize stuff to models that can come up with new ideas and really do reasoning well. So just to push you a bit on your prediction: we've already had things like AlphaFold, which does protein folding, and that has contributed to scientific research already.
So, what's going to be different about these models doing research and making breakthroughs in 2025?
Jonathan Cornelissen: So my expectation is that, or to make the prediction more clear, I would say it's the generic models that now start to have a chance at having breakthroughs. Because you're absolutely right, there have been models that were specifically designed for a certain situation. AlphaFold is probably the most exciting and the most impactful one.
I think what's important to state is that o3 is a generic model. It's not optimized for the ARC Prize or a certain type of reasoning. It's optimized for reasoning. And that's what makes it exciting. So we could see scientific breakthroughs in areas where we really don't expect it.
I think that's what's new in the prediction. Does that make sense?
Richie Cotton: It does. Yeah. So we've had artificial narrow intelligence that's been better performing than humans in specific tasks. But here we've got these general-purpose models, and they're going to be contributing to scientific research now. So, all right. Yeah. I hope this one pans out.
Martijn Theuwissen: And to, to add to that one is like you can think of different use cases as well. Like I recently came across a post by Ethan Mollick, who basically had a scientific paper with a mathematical error in it. And he basically applied one of the OpenAI models to it and asked like, Hey, figure out the mathematical error in this and it could figure it out like in a matter of seconds.
Like, if you think about having such a generic tool which is available to everyone, available to all researchers in the world, pre-publishing or when they evaluate their own research, that's extremely powerful. And that in itself can lead to breakthroughs.
Richie Cotton: Yeah, reviewing papers is one of the least favorite things, I think, of most academics, and having AI do that seems like a pretty amazing feat. In fact, I am now wondering how much existing research... if you just loop over every single paper that's been published over the last few years and see, are there any errors in this?
I think we're going to find a lot of existing science be...
Jonathan Cornelissen: Oh my God. Yeah. That's going to be a bloodbath.
Richie Cotton: Yeah. Well, it's going to be interesting times for that, I think. All right. So, on the subject of better reasoning, this sort of feeds into a broader conversation about AI scaling.
So it seems opinion is quite divided. Some people are like, okay, AGI is coming soon, we're going to hit the singularity, AI overlords and all that. Other people are like, well, actually, you know, GPT is sort of plateauing; with all these large language models, there's only so good you can get at writing text.
We're not going to see much improvement over the next few years. So, do you think the AI scaling laws are going to hold?
Jonathan Cornelissen: Yeah. So I think it's a really interesting debate, and a lot depends on the outcome. I don't think it's binary, though. But maybe let's take a step back first. So what are we talking about here? I think when people refer to the scaling laws, they refer to the fact that what we've observed is that if you add more compute, more data, and larger models, you get significant improvements in the performance of these models, right?
And what tends to be true is that you kind of need all three to go up to get the best improvements going forward. So if you take them one by one: more compute, there you can just look at what's happening. And what's happening is that there's a massive amount of build-out happening for data centers.
There's a lot of GPUs being bought that are not yet delivered or not yet in production. So there's definitely a bet all the big players are making. They're building more compute, and so they will be scaling through more compute next year. And then the question is, okay, is there more data?
And this is where it gets more nuanced. And I think it's a spectrum. It's not black and white. If you just look at text data, a lot of it has been consumed or is being consumed by these models. I'm sure there's quality improvements and so on that you can make. But that's where you're starting to hit the maximum available data.
For text at least. so I think we can infer a couple of predictions from this. Which is like, where are we not out of data yet? I think video is the clearest example. There's, there's a lot of video that is available or can be available that hasn't been trained on yet. And then the most interesting one is like, can we generate more data?
to train on and is that going to be useful? I think that's a really interesting debate. My prediction for next year would be that there's kind of a comeback of synthetic data and the importance of synthetic data, especially in areas where you can validate whether or not synthetic data makes sense and holds true.
So think, for example, of code. There's only a certain amount of code on GitHub and available publicly, and I'm sure most of that has been trained on. But the beautiful thing is you can generate code in all types of ways. You can use Cursor, Devin, or the foundational models themselves to generate more code.
And you can validate whether it's doing what you're expecting it to do in various ways. So you can actually generate an infinite amount of additional code that's sensible. And it doesn't matter that there's all these hallucinations and there's a bunch of junk code that gets generated. You can just throw it away and only train on the code that makes sense.
So I think there's certain areas where the scaling laws will keep going forward significantly, and that's setting aside any model innovation that's happening, because the reasoning models, that's kind of the model innovation happening there more than anything else.
So, yeah, in short, the prediction I would have is a resurgence of synthetic data, or synthetic data going mainstream, especially where you can validate whether the data is sensible or not.
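The generate-then-validate loop Jonathan describes can be sketched in a few lines of Python. This is a minimal illustration, not any lab's actual pipeline: the candidate snippets are toy stand-ins for model outputs, and a real system would sandbox execution and run much richer test suites than a simple "did it run" check.

```python
import subprocess
import sys
import tempfile

def runs_cleanly(snippet: str, timeout: float = 5.0) -> bool:
    """Return True if a generated snippet executes without errors.

    A bare subprocess keeps the sketch short; real pipelines would
    sandbox untrusted code and validate against proper test suites.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(snippet)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def filter_synthetic_corpus(candidates: list[str]) -> list[str]:
    """Keep only candidates that validate; throw the junk code away."""
    return [c for c in candidates if runs_cleanly(c)]

# Toy "model outputs": one sensible snippet, one hallucinated one.
candidates = [
    "print(sum(range(10)))",      # valid Python, runs cleanly
    "print(undefined_variable)",  # fails at runtime (NameError)
]
corpus = filter_synthetic_corpus(candidates)
print(corpus)  # only the valid snippet survives as training data
```

The point of the sketch is that the validator is the scarce ingredient: because generated code can be checked mechanically, the junk is cheap to discard, which is exactly why code is a promising domain for synthetic training data.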
Richie Cotton: Yeah, that's interesting. With text, in a lot of cases, there's only so good you can make it. If you're asking a simple general knowledge question, there's only so good the answer can be. But for things like code, there's a divergence in whether the model currently works well or not. So if you're writing Python, it's brilliant; if you're writing in some obscure language where there's just not that much data, then it just doesn't work very well, and so you need to create...
Jonathan Cornelissen: And even within Python, if you're using packages that are just not that popular, you notice a huge difference as well. Right? That's a good example of: you can probably solve that by generating more code using that package for the models to train on.
Richie Cotton: So I like the idea that you're going to create synthetic code in order to train the next generation of models, and then you're going to get better performance in those areas where it doesn't work very well currently. All right, cool. Okay, so I guess the prediction then: we're going to continue to see scaling, but it's not necessarily going to be in general-purpose conversation.
It's going to be in the niche areas where things don't perform very well at the moment. Is that the prediction?
Jonathan Cornelissen: I wouldn't say niche, right? Like, coding is a huge area. Video is a huge area. And we can talk about video specifically, because I think it should be a prediction of its own. But yes, with the caveat that I wouldn't call those areas niche; I think they're massive in potential.
Richie Cotton: Okay. All right. So yeah, I agree that coding is a fairly big topic. So, yeah, I guess the prediction is just going to be that we're filling out the gaps where models currently don't perform very well. So we should see consistently good performance everywhere.
Is that the idea? Cool. I guess the other area that's been hyped a lot is around multimodal AI. And we talked before about how, well, image generation kind of works now. You said last year that video generation is going to kick off in 2025. Do you still think this is going to be the case, Martijn?
Martijn Theuwissen: Maybe. So, let me clarify. I think video gen AI will take off for social media, advertising, and basically everything that needs videos below 10 to 15 seconds. So like we chatted already, the new models that got released in the last two months, Sora by OpenAI, Veo 2 by Google, they're really impressive. The short demos they show, for the listeners who haven't watched any of those, definitely do so; it's super impressive.
So I think that these models are ready to tackle video that's below that 10-second mark, which then brings you to, okay, where are those short videos used: social media, advertising. But to be clear, there have already been AI videos created in a longer form. You have the indie band Washed Out creating a full-fledged music video. And for those that are familiar with the typical Coca-Cola Christmas spot: Coca-Cola made one of their famous Christmas commercials with gen AI.
So it is possible, but I still think that the longer form is more a B2B phenomenon, partly because of cost, partly because the type of prompting you're going to need to do is way more complicated.
So if you think about that example I gave about this indie band that made a music video: if you read the prompts that they needed to provide, these were full-blown essays with tons of jargon around, okay, how to hold your camera and so on. So definitely not something that is accessible to everyone, both from a price perspective and from a technical-understanding perspective. But a lot of that does not matter when you're creating your next social media video, or when you're creating a short advertisement. And so I think in that context, we're going to see takeoff from video gen AI.
And I think it's going to be used, to be honest. If you think how much social media content is created, but if you also think about, okay, how many short videos you encounter every day in the form of ads, if that all goes to an AI model, that's a lot that moves all of a sudden to the video gen side.
And I think the models seem to be ready for that.
Richie Cotton: Okay, so that's interesting. So yeah, that does seem to make a lot of sense that it's these sub-10-second videos that are going to take off. So we'll see, I guess, infinite AI cat videos on the social media side. And then, I guess, cooler adverts, maybe. So all that stuff popping up in your web browser, maybe on TV too; it's going to be a lot of AI-generated stuff.
And maybe we'll save long-form video for 2026. All right. So, video takes off in a short-form way in 2025. Now, we talked a bit about the performance of AI models. What we haven't really talked about is the AI product experience. So, I guess, sometimes it feels like, well, chatting with whatever AI was kind of cool, but now it's like, ooh, I've got to wait three seconds for my response.
And that feels far too long. So, yeah, talk me through: how is this product experience going to change in 2025?
Jonathan Cornelissen: Yeah, I think you will have a kind of bifurcation to some extent, where you'll have these in-the-moment interactions with AI through chat, through voice. I would expect voice to become much more prominent next year. But I think what might be more interesting, or the prediction I would formally make, is that there are going to be way more AI product interactions where it takes a while for it to come back to you with a response.
It might be a minute, but it could be even half an hour to an hour in some cases. Because if you see the trends of these reasoning models, they take time to think, and it's expensive for them to think. But the strength of the answer is just so much higher. And there are definitely economic use cases where you're willing to wait, or you're willing to spend a little bit of money, to get an answer that's really good and that's well reasoned.
So that's what I would predict to see more of next year. I don't think we're seeing a lot of that just yet. But it goes back to all the excitement around agents. I think both agents and models will take some time: just like you would outsource or delegate a task to a human being, you'll do that to AI models and product features, and they may take half an hour or an hour to get back to you.
And that will become more normalized, I think, next year.
Richie Cotton: Wow, that's the opposite of what I was expecting. I thought you were going to say, well, okay, models are going to get faster, and so latency is going to be a big issue. Actually, I suppose this makes a lot of sense: things like o1 and o3, they take a long time to respond. And so actually we're going to see these high-reasoning use cases where it does take a long time.
Jonathan Cornelissen: Yeah, Richie, don't get me wrong. I think what you're saying is true as well. I just think you're going to get a bifurcation. So, for example, think about customer support. If I'm chatting with a customer support rep, and it's an AI rep, I want an answer immediately in certain situations.
But once that agent fully understands, oh, this is the problem, there's a whole series of steps that need to happen, or, say, the model that's optimized to interact with you can't really solve the problem. You can imagine a tier-two support agent that has to think more and can take a while to then get back to the customer, if that makes sense.
So I'm not saying that what you're saying is not true. I'm just saying I think there's a bifurcation and we'll get that second tier as well.
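The bifurcation Jonathan describes amounts to a routing decision. The sketch below illustrates one way to frame it; the model names, the latency threshold, and the value scores are all hypothetical placeholders, purely to show the idea of sending quick, low-stakes queries to a fast tier and reserving slow, expensive reasoning for high-value tasks that can wait.

```python
from dataclasses import dataclass

# Hypothetical model tiers; real names, prices, and latencies will differ.
FAST_MODEL = "fast-chat"            # answers in well under a second
REASONING_MODEL = "deep-reasoning"  # may take minutes to respond

@dataclass
class Task:
    prompt: str
    max_wait_s: float  # how long the caller is willing to wait
    value: float       # rough economic value of a strong answer

def route(task: Task) -> str:
    """Send impatient or low-stakes tasks to the fast tier; escalate
    high-value tasks that can tolerate latency to the reasoning tier."""
    if task.max_wait_s < 5.0 or task.value < 1.0:
        return FAST_MODEL
    return REASONING_MODEL

# A tier-one support question vs. a tier-two diagnosis.
print(route(Task("Where is my order?", max_wait_s=2.0, value=0.1)))
print(route(Task("Diagnose this recurring billing bug", max_wait_s=1800.0, value=50.0)))
```

In a real product, the thresholds would come from measured latencies and prices rather than hard-coded constants, but the shape of the decision is the same: latency tolerance and answer value pick the tier.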
Richie Cotton: Okay, yes, there's a sort of latency-reasoning trade-off here, and you've just got to pick whatever the most appropriate thing is for your use case. So, going back to your idea of models that take a long time to run: what's your prediction there? Like, how long do you think it's going to be?
What's the longest-running model you think is going to happen?
Jonathan Cornelissen: I think the bigger problem than time is cost, because it does get really expensive if you start thinking in hours. So I think cost is probably the real constraint, more so than time.
Richie Cotton: So, that's interesting. Okay, so, cost is going to be a big constraint there.
Martijn Theuwissen: Maybe on that point, it raises a question: how important will the inputs be? Because, okay, once it starts running, your time is gone, your costs are going up, and if you provide it the wrong prompt, well, you kind of have an issue. Actually, I was wondering when we were discussing this, okay, what will that mean for the type of input you provide to the model?
Like, how do you frame the question? How are you hedging yourself against a bill that keeps ramping up, and at the end of the two hours you get a piece of output where you're like, oh, well, I should have put a little bit more time into my question here?
Jonathan Cornelissen: Yeah, isn't that exactly similar, Martijn, to just how companies work and delegating work to humans?
Martijn Theuwissen: Yes, it is.
Jonathan Cornelissen: You ask someone to do something, and you can be very vague, and then you get something back and you're like, oh, wait, that's not at all what I expected. And it's because the ask wasn't clear enough.
Now more and more people will have to learn that skill. It's a form of prompt engineering again, because the costs are going to be very real, at least initially.
Richie Cotton: Oh man, so that sounds like you're going to need to learn some people management skills to deal with your AI.
Martijn Theuwissen: A new term: let's call it prompt management. You heard it here first.
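Martijn's worry about a bill that keeps ramping up suggests one concrete guardrail: cap the spend on a long-running job and stop once the budget is exhausted. The sketch below is illustrative only; the per-token price and the step structure are invented, and a real client would read usage from the provider's API rather than simulate it.

```python
COST_PER_1K_TOKENS = 0.06  # assumed price; real provider rates vary

def run_with_budget(steps, max_cost_usd: float):
    """Run reasoning steps (each returns (output, tokens_used)) and
    stop as soon as the running cost reaches the budget."""
    spent = 0.0
    outputs = []
    for step in steps:
        output, tokens = step()
        spent += tokens / 1000 * COST_PER_1K_TOKENS
        outputs.append(output)
        if spent >= max_cost_usd:
            break  # budget exhausted: stop before the bill ramps up
    return outputs, spent

# Simulated job: ten steps, each "thinking" with 10,000 tokens.
steps = [lambda i=i: (f"step-{i}", 10_000) for i in range(10)]
outputs, spent = run_with_budget(steps, max_cost_usd=2.0)
print(len(outputs), round(spent, 2))  # stops after 4 of the 10 steps
```

A cap like this doesn't fix a vague prompt, but it bounds the damage: a badly framed two-hour job fails at your budget, not at an open-ended bill.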
Richie Cotton: Last year, one of your predictions was that the AI hype is going to fade, it's going to disappear. That hasn't really happened. So I'm wondering, is it going to continue into 2025? Is AI still going to be cool? Is the hype still going to persist for another year?
Martijn Theuwissen: I do think it's going to be less prominent in product marketing. Like, I'm not expecting every week another email that has as a subject line, new AI thing announced, AI feature announced, or something like that. So the time of slapping AI on everything is over. I do think it made sense in 2024, just as a signal to buyers who wanted to know if the products they were buying were on top of innovation.
But I also think that in 2025, buyers are going to get a bit more demanding. 2024 was a lot about the tell in show and tell; I think 2025 is going to be a lot more about the show, being able to actually display the value that adding AI features and functionalities to your tools has. I also think that most users are going to care less and less whether certain functionalities are done with AI or not, meaning you don't need to explicitly say it. Going back to my car story: do I really care that AI was used somewhere in the car I'm going to buy?
Probably not. I care about, can it bring me from point A to B? Can it drive fast? Does it have enough room, and so on? It's something that Benedict Evans said in one of his newsletters: when it works, it's just software. I think we're going more and more in that direction.
There's maybe one exception here, and that's when the product is for technical users, or products that have this productivity increase based on AI as their explicit value proposition. So if you think about copilots, or if you think about AI journaling, these are a couple of products where users really love the features and want to be sure that they're AI-based.
So I think in those types of products, you're still going to see, okay, this is your AI software engineer or your AI performance marketer. But at the same time, I think in other sets of products, where it's more built in as a feature or functionality, it's going to be less of a key message, and we're going back to normal there.
Richie Cotton: yeah, I can certainly see how if you're trying to sell co pilot, you're probably gonna need to mention AI at some point. But maybe for most products, people don't necessarily care whether there's AI in it or not. We've talked a lot about what's happening for sort of, I guess, users of AI. I'm curious as to what's happening at the executive level.
So, how do you think AI strategies are gonna shift for companies in the next year? Yeah.
Jonathan Cornelissen: That's a great question. It's something we obviously think about a bit, mostly from the upskilling angle, so let me talk a little bit about that angle. I think in the last 12 months, as Martijn mentioned earlier, one of the biggest shifts we saw was especially larger organizations starting to focus not just on data literacy for their workforce, but also on AI literacy.
I think that trend will 100 percent continue. And I think what we've seen some indications of in the last 12 months, and what I would predict much more strongly, is that AI skills for software engineers will become even more important from an upskilling perspective. Because generally, in this year, we've gone from a lot of prototypes to AI being part of products, AI in production.
So logically, what's going to happen now is people are going to look at usage. So if you take Microsoft's Copilot, so not GitHub Copilot, but Microsoft's Copilot as an example: people have actually been shocked by how little usage it gets, because you have the hype, and then usage tapers off.
And I think what's fundamentally behind that is that, and we're not sponsored by Microsoft or anything like that, but it's very powerful what you can do with Microsoft's Copilots. So the real problem there is that people just don't have the skills or the creativity, or they're just stuck in certain ways of doing things.
And it will require training, retraining, for them to fully leverage the capabilities. And I think the conversation is going to shift there, simply because huge investments have been made, and so they need an ROI, and humans are ultimately going to be the bottleneck in that ROI if they're stuck in their old ways.
So I would expect the focus to shift. Well, last year we went from prototype to production; now we're going to talk about usage, and why is there not more usage? And it's ultimately going to come down to, hey, we have to upskill and reskill our workforce. And that's true for all knowledge workers, if you think about things like Microsoft's Copilot, but it's going to be true for AI features for software engineers as well, and for data folks.
So it's going to be true across the spectrum, and that would be, I think, where the conversation shifts in the next year or so.
Richie Cotton: That's interesting, that despite all the hype, the problem now is just getting people to use all these new tools that exist. But yeah, I suppose change management is tricky. Like, if you want to change your processes to use a new tool, you've got to train people. Then you've got to set aside a load of time just to figure out, well, this is how we should do it in a better way.
so yeah, a lot of change management, a lot of upskilling in the future there. Okay so, what's the prediction going to be around this then?
Jonathan Cornelissen: So I think the prediction is that the conversation shifts towards usage, and we'll see even more focus on AI upskilling. And that means AI literacy for knowledge workers, and specific AI skills for STEM workers.
Richie Cotton: So, lots of upskilling needed then, just to make sure that usage of AI tools is going to kick in. Nice. So actually, while we're talking about leadership issues: one of the big stories this year was the launch of the EU AI Act, and there's a lot more AI regulation coming.
So I'm curious as to what the impacts of AI regulation are going to be over the next year, Martijn.
Martijn Theuwissen: Yeah, this one is close to my heart, being based in Europe myself. So this is not a fun prediction. Like actually think that the EU will only fall further behind US and everything like AI. Related. And I kind of split it into like three things. so you have on the one end, like the creation of new models, and you have like product development based on these new models, and then you have the AI literacy of your general population.
So if you go one by one here, I think the first one is quite simple. Training new models requires a lot of energy. Think, for example, of Microsoft announcing that they're going to lease nuclear plants. In Europe, if you look at the policies and regulations around energy, energy is actually very expensive compared to these other regions.
So for training these new models, which is very energy intensive, Europe is not going to be anybody's location of choice, because they know the costs they're going to incur will be higher. And if the scaling laws hold true, those costs are only going to go higher and higher every time.
So that's issue number one: training new models in Europe is not that great of an idea. The second thing is, there's all this regulation regarding the deployment of AI models. So what's happening is that OpenAI, Meta, and Apple, which made a similar announcement, are delaying or even cancelling the rollout of their latest models in Europe.
If you as a product developer in Europe do not get access to the latest models, the products that you develop will be behind. If you or Jo are based in the U.S. and can interact with better models than I can in Europe, you're probably going to be able to deliver better products.
You're going to be able to create a better agent. You're going to be able to work with better agents. Which puts me at a disadvantage as a product developer. So, yeah, this weakens Europe's position as a creator of excellent products. And then the final point, the third one, is AI literacy. If you don't have access to the latest models or the best AI products, your population's level of AI literacy is also going to lag, because you're just not working with the latest tools.
You're not reading up on the latest things. So compared to your peers in, again, China or the U.S., you're going to be lagging. The AI literacy level in Europe is going to be lower than the AI literacy level in these other regions. So you have three key areas where we as Europeans are behind compared to the rest of the world.
And I don't see a scenario, I don't see a change in policy, where we catch up on that. The comparison I'm thinking of is: if you work in AI from Europe, it's like competing in the 100-meter dash, except you actually start 50 meters behind everyone else.
And on top of that, somebody forgot to take away all the hurdles in your lane. So it's a lot of fun to go and do all that stuff. I am afraid that Europe today is in a position where, if you think about the next five or ten years, it's going to be really hard to catch up.
And we're always going to be lagging there.
Richie Cotton: Okay. That's interesting. Yeah, I agree. Not a particularly fun prediction, that one.
Martijn Theuwissen: You never know. If a policymaker listens to this podcast and decides to do something about it, that would be a good outcome.
Richie Cotton: Okay. Yeah. So, you mentioned the cost of energy there, and generative AI is just incredibly energy intensive. So I guess places with cheap energy are going to have some sort of competitive advantage for attracting AI infrastructure and talent.
Is that about right?
Martijn Theuwissen: Yes, I guess that's one of the elements.
Richie Cotton: Okay. And then maybe we need some sort of softer-touch regulation rather than very comprehensive regulation, so as not to scare companies off.
Martijn Theuwissen: Yeah, I think you want to have openness towards the latest models, at least in my view, rather than the very prudent approach that's being taken today. Take the fact that Apple is shipping an iPhone with fewer features in Europe: if you had said that 10 years ago, I think people would have been baffled by it.
It's pretty crazy that this type of bifurcation is starting to happen.
Jonathan Cornelissen: Yeah, I want to plus-one that. I think it's really sad. I live in the US, but I was born in Belgium, so as a fellow European it's really sad to see this kind of crippling regulation continue in the AI space. Imagine if that's what Europe had done at the beginning of the Internet. They sort of did, but it took them a while.
I think, for better or worse, they're very much on top of their game now from a regulation perspective, so they're much faster. And so the damage is much bigger, to be blunt. Especially because, if you look at what's happening in France, and in Paris in particular, the European Union does still have very strong engineering schools in certain areas.
And so you do have a lot of really high-quality research coming out of universities. So there is potential, and it's sad to then see that ultimately that gets exported, or that people who want to build companies in this space ultimately emigrate, or try to emigrate in a lot of cases. So it's kind of a self-inflicted problem from a European perspective.
So I hope it changes. I don't think it will. So I agree with Martijn's prediction; I wish it weren't true.
Richie Cotton: Are there any particular areas of regulation that you think are causing problems? You mentioned things like features being missing from iPhones. I'm not sure what those things are; it's been a while since I compared European and U.S. product features. I'm just wondering, are there any particular areas of regulation that you think are problematic?
Martijn Theuwissen: The regulation itself. To me, it's a very strange concept that you try to regulate something that you can't even exactly define today, in terms of what it is and what its implications are. So, what are you regulating?
Richie Cotton: Okay. Yeah, certainly you can end up with regulation that's too broad if you're not quite sure what you're defining there. Okay.
Jonathan Cornelissen: And just to be clear, I don't think Martijn or I are saying that there are no risks in AI and there should be no regulation at all. I don't think that's the case. I do think ultimately there are risks with AI and there should be some sort of regulation. I just think if you're too early, you damage the ability to innovate.
And the fact that so many American companies are indeed saying, hey, we're just not going to launch the new version of this model or this iPhone in Europe, is incredibly damaging. And it's actually shocking how little attention that gets in Europe itself. It gets more attention outside of Europe than inside of it, it feels like.
Richie Cotton: Interesting. All right. So we've got a prediction, and we hope it's not going to come true: that Europe, particularly the European Union, is going to fall behind. Okay.
Jonathan Cornelissen: Yeah, we can only win on this one.
Richie Cotton: We've talked a bit before about how adoption of AI is increasing around the world. I'm curious in 2025, do you think there are going to be any particular areas that are going to see a dramatic uptick in usage of AI?
Martijn Theuwissen: Yeah, I can take the first one. So, one of the things we've been discussing at DataCamp is that we actually think teachers will be among the quickest adopters of AI, or at least that we're going to see a huge increase in AI usage from the education sector. Part of it is driven by the fact that younger people are quicker adopters of technology than older people.
We're generalizing here, but more or less that's going to be true. So there's this whole generation of AI natives coming up. A lot of them are still at school, which means their teachers are exposed to AI through their students. They're the first ones exposed to this new generation of AI natives,
probably a lot more than any other profession. And so I think this is going to lead to much higher pressure on teachers to adopt AI and to use it in school. And there's for sure going to be an ongoing battle over whether AI is a good or a bad thing in education. I think it's mixed.
There are going to be many great use cases. You can get personalized help. You can get answers to your questions quicker. You can get explanations in a way that better suits you compared to the neighbor who sits next to you in class. But I also think there are arguments to be made, going back to an earlier point, that for understanding the logic behind things, it's probably better that students are taught from first principles.
Now, regardless of that, I think due to these dynamics there's going to be a dramatic uptick of AI in education, in schools, in universities. I'm also seeing the first courses that teach teachers how to use prompt engineering to make their lesson plans and so on.
So, yeah, I have very high hopes for some kind of evolution slash revolution in the education sector around AI usage in the next year.
Richie Cotton: That's interesting, that basically because teachers are around young people a lot, they're much more likely to be clued into what's going on with AI. But yeah, I think you're right: there are just so many cool things you can do with AI in education that the sector's bound to adopt it even more.
Yeah. Jo, did you want to add to that?
Jonathan Cornelissen: Yeah, I totally agree. I think the exposure teachers have to students who are all over this is really going to accelerate AI adoption in schools. The only thing to add, maybe, is a shameless plug. If you are a teacher or a professor and you want to educate your students in data and AI,
we have a DataCamp for Classrooms program that allows you to use DataCamp for free for six months in a classroom setting. So definitely check that out. It can be helpful to have that for your students so you don't have to make it all up yourself. We try to support teachers in this area as well.
Richie Cotton: Absolutely, wonderful stuff. So yeah, for any teachers listening in, please do get in touch. Now, we've made a lot of predictions around AI so far, but not so many about data. Is data science still going to be relevant in 2025?
Jonathan Cornelissen: I would say yes, but again, it depends on how you define it, because there's a convergence of the data and AI space, and of data and AI skills. Some of the skills that used to be important might not be as important anymore; there's kind of an AI layer on top of those skills. But yeah, I think if you're a data scientist and you pick up AI skills, you will truly have superpowers. Are you going to be called a data scientist or an AI engineer? Who knows? Maybe it depends on the company you work at. In some sense, there might be a bit of a rebranding of data science to things that sound sexier now, but the fundamental skills, I think, are similar at the end of the day.
Richie Cotton: Yeah, that's kind of interesting, because I think at one point data scientist was the coolest job role you could have, and now it's become mainstream, so you need some new titles for the same important skills just to keep it sounding exciting.
Jonathan Cornelissen: I was thinking about this. Maybe AI engineer is the new data scientist, in some ways.
Richie Cotton: Oh, man. Okay. Yeah. So, that's interesting. Competition for the hottest sort of data related job role then. So, yeah. Maybe AI engineer supplants data scientist as the coolest job. It's going to be a difficult one to measure, I think, at the end of the year, but yeah. That does sound like a fun prediction.
All right. So, just to wrap up, what are you most excited about for 2025 in the world of data and AI?
Martijn Theuwissen: For my part, it goes back to the origins of DataCamp and our mission: a big part of it is about democratizing data skills, analytical skills. And I think that prompt engineering is allowing more and more people to become their own data analysts, become their own data scientists, and interact with the data at their disposal in novel, very easy ways. If I look back at the past months, there was probably an interesting prompt engineering exercise every week that gave me a new insight I could easily share with my colleagues, opening up to them the power of prompting with the data they have, an ability they did not have before. That really excites me, because there are new ways to look at your performance marketing. There are new ways to evaluate the sales goals that folks set. There are even new ways to evaluate the UI and UX of your website. And if you think about every employee in your company having access to that kind of capability, and knowing how to use it by seeing use cases from other folks, that really excites me. Because I think a lot of it is a creative endeavor.
Richie Cotton: That's very cool. I do like the idea of empowering people with less technical skill to perform tasks that have traditionally been technical, and to do things they've not been able to do before. Okay. And Jo, how about you?
Jonathan Cornelissen: From a business perspective, I totally agree with Martijn. DataCamp's mission was always about democratizing data and AI skills. And as you get more power into the hands of people, educating them to use that power is so much more impactful. So I think that's really inspiring.
And it makes what we do even more relevant to a lot of people. So from a business perspective, I have the same excitement for 2025. From a personal perspective, I'm very bullish on AR glasses combined with AI. I think there are so many use cases that will be unlocked, and it feels like we're very close to the inflection point.
Meta has these glasses, and they're building the most powerful AI models. Just think about those AI models being able to hear and see exactly what you as a human are seeing, and what you can do with that. I think that's going to be incredibly exciting and powerful. And I think we're probably at the inflection point where, in one year, there are going to be so many new use cases that are really powerful and impactful to people's lives.
Richie Cotton: Yeah, certainly glasses that can tell you the name of the person sitting in front of you when you've forgotten it. Oh, yeah, that'd be incredibly useful. Many, many use cases for that. So, yeah, interesting idea. Let's hope it finally takes off. I mean, it's been pitched for, well, more than a decade now, this idea.
Jonathan Cornelissen: Yeah, I know it's been pitched for a while. I feel like the next two years are when it's actually going to become reality.
Richie Cotton: Cool. Okay. So we've got a lot of predictions. I guess I shall see you again next year for the results. I shall probably see you both again before then, but yeah, thank you both for taking the time to chat.