
Industry Roundup #3: The Rise of Reasoning LLMs, OpenAI Operator, Project Stargate, and Gemini’s Struggle for Recognition

Adel and Richie discuss the rise of reasoning LLMs like DeepSeek R1 and the competition shaping the AI space, OpenAI’s Operator and the broader push for AI agents to control computers, and the implications of massive AI infrastructure investments like OpenAI’s Stargate project.
Feb 19, 2025

Guest
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.


Adel is a Data Science educator, speaker, and VP of Media at DataCamp. Adel has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Takeaways

1. The AI reasoning model space is heating up, with DeepSeek R1 offering high performance at a fraction of the cost, pushing competition among OpenAI, Google, and others.
2. Google’s Gemini 2.0 model is strong but suffers from poor discoverability and branding, leading to less public recognition compared to OpenAI and other competitors.
3. AI research tools like OpenAI’s Deep Research and Google’s Deep Research product have potential, but struggle with speed and accuracy on emerging topics.
4. Massive AI infrastructure investments like Project Stargate highlight the growing national security and economic importance of AI development.
5. Replit’s new mobile app hints at a future where AI acts as an operating system, generating apps and tools on demand with natural language.

Links From The Show

YouTube Tutorial: Fine Tune DeepSeek R1 | Build a Medical Chatbot

Transcript

Adel Nehme: All right, all right, all right, Richie Cotton, how are you?

Richie Cotton: Hey Adel, life is good. I'm feeling excited. So much stuff to talk about. It feels like it's been a while since we did one of these industry roundups. We could probably talk for hours on what's happened in the last month.

Adel Nehme: Yeah, indeed, AI moves at light speed. So a month and a half of not covering the news means we have like six hours of content for you, but we're going to try to condense it to our favorite stories. And we have three stories for you today. We're going to cover them. I'll go first, right?

First one is going to be on the rise of reasoning LLMs and the reasoning LLM wars that we've been seeing over the past couple of weeks. So Richie, have you seen DeepSeek? Did you manage to catch up on

Richie Cotton: It's been impossible to miss all the talk of DeepSeek's R1 model. It's just been absolutely everywhere. But it's also making you YouTube famous. I saw your video went viral.

Adel Nehme: Yeah, we actually had a YouTube video. Everyone, do check it out on our YouTube channel. We fine-tuned DeepSeek on a medical dataset, and yeah, that blew up. I was surprised. But that said, let me give you a bit of background on DeepSeek R1 and what's been going on in the ecosystem there.

Sure. So DeepSeek R1 has been wildly hyped. If you've been on any social media in our space, you've probably already seen what DeepSeek is, but in a nutshell, it's a reasoning model developed by the Chinese company DeepSeek, which rivals O1 in terms of performance, but it's much cheaper. Apparently much, much cheaper.

DeepSeek claims that it cost around $5.5 to $6 million to train. And it's so much cheaper to use the DeepSeek R1 API. I'm talking fractions of the price of the O1 API. We're talking 55 cents per 1 million non-cached input tokens and $2.19 per 1 million output tokens. Compare that to O1, which is in the $15 and $60 range for input and output tokens.
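The pricing gap Adel quotes is easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below plugs the per-million-token prices from the episode into a tiny cost function; the example workload (2M input tokens, 500k output tokens) is a made-up assumption, not anything from the show.

```python
def api_cost(input_tokens: int, output_tokens: int,
             input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of a workload given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Prices quoted in the episode (USD per 1M tokens)
R1_INPUT, R1_OUTPUT = 0.55, 2.19    # DeepSeek R1, non-cached input
O1_INPUT, O1_OUTPUT = 15.00, 60.00  # OpenAI O1

# Hypothetical workload: 2M input tokens, 500k output tokens
r1 = api_cost(2_000_000, 500_000, R1_INPUT, R1_OUTPUT)
o1 = api_cost(2_000_000, 500_000, O1_INPUT, O1_OUTPUT)
print(f"DeepSeek R1: ${r1:.2f}, O1: ${o1:.2f}, ratio: {o1 / r1:.0f}x")
```

At those list prices, the same hypothetical workload costs roughly 27 times more on O1 than on DeepSeek R1, which is the "fractions of the price" point being made.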

So it really changes the game and the economics of the reasoning LLM space. A lot of people have had takes, you know, is DeepSeek lying? Is it actually legit? What does this mean for LLMs? Honestly, we don't know, so I'm not going to pretend like I have a take. But what this means is more competition.

So, OpenAI reacted and brought forward the release of the O3 mini model. It's accessible on ChatGPT today. And they have the O3 mini and the O3 mini high model, which is actually better. The O3 mini high model was actually quite impressive. Like, it understands, for example, physical simulations better.

I saw someone run a tutorial on it. It created code for a snake game, then that person prompted it to create a reinforcement learning algorithm that plays the snake game, and it was able to do it in one shot. So that's actually pretty impressive, right? And then OpenAI also released Deep Research.

So, Sam Altman mentioned on X that they're going to pull forward a few releases, right? Deep Research seems to be one of those. It's essentially a tool for multi-step research and report creation. It also shares the name of Google's Deep Research product.

I don't assume that was intentional, but it is a deep research product, right? And use cases here include stuff like shopping for houses, comparing cars, analyzing business trends, and so on. And then, under the radar, and it's unfortunate that it's under the radar because I think Google actually has really great models, Google announced the Gemini 2.0 Flash Thinking Experimental model, as well as the Flash Thinking Experimental with Apps model.

Really short names for these models, and it's definitely contributing to, I think, why everyone sleeps on Google in this space. So maybe to set the stage here, I'll first ask you a couple of questions, Richie.

My first one: I'm actually frustrated, because Google really has great models. Their deep research model, I've been using it for a long time now. But if you look at the degree of interest online, if you look at Google Trends, right? Why does Google not get the hype it deserves?

Richie Cotton: Yeah, I mean, you're right that a load of the innovations around large language models have come from DeepMind and been incorporated into all the other models. I've got to say, it's a product thing and it's a marketing thing. Something like ChatGPT just works. The Google models, it's like, well, how do you even go about getting them?

It's stuff that's thrown at you in a Google query, or you actually have to go and look for it. So I think Google's product strategy is not really coming together. But also the marketing strategy. I mean, the naming was like, Gemini 2.0 Flash, blah, blah, blah. I can't even remember the full name and you just told me.

Adel Nehme: Flash thinking experimental with apps.

Richie Cotton: Exactly. Yeah, it doesn't roll off the tongue very nicely. Now, I can't say OpenAI's naming strategy is much better. Like O3 mini high: mini means it's small and cheap and not very good, but high means it is good. And I don't know what to think about that. So yeah, naming things is really, really hard.

This is a long-known problem in software development and product work: just getting good names is hard. But yeah, the Google marketing team really needs to think a little harder about these things.

Adel Nehme: Yeah, what's unfortunate is that they're amazing models. I have to say, Gemini 2.0 Flash Thinking Experimental, with apps or without, is blazing fast, with really good performance as well. I've tried them, played around with some of the use cases. Probably faster than DeepSeek, and maybe equally good, if not better.

And I haven't seen the benchmarks yet, but they're really, really great models. I really think this goes back to something we've discussed on the podcast before, this vibes-based approach to picking a favorite model, right? And it seems like the vibes have not caught up with Google yet.

And one thing that I think hurts Google here as well is that their discoverability is pretty bad, right? Like, it was pretty hard to go on Gemini and understand which of the models are new and which are not. And I think there's maybe room for simplification, maybe the inclusion of Gemini in Google Search, right?

Like, you have the most visited website on the planet. Make use of it if you want adoption on Gemini.

Richie Cotton: There's a certain irony there: Google is designed to help you find things, and they're having a discoverability issue. Yeah, it's not ideal.

Adel Nehme: Indeed, indeed. And I do have to say, though, Deep Research from Google is an incredible product as well. I use it almost all the time. Which brings us to Deep Research from OpenAI. So did you manage to try it, Richie? Did you experiment with the Deep Research model?

Richie Cotton: I haven't used the OpenAI Deep Research myself, but I've read a lot about it from other people trying it. And yeah, it seems the results are fairly mixed. It's not yet as good as a human doing research, particularly around shopping tasks. I feel like everyone's spent decades now shopping online.

They know how to research products, and having AI that does that is not as good as a sophisticated human yet.

Adel Nehme: Yeah, so I tried it. Also mixed reviews. When it comes to asking it about already well-established topics and themes that have tons of content online, it does a fairly standard job. Like, if you ask it to explain the difference between getting a used electric car versus a new EV or a hybrid car, and so on, it will give you a selection of models, considerations, tables, and so forth.

That was actually pretty nifty. But for new topics, I found it to be quite disappointing. I'll give you an example. We asked it to give us an overview of the LLM ecosystem from OpenAI, Google, DeepSeek, and Meta. And the most recent models it listed were DeepSeek V2 for DeepSeek and Llama 2 for Meta.

And I found that pretty interesting, because the query took 11 minutes to run. Contrast this with Google's deep research tool, which takes maybe three or four minutes. And it had access to the internet; it was looking at websites, but it didn't seem to pick up the newer models, which I found interesting.

So I think there could be a gap in Deep Research when it comes to newer topics. But that said, this is a product that's included in the $200 bundle, right? So if I'm paying $200, I expect it to be pretty great. It's a bit of a mixed bag.

Richie Cotton: Yeah, that's tricky. And certainly once you start waiting minutes at a time for a response, if it's not good, then that's a terrible product experience. Certainly 11 minutes exceeds my attention span for a lot of things, certainly for shopping tasks. If I want a more scientific report on something, I can probably wait that long, but for casual uses, it sounds like there's some work to be done there.

Actually, it does sound like these deep research products are kind of encroaching on the Perplexity space as well, where it's a lot of search and a little bit of generative stuff on top.

Adel Nehme: I do think there's a lot to that. And it's going to be interesting to see how Perplexity differentiates itself over time. For example, they released, I think in their mobile app, a general assistant. They've now been able to incorporate reasoning models in their search. I've tried that recently.

It was actually not that bad, the DeepSeek experience in Perplexity. And yeah, it's going to be interesting to see what that means for the industry. I mean, going back to the DeepSeek conversation as well: if you remember, the conventional wisdom around nine months ago was that OpenAI's next update is going to kill your startup, right?

And that there's no point in building a wrapper around a model, because the intelligence gains in the model will actually eliminate the need for many of the wrappers being made today. But actually, that wisdom seems to be flipping on its head with the DeepSeek release.

Because now reasoning models are becoming more and more commodified. I'm sure there's going to be an open-source O3 mini competitor in a few months, maybe more, maybe less, but that's generally the time horizon. And lots of folks are now changing their minds.

They're like, okay, the way you differentiate yourself in the LLM space is by having a great application, a use case, and an interface, right? So I'd love to hear your thoughts here. What does competition look like in the foundation model space versus the app layer? How do you see this? Do you see any similarity with other industries, for example?

Richie Cotton: Yeah. I mean, there's just been so much money thrown at all levels of the AI stack, from chip infrastructure right through to applications, which means there are so many companies competing now. And at every level of this, I think we're going to see competition intensifying, because of those venture dollars thrown in over the last year or two. It's like, you know, you start to have to consider your runway.

If that money dries up, then you've got to start making money somehow, and the competition is going to get pretty intense. From a consumer-of-AI point of view, I'm very excited. Competition means stuff gets better faster. So yeah, great times for everyone using AI.

Adel Nehme: Indeed. But maybe on the competition thing, there could also be a dark side here. There's always been the discussion that an AI arms race could lead to irresponsible deployment of AI systems. Do you think that could happen? How big is that risk today?

Richie Cotton: Yeah. So I kind of defer to some great research from the Center for AI Safety. They've got some great stuff on catastrophic risks of AI, and there are only a few ways in which AI is realistically going to go bad. One of those I like to think of as the Westworld scenario, where you've got competition, organizations cut corners around health and safety, and then bad things happen.

I mean, my spoiler for the show is that there are lots of dead bodies as a consequence of bad AI.

Adel Nehme: And many flashbacks, and weird twists and turns of the story. But that may not be the case if we have responsible use of AI. Yeah, indeed. It's going to be interesting to see; hopefully we'll be able to avoid the dynamics you've outlined here. And I think this puts us in a good spot to talk about our next topic, speaking of intelligent systems that are autonomous.

Let's talk about agents, OpenAI Operator, and AI control of computers. Throwback to Industry Roundup number one. Richie, do you want to?

Richie Cotton: Yeah, sure. So, in the first industry roundup a couple of months ago, we talked about Anthropic releasing an AI agent called Computer Use, which is designed to control your computer in order to perform simple tasks. OpenAI has now responded. They've released an agent called Operator. Rather than controlling your whole computer, it controls a browser inside a virtual machine, so it's a little bit more secure, a bit safer. But it follows a similar approach, where you describe a task of what you want to do, and it takes screenshots of what's going on on the screen to check its own progress and make sure it's filling out forms correctly.

And then the space is kind of heating up. Google's announced something called Project Mariner. It's sort of coming soon. It's also a browser-control type thing, and the idea is that you can fill in forms with it. There are also a few free versions available. So, if you go on AgentLocker, which is a good place to find out what agents are available for different tasks,

there's one called Open Operator and one called Smooth Operator. Very smooth name there. I like that. Throwback

Adel Nehme: You've been hit by a smooth operator.

Richie Cotton: Yeah, got some Michael Jackson vibes there. Anyway. Yeah, so, these look like great tools for people who can't use computers themselves. So this might be people with disabilities, or it might just be people who are away from their computer but need to control it.

Ideally, it's going to be voice-activated, filling in forms to, I don't know, probably to buy stuff again. That's the big hope here.

Adel Nehme: Maybe from your impressions of what you've seen, what do you think of the Operator agent so far?

Richie Cotton: It's definitely early days for these things. And I guess my cynical take is this: a decade ago we had Alexa and Siri, and the big tech companies poured billions into these things to get people to buy stuff more easily, and people didn't do that. All they did was, you know, set a timer and listen to music.

So it might be the same again. It might just be a case of, yeah, we've invented this really cool technology and yeah, you can set a timer with your voice.

Adel Nehme: So that is fair. I do agree, though, that it's exciting to see the potential of these systems, especially for folks with disabilities, if you integrate with voice mode, and even for folks with really low digital literacy, right? Think about how many government documents and processes are now digitized. Folks without digital literacy are really locked out of the system to a certain extent.

If you're able to streamline that... but it's still early days. Like, I don't see Operator being able to fill out your tax form

Richie Cotton: Oh, in the U.S. it is impossible to fill out tax forms. This is what I've learned over the last eight years of living in this country. No one understands taxes.

Adel Nehme: I can imagine. Same here in Belgium. And from the demo that I've seen, I'm also not super impressed right now. Like, I don't see a use case for Operator. Maybe if I'm doing repetitive tasks in Excel, right? I could maybe see that. But I just don't see myself spending 15 minutes watching a computer navigate Uber Eats, order food for me, and struggle, right?

I can just open the Uber Eats app and order food myself, right? It just feels more intuitive. But if it becomes part of the OS of my phone, where I can be like, okay Google, order Uber Eats for me, it will wake up and do it. Fair enough, right?

But right now, Google is going to say, yeah,

Richie Cotton: You're going to get a second lunch there.

Adel Nehme: No, it's going to say that it cannot order Uber Eats, right? So if Google is able to order Uber Eats for me, it will do it. That would be an interesting future, for example. But I think we're pretty far away from having this integration into the OSes we use in a safe, secure manner, working locally on your device, right?

So, yeah, I don't see myself paying $200 a month to watch a bot struggle on a website.

Richie Cotton: Yeah, actually, your Uber Eats example points to another problem here. You're going to have to give it access to your credit card

Adel Nehme: Mm. No.

Richie Cotton: and trust it to order the right food for you. And food's a fairly low-stakes scenario. If you want it to buy something bigger, it's got to be right a hundred percent of the time to warrant that.

Adel Nehme: I mean, I can see a version that is not fully autonomous, right? And that's probably also best-practice design here: if you're doing any form of transaction, you need to present the user with options, right?

So let's say, in this hypothetical future where I ask my phone to order Uber Eats for me, I assume it's going to come back and say, hey, what do you feel like today? You feel like Lebanese? You feel like, I don't know, burgers, et cetera. I'll give it my preferences, and then it will make a selection for me, give me a few options.

Then it will present the options and ask, what do you think, good to go? And I'm like, yeah. And then: do I have authorization to pay with this credit card? Yes. Right, so I can see this working well. And going back to that DeepSeek example, if you have really cheap reasoning models that can run locally on your machine, then that's also pretty secure,

because it's just running locally on your phone. But looking at Operator, it's hard to see the use case now. I do applaud OpenAI for taking the steps to make this happen, right? This is still a research product; they mentioned this, they discussed this. It's not like they're marketing it to mass consumers.

So it's going to be interesting to see how the quality of their model improves over time. I forgot the name of the model; I think it's called an action model. If that reaches some critical mass of quality and accuracy, it's going to be interesting.

It could be transformative, especially if it's low latency.

Richie Cotton: Absolutely. I mean, it's important to remember that GPT-2 generated terrible text, but the series of models eventually got very good. So yeah, early days for computer-use models.

Adel Nehme: Yeah, so we still have our jobs. As we said, we're not going to run out of a job anytime soon. And this maybe, going back to the DeepSeek example as well (we always go back to DeepSeek) and the arms race that we've discussed, introduces our third story, which is the Stargate project, right?

Because in a lot of ways, the AI space is heating up to a degree that it's taking on a national security complexion. For many of the competing players here, whether the European continent, the United States, or China, it's taking on a complexion of national strategy, right?

So we have the Stargate project, a $500 billion investment program announced by OpenAI, SoftBank, Oracle, and MGX. President Donald Trump had all of the CEOs of these companies next to him as they did this massive announcement. Apparently Arm, NVIDIA, Microsoft, Oracle, and OpenAI are going to be the key technology partners.

The only detail so far is that it's for AI infrastructure. So I assume lots of compute,

Richie Cotton: Yeah. New chips, new data centers. And I guess there's more to data centers than just the chips.

Adel Nehme: Probably nuclear reactors as well, given how they've been building these out.

Richie Cotton: Absolutely. Yeah, I think most of the big tech companies have now signed deals for nuclear power for their own data centers.

Adel Nehme: Indeed, indeed. And this is also maybe step one in Sam Altman's quest for AI compute to build the world's smartest Grubhub ordering machine. New chips and data center components are definitely going to happen, but we don't know what else. This, of course, created many reactions. Elon Musk is not necessarily the biggest fan of OpenAI, given their recent fallout.

People started buying the wrong MGX stock. And it's interesting to see what's going to happen here. So I'll be blunt from the start: we don't have a lot of details, right? So I'm not going to speculate over what's going to happen with Stargate. Will it actually end up coming to fruition?

Yes or no. But given the trajectory we're on with AI, you know, the commoditization of reasoning models, the increasing national security and national strategy complexion of AI, I'm going to ask you a very simple question, Richie: where do you think we'll be in the next four years?

Richie Cotton: Okay. Yeah. So, I mean, those numbers are really incredible, right? You think $500 billion sounds like a lot of money, but it's less than 10 percent of the way to the sort of $7 trillion goal to create AGI. So I think for the foreseeable future, we're going to have a lot of money shoehorned into AI. That's going to keep the train rolling and keep those improvements coming.

The trickier thing is, we talk about commoditization of reasoning models, but the number of players who can create cutting-edge models is dropping. As we've seen, every generation of models requires this sort of increase in computational power for training, and increases in datasets.

And that just means it's more expensive to create the next generation each time. We've seen some players, like Cohere and Mistral, drop out from the tops of the leaderboards. So there are fewer players now who can afford to stay at the cutting edge.

Adel Nehme: Yeah, I mean, DeepSeek is kind of the counter to that, right? That they were able to train a cutting-edge model with $5.5 million. But that's the question: is that legit? Many people say that it's not. I'm not going to speculate. But generally, I do agree with the trajectory of your statement, right?

That to build more intelligent models, you need more resources.

Richie Cotton: So with DeepSeek, the numbers I've heard are that $6 million of compute is about equivalent to the medium-sized Llama 3.1. So it's this year's cutting-edge model with last year's compute. DeepSeek is a generation or two ahead in terms of performance-to-compute ratio, but that's just a few generations over a period of, like, four years. It's still going to get more expensive to create cutting-edge models.

Adel Nehme: Just to also caveat: no one has $6 million to spare to experiment with training an LLM, right? So I just want to caveat that. But yeah, it's going to be interesting to see how the foundation model space heats up. Maybe based on just how much investment there is in the economy right now in the AI space, where do you see adoption in four years?

Richie Cotton: Yeah, I mean, adoption can only go up, really. I don't think there are any companies that are like, you know, we use too much AI, let's start cutting back. That's not going to happen. So I think it's particularly the spaces where things just take time to adopt. Think about government, where there are often processes in place that just make it harder. Same with highly regulated industries.

Like healthcare and finance, where you've got sensitive data. You've just got to do things carefully and right. So these things take time, and we're only just seeing adoption start right now. So yeah, it's going to be everywhere. It's just a slow process.

Adel Nehme: I'll maybe ask you one last question here on the prediction side. It's going to be a very easy one. In four years, do we have AGI, yes or no?

Richie Cotton: Okay. So I've seen like nine different definitions of AGI. Some of them we've got already, some of them aren't happening for decades, some of them are impossible to measure. So it just depends on which definition you use. But I feel like most tasks can use AI instead of humans; it's just not yet at the right price.

Adel Nehme: I think the AGI conversation for me, or the debate on is-this-AGI-or-not, is actually not super useful. The barometer that I use is: is this a GDP-disrupting technology, yes or no? Whether it's a fully autonomous thing versus enabling and augmenting, how is it integrated into the economy?

Like, you can maybe build AGI in a box, but can you fit it into a modern organization, and does it have Excel integrations? These are, I think, the important considerations when it comes to AGI. I'm being a bit facetious about the Excel integration, right? But, like, how do you actually use it?

Can this disrupt the economy? And I don't see the conversation framed this way a lot. I see it framed as, oh, it was able to code this Python snake game really well, this must be AGI, right? I think it's more about what's really the impact of this technology, how it will be integrated in the economy, and whether it will disrupt labor. And I think these questions are much more loosely defined from what I've seen.

Richie Cotton: Absolutely. So when O3 was announced back in December, there was talk about, okay, we've broken records on the ARC-AGI benchmark, but then it was something like $2,000 per task solved. And that's only going to work in a very limited number of business use cases, where you're spending $2,000 on compute just to solve a simple task.

Yeah, it's all about solving things at the right price in order to get it into business.

Adel Nehme: Exactly. And maybe as we wrap up today, I want to mention one honorable mention. Richie, have you seen the Replit mobile app by any chance?

Richie Cotton: Replit, for me, feels like it's very cool, but I never quite figured out how to make use of it. Go on, talk me through what's happening with their mobile app.

Adel Nehme: So, I agree, in the sense that Replit has actually gotten better to use over the past few years, but it always had a higher barrier to entry than your standard cloud IDE for me. They seem to have redesigned their mobile app to be all about building apps for you using natural language.

So it's an app that builds apps. And I found it to be interesting. I'm not going to pretend it's this insane, groundbreaking, no-more-need-for-apps thing; I'm not going to do the clickbaity "this app is insane" type of content, right? But I think if you assume the trajectory of improvement keeps going the way it's going, it could be quite disruptive for the app ecosystem in the next few years. For example, I created, in one shot, a Streamlit app on my phone that lets me track my weight on a daily basis. There are a lot of apps that do that, right? A lot of apps that earn ad money from downloads, or even premium apps with a subscription, that let you do that.
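As a rough illustration of how little logic a single-purpose tracker like this actually needs, here is a minimal, stdlib-only sketch of the weight-log idea Adel describes. The function names and CSV layout are hypothetical assumptions, not the code Replit generated.

```python
import csv
from datetime import date


def log_weight(path: str, day: date, kg: float) -> None:
    """Append one (date, weight) row to a CSV log file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([day.isoformat(), kg])


def day_over_day(weights: list[float]) -> list[float]:
    """Change in weight between consecutive entries."""
    return [later - earlier for earlier, later in zip(weights, weights[1:])]
```

A generated app would wrap these two functions in a chart and a text box; the day-over-day deltas are the only "analytics" the use case really requires.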

I just need a graph that shows how my weight is evolving day over day. That's the only thing I need. And I was able to build it via Replit. So it's actually quite interesting, because we've always had this debate, or this discussion, over what the user interface of AI will be. And it could just be an OS that makes stuff for you. I could see that with Replit: you only have an OS with a chatbot, and you're like, order me Uber Eats, and it just connects to the Uber Eats API and orders you something. You don't need the app at all; it's kind of like a chatbot interface to the world. Whether it's making apps, booking flights, or being an agent, I could see that in the future. Basically, this is what the Replit app was able to show me. I'll frame it that way. Yeah,

Richie Cotton: Okay. So first I feel like we should not record just before mealtimes. We keep coming back to ordering food examples.

Adel Nehme: I know, yeah, I'm getting hungry.

Richie Cotton: But yeah, I do like the idea that you probably want a chat interface to most software. Traditionally, at least for technical software, you've had two interfaces for a long time: a coding interface and a graphical user interface with point and click.

And now it's like all software should have three interfaces because you want that natural language interface as well.

Adel Nehme: Indeed. And on that note, we end today's Industry Roundup number three. Richie, do you have any parting words before we wrap up?

Richie Cotton: No parting words, but yeah, I'm excited for what's coming this year.

Adel Nehme: I'm excited for what's coming this month at this point, because there are just so many releases happening on a weekly basis. It's really hard to keep up, but I hope we were able to give you some information in these 30 minutes of recording. With that, Richie, I'll let you go enjoy your lunch, and I'll go

Richie Cotton: you go and order some dinner using a computer use agent.

Adel Nehme: I'm going to have a healthy meal, track my weight on the Replit app, and take it from there. Cool. Thank you so much, Richie. Always a pleasure. Cheers.

Topics
Related

podcast

Industry Roundup #1: OpenAI vs Anthropic, Claude Computer Use, NotebookLM

Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon the brewing rivalry between OpenAI and Anthropic, discuss Claude's new computer use feature, Google's NotebookLM and how its implications for the UX/UI of AI products, and a lot more.

Adel Nehme

30 min

podcast

Industry Roundup #2: AI Agents for Data Work, The Return of the Full-Stack Data Scientist and Old languages Make a Comeback

Adel & Richie sit down to discuss the latest and greatest in data & AI. In this episode, we touch upon AI agents for data work, will the full-stack data scientist make a return, old languages making a comeback, Python's increase in performance, what they're both thankful for, and much more.

Adel Nehme

27 min

podcast

The 2nd Wave of Generative AI with Sailesh Ramakrishnan & Madhu Iyer, Managing Partners at Rocketship.vc

Richie, Madhu and Sailesh explore the generative AI revolution, the impact of genAI across industries, investment philosophy and data-driven decision-making, the challenges and opportunities when investing in AI, future trends and predictions, and much more.

Richie Cotton

51 min

podcast

Did Gen AI Kill NLP? With Meri Nova, Technical Founder at Break into Data

Richie and Meri explore the evolution of NLP, the impact of GenAI on business applications, the balance between traditional NLP techniques and modern LLMs, the exciting potential of AI in automating tasks and decision-making, and much more.

Richie Cotton

39 min

podcast

Data Trends & Predictions 2025 with DataCamp's CEO & COO, Jonathan Cornelissen & Martijn Theuwissen

Richie, Jonathan, and Martijn explore incumbent LLM providers and their disruptors, AI reasoning, the rise of short-form video AI, the challenges Europe faces in keeping pace with the US and China in AI innovation and much more.

Richie Cotton

44 min

tutorial

Fine-Tuning DeepSeek R1 (Reasoning Model)

Fine-tuning the world's first open-source reasoning model on the medical chain of thought dataset to build better AI doctors for the future.

Abid Ali Awan

12 min
