Developing AI Products That Impact Your Business with Venky Veeraraghavan, Chief Product Officer at DataRobot
Venky Veeraraghavan is the Chief Product Officer at DataRobot. As CPO, Venky drives the definition and delivery of the DataRobot Enterprise AI Suite. Venky has twenty-five years of experience focusing on big data and AI as a product leader and technical consultant at top technology companies (Microsoft) and early-stage startups (Trilogy).

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
I'm not just saying, hey, look, there's this cool tech and what problem can it solve? But really say, what problems do I have that I think are worth upgrading? And then obviously, you can take a look at the available technology. But you make the technology work for the problem.
The biggest concern people have with AI is the original process generally is super clear... With AI, it's more stochastic.
Key Takeaways
Organizations must focus on aligning AI initiatives with concrete business problems, ensuring that AI solutions are integrated into existing workflows and deliver measurable results.
Start AI projects by experimenting with known business processes, allowing for intentional experimentation and measurable outcomes before scaling across the organization.
Balance the decision between building in-house AI capabilities and buying external solutions by focusing on what aligns with the company's core business and competitive advantage.
Transcript
Richie Cotton: Hi Venky, welcome to the show.
Venky Veeraraghavan: Hi, Richie. Good to be here.
Richie Cotton: Brilliant. So, we're going to be talking about AI readiness. First of all, what does it mean for an organization to be AI ready?
Venky Veeraraghavan: It's being prepared on all fronts. There's the technical part, there's the business part, and there's the cultural part. Being ready on all of them is what it means to be AI ready. I think people generally start by thinking about, do I have the right tech?
Should I deploy software, buy a product? But it's much more than that. It's really retooling your company to use AI and depend on AI to get business results.
Richie Cotton: I think a lot of people think, okay, I just want to buy something and then all my problems will be solved. But if only life were that simple. Cool. So maybe talk me through what these problems are. Why aren't organizations AI ready? What's this gap they've got to cross?
Venky Veeraraghavan: I think, across the board, AI is having yet another hype cycle. And people are really focused on understanding the tooling, seeing how it works, and experimenting and learning about it. There's been so much change in the last two years in terms of new technology.
I think people are really curious and they're trying to put it in, but they're not finding a lot of business results, because there's more to it than just technology.
And so I have a view that you really should focus on something you already know how to do, known workflows, and then intentionally experiment with putting the new technology in and measuring how it works. That's, to me, one of the big issues here: to fully capitalize on AI, you really have to figure out where it gets its input from, where the data is, and then where you're going to use it.
Where is the AI going to be used, and is the problem it's solving worth it? Because it's possibly going to be expensive, and it's certainly going to be risky because it's new technology. You really want to find a problem that is interesting to the company, that you can measure before you put the AI in and measure after, and then you gain a sense of confidence.
I think that's what people are lacking, a sense of confidence, because there's no pat answer they can go apply. They have to do it all by themselves, and it's a lot of work.
Richie Cotton: Okay. Yeah. So that sounds tricky: you're using a completely new technology, you're not really sure how it works, and it involves a lot of different aspects. You've got to change processes, you've got to get your data right, you've got to figure out how the AI fits in. Many things to change at once.
Well, I like this idea of starting with something you know. So suppose your CEO says, okay, we have to become AI ready, we've got to go and do this. Where do you start? What's your first step?
Venky Veeraraghavan: You know, I think most people quickly go into a technology conversation, which is, should I use ChatGPT? Should I use Anthropic? Should I use whatever. And really the question is thinking through a little bit about where the places are where you can use it, where it might be helpful.
There's a lot of knowledge out there, and research on which business processes most tend to get improved, whether it's top line or bottom line. I like to use the example of marketing emails. You might already be sending emails to your customers saying, hey, look, upgrade to the next service, and you have this propensity model. You might say, okay, look, I know what I'm doing today: I'm sending out a number of emails, I use four different templates, I know what the click-through rate is, I know what the conversion rate is. I know a lot of things about my process.
Now you say, great, this Gen AI thing is going to let me hyper-personalize email, right? It's not just three or four templates; I can do an email per person, because I know them so well and because the technology can do that. So then you start asking, hey, look, who are the people you're sending the mail to?
You might use a model to say, what is the churn propensity or the upgrade propensity for this customer? How do I score it? And then you can say, look, everyone where I feel very confident they're going to upgrade, say the model is 90 percent accurate there, great, I want to pick those customers.
And then I'm going to use the prediction features, right, the reasons the model is predicting this person would upgrade. Use those as prompts in your generative model to create a really, really hyper-personalized email: you should upgrade to a better telephone plan because you do a lot more international calls, right? You do a lot more roaming.
You can start using these inputs to build your emails, and then you test them. You send them out and see whether they have a better click-through rate, a better upgrade rate. So now what you're doing is putting the technology in the context of a business process,
and measuring it, and then gaining confidence that you can make it work. You might say, oh, look, it works at 90 percent. Can I move it down to 85 percent? At what point do I need a human in the loop to double-check what emails are being sent? And so you do this, what I call intentional experimentation, on a known business problem.
And that gets you both the skill and the ability to start stitching everything together to actually have a business solution, as opposed to just a cool technology demo that you show a CTO and say, oh, I did this cool thing.
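To make that workflow concrete, here is a minimal Python sketch of the pipeline described above. Every name in it is a hypothetical placeholder: score_upgrade_propensity stands in for the propensity model, explain_prediction for its prediction-explanation step, and llm_generate for whatever generative model you use.

# Sketch of the hyper-personalization pipeline described above.
# The three callables are hypothetical stand-ins for a propensity
# model, its explanation method, and an LLM client.

CONFIDENCE_CUTOFF = 0.90  # only target customers the model is confident about

def build_campaign(customers, score_upgrade_propensity,
                   explain_prediction, llm_generate):
    emails = []
    for customer in customers:
        if score_upgrade_propensity(customer) < CONFIDENCE_CUTOFF:
            continue  # below the cutoff, fall back to the old templates
        # Top prediction drivers (e.g. "high international call volume")
        # become context for the generative step.
        drivers = explain_prediction(customer)
        prompt = ("Write a short upgrade email for this customer. "
                  "Reasons they are likely to upgrade: " + ", ".join(drivers))
        emails.append({"customer_id": customer["id"],
                       "body": llm_generate(prompt)})
    return emails

# Send the batch, then compare click-through and upgrade rates against the
# template baseline before deciding whether to lower CONFIDENCE_CUTOFF.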
Richie Cotton: I really like that idea of starting with a really concrete business problem. It's like, okay, how do I increase the click-through rate for emails? And then you're working with stuff you can measure: it's got a real metric, and you can do experiments to see whether it works or not.
So you're going to have some concrete knowledge of whether or not it works at the end. Now, with this example, something like changing your emails is going to affect a single team, your email team, maybe a couple of teams. But is that where you should go when you're deciding what to do?
Should you be choosing your projects at the team level, rather than it being some sort of top-down approach driven by your C-suite?
Venky Veeraraghavan: You know, I think the C-suite can definitely help by giving guidance on what they think are the most interesting problems. If you want to solve for, say, emails, and those upgrades constitute like 0.1 percent, then that may not be a good one.
It might be an interesting learning experience. But if you're saying, look, this is my main driver of getting more LTV, then you're suddenly starting to say, this is what you want to work on. So I think the C-suite can clearly say what the priorities of the company are.
And the technical teams take that as input to drive what they want to experiment with. I don't think the top really knows exactly which are the right priorities, right? Because the technology works differently in different conditions. So you really want to be able to say, look, try in this area, these are the things that are most important.
And then you run some experiments and find out. I think it's good to have bottom-up ideas that actually check what the return of the technology is.
Richie Cotton: All right. So the C-suite is all about inspiring people to get going, and then lower down the organization is where you pick exactly what's going to be relevant for your own team.
Venky Veeraraghavan: Yeah, broad guidance, directionally: we think we should solve these problems. Then the technical team does have to say, hey, look, integrating for the last mile in this area is super expensive, versus this other one where it's really easy.
So you also want to look at ability to execute as one of the criteria, along with how much risk you have, and get that out there.
Richie Cotton: So yeah, ability to execute is incredibly important. I guess for that, you're going to have to have some AI skills in house. I'm curious what you need: in particular, what roles do you need to hire for, and what skills do the people in those roles need?
Venky Veeraraghavan: This is an industry, and a function, in rapid change. I've been in this industry for about ten years, and it used to be about data scientists, and then it became about deep learning, but now there's a whole new set of skills involved. Broadly speaking, I'd say there are three types of skills you want.
One is you still want data science. It is AI: being able to do good evaluations, something beyond anecdotal validation. You still need data science skills to put metrics in place and be able to test and experiment, even though you may not be building the large model.
You might be fine-tuning it, you might be comparing different model types, and everything else. So data science is still an important functional expertise that you want. I'd say increasingly you also want software developers, because I think we're now realizing that AI
doesn't hang out by itself; it gets integrated into a workflow. So you might have your IT devs really being able to say, how do I integrate into Marketo, in my example, right? What is the end-to-end flow? Where do I get the data from? How do I push the results back into a business-critical workflow?
And especially now with Gen AI, the business users matter a lot. Again, if I use the email example, you might want a product marketing manager to say, what is the reading level I want for this email, right? How many minutes should it take to read?
What is the complexity? What is the voice of the email, so it's aligned with the corporate brand? Suddenly you're having all these things that are not developer or data science things; they're really business-specific things. And so you really want this cohort of
these three broad functions working together to solve the problem and collaborating, otherwise you might have a true-but-not-useful solution, so to speak.
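One way to act on the evaluation point above, something beyond anecdotal validation, is to score every candidate model against a single fixed, labelled test set. A minimal Python sketch, with a toy test set and a trivial baseline standing in for real models:

# Minimal evaluation harness: score any candidate model against a fixed,
# labelled test set, so comparisons are metric-based rather than anecdotal.
# A candidate is any callable from input text to a predicted label.

def evaluate(candidate, test_set):
    correct = sum(1 for text, label in test_set if candidate(text) == label)
    return correct / len(test_set)

if __name__ == "__main__":
    test_set = [("cancel my plan", "churn_risk"), ("add roaming", "upgrade")]
    baseline = lambda text: "churn_risk"  # trivial always-churn baseline
    print("baseline accuracy:", evaluate(baseline, test_set))
    # Swap in fine-tuned or vendor models here and compare on the same set.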
Richie Cotton: So it seems like the most important things are: experimentation, so a lot of A/B testing and variations on a theme, from the data science side; connecting all your different bits of software together, from the software engineering side; and then,
is this actually going to have an impact, does it make sense from a business perspective, from, I guess, your commercial teams. So beyond this, how do you structure all those different people? Do you have one team with all these different skill sets brought together?
Do you want different teams communicating with each other? How do you organize it so you're getting the most out of your AI?
Venky Veeraraghavan: I don't think one answer fits all; it'll be a variation depending on how the company is organized already and where they want to go. But what you want to do is get the teams together on a platform, on a thing in which they can work together.
Often there are huge amounts of issues just around them being in different departments: one might be in IT, one might be in the AI center of excellence, and the other one is in the business. How do you pull them together in some common place, rather than sending stuff around by
email or a file system? How do you get them in a single place? Having them in a collaborative environment is super important, and being able to iterate quickly over that is important. Once you do that, you can start thinking about, hey, look, here's what works for our company.
Then how do we harden it, so to speak? You might create a separate team, a tiger team, or you might say, look, it's a real team. You might say the data science should be really close to the business, or the app dev should be close to the business. Or you might say, actually, we have a more platform approach as a company, so I want a central team that has projects, and we bring the subject matter experts into the team, do the project, and then move on to the next one.
I don't know that there's a fixed answer; every company has its own. But the idea is that you're collaborating in real time, so to speak, on the problem in a common space, and being able to do that is super important.
Richie Cotton: Okay. So, I guess the actual team structure matters a little bit less than just having all the right people who need to talk to each other, able to communicate with each other in an efficient way.
Venky Veeraraghavan: Yeah, I mean, team structure eventually does matter, but I don't know that it's the first thing. And I wouldn't be so bold as to say, here's a structure that works for every customer, so I'm going to be a little more modest in my guidance there. The thing I'd say is most important is that you start by identifying the right people, getting them together, and giving them time and space to work on the problem, because you don't know what you're going to find.
And so being able to iterate and work on it multiple times is important.
Richie Cotton: In general, do you need all these AI skills in house, or is it something you can buy in? Do you need to train existing staff up? Can you hire new people? Can you have them as contractors, or whatever? What works best here?
Venky Veeraraghavan: Since it's going to be part of your true business process, my pitch is you have to have some of those skills in house, unless you're outsourcing the entire thing, but that's a totally different situation. If you're running these processes in house, I think you need to have some of those skills in house.
The subject matter experts, the business users, are probably going to be your employees already. For app devs, if you want augmentation, if you want to get someone to write a little bit of code for the project, you can certainly use contractors.
But the heart of it, understanding what the business process is and understanding how the AI helps, those things are worth having in house. You might start with augmentation to get help getting going, and then you build from there.
But if you're going to make AI a critical part of your business, some of it should be in house.
Richie Cotton: All right. So, yeah, it seems like if it's going to be part of your core business, you want those skills to not be outsourced to a contractor.
Venky Veeraraghavan: There are companies who do that, but that's a different model. Here I'm assuming they run the business themselves.
Richie Cotton: All right. And I'm curious whether there's a difference between being a technical person and a non-technical person in terms of what AI skills you need, because it seems like everyone needs some level of AI understanding at the moment. Suppose you're one of the commercial employees:
what do you need to know about AI?
Venky Veeraraghavan: Familiarity with the technology is important. One of the nice things about Gen AI is there's so much technology available for non-technical users, whether it's ChatGPT or Perplexity; there are so many copilots. There are so many different technologies available to you that being able to use and run them
gives you a sense of what works and what doesn't work, and also a sense of what it's reasonable to do. The commercial side of the house, the field and business side, they're the vaults of information about the business, about what's happening.
If you take that, plus a sense of what the technology can do, then you can become a really good thought partner for your technical team: oh, I was thinking about this, I was thinking about that. I've seen really, really good ideas come up that way a couple of times, because as a technical person, all you know is how the technology works.
You don't know where it can help. This marketing mail idea actually came up with a customer. Because we were pitching Gen AI, we were talking about this, and they were like, hey, look, how do I know who to send it to? And, oh, how do I know what to write?
Suddenly those questions become the way you get a really full-fledged understanding of the problem. So I think it's super important for non-technical folks, the business users, to have a good sense of what the technology can do.
Richie Cotton: Yeah. It seems like you need these business people to be able to describe what their business problem is, and to do it in a way that makes sense for the technology you're going to use. So I guess it's going to be enough to have an intelligent conversation with the person who's building the...
Venky Veeraraghavan: That's right. And then you can riff on something that you have some broad understanding of. If you have no understanding of the technology, then you expect magic. And we went through that phase. About two years ago, we just thought these Gen AI things would solve all problems.
People were like, why can't you do that? And we were like, well, that's not how it works, right? So having a better understanding of what the constraints are, what the strengths are, can really help channel the collaboration very quickly to the useful part of the space, versus exploring the non-useful parts.
Richie Cotton: Yeah, it's amazing. Even just a couple of years ago, the crazy things people were trying to do, because it was new technology and no one knew what was possible or not. One of the fun examples is people trying to do data visualization using DALL-E and Stable Diffusion and all these kinds of tools.
And it will just make something
Venky Veeraraghavan: That's right.
Richie Cotton: completely wacky. It looks like a plot, but it's absolutely useless. So yeah, understanding what the technology can do and then making sure that your projects are aligned with that. This has maybe got a bit abstract, so let's talk about some concrete stories.
Do you have any examples of success stories for organizations where they've just really tried to improve their AI capabilities and then they've had some sort of benefit?
Venky Veeraraghavan: One I always start with is one of our customers called King's Hawaiian. They have been super focused on working with us on really concrete business problems, whether it's in the office of the chief financial officer or on the supply chain side.
They've been doing some pricing studies, understanding how to price their goods, and they were able to get 50 percent extra margin by using AI models.
They were able to figure out the market conditions, the demand, everything else, and really understand how to predict the price. This is more of an old-school forecasting problem, but they were able to use it directly because they were working extremely closely with the business: look, we want to optimize this thing, right?
We're able to forecast today, and we want to forecast better; how do I gain on that? So that was a concrete example of a customer who's gaining a lot of benefit, and a great reference for us. We're working with them on the next set of problems, around cash management and other things.
They've gotten a taste of solving problems with AI, and they're like, oh, this one works, what can we do next? To me, that's what I mean by intentional experimentation: you pick something, you understand how it works, and you gain confidence.
You're like, oh, now I know what to do, let's pick the next problem. Then you need more of a platform approach, as opposed to just an app approach, so you can take what you learned and reapply it in a new use case and very quickly gain from there.
Richie Cotton: I like the idea that just by working on your pricing, you can dramatically increase your margin and make your company a lot more profitable, and that's just in one area of the business. And you mentioned the next step is taking what you learned from the first project and trying to reproduce it.
So how do you go about that scaling idea? We can do machine learning in one area of the business; how do we get it everywhere?
Venky Veeraraghavan: That's right. This is where the technology part really helps, because the way I think of the first one is that you're trying to run water through the pipes. You're trying to get the right data, figure out how to get the security worked out, and work out how different teams can collaborate.
Then once you build a model, you figure out how to push the results back into your business system. The first one's the hardest one. That's why it's important to have a problem that is worth solving, one that when you solve it gives you lots of bottom-line or top-line benefits.
Pricing is a good example of that. Once you have it, you have a system to do the next one. You can say, okay, now I have related data, how do I do the next set of things? And you have a better understanding of the technology. You can say, if I can forecast,
let's say, pricing, can I forecast cash? So the customer is thinking through, hey, look, what happens if I want to forecast receivables: who's going to pay on time, how much do I have? And what they're finding is that they can predict it quite closely, and they're able to save money on the credit line. Suddenly you're getting these much, much more closely related problems, but now they're solving a different function, the CFO function. That is the transition you want, and you want a platform that actually supports those multiple use cases that are related but different.
Richie Cotton: That sounds very cool. But the examples you've given there, I don't want to say old school, it's not really that old, but it's all predictive analytics. This is machine learning rather than the fancy generative AI stuff. Should that be where organizations focus their attention first, if they're just wanting to start out with AI?
Venky Veeraraghavan: Maybe I've been in this thing for too long. When I started it was called classical ML, then deep learning became interesting, with NLP and CNNs, then came generative transformers. Now all of that earlier stuff is considered predictive AI, as distinct from Gen AI, but I think in a year or two it'll all just be AI. That's why I like this idea of not having the technology drive it. The starting point is to say, what is the right problem to solve? If you want to get better click-through rates, and I go back to this because it's such a simple example,
you're using Gen AI to create the email, and you're using predictive AI to predict the cohort and the drivers. So you have to let the problem tell you which technology to use, not the other way around. Frankly, this marketing idea came from generative AI. We said, you know, it can generate mail.
Can it do this? But in the end, it happened to have a big predictive AI component as well as a generative AI component. Maybe I'm too practical a guy, but that is how I would solve the problem. Not just saying, hey, look, there's this cool tech, what problem can it solve?
But really saying, what problems do I have that I think are worth upgrading? Then obviously you can take a look at the available technology, but you make the technology work for the problem.
Richie Cotton: Yeah. And I guess most business problems are tricky enough or nuanced enough that you're probably going to end up using a mix of different technologies in order to
Venky Veeraraghavan: And the problem really doesn't care whether it's generative AI or predictive AI, right? If you say, I want to understand how to make my call center better, I want to get a better NPS at the end of the call, you might have to use a combination of generative AI, predictive AI, and deep learning.
You want to use speech-to-text, you want to do sentiment analysis, you want anomaly detection; you've got to find these parts. Then you can say, great, now I'm going to help with a script that says, when you run into these problems, say this instead. So the idea is that you're really focused on how to use the different parts of the technology,
and I talked about three different types of AI there, but it's grounded in the business problem.
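A rough Python sketch of how those call-center pieces might compose. The functions transcribe, sentiment, is_anomalous, and suggest_script are hypothetical stand-ins for the speech-to-text, sentiment-analysis, anomaly-detection, and generative steps:

# Sketch of the call-center example: several kinds of AI composed into
# one workflow. All four callables are hypothetical placeholders.

def review_call(audio, transcribe, sentiment, is_anomalous, suggest_script):
    transcript = transcribe(audio)        # deep learning: speech-to-text
    mood = sentiment(transcript)          # predictive AI: sentiment score
    if is_anomalous(transcript, mood):    # predictive AI: flag unusual calls
        # generative AI: draft a "say this instead" coaching script
        return suggest_script(transcript, mood)
    return None  # nothing unusual, so no coaching needed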
Richie Cotton: All right, so we're agreed: you start with the business problem and pick the technology afterwards. We've talked a bit about how you choose what projects to work on and what skills you need. One thing we've not really talked about yet is processes.
It seems like as soon as you start automating things with AI, whatever form it takes, all your processes need to change. So can you talk me through what's likely to change, and how you go about doing that in a reasonable manner?
Venky Veeraraghavan: I think the biggest concern people have is that the original process generally is super clear, right? Humans do it or the computer does it, but it's clear what you're supposed to do. It's deterministic: you know that when you do this, that happens.
With AI, it's more stochastic. You don't quite know what it's going to do; there's always some prediction percentage, some accuracy percentage. So you have to build your whole process around what happens when it doesn't do the right thing, and you certainly want to monitor to understand, is it behaving the right way?
And that is a new thing. If the whole process was running with one set of people and one set of technology, and you're going to put AI in there, you've got to have a new runtime. It's not just that you build a model and go away. You have to have the infrastructure to monitor the model, monitor its behavior, get alerted when things are not going the way you expect, and be able to intervene.
And you've got to be able to fund it. Suddenly you have a different set of activities for maintaining this AI process, so building the funding and capacity to run the AI at scale is an important part. The second one is, how do you know when to bring a human into the loop?
You have a confidence problem in general about these things. Should you check every piece of email that gets sent out? That's probably not very useful. But should you check none? That's probably too little. So how do you know when to get people involved?
Those are all new things that you never did before, and that is the biggest process change. Designing a process for a stochastic outcome, as opposed to a deterministic one, can be quite different for companies.
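One simple reading of the human-in-the-loop question is threshold-based routing on model confidence. A minimal Python sketch; the thresholds are purely illustrative and would be tuned from measured outcomes:

# Route each AI output by confidence: auto-send what the model is sure
# about, queue the middle band for human review, and reject the rest.
# Threshold values here are illustrative assumptions, not recommendations.

AUTO_SEND = 0.90   # at or above this, send without review
REVIEW = 0.70      # between REVIEW and AUTO_SEND, a human checks it first

def route(draft_email, confidence):
    if confidence >= AUTO_SEND:
        return ("send", draft_email)
    if confidence >= REVIEW:
        return ("human_review", draft_email)
    return ("reject", None)  # too uncertain to act on at all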
Richie Cotton: I suppose as a data scientist you're used to dealing with probabilistic things, but if you're a software engineer, not so much. And for most of the rest of the business, probably not so much either. So that's going to be entirely new.
Venky Veeraraghavan: That's right. And you can see it in a lot of the new generative AI. Look at ChatGPT or the copilots: in the UI, they tell you that they're going to be approximately right, but not 100 percent right. We didn't used to do that before.
We used to just say, ask a question, get the answer back. But now it says, hey, look, this is indicative; I am an AI, I may not get it right. These are all little things that set up the premise that it's not 100 percent right. That is one particular example.
But the idea is that you have to think about it as a real thing. You can't just assume the old-style deterministic outcomes. You really have to ask what happens when it's wrong, or when the values are not correct.
Richie Cotton: So I guess the main thing here is you have to have human check steps within your processes, just to make sure that the output from any AI is correct. Is that about right?
Venky Veeraraghavan: Yes. During the build phase, you certainly have to figure out how you're going to get the right input from the right stakeholders, to make sure it's behaving the way you want it to behave. Many people mostly get that right, because you run into those problems directly. But many people don't get the operational element right, which is that once you put it into production, it doesn't just run like a piece of software, a web server, that I installed and it starts working.
Software doesn't change behavior; AI does change behavior, because it's very dependent on the data and the distribution of the data. So you still have to make sure you have time and effort spent managing that part of the process as well.
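The data-distribution dependence Venky describes is what drift monitoring is meant to catch. One common check is the population stability index (PSI) between training-time and live values of a feature; a minimal sketch, assuming NumPy and the common rule of thumb that values above roughly 0.2 deserve attention:

import numpy as np

def psi(expected, observed, bins=10):
    # Population Stability Index between training-time values (expected)
    # and live values (observed) for one feature. A small epsilon avoids
    # division by zero and log(0) for empty bins.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed) + 1e-6
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

# Run on a schedule over incoming feature data: alert, and consider
# retraining, when psi() rises above your chosen threshold (~0.2 is a
# widely used rule of thumb).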
Richie Cotton: So you've got to monitor what the output looks like, and I guess continue to test how much of the time it's giving the right answer. Do you have any more details on how that might look? What does monitoring an AI product look like?
Venky Veeraraghavan: I'll use generative AI this time. When you're chatting with a chatbot, say you're chatting with OpenAI and you're trying to make it your own: you're grounding it on your data, your company's data, and you might have certain policies, like, I'm never going to talk about my competitor. In that case, you can do a prompt that says never mention your competitor, and that works, kind of, but how are you going to be sure? Because these things do hallucinate. So what you want is a guard model that looks at things on the way into the LLM. You might say, look, I want to test for any mention of any of my competitors.
It could be a heuristic, just code, or it could be another LLM that's checking for it. If you find it, you can intervene: tell the user, look, I don't want to talk about my competitors, and just return. So you're training your user. The second part is a guard on the response side, which says the user asked a good question, but the LLM returned something I don't want to show.
Then you can say, great, I want a guard here too. Again, I want to detect it in real time, and I want to mitigate that outcome by changing the output. So now you need a pretty sophisticated framework to host all of these guards, run them in real time, and be able to detect and mitigate problems in real time.
That now becomes a more interesting infrastructure problem. Obviously our product, for instance, supports that kind of framework, so you can just put the guards in and really focus on the business problem: what words do you not want to talk about?
What topics do you not want to talk about? Versus figuring out how to run eight different models reliably and hook them all up; that's done by a platform.
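A minimal sketch of the guard pattern described here: one check on the way into the LLM, one on the way out, with an intervention on either side. The blocklist, the guard heuristic, and call_llm are all hypothetical placeholders; in a real system each guard could itself be a model, as Venky notes:

# Guarded LLM call: check the prompt on the way in, check the completion
# on the way out, and intervene on either side. call_llm stands in for
# whatever model client you use; the blocklist names are invented.

COMPETITORS = {"acme", "globex"}  # hypothetical competitor blocklist

def mentions_competitor(text):
    return any(name in text.lower() for name in COMPETITORS)

def guarded_chat(prompt, call_llm):
    if mentions_competitor(prompt):
        # Intervene on the way in: train the user, skip the LLM entirely.
        return "I can't discuss competitors."
    completion = call_llm(prompt)
    if mentions_competitor(completion):
        # Mitigate on the way out: replace the unwanted response.
        return "Sorry, I can't share that response."
    return completion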
Richie Cotton: Okay. So unless you're an infrastructure person, you don't really want to have to care about infrastructure; you want to care about the high-level business problem. It's interesting that you mention multiple levels of checks in there: you've got to check the user input, is that okay?
Then you run it through the model, and you've got to check the model output as well, to make sure that works too.
Venky Veeraraghavan: That's right. And it gets more complicated with the new-style agents, where the LLMs are talking to each other and making plans. They're all talking to each other: how do you know what happened? How do you debug it? How do you make sure you have control over it?
That's the heart of the confidence problem: you don't know what it's doing, so it's hard to trust. Being able to see it, and then to say, I know it works and these are the failure modes, is very much like software development. You learn that part, and then you mitigate those problems.
Richie Cotton: So, I guess the other part of changing processes is, how do you persuade all your colleagues that they need to go along with the new processes? There's a change management aspect to this. Do you have any advice on that?
Venky Veeraraghavan: Change management is a huge part. The way you drive the change is you start by showing small results, and then you say, hey, look, I can scale it by making more changes. You set up this pay-for-play thing: the more we do, the more benefits we can get.
That gets people really aligned on the benefit and why the change is interesting. Too many times we end up with, we're implementing this software for some hypothetical gain, and they don't fully buy it. And a lot of people are skeptical of this sort of change to the status quo.
So you motivate it with the benefits and smaller-scale experiments. Going back to the marketing mail: I sent 50, it was great, and I increased the click-through rate by 300 percent. It'd be better if we could do more, but that means I need to actually integrate with your system; that's change for them, and more risk for the system. But now you're saying, I'm doing it because there's a possible upside, and maybe we can work out the next step. So to me, it's overall change management like anything else: most people have to figure out how to motivate the change, and it's super important to align it with the business benefit.
Richie Cotton: Okay, so this is going back to the idea of experimentation and just saying, well, if you can demonstrate that there is some benefit to some business metric, then that's at least going to encourage people to try the new process.
Venky Veeraraghavan: That's right. And I would say there is the more radical, revolution style, which is, look, I'm completely upending the business process, I have a brand new one. That's obviously going to have a much higher bar for change. The lower-bar, more evolutionary style is much easier to get done.
There are some cases where you might want to completely rethink the business process, but for many of these problems, to date at least, the technology hasn't said, throw the old thing away and start afresh. It's all about how to assist and aid
while we learn; then we can find more disruptive ways to solve the problem.
Richie Cotton: So I guess the tricky part is when you're using AI for automation, and people get asked to automate parts of their job. I think a lot of people get nervous: am I being replaced by machines here? Do you have a sense of a good way to deal with that aspect of using AI?
Venky Veeraraghavan: I don't think we've written the final chapter on that; we're all learning. Based on what we've seen so far, it probably allows a person to do a lot more. You get more capacity. What you do with the extra capacity is still TBD.
Some companies could argue, hey, I don't need as much capacity; or you could say, I can do more. It depends on a case-by-case basis, and we've seen examples of both. So I don't know that there's a single answer to what you do when people get a lot more productive with the new tools.
Richie Cotton: Okay. So once you're more productive, you've got more capacity, and then there are good ways of handling that, like reskilling people, and bad ways, like just getting rid of everyone.
Venky Veeraraghavan: As we get more productive with coding, for instance, one of the places where I think it's best understood: I have a giant backlog, so I can do more of my backlog. Depending on where you are, you might really love it, saying, oh, I love the 30 percent gain that GitHub says we get; I would like 30 percent more
deliverables, whether that's on the tech side or on the customer-facing feature side, I would like more. So to me, it's a great benefit. It's not a challenge to keeping people employed; it's really, I can get more done.
Richie Cotton: Yeah, I suppose it's going to depend a lot on whether you have more problems that you need to solve than you have time and money to solve at the moment, or whether half your employees are just kicking their heels, in which case...
Venky Veeraraghavan: That's right. And that part is very, very case specific, so it's hard to make general statements about it.
Richie Cotton: So, we talked about some success stories. I also like to hear stories of things that have gone wrong. So I guess, are there any mistakes you've seen happen when organizations are trying to improve their AI capabilities?
Venky Veeraraghavan: To me, the biggest thing is getting caught in the hype cycle: trying to get the latest and greatest and saying, okay, this is going to fix all my problems. And it generally tends not to fix all their problems. So the question is, how do you not
get FOMO or panic? We've seen examples of that: people say, oh my God, I'm going to throw this thing out there, and then everyone's like, this technology doesn't work. A lot of the early Gen AI things were riddled with errors and hallucinations because it was almost too fast.
That doesn't mean you shouldn't try, but just don't panic. So that's one thing. Having said that, I think the most common error condition is having a plan to start but not having a plan to finish. Because it's new, exciting technology and you want to learn a lot, you say, I want to build it all by myself. But then you're taking on these costs, which, as I mentioned, are not just build-time costs but also operational costs. Suddenly you're asking, am I building a serving platform for LLMs and guards, hooking them all up, and building software infrastructure for scaling up and scaling down?
Most companies didn't sign up for that. So you really want to think that through, and I'd say a lot of products that have had challenges, and sometimes failed, come down to not having thought through, hey, look, what should you build yourself and what should you buy?
That mix, I think, is changing, and people have to figure out how to make sure they're in the right spot on that spectrum. The company is authoritative on its business process and its data, but it's not necessarily authoritative on how to run LLMs at scale or how to orchestrate a bunch of guards.
So, knowing when to buy a platform versus putting everything up from scratch with open source: that's a place where a lot of companies end up picking the build side, because it looks very, I would say, appetizing.
Appetizing is not the right word, but it's very easy to say, I'm going to get started, I have all these open-source tools. But then owning, running, and maintaining them over time becomes a much bigger cost that they had not planned for.
Richie Cotton: Yeah. So the idea of knowing how to start something but not knowing how to finish, that's very common with Agile methodology. It's like, well, I can think two weeks into the future, maybe six weeks at a push, but getting to the end of the project is a problem for another sprint,
sometimes. So that's interesting.
Venky Veeraraghavan: And running an operation forever, right? If you're going to do this stuff, the marketing team is going to count on your emails; you've got to have a system for doing that. Have you done all that work? They're like, oh, I hadn't thought about those five things. I thought I was just going to do this project and move on to the next project.
We're like, no, now you have a mission-critical system that's running, and someone's got to fund it. Oh, we hadn't thought about the funding. That's where a lot of these end up wrapped around the axle: they haven't thought through what happens if it is successful.
Richie Cotton: That's kind of a bad sign, when being successful is also a problem.
Venky Veeraraghavan: Generally, people do solve the problems that come with success. But if you have a middling project that doesn't quite break out, then you're like, oh, am I in? Am I out? That's why people say so many projects today don't go to production: there's all this learning, and you haven't thought through all the downstream implications.
Richie Cotton: So, you mentioned build versus buy, which is kind of the crux of the issue here. How do you decide whether to do one or the other?
Venky Veeraraghavan: I think it depends. We've seen customers of all kinds. Highly mature customers who think the technology itself is a competitive advantage build it from scratch; they have funded their teams, their IT and engineering orgs are very, very deep, and so they can buy less because they have the capacity. They're paying for it because they think it's a highly valuable technical competitive advantage.
But there are other businesses who would say, look, owning and running technology is not my differentiation. What I'm really good at is the core of the business, whether that's selling fruit or whatever else it is, pick your problem.
What I'm really good at is my understanding of the industry, or my understanding of the problem. If that's you, you're technically less mature in the sense that you don't want to run all this infrastructure; you want to focus on your business. That gives you a sense of, maybe the infrastructure work I buy, because someone else will warrant and run that system while I focus on what is unique to my business. My view is that, with the exception of a few point apps, a lot of AI will end up as customized solutions for the company, and customization is about what's unique to the company, not general-purpose building from the ground up.
So you want to buy the bottom, so to speak, the infrastructure and platform-type stuff, and focus just on what's unique to your company. That's where your differentiation really shows up.
Richie Cotton: Oh, okay. So this sounds a little bit like what we talked about with skills earlier. If it's close to your core business, you're going to want to do that quite in house, and then you can buy everything else that's a bit further away from what
Venky Veeraraghavan: That's right.
Richie Cotton: what you're doing.
Super. All right, it sounds like we kind of have a plan now. So, what are you most excited about in the world of AI?
Venky Veeraraghavan: Two things. One is, I'm excited that we're coming off the peak of the hype cycle. The conversations are much more grounded, which actually lets us make rational decisions, because in the hype cycle people just have to guess and hope they're right; there's no solving for hype, right?
Now we're much more at the point of: it works for this, it doesn't work for this, I want business value. That forces all the vendors and all the technologists to really focus on how to solve the problem, which becomes much more interesting. It's a game where we understand the rules, because in a pure hype cycle, no one understands the rules.
So both the implementer and the vendor, all of us, know what to solve for. That's what I'm most excited about, because I think we're going to see a golden age. There's so much technology, but now we're saying, hey, look, it's not a magic panacea; it's actually something we can apply and test, and results are expected.
Suddenly, I think we'll have this amazing blossoming of actual use cases that work. That, to me, is the number one thing. The number two thing is, as a technologist, I just love how quickly things are moving. It's hard for me and my team to keep up; every day there's a new competitor.
Every day there's a new startup, and being able to keep up is hard. So it's an exciting time. But in the end, it has to be sustainable, it has to generate value, and that renewed focus is super exciting to me.
Richie Cotton: All right. Less hype, more value, and lots of stuff going on. Yeah, two very exciting things. Wonderful. Thank you so much for your time, Venky.
Venky Veeraraghavan: Of course. Thank you very much. Thank you for the opportunity.