
AI Agents at Work: What Actually Breaks (and How to Fix It) with Danielle Crop, EVP Digital Strategy & Alliances at WNS

Richie and Danielle explore AI agents at work, experimentation with guardrails, data privacy, access, OpenClaw automation wins and failures, token costs, tying AI plans to P&L strategy, how data teams handle unstructured data governance, and much more.
March 23, 2026

Guest
Danielle Crop

Danielle leads go-to-market strategy at WNS, Capgemini's AI transformation services arm. Previously, Danielle was Chief Data Officer at American Express and Albertsons. She also writes The Remix Substack on technology trends, and is an Editorial Board Member for CDO Magazine.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.


Key Quotes

Are agents actually providing value? Or are they not? Are these agents in some sort of secured environment? So that they're not creating security issues for your organization. But the other aspect of it all is just have fun with it. This is a moment in which people do need to just experiment within some guardrails and have some fun with these new tools and see what they can build with them.

Your business strategy should feed your AI strategy. Business doesn't change. Your business still has all the same core business capabilities that it needs to do to make money today as it did five years ago. In many cases, unless they're creating new products and new lines of business. Think about what those business capabilities that you need are, the core business capabilities, and how you currently drive top line or bottom line through those capabilities. And then determine which AI use cases you're going to go after based on the size of that opportunity.

Key Takeaways

1

Adopting agents requires a 3-part filter: do they provide value, are they secure, and can teams experiment “within some guardrails.”

2

Even “ring fenced” / RAG systems can hallucinate—so human critical thinking remains mandatory.

3

Use-case choice depends heavily on risk appetite and regulation (and how much internal data you’re willing to expose).

Links From The Show

WNS

Transcript

Richie Cotton: Hi, Danielle, welcome to the show. 

Danielle Crop: Hi, Richie. How are you? 

Richie Cotton: I'm feeling great actually, it's almost the weekend. Looking forward to this, although maybe not everyone listening is, in case it's coming out on the Monday. Yeah, let's kick off. If you are in an industry that is overrun by agents, what should you do?

Danielle Crop: Oh, overrun by agents. So I think that, first of all, you have to ask a few questions, right? One is, are those agents actually providing value, or are they not? Two is, are those agents in some sort of secured environment, right? So that they're not creating undue security issues for your organization.

But the other aspect of it is just have fun with it. Like, I think this is a moment in which people do need to just experiment within some guardrails and have some fun with these new tools and see what they can build with them. 'Cause I think we're just at that tipping point between infrastructure and software in this world, right? And we'll see what comes out of it. It's gonna be a lot of fun to see what people build. I think you had to be living under a rock if you didn't see what OpenClaw did. And it's just really fun to see what creative and interesting things people are doing with this technology.

Richie Cotton: I do love that idea of just have fun with it. So if you're interested in having some fun, like what's a good place to get started? 

Danielle Crop: I would say wherever you want. So the easiest place to get familiar with it, if your organization is relatively uncomfortable with it, is to just get your own consumer version of the tools and just play.

It's like, I know that there is some trepidation about, okay, it's gonna be some monthly cost. It's totally worth it to upskill yourself and learn the art of the possible about what these things can do. So whichever version you wanna use, Anthropic, OpenAI, Gemini, just get familiar with the tools, understand what's possible with them, figure out how to do maybe even more creative things in just your day-to-day life.

Like, one of the things that I built was just this agent to do competitive analysis. So I ring-fenced it around our competitors' earnings statements, earnings calls, transcripts, et cetera, and just did a compare and contrast between what our company is reporting and doing and saying and what they're doing and saying. So that as a strategy executive, I can think about, okay, where are the gaps? Where are the opportunities? Where are we? And it just is a lovely way to use those tools. On a side note, though, you still have to use your brain, because it will hallucinate.

In my case, this agent actually hallucinated and said one of our major competitors bought a company we bought. Even though it was ring-fenced and completely RAG-based on only primary source information, it still hallucinated in a very major way. So while it's a great tool, it doesn't take the place of knowing and being a critical thinker and going, yeah, that doesn't sound right, I gotta dig into this further.

Richie Cotton: That's very cool. I do think competitive analysis is one of those things that no one really quite has time to do properly, so that's a good thing to outsource. But it seems like you've gotta have two mindsets then. You've gotta have this sort of naive, childlike innocence to be like, okay, I'm gonna try new things.

But also you need to be very critical, a healthy cynicism about the output of the AI, because sometimes it does make things up. Do you have a sense of how you go about balancing these things and how you approach your mindset?

Danielle Crop: So I think that, for me personally, I've always been the type of person who likes new technology, right?

And I think the genie's out of the bottle, guys. You gotta embrace it. But I've always been that kind of, okay, what can I do with this? I was a very early adopter of online banking and the like, right? Whatever could make my life easier, faster, more convenient, I'm all for, and so I've always been that way my whole life. And so I just keep embracing that. And I would say that it's stood me in good stead throughout my career and throughout my life. I've been what I would call a pragmatic tech optimist, right?

It's not that I think that technology can solve all the problems of the world. It cannot. Technology is a tool. And when human beings get better, greater tools, we can do better, greater things, only if we choose to.

Richie Cotton: Yeah, I mean, there's certainly a case that you've gotta choose to solve the right problems, otherwise you can't go solve the problems with AI.

But maybe we'll focus on doing better things for today, or at least for now. So we were talking about agents, and there are lots of different types of agents. Some of them are, okay, you've got an existing workflow and you're gonna throw a bit of AI magic in there. And there are some companies that are claiming they've got AI that's gonna replace entire employees. Do you wanna talk me through how you decide what sort of AI you should be using?

Danielle Crop: As far as what AI you should be using, it very much depends on your industry, how heavily regulated you are, how much risk you're able to take.

So your risk appetite as an organization is number one. And then you can decide a little bit more about which direction you need to go. The more I see things, particularly with the OpenClaw scenario recently, the more I feel that the early opportunity is in just automating major parts of jobs. And I don't think it automates the full job, right? Because there's always that element of, okay, now I have to evaluate, did it actually do what I wanted it to do? Is it doing the right thing? LLMs aren't math in that sense; they're qualitative outputs.

And so a human being has to evaluate a qualitative output. And even with a mathematical output, you still have to check and make sure the computer got it right. So this is not new. We have to keep checking things. So I'd say that as far as what tools you use, if you wanna keep all your data basically in-house, I think that the frameworks will evolve to be able to allow that.

So basically you'll be able to have the tools entirely within your ecosystem and manage all your own data. 'Cause who wants OpenAI to have access to all of their Slack data? I don't think anybody does. But the reality is that there are some organizations that are currently feeding an agent all of their emails, and the smaller organizations are figuring this out much faster than the large organizations, right?

Sending all their emails, all their Slack, so basically they have this agent that can tell them everything that's going on in their organization and do a summary of it at any given point in time, which is really incredible. And then they can think about what that enables in terms of strategic decision making, and timely strategic decision making.

It's pretty incredible. So I think that's where it's, okay, you lean into the tools, but let's face it, we don't all fully trust our human colleagues, okay? So we can't fully trust our agent colleagues either, and that's the same mindset that we have to bring to it.

Richie Cotton: Absolutely. I love the idea of just checking absolutely everything, 'cause you can get the wrong answer whether it comes from AI or from a human. Now, you've mentioned OpenClaw a couple of times, so just for anyone who has not been following this story, talk me through: what is OpenClaw, why might you want it, and how can it go wrong?

Danielle Crop: Where I discovered OpenClaw mostly was on the All-In podcast, right? They talked about it, and they explained kind of what I just explained about what people are creating with it. And the market reacted very strongly, right?

It basically said, okay, SaaS is dead, services are dead, all because of what this one guy created with OpenClaw. So he used the existing capabilities and just created this amazing new framework to bring things together in an agent fashion.

You have to really trust it. You have to be willing to give it access to all of your systems, right? So do you wanna trust it that much as an organization? You have to decide. But it showed that there's a lot of possibility. So in this particular case, they created an AI producer for a podcast, actually.

So you might wanna look it up, Richie, if you haven't seen it already. But it basically goes out and prepares lists of who should be on the show next and why, and what they're putting on LinkedIn, and all those kinds of things. It sends emails to the person, right? It automatically does all this stuff to create the pipeline for the next podcast.

And so does it automate most of what a producer does? Yeah. In the end, are you actually gonna get more quality people on your show if you don't actually vet what it gives you back? Probably not, right? But it is gonna automate a lot of the mundane tasks that nobody wants to do. So there's nothing wrong with that.

Think about sales: in all those places, it's gonna make a huge difference in what people are capable of doing.

Richie Cotton: Yeah. Lots of interesting things to think about there. I guess the idea is you just feed everything you're doing into an agent, and it basically works like a personal assistant for you.

Danielle Crop: So you give it your login for LinkedIn, you give it your logins. You give it all of the identity, or you give it a separate identity. So that's a huge issue that's coming up: how do you do that? How do you control it? But you do that, and then you let it go.

Richie Cotton: There've been a few disaster stories with OpenClaw already. So there was an example of someone using an OpenClaw version of their GitHub persona, and it was being mean to other people who were making pull requests.

There was an example of it deleting someone's inbox, and this was a Meta security researcher as well, so it is especially ironic. So there are some dangers here. Do you have a sense of how you do this sort of thing safely?

Danielle Crop: I think you have to be very clear on what you ask it to do for you and how you ask it to do it.

So this goes back to, okay, you have to give it very specific commands, right? And very specific instructions, even about tone. You can instruct it and say, I only want you to use this type of tone with people, right? So it doesn't do those kinds of things.

That doesn't mean it won't still do that, but you can contain the risk by giving it very specific instructions on what you want it to do, so that it's as much like you as it could possibly be. But it's still never gonna be perfect. It's a probabilistic model. It's never gonna be perfect.

It's gonna do unexpected things. There's an error rate. So as long as you're comfortable with what that error rate is, or you can build small language models that make that error rate smaller in the context that you want.

Richie Cotton: So there's different ways of handling that problem.

Danielle Crop: I guess what I'm saying is: what model do you use? What tools do you use? What servers do you use? There's a whole stack of how you handle those problems.

Richie Cotton: Yeah, certainly think about all the infrastructure and technology; those are big decisions. But I like the idea of, if you have something that's interacting with other people, thinking about the tone of the output.

So actually, recently DataCamp's head of data science created a bot to review other people's dashboards to make sure they work okay, and it was like, wow, this bot is really mean. It's supposed to be talking like me. It's horrible. It was incredibly critical, and not in a nice way, about people's work.

So that required a bit of tuning. You do need to think about this stuff.

Danielle Crop: There's a lot of tuning that's already been done. It's actually quite amazing. That's another one: play with it. Ask it to do something, and then ask it to give it to you in different tones.

It's actually amazingly accurate. You give it some very simple instructions, like make this cheekier, and it actually understands what cheekier is and gives you something that's cheeky. It's amazing. So there's been a lot of great work that's been done by the AI researchers in tuning this, particularly in the area of tone.

Why they focused a lot of time and attention on that, I'm not exactly sure, but it's fun to play with. You can tell it to be really mean. You can tell it to be super, super nice. You can tell it to be verbose; you can tell it to be concise. It's very interesting.

Richie Cotton: Absolutely. And certainly if you're creating AI for making social posts or for marketing materials, then getting the tone right is gonna be absolutely essential.

Danielle Crop: Everybody still has brand guidelines, right? So I just anticipate there's gonna be a brand guideline for your AI tone.

Richie Cotton: Absolutely. Yeah. Everywhere I've worked, there's been an extensive brand guidelines document that no one can quite be bothered to read through. So it's good that at least the AI can read it; it's not gonna complain about that, and hopefully it'll stick to the rules. The other aspect of this was around what you feed into your agents. You mentioned sending Slack messages into some AI is a little bit risky. How do you go about deciding what data the AI gets?

Danielle Crop: I'm gonna be a bit redundant here, right? It's really about what your risk tolerance is, again.

So if your risk tolerance is high, feed a lot of it in and then dial it back. Or feed a lot of it into a proof of concept in a ring-fenced environment, and then tweak it from there and decide. But until you do it, you won't really know what you want to feed it or what you don't want to feed it.

You have to play with it and learn. 'Cause I think, to some extent, with risk tolerance there have been two ends of the spectrum in my experience. One is the people who don't even consider the risk at all, and on the other end of the spectrum are the people who don't even try it because of the risk. And there's some happy balance in the middle.

But going back to what you said earlier, which is being very positive and excited, but then skeptical. You gotta try it out, because we won't know all of the risks until we try some of them in a controlled environment. Similar to that example of, oh look, it said our competitors acquired this company, when we did. You won't know that's the kind of thing it will do until it does it. And then you're like, okay, now what do I do about that? So that's where I would, once again, tell people: you gotta learn the tool. You have to. You must.

Richie Cotton: Absolutely. It just seemed essential to get started. But also, you maybe don't want to be gung-ho and try stuff without thinking about what can go wrong. You need some kind of culture of iteration, experimentation, testing.

You've managed many departments. How do you build that culture into your team or department? 

Danielle Crop: So I built the culture by rewarding curiosity, rewarding creativity, and demonstrating it myself. I think those are really key things, whether it was back in digital transformation or any of the data transformations. I've been through four major data transformations in my career.

So every single time, you had to demonstrate to your team that, yeah, you're gonna move from SAS to SQL and you're gonna be okay, and here's how you learn. But as a leader, demonstrating the skills and using it yourself, so that you can show the art of the possible for your team.

So in this case, I have to remind my team, a strategy team that creates content for a lot of different organizations: you have a copywriter in your back pocket. Let it help you, and see what it does. So just reminding them where they can use it and how they can use it, and then unleashing them in controlled ways, I think, is the most important thing.

And doing it yourself. Leading by example.

Richie Cotton: I do love the idea of leading by example, 'cause it is one thing to tell your team to do it, but if you're actively doing this and showing them, then they're gonna be more motivated to do it. Do you ever set any targets around AI usage?

Danielle Crop: No, not at this point.

We will see. I think we're all very fast coming to a point at which it's extremely expensive. So I think that's where people need to pay attention: that token cost is non-trivial, right? Hopefully over time that will go down as we get more energy, hopefully cheaper energy, et cetera.

Then, with more data centers and more capacity, the cost will come down over time, like it does for every general purpose technology. But for right now, you have to be careful. That's the other aspect I would tell organizations to be careful about, the ones that are just letting people go play.

You have to watch your token costs, right? And I think there's gonna be more of that, just restricting how many tokens any individual employee can use. I'm sure there are probably places that are already doing it today. My particular organization is not, but it's fast approaching.

So I dunno if that entirely answers your question, but that's what I'm thinking about right now. Okay, yeah, I get them to use it; you have to get them access. But I think once they have access and they know how helpful it can be to everything that they're doing, you end up with, okay, now we have to constrain the cost, because you just can't have it run wild, right?

It's the same thing as in the beginning of PCs, right? Not everybody had a PC when they first rolled out. They rolled out over time as the cost of the technology came down. So I do think there's an aspect of this of, okay, how do you make those decisions?

How do you strategically make those decisions of who has access to this to begin with? How do you constrain the costs? It's a little different than a PC, because you don't have to buy a piece of hardware for everyone's desk, right? It is software. Anyway, go ahead.

Richie Cotton: No, absolutely. Yeah, so cost is definitely something to think about.

You think, okay, a pro subscription to some of these chat tools is, what, a small monthly cost? But then if you're doing coding stuff, you can burn through thousands of dollars of tokens in a month. So it certainly can get expensive. You mentioned strategic decisions; talk me through how these get made. Who needs to be involved in deciding AI strategy?

Danielle Crop: So I think who needs to be involved in an AI strategy is a little bit of a shift from the traditional. So for example, I'm in a strategy role, but I came from a chief data officer background. There's a reason for that, right?

At this moment in time, my leadership said that what we need is somebody who really understands this ecosystem to help us make business strategy decisions. And so I do think that is more and more important: having that view as the stacks are converging, right? That's what you're seeing in the market.

All this market angst about SaaS: we're seeing the stacks converging across cloud, across SaaS, across the native AI companies, across the Databricks and the Snowflakes. We're seeing all of these companies basically converging and trying to build the same feature sets.

And that's an interesting point in time, but you have to be able to see that. And I think that unless you have both the business savvy as well as the data and tech savvy, and are able to connect the dots across them, you're not gonna be able to make the strategic decisions of who you should partner with and why.

For example, in my role: who are we gonna partner with? Are we gonna partner with a major SaaS provider to do agents, or are we just simply gonna go directly to OpenAI or Anthropic? Those are very important strategic decisions for the future of the company. And what do our clients want, right?

What is the stack that they're going towards? And I think everybody's converging into this more simplified stack, which most likely means companies only wanna have to play with the companies they have to play with, right? Because otherwise it creates a very complex ecosystem for them to manage.

As a chief data officer, I know that as well, right? You just don't want the ecosystem of partnerships to be more complex than it absolutely needs to be. And I think what we're seeing is: do you need both Agentforce and OpenAI Frontier? I think this is still an open question, but we'll see what happens.

But I think the answer is no, you will not need both of them in the longer term. But you do need the LLMs regardless. So there's one player you have to play with. You have to play with the clouds, right? There are things you have to play with and things you don't have to play with.

That's, I think, what we're seeing in the marketplace at the moment.

Richie Cotton: Ah, that's interesting. I suppose that's been a big stock market story in the last few months: a lot of these SaaS providers, so you mentioned Agentforce from Salesforce, and I guess there's a few others like ServiceNow, where there are these huge SaaS companies, but they've also...

Danielle Crop: I do think it's an overreaction.

Okay, so I'm not saying... I think there is an overreaction in all of that, but I do think there's a real threat that needs to be addressed. 'Cause the moat in this world is the data, and those SaaS companies actually don't own the data. Services companies like the one that I work for can help companies navigate and go, okay, which direction should we go, and how should we integrate with these different players, right?

But the SaaS providers themselves, it's an interesting moment for them. We'll see how they navigate through it.

Richie Cotton: Absolutely. Some great products, but yeah, they're struggling in the market at the moment, 'cause there is maybe a worry about whether you need these SaaS products long term if you've just got an LLM agent.

Danielle Crop: And you must have the LLMs, and you must have the data. And do you want the data to remain your own?

It is in the SaaS environments today, but you have this weird situation of, how do you navigate all of that? How do you make sure that you're protecting your proprietary data as your moat, because it is your moat, and you don't want to make it just free and available for OpenAI or Anthropic, right?

That's a problem, right? You don't want your strategic decks to be available and queryable by anyone who's using ChatGPT. So more of that's gonna need to be ring-fenced, like how it is with SaaS today.

It's an interesting moment. I do not have the answer for this one, but I think it'll be fun to see what happens.

Richie Cotton: Yeah, part of the fun is the chaos at the moment, I think. We've talked about strategy, and you mentioned simplifying tech stacks. I guess the other big question is, how do you make sure that your AI strategy feeds into your business strategy?

Danielle Crop: Business doesn't change, right? So this is the advice I give to clients: your business still has all the same core business capabilities that it needs to make money today as it did five years ago, right? In many cases, unless they're creating new products and new lines of business.

And so think about what those core business capabilities that you need are, and how you currently drive top line or bottom line through those capabilities. And then determine which AI use cases you're gonna go after based on the size of that opportunity.

And it's gonna be different in different verticals. That's okay, but it doesn't change the fact that you have to start with the P&L. It's still business; you're still doing business, right? And so I think that's where people get a little lost. We've had a lot of bottoms-up stuff, and I do love the bottoms-up stuff, right?

Sometimes it's really interesting, but we're in that moment of, oh, you need the bottoms-up, because you need people to learn what these tools are capable of. But you also need the top-down of, we need to focus on the things that actually really drive our business.

Not the things that just make your job easier, right? So it's a little bit of that. And strategically, I think companies have to make some decisions, going back to who has access: how are we holding people accountable for what they're doing with it? And how are we making sure the costs stay low, right?

But that's a senior strategy decision: who should have this? Why should they have it? What business capabilities are they gonna be working on? What is the business value? The business case is still essential. I feel like people have lost the plot a little bit: you still have to have a business case.

That doesn't mean that you should not do this even where you don't have a business case, because it's like the PC revolution, a general purpose technology. You can't just say, oh, because I don't have a clear business case, I'm not gonna do this at all. That's not a good strategy either at this moment.

So it's balance, right? You should have a business case. Try to find it; you probably do have one. If you dig in and you get some really good people in your organization, you'll find those business cases and then deploy it. But if you can't find that business case quickly, for whatever reason, you still need to deploy this.

It's not optional. And I recently made an analogy about the companies that were born out of this. Let's just take FedEx. FedEx was born out of the PC revolution, right? The whole business model was built around the new technology, and it killed. It did amazingly in the nineties, right?

I'm old enough to remember this. I'm old enough to remember FedEx being a really hot stock. And Walmart, for example: Walmart took the PC, took distributed computing, and made a whole moat around their business with it.

But they're still a retailer. They just knew how to use the tools better than their competitors. Sears saw it as a line item; they're dead. Walmart saw it as a moat; they're thriving. And companies really need to think that way and make the technology tools available.

'Cause you don't wanna be Sears.

Richie Cotton: Okay, so if you think of AI as being a cost, and you start worrying too much about the token costs and think we've gotta try and minimize spend here, then you're probably not gonna benefit. But if you think about how you can harness the power of AI, then that's gonna give you new business opportunities and hopefully greater growth. I guess that's the gist of it.

Danielle Crop: it's striking the balance, right? You can't like, have. Completely like risk averse. I'm not gonna even try this. I'm not gonna engage in the new world. But you also can't look at it as just like a line item. Because that's not, if you look at, we had the same arguments during the pc, the revolution, right?

We had literally the same arguments. If you go back in time and look at the Harvard Business Review, you could see the PC everywhere except in the productivity numbers. We know how much the PC drove productivity in the nineties, but when you were actually trying to find it in the finance and accounting of a particular company, you couldn't.

That doesn't mean that we shouldn't have rolled out PCs. But should a company have bought so many PCs that it drove them out of business? No. It's a balance.

Richie Cotton: Yeah. You've gotta buy a lot of PCs to drive you out of business, I think, but yeah,

Danielle Crop: But your costs can get to the point where you can't make a profit anymore.

So it's that balance that a company always has to strike, regardless of the era. I think people get very nervous when a new general purpose technology comes out: what does this mean? And it's normal angst. It's historically very normal angst.

Go back and look at the railroad, go back and look at the telephone, go back and look at any of these technologies: they caused the same reactions. You'll see it in the news articles, you'll see it in the expert journals. It's the same.

And so I find that incredibly reassuring. This follows a pattern. Human nature stays the same. We still all need to make a profit in our businesses. So there's some things to just anchor yourselves around so that you're making good decisions. 

Richie Cotton: I love that phrase, historically normal angst, because I think if you're anxious, you think this is new to me.

This is like a uniquely personal thing, this has never happened before, and that's why you're anxious. But actually, if you put it in the context of history, this has happened before, and it's okay. Take a breath. You can relax and move on and do nice things with your life.

Danielle Crop: And try to have some fun with it. Like we are.

Because if you just get anxious about it — look, I recently went on the record: I completely disagree with the idea that this is gonna take jobs.

Richie Cotton: Oh, tell me about this then. If it's not gonna take jobs, what's the interaction between AI and careers gonna be?

Danielle Crop: I mean, like any general purpose technology, right?

It drives productivity. In the short term it will cause disruption, I'm not saying that it won't. I like to use the analogy of the PC again: we had a lot of typists who lost their jobs, and there were even conversations in the boardroom and the C-suite about who would do the typing in the PC world. I found an article about that discussion that was quite hilarious.

It's reassuring and hilarious, right? They really did have these conversations. So yes, jobs will shift. This has happened, by the way, very quietly in the background during what I would call the initial AI era, and most people didn't see it.

It didn't hit the news because it was happening quietly in the background of companies. Companies that were building machine learning models to make better fraud decisions, et cetera, were displacing a ton of people who were answering the phones for fraud calls. Whole organizations that were just answering the phones were let go, because we were able to do model-based risk decisioning and routing in a new way.

So the overall point is: this has happened before. It happens all the time. Everybody seems to think it's gonna happen faster this time. If you see what's happening right now, I'm not so sure. In the smaller, brand new startup organizations,

yeah, it's gonna happen lightning fast. But in the existing enterprise organizations, based on the risk, based on the change management issues, is it gonna happen overnight? I don't think so. So there's a lot of angst: AI is being more productive.

So we'll need fewer people. But what's happened every time people said, we're gonna have more productivity, so we need fewer people? Those people move from here to here, right? They start doing different things. They don't go away. They do new things, as new opportunities arise.

So going back to FedEx: there was a new opportunity in shipping and logistics because of the PC. In the eighties and before, it was all USPS. That's how you sent stuff, right? FedEx created an entirely new market, a competitor to it. We will have things like that.

We just don't know what it is yet. 

Richie Cotton: Okay, I love the optimism. And on your point about what happened to all the typists: if you go back and watch TV shows from the sixties or seventies, every middle manager had their own secretary. And now you've gotta be C-suite before you get your own executive assistant.

So for all these admin and operations type jobs, there's been a long trend of technology making that work more efficient and reducing the need for that sort of labor.

Danielle Crop: Absolutely. And think about how much easier it is to book travel today than it was years ago.

Richie Cotton: Absolutely. You had to go to a travel agent and they'd take their cut, and now it's just: oh, on my phone. Easy peasy. One of the big stories around careers over the last year or two has been the junior hiring crisis, for people who are just coming out of university.

It has become more difficult to get a job in the last year or two. So do you see a different effect for people who are just starting their careers versus mid-career or late career?

Danielle Crop: I think because of the moment that we're in right now, it's harder. I don't anticipate that lasting. Once again, I anticipate that we'll see the Jevons paradox play out: once a resource becomes cheaper and more productive,

the demand for it goes up, right? I think entry-level folks who have the right skillset and, more importantly, the right mindset will do just fine. Curious, creative self-starters: these are all things we looked for before, and we look for them now.

That also hasn't changed. But there is one major shift I see between the way we educated in the past and the way we should educate in the future, because of the nature of these tools. In the West in particular, we've spent a lot of time and money

catching up with the East on STEM education. Do I think high schoolers in the future should just be focusing on STEM? No, I do not. Think about some of the things we've talked about today. I'm a statistician, yes. But I talked about history, I talked about economics, I talked about business.

You have to be the type of person in the future who's just curious and interested in all sorts of things. Then you will connect the dots and move the business forward. That was true before, it's true now, and it'll be true in the future. But will we see really long-term careers as a software developer?

Maybe not. So I would say: don't be a specialist. That's not to say don't learn the tools, but you don't need to be a deep subject matter expert in the future. The models will have that covered. What you do need to be is the type of person who can imagine the future with those tools.

Richie Cotton: Okay. I love that idea: if you are curious, if you are a self-starter, if you've got a good imagination, these are all things that are gonna help you get hired.

Do you have any other ideas about skills that will help you in your career?

Danielle Crop: So I'll take the thought process I have for my own daughter, which is that I'm trying to educate her broadly, so that she can have a relatively moderate level of understanding of a lot of different things,

and then understand how to use the tools when she needs to understand something more deeply. You have to have that level of understanding across a lot of different things, so that when the LLM returns something to you that just makes no sense,

You can have that critical thinking and go: that's not right. You have to have enough knowledge to call BS on the tool. Or to know that an LLM actually isn't the right tool, that I need a small language model for this because it's got way higher risk.

So, being able to understand the risk tolerance of the world. And do I think that any sort of bot in the future, whether it be a robot with LLMs and world models and all of that on it, will ever be able to do what Darwin did when he discovered the theory of evolution?

Are any of them gonna go: okay, now I'm gonna challenge the entire world on its existing ideas? No. Only humans will be able to do that.

Richie Cotton: Yeah, I guess we hope so. If there's some sort of AI revolution challenging humans on their ideas, that's gonna be tricky. But

Danielle Crop: the math doesn't support them being able to create novel new ideas.

They'll make stuff up that we can then decide: is that right or is that wrong, and should we take that and move it forward? But do they actually create novel new ideas and then make decisions that impact the world, go and argue with policymakers in Washington or whatever?

No, they don't do that today, and I don't anticipate they ever will. I could be wrong. We'll see. But it's just a tool. I don't see it any differently from any other technology tool we've ever created as a species. It's the same thing. I wrote something about this too.

I'm like: Terminator's not coming. AGI is not a thing. Okay, maybe somebody in five years will be like, Danielle, you were totally wrong.

Richie Cotton: And we'll have this conversation from a bunker. There do seem to be a lot of companies trying to build bits of Terminator, though, which I do find quite terrifying.

Danielle Crop: Oh yeah. The idea that Skynet's gonna become sentient and try to kill us all: is that gonna happen? No. Am I more concerned about concentration of power? Absolutely. And in this moment in time, concentration of human power means concentration of compute.

If you have a few people who have access to the compute, that creates a very unstable situation for all of us. So we need democratization of the tools, we need democratization of compute. That's what will keep us from any sort of Terminator-esque scenario, in my mind.

It goes back to a basic understanding of freedom. If you have a concentration of a critical resource amongst just one group: think about if just Saudi Arabia had access to oil and no one else did. Would Saudi Arabia dictate the entire global policy for everything?

Yes, it would. It's the same with any critical resource, in this case compute. The more data centers we have, and the more distributed they are around the world, the more freedom we will have, and the less control. That's what I think about in the Terminator scenario.

I go: what am I worried about? I'm not worried about the models. I'm worried about concentration of compute.

Richie Cotton: Yeah. And that does seem to be a big thing, 'cause there are a very small number of companies that are now spending tens of billions of dollars on hoarding compute, or in some cases building more compute.

But yeah, there is that concentration. At least one of the big topics of the last few months has been sovereign AI, and the idea that you need to own some of your tech stack in order to remain independent. Do you have any position on this? How should individuals or organizations think about controlling their AI stack?

Danielle Crop: I think it depends on what you mean by sovereignty. Just like when agentic AI first came up, and even to this day, people have lots of different opinions on what it is. We have the same thing with sovereign AI.

Does sovereign AI mean you own the compute in your own data center and everything else? Or does it mean that your country, in defense of itself, has its own AI models and its own infrastructure? That second thing is inevitable. Obviously countries are going to invest in AI to defend themselves.

That's not even a question. Which means they're gonna have their own infrastructure, their own compute, for those particular defense applications. Does that mean that they're gonna then create some sort of sovereign AI that takes over social, political, and other aspects of society? I don't know.

We see this with China, right? China has done this, and it is not a very pretty picture for those of us who like freedom or privacy. Either one. If the Chinese Communist Party doesn't like you, you don't get to transact on certain things.

It's very dystopian from my perspective. We've seen that. Is that what people want? I think that's a decision each individual democratic country is gonna have to make: are you gonna push your policymakers to keep these things free and independent, or are you gonna let them create centers of power that are

really not good for society? I think that's just something people have to contend with at this moment in time.

Richie Cotton: Okay. Yeah, certainly some incredibly important decisions on the societal level about the interaction between government and technology providers, and I guess all the citizens as well.

Alright. That was a fascinating digression. We somehow got sidetracked from talking about skills and careers.

Danielle Crop: It's all connected, Richie. It's all connected.

Richie Cotton: Everything's connected, yeah. I would like to talk a little bit about what happens to data teams,

'cause of course you've been a chief data officer. So with the advent of AI, particularly agents that can do a lot of data analysis, does it change how you approach building a data team?

Danielle Crop: Yes. So I think there are still core functions you need to build as a data officer. It's a little different depending on where the data officer reports in the organization, but the core things are data governance, data management, data products, data science, and data monetization. Those are usually the common five pillars for a chief data officer to assess when starting a new job: okay, do I have these, do I not have these?

How you build all of this: you now have to really think in terms of both structured and unstructured data. That's one functional thing that's different in the last three years that you didn't really have to think much about before. From a data governance and data management perspective, most chief data officers understand how to manage structured data.

Now it's more: okay, how do you manage unstructured data? How do you create provenance for it, in the same way that we used to do data lineage for structured data? It's different. So how do you do that, and what tools, providers, and capabilities can you use to do those things?

They're different than they were three or five years ago, so how you organize your team can be different. What do you want to do in data governance? Data governance is gonna be a very important but very fast-evolving space, in my mind, and I think a lot of tools are gonna be built out of this agentic era

that are gonna make it a lot easier and a lot faster to do data governance. Data governance was something a company just never wanted to invest in, and now I think you don't have to invest nearly as much in it, which is good. But you still have to have those people who know how to do it.

It doesn't change the fact that you have to have people who are skilled in what data governance looks like. They just have better tools to do their job.

Richie Cotton: Yeah, certainly. Data governance tooling has evolved a lot over the last few years.

It is dramatically easier than it used to be. But it's interesting that the big worries are around unstructured data. I suppose this mostly means the documentation of how your business runs, but do you need to care about images and video as well?

Danielle Crop: Yeah, absolutely.

We're all dealing with this at a societal level, right? I use the word provenance because I think it's the most apt one for the moment. I know it's not the one most people necessarily use, but the question is: does this have a history to it?

Does this have reliability to it? Do we know where it came from? Just like you would for an antique: it has a provenance. Data has a provenance, and historians already know how to do this stuff very well. How do you critically evaluate source material and know that it's actually real?

That same kind of thought process has to come to any unstructured data. Not all books written in a given century are equal, right? Some of them are good and some of them are not. How are you judging that? It's the same thing with unstructured data.

Where did it come from? Who authored it? All those things matter. They just matter at a greater scale than ever before. So how do you create the right type of mindset and tools around that?

Richie Cotton: Fascinating. You mentioned historians knowing how to do this.

Philosophy degrees are often considered not great for careers, but actually now every company wants philosophers for their responsible AI teams. Do you need historians on your data governance team now?

Danielle Crop: I think that might be a good way of inculcating that thought process and mindset into a data governance team.

Librarians too, I think: a very similar type of thinking about how you structure things, how you organize things. There are these professions we can learn a lot from that have maybe not been the shiny professions in recent years, but I think we need to go back and look at them and go: what did they have?

What do they know that we don't know? And how do we organize things in a better and more structured way for the future, so that when we're making decisions, the LLMs won't hallucinate as much, because you're feeding them good data, feeding them things that are true.

And then there's the societal aspect of it all. We all need to stop feeding social media with things that aren't true. We have our own personal responsibility that we can take in this world at this point in time as well, which is: stop reposting the thing that you didn't check

before you reposted it. It's a fascinating space.

Richie Cotton: Absolutely. I like that idea as a call to action: don't repost anything on social media

Danielle Crop: if you haven't checked the source it came from and their agenda, motivation, and financing. If you haven't checked, don't repost.

Particularly if it's a very sensitive, highly political topic. You can read it, but let's not propagate things that we don't know are true. Let's all try to remember that truth matters.

Richie Cotton: Yes. I love that. A little bit of fact checking goes down very well.

Alright, before we wrap up: what are you most excited about in the world of AI at the moment? We've talked about a lot of things here, but what's really getting you going?

Danielle Crop: I've said this a couple of times already: the most exciting thing for me is to see what people will do with this.

I'm having fun doing things with it, and I'm curious to see what people much smarter than me will do with these tools, what kinds of things they will create, what type of app store we will have using these tools, things we don't have today.

A lot of wealth has been generated in the infrastructure phase. More wealth is always generated in the next phase, and I'm really curious and interested to see what happens. And that's why I want all the young people, who are maybe not listening to this podcast, to be positive.

There's a lot of opportunity. Go play with this, have fun with it. You might discover the next unicorn. You don't know unless you go play with it. And it's easier now for anybody who's a creative, or a historian, or a librarian, or a philosopher to do this than ever before in history.

Before, they would've been like: oh, I have to go get my four-year CS degree. So I'm interested to see what those people, who have a very different mindset and a very different way of approaching problems in the world, are gonna do with these tools. Because that's a part of humanity we haven't unlocked.

In this technological era, I'm really excited to see what happens with all that.

Richie Cotton: Absolutely. And it's not even just young people, it's absolutely everyone. We've got hundreds of millions of people who are now capable of creating stuff. So yeah, certainly exciting times. I'm with you that this is gonna be a big thing.

Alright. And just to finish up: I always want more people to learn from. Whose work are you most excited about right now?

Danielle Crop: The way I go about it is that I have to keep a very external lens on what is going on. I generally like to do the podcasts, and I've already mentioned it, but the Dwarkesh podcast is a really good one.

If people aren't listening to Dwarkesh, I'd recommend it. It gives you the high-level perspective of what all the big guys are saying: Dario, Elon, Sam, and other people who are real thought leaders in this space and pushing the envelope, like Yann LeCun. So: keep on top of what they're thinking, what they're doing, and just ask yourself, do I agree with them?

Do I not agree with them? I often argue with them in my own writings and posts. But it's a way of keeping on top of what's going on. Podcasts are one of the easiest ways to get deep into a topic: spend two hours, right? For somebody like me, yes, I was in the modeling world, et cetera,

but do I need to know the exact difference in the technology between Claude 4 and the next Claude model?

Richie Cotton: No, I do not. 

Danielle Crop: So I'm not in the AI research space. If you're in that space, I love you, I need you, but I have my own AI researcher. We have one at WNS who I just adore.

But I spend my time more on the strategy level: what's going on, what's important, what people are talking about at this moment in time. And so I find the Dwarkesh podcast to be essential.

Richie Cotton: Okay. Yeah, maybe my second favorite data or AI podcast after DataFramed. I'm definitely all for listening to podcasts in order to learn things.

Very good idea. Alright, super. Thank you so much for your time, Danielle. 

Danielle Crop: Thank you.
