
AI Agents Are the New Shadow IT (And Your Governance Isn’t Ready) with Stijn Christiaens, CEO at Collibra

Richie and Stijn explore AI governance failures and wins, risks from agents that can act on systems, creating visibility with an agent registry, how AI governance differs from data governance, EU AI Act risk tiers, and much more.
March 5, 2026

Guest
Stijn Christiaens

Stijn is a data governance veteran and one of the leading thinkers in the space. He runs data strategy, data infrastructure, and product evangelism at the data and AI governance company Collibra. Since founding Collibra 18 years ago, Stijn has held several executive positions, including COO and CTO.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.


Key Quotes

When you look at AI governance, it’s a moving target. And the moving target is tied to the underlying technology. If you looked at AI governance topics when GPT just came out… So that use cases was sort of the first step of AI governance… But then the second year… agents became a thing… AI governance naturally evolves alongside it… having an agent registry… And then inevitably… the multi-agent system will come up… people are already talking about swarms, right? Soon you'll have AI governance requirements that have to do with that whole system.

People often consider, I'm the one who's driving the wheel and I'm hitting the gas, right? And governance is doing the brakes. But that's a naive perception, right? Governance is around the track that is there, the road, which signs are on the road, which curves are coming up on the road and how those curves are indicated, what the rules of the road are, etc. Even on a racetrack, your car has a brake because that brake allows you to slow down a little bit at times so you don't hit the wall or you can take the curve in a better way.

Key Takeaways

1

Start AI governance by creating an AI/agent registry that catalogs which models, agents, and use cases exist in your environment, because you can’t control what you can’t see; then use that inventory to prioritize what to govern first.
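A minimal version of that registry idea can be sketched in code. Everything here is illustrative (the field names, risk labels, and triage ordering are assumptions, not any particular product's schema):

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One entry in the inventory: an agent observed in the environment."""
    name: str
    owner: str                        # accountable team
    model: str                        # underlying model (hypothetical label)
    systems_touched: list = field(default_factory=list)
    risk: str = "unknown"             # "unknown" until someone assesses it

class AgentRegistry:
    """In-memory catalog of agents; a real deployment would persist this."""
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        self._agents[record.name] = record

    def triage(self):
        """Surface unassessed and high-risk agents first for governance."""
        order = {"unknown": 0, "high": 1, "medium": 2, "low": 3}
        return sorted(self._agents.values(), key=lambda a: order.get(a.risk, 0))

registry = AgentRegistry()
registry.register(AgentRecord("support-bot", "cx-team", "llm-a", ["crm"], "low"))
registry.register(AgentRecord("ops-agent", "platform", "llm-b", ["shell", "db"], "high"))
first_to_review = registry.triage()[0]   # the high-risk agent surfaces first
```

The point is less the data structure than the ordering: build the inventory first, then let the inventory drive what gets reviewed.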

2

Operationalize risk-based governance by triaging use cases (e.g., low-risk website navigation bots vs. autonomous agents with system access or high-stakes domains like medical/industrial control) and applying deeper reviews and controls only where consequences are severe.
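That triage can be expressed as a simple decision function. The tiers and attribute names below are made up for illustration; they are loosely inspired by risk-based schemes like the EU AI Act's, but are not its actual categories:

```python
def required_review(use_case: dict) -> str:
    """Map a use case's attributes to a governance review depth (illustrative)."""
    if use_case.get("domain") in {"medical", "industrial_control"}:
        return "deep-review"      # high-stakes domains get the full assessment
    if use_case.get("can_act_on_systems"):
        return "deep-review"      # autonomous system access is high consequence
    if use_case.get("customer_facing"):
        return "standard-review"  # e.g. a chatbot speaking for the company
    return "light-review"         # e.g. an internal website navigation helper

nav_bot = required_review({"customer_facing": False})      # low-risk helper
ops_agent = required_review({"can_act_on_systems": True})  # agent with system access
```

The payoff is that scarce review capacity concentrates where consequences are severe, instead of every project queueing for the same checklist.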

3

Avoid “checkbox governance” by connecting legal/compliance assessments (e.g., EU AI Act mapping) to the live agent system—ensure the controls you define actually bind to deployment, data access, and runtime behavior rather than living in a static model review document.
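One way to make an assessment bind to the running system, sketched with hypothetical names: express the review's outcome as machine-checkable policy and enforce it at the moment an agent touches data, rather than recording it only in a document:

```python
# Output of a compliance review, expressed as rules a system can check
# (hypothetical policy store and agent names).
POLICY = {
    "support-bot": {"allowed_data": {"faq", "order_status"}},
}

class PolicyViolation(Exception):
    """Raised when an agent tries to act outside its approved scope."""

def enforce_policy(agent_name: str, dataset: str) -> None:
    """Runtime gate: an agent may only read data its review approved."""
    allowed = POLICY.get(agent_name, {}).get("allowed_data", set())
    if dataset not in allowed:
        raise PolicyViolation(f"{agent_name} may not read {dataset!r}")

enforce_policy("support-bot", "faq")           # approved access passes
try:
    enforce_policy("support-bot", "payroll")   # unapproved access is blocked
    blocked = False
except PolicyViolation:
    blocked = True
```

The design choice is that the review artifact and the runtime control share one source of truth, so the checkbox cannot drift away from what the agent actually does.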

Links From The Show

Collibra

Transcript

Richie  

Hi Stijn, welcome to the show.

Stijn 

What's up Richie? Good to be back. How are you?

Richie  

Doing great. Thank you. So to begin with, I need some disaster stories. Tell me like what's the biggest problem you've seen from poor AI governance?

Stijn 

Oh, disaster stories. I'm gonna flip through my list of them. I think there's a whole wide range of them. We've all seen the examples, like chatbots recommending a competitor's products, or people getting a policy from a chatbot and following it, and it turns out it's a completely made-up policy. But, you know, the airline or insurance company still has to sort it out.

And then it gets really dodgy in my view, because of what I learned recently about Claude, the Claude bot: it's powerful because it has all the Linux commands. Now, Linux is a very nice operating system with the bash shell, so you can really do a lot of things, especially if that machine is connected to something else, like the internet. But the horror story there is that people actually do this. People will actually have taken an AI from the internet and run it on a machine

that can do whatever it wants. And then, you know, that leads into all sorts of horror stories. I mean, the most recent social media phenomenon was around that, you know, the social network for robots, right? Where they started talking about conquering the world, and it turns out % of them were people in the first place. Yeah, yeah, yeah. So I think the horror stories are all over and we'll see more, quite frankly, because clearly people are not focusing

Richie  

Really?

Stijn 

on the controls or on the brake, if you will. They're just pushing the gas pedal. They're excited to play with the technology. The technology is evolving. So we'll see more horror stories come out, but we'll learn. As people, as a society, as organizations, we'll learn how to use this new technology widget better.

Richie  

Absolutely. And I think there's quite a wide variety there. So a lot of the chatbot cases have been fairly well publicized over the last few years, like the idea of chatbots saying things to customers that they shouldn't have said, and then the company being liable for that. But this Claude bot thing, that's new. So these sort of computer-use tools where they can take over your computer. I mean, yeah, there's a lot of security risks there. Maybe we'll dive into that in more depth later. But for now, I also need some motivation. So can you talk me through any success stories you've seen? Do you know about any companies where they've put a lot of effort into AI governance and it's paid off?

Stijn 

Yes, of course. We've got a number of them on Collibra's website, where we literally advertise the stories. We have worked, for example, with McDonald's, or Siemens, or other companies. So those organizations are really serious about working with this AI stuff. They know that they have to be; this is not "I put AI in my company, I push a button, and tomorrow everything works." So they know.

This is a new technology. It's a platform shift. So they're going to have to roll this out across functions, departments, lines of business. They're going to create new business. And they know that, just like with the mobile phone or the cloud before, you need to put some controls around this, right? You need to have some maturity with respect to it. I mean, yes, experiment with the technology to learn what it can do, but also experiment to learn how you need to do it well,

So that liability that you were talking about earlier doesn't start to become a problem.

Richie  

So I suppose with governance, that's one of the things where it seems like a lot of it's about avoiding liability, reducing risks and avoiding problems. Are there any kind of positive business metrics that come out of it? Like, can you talk about, I don't know, like improvements in revenue, improvements in productivity, all this kind of stuff. Are there any more positive metrics rather than just avoiding negative things?

Stijn 

Absolutely. I mean, I want to be clear about that, Richie. Governance, whether it's data governance or AI governance, even corporate governance, is a control mechanism, right, over a management layer. And it has the purpose of making that management layer, for data, for AI, for corporations, run better and run well. That's it. So imagine any place where that control function is gone.

Well, yeah, of course it's gonna bring in negatives potentially, right? Not definitely, but for sure something's gonna happen if you take away a control function. But ultimately it's about making it run well, right? Making it run as well as it can. So yeah, I think there's even MIT research that shows a link between how well you do data, how well you do AI, and being, I don't know, some percentage points or whatever it was

better in your business. So that's a net positive, which makes sense. Instead of doing an experiment with an AI just like that, doing it well and learning how to master that thing for your business, it's just gonna give you a more positive output. It's just that most people associate the word governance or control with a negative. They associate it with...

for example, compliance or regulations and laws and fines and liabilities, to your point. But I think that's a perception issue. Now, perception is reality, so you have to recognize that people sometimes see it like this. But show them: yes, this is about the negatives, the risks, and avoiding them, right? But it's just as much about making our business work better. And executives are starting to pick that up, right? They're taking the learnings themselves.

Richie  

That was it.

Richie  

Yeah, think about it: if you avoid doing stupid stuff, that's actually a pretty good competitive advantage. So yeah, I like the idea. Do less dumb stuff and that's gonna help your business. All right, so it feels like AI governance is a relatively new field, but data governance is of course much older. Can you talk me through what the similarities between data governance and AI governance are, and where they differ?

Stijn 

Yeah, so I mean, we can speak about this topic alone for a very long time, right? And what I want to say, Richie, is that data governance has reached some level of maturity curve. But I wouldn't say, you know, maybe if we want to put it in human terms, it's maybe left home for university or something like this, right? It still needs to get to the peak of its career, right? So that maturity curve will continue a little bit. And there's going to be evolution there.

in the data governance requirements still as well because you'll have new data storage and warehousing and movement technology. So that will come with new requirements and you'll have new laws and regulation and policies that will come with new requirements. So it will evolve a little bit, but you can see it's further advanced on the maturity curve. Now, when you look at AI governance, there I would say you're much more in a moving target area. And the moving target is...

on the one hand, tied to the underlying technology. For example, if you would look at AI governance topics when GPT just came out. And by the way, there were AI governance topics pre-GPT as well, right? But I like to make GPT a BC/AD moment, because AI got redefined. Before, AI was machine learning and doing some data science: you take data, train a model on it, and it makes predictions.

And after GPT came out, in the eyes of the world, everyone on earth, AI became the GPT, right? So in that first year, what AI governance was about was use cases. You know, organizations were thinking, oh, somebody's going to upload our IP into the LLM. So we've got to put protections around the use cases. What are we allowed to use it for and what not? And which ones are we allowed to use and which not?

So those use cases were sort of the first step of AI governance, and it doesn't end there, right? That's gonna continue. But then the second year, I think we're going back now to last year, agents became a thing, right? Because they took the LLM engine, if you will, and then realized, well, to make an engine run as a plane or as a car, we need to put something around it, like a steering wheel maybe. So that's the harness of the agent. So then...

Stijn 

AI governance naturally evolves alongside it. And then AI governance became not just about use cases, but also about, for example, having an agent registry, so you know which agents, quote unquote, live in your environment. And then inevitably, you know, an organization will not have one agent, it will have multiple. So then you can sort of predict that the multi-agent system will come up,

from the olden research days, and people are already talking about swarms, right? So then you'll have AI governance requirements that have to do with that whole system, where the traceability of which agents matters, but also how they interact with one another, and how the whole system acts as a whole. So you have AI governance requirements that are a moving target there. And across all of them,

and then this is where you start to go into the overlap with data governance: across all of them, it's going to need to be clarified which agent or which LLM or which agent system uses which data. And what you start to see there, Richie, is that the topic of context is sort of bubbling up in the world as well. And context is essentially everything that you give to the agent so that it's

informed to make its decision, right? The semantics, the context of the business, policies, regulations, but also data, data sets. So there you'll see that there's some overlap between data and AI governance. Because let's take a simple topic like quality: quality will become important, because everyone knows that if you want an LLM or a non-pre-trained model to perform better, then you've got to give it better data,

better quality data; better quality context makes the model do better things. That's even proven mathematically, I believe. So that's clearly an overlap topic between these areas. And then, you know, when you mention governance, it takes seconds before somebody brings up lineage. So if you look at similarities and differences there: in a data governance world, a lineage requirement was very much about how my data hops from its source

Stijn 

into its ETL and then its target in the warehouse, for example. And it gets a lot more complicated, but it was more about how the data moved, which points it moved through, and which controls you could put on it. Whereas if you look at that word lineage in an AI governance world, people even start talking about provenance, but ultimately you start talking about how one agent connects to another one and which data flows between them. And make no mistake, right, Richie:

people will set up these agent systems with separation of concerns, right? Not all agents will be allowed to do all things and to touch all data, right? Because you want to box up an agent in a controllable unit so that you can actually manage that overall system. So that lineage, if you will, of which agents and which data and how they all interconnect, that will be an AI governance requirement. And it'll look a little bit like your

data governance lineage, but it will also come with new things. I would say there are a lot of similarities, including on the organizational side. Which is to say, what you saw years ago in data governance organizations: you saw people wondering who needs to do this and why, how do we need to do this, and how do we measure that we're making progress on this topic, which has all sort of been sorted out, right?

There's a lot more clarity on the maturity curve in data governance. But if you look at AI governance discussions today, you'll see those same questions bubble up. So for AI governance: is it the lawyers, the privacy lawyers, who have to look at this? Or is it the head of technology or the CIO who's responsible? Or is it our compliance team, or the chief AI officer, or the chief data officer? So you see that confusion happen, including why we need to do it, when we need to do it,

how we need to do it, and how we will measure success. And of course, the security people pop up as well. So a lot of similarities, and a lot of differences as well, including that, well, the maturity curve means the organization is figuring out how to do this new thing.

Richie  

Stijn, I have to say that was an incredibly thorough answer. Like, I shouldn't be surprised, but there's a lot of things to think about with governance. I guess, before we get overwhelmed, maybe let's figure out where you get started then. So one of the last things you mentioned was that there's gonna be a lot of different leadership teams involved. You can have legal, you can have IT, security; the data team needs to be involved. There's a lot that needs to go on there. So talk me through:

what sort of organizational structure do you need to put in place in order to govern your AI effectively?

Stijn 

Well, I think most of the organization will likely already be in place. Let's unpack this with the simplest example, right, Richie? So let's do it the way that probably a lot of people will do it. Some CEO or board member says, we need to do something with AI. And then they find somebody in the company who does something with AI. Not like, first we need to think about X and Y; no, I need somebody to do this. So you start

developing a prototype chatbot agent. Let's forget for a second what it is exactly; they're just doing something with AI, right? And they didn't take into account any of the other parts of the organization. They just did it because that was their assignment. And now a problem occurs, right? You know, there was a prompt injection on the chatbot and now it did something bad. Or, you know, we gave it root access with the Claude bot or whatever,

and it deleted all the folder files, or it dropped the table in the database, because it will do those things, right? And then, you know, based on the incident, you'll see the security people come out of the woodwork, and they'll be like, well, we need to protect ourselves against new sorts of attacks. How are we going to do that? Then obviously you'll see the privacy or the compliance lawyers come out of the woodwork, because they'll say, well,

you were building this chatbot or agent: which model did you use? Because now we need to start assessing this model, whether we think it's okay for our company to use that model and not another one. And obviously you'll see some technology leaders swoop in and say, well, we're standardized on hyperscaler such-and-such or frontier lab such-and-such, so you should have known not to use that one. So instead,

come to the dark side and use the company-approved platform for which we're already paying credits and tokens and whatnot. So now you have some of those stakeholders that surfaced. And then inevitably, at some point in time, the chatbot, or one of the agents, doesn't know what to do anymore; it's lacking context. So it needs to be fed with additional data. Somebody gives it a RAG architecture or some other setup and says, oh, but we have that data in the source system, or

Stijn 

in our fantastic Delta Lake or data warehouse or whatever consolidation place of data the organization has. And then you'll see the data boss come out and say, hey, well, if we're going to feed that in, we need to make sure that this is done appropriately and put a quality control in place or whatever else. So now that person surfaces too. And then of course, at some point in time, it starts touching

multiple parts of the organization, you know, sales, marketing, or different lines of business. And people will start to realize, well, if that agent operates like this, it actually generates an organizational, cross-functional problem. How will we solve this? Well, we'll assign a steward, an AI steward, an agent steward, whatever name they give it; they can call it AI ninja for all I care. And they'll form some sort of, you know, committee that maybe,

on a strategic level, on a quarterly basis, reviews: well, what kind of issues did we experience cross-functionally with this new AI widget? And how will we prioritize solving that together? Because you need that senior leadership. So you see many of those roles and functions, I believe, are already in place. And I would recommend you tap into them. You make the friends from the get-go, rather than first having a problem, having them be angry at you, and then making friends with them. That's fine too, right? Both ways work,

but on top of that, you're seeing in the market, just like we saw before with other functions, some people opining that there now needs to be a chief AI boss. And maybe that rolls everything up, right? Maybe it's chief AI, chief data, chief data analytics, chief AI and data analytics officer, whatever it's called. But that function is starting to materialize, or in some cases even

consolidate, I would say. So many pieces are already in place, and a new leadership function will probably grow or evolve from an existing one. And maybe that's a temporary one, right? Because the way I see it, Richie, is that when you're dealing with a platform shift, organizations and businesses will start to operate differently, right? So job responsibilities will shift, business processes will shift.

Stijn 

Three years down the line, this is business as usual, just like the cloud, or the internet, or the mobile phone, or the desktop is all business as usual now. And then maybe that role turns out to be a temporary, transformative role rather than a permanent one. Or maybe it just rolls up to the CIO again, or the CTO, for example. So many roles are in place, and some new ones, probably transformative but maybe permanent, will start to materialize.

Richie  

Okay, I see. When you were naming your steward for controlling things, I was like, you call it Stuart? Why do you call it Stuart? Or is it the AI Stuart agent? I like that. But it's interesting, that last point about the chief AI officer maybe only being a temporary role, having someone coordinate these activities. That's interesting because I think a lot of companies are just starting to think, oh, well, do I need to hire a chief AI officer?

But you're actually saying maybe it might be better to do this in a federated manner and have AI responsibility within different departments, rather than trying to centralize it across your whole business.

Stijn 

Well, I think it already sits everywhere, Richie, because if I look at the AI phenomenon, it captivates everyone's attention, no matter your job or country or department or the company or industry you work at. It's keeping all of us busy, even people who aren't working yet. So in that sense, I think it's already everywhere, in all functions. And then putting a leadership role on top can help grow efforts or

can help drive focus on it so that all of these individual efforts maybe go in the same direction. But if you say, I'm an organization and I don't have that and I'll hire one from outside to help change our business, well, you're not necessarily setting that person up for success either because maybe that person doesn't know anything about your organization. So to be effective, they will need to work with the organization that's in place because they'll need to...

extend it or transform it. And if they're an external candidate, okay, maybe they have the AI chops, but then they still need a bit of the business chops in the company or the organization as well, of course.

Richie  

I think it's a problem with a lot of these sort of technical roles. I mean, it's been a problem with data science for a long time. Like, you need business skills, you need the technical skills as well, and the ability to communicate in both directions. And having all those different skills in one person is often a bit of a challenge.

Stijn 

Yeah, no, it's a team sport in my view. Just like what you saw in data, people often refer to it as a team sport, and data and AI, we know, go hand in hand. So AI in that sense is also a team sport. I mean, I gave the examples earlier with the various roles involved. Whether you're a large organization or a small organization, the same thing applies. Maybe in a smaller organization more of the roles are bundled into one, but then you're going to start hitting a point where this is a white raven type scenario, right? And then,

you know, it's such a unique profile, with that combination of skills and years of GPT experience, if that's even possible. It is, by the way, but that'll be a handful of people worldwide, and you might not be able to afford their multimillion-dollar salary either, right? So I think it'll indeed be a mix of people collaborating to make this a successful initiative.

Richie  

Absolutely. Actually, do you want to expand on that thought? Like if you're interested in a career in AI governance, what sort of skill set do you need? Like what are the technical skills? What are the soft skills?

Stijn 

Well, anything governance is always going to come with a lot of people skills, right? A lot of program management, project management, communication, even change management skills are definitely going to come in handy. Technical skills, I think, are a developing topic. If I break down technical, I don't want to say just coding technical, but technical in its broad sense, because you'll need legal and AI technical skills.

For example, you need to be able to comb through, let's say, the EU AI Act a little bit and try to map how those external requirements can be executed upon pragmatically inside your organization. So that requires some translation, but it also requires you to understand how AI functions a little bit. I mean, you don't need to know the exact workings of a neural network, but you need to know how that system operates and which risks it produces.

Some legal skills: you need to be able to at least read it, right? And interpret it a little bit. I think you also need data skills for sure, right? Because, again, not the deepest, but inevitably the chatbot, the agent, will need to eat up data in some way. That doesn't mean you need to be the ETL developer necessarily, right? There's all sorts of low-code, no-code self-service connectors to say, okay, my agent connects to my

CRM or whatever it is, right? But you need to be able to piece those together, and if you're the AI governance person, maybe you're not doing that yourself, but maybe you need to bring in a more technical person to help you do that. So you need to understand at least the requirements of what's needed there. And I think there are also business skills you need to learn. But if you are going into AI governance right now, it's a great time, because you're looking at the start of a discipline.

And if history repeats (which it doesn't, but it echoes), then you're looking at a years-long window in which you can develop that career. And because it's a new discipline, anyone who applies themselves a little bit in the short term quickly knows more than most others. So yeah, it's a great time to get started. And this podcast and the DataCamp learnings are a perfect place to start

Stijn 

you know, increasing those skills.

Richie  

Absolutely, thanks for the pitch. That's wonderful. But yeah, I like that there's quite a wide variety of ways in. You can go through...

Stijn 

Now, on the legal angle, Richie, I would be cautious also, because if it's too legal and too compliance-driven, one of the pitfalls you have there is that it becomes a checkbox exercise. Like, you say, okay, I've checked the model, we've assessed it, and it's fine, so my work is done. But if that activity is not connected to the actual agent system that runs and purrs and does things, then yeah, you have the checkbox,

Richie  

You're supposed to...

Stijn 

but the problem still exists. So I would say, if you're coming from that angle, make sure that you understand the actual operations of the system well enough and that you're plugged into it, so that it's not just a checkbox exercise, because that won't be enough.

Richie  

Yes, this is a common problem, where the governance team is the team that says no, and they can be a blocker on other teams, and you can end up with a sort of bad culture where different teams are fighting each other. Do you have a sense of how you can avoid that situation? Like, how do you stop governance just being a blocker on other teams?

Stijn 

Yeah, that perception is real, right? So people look at governance and they often say, there's the police, or the ones who say no, right? But on the other side of the fence, when you bring up the word governance, you also hear things like, well, that's important, but somebody else has to do it. It's important, but for later. So there's sort of a mixed bag. But to avoid that kind of situation where you're increasingly becoming opposed to one another,

I think you need to lean in and understand the other party. So let's say you're in the governance unit, or maybe you are the governance unit. And now you found somebody in the organization who's like, I'm doing an AI project. My boss told me to, and he gave me a lot of budget. I'm running and I'm racing, and I don't want the governance guy to come put the brakes on me, right? So you can already assume that they'll take a bit of a stance, like, don't get in my way,

because it's their little project. And essentially anyone who comes in from the outside is a risk that they're gonna make it muddy and more complicated and slow it down, right? So I don't think that's necessarily a threat from just a governance person, but from anyone who comes in: hey, it's my project, leave me alone. So I think you need to just openly go and be curious about what they're doing and why they're doing it. Because honestly, Richie, if you find out that this

project that they're running is just some sandbox, maybe you don't need to be involved, right? Then you can just say, hey, I love this project, great work. Can I just look over your shoulder and learn from it from time to time? Because it'll inform me of how we can do governance right and make more of these initiatives succeed. I mean, you're doing the experiment, so let's learn from that. And I'll do the learning; you don't need to do anything extra. Don't worry about it. So you're sort of looking for needs:

you know, what need exists on the side of the party that you need to collaborate with. Because maybe you discover that the project is an experiment, or it's wider, and then you detect, well, they hit a point where they do need interaction with, you know, the lawyer, or with the security person, or with the data person. And then you're like, well, no problem, I've done this many times. Let me find the right person right away, right? And let me...

Stijn 

Let me show you: look at this button, click this button, and then you get access to the data; somebody will approve it. So you're actually looking at where they're having problems and seeing how you can help them. And that changes the whole game, right? You're not the person who's gonna send them to a three-month governance course in Siberia. You're the person who's gonna help them with their needs and respect the constraints, or

the boundaries of what their project or program actually is, and you're helping them achieve those goals. But the perception is real, right, Richie? If we compare it with racing, people often think: I'm the one who's driving the wheel and I'm hitting the gas, right? And governance is the one doing the brakes. But that's a naive perception, right? Because really, governance is around

the track that is there: the road, which signs are on the road, which curves are coming up and how those curves are indicated, what the rules of the road are, how fast you can drive, or whatever. And even on a racetrack, your car, yes, it has a brake, because that brake allows you to slow down a little bit at times so you don't hit the wall, or so you can take the curve in a better way, rather than

crashing out in the first kilometer of the race. So governance is more about the rules for the whole system, the rules that make the whole system work in its most performant way for the long term, not just for the next month. I hope that helps give a bit of a different perspective on governance.

Richie  

Absolutely. And I suppose I should have guessed that the solution to having different teams not fight with each other is just more communication between those teams. So have the governance team talk to the people who are doing things with AI. And once you find out what they're doing, you can have a dialogue, and that's going to resolve some of the miscommunication, some of the tension.

Stijn 

Yes. Now, this is on a spectrum, right? So I'll try to make it a simple picture. You've got the lovers and you've got the haters on that spectrum. And there might be a hundred initiatives out there, so you can't do them all. The governance unit, the person or the team, is typically a very scarce resource. And if you want a scarce resource to be successful, you've got to focus it on the right thing. So essentially, once you identify all the ones that are out there and

where they sit, lovers or haters, then you start out with the lovers, right? So the ones that are most inclined to collaborate, you make them champions first. You practice your approach, your methods with them, because that's where you have the most chance of success. So you work with them on their use case. And then, a little bit later, they're champions. And then you use that

momentum of those champions to go to the next ones, and you sort out their use case, using the first ones as a testimonial or reference: this is how we did it, and that's why we did it, and that's what turned out to make it better. And then you convince the next ones, and you keep going. At some point in time, sequentially, you're gonna hit the haters too, but you're gonna hit them with an army of lovers who are winning the hearts and minds of even the most hardcore resistors.

Richie  

Okay, I like the idea of kind of winning your colleagues over a bit at a time. One thing I'm curious about: when I speak to security people, part of their job function is a mindset of suspicion and paranoia, because at some point you have to have a certain cynicism about what your colleagues are doing, making sure they're not breaking rules. Is that mindset also important in governance?

Stijn 

Well, it is, but I think it's still slightly different. Because... I mean, if I try to look inside the mind of a security person, they have to be very cautious and risk-avoiding all the time. Because even on a personal level, their job is always on the line. The security profession is a cat-and-mouse game,

where the cat is the security people and the mice are the external attackers trying to get in. And those mice are always looking for the easiest way in. So the cat, the security people, they're always trying to make sure that all the holes are covered and there's no easy way in. But ultimately there's always a way in, depending on how many resources the attacker has, and state actors have

significant resources. In a way, they could almost get in everywhere. I mean, you see it, for example, through breaches. At some point in time, all companies in the world get breached, right? So when a breach inevitably happens, or an incident inevitably happens, the security person hears, well, you didn't do your job well. So they know they're in a timing game in that sense. And from that sense, they're, I think, also naturally cautious, right? Like,

we gotta avoid that situation. Governance is a very different situation, I think. With governance, I don't think you can only think about what could go wrong. No, with governance, you've also got to think very clearly about what needs to go right. Otherwise, you're on the wrong side of the fence, I think.

Richie  

I mean, that's a big thing: if you govern too strictly, then you're going to miss out on opportunities that you would have said no to. So yeah, it's very much a balancing act in that sense. Okay, so let's talk about some practical things you can implement in order to improve the state of governance at your organization. Before, you mentioned the idea of having an agent registry, and it just seemed...

Stijn 

Exactly.

Richie  

really sensible to track what you're doing with AI. Is that the place to start or are there other things you want to do first to get your sort of governance infrastructure in place?

Stijn 

Yeah, an agent registry: you could almost compare it to previous times. You know, we have registries for applications, systems, devices as well. So naturally that pattern extends into this new phenomenon, because to control a system, you first need visibility on the system. You need to have transparency. You need to know what's out there. So you, quote unquote, catalog

which models, which agents, which use cases, because that gives you visibility. And then you can start to classify them. For example, you can come up with criteria of priority, because you can't handle them all at the same time. So: high risk, low risk, whatever criteria you choose in your organization, you pick them and you say, okay, let's focus on the high-risk ones first and see what we need to do about those. And then you can move on to the next ones.

I think the first thing you can do, and that's an easy one, is just getting that transparency: knowing what exists out there. And by the way, for a number of the regulatory drivers, that's literally one of the requirements.
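To make the registry idea concrete: at its simplest, it is just an inventory you can register entries into and then slice by risk. A minimal sketch in Python, with every name and field hypothetical (a real registry would live in a governance platform, not a script):

```python
from dataclasses import dataclass


@dataclass
class AgentRecord:
    """One entry in the registry: a model, agent, or AI use case."""
    name: str
    owner: str                  # accountable team or person
    kind: str                   # "model", "agent", or "use case"
    risk: str = "unclassified"  # e.g. "high", "low"


class AgentRegistry:
    """Minimal inventory: register first for visibility, classify later."""

    def __init__(self):
        self._records = {}

    def register(self, record: AgentRecord):
        self._records[record.name] = record

    def classify(self, name: str, risk: str):
        self._records[name].risk = risk

    def by_risk(self, risk: str):
        # Prioritization view, e.g. "focus on the high-risk ones first".
        return [r for r in self._records.values() if r.risk == risk]


registry = AgentRegistry()
registry.register(AgentRecord("support-chatbot", "cx-team", "agent"))
registry.register(AgentRecord("invoice-extractor", "finance", "use case"))
registry.classify("support-chatbot", "low")
registry.classify("invoice-extractor", "high")
print([r.name for r in registry.by_risk("high")])  # -> ['invoice-extractor']
```

The point is not the code but the sequence it encodes: visibility first (register everything), classification second, prioritization third.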

Richie  

Absolutely. Just understanding what you're doing with AI seems a very good first step. You mentioned the idea of categorizing the AI use cases by risk. This is an important component of the EU AI Act. Do you want to talk me through some high-risk use cases, just so people have a sense of it?

Stijn 

Yes. So the way it goes is that, in an organization, somebody comes up with a use case: I want to do this with AI. For example, I want to use it for sentiment analysis, or I want to use it to make our coding better, whatever it may be. And then that use case needs a variety of assessments. There might be an assessment related to privacy, for example: will this system take private, personal data, and is that okay for that purpose?

Plus other classes of assessments, including the high-risk or low-risk one. And high risk...

Stijn 

So when a use case comes in, right, it needs to be assessed according to a number of things. For example, I want to use AI to do sentiment analysis, or to automate our customer support, or to do coding. That's a use case. And then the organization wants to look at: how can we assess that use case from a business point of view? Does it make sense for us to do this? Are we going to get value out of it?

Is it gonna match with our brand or ethics or policies and whatnot, right? So there are a number of business assessments to take into account, as well as other kinds of assessments, for example related to data privacy. But then, looking specifically at your question, Richie, about high risk versus low risk: let's say you have a chatbot on the website and it helps you navigate the website. Well, that's low risk, because the worst that could happen is that

they end up on the wrong page and have to search again. So there's not a lot of risk in there. It's different when you have an automated system, right? A chatbot is just human input and a human reading the output, whereas an agent has autonomy. It can do things, like running root commands on your computer. Or, in the case of physical AI, actually perform

actions that affect the real world around us, like a self-driving car. Or, you know, when people think about robots, they always think about the humanoid robots, but drones flying around are robots as well, right? So when you touch systems like that, that go into the physical world, that go into personal data, that go into medical situations, maybe that go into manufacturing, you know, affect the manufacturing process of food or drugs or whatnot: that's when you start to get into a much more high-risk situation,

where it's about life or death, or other significant risks to human beings or society. And the high-risk ones, you definitely need to control them in your organization. But the real high-risk ones will need to be controlled at the society level as well. I mean, nobody is going to want a self-driving car or a self-flying drone on the streets or in the air

without knowing about the safety parameters of that device. Somebody has to go first, obviously, but you're gonna do that in a very controlled environment. So a high-risk system, you're gonna put it in a more controlled environment before you relax the parameters and open it to a broader population, for example.
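As a rough illustration of the tiering logic described above (this is not the EU AI Act's legal classification, which depends on the Act's own annexes and on legal review; the attributes and thresholds here are purely hypothetical):

```python
# Illustrative only: a crude screen for flagging use cases that need the
# heavier assessment track. The attributes and cutoffs are invented.
def risk_tier(acts_autonomously: bool,
              affects_physical_world: bool,
              handles_personal_or_medical_data: bool) -> str:
    if affects_physical_world or handles_personal_or_medical_data:
        return "high"     # life/death or significant-harm territory
    if acts_autonomously:
        return "limited"  # an agent with autonomy, but a contained blast radius
    return "minimal"      # e.g. a website-navigation chatbot


# A chatbot that only helps users navigate a site:
print(risk_tier(False, False, False))  # -> minimal
# A self-driving vehicle:
print(risk_tier(True, True, False))    # -> high
```

A real triage would feed results like these into the human assessments Stijn lists (business value, privacy, ethics), not replace them.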

Richie  

Okay, yeah, that certainly seems very sensible: if you've got something with life-or-death consequences, you wanna treat that a little bit more seriously than, I guess, your document-processing agent that's just gonna save you a little bit of admin time. All right, so in general, how do you assess the state of governance across your organization? I suppose you want to audit your own capabilities in order to figure out: how do I do things better?

Is there a good way of measuring how well you're doing?

Stijn 

Yes. So I think we'll see, again, history repeating: just like you saw with data, you'll start to see maturity frameworks pop up. What I expect is that over the next three years, you'll probably have several maturity frameworks to pick from, right? So it doesn't matter which one you pick, but pick one. I believe there's one from a security angle, from OWASP, that is clearly more geared towards security.

But you may as well start from that one. And like I said, other ones will come. The schools, the universities will come up with them. The big labs will come up with maturity frameworks. And even the consulting firms will come up with them. So pick one, and then just measure yourself consistently against that framework. That would be my advice.

Richie  

Okay, yeah, it seems like there are quite a few general AI maturity frameworks around. Maybe there are going to be some more governance-focused ones coming soon.

Stijn 

Yes. And the maturity framework, Richie, is a means, it's not an end, right? The reason you can almost throw a stone and pick one is that the most important thing is just to consistently measure yourself against that framework, let's say over a period of one to three years, because then you have some sort of anchor to measure against. Like, with the efforts that I'm putting in,

am I actually inching forward on that maturity framework, or am I sort of stuck in the same place? So just consistently apply one.
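The "anchor to measure against" can be as simple as recording your score against the chosen framework each period and checking for net movement. A deliberately tiny sketch, with hypothetical scores:

```python
# Hypothetical quarterly self-assessment scores (0-5) against one
# maturity framework, checked for net forward movement.
def inching_forward(scores: list) -> bool:
    """True if the latest score beats the first, i.e. net progress."""
    return len(scores) >= 2 and scores[-1] > scores[0]


quarterly = [1.0, 1.5, 1.5, 2.0]   # four quarters, same framework throughout
print(inching_forward(quarterly))  # -> True
```

The discipline is in the consistency: switching frameworks mid-stream resets the anchor and hides whether you are actually moving.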

Richie  

Absolutely, yeah. A lot of these frameworks are about giving you some inspiration for what you need to do next: where are my weak spots, and where can I go about fixing them? All right, do you have any...

Stijn 

And what you see in this space, because remember the underlying technology is in rapid evolution. We talked about that earlier: the LLM, the engine; the agent, the framework, the car or the plane; and now the agent system, the swarms. And tomorrow or next year, we'll see more. Because it's a young and evolving discipline, you are going to have to accept evolution in the

maturity measurements as well.

Richie  

You mentioned rapid evolution. So we've gone from, like a year ago, talking about agents as coming soon, to you just mentioning agent swarms as the next big thing. It is moving fast, and governance is, I guess, traditionally a very cautious, let's-do-things-carefully kind of discipline. Rightfully so. But is there a way to do AI governance fast? How do you

keep evolving your capabilities at the speed that the technology is moving, and at the speed of adoption?

Stijn 

Yes, so let me give you the example, Richie, that I used earlier today. At one point in time, I was on Yellow Mountain in China, and they have stairs carved out of that mountain. It goes up high, one or two kilometers, so it's quite a walk. And what you see is that you have people who are going for it: wow, we're going to go up the stairs fast. And, you know, you meet them minutes later,

meters later, whatever it may be, heaving strongly and needing a break and taking some rest. Whereas the people that take it, you'd call it carefully, but I would say thoughtfully, just step, step, step, step, step. They pass all of those fast ones by, and they typically reach the top faster than the fast ones.

So when you say how can you do governance fast: it's about being pragmatic, and it's about being consistent. And if you see that an approach is not working, or creating some fuss, or not solving a problem, then iterate fast enough. Right? But mostly, have a consistent discipline, and that determines your steady pace that will put you on top of the mountain much faster than...

A quick experiment that one of the young interns in the company hacked together over the weekend. Yes, that was done fast. But then you push a button and you put it live for users and everything breaks. That's not fast, right? That's just silly.

Richie  

Okay, yeah, I love the analogy of going up the steps. It reminds me of the story of the tortoise and the hare, the classic fable. So I like just consistently doing things better. I suppose the risk then is that if you've got engineers being more productive because they're generating stuff with AI, and a lot of teams are trying to adopt things quickly because there's a bit of a rush on, you can end up with a governance gap

Stijn 

Yeah.

Richie  

in your organization because people are doing stuff and the governance team can't keep up. So do you have any sense of how you avoid having that gap there?

Stijn 

Yes.

I don't think you can, because again, the scarcity of governance resources versus everyone else: it's miles apart. If you have five governance people, you might have many times that number of engineers, right? So you can't keep up with everyone all the time, at the same time. There's always gonna be somebody who's pulling out a little bit ahead on a certain thing.

But you try to manage the overall value and the overall risk of the whole system, not of an individual unit, because you need to make sure that your top priorities from a governance point of view are catered for. And, for example, if you talk about engineers coding, that could be as simple as agreeing which coding agent you're gonna test out, and I say test out because you wanna pilot a little bit.

And you would want to put boundaries on your pilot: who are we going to test it with? What are we going to test it on? And how are we going to evaluate it? Then you pick one, and that is the governance activity. And then, like a quarter later, you evaluate how it went and roll it out to the rest. As opposed to saying, well, it's a free-for-all, everybody pick your favorite coding agent and let it rip, and we'll see which one works best. Because...

I'm pretty confident that your GitHub or whatever repository is going to be full of a steaming pile of something very quickly.
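The pilot boundaries described above (which agent, who tests it, on what, and when it is reviewed) can be written down as a simple policy record rather than left as a free-for-all. A hypothetical sketch; all the names are invented:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class CodingAgentPilot:
    """Boundaries for a governed pilot of one agreed coding agent."""
    agent: str                 # the single agent agreed for the pilot
    testers: tuple             # who is testing it
    scope: str                 # what it is tested on
    review_date: date          # when the pilot gets evaluated

    def allows(self, user: str, repo: str) -> bool:
        # Only the named testers, only inside the agreed scope.
        return user in self.testers and repo == self.scope


pilot = CodingAgentPilot(
    agent="example-coding-agent",
    testers=("alice", "bob"),
    scope="internal-tools-repo",
    review_date=date(2026, 6, 1),
)
print(pilot.allows("alice", "internal-tools-repo"))  # -> True
print(pilot.allows("carol", "payments-repo"))        # -> False
```

The record itself is trivial; the governance value is in having agreed on it before the pilot starts and revisiting it on the review date.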

Richie  

Yeah, I love the idea of having some sort of broad standards: the idea that, okay, we're going to at least decide on which tools you're using and what you can do with them, because that seems fairly scalable. But it also feels kind of terrifying, the idea that you might now have tens of thousands of people using AI and a very small governance team. Do organizations just need to radically change their expectations of what they need to spend on governance? Like, do you need to...

dramatically increase the size of governance teams or have organization-wide governance training? How do you prevent these disasters then if there's this new imbalance in amount of AI usage versus existing governance capabilities?

Stijn 

Well, you know, Richie, if you talk about investing more in governance, I can always recommend it, but I'm going to be seen as having a huge bias, right? But I can tell you one thing: if you invest zero in governance, you're not going to have a great time. So there needs to be some investment. But like I said, from what I've seen, I've always seen it as a scarce resource. And I don't think that needs to change. It needs to be resourced sufficiently to be able to be effective, of course.

But I don't think you make it effective by saying: we have a hundred engineers, so we need a hundred governance people. That's not the way it works, because then people are just tripping over each other. Again, governance is a control function. And a control function is about setting the rules of the game so that the game can be played in the right way, in the most performant way. And you don't need a lot of control resources.

It's always going to be a scarce resource. You just have to make sure that the scarce resource is focused on the right thing and in the right direction. Because if they're focused on the wrong thing, or if they're going in the wrong direction, then yes, for sure, it will be seen as an obstacle that is not contributing anything. It needs to have the right focus. And I think that is the challenge right now: because there's a newness to the discipline, and there's a lot of evolution still,

knowing what the right direction is, is a bit more challenging. So you can make an argument now that an over-investment in some governance resources, to make sure that you're covering some of the bases, is a wise choice, because clearly the other side, which again is much bigger anyway, is experimenting all over with AI. And because of that increased activity, you know, we're in a race.

People are FOMO-ing: I've gotta do something with AI, otherwise my colleague or my competition will beat me. So they're racing, and sometimes they don't even know what they're racing for or who they're racing, but they need to go fast, right? So that means there are a lot of things being tried out. And if you have more things being tried out, because you give that freedom, then maybe, yes, you do need some more control resources. But I wouldn't advocate for...

changing that scarcity ratio all too much.

Richie  

Okay, yeah, I can imagine a company that's almost entirely staffed by governance people; maybe that works as a very niche thing, but in general, it's not going to be useful. Okay, I like the idea of setting the rules of the game.

Stijn 

You want to play soccer or do you want to play basketball? What game are we playing?

Richie  

All right.

Yeah, at least decide on one or the other. If you've got half the team playing soccer and half the team playing basketball, it's going to be chaos. I mean, yeah, it'd be fun to watch. Maybe disastrous. All right, super. Do you have any final advice on how to do AI governance better?

Stijn 

It'll be fun. How to do it better? Well, first of all, do it. And doing it, in my view, is very simple. Don't complicate it, right? So make it practical. Make it about not talking, but about rolling up the sleeves. Because a lot of the governance challenge comes from:

here's a hundred-page slide deck about all of our rules and our policies and the roles and the responsibilities and the this's and the that's and the don'ts. And then you start presenting that slide deck, and within seconds, you see the eyes glaze over. That's not what you want, right? So if you want to do it better, keep it simple. It's not rocket science. Roll up your sleeves, help get the job done, and learn from that experience how you can

do it faster the next time, or with the next champions you've got to make.

Richie  

Absolutely. Yeah. Extensive slide decks that send people to sleep: no benefit for anyone, whether you're the speaker or the audience. I just get annoyed every time one comes along. All right, thank you so much. Actually, before we finish, I always ask for recommendations of people to learn from. So tell me, whose work are you most interested in at the moment?

Stijn 

It's typically a benefit for the consultants, Richie. They love that stuff. As in books, or anything?

Richie  

Books, people, companies, anything you've learned about that you want to recommend.

Stijn 

Okay, well, my main recommendation remains consistent: I would go for MIT CISR's data monetization work. The book is called Data Is Everybody's Business, and it includes portions on AI as well, because AI is a data product in the end. It's really simple, there's no jargon, and it's really about connecting it to value.

Richie  

Okay.

Stijn 

So it's accessible for non-technical audiences, all audiences. And I would consider it a strategic framework that's easy to digest: you can read it over the weekend and you can get started on Monday. So that would be my recommendation: Data Is Everybody's Business by Barb Wixom from MIT.

Richie  

Wonderful. Yeah, I love the idea that data is for everyone. And of course, monetization is such an important thing: trying to figure out where the value comes from is sometimes challenging, but incredibly important. All right, thank you so much, Stijn. Great to speak with you.

Stijn 

Richie, thanks for having me over. Have a nice day.
