
The New Paradigm for Enterprise AI Governance with Blake Brannon, Chief Innovation Officer at OneTrust

Richie and Blake explore AI governance disasters, consent and data use, the rise of AI agents, the challenges of scaling governance processes, continuous observability, governance committees, strategies for effective AI governance, and much more.
Dec 29, 2025

Guest
Blake Brannon

Blake Brannon is Chief Innovation Officer at OneTrust, where he leads product vision and strategic direction for the company’s AI-ready governance platform. He has been with OneTrust since 2017, previously serving as Chief Technology Officer, and has played a key role in scaling the platform to support privacy, data governance, risk, and responsible AI initiatives for large enterprises. Blake is based in Atlanta and holds an academic background from the Georgia Institute of Technology, with early research experience in network systems and wireless communications.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

There is a speed mismatch of wanting to bring new AI online, agents, reasoning and in real time deciding to do something new that you didn’t predetermine in the past. How are you going to govern that stuff? You have to reimagine governance. You have to reimagine. This is something we talk a lot about. You have to be AI ready with your governing processes. And you've got to shift some of the fundamentals of how you've been approaching doing that for years.

AI agents are going to fundamentally change the way security operations and security operations centers work. We're seeing it in a lot of different groups in a company, but now AI agents are going to be doing the lion's share of the work that you have to do as part of compliance or governance teams. You can imagine there's an AI agent that is governing the AI use that is happening. And when an AI-use in the business is reasoning and making a decision, they're not going to ping a human to say, is this okay to do? They're going to ping an AI agent.

Key Takeaways

1

Treat consent and notice on first-party data as a hard precondition for training: if you can’t prove lawful collection and purpose, you risk ‘algorithmic disgorgement’ that forces you to delete not just data but also the trained model and pipeline built on it.

2

Before letting agents take actions, implement guardrails that require explicit human-in-the-loop confirmation for boundary-crossing operations (e.g., production database deletes, destructive write permissions), because agent reasoning can infer catastrophic steps you never directly instructed.

3

Make governance scalable by decomposing agent behavior into repeatable primitives (data access pattern + action type + purpose) and applying pre-approved decisions to those patterns, instead of treating every agent as a one-off ‘snowflake’ assessment.

Links From The Show

OneTrust

Transcript

Richie Cotton: Hi Blake. Welcome to the show. 

Blake Brannon: Hey, Richie. Thanks for having me. 

Richie Cotton: Yeah, great to have you here. Now, just to motivate things, I'd like to start with a disaster story. So can you tell me, what's the worst AI governance disaster you've seen in recent memory? 

Blake Brannon: There are two that come to mind that I'll give you. The first one was an organization that spent a lot of money on acquiring customers and getting customer data, typically called first-party data, right?

'Cause that's the gold mine of how you're gonna innovate as an organization. But they made a subtle mistake in doing that. They got the data, then fed it into their entire AI pipeline. So they trained on that data to do facial recognition. Then, fast forward, they ended up in front of the FTC. So this happened here in the US.

The FTC ordered them to not only delete all that customer data, but also delete the AI algorithm and engine that they spent all that time and money training. So it was really impactful, like, that's material. In fact, there's an interesting term here the audience might find interesting.

The regulatory ruling of that is called algorithmic disgorgement. Which, like, sounds really scary, maybe by design, but that's what it is: when you have to delete the algorithm that you trained on that data. And all that was rooted in something that I've spent a lot of my time, and the history of the organization I'm with, doing, which is privacy, right?

That is personal data. You used it without the proper consent and notice to those individuals, and the consequence was that you had to delete not just the data you gathered and spent all that time and money collecting, but also the AI algorithm you spent all that time and money training as well.

So make sure you are getting consent and not misusing the data that you're gathering and collecting.
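
To make that concrete, here's a minimal sketch, not from the conversation, of treating consent and declared purpose as a hard precondition before a record can enter a training set. The record fields and helper names are hypothetical.

```python
# Minimal sketch: gate training data on proven consent for the declared purpose.
# Record fields and the purpose string are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    data: dict
    consented_purposes: frozenset  # purposes the customer agreed to at collection time

def eligible_for_training(record: CustomerRecord, purpose: str) -> bool:
    """A record may enter the training set only if this exact purpose was consented to."""
    return purpose in record.consented_purposes

def build_training_set(records, purpose="facial_recognition_training"):
    eligible, excluded = [], []
    for r in records:
        (eligible if eligible_for_training(r, purpose) else excluded).append(r)
    # Excluded records are surfaced, not silently dropped, so the gap is visible to governance.
    print(f"{len(excluded)} records lacked consent for '{purpose}' and were excluded")
    return [r.data for r in eligible]
```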

Richie Cotton: Algorithmic disgorgement. That's a new phrase to me, but yeah, one to remember, I think. That seems like pretty much the worst possible outcome there, 'cause I guess at this point they've annoyed their customers.

Yeah. They've annoyed the government. 

Blake Brannon: It sounds like some beheading thing or something weird. But yeah, it's by design, and I think the term was actually rooted post Cambridge Analytica and the order that ended up coming from that, which was like: you used all this information, you trained a brain, the brain did all this stuff.

You gotta destroy the brain, that kind of thing.

Richie Cotton: Okay. Yeah. And at that point, everything, all the work you've done, is wasted, and you've probably engendered a lot of bad will with a lot of people. Okay. So that's one to avoid. You said you had two stories. What was the other one? 

Blake Brannon: So that was one.

The other one is more like the actioning and safeguarding, and this got a lot of publicity even a couple months ago, but it was a company, a startup that was using AI to do a bunch of coding. So they had these AI agents that were doing the development and they were vibe coding. And the AI agent, from the derived reasoning and instructions, effectively deleted the production database.

That was not good, let's just say it that way. Emphasizing the importance of why you need things like guardrails and controls, especially as we move to this era of AI agents in the world where there's certain things you obviously want to automate. There's a lot of manual work we all do every single day.

But we need the right boundaries of, okay, wait, when you cross that boundary, you need that double check. You need that human-in-the-loop intervention to make sure something like that doesn't just happen and cause something that could be pretty catastrophic. I think this individual company was a startup and really just emerging and building.

But if you imagine something like that happening to a large enterprise, think how impactful it could be. 
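
As a minimal illustration of that kind of guardrail, not something described in the episode, an agent's proposed actions can be routed through a check that blocks boundary-crossing operations until a human explicitly confirms them. The action names and confirmation hook below are hypothetical.

```python
# Minimal sketch: require explicit human confirmation before destructive agent actions.
# The action names and the ask_human hook are hypothetical placeholders.
DESTRUCTIVE_ACTIONS = {"drop_table", "delete_database", "bulk_delete", "revoke_access"}

def requires_human_approval(action: str, target_env: str) -> bool:
    # Boundary-crossing: destructive operations, or anything touching production.
    return action in DESTRUCTIVE_ACTIONS or target_env == "production"

def execute_agent_action(action, target_env, run_action, ask_human):
    """Run an agent-proposed action, pausing for a human on boundary-crossing operations."""
    if requires_human_approval(action, target_env):
        if not ask_human(f"Agent wants to run '{action}' against {target_env}. Allow?"):
            return {"status": "blocked", "reason": "human denied boundary-crossing action"}
    return {"status": "done", "result": run_action()}

# Example with trivial stand-ins: the human declines, so nothing is deleted.
print(execute_agent_action(
    "delete_database", "production",
    run_action=lambda: "database deleted",
    ask_human=lambda prompt: False,
))
```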

Richie Cotton: Oh yeah. That, that could be completely disastrous. So this is a database in production then? 

Blake Brannon: Database in production, yeah. I mean, it just decided, oh yeah, the right thing to do is just delete the whole database and then I'll rebuild one.

Richie Cotton: Yeah. I suppose it's very easy to kinda laugh at that kind of disaster and think there should be some kind of guardrails in place, but yeah, once the AI goes a bit rogue, it shouldn't happen, but it could. Okay. 

Blake Brannon: Laugh at it until it happens to you and then, yeah.

Yeah. It's, it's...

Richie Cotton: ...not as funny when it happens to you. Okay. Exactly. Alright, so I feel like AI adoption's just skyrocketed everywhere over the last few years. So, at least in theory, AI governance should also have skyrocketed. Do you think the governance side of things is keeping up with all these kinds of new use cases for AI?

Blake Brannon: Not even close. And maybe to set the aperture on the scale of the problem I think the world's kind of walking into: IDC predicts that there'll be a billion agents running around in the world, in the kind of commercial enterprise sectors, by a date that's not far away. That's pretty close.

But put that number in context, because it's hard to understand these numbers in absolutes. Compared to the number of knowledge workers that exist around the world today, you're talking about a massive increase in the workforce that we're gonna have in just a few years. And there's all kinds of stuff that's gonna break down, right?

That's gonna create a lot of challenges and struggles around how we're gonna govern all this. It's one of the things that's so top of mind for every CIO in every organization today. And to put that in a parallel we've probably all recently experienced: when COVID happened, we probably all remember a lot of things at work where, like, the VPN was saturated and we couldn't all get connected.

We were trying to join webinar meetings and things, and there were capacity issues. That came from this massive shift to remote working and online, but it was only a modest percentage increase in internet traffic, and the infrastructure boundaries were still impacted by that spike. We're talking about a far bigger spike in the work that's happening. So just imagine all the things that kind of break down. It's not exactly the same, but if you imagine bringing on that many more employees into a company, think of all the infrastructure, the training, the processes, the systems to detect all these things. They're gonna struggle, right?

They're gonna have parts that fall apart and have to be rebuilt, reimagined, not just extended to this new scale. And that's the state of AI today, right? Everyone's looking at this future we're gonna be in, looking at all the things that traditionally make up how you govern technology systems, and asking what we're gonna have to change as organizations to fit into that new world and new reality we're living in.

Richie Cotton: That's a really huge shift then, like a billion agents. It sounds like a big number anyway, so yeah, I can imagine big changes. Where do you think the gaps are gonna be then? Once we get to a point where you've got more agents than people, how does that change your approach to governance?

Blake Brannon: To break it down to something super fundamental, the biggest thing is the speed mismatch with how companies have traditionally done governance. It fundamentally falls apart. And what I mean by that is, a lot of governing processes for technology today, whether it's compliance governance, security governance, ethics governance, all these different things.

They're rooted in a lot of human committees and processes where you evaluate the things you're doing and go through some kind of framework and methodology. Those are semi-automated. I wouldn't say they're completely manual, but they're semi-automated. And they're designed for what companies have historically done, which is, oh, we do three or four new projects a year, right?

New product launches or new things that need to be assessed and evaluated, right? AI use is all about data use under the hood, right? It's democratizing that. Now you're moving into an era where there are more agents than people, and within those agents there are orders of magnitude more data uses happening, almost instantaneously.

So your governing process, where you essentially built an engine in a factory that could review four or five projects a year, going through these governing processes and frameworks to have safety and trust that, okay, we know what is actually happening and we feel confident in it, just falls apart.

So there's a speed mismatch of wanting to bring new AI online, agents, reasoning and, almost in real time, deciding to do something new that wasn't predetermined in the past. How are you gonna govern that stuff? You have to reimagine governance. You have to reimagine. This is something we talk a lot about: you have to be AI-ready with your governing processes, and you've gotta shift some of the fundamentals of how you've been approaching that for years. 

Richie Cotton: That's interesting. With speed, I imagine governance is something you don't necessarily want to do quickly. It feels like it ought to be something that's done slowly and carefully.

So how do you reconcile that difference, then, between having to go fast but also having to go carefully? 

Blake Brannon: I think it does put a lot of governing teams in this crux of how do we not slow down the innovation that the business is trying to do. And it's hard because you don't wanna be someone that just says, no.

You don't want to be holding the business back, keeping them from being competitive and at an advantage, but you're also accountable for being compliant, making sure you don't have mistakes that put you on the front page of the newspaper, right? Or get an algorithmic disgorgement order, whatever. It is hard.

But what's starting to emerge is new technology and a new approach to how we solve this, 'cause it's not any different than any new problem the world has to solve, where we put our smart heads together and say, okay, how can we build new technology to figure this out and solve this problem?

And that's what's happening. Traditional governance relied a lot on manual, human entry. People would meet and review. Maybe you've gone through this yourself. You would join a call with team members to understand: explain this project to me. What are we doing with this?

Where'd this come from? Let me understand the context. Can you tell me how this is designed? Where does that data go? Where does it flow? All this information. And that is starting to shift to things like continuous observability technology. So looking at the systems, in this case AI agents, as they're being built, as they're connected to new data sources, as they're reasoning and taking action.

How do you observe that continuously and start to change the way you govern to fit that more automated approach? So not trying to take everything upfront and deterministically go through some flow or process, but to say: we're gonna deploy default guardrails, we're gonna observe what's happening.

We're gonna pull the anomalies out, we're gonna obviously block things that deviate from the guardrails, and then we're gonna manage the exceptions, right? That shift is what's happening right now. And I'll give an analogy for the type of shift, one that ties back to some of my background and history, which is what the world saw happen with corporate networks.

Traditionally, security was built around the primitive and the main assumption that you could solve security by protecting the perimeter of a corporate network, right? So we built up our firewalls, and all the corporate data was inside the corporate network. And then there was this technology disruption similar to AI, but back then it was called mobile and cloud.

And in mobile and cloud, your data wasn't inside the corporate network anymore. It was actually out all over the cloud in a bunch of apps that you didn't directly control and you definitely didn't control the network around it. So you had to shift your approach to security. You no longer could approach solving security by protecting the firewall and the perimeter you had to solve protecting the identity and the person.

And this is where you saw this explosion of a new approach to solving security for the mobile and cloud era. We're seeing a similar fundamental shift in how we're gonna solve governance in this new AI and agent era. It's that order of magnitude of a shift that we're talking about here.
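
A rough sketch of that observe, block, and manage-the-exceptions pattern, not any particular product's implementation; the event fields and guardrail rules are made up for illustration.

```python
# Minimal sketch of continuous observability: apply default guardrails to a stream of
# agent events, block clear violations, and queue anomalies for human exception review.
def check_event(event: dict) -> str:
    if event["action"] in {"delete_database", "export_all_records"}:
        return "block"                      # hard guardrail violation
    if event["purpose"] not in event.get("approved_purposes", []):
        return "review"                     # anomaly: unapproved purpose, route to a human
    return "allow"

def observe(event_stream):
    exceptions = []
    for event in event_stream:
        verdict = check_event(event)
        if verdict == "block":
            print(f"blocked: {event['agent']} tried {event['action']}")
        elif verdict == "review":
            exceptions.append(event)        # governance manages these, not every event
    return exceptions

events = [
    {"agent": "support-bot", "action": "lookup_ticket", "purpose": "support",
     "approved_purposes": ["support"]},
    {"agent": "support-bot", "action": "send_promo", "purpose": "marketing",
     "approved_purposes": ["support"]},
]
print(observe(events))  # the marketing event lands in the exception queue
```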

Richie Cotton: Okay. So I love the idea of trying to figure out which bits are automatable and putting in, I guess, guardrails and rules at a high level. You're not having to worry about what this specific agent does, it's whether it complies with the general rules. That feels a bit more scalable compared to doing this on a one-off basis.

If you're doing it for many agents, that seems a bit of a mess. 

Blake Brannon: Absolutely. I think of it as pattern decomposition of what the agents are actually doing: if you break those down into the primitives of the patterns and say, let's apply the decisions we've already made to that same pattern,

you've kind of auto-reviewed it, right? You've auto-approved, you've auto-made that exception, instead of trying to evaluate every AI agent as if it's a snowflake, a whole new thing we have to evaluate and assess. You just can't do it, right? You can't evaluate hundreds of thousands of agents as unique use cases.

You've gotta break 'em down into the fundamental primitives and apply governing principles to those to scale it. 
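
To sketch what that decomposition might look like in practice, here's a minimal example; the pattern keys and the pre-approval table are hypothetical, not a description of any specific governance product.

```python
# Minimal sketch: decompose agent behavior into (data category, action type, purpose)
# primitives and reuse pre-approved decisions instead of assessing each agent from scratch.
PRE_APPROVED = {
    ("support_tickets", "read", "customer_support"): "approved",
    ("customer_contact", "read", "customer_support"): "approved",
    ("customer_contact", "write", "marketing"): "requires_review",
}

def classify_agent(agent_behaviors):
    """Map each behavior to an existing decision; only novel patterns go to the committee."""
    decisions = {}
    for data_category, action, purpose in agent_behaviors:
        decisions[(data_category, action, purpose)] = PRE_APPROVED.get(
            (data_category, action, purpose), "new_pattern_needs_assessment"
        )
    return decisions

support_agent = [
    ("support_tickets", "read", "customer_support"),
    ("customer_contact", "write", "marketing"),   # crosses into a different purpose
]
print(classify_agent(support_agent))
```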

Richie Cotton: Okay. That makes a lot of sense. So I'd love to figure out a bit about implementation, and basically, I guess, to work out who needs to care about this stuff. Who is generally in charge of AI governance within an organization?

Blake Brannon: It's pretty fragmented right now. I think you see a pretty broad set of people. In one lens, you have teams that are the AI technology teams, right? They have general guidance of governing principles at the corporate level, but they're the ones that are responsible because they're actually implementing it into the product. I see this a lot in heavy technology-centric companies that build a lot of pure tech, where the AI is really part of the product in a very heavy way. So they put the governing on those teams because it's closest to the people doing it. Then you see this other set of AI use cases that maybe are a mix of back office,

some things that involve customer data, personal data, things like that. And that's where you start to see traditional compliance or governing teams. A lot of organizations form AI governing councils or committees that are made up of people from privacy, security, ethics, the data or AI office.

And sometimes they sit under whoever is the appointed person that can be influential. Some organizations have a lot of personal data, so the person in privacy typically has a huge influence over that, because AI is using a lot of personal data; therefore, privacy is a heavy player in the governing process.

In other organizations, there's not as much personal data use, so oftentimes those organizations think of it through more of a security-oriented lens, and you have someone in the security group as the head person chairing that. And then some companies are super data-focused, and they have a very mature chief data office or chief AI office type role.

But in all of those, I commonly see a committee or a council that's formed to allow all those different stakeholders from different parts of the organization to be represented. Then you have one of them taking the operational lead at driving the actual program.

Richie Cotton: Okay. Yeah. So it seems like there's more than one way to do this, depending on the existing setup you have within your organization. So I guess the details of the committee are gonna get tricky, right? Because if you do have a talking shop and everyone wants different things, that gets complicated. Do you wanna talk me through how a good governance committee works?

Blake Brannon: I think the world's still trying to figure out the good governance committee and exactly what flows with that. I think there needs to be a strong alignment to the business goal of why they care about governing the AI, and what the use cases and scenarios are that ultimately need to be prioritized.

And does the team that's been appointed, that AI governing team, have the authority and the charter within the organization to operationally make the changes and enforce them? For something to be successful, at the end of the day, the team has to be effective, with the organization leaning into how they're using AI, having the governing controls around it, and being embedded into the business processes to ensure things at the onset, as early as possible. Just like in software development you had this shift-left philosophy, it's the same thing: as early in the process as AI is being crafted and created, that governing program is being put in, and ultimately that's what's key to helping it be effective. It's aligned to the priorities of the business, and it's embedded into the process, so for the people using and implementing AI, it's effective inside of those processes.

Richie Cotton: Actually, do you want to tell people more about shifting left? I get the vague idea: you find problems before they become big problems, right? Like, you don't wanna wait until it's live. 

Blake Brannon: Yeah. If you can catch a potential AI issue while the concept or the prototype is there, it's a lot less costly than, going back to my original example, where you've actually rolled it out and now you've gotta delete all the data.

You've lost trust with your customers, you've gotta delete your AI algorithm. So the earlier you can do it in the process, the better for everyone. Ultimately, the closer you can get to: oh, there's a new data project being spun up, or an AI project rather, in this case, and instantly the governing council and team is aware of that, right?

That's a good sign of a healthy governance organization: they're not blocking it, they're not blocking people from ideating and playing around with concepts, but they're aware of it, and they've put some preliminary guardrails in place to ensure that the people using AI are aware of the potential harms that could come.

'Cause most of the time people just don't realize that could happen, right? Or that this could be an issue, or we've gotta worry about that. And then as it progresses and goes from a concept and idea to a real prototype that we really wanna take to production, they're on the journey with them.

So you're not waiting till you get to: hey, now we're ready to go live, let's talk to the governing team to make sure all this is okay, and now we're trying to un-engineer and delay the project, and everyone feels like it's a failure because we have to stop it. No one wants that; it's just a mess.

But if you're aware of it along the way and you're anticipating, you're inserting yourself in the middle of that, it just goes live with confidence, right? At the end of the day, you wanna be able to say: we trust the AI that is live, and we trust that we've gone through all the right things as a governing team and process to ensure that we're mitigating as many of the risks as possible, and we're confident we're not misusing customer data, gonna break trust, or end up with something that's harmful to the business.

Richie Cotton: Yeah. So this seems like a really important thing. You mentioned it's very common for a governance team to be the people blocking stuff, and that's a terrible outcome. You don't wanna be the department that says no. So yeah, do you have any more advice on how you can get good, I guess, cross-team interactions between a governance team and the people who rely on governance to approve or deny projects?

Blake Brannon: A couple of tips there. One, I think the language you use matters, and we have one of our customers who calls their governing program a data enablement plan, right? It's all centered around: how do we activate data? How do we activate AI use? We're here to help you do that responsibly, and we're here to be a partner with you doing that, right?

Not we're here to govern and to ensure compliance. So I think the language matters just to set the right tone from the onset. The second thing that I think is important, especially for people in governing roles, is you need to be a practitioner to be able to have credibility with the business stakeholders.

And what I mean by that is, you need to practice and use AI, even on the side, at a level that makes you come across as not living under a rock. You've gotta be someone that understands and can speak the language, walk with some swagger, come across as, hey, this person could almost say, I vibe code.

And maybe you don't actually do that, but you're familiar enough with the art of what's possible. You're familiar enough with how it works, maybe some of the lingo and language, as well as the value and benefit to the business. 'Cause if you can align with the business stakeholders on really deeply understanding that this is the future of how we've got to do this right in this part of the business, and you come across as credible, they're gonna be much more willing to understand your point of view on governance and ensuring safety and trust, right? And they're gonna lean into you on that front. The next thing I would say is there are a lot of frameworks for doing governance. One of the most popular is the NIST AI Risk Management Framework. So leveraging something that's a little more of an industry standard, that others in the industry are using, to guide your governing framework helps give you credibility that what you're asking them to do is something other organizations are doing as well.

It is a best practice in the industry, and oftentimes people are very familiar with NIST from cybersecurity practices. Most people use the NIST Cybersecurity Framework as part of their internal program, so putting the AI framework under that same kind of umbrella also comes with a lot of credibility.

And I think people's openness to say, okay, this makes sense, we're very familiar with that, we get why we would have to do this, is another key thing. And then lastly, just be a really good leader and a good cross-functional speaker and communicator. 'Cause at the end of the day, you're trying to get teams that are innovating with technology to ensure that they are putting some of the governing policies and principles in place and not just running wild. The worst thing you can do is swing the pendulum so far that people are like, it's too hard to try to follow your process or to do something, I'm gonna ignore it, I'm gonna go rogue and do my own stuff.

That's a loss for everyone. 

Richie Cotton: Oh, the process sounds too difficult, I'll just do my own thing. Yeah, that's a definite disaster. Okay, I really like your idea of just rebranding governance as enablement. It makes it feel a bit nicer. It's a very simple change that hopefully helps the different teams get on better together.

You mentioned the NIST AI risk framework. Now, I was interested in where you focus your efforts, and I guess a risk framework's gonna help you with this. Can you talk me through what the risk framework involves? Like, what counts as a high-risk AI use case? 

Blake Brannon: There are different classifications of the types of systems and the uses.

But the way to think of it, actually, the first lens is where you're putting the AI. Think about how big of a spotlight that has. Forget about AI for a moment: if you were, as a company, to go do something like a change in the branding and the terminology that is the front door to the company, think about all the eyes, all the scrutiny. Anything you do that has a huge magnifying glass on it is gonna be scrutinized, whatever you're doing, and people amplify it. And then most of the time, with AI projects, people further amplify it, 'cause they're so excited that they're using AI for something, so you put even more eyes on it. So wherever you're putting AI, I think there's a very simple lens of being aware of how much visibility this has for what you're doing as a company. That's one lens.

Just in general, it's like onelan. Then you want to think about, okay, with whatever we're doing, forget about frameworks and all these things. It's what are the potential harms that could come from us getting it wrong? And there's different harms that you could think about, things like.

Are we gonna lose just trust of the customers? Would the customers be proud of being aware of how we were able to do this? Did we use their data and information? And if we were to be very completely transparent, would they be okay with this, right? Or did we miss a step in doing it? Are people gonna feel like, we're treated unfairly in these types of things?

So the NIST framework does have methodologies for how you assess your own program, and the pragmatics: here's how you assess the maturity of your program, here are the steps and things you need to do, create policies, those types of things. But then you can go through these kinds of risk evaluation frameworks to evaluate what are the potential harms that could come from going live with this AI system, and have we assessed that risk, and what have we done to mitigate it?

And it prescriptively walks you through how to do that. And there's software, by the way, that takes that kind of framework as it is and helps guide you through it, automates steps in that process, things like that. 

Richie Cotton: Okay. I like the idea of having a bit of software assistance. If you do want to audit your use cases, I guess, or you've got a new project, you don't wanna be making stuff up from scratch and spending a lot of time working up risk rather than building stuff, so you want that process to be fairly straightforward? 

Blake Brannon: A hundred percent. There are a lot of steps in that process. Going back to the slowing down: a lot of the reason innovation teams and companies are frustrated with governing teams is just because the steps they have to go through to get those governing teams up to speed typically require them to do a lot of data entry and restate a lot of information that should be self-discoverable.

For example, what categories of data are going into this AI system? What AI model are we using? How does it run, and where's it hosted? These things, you bet if you could just read the engineering code, the documentation we have already, all these different artifacts, you would be able to answer those questions.

But most governing teams don't have systems and software that are that technically connected, right, and translate at that level. And that sucks up a lot of time and slows things down. So if you can shift to using a solution that enables you to skip those steps, you can automate doing that and focus the governing team's brainpower and capacity on making the risk trade-off decisions for the business, not spending a lot of calories and time trying to gather information and understand context from people in the business. 
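
As a rough sketch of that kind of self-discovery, a governance intake script could pull most of the questionnaire answers straight from artifacts a project already maintains. The file layout and field names below are hypothetical.

```python
# Minimal sketch: auto-populate a governance intake record from a project's own artifacts
# instead of asking the team to re-enter it. Paths and field names are hypothetical.
import json
from pathlib import Path

def collect_governance_metadata(project_dir: str) -> dict:
    """Read a config file the project already maintains and summarize what governance needs."""
    cfg = json.loads(Path(project_dir, "model_config.json").read_text())
    return {
        "model": cfg.get("model_name"),
        "hosting": cfg.get("runtime", {}).get("host"),
        "data_sources": [s.get("name") for s in cfg.get("data_sources", [])],
        "data_categories": sorted({c for s in cfg.get("data_sources", [])
                                   for c in s.get("categories", [])}),
    }

# Example usage, assuming a project folder with a model_config.json in it:
# print(collect_governance_metadata("./my_ai_project"))
```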

Richie Cotton: Yeah. Certainly, having to copy and paste stuff from a project description document into a governance document is not a great or exciting use of a lot of people's time.

Whereas, I guess, if you have the governance system hooked up to the GitHub repo or whatever, it's like, okay, you can see exactly what it's doing. That seems like a good automated solution. So suppose you wanna get better at governance. Maybe this is coming from your CEO, who says, okay, we're using a lot of AI, now we've gotta do better AI governance.

Where do you begin? What's step one? 

Blake Brannon: I would start with aligning with the CEO on what's driving him to want to say that. Is it tied to specific ROI of certain projects and AI initiatives that are most important to get right? Is it tied to compliance concerns, fines, regulatory conformity, these types of things?

So really understanding and aligning with what's the why behind that, I think is key. Then I think there's a huge element of, even though you want to govern everything, you've got to prioritize. And you've gotta prioritize with what are the projects that are gonna matter the most? Because at the end of the day, you only have so much time.

And even with as much automation as possible, you've still gotta focus on what you're gonna do and how you're gonna prioritize. So I think it's getting alignment on the why, and then, of all the AI initiatives that everyone's running around doing, which ones matter the most,

so we can really lean into 'em. Then I think you start to tackle, going back to that earlier point, which of these are the highest risk. And it's an element of which have the most visibility because they touch the most customers, or they're the most broadly visible on our site or our properties, things like that.

And then, what could go wrong that could be the most impactful, right? And a lot of times, a simple tip or trick: look at the data. Instead of trying to figure out, oh, could the AI hallucinate, which is something that's hard to determine, look at the data that's going into the AI. The data's much more deterministic, right?

So I know that this data is from certain people, right? Is it personal data? Is it governed under healthcare regulation? Is it governed under banking regulation? So just looking at the data: where did it come from, what is in that data, what categories of data is it, is it biometric data? Okay,

that is much more heavily governed from a compliance standpoint than things like email addresses. So just looking at that is gonna help you answer: if this thing did go haywire, what's the worst thing that could happen? If you look at those data sources, you can extrapolate from that pretty accurately. These are more important to focus on governing, and really making sure we're doing that well, versus these others, because the harm of getting it wrong is probably much more impactful. 
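
A minimal sketch of that data-first triage; the categories and tier assignments are illustrative, not a legal or regulatory mapping.

```python
# Minimal sketch: prioritize AI projects by the data categories they touch.
# The category-to-tier mapping is illustrative, not a compliance classification.
CATEGORY_RISK = {
    "biometric": 3, "health": 3, "financial": 3,
    "precise_location": 2, "contact_details": 1, "email": 1,
}

def project_risk_tier(data_categories: list[str]) -> int:
    """A project inherits the highest risk tier of any data category it uses."""
    return max((CATEGORY_RISK.get(c, 2) for c in data_categories), default=1)

projects = {
    "support-chatbot": ["contact_details", "email"],
    "face-matching": ["biometric"],
}
for name, cats in sorted(projects.items(), key=lambda p: -project_risk_tier(p[1])):
    print(f"{name}: tier {project_risk_tier(cats)}")  # review the highest tiers first
```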

Richie Cotton: Absolutely. I like that you're focusing on the high-risk projects and the high-visibility projects, 'cause these are the things that matter, and worrying about governing everything else at a later date. Okay, that seems like a good place to start. You've mentioned data governance and some of the things like healthcare cases, where there's a lot of governance around healthcare data because it's very personal.

Is there a difference between your approach to AI governance versus data governance?

Blake Brannon: Absolutely, but they're also coupled together when you think about the controls. AI fundamentally is just a new engine that is using data. It's existing data that we have, and we're having AI use it. Now, there are completely different things happening that add to the amount of governing you have to do, but doing good data governance is almost not a prequel, but a parallel, to doing AI governance. And when you look at an AI system, you have to answer two questions: what data is it using? From that, you're able to understand there are certain policies and governance that go into governing that data. And then, what is it doing with that data? This is actually the part that's more fascinating to solve. But historically, let me describe one point first: using data in itself is not, like, evil.

It's what you're using it for that causes problems. So to ground that: having someone's phone number, having that data, is not in itself good or bad. Having someone's phone number to do multi-factor validation for them to log in is normally a good thing. Using someone's phone number to send them SMS marketing messages without the proper consent or opt-in is normally a bad thing, right?

So it's the purpose: what are you using the data for? Historically, prior to generative AI and AI agents, what the data was used to do, the purpose of use, was something that was deterministically coded into a software system, right? Someone was basically, whether it was low code or actual coding, telling the software: use this data and do this thing.

And that required human judgment, human decisions, and typically human governing processes and human review; it moved at human pace. Now, when you fast-forward to the AI era, you have AI agents that are reasoning, and it's non-deterministic, right? We've given them a problem statement, an objective, and they look at the tools they're connected to, the data they're connected to, and they reason and determine what to do. So hypothetically, to give an example, you can imagine something like a customer support agent that has access to the engineering backlog of all the enhancements and tickets and bugs and things.

If a customer opened a support case about some issue, it would be able to look and say, this is a known defect, we're working on getting it fixed, 'cause it's got access to that. And that would normally be a great thing, 'cause you've automated an end-to-end support case. But then that same AI agent says: actually, what you asked about is similar to this other feature over here that's also in that same engineering ticketing system, and I wanna reach back out to you to let you know that not only is that bug fixed, but here are these three new features you might also be interested in.

Like, where did we go from doing customer support to now marketing something new to you, maybe an additional purchase? That was not something someone coded into the system. It was the AI system's reasoning that led it to do it. So, to get back to your question, that gives you some analogies for understanding that there's a difference in the controls and the guardrails you have to use to actually govern these systems.

And there's data controls that hit the data sources. Then there's AI controls that sort of do things like grounding the AI into the task at hand and making sure it doesn't deviate, making sure it doesn't respond to certain types of questions, right? 'cause you just don't want those types of things answered.

Like, this AI system that's supposed to do support should never answer political questions, right? That type of thing. So there are different guardrails and different governing problems you have to worry about in AI systems because of that non-deterministic nature of how the software technology works.
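
A minimal sketch of that kind of scope guardrail for a support assistant; the keyword check is deliberately naive and purely illustrative, since production guardrails typically use trained classifiers rather than word lists.

```python
# Minimal sketch: keep a support assistant inside its task and off restricted topics.
# The keyword lists are illustrative; real guardrails usually use trained classifiers.
import re

OFF_LIMITS_TOPICS = {"politics", "election", "religion"}
ALLOWED_SCOPE_HINTS = {"order", "ticket", "bug", "refund", "login"}

def route_question(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    if words & OFF_LIMITS_TOPICS:
        return "decline"                       # refuse restricted topics outright
    if not words & ALLOWED_SCOPE_HINTS:
        return "redirect"                      # politely steer back to support topics
    return "answer_from_trusted_sources"       # ground answers only in approved sources

print(route_question("What do you think about the election?"))    # decline
print(route_question("My login is broken, is this a known bug?"))  # answer_from_trusted_sources
```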

Richie Cotton: Absolutely. This seems like a particular problem with LLMs: a lot of these chat tools, think about ChatGPT, or Claude, or Gemini, all of these can answer almost any question in any tone of voice on any subject. But for solving business problems, if you're a customer support chatbot, you really don't want it talking about politics or religion, and you don't want it writing in a poetic style.

Yeah. 

Blake Brannon: And even beyond that, especially when you look at AI in the enterprise, it's coupled to enterprise processes and enterprise data, and it's the same thing. You don't want someone in marketing to get access to information that should only go to someone in finance, or even to a certain level of person in finance.

There's a whole new set of problems you have to think about and solve as you take AI out of the ChatGPT and Gemini hallucination-type concerns and actually put it into the enterprise. It explodes the problem set. It's much harder to solve. 

Richie Cotton: Okay. So I guess once you start thinking about governing agents, then is the trick to start with simpler agents where there's less surface area that you need to worry about?

Or can you just dive straight into, we've got sophisticated agents, we just need roles in place to govern them? 

Blake Brannon: I think there are definitely layers that I think about for how you start on your governing journey. There are basic things, like the example you gave: even for a chatbot that someone deploys on their site in the enterprise, you should probably put in the filtering guardrails to say, don't respond to these types of questions; only pull information and answers from these trusted sources of data, not anything on the internet, that type of thing. So there are very low-hanging-fruit basics that you should implement that are kind of layer one of defense. Then you get down to the next level of things, like, going back to the use case, if you're connected to the company's data lakes and databases.

But there's some data that shouldn't be included for certain agents or certain AI use cases. So going back to my customer example, I may have an AI system that has access to the data lake with customer information, and in that customer information, because we ship products to the customers, we have their physical address.

But for what the AI's doing, there are probably very few scenarios where I need the full physical address for that customer. So to prevent potential leakage or misuse of that information, I probably should mask that column of data out. And maybe, if I do need to know the area, I just mask it down to the postal code.

The AI system can only know the postal code, not the full home address of an individual, to mitigate risk. So these things come in layers. When you do your governing framework, you start to approach that methodology: you look at, okay, here are all the potential risks, we're gonna put default guardrails in place that are these broad strokes, but then we're gonna refine and refine, and we're gonna put additional controls at different levels.

Right? There are identity-level protections, there are AI agent-level guardrails, there are data-level controls. All these things become important mitigating measures, and they're all additive. 
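
To make the masking example concrete, here's a minimal sketch with hypothetical column names; in practice this would usually be enforced in the data platform or access layer rather than in application code.

```python
# Minimal sketch: mask the address column down to a postal code before records
# are handed to an AI agent. Column names are hypothetical.
def mask_address_to_postal_code(record: dict) -> dict:
    masked = dict(record)
    full_address = masked.pop("home_address", "")
    # Keep only the trailing postal code token; drop the rest of the address.
    masked["postal_code"] = full_address.split()[-1] if full_address else None
    return masked

customer = {"customer_id": "c-123", "name": "A. Customer",
            "home_address": "12 Example Street Springfield 62704"}
print(mask_address_to_postal_code(customer))
# {'customer_id': 'c-123', 'name': 'A. Customer', 'postal_code': '62704'}
```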

Richie Cotton: Okay. Yeah, I can certainly see how there's scope for a lot of big problems there. If someone starts asking a chatbot, oh yeah, can you gimme the addresses of your other customers in my area, that sort of thing, that could become a real problem if it starts giving out personal details. 

Blake Brannon: Hundred percent. Yeah. Which is the importance of AI governance, right? You wanna make sure that teams that are just trying to move fast to turn on those things think about those scenarios. And it's not like we're gonna, as a society, nail every scenario and get it all right and prevent everything. But you want to try to mitigate the things you can upfront and be able to constantly be measuring and observing, so when you see something start to happen, you're able to quickly react to it. 

Richie Cotton: Okay. Absolutely. And you mentioned compliance before.

So are there any particular laws people need to be aware of, or particular compliance requirements, or particular rules they need to comply with? 

Blake Brannon: There are lots of emerging AI regulations and laws. The big famous one in Europe is the EU AI Act. Even in California, you have an automated decision-making act.

Colorado has an AI law that's been proposed. Regardless of even those coming out, one of the other things most people forget is that there are laws that are enforceable and effective today that govern AI, and they're called data protection laws and regulations. And because, going back to the point I made earlier, most AI systems use data,

those data protection regulations and laws don't care if it's AI or non-AI. It's using data for something, and you still have to adhere to that. So even the GDPR, the CCPA here in the US, and various other state laws and regulations are all applicable to AI systems. But there are new, specific regulations coming out because of the difference in the technology and the concerns around it, and those are gonna further add to the regulations that organizations have to navigate.

This is something we focus a lot on, because it's a challenge for organizations to just keep up: what do I have to comply with? Tell me which regulations and policies. 'Cause they then have to translate that internally to the teams to say, we can or can't do that. It's derived from: what are all the things that govern me as a company and organization that I have to adhere to, so I can translate that into a process and a program that I can roll out to my company.

Richie Cotton: And I get the feeling, for product teams, it's not the most exciting thing to work out whether my feature complies with a particular law or not. Are there tools to help automate this sort of thing? 

Blake Brannon: Increasingly, more and more every day. And the concept that I think people wanna get to is, can I just call an API to solve compliance?

Right? To tell me: here's my code, tell me if it's compliant. Here's my website, tell me if it's compliant. Here's this AI project, tell me if it's compliant, and when you find issues, no doubt you're going to help me mitigate and remediate those issues. And that typically comes with some of these controls I was talking about: yeah, let's mask the data and take it out of it.

Boom, now it's fine to do. And that's really interesting because it gets back to that point of the data enablement plan and platform: how are these governing teams effectively enabling and activating AI and data use for the business and speeding it up? This is actually really important.

You want to be able to speed up your ability to put more AI and more data online. That's actually a good sign of a healthy program: you're making more use of data and you're able to go live with projects faster. That's when you know you've got something humming. And doing that as automated as possible is what every organization is progressively moving toward.

Richie Cotton: Okay. Yeah, I love that idea of calling an API and just saying, does it comply with the laws or not? Easy peasy. So we've had a recurring theme throughout the show about automating a lot of compliance and a lot of governance. Do you have a sense of which bits of governance are easy to automate at the moment, and which bits still require a human in the loop or have to be done manually? 

Blake Brannon: The part that is much more automated today is the information-gathering piece. It's going into the code, going into the AI project, going into the database and figuring out what all the data is, categorizing the data, classifying the data, what is it doing, what AI system, what LLM.

Automating that piece of it is something that, a, is possible to do, and, b, organizations are very open to allowing to happen. The part that I think everyone's maturing in is the ability to truly have AI mimic human judgment on risk decisions, right? Basically assessing that business trade-off of, is this okay to do or not okay to do?

It's a risk decision, and there's a lot of context and human judgment that goes into that. And, a, from a comfort standpoint, people aren't ready for that to be automated yet, right? People wanna be in the loop, human-in-the-loop control: bring me all the information so I don't have to manually gather it, but let me make that business decision and call.

And then, b, I think the technology is progressing, it's moving fast, but it equally isn't there yet to be able to truly mimic all those different parameters that people hold in their heads, that different context, and make the same decision consistently and accurately and confidently. But it's gonna get there.

And I think what you'll see over time is, as the same decisions get made, the primitives of decisions that historically get made again and again, people will be more and more comfortable with AI automating those. It's a low-risk activity, it's a low-risk decision, and it's high volume, so we're gonna be comfortable with automating that level of decision.

And then of course, the high risk ones are probably always gonna have human in the loop review. 

Richie Cotton: Absolutely. Yeah. If something is gonna have an effect across your whole business, you probably don't want it to be like, oh, what does ChatGPT say? You probably want the human to make the decision at the end.

Okay. But I do like the idea that once you've made the same decisions lots of times, that's gonna give, I guess, data that can feed into some sort of automation system. And then, at least for the low-risk cases, you'd be able to use reinforcement learning. You're taking all these risk decisions, you're reinforcing the learning, and then you're looking at a new scenario and you're like, wait, this is basically the same as these other scenarios.

Blake Brannon: Or you break it into, again, the fundamental primitive parts, decompose the problem, and then you're able to say: I don't need you to evaluate this whole decision again; most of it matches your historical assessment, you've already decided this, and here's the net-new piece that you should look at and consider. And again, why do we care about that? It speeds up the time to get through that risk review and decision process so that you can go live with the project.

Richie Cotton: That's brilliant. And I think one thing we're maybe missing is that people are gonna need to understand something about AI governance throughout the organization. So can you talk me through what sort of skills you think you need? And maybe it's different for different teams, so who needs to know what across your organization about AI governance? 

Blake Brannon: Interesting question. There are different altitudes you could answer that at. One being, if you fast-forward: I'm a firm believer that, as you fast-forward, what data use in the world and what software computation is not gonna be using AI? It's kind of like asking what percentage of software uses electricity. It's gonna be essentially all of it. I'm not saying everything's gonna be non-deterministic, but it's hard to imagine any product or process that doesn't have AI in some portion of it. So when you think of it that way, this is just the new normal. Just like everything in a company is a digital process, we're gonna say every process in a company is an AI-powered digital process. So we all in an organization are gonna need to understand the new primitives of AI governance that we all have to learn, and that have to become part of our default muscle memory and things like that in our organization.

So I think it'll evolve to some kind of state like that, where everyone's gonna need to have some baseline understanding of how the technology works, how to use it, how to have the spidey sense, the sniff test of, something feels off here, right? That judgment. I don't know how to articulate it exactly, but it's like the way we would receive a spam or a phishing call today, and we all have signals that start to go off, 'cause we've been trained and conditioned that this doesn't feel quite right compared to what's normal, and we know this is something that could happen. There's an equivalent of that with all AI, right? And I don't mean just deepfakes, but any AI, where the human starts to build these judgment calls up.

Where we can see and feel that something is going off, right? Something's not right with this process all of a sudden. So I think that will develop over time. It'll become, just like today where you have people that just have a knack for sensing that something in the technology stack is off, you're gonna have a little bit of that skill set build up, where people are gonna be able to pinpoint and isolate where they think,

Hey, there is an issue here and there's something to go look at. 

Richie Cotton: Okay. Yeah, so I love the idea that everyone's gonna need some level of understanding of AI governance because everyone's gonna be using AI, but also it's cool having a spidey sense for, this is something that's gone wrong with AI. Yeah.

A little superpower there. Before we wrap up, I'd like to talk a bit about success. How do you know whether AI governance is working? How do you know when your initiatives have gone well? 

Blake Brannon: I go back to the time, and the number of AI uses that you're doing as an organization, being a good, healthy signal that you're confident in the technology and deploying it at scale.

That means your governing process seems to be working. And if you're able to say the time it takes for us to review a new initiative and get it live, from a governing perspective, is reduced and going down, that's a good indicator of a successful program. And it shows that partnership: you've invested in the automation,

you've invested in the ability to repeat those risk decisions in a reliable way, and you've invested in a partnership with the business where they're reviewing and leaning into the AI governing processes and teams, and you're reacting and responding at the speed they need you to, right?

They don't want to go rogue and do something separate or just ignore your process altogether. So the speed of your program and the volume of governed use cases, I think, are the two top things that define success. 

Richie Cotton: Okay. I like that, because you're seeing it as productivity gains. They're really about the speed of governance, and they're happening before something bad has happened, rather than, we've had no algorithmic disgorgement this year, that's our measure of success. 

Blake Brannon: Right? Yeah. That's a good indicator of success, but I think those are kind of lagging indicators, whereas I think the ones I said are a bit more leading. But yeah, you're right. At the end of the day, ensuring you didn't have something go wrong in production, whether it was a fine or a regulatory penalty, honestly, those are not as impactful as the distrust you have with your customers and your market because your AI, a, is massively wrong and does something bad,

it takes a bad action like deleting the database in production, or, b, you use customer data that you shouldn't and you end up with mud on your face, right? And you have to rebuild trust from that, which is really bad. Maybe you get a fine or a penalty or something, but those seem pretty minute compared to, wow, you had to delete all your customer data and you lost trust with all your customers. That's harder to recover from. 

Richie Cotton: Absolutely. Trust with customers, that's an almost invaluable resource there. Yeah, certainly something you don't wanna squander on some silly project. Alright, super.

Finally, I always want more people to follow. Whose work are you interested in right now? 

Blake Brannon: I'll give two that we highlight publicly. One is TELUS, which is a telecommunications company in Canada. Their team there, run out of the privacy group, is who actually coined this data enablement plan term: it's not a governing risk assessment process,

it's a data enablement plan. And what I love about the work they're doing is they don't view their work as governing, as let's mitigate risk and prevent bad things from happening. It's: how do we activate the business and enable this data use? And the second thing they do is they don't think of it just from one risk domain.

Most organizations have privacy as one group, AI governance in some cases as a separate group, compliance as a third, security as a fourth. They're different frameworks, different processes, different teams, and often different tools they're using. They're really unifying those together.

They think of it as like, how do we converge all these things into one thing that respects all those different disciplines and domains and frameworks, but allows us to deduplicate the overlap so the business gets a better experience. And that's part of what speeds up the process. So unifying that together is a key thing.

I see that trend happening across a lot of customers, but they really highlight it and come out on top, I think, really pioneering it and getting a lot of innovation awards for doing that. So I really like the work they're doing there. The second one I would say is one of our key partners, Microsoft, and the group under the security group at Microsoft that really had this, I think, very leading vision for how

AI agents are gonna fundamentally change the way security operations and security operations centers work. And it puts us in this mindset. We're seeing it in a lot of different groups in a company: how AI agents are gonna be doing the lion's share of the work that you have to do as part of compliance or governance teams.

You can almost imagine there's an AI agent that is governing the AI use that is happening. And when an AI use in the business is reasoning and making a decision, they're not gonna ping a human to say, is this okay to do? They're gonna ping an AI agent, and they're gonna reason with an AI agent that is skilled on compliance and governing principles and data ethics and these things, to reason with: wait,

you can't do that, because we as a company have to comply with these things and we have these data ethics standards that we adhere to, and things like that. So that's just super fascinating. I believe that is the future of where we're gonna be going as governing and compliance teams, where AI agents are really augmenting. Obviously human in the loop is still very key, but they're massively augmenting and giving scale to all these teams that are trying to govern and comply with this continuing-to-explode use of data that is happening.

The volume of data, the number of AI use cases. I think they just have a great vision, and we've been working a lot with that team, partnering, building some things, and integrating, but I think that is the future of many things in enterprises, and we're excited to be on that journey as well.

Richie Cotton: Absolutely. Some very cool stories there. And I love the idea you mentioned from TELUS, where it's a mix of, you've got the technology there, you've got the process there, you've got the people, and all those things are coming together. And yeah, certainly the Microsoft story is, okay, your agents are checking all the AI governance. I guess it's probably more like AI agents checking the AI agents. So it's probably just agents all the way down, right? 

Blake Brannon: Yeah, exactly. It's a good analogy. But if you think about the sheer volume, think about a security operations center and the amount of events coming in: even pre-AI, with the events coming in, it's impossible for enough humans to be hired in a company to go look at every single event and investigate it. It was impossible. You can't do it. You have to prioritize today. And when you have outside actors, and the bad actors are gonna start using AI agents to try to attack, you're talking about orders of magnitude more of those events; they're gonna explode.

The only way to defend against that AI is with your own AI, and you have to have your own sort of AI agents that are doing this. And the same is true for these compliance and governing teams: it's impossible to go look at every single scenario of, here's a new AI agent that is using or reasoning about something, or new data that was added.

So AI agents help scale these teams, and going back to the very first point I made about a billion AI systems and AI agents running in just a few years, they're gonna give us the scale to confidently govern all those agents. 

Richie Cotton: Okay. Yeah, definitely a lot to think about, and there are some big challenges here just to make sure that all your governance does scale to, I guess, match your AI adoption ambitions.

But yeah, exciting time for sure. Alright, super. Thank you so much for your time, Blake.

Blake Brannon: You bet. Thanks, Richie for having me.

Related

Podcast

The Challenges of Enterprise Agentic AI with Manasi Vartak, Chief AI Architect at Cloudera

Richie and Manasi explore AI's role in financial services, the challenges of AI adoption in enterprises, the importance of data governance, the evolving skills needed for AI development, the future of AI agents, and much more.

Podcast

Scaling AI in the Enterprise with Abhas Ricky, Chief Strategy Officer at Cloudera

Richie and Abhas explore the evolving landscape of data security and governance, the importance of data as an asset, the challenges of data sprawl, and the significance of hybrid AI solutions, and much more.

Podcast

The State of BI in 2025 with Howard Dresner, Godfather of BI

Richie and Howard explore the low penetration of BI in organizations, the importance of data governance and infrastructure, the evolving role of AI in BI, and the strategic initiatives driving BI usage, and much more.

Podcast

How to Build AI Your Users Can Trust with David Colwell, VP of AI & ML at Tricentis

Richie and David explore AI disasters in legal settings, the balance between AI productivity and quality, the evolving role of data scientists, and the importance of benchmarks and data governance in AI development, and much more.

Podcast

Aligning AI with Enterprise Strategy with Leon Gordon, CEO at Onyx Data

Adel and Leon explore aligning AI with business strategy, enterprise AI-agents, AI and data governance, data-driven decision making, key skills for cross-functional teams, AI for automation and augmentation, privacy and AI, and much more.

Podcast

Building Trust in AI Agents with Shane Murray, Senior Vice President of Digital Platform Analytics at Versant Media

Richie and Shane explore AI disasters and success stories, the concept of being AI-ready, essential roles and skills for AI projects, data quality's impact on AI, and much more.