
Governing Pandora's Box: Managing AI Risks with Andrea Bonime-Blanc, CEO at GEC Risk Advisory

Richie and Andrea explore the rapid advancements in AI, the balance between innovation and risk, the importance of adaptive governance, the role of leadership in tech governance, the integration of ethics in AI development, and much more.
Feb 2, 2026

Guest
Andrea Bonime-Blanc

Andrea Bonime-Blanc, JD/PhD, is founder and CEO of GEC Risk Advisory, a board member, strategic advisor, and award-winning author. She specializes in the governance of change, advising companies, NGOs, and governments on global strategic risk, leadership trust, geopolitics, sustainability, cyber resilience, and exponential technologies. A former C-suite executive at four global companies, including Bertelsmann and PSEG, she has held roles spanning legal, risk, ethics, sustainability, and cybersecurity, and currently serves on multiple boards and advisory boards.

Andrea is a Senior Fellow at The Conference Board, NYU’s Center for Global Affairs, and an AI Ethics Strategy Fellow at the American College for Financial Services. She is a sought-after keynote speaker and media commentator, appearing in outlets such as Bloomberg, the Financial Times, and The New York Times. She is the author of several books, including Gloom to Boom and most recently, Governing Pandora: Leading in the Age of Generative AI and Exponential Technology.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

We have people leaving companies like OpenAI because they felt that the safety and the guardrails weren't very strong. So they've gone to other organizations to create their own companies to put safety first. We have not seen a safety disaster yet, but we're going to see something, and the moment we see that, there'll be a rush to creating better governance, better risk management, better auditing and better red teaming.

In the age of AI, you have to be discerning. That's really, really important right now. There's so much garbage out there and so much slop, and more and more of it being created every day by AI. So we need to be discerning, selective, not overwhelmed.

Key Takeaways

1

Treat AI governance as a 360-degree, lifecycle discipline: build cross-functional oversight from model inception through decommissioning, instead of relying on a last-minute 'safety review' right before launch.

2

Embed ethics/compliance expertise directly into data and ML teams early (data sourcing, data quality, evaluation, release gates) so governance becomes a build-time constraint, not a separate group that blocks shipping after the fact.

3

Align incentives to reward how systems are built, not just that they ship—tie performance metrics to pre-release testing, red teaming, auditing readiness, and documented risk tradeoffs so speed doesn’t systematically outrun safety.
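As a loose illustration of takeaways 2 and 3, here is a minimal sketch in Python of what a build-time governance gate might look like: a release candidate ships only if evaluation, red-teaming, and documentation evidence is in place. All names, fields, and thresholds below are hypothetical, invented for illustration; they come from neither the episode nor any specific governance framework.

    from dataclasses import dataclass

    @dataclass
    class ReleaseCandidate:
        # Hypothetical evidence a team would gather before launch.
        name: str
        eval_coverage: float              # fraction of planned evaluations completed
        open_red_team_findings: int       # unresolved red-team findings
        data_provenance_documented: bool  # data sourcing/quality reviewed
        risk_tradeoffs_signed_off: bool   # documented risk tradeoffs approved

    def governance_gate(rc: ReleaseCandidate) -> list[str]:
        """Return blocking issues; an empty list means the gate passes."""
        blockers = []
        if rc.eval_coverage < 0.95:  # assumed threshold, for illustration only
            blockers.append(f"evaluation coverage {rc.eval_coverage:.0%} is below the 95% target")
        if rc.open_red_team_findings > 0:
            blockers.append(f"{rc.open_red_team_findings} red-team finding(s) still open")
        if not rc.data_provenance_documented:
            blockers.append("data sourcing and provenance not documented")
        if not rc.risk_tradeoffs_signed_off:
            blockers.append("risk tradeoffs lack documented sign-off")
        return blockers

    candidate = ReleaseCandidate("summarizer-v2", 0.90, 2, True, False)
    issues = governance_gate(candidate)
    print("CLEAR TO SHIP" if not issues else "BLOCKED: " + "; ".join(issues))

The design point of the sketch: governance checks run as part of the release pipeline, so "the team that says no" is replaced by criteria every builder can see and satisfy before launch.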

Links From The Show

Andrea’s Book—Governing Pandora: Leading in the Age of Generative AI and Exponential Technology

Transcript

Richie Cotton: Hi, Andrea. Welcome to the show.

Andrea Bonime-Blanc: Thank you so much for having me, Richie. I'm looking forward to our conversation.

Richie Cotton: Yeah, great to have you. So I think over the last few years, AI has been advancing incredibly rapidly. I guess first of all, what's the biggest risk you've seen from the rapid advances in AI?

Andrea Bonime-Blanc: I think the biggest risk is that we're not keeping up with both the upside and the downside: people are not looking at AI and tech risks in a holistic way, keeping up with the advances and also the downsides, and making sure to protect and maintain an equilibrium of sorts.

That, to me, is a very big risk. And unfortunately we have the very bleeding edge of tech change on the one hand, which doesn't really care about the risks as much, and then we have the overly cautious, risk-averse crowd that is trying to drag them down, in the words of the accelerationists, let's call them.

And so we have that tension between different kinds of functions in the pursuit of new technology. And I think we have to have a more holistic and more balanced approach in general. 

Richie Cotton: Yeah, I suppose both extremes are problematic. If you're just gung ho, we'll adopt the latest technology, we'll build the latest thing, and not really worry about risks, that's a terrible idea, 'cause something bad's gonna happen. But also if you're like, okay, we must do nothing because it could be dangerous, then you're overly cautious and you're not gonna get those upsides from the new technology.

Andrea Bonime-Blanc: That's right. 

Richie Cotton: So somewhere in the happy middle, there's a state where you are creating new technology but governing it.

And again, it seems like when technology's moving fast, governance becomes a problem. How do you deal with that? 

Andrea Bonime-Blanc: I think governance has to adapt as well to the speed of change that we have. I like to call it the governance of change in the work that I do and in the book that I wrote, which we'll talk about later. The governance of change means that we all have to have an attitude change.

We all have to adapt ourselves to the fact that change is going to happen, and it's gonna happen fast, and in a multifaceted way. And so if we as individuals, professionals, experts, and teams don't try to adapt to that, we're going to lose. And it goes back also to the leadership of organizations, companies, NGOs, government agencies, et cetera, to really take this seriously and see it as something that will grow and slip out of their hands if they don't create a proper adaptive structure to deal with it.

Richie Cotton: So I like the idea that everyone has a role in this, from individuals right through to NGOs and government, and I guess business leaders as well. You said if we don't take this seriously, it might slip out of our hands. So what happens then? Tell me, gimme a disaster story.

Andrea Bonime-Blanc: We haven't seen a major disaster yet, but we're hearing a lot of, I would say, small stories, or stories that are hidden beneath the surface, that come out through whistleblowers maybe, and others who are concerned that some of the advancements being made with LLMs and gen AI are actually potentially very dangerous.

And so we have the red alerts that we sometimes hear about from the likes of OpenAI. We also have people leaving companies like OpenAI because they felt that the safety and the guardrails weren't very strong, so they've gone to other organizations or created their own companies to put safety first.

And we have not seen that safety disaster yet, but we're gonna see something. And the moment we see that, if it's big enough, there'll be a rush to creating better governance and better risk management and better auditing and better red teaming and all this stuff that we talk about but don't always do.

And so we're in the rah-rah phase of doing anything and everything and asking for forgiveness later. And sadly, I think the forgiveness piece will cost, and it could be lives, it could be money, it could be the safety of people. We don't really know, and I think that's why we need to have a holistic, integrated approach to governance, from the top down, from the middle out, and from the bottom up. And the people who are on the front lines

of doing the IT, the technology, the testing, the development, they're on the cutting, bleeding edge of all of this, and they need to have help now, not later when something bad goes wrong.

Richie Cotton: Yeah, so it just seems like a common thing is that you wait until something goes wrong and then that's the point where you start fixing things.

But obviously it's better to do things before they go wrong. And that points to a very divided culture: you've got all the people trying to ship stuff, and they're doing that as fast as possible, then you've got a separate safety team, and they're like, no. And occasionally someone from the safety team gets annoyed and leaves, and I guess that's how Anthropic was formed as a company: people leaving the safety team and starting again. Governance sometimes doesn't have the cool image; it's seen as the team that says no. So talk me through: what does good governance look like? What's the point of it? And what happens when governance goes wrong?

Andrea Bonime-Blanc: Yeah. I think it really is a whole of organization effort.

It isn't just the board of directors or the tech governance around a specific technology. It's a fully integrated, holistic approach to governance. And I mentioned before the top down, the middle out, and the bottom up. I talk about that in my book Governing Pandora, in a chapter

called Leadership, and it's about the governance, basically. Everybody has a piece of this, but it takes governance from the top to make it happen. So if the board of directors or the CEO or the management team aren't really on this issue, because they haven't had their disaster yet or their big risk come to fruition or anything like that,

then you're gonna have more of a lackadaisical attitude at the top, and it'll only change if there is that incident or that regulatory requirement that comes along, something from the outside. But it really starts, I think, with leadership from the top, and the board of directors really getting that tech governance is a whole-of-organization effort, and that you have to incentivize the rest of the organization to work in concert to make sure that,

at the very inception of the new algorithms, or of bringing in data and looking at its shape and content and how it then becomes part of your own algorithm or software program or what have you, at that very granular level, we have not only the technology and software developers but also some ethics folks who can ask the what-ifs.

They can test some of this stuff before it goes into the next stage. So it's a lifecycle approach to governance, is how I like to think about it. It's everybody taking part. It has to come from the top in the first place, because otherwise it won't happen, or it'll only happen in

sporadic parts of an organization. And then it has to be for the life cycle of the products and services that you're creating, from inception to termination or decommissioning of a product, and having the people who are involved with the development, implementation, sale, troubleshooting, et cetera,

include some form of ethics experts and governance experts who can help troubleshoot some of the downsides of what's potentially gonna happen.

Richie Cotton: Okay. I think you named a lot of different teams and roles in the company; basically everyone's gotta do something, so I like that. Maybe let's start with the top then,

'cause you mentioned this has to come from leadership originally. What's the role here? How do you go about setting a governance vision or creating a governance culture?

Andrea Bonime-Blanc: I think the top frontline responsibility is always with the CEO, or the president, the leader of the organization, to be tech and governance savvy enough to understand that

the vision, mission, and values of the organization have to include some of this integrated tech governance. And again, every company's a little different. If you're a pure technology company like a Microsoft or an Anthropic, that's one thing. If you are in the auto business or food business or banking business, it's another.

But I would say that we're all technology companies now. There isn't a single company out there that isn't a technology company, so the degree of how much technology you have to govern is gonna be a little different perhaps. But everybody's using AI, everybody's using gen AI and multiple other technologies that might be interconnected with the AI and with robotics, automation, advanced materials, biotech.

All this stuff is interconnected at the end of the day. So I go back to the CEO and the leader of the management piece of the organization to really set the right tone from a values and mission and vision standpoint. Then that has to trickle down into how human capital organizes the incentives around these things.

It's not just about getting the software and the algorithms created and then deployed and sold; it's about how you're doing it. And so incentivizing the how: doing it ethically, responsibly, accountably, doing it in a way where you will test your products in multiple different ways before you actually release them into the wild and to your customers.

So it starts with the CEO and the management team, but there's that layer of the board of directors. I specialize in the governance piece at that level, but also in how it interlaces with the rest of the organization. Boards of directors usually lag behind what is necessary at the management front line; that's changing, slowly but surely.

But the point here is that the board of directors needs to have savvy people on it who understand the technology and the change, and who are ready for that governance of change, not just governance the old-fashioned way, the traditional sit-back way. I sit on a board versus I serve on a board: those distinctions are very important right now.

You have to serve proactively on a board in order to get what's going on with technology, and then adapt to what your company needs and be more visionary and more scenario-oriented than you've ever been. So that's a responsibility at the board level, although the direct responsibility, in my opinion, is the CEO's.

Richie Cotton: This is interesting. Boards of directors are slightly above my pay grade; I don't interact with a lot of them. Okay, so it's interesting that the board needs to be actively educated in order to have some kind of impact. Can you talk me through how that impact works, then?

What do they do?

Andrea Bonime-Blanc: The board of directors is supposed to supervise and oversee the full strategy of the company and the management of the company, hold the CEO and the top team accountable through performance metrics and incentives, and make sure that the reputation of the organization is being protected.

And there also has to be forward-looking thinking: where is the business going? I like to think of them as the strategic and culture guides for the organization. And if something's going wrong at the CEO level or at the management level, they're the ones who have to intervene.

And the chairperson of the board is usually the person most directly involved with the CEO. Leave aside the fact that in some companies we have the dual role of chairman of the board and CEO in one person, which is not good governance, in my opinion, because it means that there isn't a check and balance between the chair

and the CEO. So the governance at that level has to start with selecting the right people to be on the board, not just the cronies, the friends and family of either the CEO or the chairperson, but people who are actually qualified to be overseeing a business of whatever tenor it is, right?

If it's automotive, if it's robotics, if it's biotech, it doesn't really matter. But do you have the right people, who understand the business, who understand the financial wherewithal and implications of the business, and then these other things, like technology, sustainability issues, issues that have to do with the long-term strategy of the business?

And also crisis and risk management, because we are living in what someone else termed a poly-crisis world, which I think is a very good way to describe the world we're living in. I've added a little nuance to that: I call it a poly-risk world too, because our risks are so complex right now with the infusion of technology and other different things.

For example, a cyber risk in Ukraine right now would be a geopolitical issue and a cyber or tech issue, and it could also involve automated machines like drones. So it's a complex, multifaceted risk. The people on the board have to understand these things and have to be continuous learners.

This, to me, is the most important thing. We all have to be continuous learners, but the people who are the guides of the organization, the people who supposedly hold the CEO accountable, have more responsibility than anyone to be continuous learners and to be ahead of the curve: understanding and benchmarking their own industry, but also

the competitors beyond their own industry that might come into the fray.

Richie Cotton: Okay, yeah, certainly a lot to take in, and I like the idea of a poly-risk world. I don't know whether it's just that there's more news and more awareness of all the different things going on, but it does seem like at any given moment there's always some kind of crisis somewhere, or something about to happen.

So yeah, certainly lots of uncertainty. Okay, we've talked about the board and the C-suite. You also mentioned that a lot of different teams need to be involved in AI governance. So how do you implement that, tactically, within an organization? Do you need a committee, or is there another way of doing this?

Andrea Bonime-Blanc: Great question. I'll use an example that I implemented in a company where I was a senior executive. I was in charge of risk, audit, corporate responsibility, and cyber. For the risk piece, we had an interdisciplinary enterprise risk management team. We were a publicly traded company, so we had to report quarterly and annually, those kinds of things.

So we kept a very fulsome risk register, did surveys every year, and collected information to inform our enterprise risk management posture, which then would go up to the management and the board. This team would meet every six weeks or so, and it included people from all walks of life within the organization.

It was a technology company, so we had very senior people, like the general counsel and the chief financial officer, but we also had more specialized junior people who were in charge of one risk, for example export controls. So we had these people around the table every six weeks reviewing the latest information coming out of the enterprise risk management exercise,

and anything new that came up. So it was this coming together of multiple different minds and lenses around the risk profile of the organization. To me that's always been a great model. And one other thing that we did: if we identified a new high-impact, high-likelihood risk, we would put together what we called at that point rapid deployment forces,

which doesn't sound very kosher at this point: two or three people who actually understood, or needed to understand, that particular risk better. So if it was cyber, you would have an IT person there, you would have a risk person there, and you would have a financial person there,

just to pull an example out of a hat. And those rapid deployment forces would then go and really study the issue, get to the bottom of the information that we had as a company, and then come back and report to the larger cross-disciplinary team. So to me that is a really good model of how you, first, learn about your risks and stay on top of them as best you can.

And you need all kinds of tools and information flow coming in, of course, but then you identify the risks. A similar model can be used for AI risk, because we have a multitude of AI risks out there. There's this wonderful repository that MIT keeps, for anybody who's interested.

It's the AI Risk Repository. It's free, it's online, it's constantly updated, and it has just a vast wealth of resources, maybe too much. But for those of us who are in the risk space or the audit space, or even the technology space, it's a very useful place to go and see what the latest is. Someone on either a rapid deployment team or an interdisciplinary team looking at AI would want to make use of those kinds of resources.

Also other resources that benchmark the industry on AI risk. And of course you look at your own information that's percolating up, whether through reporting, audits, or whistleblower reports. We go back to OpenAI: whistleblowing was not working well at OpenAI, and so there was a lot of not speaking up, because there was fear of retaliation.

And so you need to have a robust speak-up structure in your organization that allows people to speak up about these kinds of issues without fear of retaliation. So that's all part of it. What I would say is, you have to find the right structure for your organization (every organization is a little different), the right people to populate that structure, and make it a very agile, adaptable group of people who are really interested and can really find the resources they need to be able to inform the organization.

So that's what I would say is a great way to operationalize some of this: have those structures and people in place.
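As a loose illustration of the interdisciplinary risk-register process Andrea describes, here is a minimal sketch in Python: each risk gets an impact and likelihood score, and anything over a threshold is flagged for a small cross-functional "rapid deployment" review. The 1-to-5 scales, the threshold, and the example entries are assumptions made up for illustration; they are not from her program, the MIT repository, or any standard.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        impact: int           # 1 (negligible) to 5 (severe); scale is illustrative
        likelihood: int       # 1 (rare) to 5 (near certain)
        reviewers: list[str]  # cross-functional lenses, e.g. IT + risk + legal

        @property
        def score(self) -> int:
            # Simple impact-times-likelihood exposure score.
            return self.impact * self.likelihood

    ESCALATE_AT = 16  # assumed cutoff for convening a rapid deployment force

    register = [
        Risk("Prompt injection in customer-facing chatbot", 4, 4, ["IT", "Risk", "Legal"]),
        Risk("Unclear licensing on training data", 5, 2, ["Legal", "Data"]),
        Risk("Model drift in internal forecasting tool", 3, 3, ["Risk", "ML"]),
    ]

    # Review the register with the biggest exposures first, flagging anything
    # above the threshold for a dedicated two-or-three-person deep dive.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        status = "ESCALATE" if risk.score >= ESCALATE_AT else "monitor"
        print(f"{risk.score:>2}  {status:<8} {risk.name} (reviewers: {', '.join(risk.reviewers)})")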

Richie Cotton: Okay. So yeah, certainly if you keep shooting the messenger who brings bad news, you can run out of messengers pretty quickly. And I like the idea that you do need that culture of being able to talk about risks in a central way.

Now, you mentioned that these typically are cross-team initiatives, with two or three people from different groups, to use your example. Within individual teams, though, particularly technical teams like a data team or a product team,

are there any particular roles around governance that you think people should be aware of as you're building stuff?

Andrea Bonime-Blanc: Absolutely. And we're all, I'm sure, familiar with the wealth of new AI-related roles that are arising. So one of the things I encourage, certainly for some of the younger people coming into the workforce, and for people who traditionally have been interested in and served in roles in ethics and compliance, regulatory affairs, those kinds of things.

Those people traditionally have been working in silos over here while the technical people are over there doing their thing, right? What I'm saying is that in the frontline technical teams, the ones dealing with the data that's coming in, or looking at the quality of the data, or building the algorithms and the software,

you need to have someone come in with a different set of lenses. And those lenses are more about ethics: the ethics of what you're doing, questioning where the data's coming from and whether it's quality data, helping the technical team look at issues from more of an ethics, compliance, regulatory, and risk standpoint.

So you wanna have a few people like that peppered into the teams that are on the front lines, at the very early inception of things, so that lens can come in at an early stage and you don't end up with Google Gemini putting out visuals of Black presidents from the 18th century. And much worse, of course, has happened.

But you don't want that sort of thing to happen. Maybe you can't prevent it entirely, but you have to make an important effort to think about those things early on so they don't spin out of control later.

Richie Cotton: Absolutely. I do like the idea of embedding these skills and capabilities around governance in frontline teams.

Otherwise, you have that tricky situation where you've got a separate governance team, and the governance team becomes the team that says no, because the other team has done something silly that could have been avoided months before.

Andrea Bonime-Blanc: And this is one of the biggest problems, I think, in any kind of organization: jurisdictional battles.

I spent years as a senior executive at four companies, and you see the jurisdictional battles: if I'm general counsel, I have control over this; if I'm risk, I'm controlling that; if I'm CFO, likewise. Those kinds of jurisdictional battles are as natural as human nature, but we have to try to overcome some of this.

And again, that's why I keep going back to the CEO and to the board. They have to set the tone for a culture that incentivizes the top managers and mid-level managers and others to work together, because we're not gonna solve these problems if we don't work together, cross-functionally and cross-disciplinarily.

These questions and these issues and these technologies are so complicated, and their implications are so unknown, that we really need to come together to help find the way to a safe future. Not everybody lives in that world; I do.

Richie Cotton: Yeah, certainly on a legal level, but within companies as well, there are always people arguing about whose responsibility something is,

and I guess either trying to pass the buck or take ownership of something that maybe should belong to someone else. Okay, so maybe I'll just ask: what do you think a responsible technology culture looks like within an organization?

Andrea Bonime-Blanc: I think it goes back to what I was saying a little earlier: the CEO and the management team typically set the vision, the mission, and the values, and then those things hopefully are not just paper on a wall or zeros and ones on a screen, but actual things that matter to that organization and that get operationalized

through performance management and performance incentives: how do we do the work, not just get the work done? How are we gonna get it done ethically versus unethically, those kinds of things. So for me it always goes back to how the CEO and the management team characterize the use, the deployment, the integration of technology into the products and services of their company.

And I mentioned earlier, too, that there is no company now that isn't a technology company. We all use all kinds of technology every day, whether it's computing, automation, all kinds of software, gen AI, bots, or communications tools. So all of us are using these things, and all of us are integrated with each other in using them.

And so it's all based on technology. If something really goes bad, a data center goes down or a cloud is cyber-attacked, we feel the impact, even if we're manufacturing needles. At the end of the day, how technology intersects with your products and services, and how you're going to deal with that intersection, has to come from the values of the company, from how the company does business.

And the way it translates directly is through performance management, and the CEO and the board have to set the tone about the culture. To me, the most important crux of the matter, and it always has been, but even more so now with technology moving so fast, is that you have to have a safe-to-speak-up culture, and a structured one, so that nobody feels like they have to hide in order to talk to somebody about something that they're really concerned about, because

there's an out-of-control AI in your software or something. You want to have a process where you can go to someone, where those people are identified as helpers, as resources. You also need to have an anonymous route to report some of these things, if you're a bigger company or organization.

And you have to feel that you're not gonna be retaliated against for doing that, because sometimes that information comes out and people will leave, like they left OpenAI, for example. I keep picking on OpenAI, but they're a very good poster child for a lot of these issues.

If you look at some of the others, Anthropic started from day one saying: we're gonna be a B Corp, we're gonna have all the structures in place, we're gonna cater to stakeholders, not just shareholders or owners. There's a different attitude reflected in each of these companies, and it's showing, and we'll see who ends up more successful in the longer run.

I'm betting more on Anthropic than I am on OpenAI, to be very frank, but that's my personal opinion.

Richie Cotton: Interesting, that's a spicy take. We'll see how it plays out. Okay. So for organizations that are wanting to get better at this, it seems like a big, nebulous thing, doing governance better.

What's a practical first step?

Andrea Bonime-Blanc: Yeah, I think it always goes back to being well-informed and having good sources for your information, which is a whole other topic that we could be talking about in this age of disinformation, misinformation, weaponized information, et cetera. So I think we have to pick and choose, but also be open to learning things from places we maybe don't necessarily always go to.

So be more open to reading reliable sources on technology issues, on ethics issues, on business issues. My go-tos, for example, are the Financial Times and The Economist. And then there are a couple of really great newsletters that are free, from Axios and Semafor, on technology and on AI. There's The Rundown AI, which is another great resource. These are things that each of us can easily subscribe to; if you don't wanna pay money, there are a lot of available newsletters from, like I said, Axios and Semafor and The Rundown. But you have to be discerning, and I think that's really important right now.

There's so much garbage out there and so much slop, and more and more of it being created every day by AI. So we need to be discerning, selective, not overwhelmed, but also open to understanding some of the other forces that are taking place in our society. And I'm a big student of geopolitical and political issues, someone who's followed them very closely.

In addition to being a lawyer in my earlier career, I have a PhD in political science, so I'm very interested in geopolitics and international relations, democracy versus autocracy. These things help inform you about what's going on, if you're interested enough. And I'm an old fogey, but I was like this when I was a young fogey.

I think you have to be curious; that's one of the most important things for doing well in this world, is to be curious. Continuous education, and being open to understanding other forces that are not necessarily part of your day-to-day job. And that's why I said The Economist is a great resource, or the Financial Times, which is a business newspaper but has better political and international relations reporting than, in my opinion, almost anybody else. That's what I would say.

Richie Cotton: No, that is interesting, that when you start thinking about risks, it's not just pure technology risks. You do need to think about business risks, what's going on in the world, and just pay attention to trends and pay attention to the news a bit.

Andrea Bonime-Blanc: The IT personnel that are on the front lines of cybersecurity issues, hopefully they understand the larger picture of cyber insecurity worldwide, right? So they're busy every day battling

Why is it happening like this? And it may be criminal gangs, it may be national security, it may be something else. But knowing, having that perspective, I think is really important to doing your job well. 

Richie Cotton: Absolutely. Yeah, cybersecurity is obviously very closely related to geopolitics, with all the nation-state hacking gangs and so on.

I'm trying to think of whether that also applies to AI as well. I'm not sure whether there's a similar sort of dynamic.

Andrea Bonime-Blanc: It does, in a very direct way, in that now we have AI-turbocharged cyber insecurity. We have agents out there, and others deploying AI, and then agentic AI really raising the scope and impact of cyber attacks.

There's also the reverse, which is the defense: the defending side of cyber attacks is also arming up with more agents, and that kind of thing. But it is warfare, and it is escalating attack and defense, I guess is one way to put it. Those who are in that world really need to understand not only the technology piece but also the larger context of geopolitics and international relations.

Richie Cotton: Yeah, certainly for all the cool stuff that AI brings us, having automated cyber security attacks is not great. So there are some downsides as well, certainly worth bearing in mind. Okay, I think it's worth touching on the relationship between governance and compliance, because it seems like complying with regulations is an important part of this, but maybe it's not everything.

Do you wanna talk us through what your approach should be here? 

Andrea Bonime-Blanc: Sure. In my day, at the companies where I was an executive, I was in charge of the ethics and compliance program, which was a combination: the ethics piece, on the one hand, is more values-driven, culture and that sort of thing,

and the compliance piece is about the laws that you need to comply with; let's make that the bottom-line, fundamental thing that we do. And then depending on the industry that you're in, you are more or less regulated. Banking, for example, and healthcare are way more regulated than other industries.

And so each company or organization has to have an integrated approach, under governance, to risk, compliance, and ethics. Different companies organize it differently, and it doesn't really matter how you organize it, as long as you actually organize it. A lot of that comes under the jurisdiction of the general counsel,

but it sometimes is organized in a different way, depending on the company. To me, it doesn't really matter how you organize it, as long as it works for your type of company, your footprint, the personnel that you have, and your geographic scope, whether you're all over the world or just in a couple of places.

All those different criteria determine what your governance, risk, and compliance program has to look like. Now, there's GRC, which people talk about a lot; it's the rubric that's existed for many years to describe not the top level but the next level of governance within an organization.

And you usually would have a compliance head, or a general counsel into whom compliance reports, again depending on the footprint of the organization. The important thing is having the right people in the right places and having the right coordination. I don't care what you call it, whether it's GRC, E&C, or regulatory affairs.

But it does have a very major legal component, so you wanna have your GC or your legal department very much involved with it, whether they actually control it or someone else does, because there may be a chief compliance officer as well. So again, it's about the topic and how you organize to satisfy the needs of regulators, legal compliance, et cetera.

Richie Cotton: Okay, that's a fascinating structure there. You mentioned risk and governance and compliance as being three separate but related things.

Andrea Bonime-Blanc: Yeah. And just to put a little extra nuance on that, the risk piece is often a separate piece, enterprise risk management, where you have a chief risk officer who doesn't report to the general counsel but is independent, part of the panorama of functions,

though hopefully coordinating closely with the GC and his or her team. And again, depending on how things are organized, you might have a chief risk officer and an enterprise risk management program, or you might not, which is a problem for bigger companies and even complex medium-sized companies.

You always wanna have some form of risk management that is independent from compliance and governance, but actually integrated somehow, in terms of informing them for purposes of governance and compliance issues.

Richie Cotton: Chief Risk Officer sounds like a cool job, the kind of person who parachutes into the office or something.

Andrea Bonime-Blanc: That's the sexy version of the Chief Risk Officer. There's a very unsexy version too, where you're plowing through information and trying to find the right nuggets that will inform your superiors and management and the board.

Richie Cotton: Not quite as debonair as Bond, then. Alright. For anyone who's interested in governance, are there any particular skills you think are important to learn?

Andrea Bonime-Blanc: Some fundamental building blocks for governance are being expert in one of several different areas: it could be technology, it could be ethics and compliance, it could be risk. And I think having the rigor and the discipline that comes with being an expert in one or more of those areas is always a good foundation for governance.

Now, governance at the end of the day is how you define it, and it's always context-driven. Governance, risk, and compliance is a functional group. Governance on its own usually references the board of directors, and there you have either the general counsel or a corporate secretary who helps with the governance agenda, caters to the board of directors, organizes the meetings, yada yada.

So that's governance in a different sense. And then there's the governance that I reference in my work, which is integrating all those different things, so that you have pieces of governance at the front lines that the IT and technology people understand and deploy with the help of other experts,

for example, maybe an ethical AI expert. Then you have management in the middle, integrating that into how they manage, how they do the performance management and the incentives. And then the top of the house is really thinking through the big strategic picture and helping to create the tactical pieces that go into the rest of the organization.

Richie Cotton: Okay. I should probably have guessed before I asked that the stuff you need to learn depends a lot on what your role is. But there's a big mix, then: some technical skills, some legal skills, some management skills, and being able to communicate and plan things and make sure they get implemented within your organization.

Richie Cotton: I'd love to go back to this idea of poly-risk. One of my favorite things about your book was that you put AI as just one new technology among several that are having big impacts on the world. Do you wanna talk us through the other bits that weren't AI?

Andrea Bonime-Blanc: Sure. So I'll give you a little bit of the genesis of why I wrote the book.

I wrote an article. I'm always writing, because writing to me is learning, and then once I learn something, I can help educate others; that's the method to my madness. So I wrote a piece about two years ago for NACD Directorship Magazine called The Governance of Exponential Technology, or something along those lines.

It was basically looking at the phenomenon of gen AI, which was about a year old in terms of the public, all of us, learning about ChatGPT and so on, but also looking at how the gen AI phenomenon was interconnecting with a bunch of other technologies. I'm thinking of biotechnology, synthetic bio, automation, robotics, all these things.

Advanced materials too, of course, because the more powerful the GPUs and the silicon being created for compute, the more we can do. So I was fascinated by the fact that this isn't just about gen AI; it's about everything else that's happening in the ecosystem of technology, where one thing is affecting and interacting with another, and new things are being created, et cetera.

I got an invitation to put a proposal together for a book based on that article, and that ended up being Governing Pandora, with Georgetown University Press. When I started strategizing the book, I felt that I first had to start with context, so I talk about the geopolitical context in which technology is flourishing.

But the second part of the book is where I wanted to do what I call a whirlwind tour of exponential technologies. I spent a lot of time figuring out which technologies I wanted to talk about, 'cause I'm not a technologist, I'm not an engineer, I'm not a mathematician, so I can't really explain them from a pure scientific standpoint.

And I chose these five. Gen AI was one. Biotech and synthetic bio was another. Automation and robotics, anything from killer robots to smart cities, all that kind of stuff. Then of course I looked at frontier computing, including quantum, and a bunch of others. Those are the five categories I figured I'd cover, knowing this is not exhaustive.

There are other things out there, energy, communications, but I can't do everything. What I'm trying to do is create a mindset for people to start thinking about how this is all part of our lives: our individual, personal, professional, community, national, and international lives. And we need to get informed, at least at a certain level.

So I try to do a primer on each, so that we all get sensitized to what's going on, and then I go on to other things in the book. But those are the five categories of technology I felt are exponential, in the sense that they're moving really fast and they're becoming cheaper to acquire.

They're dangerous. They're fantastic. They're everything. 

Richie Cotton: Absolutely. I have to say there are so many cool technologies in progress at the moment. Of those, is there something you're really excited about? What do you think is gonna be really impactful or very cool in the next couple of years?

Andrea Bonime-Blanc: This is a mixed message, but climate tech is moving great guns forward in all kinds of ways, and the solar piece of it is already very well known. There was actually a cover story in The Economist last year about how solar is the exponential technology that is going to be the most impactful of all, because it's getting cheaper and easier to do, and so on.

And we have sun, so sun and batteries and all these things that allow it to happen. But then politics interferes with that, right? Geopolitics and international relations interfere with that. But to me, climate tech continues to move forward apace, even with a change of administration in the US, where climate tech, or sustainability, or ESG is no longer part of the dialogue, although it is there

under the surface in a big way, in my opinion. And technologists and inventors and innovators are gonna continue to innovate. So a lot of innovation is taking place in the climate tech piece, which I think could have an amazing impact if we get rid of some of the political noise, the geopolitical noise.

Meanwhile, China is doing fantastically with all of these technologies. First of all, they control solar, but they're also making incredible progress with a whole variety of things: batteries, electric vehicles. And they will own the climate tech world at some point, although there'll be others in other parts of the world too.

But that will, I hope, help save the planet from too much heating, which leads to all kinds of dire and terrible consequences for humanity, for biodiversity, and so on. So that's what I'm most excited about seeing come to fruition over time. It's not gonna be today or tomorrow, but that would be it.

Richie Cotton: Yeah, that's a very exciting thing. First of all, not burning the planet: brilliant outcome. But in general it's a happy story, and I suppose the common theme is that the technology is maybe the easy part, and it's the people and processes that are the challenge.

Andrea Bonime-Blanc: I actually wrote a piece about that recently. What did I call it? Climate Tech Is Alive and Well; Planetary Governance, Not So Much. That's the title of the piece I wrote a couple of months ago for Diplomatic Courier, and it's about this idea that people are inventing, innovating, doing fantastic things all over the world, frankly.

But it's the politics of no, of maybe, of not making a decision. The COP, for example, got nothing done, basically; that's an oversimplification, but if you don't have the most powerful countries in the world, and many others, talking together about how to do things, we end up with atrophy, arteriosclerosis, whatever you wanna call it. But I think that might hopefully

shake out over time with different administrations and political will. I don't know.

Richie Cotton: My hope in this is that, as you mentioned, solar technology and climate technology are getting a lot cheaper, so the economics generally provide a persuasive argument, hopefully, in the long term.

Andrea Bonime-Blanc: Exactly. That's what I keep saying: people are in business to make money, but in order to make money, most of the time, you need good data and information. On the heating of the Earth, we now have, I believe, the second hottest year on record, after a year that wasn't so good either. The last several years have been the hottest years on record in terms of the heating of the earth. Data speaks, right?

Richie Cotton: Absolutely. Data speaks. That should be the slogan for the show, really. 

Andrea Bonime-Blanc: Exactly. Data speaks, and burning woods and boiling oceans and rising floods and fires.

We're all feeling it one way or another, some parts of the world much worse than others, but we're all feeling it.

Richie Cotton: Wonderful. Yeah, some challenges, but we're gonna stay positive that the problems will be solved. Alright, super. Just to finish with, I always want new people to follow. Whose work are you most excited about at the moment?

Andrea Bonime-Blanc: Yes. So I'm gonna pick a couple of different organizations. Anthropic, first of all, I think is doing really good work in combining good governance and ethics with innovation, and they've announced that they're gonna do a public offering at some point.

I always look at the leaders: who are the leaders, and are they setting the right tone? And Dario Amodei there has been setting the right tone; he's been speaking up when others don't, when others are fearful or are just playing to the current administration. So I really like the tone that he's setting, combining the safety and the governance with the innovation.

So he's someone I would be following, and their company as well. Then there's Mustafa Suleyman, who's the head of AI at Microsoft, previously a co-founder of DeepMind, and basically an AI genius. He's heading up AI at Microsoft, and he put out a blog just a week or two ago about how AI at Microsoft is gonna be focusing more and more on healthcare outcomes and things like that.

And there again, Microsoft has a history of thinking about the governance and the ethics of AI. Nobody's perfect, so I don't wanna say, oh, let's go pray at their temple, but I think they've set the right tone compared to some other companies. I would put Meta in the other category, of not setting the right tone in terms of ethics and responsibility.

So I would follow Mustafa Suleyman and what he's doing at Microsoft. And then there are several other players I call the tech guardians of the universe, as opposed to the tech masters of the universe, in my book. The tech guardians are organizations all over the world: sometimes international organizations like UNESCO or the UN, other times independent NGOs or international kinds of organizations, like the Africa AI Observatory, for example, and the Center for Humane Technology.

There's a whole bunch, like the Future of Life Institute. These people are really focused on the safety, the ethics, the responsibility, the accountability, and I would say people should follow one or more of those organizations and see what they have to say about the issues that are coming before us in a tsunami of information.

Richie Cotton: I do love the idea of following some of these organizations who are involved in responsible AI and AI accountability, 'cause that's another signal that doesn't get quite as much noise as maybe some of these tech leaders. If you're a CEO of an AI company, you get a lot of attention, but some of these other organizations may be a little more in the background.

But they're definitely worth following. Okay, wonderful. Thank you so much for your time, Andrea.

Andrea Bonime-Blanc: Oh, thank you Richie, for the great conversation. I really appreciate it.

Related

Podcast

The New Paradigm for Enterprise AI Governance with Blake Brannon, Chief Innovation Officer at OneTrust

Richie and Blake explore AI governance disasters, consent and data use, the rise of AI agents, the challenges of scaling governance processes, continuous observability, governance committees, strategies for effective AI governance, and much more.

Podcast

How the UN is Driving Global AI Governance with Ian Bremmer and Jimena Viveros, Members of the UN AI Advisory Board

Richie, Ian and Jimena explore what the UN's AI Advisory Body was set up for, the opportunities and risks of AI, how AI impacts global inequality, key principles of AI governance, the future of AI in politics and global society, and much more. 

Podcast

Leadership in the AI Era with Dana Maor, Senior Partner at McKinsey & Company

Adel and Dana explore the complexities of modern leadership, balancing empathy with performance, navigating imposter syndrome, and the evolving role of leaders in the age of AI.

Podcast

How Generative AI is Changing Leadership with Christie Smith, Founder of the Humanity Institute and Kelly Monahan, Managing Director, Research Institute

Richie, Christie, and Kelly explore leadership transformations driven by crises, the rise of human-centered workplaces, the integration of AI with human intelligence, the evolving skill landscape, the emergence of gray-collar work, and much more.

Podcast

The Challenges of Enterprise Agentic AI with Manasi Vartak, Chief AI Architect at Cloudera

Richie and Manasi explore AI's role in financial services, the challenges of AI adoption in enterprises, the importance of data governance, the evolving skills needed for AI development, the future of AI agents, and much more.

Podcast

Scaling Responsible AI Literacy with Uthman Ali, Global Head of Responsible AI at BP

Adel and Uthman explore responsible AI, the critical role of upskilling, the EU AI Act, practical implementation of AI ethics, the spectrum of skills needed in AI, the future of AI governance, and much more.