
Scaling Responsible AI Literacy with Uthman Ali, Global Head of Responsible AI at BP

Adel and Uthman explore responsible AI, the critical role of upskilling, the EU AI Act, practical implementation of AI ethics, the spectrum of skills needed in AI, the future of AI governance, and much more.
Mar 10, 2025

Guest
Uthman Ali

Uthman Ali is the Global Head of Responsible AI at BP and an expert on AI ethics. As a former human rights lawyer and neuro-ethicist, he recognized that regulations were not keeping up with the pace of innovation and specialized in this emerging field. Some of his current projects include creating ethical policies and procedures for the use of robots and wearables, and for using AI for creativity.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and VP of Media at DataCamp. Adel has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

There are so many different people you need to work with, from the average person that just kind of uses ChatGPT now and then to an actual data scientist that's actually building these solutions. You need appropriate training depending on who the user is and based on their persona.

What does every organization want? They want all of their employees to use AI, but safely, with an awareness of the risks and how to mitigate them, or what the company's approach is. So that, at its core, is basically responsible AI.

Key Takeaways

1. A successful responsible AI program requires a top-down approach with clear mandates and objectives, involving legal, ethics, and technical teams to create a cohesive strategy.

2. Responsible AI programs should be tailored to different personas within an organization, from general users to data scientists, ensuring that training is relevant and use-case specific.

3. Monitoring mechanisms are essential to track AI tool usage and ensure compliance, but they should be balanced with empowering employees to use AI responsibly without excessive policing.

Links From The Show

Report: The State of Data & AI Literacy

Transcript

Adel Nehme: Uthman Ali, it's great to have you on DataFramed.

Uthman: It's good to be here. Thanks for having me on.

Adel Nehme: Thanks for coming. So you've been leading responsible AI at numerous organizations. You've been in the AI ethics space for quite a bit now. Earlier this year at DataCamp, we ran our State of Data & AI Literacy Report, 2024 edition, where we asked more than 500 business leaders in the US and the UK what the most important AI skill is that their teams need to have.

And 77% of leaders mentioned that responsible AI should be mandatory training for everyone within the organization. So maybe set the stage: why is it so important to upskill your organization in responsible AI?

Uthman: Because if you just take it back a step, right, what does every organization want? They want all of their employees to use AI, but safely, with an awareness of the risks and how to mitigate them, or what the company's approach is. So that, at its core, is basically responsible AI. So if you want things scaled well and used appropriately, your organization needs to be responsible and to train and upskill people.

Because for most people, before ChatGPT, they probably weren't familiar with what AI is or how it works. So again, there's a huge amount of upskilling needed across so many organizations.

Adel Nehme: Yeah, and you mentioned here, and from the outside looking in, it seems that the responsible AI conversation has reached a critical mass over the past few years, right? The advent of generative AI and ChatGPT, which in a lot of ways put AI in the hands of the masses, and I think it clicked in everyone's head what AI is capable of and why it's so important.

And then also, more recently, the EU AI Act, which puts a lot of pressure on organizations as well to make sure that they are compliant. So how true is it that the combination of these two forces really pushed the responsible AI conversation to critical mass? How do you view it as an insider?

Uthman: I think you've highlighted it well. One of the things you mentioned, right, is that the conversation has sort of reached a critical mass, but it's pretty clear now for everyone that we need to go beyond conversations into actual practical implementation. One of the key skills in any program, whether it's compliance or responsible AI more broadly, is you need to train people on what to do.

So on the one hand, you need people filling in responsible AI assessments in your organization for their use cases, and you need people reviewing that stuff who know what good guidance looks like and how it should be implemented. So there's upskilling required across everyone in the organization.

Adel Nehme: And you mentioned here the skills that everyone needs within the organization. What does that look like in practice? I'd love if you could deep dive a bit into what a successful responsible AI program looks like and what the skills are that you need to have within the organization.

Uthman: mean, right.

There are so many different people you need to work with, from an average person that just casually uses ChatGPT now and then to an actual data scientist that's actually building these solutions. You need appropriate training depending on who the user is and based on their persona, essentially.

So when you talk about bias and fairness with an average AI user, an average office worker who might just use ChatGPT to automate some stuff, it's a very different, high-level conversation compared to someone actually building a system, where you're asking what's the appropriate metric to test for, or what a representative sample actually looks like in your dataset, versus procurement, versus HR.

And again, some of these organizations are so big that people want guidance that's relevant to them. So you need to make it use-case specific; you don't want people sifting through loads of documents to go, I'm working in marketing, oh, okay, here's what my company's approach is.

You want it to be quite slick, right? You want them to kind of open up the website and the guidance is given to them. You're almost spoon-feeding them, and that's almost what a good responsible AI service should be. It's almost like customer service: give people what they want, when they need it, in the best way possible.

Adel Nehme: and there's that keyword that you mentioned here, which is personas, Like I wholeheartedly agree with this. Everyone's relationship within the organization with AI is different. Procurement professional, maybe different from a marketing professional, different from a data scientist building models.

Walk us through those personas in a bit more depth. What are common personas that you find within an organization, and how do you arrive at your own personas? Which I think is probably the bigger question a lot of professionals have in this space.

Uthman: I think within each organization, right, I always split it into three layers, almost like the classic consultant pyramid. You've got the base layer, which in any typical organization is most people: people that kind of use AI a bit and need to be familiar with what it is. Those are just general users that might need to know some of the risks and limitations.

But then you've got that middle layer, which is basically the people that might be in procurement or in HR, maybe higher-risk exposure areas of the company, and even some of the actual data teams, who are responsible for more use-case specific guidance, right? Those are people that really have to be doing stuff.

But the top of that pyramid is where you want your, I guess you'd call it a responsible AI office, your center of expertise, which is basically the dedicated part of the organization where the company's subject matter experts sit and produce the guidance and training that gets filtered down.

And how you find personas can often be based on which parts of the organization you're trying to reach, how many different parts of the company from different businesses. That's basically a good place to start: what type of training or education do they need?

Adel Nehme: One thing that you mentioned is that there's a spectrum of skills, in a lot of ways, depending on where you are and the persona. If you're someone who's less technical, maybe AI-aware would be a great term for that, the type of guidance that you would need is on how to best use consumer tools like ChatGPT or something along those lines.

If you're a developer or a data scientist, the skills that you need differ as well. Do you find that the spectrum of responsible AI skills that you need to impart within your organization also increases in complexity the further you go up that persona pyramid?

Uthman: Because this is where it gets super interesting in terms of what skills you need to build a responsible AI program, because you're gonna need legal expertise. That's a given for compliance. Then you're gonna need someone like me, an ethics expert, right? But then the legal and ethics departments are often the ones that basically say, these are the things that we should be avoiding.

This is how we protect the company's reputation, but also stay compliant with regulations, and also define the company's stance. But when it comes down to who's actually doing a lot of the stuff, that will often be your data departments, right? So it's one thing having a policy saying we don't want any models that are biased or discriminatory. The procedure, the way that actually gets done, will come from your tech teams actually looking at data cleansing, selecting the right samples, or actually doing bias and fairness testing. So again, there's a huge technical component; the technical aspect is so important when building a program.
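(Editor's note: as a concrete illustration of the kind of bias and fairness testing described here, below is a minimal sketch of a disparate impact check. It is not BP's or Uthman's actual tooling; the column names, the toy data, and the four-fifths-rule threshold of 0.8 are assumptions made for the example.)

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

# Hypothetical CV-screening outcomes: 1 = advanced to interview, 0 = rejected.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advance": [1, 1, 0, 1, 1, 0, 0, 0],
})

ratio = disparate_impact_ratio(results, "group", "advance", protected="B", reference="A")
# The "four-fifths rule" (ratio below 0.8) is a common, but context-dependent, red flag.
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f} - flag this model for review")
```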

Adel Nehme: Yeah, I couldn't agree more, and we're gonna unpack that in a bit more depth. But there's also, I think, an equally important aspect of skills transformation, or cultural transformation in general, which is behavior transformation. Maybe what are the key behaviors that underpin an organization with a high degree of responsible AI skills?

What type of instincts do your tech professionals need to have, for example, to be able to operationalize responsible AI effectively?

Uthman: I think you hit the nail on the head with culture, and this is really what the job is, more than risk mitigation. With AI ethics, it's not just saying this is the ethical risk of introducing a new product or service. It's also, what is the risk of not innovating now?

Basically, what is the risk of inaction, of not keeping up with the times, particularly given the things it could improve, for example safety and operations, depending on which industry you're in. But in terms of those key skills, it's incredibly important that, again, you have the right technical expertise, and you have the right legal and ethics expertise to filter down guidance.

But the cultural change that you want to see is people that actually want to fill in these assessments and learn more about the topic and learn more about what the company's approach is.

That's one of the key drivers, because if it's seen too much as restrictive compliance, people are only human. They'll look for ways around it, or they'll be like, look, I really wanna put off doing this thing, right? But if they view it as, look, I speak to these responsible AI people and they're actually adding value to our product or service, or finding ways that this could be improved, even commercially, that's not just a philosophy, it shapes how you build your program and your team structure, right? And the cultural change, again: lots of companies have had AI ethics principles about fairness, transparency, accountability. You need to make these a lived reality that everyone believes in across the organization, from when they walk in and are onboarded and taught what the company's approach is and who to speak to, through to when they exit and you ask them, you know, how do you think we did on responsible AI, how should we be improving this?

Adel Nehme: Yeah, I couldn't agree more. And I'd love to focus on that last point here on lived reality, because a lot of times, you know, ethics departments and responsible AI departments get a bad rap, because organizations may want, for example, to ethics-wash their existing AI practices. And the point you're making here about lived reality, I think, allows that responsible AI department to go beyond that perception. So maybe, what are the best practices that you advise for these types of functions to go beyond the perception that we just discussed?

Uthman: Yeah, I think one of the biggest things is actually being deeply involved in the business where you operate. So for example, when you explain what AI ethics is and why it's important, actually going through use cases with various product teams and explaining what the field is, I think, goes a long way. 'Cause there's often a perception that ethics is this barrier to innovation.

The ethics person might be seen as the police officer, right? They're just gonna shut things down. But that's actually not what this role is in industry, right? The role is about helping people make informed decisions and actually know the benefits and the burdens and the trade-offs. And when you explain these things, it can be at a very technical level, right?

But if you're a data scientist and you say, look, there's a trade-off between accuracy and bias, and if you slide one way on that scale versus the other, these are the unintended consequences, a lot of them, I'd say almost all of them, actually see value in this.

Adel Nehme: Yeah, that's wonderful. And I love connecting that to the business objective and the function. You mentioned something earlier when we were discussing the upskilling component, right? There's the legal team, the ethics team, the technical teams, right? In a lot of ways this is uncharted territory, managing a responsible AI upskilling program, or even a responsible AI program at large, right?

Who should be in charge of driving this agenda? Especially, who should be in charge if you don't have someone who owns responsible AI? So, yeah, walk me through your best practices here on who takes charge, and how do you organize this effort within your organization?

Uthman: It's so much of a team sport that it's hard to pin down who's the captain, right? Who's actually the captain of this team? But I'd say you gotta break it down into different components. If you're specifically looking at AI compliance, that should be your legal department. And I'd say ethics should also be in there, because this is often the part of the company that says what we can do. From legal, it's: what can we or can we not do, what are the clear legal red lines? Ethics is the question of, should we be doing this? But then the technology teams are the ones asking, okay, how do we actually go about putting principles into practice, like transparency, whether it's things like watermarking, or avoidance of bias? 'Cause if you start from a bottom-up effort, you can have some success, but eventually you're gonna run into issues over what the mandate is for what responsible AI means to our organization, and who the working group or the people accountable are for creating the ethical principles, the values, the requirements, basically the company's global approach.

But that needs to come from the highest levels, the C-level at the company, to go: this is on the priority list, this is where we think this is going; or even to delegate downwards to your responsible AI office, your AI governance office or committee. The C-level might say, look, we're not experts.

You guys are; explain to us clearly what this field is and what the options are for implementation, and we will be the ones to sign off on what this means for us as a company.

Adel Nehme: Yeah, that's wonderful. And you mentioned here building guidance for organizations that stands globally as well. You know, you've built this for quite a few organizations. Walk me through the process of finally reaching that end-state product. What goes into it? What are the resources and inspirations that you draw on, and what are the inputs that come into play when you see that final finished product?

Uthman: I think one of the biggest things I'd advise companies to do is take a real case-law approach to this. I think in my field now this is becoming more common practice, where it's one thing to say we should be avoiding bias or discriminatory outputs, as a common example, but you need to guide people through clear examples of where this could happen in your company, right?

Like the CV screening one, an HR algorithm that discriminates against people based on their race, their background, where they live, their socioeconomic status; that's a classic example, right? But what I advise companies to do is, once you have that mandate and you have your AI governance committee, or ethics committee, or whatever you wanna call it, basically the team captain, right?

Once that thing is set up, run through a load of even fictional products that you can see your company building, from copilots to digital assistants, to basically think through what the unintended consequences are, what the ethical risks are that we can see, from things like job displacement or company reputation or whatever it happens to be.

The key thing is that it needs to be harmonized with what your company's guidance already is. So, for example, the ethics team will say we're against discrimination. Right, okay, everyone's against discrimination; it's a legal mandate to be against discrimination. But when they say, you know, we want to do the right thing and be equitable, it's like, how do you know that's your company's stance? The only way to do that is to actually look at your existing policies, procedures, and public statements to find what the company's position actually is: where do we stand ethically as a company?

So a good example, I think, is bias and fairness in hiring, right? You might say, as an organization, we'll do enough to not be legally penalized. And that's legal, right? They'll be like, that's what we're gonna do. But ethically, people will kind of be like, okay, what's the ethical approach?

Even when it comes to things like affirmative action programs. But if you take a step back and you say, look, our company has a target: by 2027, we need X amount of female employees in this department. That is actually a commercial goal of the company, tied to equity and inclusion, right? If we use this discriminatory tool, we're not going to meet it.

Adel Nehme: I love that. And it comes back to that point we discussed as well about how to make responsible AI actionable and operationalized. One thing as well: when you've been managing these programs, Uthman, what are some common challenges that you can expect along the way? Right, we mentioned how important it is to get C-level executive sponsorship.

I think lack of executive sponsorship is probably, you know, one of the biggest death knells for a responsible AI program. So yeah, walk me through some of the challenges that you can expect managing these types of programs, and where do you start if you do have that sponsorship?

Uthman: I think one of the biggest challenges, again, is even defining what AI governance and responsible AI mean for your organization, because there's the ethics and compliance angle, which is about risk mitigation. Ethics, if done well, can have commercial benefit as well, right? But then there's the angle that tech teams often come at it from, which is: AI governance for me is how do I actually build stuff well and do it quickly, right?

They're thinking, what's the commercial aspect of this, but also just how do I work with AI in my company? And again, it's about being clear from the top down on what the mandate for this is. What do we mean by responsible AI? Which key teams and key personnel actually need to be involved? Because if you leave it too open or too vague, you can quickly run yourself around in circles. We're basically using the same words, like even AI governance: if you ask a cybersecurity person versus a data scientist versus a lawyer versus an ethicist, they might come up with four different definitions of what AI governance means, right? So you might be thinking, we're using the same words, why don't they get it?

But it's because you're not being clear. So even to start with, say who's involved in this responsible AI program, but also write down what your objectives are, right? What are you trying to get out of this? What does good look like for you? Even draw a line over: this is actually what this governance team is trying to do.

This is what we mean. And then after that, you can think through what guidance we need and how we structure a program, covering everything from learning and development to third-party risk. Then you have to think about, okay, what are some KPIs, ways we're gonna measure success, for each of these actions?

And when you slowly start doing this, guidance and that sort of thing comes under it as well, and you start actually building a full program. You look at how many teams need to be involved, who's important, you can build the RACI matrix, and that's how you'll basically start to be able to scale this sensibly.

But that is also a great resource to give to your C-level executives to go: this is actually what it takes to embed this across the company, to not just be legally compliant, but also meet our other strategic objectives as well.

Adel Nehme: And one thing that stuck with me here is, you know, defining the metrics and KPIs that matter to be able to measure the success of the program. You know, what are some of those KPIs and metrics and are they organization specific or can you find, you know, common ground with different organizations?


Uthman: I think a lot of them will be organization-specific, right? So classic ones around compliance training and that sort of thing are how many people have done the modules, the learning and development, the workshops. You can quiz them: did they understand it? You can run through use cases, even fictional ones, with them.

Did people actually understand the guidance? Do they actually know how to implement it, right? But then you've got the other side, which makes this interesting: companies will often have innovation metrics, where we want to basically make X amount of money or revenue, or increase value by this amount, by this date, right?

Using AI. So you need to design them in a way where your program and your innovation metrics are harmonized and aren't constantly conflicting. Because if you just have metrics in one part of the company that are zero risk, zero risk at all, they might turn around and say, look, we shouldn't be doing anything at all.

But if you have other parts of your company taking a pure cowboy approach of, make as much money as you can in the shortest amount of time, then at the senior executive level it becomes almost like a negotiation, but also a well-thought-through strategy, to say, even in terms of how many high-risk use cases or models or products we can have, how many are we resourced to review? You kind of wanna set them in a way that's sensible, where you can say, we're well resourced to do this.

No one's gonna complain saying, look, we're getting shut down all the time 'cause there's no one to review stuff, but your responsible AI team also isn't overstretched.

Adel Nehme: And then one thing that you mentioned here that I wanted to ask you about, which actually segues perfectly, is the IT component: how many tools do we have proliferating in our organization? There's the shadow IT phenomenon, where a lot of organizations have grassroots usage of tools like ChatGPT or Midjourney or something along those lines.

And we've seen organizations weigh this differently, right? Some create really strict compliance rules about what type of tools you can use; some outright ban ChatGPT within the organization. What is the balance that you need to strike here, if you wanna balance responsibility and innovation? Should you ban ChatGPT?

Should you not?

Uthman: I think, again, what it comes down to is that the reality is the genie's out of the bottle. You can't really ban it; either way, people will find a way: I'll just go on my phone, I'll just do it on my personal device. Or people, you know, they'll find a way, like...

Adel Nehme: I literally know people that have their work laptop open and then their personal laptop open too.

Uthman: Yeah, yeah. There you go. Look, they're just multi-screening, all that stuff. The problem, if your metrics aren't tailored, is that you might be presenting perfect metrics saying, you know, we mitigated all risk, but there's no usage in the company. But again, that's not the lived reality, because, you know, people are doing it on their personal devices.

So that's where you gotta know your own organizational culture, but also be sensible with this. And this is why culture and upskilling people are so important: you wanna say, I can trust you to use this tool, but not to use sensitive company data or do anything that might put us at reputational risk, right?

I'm gonna trust you to do this. I'm gonna empower you so that I don't have to watch you, I don't have to be a police officer. And then afterwards you can say, look, this is our list of approved tools that we've been through. Please use these ones. These are the ones that we've vetted, even for security considerations.

Look, these are the guardrails: stay within these walls and you'll be fine, basically. And that's basically the sales pitch of AI ethics; it should always be that this is an enabler of the digital transformation, if done well.

Adel Nehme: How do you create monitoring mechanisms to be able to detect any misuse or irresponsible use of AI?

Uthman: Yeah, I mean, there are ways you can monitor, even using security tools to see website traffic, basically just general AI use, right? So you can monitor that. But then if you go into actual prompt auditing, there are ways you can audit whether toxic or harmful prompts or outputs are being produced, and you can actually have ways to filter for those.

There are tools you can use, like shadow AI detection, to actually see how many models we have, potentially in production, that we weren't aware of. But there are also tools for monitoring technical risks, like use-case or model drift, to actually see whether performance has fallen below acceptable thresholds.

So monitoring, again, is a really important one: when you have a huge organization, you need some way to have a good understanding of where risks are proliferating across the organization, and why that is.
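(Editor's note: to make the drift-monitoring idea concrete, here is a minimal sketch that compares a model's recent accuracy against the baseline it was approved with and flags it when performance drops past an assumed tolerance. The metric, the 0.05 tolerance, and the example numbers are placeholders, not any specific vendor tool or BP's setup.)

```python
from statistics import mean

def check_model_drift(baseline_accuracy: float,
                      recent_accuracies: list[float],
                      max_drop: float = 0.05) -> bool:
    """Flag a deployed model whose recent accuracy has drifted below the agreed threshold."""
    recent = mean(recent_accuracies)
    drifted = recent < baseline_accuracy - max_drop
    if drifted:
        # In a real governance workflow this would raise a ticket for the
        # model risk / responsible AI team rather than just printing.
        print(f"Drift alert: recent accuracy {recent:.2%} vs baseline {baseline_accuracy:.2%}")
    return drifted

# Hypothetical weekly accuracy readings for a model approved at 91% accuracy.
check_model_drift(baseline_accuracy=0.91, recent_accuracies=[0.88, 0.84, 0.83])
```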

Adel Nehme: You mentioned something as well about the element of creating trust, you know, that we trust you, we're empowering you to use these tools effectively. And I think a big part of how you get that trust, at least as program leaders, is doing assessments. So you assess people's skills, you assess people's capabilities and understanding. How should you go about creating assessments for responsible AI?

What should you be testing for, and how do you communicate the use of assessments in a way that inspires confidence from your audience, and not just as a, you know, the-police-is-here type of thing?

Uthman: I love the phrasing, inspiring confidence, because a lot of this is what people are looking for: confidence in what AI is. But again, something you mentioned that's really important is even defining the scope of what your responsible AI assessment is. Because if you're not careful in figuring out what your mandate is early on, some people would say commercial value or even alignment to business strategy should be part of responsible AI, while other people in ethics or the legal compliance department might say, we're solely about risk mitigation or reputational risks.

Your technical teams might be like, look, we wanna make sure people are following best practices. So again, you need that clear mandate. If you imagine a responsible AI questionnaire, you know, you're definitely gonna have legal in there; that's non-negotiable. Ethics you should be having in there as well.

And ethics often precedes things like technical assessments as well. Even when you look at bias and fairness, disparate impact assessments, how that gets conducted is often quite technical. But then, if you want that commercial aspect in there: does this use case generate value?

Again, that's a company-wide decision to go, is that within the scope of responsible AI, and for each company it's quite different. But those are usually the core tenets that you wanna have.

Adel Nehme: Maybe when you're looking to assess these people as well, and coming back to the messaging and the communication around the use of the upcoming program and the assessments, et cetera: what are some best practices that you've learned around how to get people excited about being part of the agenda, and not just reluctant participants?

Uthman: I think the biggest thing is that you don't want people to say, look, this is more assessments I need to do on top of other assessments. So I think one of the first things a company should do, when you look at responsible AI, and actually AI more broadly, is recognize that usually people right now are super excited about AI and what it could do.

So you already have a lot of buy-in. And I think, generally, I've yet to find someone that doesn't find AI ethics in particular interesting, 'cause it's almost an existential topic that we're facing now as well: the future of how we're gonna work with machines.

Right. But in terms of building that confidence, of wanting people to do it, again, it's a good opportunity for companies to actually look at their existing assessments and figure out, how can we make this more palatable for people, right? If you've got three to four different teams asking three to four different questionnaires, and you know you need to introduce more stuff for responsible AI now, is it not a great opportunity to consolidate everything into one simple format?

Have all teams speak together at once, to reduce the time taken to do reviews. I think that in a lot of ways inspires confidence in people, who are like, look, there's dedicated resource here, there's a team here, but also they're actually trying to make my life easy going through this process.

Adel Nehme: And you mentioned something here about that long-term conversation on what working with AI means, and I'd love to actually switch gears to discuss that more long-term perspective: if the AI space keeps moving at the same speed that it's moving today, and if the same pace of improvement keeps up.

Now there's a debate on that. What does this mean for the responsible AI agenda? How do you think the next few years will evolve?

Uthman: You know, it's so funny, because I was actually at a digital ethics conference recently, and one of the round tables I led was basically unstructured; you could ask anything, right? And my question was, what will our jobs in responsible AI look like in five years? You know, we'd never really thought about it.

So we sat down and thought about what the future job roles will be. Is it gonna be a new form of chief compliance officer for responsible AI, actually linked more to sustainability and wellbeing? But then we also thought about new jobs you might see, like the role of AI in storytelling and building company narratives and branding, or learning and development roles that might be bespoke, specifically for responsible AI, given how complicated this actually is with the personas involved.

Adel Nehme: In the data literacy space, we already see data literacy officers. Yeah.

Uthman: Yeah, literally bespoke people. But the one I like is because we're actually entering a new era, right? We're having AI-enabled crime; deepfakes are a great example, right? And even a future role would be almost an expert witness, for people working in responsible AI to actually be giving expert testimony, to even talk about...

Adel Nehme: Oh wow. Yeah.

Uthman: This is basically an AI ethicist's perspective of how this AI was used, or why this scenario basically came about. So I think, when you look at other fields, it's almost gonna be that AI-specific niche revolving around: we need an ethics expert or someone involved in responsible AI to talk about this.

Adel Nehme: Yeah. And how do you see organizations having to adapt to increasingly powerful AI systems as well? What are the kinds of pressures that you imagine organizations will be under?

Uthman: I think it's almost like building the plane whilst flying it, right? That's the reality of where everyone's at, from big tech developers to consumers. And I think for most companies, what they're trying to get to grips with is that many of them still don't really understand what AI ethics is and what it means.

A lot of them aren't sure: is this just some sort of vague activism, or is this some sort of new form of business professional ethics? So a lot of them are struggling to really understand what responsible AI is. What does it actually mean for my company? If I embed this at scale, what does this look like?

So you've got that issue, but then you've got other ones where they're like, is responsible AI a standalone thing, or do we embed it in existing compliance? It should be embedded. But then you have this whole wave of multimodal gen AI and all this new stuff coming, and even with AR, VR, the metaverse, companies are like, who's actually looking at what my next disruptor is, right? Who's actually keeping their eye on the ball? 'Cause the next ChatGPT might happen, right? And when that happens, the risk is that if we're too narrow in our approach, even in our guidance for AI and what this program looks like, then when the new thing comes up, we might have to tear it all down and start over again. So when you look at digital ethics more broadly, you need to set it up in a way that's quite adaptable, so that when the next disruptive tech comes, you're like, actually, we kind of know how we can absorb this into whatever we're doing now, right?

We don't need to create a whole new team or process or program for this. We're actually setting our company up for success in the future.

Adel Nehme: Yeah, so being proactive. One additional point: we discussed this slightly earlier in the episode, we talked about the EU AI Act. Even looking at the current landscape, and I can only project this into the future, there seems to be strong disagreement within the AI and tech community, for example, about the need for regulating AI.

Some people think that the EU AI Act is not strong enough; some people think it's an overreach and it hinders innovation. How do you see this debate evolving over time, especially as the technology gets stronger? What do you think are the modes of regulation that will have to be conjured up?

Because, you know, it's a global market 

Uthman: I think this is a really fascinating topic, actually. For lots of companies, even when designing their responsible AI program, the key conundrum is, do we make the EU AI Act the benchmark? If this regulation is the strictest in terms of documentation, and even the prohibited categories list, where you could be fined up to 7% of turnover for something being used in Europe, is that gonna become a global ethical stance, where those prohibited uses become a company-wide approach, right?

And many of them, I think quite sensibly, are saying we're not gonna make a decision on this yet 'cause it's too early, but we'll have a framework that's flexible, so that you're prepared. And in terms of people working in tech, I actually know lawyers, even, who have said they're not convinced by the EU AI Act.

I know others that are fully for it, saying this is...

Adel Nehme: In what way? Is it too much of an overreach, or is it too limited? Yeah.

Uthman: I think what it comes down to is that there are a few things. One of them is the question of the need for AI-specific regulation at all, because the argument is that it's arguably a distraction, right? Because you already have lots of regulation, like data privacy, IP, anti-discrimination laws, consumer protection laws.

We already have lots of stuff...

Adel Nehme: Yeah, you can't commit libel against anyone, 'cause there are libel laws, whether it's AI-generated or human...

Uthman: Yeah. So we already have all of that. What is it about AI that mandated the need for a specific regulation dedicated to the technology? And I'd say one of the most controversial points about the EU AI Act is the regulation of the size of the models themselves and the technical aspects. The question was, should we only be regulating the use case, which I think most people agree with, because it's basically how it's used and where risk occurs.

But when you start talking about regulating the size of the models...

How far are we going beyond the scope of what this was originally intended to be? And also, is regulating something like the size of models even a good proxy for risk, given that tech developers are actually trying to make them smaller? So again, is that actually a long-term solution?

So again, I think it remains to be seen, but what I'm hoping we'll see is a global, harmonized approach, where we kind of agree on what best practices look like for what you should be documenting, for things like AI risk management, even AI ethics in practice. Different cultures might have a different perspective on ethics, which might inform what prohibited practices look like globally, but hopefully companies will be in a position to take a clear stance.

Adel Nehme: Now, as we wrap up today's episode, do you have any final call to action, or maybe one single piece of advice, for listeners today that wanna get started with their responsible AI agenda within the organization?

Uthman: It's hard to narrow it down to one single piece of advice, right. It's: get together a plan, you know, because this is something the company has to do for all the reasons we discussed, right? Actually get together with a few experts. Most companies might have a volunteer network of people interested in this, to sit down and start thinking through, even just to paint a vision: two years from now, if you were to embed your vision of responsible AI in the company, what does this actually look like? Even take the compliance out of it, because we all agree that this is quite important, even for the future of work. When you enter your workplace, when you get up in the morning and you start going into work,

are you doing a daily commute or whatever, what does that actually look like? What does it feel like to work at this company? What do the processes look like? Really start having a vision of what that is, then work backwards from it and go, what do you need to start doing now to basically achieve this?

Adel Nehme: Okay, work backwards from the plan. That's probably the best single piece of advice; I'd say that is always the best place to start. Uthman, it was great to have you on DataFramed. Couldn't thank you enough.

Uthman: Thank you. Thanks so much for having me.
