
Building Trustworthy AI with Beena Ammanath, Global Head of the Deloitte AI Institute

Beena and Adel cover the core principles of trustworthy AI, the interplay of ethics and AI in various industries, how to make trustworthy AI practical, the importance of AI literacy when promoting responsible and trustworthy AI, and a lot more.
Updated Oct 2023

Guest
Beena Ammanath

Beena Ammanath is an award-winning senior technology executive with extensive experience in AI and digital transformation. Her career has spanned leadership roles in e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains. She is also the author of the groundbreaking book, Trustworthy AI.

Beena currently leads the Global Deloitte AI Institute and Trustworthy AI/Ethical Technology at Deloitte. Prior to this, she was the CTO-AI at Hewlett Packard Enterprise. A champion for women and multicultural inclusion in technology and business, Beena founded Humans for AI, a 501(c)(3) non-profit promoting diversity and inclusion in AI. Her work and contributions have been acknowledged with numerous awards and recognitions, such as the 2016 Women Super Achiever Award from the World Women’s Leadership Congress and induction into WITI’s 2017 Women in Technology Hall of Fame.

Beena was honored by UC Berkeley as the 2018 Woman of the Year for Business Analytics, by the San Francisco Business Times as one of the 2017 Most Influential Women in the Bay Area, and by the National Diversity Council as one of the Top 50 Multicultural Leaders in Tech.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

I'm excited about how much attention AI regulation is getting now, so I'm hopeful that we'll see more progress on regulations. Of course it takes time, but hopefully this will be a time when most organizations start imposing some level of self-regulation from a trustworthy AI aspect, right? So I'm very optimistic about where we are headed with AI, as long as we consider the downsides and address them.

The challenge is that there is not a one-size-fits-all approach to making AI safe. It cannot be just one AI regulation. There will be overarching principles and overarching ideas on what we align with, whether at a national level or a global level. But at the end of the day, it is going to depend on the use case. So I see regulations coming in for two broad categories. One is for existing industries—the pre-internet-era industries like health care, banking and financial systems, energy, and manufacturing. Then there is this other category, which is primarily the industries that have emerged in the last few decades, right? Where there have not been specific regulations, whether it is broader technology platforms or social media. Those are completely new industries and you probably will see more regulations over there. But bottom line, I don't think it's going to be one regulation. It's going to be very different depending on the industry and the use case itself.

Key Takeaways

1

While bias and fairness are crucial in AI, their relevance varies based on the use case. It's essential to determine the accepted level of bias for each specific AI application, especially when human data is involved.

2

When making decisions about AI ethics, especially concerning bias and fairness, it's vital to involve a broad range of stakeholders, including business leadership, risk management, and legal teams.

3

To implement trustworthy AI, it's essential to identify relevant dimensions for each use case, define metrics, and involve key stakeholders to ensure the AI system aligns with business and ethical goals.


Transcript

Adel Nehme: Hello everyone. Welcome to DataFramed. I'm Adel, Data Evangelist and educator at DataCamp. And if you're new here, DataFramed is a weekly podcast in which we explore how individuals and organizations can succeed with data and AI. Throughout the past year, we've seen AI go from a nice-to-have for many organizations to a must-have.

Almost every boardroom today is talking about its generative AI strategy. And as a result, there's never been more pressure on the data team to deliver with AI. However, as the pressure to deliver with AI grows, the need to build safe and trustworthy experiences has never been more important. So how do we balance between innovation and building these trustworthy experiences?

How do you make responsible AI practical? Who should we get into the room when discussing the trust angle of AI use cases? Here to answer these questions is Beena Ammanath. Beena leads Trustworthy AI and Technology Trust Ethics at Deloitte. She is the author of Trustworthy AI, a book that can help businesses navigate trust and ethics in AI.

Beena has extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software, services, and industrial domains, and a lot more. In our conversation today, we delve into the core principles of trustworthy AI, the interplay of ethics and AI in various industries, how to make trustworthy AI practical, who the primary stakeholders are for ensuring trustworthy AI, the importance of AI literacy when promoting responsible and trustworthy AI, and a lot more.

If you enjoyed this episode, make sure to let us know in the comments, on social, or elsewhere. And now, on to today's episode. Beena Ammanath, it's great to have you on the show.

Beena Ammanath: Thank you for having me, Adel.

Adel Nehme: You're the global head of the Deloitte AI Institute, a technology trust leader and author of the book Trustworthy AI. So maybe first setting the stage, you know, especially in the industry, I'd love to get some definitions out of the way. We see a lot of different terms within the responsible AI ethics sphere.

We have AI ethics, responsible AI, trustworthy AI. So maybe walk us through your understanding of these terms: where they intersect, where they diverge, and how you settled on the term trustworthy AI for your book.

Beena Ammanath: Yeah, Adel, that's a great question to start with, just in terms of level setting. And I want to approach it very much from an enterprise lens. To solve for AI ethics, or all the challenges that we hear about in the headlines, I think you have to look at it from an enterprise lens.

And at the end of the day, what matters from an enterprise perspective, if you are building an AI solution or AI product, is that you want your product or solution to be trustworthy. And that includes, in my mind, the dimensions of ethics, responsibility and accountability, transparency, all these different dimensions, because trust encompasses a number of different dimensions at the end of the day.

If your users trust your product, then they're going to use it. Then they're going to adopt it, and your product is going to succeed. So I think trust, for me, is a good way to define it, and it also enables us to measure it when the product gets rolled out.

Adel Nehme: Okay, that's really great. And I really want to deep dive with you on what you mentioned here, the dimensions of trustworthy AI. In your book, you identify six distinct dimensions of trustworthy AI. I'd love it if you could share these dimensions first at a high level, and more importantly, walk me through the process of identifying these dimensions as you were writing the book.

Beena Ammanath: Yeah, I think we'll start with the big one that always comes up in every ethics conversation or headline: many times it comes up around bias and fairness. And in my experience, and Adel, I'm sure your audience has seen my prior experience as well, I've worked in a variety of industries, right?

And yes, fairness and bias is absolutely crucial, but it may not be applicable in certain use cases. So the best way to think about these dimensions is to identify which of them are relevant for the use case you're working on or the AI product you're building, and then define the metrics to measure and track them. So let's start with fairness and bias. In most cases, if you're not using human data, or if you're not directly influencing human behavior, then fairness and bias may not come into the picture.

What do I mean by that? For anything that is directly consumer facing, whether it is personalized marketing or patient diagnosis, bias is a crucial factor, and you absolutely have to be able to address it. But if you are looking at, for example, predicting a machine failure, predicting a jet engine failure, then bias may not be as relevant.

It is more IoT data. It is more machine data. It is about looking at service records. It's about looking at the black box data and then predicting whether that jet engine is going to fail and when it will fail, right? So I think it is very important that you take the dimensions I'm going to walk through and identify which ones are relevant.

And once you identify the dimension, and here I'm hitting a little bit on the process part that you mentioned, it's important to define what the accepted level is, especially for something like bias. We know that it is impossible to have completely unbiased algorithms.

It just doesn't work. So then, what's the accepted level of bias? Say you're using AI, say facial recognition technology, on a factory floor in a manufacturing plant to prevent worker accidents. If you see somebody's eyes are drooping and they look like they might be falling asleep on the job,

you might want to take some preventive action to prevent the accident. But that same technology, facial recognition, can be used in a law enforcement scenario to tag somebody as a criminal, and the accepted level of bias there is absolutely zero. Because if you flag somebody as a criminal wrongfully, then their life gets disrupted in a way that's unrecoverable.

So there, it's zero tolerance for bias. But in that factory scenario, the question to ask is: is it helping us prevent accidents? Has it changed the metrics? Has the number of accidents reduced by 10 percent, by 90 percent? What's the accepted level?

Yes, it may not be catching every possible scenario, but at the same time, do you still want to use this algorithm? Think of an extreme example, right? We know facial recognition technology is being used by certain NGOs to identify human trafficking victims and kidnapping victims, pretty much in a law enforcement type of setting, where it's being used at traffic stoplights and so on.

Now, the question to ask is: what's the accepted level of bias? Is it helping us rescue 60 percent more victims than we could if we didn't use the technology? Is that okay? Do we understand its bias, and is that still an acceptable metric? So, for each of these dimensions that we'll discuss today, I think it's important to identify whether it's relevant

for your specific product, project, or use case, and what the accepted level of that dimension is. What's the accepted level of bias in this specific case?
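
For readers who want to see what "define the accepted level" can look like in practice, here is a minimal Python sketch. It computes one simple fairness metric (the gap in positive-outcome rates between groups) and compares it against a per-use-case tolerance. The column names, use-case labels, and thresholds are illustrative assumptions for this example, not values Beena prescribes.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Absolute gap in positive-outcome rates between groups (0 = perfectly balanced)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical per-use-case tolerances: near zero for law enforcement tagging,
# looser for a factory-floor drowsiness alert whose goal is preventing accidents.
ACCEPTED_BIAS = {
    "law_enforcement_tagging": 0.0,
    "factory_drowsiness_alert": 0.10,
}

def check_bias(df: pd.DataFrame, use_case: str) -> bool:
    """Return True if the measured gap is within the level this use case accepts."""
    gap = demographic_parity_gap(df, group_col="group", outcome_col="flagged")
    accepted = ACCEPTED_BIAS[use_case]
    print(f"{use_case}: gap={gap:.3f}, accepted<={accepted:.3f}, pass={gap <= accepted}")
    return gap <= accepted
```

The specific metric matters less than the pattern: the threshold is decided per use case by the stakeholders discussed next, documented, and then tracked like any other release gate.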

Adel Nehme: Yeah, I love the way you focus here on applying that trade-off thinking depending on the use case, especially when it comes to bias and fairness. I find it's a very pragmatic view on how to approach these different dimensions. And maybe thinking about bias and fairness, how have you seen the landscape of solutions for leaders looking to build out use cases that leverage human data?

How should we think about the trade-off when we start leveraging use cases that use human data in this regard?

Beena Ammanath: That's a great point. You're going into the next level of detail, right? Like, who makes that decision? What's the accepted level of bias? And that's where you need to bring in the key stakeholders. It's across the organization. It is not just the data scientist, and it is not just the data science team or the AI team or the IT team.

Even the technology team where the project is being built. You have to absolutely make sure, when you are defining those metrics, the accepted level of bias, that you have business leadership involved. At the end of the day it is not a technology decision, because whatever decision is made comes with inherent risk

to your organization. So in that factory floor scenario, where you're using potentially biased facial recognition technology to prevent accidents, it's not just the IT team that's developing and deploying it. It's the quality and risk management team.

It's the brand protection team. It's your legal and compliance team. It is the business owner, the GM for the business that manages that factory. It can be a quick meeting, but one where you have that discussion and make an informed decision: this is the benefit, these are the risks that come with it, and are we okay with moving forward? I think it is absolutely crucial to bring in the stakeholders beyond just the data or AI team to make that informed decision, so that the business has bought in and is aware of the risks that come with it.

Adel Nehme: That's really great. And I'm excited to unpack with you even further the more tactical ways teams right now can start operationalizing trustworthy AI. But I want to go into the dimensions a bit more. You mentioned a great example, predicting jet engine failure, as a use case where bias and fairness is not going to be very foundational or important.

But that is a use case where another dimension is very important, which is robustness and reliability. Because if there is a sacrifice made on robustness and reliability here, that could mean the difference between life and death in a lot of ways. So I'd love to learn from your perspective: what makes an AI system robust and reliable?

Beena Ammanath: I hope, Adel, you read my book because you're literally quoting me. So that's great.

Adel Nehme: I am. I have read the book. Yeah.

Beena Ammanath: I think, and that's where, again, that term trust is important for me, right? As opposed to just ethics. Fairness and bias squarely fall under the ethical implications. But for your AI product to be trustworthy, it has to be reliable.

And in the world that we live in today, we hear a lot about hallucination, right? Take a step back. What is hallucination? It is a reliability issue. It is software that's not consistently producing reliable results. So it is a reliability issue, which we cutely call hallucination today.

But for me, reliability means you are able to consistently provide accurate results, results that match the metrics that you define. What do I mean by that? The reliability metrics can be different. And let's use hallucination, because that's in the common vernacular today, right?

If you are using AI for personalized, targeted ads, reliability can be a bit off. It can be a little bit more flexible, right? Okay, you serve the wrong ad to the wrong persona. But if you are using that algorithm for some kind of patient diagnosis, it needs to be 100 percent accurate, even if there is a human in the loop. So you take the reliability dimension and you're able to look at it from that use case lens. See, in your world, Adel, in education, if you recommend the wrong course to the wrong student, what's the accepted level of reliability, right?

Is that a life and death scenario? Probably not. But what if you are recommending the wrong drug to a patient? If you're using AI, and I was just seeing recently a result where a lot of consumers are beginning to use AI to look at treatments or understand medical jargon.

So, you know, I think it really comes down to that use case and the reliability metrics. In the simplest way, thanks to GenAI, I can say reliability is all about hallucination and how you tackle it.
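
To make the reliability dimension concrete, here is a small sketch of how a team might score an AI system against a per-use-case reliability target, in the spirit of "define the metrics and the accepted level." The exact-match scoring, function names, and thresholds are illustrative assumptions only.

```python
from typing import Callable

# Hypothetical reliability targets: ad targeting can tolerate more misses
# than anything that touches patient care.
RELIABILITY_TARGET = {
    "ad_targeting": 0.80,
    "patient_diagnosis_support": 0.99,
}

def reliability_score(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Fraction of evaluation cases where the model's answer matches the reference answer."""
    hits = sum(1 for prompt, expected in cases if model(prompt).strip() == expected.strip())
    return hits / len(cases)

def meets_target(model: Callable[[str], str], cases: list[tuple[str, str]], use_case: str) -> bool:
    """Compare the measured reliability score against the agreed target for this use case."""
    score = reliability_score(model, cases)
    target = RELIABILITY_TARGET[use_case]
    print(f"{use_case}: reliability={score:.2%}, target>={target:.0%}")
    return score >= target
```

In practice the scoring function would be richer than exact string matching (for generative output it might check answers against source documents), but the shape of the check stays the same: a measured score compared with a threshold the stakeholders agreed on.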

Adel Nehme: Yeah, we're definitely going to expand on how trustworthy AI has evolved given the advent of generative AI here as well. But given that you just mentioned generative AI, one thing that I've also been thinking about when it comes to generative AI is the concept of transparency, which is another dimension that you have laid out in the book.

I remember five years ago we used to talk a lot about the importance of opening up black box models and trying to make models interpretable. And now we live in an era of large language models, where we have models with billions, if not trillions, of parameters as these models become more sophisticated.

So how do you think about the concept of transparency and the trade-offs leaders need to make now between transparency and performance, as these models become more and more performant?

Beena Ammanath: Yeah. I think with transparency there are two parts to it. Transparency is showing a little bit of how that model arrived at a certain result or a certain outcome, right? But it's also making sure that the stakeholders understand how the model operates and the risks

that come with it, you know, the kind of training data that has been used, what the gaps are, what it has not been trained on. So transparency is being able to provide that level of visibility, from a dimension perspective. But in terms of metrics, how do you weigh the opportunity versus the risk?

I think, again, it comes down to the use case. If you're using a maps app, I'm going to get from point A to point B, and you probably don't really care how it arrived at that optimal path. Sometimes my app does this, where it might take you through a longer path, right?

But if you're a doctor, going back to that patient care example, where your algorithm is recommending a certain treatment pathway, then as a doctor it is absolutely crucial to understand what factors were used for the algorithm to make that decision. Or we've seen cases where credit cards were denied, right? Driving that level of transparency is crucial, so it comes down to the application of the AI. It comes down to: what's the level of transparency that's needed? Sometimes it's okay for the AI to be a black box. I do not need to know why my music app is recommending me a certain song, right? I really don't need to know the black box behind it; I do not need that information. But if my doctor is recommending a certain treatment plan based on the AI recommendation, then I as a consumer want to know.

And the doctor needs to know as well, to make sure that it's the right path, right? Because there's a lot more impact if that level of transparency is not available. And at the end of the day, if the end consumer doesn't feel that you as a business, as an enterprise, are being transparent, then they're not going to trust whatever AI solutions you put out. So I think transparency is an absolutely crucial pillar for building trust in your AI.

Adel Nehme: That's awesome. One common thread across our discussion so far is the importance of looking at it on a use case by use case basis, right? The importance of understanding the level of risk across different use cases. So how do you think about a framework for categorizing risk for different

AI applications? I know this is maybe a bit of an annoying question, in the sense that there are different use cases across different industries, but what is a framework that you would apply when it comes to categorizing applications of AI vis-à-vis their risk level or potential risk level?

Beena Ammanath: Yeah. And look, there are going to be regulations that come into play and help us define this in a more standard way. But as of today, there is a level of self-regulation that needs to happen, right? And it is the organization that is building or using the AI that has to define the tolerable level of risk for their enterprise.

And that's why it's absolutely important to include your chief legal or compliance officer and chief risk officer in these conversations, right? I don't think it's one-size-fits-all, even within an organization. We've talked a little bit about patient diagnosis, but if you're using AI for hospital bed allocation, and you're doing it in an automated way, is that less risky? If your marketing department is using AI for targeted marketing, would that be considered less risky?

And I think that's where having that AI steering committee, or a group of stakeholders that come together and have that discussion, comes in. Taking it beyond your IT or AI team to bring in the business leaders, having that discussion, and defining the level of risk tolerance for your own organization.

I think that's the number one thing that needs to be done to define the level of risk. There is no one-size-fits-all, even at an organizational level. It will be based on what the AI marketing tool might be doing versus what the AI tool is doing on the factory floor.

Very different, but the stakeholders have to weigh in. And I have a pillar in the book, Adel, called accountability. I think that one kind of forces the discussion, because accountability is about, as you are building the project or the product, designing and defining upfront

who is accountable for it when things go wrong. Because we've seen scenarios in the last three, four years: when the algorithm does something wrong, the CEO might have to go face law enforcement. Sometimes the data scientist gets fired, right?

There is no fixed protocol for it. So defining accountability up front actually serves two purposes. One is it forces that risk discussion; it forces the discussion of bringing all the stakeholders together. And secondly, it also puts a level of urgency on having that discussion and mitigating those risks, right?

Because I fundamentally believe that all of us, technologists especially, are not setting out to do harm. It is just that in our processes, in our project management tools, there are no checks for these risks, right? For too long, we focused just on the ROI and the positive value creation of AI.

And look, I'm a technologist by training. It is very easy to get enamored by the cool things that AI can do, but when doing it at an enterprise level, I think bringing in that risk lens is crucial. Even if it's just adding a simple check into your project management tool, asking, have the risks for trust and ethics been considered, right?

And forcing that conversation. Even if you spend just 5 percent of your entire project planning time brainstorming it, if you identify those risks, I fundamentally believe that you will do something to fix them. The discussion doesn't happen today because our processes are not set up to force that discussion.

So that's why we are not thinking about those risks. But thankfully we are now at a point where there is more urgency around operationalizing for these risks, and especially after regulations come in, it will be a mandate. But I think there's a lot companies can do today to start operationalizing.

Adel Nehme: The accountability piece you mentioned here segues well into my next question, because leaders listening to this podcast are trying to think about, okay, how do I operationalize these conversations within my organization?

So in your opinion, who should take the lead in terms of organizing different stakeholders to discuss the potential risks of AI use cases, and more importantly, who should the stakeholders be? You mentioned quite a few different profiles, from legal and risk to business functional stakeholders. And what should that checklist look like?

Beena Ammanath: Yeah, that's a great point. And that's actually part of my role at Deloitte, so I can talk to it from my own experience, leading a change of this scale at a large organization like Deloitte. And it comes from years of prior experience building data analytics and AI products and taking them to market.

I've seen those gaps. So I think if a company is farther ahead in their journey, you probably need an explicit AI ethics officer or a leader, like I lead our Technology Trust Ethics focus, which includes the trust and ethical component, right?

And there are a few things that I've seen across organizations, because there are a number of industries that are in the mode of self-regulating, and there are best practices evolving and being shared, right? So the first step should be to gather this steering committee, this group of stakeholders, which is cross-business and cross-functional. Meaning, if you have different business lines, the CEO or a representative of the CEO should be part of this committee. Then the functions, marketing, finance, legal and compliance, risk, they should also have a seat. At the end of the day, it is bringing in your cross-business and cross-functional

committee of senior-most leaders to be able to have these discussions, define those metrics, and move forward. That's number one. Number two is building ethics-level fluency, right? I'm assuming the organization has some level of AI fluency, but if you don't, having a base level of AI fluency training where every employee in the organization understands the basics of AI, right?

And maybe they take a DataCamp course to do that. But understand the basics of AI. One big component should be around the ethical principles, the trust principles, that the organization believes in, and how employees can get engaged. So everybody is on the same page, talking the same language.

When the question of fairness comes up, they know how the organization internally tracks it and who to reach out to. So that leads to the third step. First is having a cross-business, cross-functional leadership group. Second, a base level of AI training and AI ethics training. And third is changing the processes.

If you're using a project management tool, whatever it might be, make sure there is a check. Among the risks we check for when we start a new project, there is financial risk, brand reputation risk. Add a risk around trust and ethics if AI is involved, right?

And make sure that it is getting filled out, to be able to change your processes. And then if an employee has a concern, who do they call, right? It is super important that every employee in the organization has the knowledge, understands the processes, and is using them.

The reason is, it's not just your data science team that should be thinking about and looking at ethics. There is probably an intern in your marketing team right now evaluating an AI tool for personalized marketing. That intern, the junior-most employees, should know what questions to ask.

Whether it is just asking, what training data sets were used, right? They should be empowered, they should have enough information to ask the right questions, so that they do that frontline mitigation of any issues that might come up later. And then there should be a process where, if they need

further information, they reach out to the central team, who can then help guide them. So these three are immediate steps that any organization that's using AI or building AI can take. And I think it's important to point out that this is not a challenge just for big tech or companies that are building AI.

Even if your company is just using AI, different applications have different ethical impacts. So there are impacts that will come into it. It has to be on both the creation side and the applied side.
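
The third step, adding a trust and ethics check to existing project processes, can be as lightweight as a required field on the project intake form. Below is an illustrative Python sketch of such a gate; the dimension names follow the pillars discussed in this episode, while the data structure, project name, and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Dimensions drawn from the trustworthy AI discussion in this episode.
REQUIRED_IF_AI = {
    "fairness_bias", "reliability", "transparency",
    "accountability", "privacy", "safety_security",
}

@dataclass
class ProjectIntake:
    """Illustrative intake record with the added trust/ethics checklist."""
    name: str
    uses_ai: bool
    dimensions_reviewed: set = field(default_factory=set)

def trust_check(project: ProjectIntake) -> list:
    """Return the trustworthy-AI dimensions still missing a documented review."""
    if not project.uses_ai:
        return []
    return sorted(REQUIRED_IF_AI - project.dimensions_reviewed)

intake = ProjectIntake(
    name="personalized-marketing-pilot",
    uses_ai=True,
    dimensions_reviewed={"fairness_bias", "privacy"},
)
missing = trust_check(intake)
if missing:
    print("Escalate to the AI steering committee; undocumented dimensions:", missing)
```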

Adel Nehme: That's great. One thing I wanted to tease out from this conversation is when exactly this conversation should start. Something I've been reading about quite a bit is the concept of shifting left, which in the context of data means considering reliability and the implications of data and AI systems much earlier in the development cycle. So at what point in the development process, as AI use cases are being developed, should this conversation start?

Beena Ammanath: I will answer it in the context of a pillar I have under trustworthy AI called responsible AI. And responsible AI is, I would say, the most philosophical and vague one. But that is where you pause and ask the question: is this the right thing to do? Should we be doing this? Is this the right thing for broader humanity, society, for the future?

I think of the classic Jurassic Park scenario, right? Remember the quote? Just because your scientists could, they did, without pausing and thinking about it. I think this is where we have this very powerful technology; if you are planning to use it a certain way, there are long-term impacts.

So being responsible citizens was never part of an engineer's or scientist's mandate, but now more than ever, I think it is an absolute mandate. Even when you get that first idea of building something, whether it's a large language model or whether it is using AI in a certain way, ask: should I be doing this, right?

I think it should be when you get that first idea: pause and ask, should we even go down this path? And look, I've seen scenarios where, and again, I'll switch now from the philosophical to the enterprise lens, right?

See, I fundamentally believe that all technologists come with a good mindset, wanting to do good, wanting to change the world for the better. I'm an optimist that way, but as you can probably tell, I'm very pragmatic as well, right? Because we are not trained to think about the long-term implications.

We just see the immediate: oh, there is a new revenue opportunity, or we see a cost-saving opportunity. And I'll tell you about a couple of scenarios. One was an enterprise with extremely good intent that wanted to make sure they could retain their top-most employees. This was one of my prior roles, in the very early days of AI, where the idea was, let's combine employee social media activity with their email data

and see if they're unhappy. Just focus on the top-most employees, and we will make sure they're not looking to leave, right? Instead of having them leave and then trying to retain them, let's be proactive. Let's use AI. I'm like, wait, but do the employees know about this kind of matching? It comes with good intent.

Look, when social media started, it was all about connecting and bringing communities together. Great intent, great idea, right? But if you had paused and thought, how are we going to make money off this, and with targeted advertising, what kind of social impact could it have on teenage minds, right?

I think there would have been guardrails put into place. And that's where my optimism comes in, right? If we can think ahead a little bit and make sure our processes are set up in a way where you are thinking about and able to identify some of the ways it could go wrong, I'm very confident that we would actually address it, right?

So when that initial idea comes, pause and think, and I think it needs to be at that very beginning stage. And of course, then throughout, right? This is not something that stops at the idea phase, or even at design and development. It goes right into your MLOps, because the way machine learning operates, it is changing, right?

So as model drift happens on the value creation side, drift is also happening on the ethical risk side, right? So you have to continuously monitor. It starts right at the conception of the idea and continues until the model gets retired.
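
Continuous monitoring "until the model gets retired" is typically implemented as a drift check in the MLOps pipeline. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test to flag when production data no longer looks like the training data; the synthetic data, threshold, and follow-up action are illustrative assumptions rather than a prescribed setup.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values: np.ndarray, live_values: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the live feature distribution diverges from the training distribution."""
    stat, p_value = ks_2samp(train_values, live_values)
    drifted = p_value < p_threshold
    print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}, drift={drifted}")
    return drifted

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # distribution the model was trained on
live = rng.normal(0.4, 1.2, 5_000)    # recent production data that has shifted

if drift_alert(train, live):
    print("Re-run the fairness and reliability checks before the model keeps serving decisions.")
```

A drift trigger like this can re-run the bias and reliability checks sketched earlier, which is what "drift on the ethical risk side" looks like operationally.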

Adel Nehme: Yeah, the road to hell is paved with good intentions, as they say. And given that you're discussing this, I think, brilliant example of social media, one of the reasons the social media ecosystem evolved the way it did is also competitive pressure, right?

And now we see a lot of competitive pressure as well with generative AI, with a lot of organizations in the hype cycle where we are at the moment. Everyone's thinking about generative AI use cases, how to deploy them, and how to operationalize generative AI. And there's a delicate balance between being first on a use case and trustworthy AI, the importance of creating AI systems that in the long term also benefit humanity.

So when deploying AI technologies right now, generative AI use cases and technologies, how should organizations approach this balance? What is a good place to start in your opinion?

Beena Ammanath: I think having this risk conversation early on, proactively thinking of the ways it could go wrong, is a great starting point for any AI conversation, right? It should not be an afterthought. At the end of the day, it's not like you'll be able to eliminate all the risk, right?

And the idea is to make it an informed decision. We've heard, for too long, the notion of unintended consequences. There will always be unintended consequences, but can we bring that bar down, right? Can we make sure that we are proactively thinking about the ways it could go wrong, and then reducing the number of unintended consequences?

We have to move away from the phase of massive-scale unintended consequences where there is no rollback option. So the idea is not to completely eliminate risk, but to make informed decisions. Document it, have those discussions, bring in the experts to talk about it. Make sure you're getting the guidance, and don't hide under the excuse of unintended consequences.

Adel Nehme: That's great. One thing that we alluded to throughout the discussion is the importance of AI fluency or literacy at that kind of base level. I think that base level of fluency enables a common AI language or data language within the organization that promotes that conversation and that cohesion around responsibility and risk.

So maybe switching gears a bit and talking about the importance of AI fluency, something that we care about quite a lot here at DataCamp, what do you think are the ABCs of trustworthy AI within the organization?

Beena Ammanath: So in terms of dimensions, I think we've covered most of them, and there are a couple more that I will refer to, like privacy in the context of AI, right? We have quite a few laws around data privacy and data protection, but when that same data is used by AI, how do you untrain a model that was trained on a certain data set, right?

The nuances that AI brings to the table around our data privacy concepts are important to consider. And then the last one is safe and secure, right? AI can actually expose vulnerabilities within your system, and we've seen scenarios where the chatbot goes rogue or racist, right?

Because of the way AI operates, it's not your traditional software that you built and deployed and that consistently produced the same results. It's constantly evolving and changing; the logic is changing, right? So how do you make sure that you're protected from security hacks, especially if you are using it for, say, jet engine maintenance, or you are looking at manufacturing plants? We live in a world where we are surrounded by IoT everywhere, right? If you do not think about AI and the security vulnerabilities it exposes, I think you're walking on the edge over there. So thinking about security and safety is important.

We've also heard of safety in the context of the decisions that AI makes. And I think with safety in AI, there is physical safety. We've heard the examples: when it's a self-driving car, who should it hit, which pedestrian? That's the scenario we've all heard about.

But it's also about safety from a mental health and psychological perspective, right? What kind of human behavior is going to be induced by using this sort of AI? What's going to be the impact on mental health? Is the software that we are building going to increase teen suicide rates, right?

So I think it is important to think about safety broadly and holistically, beyond just physical safety, also mental and psychological safety, to be able to make sure that your AI product or solution is really trustworthy. The last one I'll touch on, Adel, is explainability. It's a tricky one, because it's easy to explain an algorithm and how it works.

But for companies that are serious about trustworthy AI, it has to be explained in a way that your end stakeholders understand. Meaning you explain how your algorithm works very differently to your CEO and her board versus your end customer, right? So explainability works only if it's understood by the end user. It cannot be just publishing one big, massive technical document and saying we've made it explainable, right?

I think you have to get down to that nuanced level. Especially where transparency is crucial, you have to be able to explain in a way, in the language, that the end user understands. And here is where I see something like GenAI playing a huge role.

Think about it, right? GenAI is extremely powerful, and there are many good things it can do. But think of explainability: being able to explain in different languages, depending on your user base or the audience. I think that's something that can be done with generative AI, right?

So all of these, if the intent is there, if your stakeholder leadership group is prioritizing this, I think technology itself can be actually used to solve this.

Adel Nehme: Yeah, I couldn't agree more, and I'm very excited personally about the potential generative AI has in unlocking what we think of as data literacy by design, right? The ability to inherently provide that useful context within the tool. And as we're discussing the potential of generative AI here, I'd love to also talk to you about where you think we're headed within the next 12 months in this space.

So what do you think will be the stories that define trustworthy AI over the upcoming quarters? How do you think generative AI will evolve? I'd love for you to share your thinking here.

Beena Ammanath: So Adel, I'll just start with this: when generative AI burst onto the scene, it was actually not new, right? Large language models are something the tech community has been working on for a while. I think we were even looking at it 12 years ago. It just burst into the public mindset in the past few years. And I got a question like, ooh, are you going to add a chapter on generative AI to Trustworthy AI? And I said no, everything that's there is still applicable. The foundation is still the same. You might call it by different names.

But generative AI still has the same anchoring principles of trustworthy AI. So what I see evolving over the next 12 months: there is going to be a burst of use cases. In fact, we put out an AI dossier of 60 AI use cases across different industries, right?

So we do see there are going to be several high-impact, low-risk use cases that come out over the next 12 months. I think there's also going to be much more awareness of the risk factors, and best practices from different industries of what works, what doesn't work, where it can be used effectively, and where we need to put in additional guardrails, right?

I think there will be a lot more real-world applications of generative AI. I'm also excited about how much attention AI regulation is getting now, so I'm hopeful that we'll see more progress on regulations. Of course, it takes time, but hopefully this will be a time when most organizations start imposing some level of self-regulation from a trustworthy AI aspect, right?

So I'm very optimistic about where we are headed with AI, as long as we consider the downsides and address them.

Adel Nehme: Yeah, and you're touching upon regulation here. How do you see the regulatory landscape evolving over time as AI becomes more powerful?

Beena Ammanath: I think the conversations have been happening for a long time. The challenge, as we've seen in our conversation today, Adel, is that it is not one-size-fits-all, right? It cannot be just one AI regulation. There will be overarching principles and overarching ideas on what we align with, right?

Whether at a national level or a global level. But at the end of the day, it is going to depend on the use case. So I see regulations coming in for two broad categories. One is for existing industries, right? The pre-internet-era industries like health care, banking and financial systems, energy, and manufacturing, where there have already been regulations. There it'll be more about extending those regulations to make sure that AI impact is weighed into that regulation, right?

Or there might be additional regulations that come in, but fewer. Then there is this other category, which is primarily the industries that have emerged in the last few decades, where there have not been specific regulations, whether it is broader technology platforms or social media. Those are completely new industries, and you probably will see more regulations over there. But bottom line, I don't think it's going to be one regulation.

It's going to be very different depending on the industry and the use case itself.

Adel Nehme: Okay, that's really great. And I think that's a great place to end on. Beena, it was amazing having you on the show. Really appreciated your insights. Maybe before we wrap up, do you have any final call to action or closing words to share with our audience?

Beena Ammanath: Based on your audience and the work that they're doing, I think you hold tremendous power in your hands with the technology that you're working on. It's time to step up and be responsible citizens about it.

Adel Nehme: That's awesome. Thank you so much, Beena, for coming on the podcast.

Beena Ammanath: Thank you for having me again.
