
The Future of Responsible AI

In this episode of DataFramed, Adel speaks with Maria Luciana Axente, Responsible AI and AI for Good Lead at PwC UK, about the state and future of responsible AI.

Updated Aug 2021

Guest
Maria Luciana Axente

In her role as Responsible AI and AI for Good Lead at PwC, Maria leads the implementation of ethics in AI for the firm while partnering with industry, academia, governments, NGOs and civil society to harness the power of AI in an ethical and responsible manner, acknowledging its benefits and risks in many walks of life. She has played a crucial part in the development and setup of PwC UK's AI Center of Excellence, the firm's AI strategy and, most recently, the development of PwC's Responsible AI Toolkit, the firm's methodology for embedding ethics in AI. Maria is a globally recognized AI ethics expert, an Advisory Board member of the UK All-Party Parliamentary Group on AI, a member of BSI/ISO and IEEE AI standards groups, a Fellow of the RSA and an advocate for gender diversity and children and youth rights in the age of AI.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp, where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations, and about the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Transcript

Adel Nehme: Hello. This is Adel Nehme from DataCamp, and welcome to DataFramed, a podcast covering all things data and its impact on organizations across the world. I think we can all agree that one of the first things the general public thinks about when they hear the term AI is the ethics of AI and its potential impact on humanity. A lot of the time, though, the perception of the risks associated with AI is driven by popular media, pop culture, and high-profile public failures of AI.

Adel Nehme: These failures have prompted many organizations to think twice about the models they put into production and to analyze the risks of AI in order to deploy it responsibly. This is why I'm excited to have Maria Luciana Axente on today's podcast. Maria is Responsible AI and AI for Good Lead at PwC UK. In her role, Maria leads the implementation of ethics in AI for the firm while partnering with industry, academia, governments, NGOs and civil society to harness the power of AI in an ethical and responsible manner while acknowledging its benefits and risks in many walks of life. She has played a crucial part in the development and setup of PwC UK's AI Center of Excellence, the firm's AI strategy and, most recently, the development of PwC's Responsible AI Toolkit, the firm's methodology for embedding ethics in AI.

Adel Nehme: Maria is a globally recognized AI ethics expert, an advisory board member for the UK All-Party Parliamentary Group on AI, a member of BSI/ISO and IEEE AI standard groups, a fellow of the RSA and an advocate for gender diversity and children and youth rights in the age of AI. Throughout the episode, Maria and I talk about her background, where responsible AI intersects with and diverges from the ethics of AI, the state of responsible AI in organizations today, how responsible AI is linked to organizational culture and values and, most importantly, what data scientists can do today to ensure that their work is used ethically and responsibly by organizations, and how bottom-up activism can nudge organizations in the proper direction.

Adel Nehme: If you enjoy today's conversation with Maria and you want to check out previous episodes of the podcast and show notes, make sure to go to www.datacamp.com/podcast. Maria, it's great to have you on the show. I'm really excited to talk to you about the state of responsible AI, AI governance and accountability, and how organizations can start on their responsible AI journey. But before we do, can you give us a brief introduction to your background and how you got into the data and ethics space?

Maria Luciana Axente: Of course. Hello everyone, and thank you, Adel, for inviting me. It's a pleasure to be talking to you, and let's start exploring what the buzz, what the fuss, is about the ethics of AI: why would you talk about it, and what can we do about it? So, a bit about my background. I work for PwC UK, which probably most of you know. It's a professional services company, and the question might be, what is PwC doing in the AI space? Hopefully, throughout the conversation we'll be having today, you'll get more and more of a flavor of why companies like PwC need to have a role in shaping the story of ethical and responsible AI. So, I joined the firm seven years ago, and my background was in business and digital transformation, so I used to set up businesses and transform businesses, and then I moved into the technology and digital space to work on that transformation with the help of technology.

Maria Luciana Axente: And at some point, I had the opportunity to narrow my focus from the wide range of emerging technologies, which was the case before that, to something a bit more specialized, which is AI. And it was a match made in heaven, because when we started exploring AI at PwC through lenses other than technology, we realized how important it is to understand the whole context in which this novel technology is being developed and used, and how important it is to understand the considerations that need to be part of the design, much beyond the traditional boundaries of experience design, and back into how businesses and their context will change as a result of using a tool that has its own agency and behaves very differently from all previous technology.

Maria Luciana Axente: So, for the last four years, I've been part of the AI Center of Excellence. That's been a fascinating journey because I was there from the beginning, so I helped set it up and put the strategy together. It was very much like a new venture that we developed. And gradually, based on my previous experience and my education, I started exploring the ethical layer: what are the key moral considerations we need to take into account, and what do they mean from a business perspective? And if we have this vision of a good life with AI, then how do we make it happen? What needs to be in place? What needs to be changed? That's why we came up with this concept of responsible AI, which allows us to create a vision of a good life with AI, but also to create what is needed to achieve that vision, and to be very practical, because, ultimately, we are a for-profit company that needs to demonstrate the value added by what we create. So, we can't afford just to have visions about emerging technology. We need to be able to deliver it in a way that is sustainable.

Defining Responsible AI

Adel Nehme: That's great, and I'm excited to unpack all of that with you. I want to first start off by asking how you would define responsible AI. Over the past decade, a lot of the discussion on AI risk has fallen under the umbrella term of AI ethics, and over the past few years, we've seen the gradual rise of responsible AI. I'd love it if you could define responsible AI, specifically in how it intersects with AI ethics and how it doesn't.

Maria Luciana Axente: I think that's an excellent question, and I'm very grateful that you have asked it, because I think we need to start framing those concepts and understanding how they overlap, whether there's an overlap at all, and what the relationship between them is, so that we have that clarity. We know that, as always, if you don't frame a new concept well enough, you will struggle to make it happen; you will struggle to get it off the ground. So, this is quite personal to us and to myself, as I lead responsible AI at PwC UK, and we define the two terms in the following manner. The ethics of AI is a new applied ethics discipline that is in the process of being developed. We might say it's a branch of digital ethics or data ethics, or of the philosophy of information in general, but it's definitely a new branch of applied ethics that is concerned with studying the moral implications of the technologies we label AI.

Maria Luciana Axente: Those technologies ultimately have some key unique characteristics that make them quite different from all the technology we've seen before, mainly the fact that they display, they possess, agency. That means, on one hand, that they will interact with an external environment, they will adapt to this external environment and, to a certain extent, shape that external environment. And they have a certain degree of autonomy from human supervision. These types of new assets require new ways of thinking in terms of normative questions: is it right or is it wrong for us to be using these machines? How should we treat these types of agents? So, that's the discipline of the ethics of AI, which is a bit more abstract and also a bit more forward-looking, because we're stepping into areas we haven't been before. In the history of humanity, we haven't had assets like this that operate alongside us and push the boundaries of what's wrong or right.

Maria Luciana Axente: We have thought about it in fictional work, but we haven't had them in real life. So, we have to be able to reason and discuss and debate: what are those moral implications? And in this process, the ethics of AI will allow us to formulate a good life with AI. So, what does that mean? If we have these potent tools that have both benefits and risks, how do we make sure we use them in a way that is aligned with human goals? We are not just creating AI for the sake of doing it, or because we have some sort of God complex; we are aligning it with the purposes of humanity. Those purposes that we have as a human race are something very interesting, because we haven't had to come together to state them. Or maybe we did; I'll get to that in a second.

Maria Luciana Axente: We haven't had to come together to be that precise about wanting everyone to flourish in this way. In fact, we have: it's human rights. But AI ethics is the discipline that allows us not only to understand the moral implications, but also to say, "This is where we're heading. This is what's acceptable." And what's acceptable is not just to do this for the sake of doing it; it's because we want to achieve a human-related goal. Related to that is: how do we achieve the vision? Right? Because every vision has to have some sort of modus operandi, an operating modality that will allow us to get there. So, we need another discipline that collects everything we need to be able to achieve that vision. And that's where responsible AI is placed, because responsible AI is much more tactical. Now that we understand what's morally permissible, and also what the risk areas are, where things can go wrong, responsible AI can take that vision and translate it into tactical plans.

Maria Luciana Axente: And we have a set of approaches and tools and ways of thinking that are multidisciplinary in nature, that are holistic in nature, because we realize that the disruption AI brings is going to transform who we are; therefore, we need to approach this much more holistically, where responsible AI is the engine that allows us to get to that vision, if you want. And it has different approaches. It has the risk angle, which means not only understanding the risks attached to AI, the numerous ones AI brings, but also which current risks, at the organizational, societal and personal levels, will be amplified by AI. Then you have a new operating model: how do you govern, how do you keep control over a self-learning artifact that operates in a different way, that is stochastic in nature?

Maria Luciana Axente: And therefore, we need to upgrade our linear processes into something that's much more real-time and dynamic. And also, if we agree that we have a moral vision, what will be the values that need to be incorporated in each and every single use case and context in which a specific application will operate? Bring it all together, the risk side, the governance side, but also the values that need to be incorporated, and it gives you this new discipline that requires input from a wide range of disciplines, case by case, example by example, that come together to actually say, "This is what we need to be doing to achieve that vision of good AI in this specific context."

Adel Nehme: So, correct me if I'm wrong. In summary, the ethics of AI is about how we should align our moral values with AI systems, whereas responsible AI is more about how to operationalize these moral values, alongside other disciplines, in order for us to create value out of AI responsibly. Is that correct?

Maria Luciana Axente: It's something on those lines, but I will say, the ethics of AI is not the framework. It's a vision. It's where we want to go. Framework is the one that gets us there.

Current State of Responsible AI Adoption

Adel Nehme: Okay, awesome. So, you work with a lot of data and business leaders trying to integrate ethical practices into their AI development process. How do you view the current state of responsible AI adoption? Do you think that this is something the majority of companies are investing in and thinking about?

Maria Luciana Axente: I think that's an interesting story, and I think we need to separate a little bit the buzz being made publicly, the marketing and common narratives that we see out there, from the reality in the field. To really understand the state of responsible AI, we have two avenues, and we should put aside the noise being created, because obviously the last few years have brought us a lot of bad examples of AI, examples where AI is being misused, underused, overused and abused, and this has been widely reported in the press. As a result, we've seen a lot of public attention given to the negative implications and consequences of AI. But when we look at the state of responsible AI within enterprises, there are reasons to be optimistic. I think it all starts with the fact that, when it comes to a technology as powerful and yet unknown as AI, we need to change the mindset completely.

Maria Luciana Axente: It's not just about the set of benefits that the technology will deliver; there's a set of potential risks, potential harms, attached to it. And we need to bring the two into balance, and we need to have this mindset of: not only can we build it, because we have what it takes, we have the data, we have the processing power, but should we do it? And that's a stark change from the "Can I do it?" philosophy that underpins the computer science community. Right? If you start thinking with "Should I do it?", suddenly you see that there are both benefits and risks attached to a new endeavor. And when you put together a business plan, or to a certain extent any plan to use this technology, you will compare the two, and you will always proceed with caution, because you will always weigh the benefits against the risks.

Maria Luciana Axente: And that's something that gives us reason to be optimistic, because the public narrative has helped the executives working in AI, or with some sort of oversight over AI, to reconsider. Right? So, it's about benefits and risks. And starting with that, many organizations have actively started to bring it all together. As you said, if that's the case, if we need to consider risks as well as benefits, then obviously, let's go to the risk side and identify all the potential risks that could be triggered by AI, and not only identify them, but be able to define mitigation strategies and, with that, understand whether our organizations are capable of mitigating the risks associated with AI. In the research that we did last year, we surveyed about 1,000 executives on responsible AI from across the world, and they told us that an increasing number of executives have AI risk strategies and consider the risk side as part of the wider AI strategy.

Maria Luciana Axente: On the other hand, if we go beyond the risks of AI, acknowledging that, ultimately, the vast majority of AI risks are in fact ethical risks, we have seen quite a lot of uptake of the broader outlook of the ethics of AI: setting up initiatives that enable companies to explore the moral consequences of AI and, with that, take a longer-term view rather than a purely reactive one linked to risk, saying, "We will look to create internal policies that allow us to explore, from the perspective of our company, what direction we should be traveling in while using AI and the various technologies attached to it in certain contexts, who needs to be involved, what the decision process is, and ultimately what values we should be embedding, encoding, in the processes that lead to the building, deployment and use of those technologies, and make sure that we have a way of controlling it all."

Maria Luciana Axente: That sounds grand, and it sounds like a lot needs to be done. But we have seen a lot of companies, quite a high number, adopting a code of conduct and principles. We've seen companies interested in setting up ethics boards that allow them to explore and debate and have a transparent and constructive process for ethical decision-making, but also developing impact assessments and other types of tools that allow them to weigh the consequences of the AI being developed.

Maria Luciana Axente: That's enough evidence for us to say that things are going in the right direction. A lot more needs to be done, but at least we received a clear signal that, when someone mentions the word AI, the very next thought that comes to mind is that we need to be thinking about ethics, risk identification and accountability. And we need to make sure that we have all this baked into whatever plan we have ahead of us so that, not only are we reaping all the fruits and all the benefits promised by this technology, but we remain in control, and we understand where things can go wrong and how best to deal with them.

Adel Nehme: That's very encouraging. And you mentioned here the presence of organizations that are reacting to ethical risks and ones that are more proactive. What do you think is the main differentiator between organizations that treat responsible AI as a serious source of value and the others?

Maria Luciana Axente: I think there are two groups of companies. One is where we see AI adoption is quite high. And I'll put aside the tech companies for a second, because they are in a different bucket altogether; I think the challenges the big tech companies have are less to do with the ethics of the technology they produce and use, and more to do with corporate and business ethics than anything. Talking about the non-tech companies, everyone else, let's say: from what we've seen with our clients, these considerations are very much linked with the maturity of AI adoption. More maturity in understanding AI, and in deploying AI at scale, triggers those considerations, because when companies start seeing how AI operates within their business, and more likely than not have experienced some of those negative implications live, especially around bias and discrimination, it makes them much more cautious in the way they approach AI.

Maria Luciana Axente: But it's also very much linked with the industry they operate in. So, for example, when we look at which ethical principles are or are not a priority for different industries, we have found, not to our surprise, that reliability, robustness and security is the most important concern for all companies. So, the priority is making sure the solution is robust and stable.

Maria Luciana Axente: But then when we look at other priorities, they differ by sector. Right? It's worth mentioning that, at the same level as reliability, robustness and security, data privacy has been a top ethical priority across the board, because in some parts of the world it's a mandatory requirement; therefore, embedding privacy in all data-driven technologies is done as part of a compliance process. But priorities vary by industry: for example, in technology, media and telecom, human agency is a top concern. In public services and health, beneficial AI is a top concern. Right? In energy, accountability is a priority for the executives in this field.

Maria Luciana Axente: So, on one hand there's the maturity of adoption; the other factor is how the industries are shaped, what sorts of applications they will be deploying, and how those applications will power their operations. Will they be closer to the customer? Will they touch personal data? Or will they be more back-office types of AI? And when you put this in balance, you'll see that companies that are mature will not just start thinking about it; they already have initiatives in place. The ones that are starting this journey will consider those different implications, but they will probably be slower to adopt, because it's linked very much with their pace of AI adoption.

How to Operationalize Responsible AI Within the Organization

Adel Nehme: So, you mentioned here robustness, security and reliability, and I would like to segue into discussing how to operationalize responsible AI within the organization. One of the best resources I've seen on responsible AI is a framework your team developed, titled The Responsible AI Toolkit. Can you walk us through that framework and the different components that go into it?

Maria Luciana Axente: Yes. Thank you for the appreciation. I think we are very proud of the work we've been doing, because we've put a lot of passion into it. We created the toolkit three years ago now, and we launched it two years ago. It was a bunch of us from different territories who came together, based on our previous experience, client work and internal experience, to create a toolkit that is both flexible and forward-looking, but also holistic in nature, because we understood that, with the potential of AI to be that disruptive, responsible AI needs to bring together, under one umbrella, approaches that allow for this flexible yet holistic way of working. So, we ended up creating a set of Lego-like tools, both code-based and non-code-based, each geared towards different problems to be solved.

Maria Luciana Axente: So, we have assets that test for reliability, robustness and security, or for explainability or fairness and discrimination, which are very much the plug-and-play type of thing. That's very much where the whole industry is in terms of creating solutions that allow for ad hoc testing of the performance of algorithms. But at the same time, we also have non-code-based assets, more advisory and consulting in nature, that allow for an assessment of where the organization is: in terms of understanding what values it stands by when it comes to developing AI, how well it is able to translate those values into principles and then into design requirements, and also how well in sync those values are with the context of the organization and with the regulation in various territories. And lastly: how do you remain in control? How do you develop governance models that look across the AI life cycle, which starts not just with the business requirements?

Maria Luciana Axente: That's the model design, the application design. But the reality is that the AI life cycle starts with the strategic outlook, when you decide what the strategic priorities are in which AI is going to be incorporated, how you approach it, and who's going to be involved in it. And the governance of AI brings together all sorts of tools that allow you to operate various flavors of AI, various types of AI, whether you built it in house or acquired it from a third party. You have to have an operating model that allows you to be in control, at least until the disruption, the change, that AI triggers in the linearity of business processes, the structure of jobs and the working culture gradually adapts to autonomous agents. And having those different assets allows us to say to our clients, "If your main concern is identifying risks, we can help you identify those risks, create the right controls, but also update your operating model so that you actually have the ability to address risks like bias."

Maria Luciana Axente: In the same way, companies will say, if they're based in Denmark, for example, where there's a legal requirement for companies to have a code of conduct on data ethics, "What should my ethical principles be? How should I align them? How should I translate them into my internal policies? And who should be involved in bringing this policy to life?" We'll be able to address those questions. But at the same time, we always keep an eye on the longer vision, which is a good life with AI. And while we do all those different individual pieces of work, the reason for having such a holistic approach to responsible AI is to say that there are more steps for you to take if you are serious, if you are committed to delivering ethical AI. That implies that, regardless of whether you started the journey by addressing the risks of AI, you should address, if possible, all of those different elements, because without them it will be difficult to achieve an ethical outcome with AI.
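To make the plug-and-play testing Maria describes more concrete, here is a minimal sketch of the kind of ad hoc fairness check such code-based assets automate. This is not PwC's Responsible AI Toolkit; it is a generic illustration with hypothetical column names and example data, and real toolkits cover many more properties than the two group-fairness metrics shown here.

```python
# A minimal, generic sketch of an ad hoc fairness check, in the spirit of the
# plug-and-play assets described above. NOT PwC's Responsible AI Toolkit;
# the column names and example data are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive model decisions per group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.max() - rates.min()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest selection rate divided by highest; the common 'four-fifths rule'
    flags values below 0.8 for review."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical scored loan applications: 1 = approved by the model.
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

print(selection_rates(preds, "group", "approved"))         # A: 0.75, B: 0.25
print(demographic_parity_gap(preds, "group", "approved"))  # 0.5
print(disparate_impact_ratio(preds, "group", "approved"))  # ~0.33 -> flag for review
```

Checks like this are simple to bolt onto an existing scoring pipeline, which is what makes them plug-and-play; the advisory, non-code-based assets Maria describes address the questions such metrics cannot answer on their own.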

Ethical Code of Conduct

Adel Nehme: I'd love to unpack what you mentioned here. So, one of the things you mentioned is helping organizations create an ethical code of conduct and integrate those values into their AI systems. Can you walk us through what that process looks like and what organizations can do here? You mentioned the Denmark use case. Assuming all organizations faced requirements like Denmark's, how could they go about operationalizing an ethical charter?

Maria Luciana Axente: I think, first of all, it's understanding that you have to go back to the values of the organization. You can't just pick ethical principles that you want to apply to AI out of the sky, or align them with another organization's, before understanding who you are as a group. And this is where most of the tech companies will have a problem, because there seems to be a disconnect between the values they have signed up to as a collective, the organizational values that need to drive the vision and the ambition of those organizations, and how those values are actually translated into the way they operate, including the tech they develop and use. And that's probably the first and most important step: acknowledge that you have those values, and acknowledge that in the world of AI, the translation of those values into design requirements requires much more honesty than before.

Maria Luciana Axente: If before you didn't have much visibility, or a way of proving whether your values were aligned and embedded or not, now is the time. I keep on saying to people: good organizations produce good AI. Bad organizations will produce a different type of AI. I'm not saying bad; I'm saying less ethical organizations. So, I think it's really important, as a first step, to understand that you have a set of values that ultimately need to be reflected in everything you do and say, not just say, everything you do. The second level is: if that's the case, what are the key ethical and moral considerations being triggered by AI? This is where we spent quite a bit of time, building on the research of so many brilliant experts in the field who have been thinking about it for a very long time, looking at AI, assessing the various moral implications it raises, and iterating ethical principles based on that.

Maria Luciana Axente: We started with groups such as the one behind the Asilomar principles, and then we had the IEEE initiative, which is probably the largest to date, because they spent three or four years and engaged more than 300 experts in the field, with the view of collecting those ethical and moral issues and distilling them into guidance that is easy to use by people with less experience of these questions, the engineers, who need very clear guidance and rules on how to work with those implications and how to translate them. Much more recent are the OECD and the European Commission: obviously, the EU's trustworthy AI guidelines and the OECD principles. They're all interlinked, because ultimately it started with groups of AI experts, alongside some philosophers, who brainstormed together about those implications, with the various groups doing this separately.

Maria Luciana Axente: Then more and more groups iterated and further enhanced this thinking, giving us close to 200 different documents and 155 different principles which, when we put them together, we aggregated and distilled into nine meta-ethical principles. Those are data privacy; robustness and security; transparency and explainability; beneficial AI; accountability; safety; lawfulness and compliance; and human agency, of course. That's our way at PwC of grouping it all together, to say that if you look at all those 155 different principles that have been drafted by all those groups, for-profit, not-for-profit, multinational or supranational organizations, they all have a lot in common; it's just a matter of how some of the issues are expressed. So we aggregated them and found those nine, and if you then look at how the OECD or the European Commission have done this, it's very similar. Right?

Maria Luciana Axente: Being able to aggregate it all, we can say, "These are all the moral considerations we have right now, and while there will be others we need to think about over the longer term, what happens right now needs to have these rules incorporated to guide the design." Right? And the second step is: looking at all those different principles, pick the ones that are most relevant to your organization, trace them back to your values and, very importantly, demonstrate how those ethical principles that are to be translated into norms and design requirements align with human rights, because ultimately human rights is the value system that has been signed up to by almost all of the 190-odd countries in the world; it's actually law, it's binding. So, therefore, we need to demonstrate how a specific principle is linked with various human rights articles, and how various applications either fulfill that principle or are in danger of breaking it.
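To illustrate the traceability Maria describes, from principles back to human rights articles and forward to applications, here is a minimal, hypothetical sketch. The mapping and helper function below are illustrative assumptions for this article, not PwC's methodology or an authoritative legal mapping.

```python
# Illustrative (hypothetical) traceability from AI ethics principles to
# Universal Declaration of Human Rights articles. Simplified example only;
# not PwC's methodology or a complete legal mapping.
PRINCIPLE_TO_UDHR = {
    "data privacy": ["UDHR Art. 12 (privacy)"],
    "fairness and non-discrimination": [
        "UDHR Art. 2 (non-discrimination)",
        "UDHR Art. 7 (equality before the law)",
    ],
    "human agency": ["UDHR Art. 19 (freedom of opinion and expression)"],
}

def unmapped_principles(declared: list[str]) -> list[str]:
    """Return declared principles that lack a documented human rights linkage."""
    return [p for p in declared if p not in PRINCIPLE_TO_UDHR]

# A hypothetical AI use case declaring the principles it claims to uphold.
use_case_principles = ["data privacy", "beneficial AI"]
print(unmapped_principles(use_case_principles))  # -> ['beneficial AI']
```

In practice, this linkage is documented and reviewed per use case rather than computed; the point is simply that every declared principle should trace back to a recognized rights framework so that each application can be assessed against it.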

Maria Luciana Axente: I'm not going to spend too much time going in that direction, but I will say that the third step in this process of operationalizing is to make sure that when you build this charter, you consult with everyone in the organization. Right? It's not enough for a group of people, or just someone who owns this in an organization, to say, "Okay, I'll pick the principles, I'll draft them, and here you go." That's not going to fly for long. You need to go through the process, and this is where a lot of companies, not everyone, but a lot of companies, are leapfrogging, trying to cut corners, saying, "Having the principles is enough. I'm just going to push this policy out." That's the slowest and most painful journey. You need to bring together different groups, different stakeholders, to sign off those principles, and to negotiate those principles with everyone who will influence how the values are incorporated or will be impacted by them.

Maria Luciana Axente: And it's only when you do this, and in some cases it might take years to formulate this policy. The OECD principles took about two years to formulate, for what is essentially a one-pager. But the process behind that, that extensive consultation with stakeholders, is the secret sauce of ensuring the ethical principles will then be properly operationalized, because in this process, not only do you get the support of, and consultation with, everyone who needs to be involved, but you also start the process of changing mindsets. In that process, you start debating why those principles matter, and you can iterate based on current or potential examples of how you use data and AI in your organization and what's going to happen. You start the process of operationalizing the principles by engaging everyone. And by the time you start designing frameworks and tools, you are already halfway there, because people will have a higher degree of awareness and understanding of why this is important and why it needs to be done.

Adel Nehme: That's fantastic. And you mentioned AI governance and accountability as one dimension of the Responsible AI Toolkit. AI governance can require the collaboration and accountability of data scientists, business leaders, experts, process managers, operations specialists, and really a variety of different people and personas present within any organization. And they may not all have the same "data language" or data skills, or the same level of data literacy. Obviously, data literacy and AI literacy are important for organizations, but how do you think we should expand our conception of data literacy to address AI ethics, risk and responsibility within the workplace?

Maria Luciana Axente: I think before we talk about AI and data literacy, we need to start talking about digital literacy in general, and also literacy about the implications of technology. Every single time we aim to educate people on technology, we avoid describing what can go wrong, and whether there is an alternative. And as a result, we go along thinking that AI, and technology in general, is a panacea for every single problem humanity has. We need to step away from this attitude and reconsider altogether what we're trying to solve here, and what the consequences of building a technology like AI are. And more and more, we have scholars who now come out and say there are lots of hidden costs to developing AI that we don't see, and we take for granted the level of sophistication of AI behind which, in fact, there's a lot of hidden work and effort that is not being acknowledged.

Maria Luciana Axente: And there's a beautiful book that has just come out. It's going to be available in the UK probably in the next few days; I think it's available in Europe already. It's called Atlas of AI, by one of my favorite people in AI, Kate Crawford. What Kate does, and it's absolutely brilliant, is describe AI as a phenomenon that brings together not just data and algorithms, which is the preferred framing of everyone working in this field, especially the engineers, but all the other elements that come together to give us the data and the algorithms. What are the natural resources being harvested from the surface of the planet? What are the ecological costs of doing this? What are the environmental costs of training a model, a language model, for example? And if we start replicating that, if we start having more of these models with trillions of parameters, what does it mean for the environment? But she also very much stresses how much hidden labor goes into labeling the data, and how much this labor is kept out of the supply chain of AI, if you want, giving the impression that AI is more intelligent than it is, to the extent that she concludes that data is not oil; it's not a natural resource that is there to be harvested; in fact, it's about people's lives.

Maria Luciana Axente: So, we still have to find and agree on a narrative of what data is for us before we go any further. And Kate's conclusion is that, ultimately, AI is neither artificial nor intelligent. I don't want to ruin the surprise for our listeners by giving too much away from the book, but what I will say is that the book is exactly the type of narrative we need to be having when it comes to AI: understanding the full extent of the impact of AI, where it is coming from, who owns it, and what the interests behind it are, so that we collectively come together, challenge those entities, those who at the moment seem to disproportionately own parts of what enables powerful AI to be built, and be able to say, "We need to have a different approach to this. We need to consider it in a different way."

Maria Luciana Axente: And while some might say it's probably a bit too late, I would say this is exactly the right time to reconsider. It's exactly the right time for people who are now joining AI to rethink the whole phenomenon, in the context of rising inequality, which we now know AI can make so, so much worse without us even knowing, through the hidden automation that already exists in public services in so many parts of the world, and also in the context of the impact on the environment. It's the right time to have this conversation. It's the right time to unveil the hidden parts of AI and stop thinking that it's just a dataset and a model; to see what's behind that, and ask: how did we get to that dataset? How did we create this dataset, and who are the people represented in it?

Maria Luciana Axente: Then, if we are going in that direction, how will we change the lives of so many different people? I know these sound like enormous questions, but I think that in order for us to avoid absolute disaster in using AI, we need to start thinking in these terms. And while this is done so brilliantly by people like Kate Crawford and her brilliant crew at AI Now, and so many other activist groups around the world, I think that in our own little teams and organizations, what we can learn and take inspiration from in those scholars and visionaries is this: we need to think beyond the immediate borders of our perception and vision, into what the work I do as a data scientist will actually change, how that changes the level of responsibility and accountability I should have, and how to start acting as an advocate for change.

Maria Luciana Axente: And sometimes, the boundaries of accountability need to be pushed from the team higher up to the business unit, higher up to the company, and also to society. Until we have this grassroots activism at the data scientist level, we won't be able to completely change or transform the mindset of the whole technology world, because we need the people inside, the people who build this, to acknowledge that their job is much more impactful than writing a piece of code or processing a dataset; it is actually changing people's lives.

Maria Luciana Axente: And while there's no law out there, nothing that forces you to think about it, I have confidence that there are lots of good people working in this industry who will understand what's at stake, and they will learn how to be good agents for change and build AI in a way that accounts for these negative implications. And that's the literacy I want us to start having in these places: not so much learning, "Oh, this is how you build a machine learning algorithm; this is how you build a voice assistant." No, it's understanding the implications and the impact, and then working out from there how best to build so that we achieve the positive outcomes and are able to keep the negative ones under control.

Call to Action

Adel Nehme: I completely agree with you on this vision of AI and data literacy that incorporates AI ethics and the AI value chain and what it looks like. As we're ending on this inspiring note, what is your final call to action for listeners of the show?

Maria Luciana Axente: Don't take things at face value; challenge. Challenge everything. We need you, where you are, to challenge, to inform yourselves about the real potential of this technology, who's behind it, where it is, and how you can individually make a change where you are. And I wouldn't say this if I hadn't been exposed to the fantastic work of people like Kate Crawford and so many like her who advocate tirelessly for a different approach to AI. Only by individually informing ourselves, trying to find ways to act where we are, and changing our own mindsets first, before we ask our companies to provide us with frameworks, methodologies and policies, I think we have a lot of leverage ourselves, being the prime builders, the ones closest to building these tools, to make a change.

Maria Luciana Axente: And while things are going in the right direction, and I'm hoping to see much more progress in the realm of the top-down approach, companies developing the right frameworks around responsible AI, that's not going to get us very far if we don't have the bottom-up approach, where people like yourselves understand that this is truly a unique moment in history, where we have in our hands a technology that can take us, as humanity, either to a very good place or to a dark place.

Maria Luciana Axente: Although I was never too much of a fan of what people like Elon Musk or Stephen Hawking have said, I think there is a benefit to raising the alarm in that direction, because it's almost like saying, "That's where you don't want to go." So, if you don't want to go there, get yourself together and work towards a different outcome, because that dark outcome is possible no matter how much you deny it. Technology is understood very little by the vast majority of people, including politicians.

Maria Luciana Axente: It can be easily politicized, and no, it's not going to be AI that takes over the world; it's going to be people developing and using AI in a way that grabs more power into their own hands. So, we need to be careful about that. And the best way to do it is to start being active participants in this, and not just say, "It's just my job to code. It's just my job to cleanse this dataset." It's much more than that, guys. And only if we come together can we do it. We're still a small community, but I'm hoping that the new generation that is coming, the ones training to step into the AI jobs of the future, will be inspired to take up this vision, and they will join us, and together we will continue to push the boundaries of how AI is being created right now and how AI should be developed and used in the future.

Adel Nehme: Maria, thank you so much for coming on the podcast. I really appreciate you sharing your insights.

Maria Luciana Axente: Thank you very much for having me.

Adel Nehme: That's it for today's episode of DataFramed. Thanks for being with us. I really enjoyed Maria's impassioned call to action on how data scientists can assume more responsibility for their work, and her insights on the state of responsible AI. If you enjoyed this podcast, make sure to leave a review on iTunes. Our next episode will be with Brent Dykes on effective data storytelling for more impactful data science. I hope it will be useful for you, and we'll catch you next time on DataFramed.
