Reid Blackman, Ph.D., is the author of “Ethical Machines” (Harvard Business Review Press), creator and host of the podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on their federal AI regulations, was a founding member of EY’s AI Advisory Board, and a Senior Advisor to the Deloitte AI Institute. His work, which includes advising and speaking to organizations including AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes. His written work appears in The Harvard Business Review and The New York Times. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill.
Richie helps organizations get from a vague sense of "hey we ought to get better at using data" to having realistic plans to become successful data-driven organizations. He's been a data scientist since before it was called data science, and has written several books and created many DataCamp courses on the subject.
How things actually get done in the organization, or how they ought to get done, depends very much on what the organization is like. So check your model for bias, have a cross-functional team involved. Okay. But the real work, the real mitigation, is in the details, not in the high-level best practices. It's in: given how our organization operates, given the personnel we have, given the regulations with which we already have to comply, given our existing policies and governance structures, given all those things, how do we best implement those so-called best practices? That's where the action is. That's where things get difficult.
If your company is using AI, then you just have to start thinking about the risks involved. It'd be silly not to. Think about the business risks: AI can be a significant investment of capital, or at least a significant investment of personnel, which, for a business, boils down to significant risk to capital. So there are risks, there are ethical risks, there are legal risks, and they need to be considered.
Building an ethical AI product is not a short process; it needs to be done with careful consideration, factoring in governance structures, policies, personnel, and the nuances of your own organization. Even then, you must continually check against these initial considerations to ensure you are staying within the guidelines you created for the project.
Evaluate the potential risks during the concept phase of creating an ethical AI tool. This can often surface the challenges you will face and help you decide whether the project can succeed before any significant effort is spent.
The team that works on ethical AI will not just be data scientists. It must include everyone relevant, from the C-suite down to the specialists required for certain projects, such as sociologists. The team members and the division of responsibilities among them need to be settled before starting any work, so that unexpected problems have clear owners who will work to resolve them.
Richie Cotton: Welcome to DataFramed, this is Richie. In this show, we've spent a lot of time talking about the benefits of AI because they're real and amazing. Unfortunately, using AI can also cause problems, either for your business or your customers or society in general. That means that organizations using AI need to do so in a responsible way.
And that means thinking about the ethics of AI. While ethics for AI sounds like a brand new problem, the good news is that philosophers have been thinking about ethics for thousands of years, and thinking about ethics for AI for decades. So while the problems may have only recently hit the mainstream, I've been pleased to discover that there's already lots of well established good practice to follow.
Our guest is Reid Blackman, the founder and CEO at Virtue Consultants, where he implements responsible AI and AI risk management programs for enterprises. Reid was previously a philosophy professor at Colgate University, and he advises the Canadian government on federal regulations for AI. Reid also hosts the Ethical Machines podcast.
In short, he knows an awful lot about the ethics of artificial intelligence in business, and I'm very excited to hear what he has to say. Reid, thank you for joining me on the show today.
Reid Blackman: You're welcome. It's my pleasure. Maybe. Let's find out.
Richie Cotton: Yeah, let's find out. Cool. So, to begin with, I just want to give a bit of background context. Can you give me an overview of what the main ethical risks of AI are?
Reid Blackman: Well, it's changed now that generative AI is on the loose. Things have changed a bit. The three big ones used to be, and to some extent still are, things having to do with ethically discriminatory or biased AI, privacy violations, and black box models. Those are the big three. So when I wrote my book, Ethical Machines, those are the three that I focused on.
Now that generative AI has come along, and here I'm talking about generating images, video, and text, with large language models like GPT, those three risks still apply. There's still the potential for bias, black box models, and privacy violations. But now we've got other kinds of concerns as well.
So now we have concerns like the conversational agent being manipulative. We have, obviously, IP violations, which hook up with the privacy violations. We have concerns around appearances: LLMs, large language models, chatbots, appear to be deliberating, appear to give explanations for why they're telling us the things they are, but they don't actually do that.
There's the problem of what gets called hallucinations: the model gives false outputs. So that's to give you a flavor of some of the big headlines. And when I say those are the big ones, I mean those are the big ones for corporations. There are other kinds of things that people tend to worry about, or some people worry about.
I'm not sure that I'm one of them, but some people worry about the existential threats of LLMs: we're all going to get dominated or killed, or both, by our robot overlords. There are concerns around the spread of misinformation, which is a concern for social media companies, though I wouldn't say it's one for most businesses.
Job loss due to automation seems to be a concern for many people. Again, I don't think it's obviously the responsibility of business to account for them, but those three headline risks, existential threat, mass job loss, and the spread of disinformation, are three other really big concerns. They're just not the main concerns of the vast majority (not all, but a lot) of my corporate clients, because it isn't within their wheelhouse to fix those problems.
Richie Cotton: Yeah, I guess most businesses are not necessarily going to be concerned with, like, the threat of humanity becoming extinct due to AI.
Reid Blackman: Right. If you're selling ketchup or clothing, you're not going to do much about it. If humanity is coming to an end, you know, there still have to be clothes. So yeah, there's nothing the Gap is going to do about it.
Richie Cotton: I do like that there's a fairly well-defined list of problems, and I'd love to get into some of these in more detail later. Before we get to that, do you have any specific examples of where AI has been misused and it's caused a problem?
Reid Blackman: Well, I'll give you one where it didn't cause a problem, but it could have, and it was one of the major things that kicked off investigations into biased or discriminatory AI. So years ago, 2017, 2018, something like that, Amazon developed a resume-reading AI. The thinking: we get tens of thousands of resumes every day.
We can't possibly go through all of them, or not efficiently, so let's have an AI, quote unquote, read those resumes and figure out who to interview and who not to. The thing turned out to be biased, or discriminatory, against women.
So, all else equal, it greenlit men's resumes and redlit women's resumes. They tried to mitigate the biases of the model, couldn't do it, and so they threw it out. This is a really well-known case for a lot of people, and it sort of shows, oh, look what Amazon did. But it's actually, in some ways, a success story.
Because yes, they built a biased model, but, number one, they had the good sense to actively look to see if it was biased. Two, they actually found the biases. Three, they tried to mitigate them. And four, when they couldn't sufficiently mitigate them, they actually threw out a project they'd worked on for two years.
That's a lot of time and money to toss. Maybe if you're Amazon it's not that much money, but it's still money. And so there are many ways in which it's a success story, because at the end of the day they did the right thing. But it also shows how easy it is to accidentally create biased or discriminatory AI, and how difficult it can be to sufficiently mitigate that risk.
Richie Cotton: That's actually a really interesting framing, because I've heard that story a few times, and a lot of the time it's just, oh yeah, Amazon did this stupid thing with AI and it was bad. But actually you're saying that they did this in quite a responsible manner, and it's a good thing.
Reid Blackman: I will say, I guess I'm supposed to disclose that Amazon is a client, but I wrote the same thing in my book, and that was before they were a client. But yeah, back then, the fact that they thought to check for bias, this was before the conversation around AI ethics exploded.
The conversation on AI ethics arguably really exploded in the past six to nine months. It started picking up steam around 2019 or 2020, when it got outside of academia and people were talking about biased AI quite frequently. There are exceptions, but the point is: these were data scientists and data engineers, or AI engineers, whatever you want to call them, who thought to look into this before it was the cool thing to do, so to speak. So it is a success story. And those data scientists were actively trying to mitigate the biases. And it turns out to be a technically difficult problem.
It's not a matter of being pure of heart; that doesn't fix your models. There are certain strategies and tactics, quite technical in nature, that need to be successfully executed on. And sometimes you can't successfully execute on them, due to limited time and resources, or due to limited research into the nature of the problem and the potential solutions available at the time.
So no, I think it's a success story. It's a big warning sign, it's a red flag, but it's an AI ethical risk success story.
Richie Cotton: Before you spend several years developing an AI and then discover there's some kind of bias in it, are there any ways, up front, of judging how difficult it's going to be to build a non-biased model, or a responsible AI model more generally?
Reid Blackman: Yeah, there are various things you can do. One thing you want to do first of all is think about the ways it might be ethically problematic during the concept phase, when you're still asking: should we build an AI solution at all? If so, what should it look like? That's a good place to start.
If you do it after you've built, tested, and validated the thing, and you're just about to deploy, or just about to transfer control to your client (let's say you're AWS and you built it for a client), then you might find the problem and say, oh my God, this is a problem. And now you either have to ship a bad product or go all the way back to the drawing board, which costs lots of time and money, and probably means broken contracts, broken promises about when something's going to get delivered. So the best thing to do is to start at the drawing board, the opening phase of thinking about the AI project itself.
And then there are other kinds of things you can do aside from thinking about, okay, what are the ways things might go wrong? Let's say you're talking about bias in particular, which gets a lot of press. You might start thinking, okay, one potential source of a discriminatory output is the training data.
So let's take a look at the training data. Let's take a deeper dive of that. Let's explore the data. Let's analyze the data and see if we might find potential for biased outputs there. So let's take one simple example. It's well known. Let's say you're building facial recognition software. You take a look at your data.
The training data is going to be pictures of people's faces. You might take a look, do an analysis, and figure out that, oh my God, only 0.01 percent are black women. This thing is going to wind up being terrible at recognizing black women, so we're going to need to beef up that training data: get more data, more examples of pictures of black women's faces, in various lighting conditions, et cetera. So you can do some data exploration. That's one way.
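The data exploration Reid describes can be sketched in a few lines of Python. Everything below is hypothetical: the records, the group labels, and the assumption that demographic labels are available in your dataset's metadata at all.

```python
from collections import Counter

# Hypothetical training records for a face-recognition model; in a real
# project the demographic labels would come from your dataset's metadata.
training_data = [
    {"id": 1, "group": "white_man"},
    {"id": 2, "group": "white_man"},
    {"id": 3, "group": "white_woman"},
    {"id": 4, "group": "black_man"},
    {"id": 5, "group": "black_woman"},
]

def representation_report(records):
    """Return each group's share of the training data."""
    counts = Counter(r["group"] for r in records)
    total = len(records)
    return {group: n / total for group, n in counts.items()}

report = representation_report(training_data)
for group, share in sorted(report.items()):
    print(f"{group:12s} {share:.1%}")
```

A group whose share is far below its share of the deployment population (the 0.01 percent of black women in Reid's example) is a flag to go collect more data before training.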
Richie Cotton: So really it's all about making sure that you understand what you're trying to do at the specification phase, because it's cheaper to solve the problems early on rather than later, after you've started developing things. Is that right?
Reid Blackman: Yeah. I mean, it is complicated, though. As far as I'm concerned, there's no such thing as intrinsically biased data. It's not like data sets are sitting around with a bias halo, or an anti-bias halo, hovering over them; a lot of it depends on the context in which you're going to deploy the thing. So, to take a sort of toy example, suppose you're going to use facial recognition software, and you're only going to use it in a town in northern Sweden or something, where everyone has very pale skin and they have a no-visitor policy.
I'm, you know, picking a toy example: you want to use facial recognition software in that little village or whatever. So let's say you build it using just the faces of people who look like the population on which you're going to deploy it. It would be weird to say, oh, it works perfectly, but it's biased or ethically discriminatory.
It would be weird to say you need to retrain your model so it can recognize, say, the faces of black women or Asian men or whatever it is. It's not ethically discriminatory; it's not having negative ethical impacts. So that's to say that context matters. It's the training data, used in such-and-such a way, in such-and-such a context, that can produce discriminatory outputs.
And so we'd better take a closer look at, say, the training data, or at how we weight the input variables, or at where we set the threshold, et cetera. So it's not, as people say, oh, we need to get rid of biased AI, so we have to make sure we have representative data sets. That's just a very narrow way of thinking about bias and bias mitigation in AI.
Richie Cotton: So really it's about getting the right data set for the problem you're going to be working on.
Reid Blackman: That's part of it. Part of it is the right data set, but it's also things like how you weight the input variables, and where you set the threshold. To give you an example, let's say you're doing credit scoring across various subpopulations, and I'm just going to make up arbitrary numbers. If someone scores above a 53.5, they get a thumbs up, they get credit. If they score below a 53.5, they get a thumbs down, no credit. And it turns out that when you set the threshold at 53.5 for yes-you-get-credit, no-you-don't, you get a certain kind of distribution of credit across various subpopulations: black, Asian, white, black women, Asian men, Asian women, et cetera. And maybe it turns out that distribution is discriminatory in some fashion. It may be, though, that if you move the threshold to 62.1, again an arbitrary number, so everyone above a 62.1 gets a thumbs up and everyone below it gets a thumbs down, then the way credit is distributed across subpopulations is, we're stipulating, non-discriminatory. We haven't changed the training data; we changed the threshold in order to distribute credit differently across the subpopulations. So there are various ways of getting at the problem, various strategies and tactics you can use to mitigate an ethically biased distribution.
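Reid's threshold example can be made concrete with a short sketch. The scores, group names, and cutoffs below are all invented, just as his 53.5 and 62.1 were:

```python
# Invented credit scores for two subpopulations.
scores = {
    "group_a": [50, 55, 60, 65, 70, 75],
    "group_b": [52, 54, 56, 58, 60, 80],
}

def approval_rates(scores_by_group, threshold):
    """Fraction of each group scoring at or above the cutoff."""
    return {
        group: sum(s >= threshold for s in vals) / len(vals)
        for group, vals in scores_by_group.items()
    }

# Same model, same data: only the threshold moves, yet the distribution
# of credit across the two groups changes.
print(approval_rates(scores, 53.5))  # both groups approved at 5/6
print(approval_rates(scores, 62.1))  # group_a 3/6, group_b only 1/6
```

With these made-up numbers the effect runs in the opposite direction from Reid's story (the lower threshold is the even-handed one), but the point is the same: the threshold is a mitigation lever independent of the training data.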
Richie Cotton: So it seems like this is an example of how you make your AI fair or have some sort of sense of justice about it. So, in general, are there steps you can take, like, to mitigate discrimination from AI? How do you ensure this sort of level of fairness across all these different subpopulations?
Reid Blackman: Yeah, so one thing I might highlight first is that, at a minimum, I think we're aiming for "not unfair." Sometimes people talk about what's fair as though there were just the one fair distribution. I think there's reasonable disagreement about what constitutes a fair distribution.
And what people should really focus on is what constitutes the discriminatory, what's clearly unacceptable. So for instance, if you find, as ProPublica's analysis did, that your AI is predicting that black defendants are twice as likely to commit crimes within the next two years as white defendants, holding fixed all other variables (criminal history, socioeconomic status, educational accomplishments), if you hold all those steady and black people are still predicted to be twice as likely, you might think: okay. I don't care how you think about what constitutes the Fair and the Just with a capital F and a capital J. It's clear that's ethically unacceptable. It's clear that's unfair, it's discriminatory, so it needs to change.
So I think that companies should think a lot about what constitutes the unacceptable, as opposed to what constitutes the ideal of justice or something along those lines. And then there are going to be various ways to think about that. One way that people, especially computer scientists and data scientists, think about fairness, when what's being distributed is a good across subpopulations, like credit, mortgage lending, interviews for a job, or admission to a university, is through various quantitative metrics. You see how the good is distributed across subpopulations, and you get a score according to various ways of scoring for fairness. There are something like two dozen-plus metrics for scoring fairness, and you can't score well on all of them at the same time; they're mutually incompatible. So if you score well on one, you're going to score poorly on another, and vice versa.
So then you have this question: okay, which metric is the appropriate one? Another way to frame that, my preferred way, is: which metrics, which ways of measuring fairness, do we think are inappropriate or unacceptable? Let's take those off the plate. And now we have a remaining set of metrics that are reasonable.
And then I think you can choose the metric that is compatible with other kinds of concerns that you have, like accuracy of the model.
Richie Cotton: that's really interesting that there are mutually incompatible sort of measures of fairness. Can you give me an example of what one of these sort of measures is? What's the sort of difference?
Reid Blackman: so, it's gonna be, unfortunately, a little bit abstract off the cuff. But suppose You think about fairness as we want to make sure that true positives are distributed equally across various subpopulations. So if someone say deserves credit, then they get it. And so, whether you're white or black or an Asian woman or an Asian man or a white woman or whatever it is that the rate of true positive is equivalent across all subpopulations.
On the other hand, you might think we want to make sure that false negatives are, equal across all those populations. And so then what you might do is you might set your AI to equalize false negatives across those populations. you can't do both those things at the same time by equalizing false negatives, you're going to make unequal the true positives and by.
Equalizing the true positives, you're going to make unequal the true, what did I say, the
Richie Cotton: False negatives, I think. Yeah.
Reid Blackman: thank you. I mean, that's the, that's an abstract way, I mean, that's not a, that's not a very helpful, that's not an example as such, but it's the structure of an example.
Richie Cotton: Okay yeah, I get it. So, basically you can either... Try and optimize for one thing where good, whether the right people get the good things and or you can optimize for the wrong people not getting the good thing or whatever.
Reid Blackman: Yeah, or, you know, you're minimizing false negatives or something like that. And then you optimize for that, but you can't optimize for both at the same time.
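Reid's example is deliberately schematic, so here is one concrete version of the same structure, with invented labels and predictions. It compares two common metrics, equal true positive rates ("equal opportunity") and equal approval rates (demographic parity): when the groups' base rates differ, a classifier can satisfy one but not the other.

```python
# Each pair is (actually_deserves_credit, model_says_yes); data invented.
group_a = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 0),
           (0, 0), (0, 0), (0, 0)]

def approval_rate(pairs):
    """Demographic parity looks at: share of the whole group approved."""
    return sum(p for _, p in pairs) / len(pairs)

def true_positive_rate(pairs):
    """Equal opportunity looks at: share of deserving people approved."""
    approved = [p for y, p in pairs if y == 1]
    return sum(approved) / len(approved)

# True positive rates match across the groups (2/3 each)...
assert true_positive_rate(group_a) == true_positive_rate(group_b)
# ...but approval rates do not: satisfying one metric broke the other.
print(approval_rate(group_a), approval_rate(group_b))
```

The assumed names and numbers are purely illustrative; the structural point, that two reasonable fairness metrics can pull in opposite directions, is the one Reid is making.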
Richie Cotton: okay. And so related to this idea of fairness is accountability. when do you need to care about Accountability for AI decisions, or if an AI makes a bad decision, then, there are some consequences to say, okay let's not do this, or let's try and mitigate those bad decisions.
Reid Blackman: one thing to highlight first is that the fairness biases are just one issue. I mean, I rattled off a bunch at the top of the program. There's a bunch of ethical risks. Bias and ethical discriminatory AI gets the lion's share of attention, at least it did before generative AI.
And that's for arguably good reason. I mean, it's a pretty big deal. And it's very easy to accidentally create it. But it's really the set of ethical risks. How do we think about accountability with those? So it's, I don't want to limit the discussion of accountability to just be about biased or discriminatory AI because it really applies to any ways that things can go ethically sideways.
Look: if the organization builds an ethically problematic AI, if it goes ethically off the rails, who's accountable? Who's responsible? Most generally, it's going to be senior-level people. I'm generally not a fan of thinking that it's primarily the data scientists' fault.
If the organization doesn't provide those data scientists and data engineers, et cetera, with the resources to engage in appropriate ethical risk identification and mitigation, it's not on the data scientists to perform a Herculean effort to do that work despite unchanged compensation, despite unchanging deadlines, despite lacking the tools, the training, et cetera, to engage in that kind of stuff.
It's up to the organization: senior-level officers, someone in the C-suite, like a chief data officer, a chief analytics officer, a chief information officer, a chief technology officer, et cetera. Someone senior needs to own an AI ethical risk program, someone who empowers those, you know, frontline defenders, the data scientists and data engineers, to engage in the kind of ethical risk due diligence they ought to do, and who provides them with the resources not just to perform those analyses but to do something about them: to actually devise and execute on strategies and tactics that will mitigate those biases.
What's more, it's incumbent upon those senior-level leaders to recognize that data scientists can only do so much. They have a very specific area of expertise: data science, or something along those lines, whatever you want to call it. And it turns out, I think, that there are other kinds of expertise relevant to identifying and mitigating those risks.
So, yes, data scientists are part of the picture, but you also might need legal in the room. You might need an ethicist in the room. You might need a, and this is a kind of lawyer, like a civil rights attorney in the room. You might need a sociologist. You might need a representative of the culture in which you plan to deploy your AI or something along those lines.
So there are going to be many cases, especially with high-risk applications of AI, where you need a cross-functional array of people thinking about the ethical risks of an application and appropriately mitigating those risks. If those people aren't present because senior leadership hasn't built the kind of infrastructure where that could be accommodated, then again, you can't expect your data scientists to do more than they can, given their training, expertise, et cetera.
Richie Cotton: I think that's a really great point that some of these skills around ethics are maybe not something that data scientists have been trained in by default. And so you do need someone with a, I guess, is it like a philosophy background? Is that where most ethicists come from?
Reid Blackman: And I think that's not inappropriate. I think that can be perfectly appropriate. I'm an ethicist by training: I have a PhD in philosophy, I was a philosophy professor for 10 years, and I've been researching, publishing, and teaching on ethics for over 20 years. But I'm not going to tell everyone, go hire an ethicist. They can be relevant; sometimes they're not. Let me give you an example in which I think they are relevant, one we've already covered: the bias issue. So you've got these quantitative metrics for fairness, different ways of thinking about fairness and what constitutes unfair. You want to ask, okay, and I said this earlier, I glided over it a little bit, but: what are the appropriate ways of thinking about fairness in this particular context, for this particular use case?
And what you mean by appropriate can vary. Maybe you mean ethically appropriate, business appropriate, reputationally appropriate, legally appropriate. There are different ways of thinking about what constitutes appropriateness. And my general view is that you need those different ways of thinking about what's appropriate represented in the discussion about how we handle this AI responsibly.
But you can't expect your average data scientist, or 99 percent of data scientists, to give an effective analysis of what constitutes the ethically, legally, reputationally, and business appropriate. That's just beyond their capacity.
Richie Cotton: And so once you've got data scientists and legal people and maybe an ethicist and maybe some other business people in there trying to make decisions on this, you've got a lot of people from different teams. Have you seen any organizational structures within businesses that help all these people come together?
Reid Blackman: Yeah, sure. I mean, look, you don't need this for every single use case. One thing I want to stress is that it's not the case that for every single A. I. You build, you need to have this cross function. No, no one's gonna do that. It's too much work. It's gonna slow things down too much. And so it's really a matter of one is at least training data scientists.
Product managers to understand when is something high risk? When does it need to go to that kind of committee? So there's different ways of doing risk scoring. You train your team to do proper risk scoring. One score means, go one score means, talk with the product manager about it.
One score means go to the ethics committee, but keep working on the project. One means. Red, stop everything, don't do anything else until you talk to these seven people, whatever it is. So it's not the case that, I don't want people to think, oh my god, this is such, we've got to, get the Avengers together every single time someone builds a model.
That's ridiculous, of course. It's really for the high-risk stuff. The thing that works well is to leverage existing governance structures: committees, risk boards, whatever it is. Different organizations operate differently. So, for instance, some have very strict, very good, robust governance around privacy,
where legal might be particularly involved. So, maybe you take those existing committees or people around legal, you augment them with, say, an ethicist, or you do additional training for them. And then off you go. In other cases, there's a really good compliance board. And so, okay, it's not the privacy stuff, it's a regulatory compliance board.
We're going to leverage that board; we're going to add new responsibilities to it. Maybe we add additional training, maybe we add an additional role or two. There's no one-size-fits-all, because different organizations have different priorities and different risk appetites.
They're structured in different ways, and their policies vary. Some are really privacy-forward, some are not at all. You have to add to existing governance in a way that's commensurate with how the organization already works. Otherwise, no one's going to do it.
Otherwise, it's just too disruptive.
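The tiered risk scoring Reid described might look something like this in practice. The tiers, thresholds, and actions here are all hypothetical; any real organization would calibrate its own.

```python
def triage(risk_score):
    """Map a use case's risk score (0-100) to a governance action."""
    if risk_score < 25:
        return "go"                      # proceed, no extra review
    if risk_score < 50:
        return "product_manager_review"  # flag it to the product manager
    if risk_score < 75:
        return "ethics_committee"        # committee review; work continues
    return "stop"                        # halt until formally cleared

for score in (10, 40, 60, 90):
    print(score, triage(score))
```

The value of encoding the tiers, even in something this simple, is that the escalation path is explicit and auditable rather than left to each team's judgment.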
Richie Cotton: Okay. Yeah. I mean, I can see how trying to get a lot of people from different teams together for every single AI use case is not going to be very scalable at all. And I can imagine how, using AI to, I don't know, create a new filter for your selfies or whatever, then that, that doesn't really require a sort of ethics committee.
Do you have any sort of guidelines for like when. wait, you do need to get the end, the Avengers together and and care about this stuff.
Reid Blackman: this is not exhaustive, but I think one of the... Main criterion is something like, does the A. I obstruct, hinder, block people's access to the basic goods of life, something along those lines. So if we're talking about jobs, you need a job, make money to pay your rent, pay for food to, pay for your kids, various whatever, just well, living expenses, frankly, just for kids, let alone extracurricular activities.
So jobs, credit or mortgage lending, healthcare and life sciences: these are often directly relevant to people's ability to live a minimally decent life. If it's to do with the basic goods of life, by which I mean the things people need regular, systematic access to in order to live a minimally decent life, then it's probably high risk.
If you're making a prediction about when the screws are going to arrive at the toy factory? All right, no Avengers necessary.
Richie Cotton: So really it's all about the consequences, who's going to benefit, who's going to be harmed, and by how much, it seems like.
Reid Blackman: Yeah, I mean here's another way to put it. Another kind of standard that's very similar to the one that I just articulated, but probably not exactly overlapping, at least the Venn diagram. If it's potentially going to violate someone's human rights, that would be a pretty big red flag. That seems pretty high risk.
If this thing potentially violates human rights, let's get on it. Someone call Captain America.
Richie Cotton: Absolutely. I can imagine how that's a pretty serious concern for anyone involved in creating that sort of AI. And so. I think sometimes when businesses go, okay, let's implement this new AI feature, the focus is going to be around, like, what are the sort of the benefits or the harms to business itself?
Do you have any advice on how to make sure that businesses start thinking about, like, what are the benefits or harms to their customers or to other people outside the business?
Reid Blackman: Look, if your company is using AI, then you just have to start thinking about the risks involved. It'd be silly not to. Yes, think about the business risks; there are business risks to it. It can be a significant investment of capital, or at least a significant investment of personnel, which, for a business, boils down to a significant risk to capital. So there are those risks, but like I said, there are also ethical risks and legal risks. And with generative AI in particular, which we haven't really talked about yet, there's lots of risk. There are two things about generative AI.
One: it's general purpose, so it can be used for all sorts of things, as opposed to non-generative AI, AI circa 2020 or the first three quarters of 2022, which is task-specific. It just does risk ratings for diabetes, or you've got cancer or you don't, or you get an interview or you don't.
But things like LLMs, or generative AI in general, are general purpose. They can be used for all sorts of stuff, including use cases we don't even know about; there are countless use cases as far as we know. So organizations don't know how people might use generative AI. That's worrying.
And secondly, relatedly, everyone in the organization has access to it. Everyone gets to use it. It's not just data scientists, far from it. It's not even that data scientists within your organization have approved it for use by your internal people. No, it's just that Bob in HR decided to use it and loaded HR data onto the thing, and now it leaked, and now there's a big problem.
So at this point it's bordering on negligent, frankly, not to have some kind of governance around how your organization does AI, because the risks are there. Every organization has sensitive information, for instance, that it doesn't want leaking out. Every organization does sales and could potentially use a chatbot for sales in a way that is manipulative or objectionable in some kind of way.
So yeah, problems abound. But I think you can get your hands around them. Some people think, oh my god, it's so complicated, there's nothing to do. But no. They'll say things like, oh, we need to do our research, we're going to look into it. Nah, we sort of know. People have been working in this field for a while. We know what to do; it's just a matter of political will to do it, and that political will, incidentally, is increasing, because leaders are anxious about it.
Richie Cotton: That's good that leaders in general are concerned about this sort of thing, and it sounds like a lot of the best practices are fairly well established. So can you maybe talk me through some of those?
Reid Blackman: I don't like the phrase "best practices." It makes it sound like a list of things to just do, and then you're good. But the best practices are necessarily high level, necessarily generic: check your models for bias, make sure that the training data doesn't violate anyone's privacy or IP,
don't use a black box model in very high risk situations. We could rattle off some quote-unquote best practices. But number one, insofar as best practices are practices agreed upon by the relevant community as what's good, we don't have that. We just don't have that kind of universal or widespread consensus, unless the consensus is so high level as to be almost meaningless.
The other thing, as I was saying before, is that how things actually get done in the organization, or how they ought to get done, depends very much on what the organization is like. So check your model for bias, have a cross-functional team involved. Okay. But the real work, the real mitigation, is in the details, not in the sort of high level best practices.
It's in: given how our organization operates, given the personnel we have, given the regulations with which we already have to comply, given our existing policies and governance structures, given all those things, how do we best implement those so-called best practices?
That's where the action is. That's where things get difficult. Doable, but not clear. The best practices are easy to articulate insofar as they're extremely high level: vet your models for ethical risks, mitigate the ethical risks, don't violate people's privacy, make sure your model's not discriminatory, don't engage in manipulation, et cetera, et cetera.
Check your generative AI for hallucinations, for false outputs. No kidding.
Richie Cotton: All right. So it sounds like there is a sort of very high level set of best practices, but actually the devil's in the details. The million dollar question, then, is: how do you stop Bob from HR just posting employee data on ChatGPT?
Reid Blackman: So one take on this is: let's just shut it down within our organization, let's just make sure that people can't use ChatGPT. We're going to create a firewall or whatever around various kinds of access to LLMs. I think this is not a good way of doing things.
I think it's similar to telling teenagers to be abstinent and then providing no sex education. It's just a bad idea. Bad things are going to happen, because they're going to do it, both those teenagers and your employees. They're going to use it. They have their own phones, right? They've got their own computers.
They're going to use it. So one, you've got to get a better, more clearly articulated policy. Here are at least two main things, maybe three. One: you've got to front-load learning and development. Most people don't know much about generative AI.
They don't really know the ethical risks of AI. They don't know the ways in which generative AI can go wrong. So educating enterprise-wide becomes more important than it was pre-generative AI, because pre-generative AI, your data scientists owned it, and they distributed it.
They gave it to people; they could give them instructions. Now with generative AI, since everyone has access to it, we don't have any kind of real control over who uses these tools. And so L&D becomes really important. So, front-loading L&D. Number two: getting an inventory of who's doing what.
So tell people about this in your L&D program: here's what you should do, here's what to look out for, and here's where you tell us that you're doing it, so that we have an inventory and understanding of what's going on within our organization.
Otherwise, who knows? You've got thousands, tens of thousands, in some cases hundreds of thousands of employees potentially using this thing. You have no idea who's using it for what. And it's going to be so use-case specific. Bob in the background of HR is like, oh, I can use it for this thing, right?
Some C-suite executive would have no idea about the problem Bob in HR faces, the one he might use generative AI to solve. So, make sure that there's that inventory. And third, related to the inventory piece: you have to have some way of getting people to smell ethical smoke, to understand that they're dealing with some kind of potentially high risk thing, or potentially using high risk data, and then a means by which they can contact the appropriate risk board or person, whatever it is, to get guidance on how to use this thing appropriately.
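To make the inventory-plus-escalation idea concrete, here's a minimal, purely hypothetical sketch in Python. The risk taxonomy, the names, and the routing rule are all invented for illustration; a real program would plug into whatever governance structures the organization already has.

```python
# Hypothetical sketch: a generative AI use-case inventory with an
# escalation rule, so high-risk entries get routed to a risk board.
from dataclasses import dataclass, field

# Assumed (illustrative) taxonomy of data types that count as high risk.
HIGH_RISK_DATA = {"health", "financial", "hr"}

@dataclass
class UseCase:
    owner: str            # e.g. "Bob (HR)"
    description: str
    data_types: set = field(default_factory=set)

    @property
    def high_risk(self) -> bool:
        # A use case is high risk if it touches any sensitive data type.
        return bool(self.data_types & HIGH_RISK_DATA)

class Inventory:
    def __init__(self):
        self.entries = []    # everything registered, org-wide
        self.escalated = []  # the subset flagged for the risk board

    def register(self, case: UseCase) -> str:
        self.entries.append(case)
        if case.high_risk:
            self.escalated.append(case)  # "smell ethical smoke" -> escalate
            return "escalated to risk board"
        return "approved for self-service use"

inv = Inventory()
print(inv.register(UseCase("Bob (HR)", "summarize exit interviews", {"hr"})))
print(inv.register(UseCase("Ana (Ops)", "draft shipping updates", {"logistics"})))
```

The point of the sketch is the process, not the code: every use gets recorded, and the risky subset automatically reaches someone with the authority to give guidance.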
Richie Cotton: I like that you said that people need some kind of training around using AI. Are there any particular teams or roles that you think this training is particularly important for?
Reid Blackman: No. This is the thing with generative AI: it's for everyone. HR, marketing, operations, product teams, sales teams. It's everybody. And so at least some kind of generative AI ethical risks 101 should be deployed enterprise-wide, so everyone gets it. It's not role specific, because literally any role in your organization can use it for something. Again, that's different with non-generative AI. There, okay, you do need specific training for, say, data scientists; your data scientists need to go an extra level deeper. So there's AI ethical risks 101, and then there's ethical risks 201, and 201 is
customized on a role-by-role basis, or at least department-by-department or business-unit-by-business-unit basis. But no, for generative AI, it's everybody.
Richie Cotton: Training for everyone, I like that. So, one thing we've not really talked about, though you've mentioned it a bit, is data privacy. I think this is perhaps one of the biggest concerns, especially with generative AI: someone's going to post some sensitive data. Can you give me a bit of an overview of the different ways data privacy can go wrong with AI?
Reid Blackman: Okay, so here are a couple of ways. Number one, you use private data, let's not even say IP yet, just people's private data, maybe health data, for instance, in order to train your AI. So you've acquired and used that data in a way that violates their privacy. That's one way.
A second way, related to privacy, is the IP stuff. You use texts by authors, images by artists, and you use that to train your generative AI. So now you've got concerns there. Another way: a lot of these things can be used as chatbots. They interact with people, the people put certain kinds of information into them, and they don't realize that data is then being collected and used for various other purposes.
So you collect new data with the chatbot, and the mere possession and use of it can itself constitute a violation of privacy.
Richie Cotton: Yeah, you've got the personal information and then you've got the IP risks. And are there any particular ways you can go about mitigating these risks?

Reid Blackman: I'll say one other thing first: the data might get leaked. I didn't mention that. You might share data with a generative AI provider, let's say an OpenAI or Microsoft or Google or whatever, and that might get leaked in various ways. This happened to OpenAI a few months back, where people could see other people's chats, their chat history (not the whole thing, I don't believe, but some of it), or saw payment information. So the other issue is that there might be nothing intentionally nefarious going on, but there might be negligent handling of the data.
As for things to do, it's all going to vary. Some people are trying to come up with ways of, for instance, sending data to a third-party vendor like an OpenAI in such a way that the third party, the LLM provider, can't see the data. I don't want to say it's hashed, but it's encrypted or scrambled in some fashion or other.
For instance, I was just at the Ai4 conference last week and met a company where you could type whatever you want into the LLM interface on your own premises, and it gets rewritten somehow. It scrambles the words so they become gibberish, that gets sent out to an OpenAI or an Anthropic or a Microsoft or a Google, then that LLM company generates the response and sends the answer back, and so
all that company sees is the answer; they don't see the question that was asked. So if there was any personally identifiable information or private data in that exchange, OpenAI or Google or whatever can't see it, because it was scrambled before it got there. That's one interesting tactic. That company is, I believe, called Portopia. Pretty cool stuff.
I don't have any financial connection to them, just to be clear, I thought it was a cool demonstration.
Richie Cotton: Yeah. So really it's about not passing your sensitive data along directly. It needs to be scrambled or encrypted in some way before any other company sees it.
Reid Blackman: Yeah. Or you could have an LLM, a large language model, on your own premises, on your own servers, and it just never leaves. That's a different way of doing it, right? If you're not sending it out, then of course you're more likely not to share it with people accidentally. So there are different ways of doing this, but it's also just early days.
There are no best practices for that yet.
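For illustration only, the scrambling trick Reid describes can be sketched as local pseudonymization: obvious PII is swapped for placeholder tokens before the prompt leaves your premises, and the placeholders are swapped back in the provider's answer. This is not Portopia's actual method, which wasn't described in detail, and the regex patterns here are deliberately crude assumptions; real systems would use much more robust PII detection.

```python
import re

def scramble(prompt: str):
    """Replace emails and naive 'First Last' names with placeholder tokens."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)  # remember original, on-premises only
        return token
    scrubbed = re.sub(r"[\w.]+@[\w.]+\w", repl, prompt)               # crude email pattern
    scrubbed = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", repl, scrubbed)  # crude name pattern
    return scrubbed, mapping

def unscramble(text: str, mapping) -> str:
    """Restore the original PII in the provider's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

scrubbed, mapping = scramble("please contact Jane Doe at jane@acme.com about the claim")
print(scrubbed)  # please contact <PII_1> at <PII_0> about the claim
# ...send `scrubbed` to the external LLM; it never sees the real identifiers...
answer = unscramble("Drafted a note to <PII_1>.", mapping)
print(answer)    # Drafted a note to Jane Doe.
```

The key property is that the `mapping` dictionary never leaves your own servers, so the external provider only ever sees the placeholder tokens.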
Richie Cotton: Related to the idea of privacy is transparency of models. You mentioned that sometimes you can have a black box model that no one really understands, or maybe you have some kind of easily interpretable or explainable model. Can you talk about when that's important?
Reid Blackman: There's some controversy here. I probably occupy a relatively unpopular position within the ethics community, broadly. So first of all, I want to distinguish between explainability and transparency. Explainability is about what's going on between the inputs and the outputs.
So you input some data, like here is this person's medical profile, or here's their financial history, and it outputs something like: this person is going to develop diabetes, or this person is going to default on a mortgage, don't give them credit. How did it arrive at that quote-unquote decision?
If you can't explain it, because the pattern it's looking at is too mathematically complex to understand, you have a black box model, an unexplainable model. You might go around telling everyone: hey everyone, we've got a black box model. That's being very transparent about having a black box model.
Okay, so that's transparency. On the other hand, you might have a model that you totally understand: we give it these inputs, it engages in these calculations, and so it arrives at this output, so I can explain to you exactly how it arrives at its quote-unquote decision. But shh, don't tell anyone we've got this clear box or glass box model.
That would be high explainability and low transparency. So transparency and explainability come apart. There are a lot of people who think we shouldn't have black box models in high stakes cases: it's too high risk, we don't understand how the thing works, it's not safe to use. That's one line. Another line is: look, as long as you can show that it's sufficiently reliable, that it gives outputs that maybe even perform better than, say, human judgment or explainable models.
Then arguably it's ethically permissible to use it. So for instance, let's just stipulate the best doctor is 80 percent accurate when it comes to cancer diagnoses. I'm completely making that up; I don't know what the stat is, but let's just say 80 percent. And let's suppose you've got a black box model that's 99 percent accurate.
Is it ethically permissible to use the black box model, especially if you have the informed consent of the patient? Probably.
Richie Cotton: So are there any cases where it's the other way around, and you would say, okay, we've got something super high stakes and we really shouldn't use a black box model here?
Reid Blackman: There's one kind of case that gives me pause. One thing you might think about is AI in the criminal justice system. Something we're supposed to get in the criminal justice system is what's called procedural justice: in order for the outcome to be considered fair, the procedure by which the outcome was arrived at has to be fair.
That's why we have things like trial by jury. The jury has to engage in this kind of deliberation; they're not allowed to take this into consideration, but they can take that into consideration, et cetera, and there's case law to guide them. The judge didn't do anything to bias the jury, and so on.
So there are ways of ensuring a more or less fair procedure to increase the probability of a true outcome. But we take the outcome to be fair on the condition that the procedure is fair. So you might think that in some cases we need to have a fair procedure, and that's the best we can get,
because we don't have the ground truth, if you like; you don't really know if they're innocent or guilty. We just have this way of finding them guilty or not guilty: the procedure. Okay. But now suppose you have a black box model in that mix, and part of the procedure occurs, if you like, between the inputs and the outputs of an AI. If you can't explain the procedures of the AI, then you can't assess whether the procedure was fair or not.
And so that looks, potentially anyway, like a case in which a black box model would be unacceptable, because it would make it impossible to assess whether the procedures were fair or not. That's one case. In some cases we don't care, right? With cancer diagnosing, there's a way in which I don't really care.
I don't care if it's a magic box. I don't care if it's a genie who just divines the truth. If they get it right, that's all I care about. But in the criminal justice system, we care a lot about the procedure and the fairness of the procedure. So arguably, in that case, procedure matters and black boxes should be unacceptable.
That's debatable too, but I find it at least a plausible line of thought.
Richie Cotton: That sounds a lot like the difference between statisticians and machine learning scientists, where the statisticians want to know how everything works and do inference, and the machine learning scientists just care about: do I get good predictions or not?
Reid Blackman: And it might be the case that sometimes you want a machine learning engineer and sometimes you want a statistician.
Richie Cotton: Absolutely. So, that's the explainability side of things. Are there any cases where you either really care about transparency, or you might say, okay, this isn't important?
Reid Blackman: Yeah, this is a good question. Look at any AI ethics statement: everyone's going to list transparency, which is silly, because no one is going to be transparent about everything, and no one's going to be transparent about nothing. You need to be transparent about the right thing, at the right time, in the right way,
which is relatively uninformative. I think there are probably cases in which, for high stakes AI, say when life and death is on the line, you need to be transparent enough that external stakeholders, let's say auditors in particular, and regulators, can look to see what kinds of decisions were made throughout the AI life cycle, and whether they're defensible decisions.
That would be a case in which there needs to be sufficient transparency.
Richie Cotton: Okay, so if you need to really understand how this thing's working, then you're going to have to have some level of transparency there.
Reid Blackman: Yeah. Sometimes you may or may not know how the models operate. You might not know what's going on between the inputs and the outputs. But you could always know, at least in principle, what kinds of decisions were made before the inputs and after the outputs.
There are lots of decisions that data scientists, product managers, and others make across the AI life cycle. Those things could be more or less transparent. And look, you're not going to give away your IP, so don't be transparent about that, all else equal.
But there are going to be some things that you need to be transparent about, so that we, the appropriate authorities, can see whether your ethical house is in order.
Richie Cotton: Sometimes it's not clear whether your data is going to be used by the AI, and sometimes it's unclear what happens when you're giving permission for AI to make a decision about your life. When do you think that's important?
Reid Blackman: Different people will say different things to this. The go-to line is that in all cases, people should have control over their data. That's a standard line: obviously in the high risk cases, but also even in low risk cases, because it's my data, or something along those lines. That's a pretty standard line to hear.
I'm not convinced of that position myself. I did a podcast on this with someone who does think that, Carissa Véliz at Oxford. We had a really good discussion around this point, where I tried to push back. Look, there are going to be some cases where there's highly sensitive information about you where, plausibly, you should be allowed to say: no, you can't have that data.
Health information is one of them, although I'm not even completely convinced of that. I'm inclined to think: let's say there's a global pandemic, and the appropriate authorities have access to certain health data of yours that's anonymized or aggregated, and it's used for certain kinds of analyses, say to make certain kinds of predictions or build a cure or a vaccine or whatever.
Then it seems to me: I agree that the individual has a vested interest in having control over their data, but the global community has a very strong interest in not dying. And so, plausibly, that individual's preference is ethically overrulable. That's going to be controversial though, obviously.
So I guess I think there are many cases where it would be nice, at least ethically nice, if people had control over what data different organizations had about them. I'm not sure that in all cases it's ethically required. And then it's just going to be a case-by-case basis. I'm not sure there's an obvious way of settling it, outside of things like health information held by corporations, let's say.
Richie Cotton: Or I guess financial information is maybe another example. For organizations wanting to get better at doing responsible AI, what are some of the most common mistakes that organizations make that they ought to avoid?
Reid Blackman: They rush to implementation. They'll come up with some really high level ethics statement: we're for fairness, we're for privacy, we're for transparency, we're for accountability. And it means nothing, because it's so high level, so generic. And then they say, okay, how do we implement this?
First of all, you don't have anything there. You have nothing. You could have written any words on that paper and it would mean approximately the same thing. The thing organizations don't do, call it what you like, a gap analysis, a risk analysis, a feasibility analysis, is this: they don't get a good grip on what their organizational standards are, and they don't get a good grip on where their organization stands relative to those standards. So one of the opening moves we make with almost all of our clients is that risk assessment or gap analysis. What do your current governance structures look like? What do your policies look like? Personnel, training, onboarding, tools, workflow. What does the organization look like right now, what can be leveraged, and what needs to be avoided, in building out
a customized ethical risk framework that suits your organization as it stands. If you don't do that, organizations don't know what to do. They just start doing stuff, and then they find that, oh, actually, this doesn't play nicely with other parts of the organization.
For instance: let's just start building risk matrices and risk categories for AI ethics, and that's going to be our thing. And then you don't talk to the people in risk about it, and they're like, whoa, how does this fit with our enterprise risk categories? And if it doesn't, now we've got decisions to make. Brakes get slammed.
Risk wants to get more involved. It gets complicated, confused, and then there are political factions. Better to do that assessment early on, involve risk and compliance and cyber and legal, and get everyone on the same page about what we're trying to accomplish. Then things move along a lot more effectively, a lot more efficiently.
Richie Cotton: Okay. So it sounds like there's maybe a specific order you need to do things in. You need to set up some sort of committee and get these teams involved to begin with. Is that what you're saying?
Reid Blackman: It varies. Usually we're working with a working group. I wouldn't call it a committee; it's not an ethics committee or something along those lines. Most organizations need some kind of AI ethical risk or responsible AI working group. Often that smaller working group reports to a larger ethical risk steering committee.
But yeah, you work with a handful of people: risk, compliance, legal, cyber, the chief data officer or someone under that person's charge. You're working with them to do that gap analysis or risk analysis. That's the way to start. Look, it also depends on how fast an organization wants to move.
Sometimes we're simultaneously developing some learning and development, although that has to be done a little bit after the assessment, because you want to know what existing learning and development resources you can leverage. And sometimes, we're working with one client now, a multinational, that wants to retroactively do risk assessments of AI models they already have out there.
So they want to do that first: let's just do a quick and dirty pass on what's already out there, flag certain things as high risk, do a deeper dive on those, and get that up and running, because it's stuff that's already out there. We want to make sure we're good, while at the same time building our more systematic, comprehensive approach to AI ethical risk.
Richie Cotton: And are there any sort of process changes you think organizations need to make in general to do this right?
Reid Blackman: If we're talking at the level of process, one thing that's crucial is figuring out the processes by which you engage in those risk analyses at each stage of the AI life cycle. It's not just something you plug in at the end, just before you deploy, because if you do that, it's really hard to go back and fix things. The contract says it's due at this date, it's time to go, you find some big risk, and there's not much you can do about it, except ship a bad product or go back to the drawing board and deliver something late, which nobody likes to do, obviously. So: build it into processes throughout the life cycle, both the processes that teams engage in when they're developing the model and also the processes between teams, like data collectors versus data scientists, for instance, when those differ. Make sure there are processes that facilitate appropriate communication and transfer of information from one team to another as the AI, or parts of the AI, get passed on from team to team. That stuff is really crucial.
Richie Cotton: Oh, yeah. Between-team processes, that's always the hard thing. It's like trying to get different managers to agree on things. So are there any tools that can help you out with this? Are there any sort of responsible AI tools?
Reid Blackman: There are tools for various kinds of things, like: let's do a quantitative analysis of the bias in these models, or quantitative analyses of potential privacy violations. There are also certain kinds of platforms that allegedly help with this kind of thing.
I haven't seen anything yet that I find particularly impressive and scalable. That's one of the big issues: scalability of these things. Oh, sorry, that's not totally true. Some of those tools that do, for instance, quantitative analyses of whether the model is biased,
those are easily scalable. Governance platforms, I haven't seen a good one of those yet. Part of the problem is, among other things, a constantly evolving regulatory landscape; checking for compliance with those regulations is not something that can just be automated at scale.
I mean, you need to have some people involved at some point.
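As a toy example of the kind of quantitative bias check such tools automate at scale, here's a sketch of the disparate impact ratio (the "four-fifths rule" heuristic): each group's selection rate divided by the highest group's rate, with ratios below 0.8 flagged. The data, groups, and threshold are purely illustrative.

```python
# Disparate impact ratio: a group's selection rate divided by the
# selection rate of the most-selected group. The common (US EEOC)
# rule of thumb flags ratios below 0.8.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical model decisions: (group, got_interview)
data = [("A", True)] * 8 + [("A", False)] * 2 + \
       [("B", True)] * 4 + [("B", False)] * 6

ratios = disparate_impact(data)
print(ratios)  # {'A': 1.0, 'B': 0.5} -> group B falls below the 0.8 flag
```

This is exactly the sort of check that scales easily; the point Reid makes is that what doesn't scale is judging whether a flagged ratio is acceptable in context, which still needs people.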
I'm not saying organizations shouldn't use these tools. I am saying that there are lots of cases where organizations think, oh, we'll just get a tool and that'll fix it. And that's never the case. A tool is good, but to speak to the earlier point about accountability, a tool doesn't work well if it doesn't land in a fruitful environment in which it's going to be deployed effectively.
Richie Cotton: So you can't just buy stuff and have all your problems magically solved.
Reid Blackman: Yeah. You buy the software, maybe you train a couple of people on it, but they don't train the others, and then they leave. There's no specific place in the process where they're supposed to use that tool. There's no process for auditing whether that tool is being used, and whether it's being used well, effectively, according to the proper instructions.
So companies spend hundreds of thousands of dollars on these tools, but in my experience they don't really get used, or used well, because they haven't landed in an environment in which data scientists and others actually have the support to use them.
Richie Cotton: So a bit of training and the right processes are needed as well.
Reid Blackman: And the last thing I'll say is that it also gives some clients, actually quite a lot of corporations, a false sense of security. Like, oh yeah, we've got a tool for that, we're good. And I don't try to convince them otherwise up front, because I'm not going to convince them that actually maybe things aren't so good.
But whenever we do these assessments, these risk analyses, one thing we always find is: it's not as buttoned up as you think. The tool is good at what it does, but it leaves lots of gaps open, whereas most companies are like, oh yeah, we've got this tool, it looks at bias, so we have the bias thing down.
And we're like: no, you don't. You've got a way of dealing with it. It's a good way, but it's not a comprehensive way. It's not sufficient.
Richie Cotton: Okay, so if tools aren't the answer, are there any ethical AI frameworks that you think are worth looking into?
Reid Blackman: NIST has, I think they call it, an AI risk management framework, something along those lines. I was impressed by it. I really hate the word "framework," because sometimes a framework means a pie chart with some buzzwords in it, and for others it's more robust.
So I was impressed by the NIST framework. I read it and thought: yeah, the people who wrote this know what they're talking about. There's something great about it. But, number one, I understood that NIST framework because I'm already in the space. If you're not already breathing this stuff day in, day out, most organizations will read that framework and not get it.
There's too much detail, too much nuance and subtlety, too much going on. You'd have to study the hell out of that thing. If you're not in the space already, if you don't know this stuff already, you'd have to put in dozens and dozens of hours to really understand what's going on.
That's one issue. So it's not that it's not true, or not good in the sense of being academically rigorous, and I say that as a former academic. It's just not suitable, I don't think, for the vast majority of organizations, because, one, you're not going to have anyone in your organization who's going to study it the way it needs to be studied.
And number two, again, I've said this a thousand times already: the devil's in the details. It's a generic framework. It's necessarily generic, because it's meant to apply to literally every organization, at least every larger one. But that means it's not just plug and play. In order to make it work for your particular organization, customization has to be done.
And you can't do the customization without other things, like a gap and feasibility analysis. You have to understand what your organization looks like, so you can see how to tweak that generic framework to fit how your organization actually operates, in a way that's commensurate with your organization's ethical risk appetite.
Richie Cotton: So it does sound like you maybe just need to feed the NIST framework into ChatGPT and say, "Explain this to me like I'm a 10-year-old."
Reid Blackman: Yeah, I mean, you'd probably understand it better. But of course, at best, if you implement it perfectly, you'll have the framework as a 10-year-old would implement it.
Richie Cotton: A 10-year-old would want more dinosaurs or...
Reid Blackman: Yeah.
Richie Cotton: All right. Do you have any final advice for organizations wanting to get better at doing AI responsibly?
Reid Blackman: I mean, the main thing is that I think it's very doable. Some people think, oh, it's all abstract, it's all squishy, no one knows what to do, we need more research. I don't think that. People who have been in the space, and I'm not just talking about myself, I think we know how to do it quite well.
There are always tweaks. There's always improvement to be made. But we know what to do. The first thing to do, honestly, and what I've found lately to be very helpful, is for organizations to just start with that assessment and get a grip on: what are we dealing with here? Where does our organization stand?
And then they can decide whether or not they actually want to put in the investment. And they should see it as an investment, not just of money, but of time. It takes time to do this sort of thing. It takes the time of people in your organization. If you're really building an AI ethical risk strategy and implementing it enterprise-wide, that's a decent amount of time.
So if you know you're going to do it, great. But if you're not sure, particularly given generative AI, though not only because of that, doing some kind of assessment to make sure you understand where you stand, and doing something specifically around generative AI at a minimum, I think that's just a must-do at this point.
Richie Cotton: That's a great answer, but I also hate that patience is the solution. I've always struggled with that one.
Reid Blackman: Well, it is and it isn't. I can't stand it when people say, well, we're going to do some more research on this, we need to do some more digging, we need to really understand it. No, that's nonsense. I'm very impatient with that. That's a level of patience I find intolerable, because whether it's sincere or faux contemplation, it's no longer appropriate at this point. We can get the ball rolling. We know what's going on. We know what to do. I do recommend patience in this sense: look, we're talking about an enterprise-wide strategy, and these are big organizations, tens or hundreds of thousands of employees.
It's not a tiny lift. So a lot of our clients don't just plop it down, as it were; it gets phased in. Because if you don't phase it in, it's overwhelming, it's too much. So you figure out how to phase it in in a way that's sustainable.
If you try to do too big a lift right out of the gate, nothing's going to happen. So, some patience.
Richie Cotton: All right. Just enough patience to do things right. Okay. Nice.
Reid Blackman: right.
Richie Cotton: All right. Super. Thank you very much for your time. Really great show.
Reid Blackman: Yes, my pleasure.