
Interpretable Machine Learning

Serg Masis talks about the different challenges affecting model interpretability in machine learning, how bias can produce harmful outcomes in machine learning systems and the different types of technical and non-technical solutions to tackling bias.
Aug 1, 2022

Guest
Serg Masis

Serg Masis is a Climate & Agronomic Data Scientist at Syngenta and the author of the book Interpretable Machine Learning with Python. Serg has developed his expertise in interpretable machine learning, explainable AI, behavioral economics, causal inference, and responsible/ethical AI throughout his career, which spans web and software development, mobile app development, systems analysis, machine learning engineering, and more.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations, and about the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Takeaways

1

The three main challenges in interpretable machine learning are fairness, accountability, and transparency.

2

The best way to assess risk is to view machine learning models as systems with different factors that interact with each other. This prioritizes experimentation, not just inference or prediction, to determine how different aspects of the model impact each other and the outcome.

3

As machine learning systems become more efficient and take less time to develop, model interpretability and improvement will become more central to the data scientist role.

Key Quotes

The three challenges of interpretable machine learning are fairness, accountability, and transparency. First, fairness addresses discernable biases and discrimination in a model. Second, accountability addresses robustness, consistency, traceability, and privacy, ensuring that models can be relied on over time. Lastly, transparency addresses explainability and understanding of how decisions were made and how the model connects inputs to outputs. You cannot prove a model is robust or fair without transparency, and an explainable model that lacks fairness or reliability is a failed model.

As AI is adopted in different ways, many are so worried about the potential dangers that they fail to identify the potential benefits. People treated the internet similarly, and didn't realize how data could be exploited, identities stolen, or misinformation spread. But little by little, everything became more robust, such as with the adoption of SSL. Early in the internet's adoption, people started working to protect the internet so it would remain useful for those who do not want to use it for dangerous means, and while people do still use it inappropriately, it has also enabled amazing societal transformation at a rate exponentially faster than anything seen before it.

Transcript

Adel Nehme: Hello everyone. This is Adel, data science educator and evangelist at DataCamp. A major challenge for both practitioners and organizations creating value with machine learning is model interpretability and explainability. We've seen a lot of examples of machine learning disasters over the past few years, whether it's recruiting systems rejecting candidates based on race or gender, credit scoring systems penalizing disadvantaged groups, and much more.

This is why I'm excited to have Serg Masis on today's podcast. Serg is the author of Interpretable Machine Learning with Python, and for the last decade, he has been at the confluence of internet application development and analytics. Serg is a true polymath. Currently, he's a climate and agronomic data scientist at Syngenta, a leading agribusiness company with a mission to improve global food security.

Before that role, he co-founded a search engine startup incubated by Harvard Innovation Labs, and he was the proud owner of a bubble tea shop, and much more. Throughout our chat, we spoke about the different machine learning interpretability challenges data scientists face, the different techniques at their disposal to tackle machine learning interpretability, how to think about bias in data, and much, much more. If you enjoy today's conversation, make sure to subscribe and rate the show, but only if you liked it. Now onto today's episode. Serg, it's great to have you.

Serg Masis:  It's nice to be here.

Adel Nehme: Awesome. I'm very excited to speak with you about interpretable machine learning, your book on it, bias in data, and all of that fun stuff.

But before, can you give us a bit of a background about yourself? 

Serg Masis: Well, at the moment, I'm a data scientist in agriculture. Yeah, there is such a thing. A lot of people figure data scientists only work at high-tech companies. I have a background in entrepreneurship. I created a few startups, and for the longest time, the way I defined myself was as a web app person, someone that did all kinds of web-related things, whether on websites or on mobile devices.

Yeah, that's what I did. I went from being a builder to also having more of a managerial role as well. Basically, that sums it up, but I've done a bunch of other stuff. Back when I was deciding what to study, I was interested in computers for sure, but also in the application of them: I had an interest in graphics, graphic design, 3D modeling.

Anything in that space, which now I get to work with from the other side, which is computer vision, and that's also very interesting. Computer graphics and computer vision are like two sides of the same coin. I also once owned a bubble tea shop. Something to note about all the roles I've ever worked in, even the bubble tea shop, is that data was always there.

Data was there throughout my journey. I just did not know it was my true love. It's like this story where your true love is hidden in plain sight and you don't realize it. It was always in the background, and it's only in the last seven years that I've brought it into the foreground of what I do.

Importance of Interpreting Machine Learning

Adel Nehme: That's awesome. I would love to talk about the bubble tea experience, but let's talk about interpretable machine learning, given this is the topic of today's episode. I think there's some understanding today within the field of why machine learning models need to be understood and interpreted. I'd love to set the stage for today's conversation by first understanding the why behind interpreting machine learning. So I'd love to understand, in your own words, the motivations behind why you wrote the book and why this topic is so important.

Serg Masis: Fundamentally, machine learning comes to solve incomplete problems, so we shouldn't be surprised when the solutions are also incomplete. We tend to think, okay, we achieved a high level of predictive performance, and therefore the model is ready to be out in the world and it's just going to work like a charm.

And it doesn't. That's just the nature of things; that's the nature of machine learning. And if it were that simple, if it was just like any other software, we wouldn't solve it with machine learning; we have procedural programming, if statements, and so on, and that would be our solution. The other reason why it's important is for ethical reasons.

And unlike other technologies before it, this is a kind of technology that replaces something that has become very human in a sense. It's not like other animals don't make decisions, but the kind of decisions we make have a level of, for lack of a better word, intelligence that aims at foreseeing well beyond our immediate needs.

So we're not just thinking of the next meal; we're thinking of grander things, the organization of society, supply chains, all sorts of things. And so there are a lot of reasons why we would use machine learning, and in a way, these models aim to replace our cognition. So the question is how to trust a model the same way we trust a human.

I would dare to say there are a lot of reasons not to trust a human in the first place, but ultimately we want to trust something that we can understand. And if it's a black box, why are we taking orders from a black box? That's the grand scheme of why to use interpretability. The reason that brought me to the book is I became aware of the topic when I had a startup back in 2016, 2017, and so on.

I became frustrated that I couldn't debug my own models as someone that had programmed for so long. It was just so strange that there was this thing blocking me, you know, like if I wanted to figure out why isn't this working. It would, not always, but a lot of times, point to the model, and it was like, okay, why is the model doing this?

And at the time there were very few resources on it, at least for practitioners; everything was in academic circles, and there wasn't a clear understanding for me, coming from outside, of what it all meant. So as someone that's obsessed with decision making, I found this intriguing and concerning.

It felt weird to be promoting a technology I couldn't understand, and that got me into a rabbit hole. Then I started to learn all the terminology and read all the papers. Eventually some books came out. My book was the third book for practitioners on the subject to come out. I thought, how late in the game does it have to be for it to only be the third book on the subject?

So that's it.

Difference between Explainable AI and Interpretable AI

Adel Nehme: That's really great. I'm really excited to talk about the book and underline the challenges and solutions that you discuss for a lot of the interpretability challenges data scientists face. But before that, I want to harp on some of the terms and the lingo that is used in the industry to talk about this problem.

Last year, we had Maria Luciana Axente, head of responsible AI at PwC, come on the show. One of the topics we discussed was the distinction between the different terms, such as responsible AI, explainable AI, interpretable AI, and ethical AI. I think a small reason why we have these new terms is that they're probably driven by the comms departments of many organizations working on the problem.

But I'd love to understand from you how you view the differences and the overlaps between these terms.

Serg Masis: I find there's a lot of debate and confusion around the terms, inside and outside of industry. First, let's separate ethical AI: it's more of a mission to inject ethical principles and human values into AI.

So it's an ideal. And yeah, it has a lot of parts; I'm not going to diminish the contribution, but it's not necessarily driven by really clear objectives, because everything is vague; there's no perfect recipe for it. There's also a disconnect between the people working on the ethical side and the people that are actively practicing in the field. The rest of the terms, the ones that don't have ethics or fairness in them, are related to the imperfect application of that value, that vision, because there are many reasons to interpret machine learning models that don't have to do with ethics; it's also just good business practice. So then you have explainable AI.

That's another term that's used a lot, or XAI, people like to use the X, and then interpretable machine learning or interpretable AI. They're used interchangeably in industry. In academia, they tend to use interpretable to refer to the white-box models and explainable to refer to the black-box models, or vice versa. I'm of the opposite camp.

I tend to think explainable is more of a confident term: if you can explain something, you can explain the whole thing, backwards and forwards. So I think it's best to use it for the highly interpretable, intrinsically interpretable models, linear regression and so on, because you know how it was made, you know how everything was made, you can extract all the coefficients and so on.

Whereas interpretation is something that is perfectly okay to use with something that you can't understand completely. We do it all the time; the whole field of statistics is based on it. How many things are just shown through a chart? Nobody claims to know every single thing about, say, the economy, but they can, on average, explain a trend.

And so it's an interpretation; it's not an explanation in the sense of being a hundred percent certain. Responsible AI is a newer term. I think it means the same thing that explainable AI and interpretable AI mean, except without all the baggage. I think there's a lot of baggage with explainable AI and interpretable machine learning from where they came from, and from the debate as far as what they mean.

Responsible kind of takes care of that, because you expect something that's responsible to also have some ethics in it, but it's not necessarily entirely ethical. And it doesn't have the semantic confusion that the terms explainable and interpretable have, because if I ask you to interpret something, you are explaining it.

So that kind of creates this whole mess. I actually prefer responsible, but I don't know how much it's going to catch on. So that's what I think about those terms.

Challenges of Interpreting Machine Learning

Adel Nehme: That's great. And I appreciate that holistic definition. So let's start talking about the book. Let's first talk about the challenges in interpretable machine learning.

One of the chapters of the book outlines these challenges. Do you mind walking us through these challenges and how exactly they affect the model?

Serg Masis: In interpretability, the three concepts I think you're referring to are fairness, accountability, and transparency. Fairness is what connects to things like justice, equity, and inclusion.

In other words, making sure models don't have or aren't adding a discernible bias or discrimination. Accountability, on the other hand, is what connects to things like robustness, consistency, provenance, traceability, and privacy; in other words, making sure that models can be relied on over time and that someone or something can be held responsible should something go wrong.

And then transparency is what connects to explainability and interpretability; it's the baseline property. In other words, understanding how the decisions were made and how the model is connecting the inputs to the outputs. I tend to see it as a pyramid: transparency is at the base because you can't have the other two without it.

In the book, naturally, I focus more on this level. However, a few chapters focus on fairness and accountability, and I always speak of those as being where the focus should be. Transparency is not by any means a solved problem, but the other two are more complex. So we're tackling the low-hanging fruit, and that's sometimes very convenient, but it's not necessarily always the best choice, because it doesn't matter if you can explain the model if it's not robust and it's not fair.

Adel Nehme: Okay, that's awesome. So let's talk about that explainability and interpretability in machine learning. Do you mind walking us through the main challenges practitioners face and how exactly they affect model interpretability?

Serg Masis: When it comes to interpretable machine learning, model interpretability is impacted by three things that are present both in the data and the model: non-linearity, non-monotonicity, and interaction effects.

These elements add complexity, and as I said, they're present everywhere. The most effective models match the nature of the data, so the nature of the data and the model are very highly connected. If the data is linear, it makes sense that you use a linear model. That's why linear regression has all these assumptions baked in.

And if the shoe fits, why not? The same goes for neural networks: why not use a neural network? That's why I think neural networks' forte is unstructured data, because unstructured data has the properties that you would expect: the complexity, the non-linearity, and so on. One of the trickiest is the interaction effects, because feature independence is an unrealistic assumption.

In other words, more likely than not, multicollinearity is present in the data. It's really hard to interpret models where many features are acting simultaneously to yield an outcome in counterintuitive and contradictory ways. It's not as simple as ceteris paribus, that is, all things remaining equal, especially when you have large amounts of data and many features.

So those are the challenges in machine learning. These things are often discussed in statistics, but when a lot of statistics was developed, there wasn't this idea that data would explode the way it has in volume and velocity, in all the different ways we have come to call big data now.
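
To make the interaction-effect point concrete, here is a minimal sketch, not from the book or the episode, in which the target depends only on the product of two features. Individually each feature looks unrelated to the outcome and a linear model fails, while a model flexible enough to capture the interaction succeeds but is harder to interpret. The data is synthetic and purely illustrative.

```python
# Interaction effects hide individual feature relevance: y depends on x1 * x2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X[:, 0] * X[:, 1] + rng.normal(0, 0.05, size=2000)  # pure interaction effect

# Individually, each feature is nearly uncorrelated with the target
print("corr(x1, y):", round(np.corrcoef(X[:, 0], y)[0, 1], 3))
print("corr(x2, y):", round(np.corrcoef(X[:, 1], y)[0, 1], 3))

# A linear model, which assumes additive independent effects, fails...
print("Linear R^2:", round(LinearRegression().fit(X, y).score(X, y), 3))
# ...while a model that can capture the interaction does well,
# at the cost of being harder to interpret.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Forest R^2:", round(forest.score(X, y), 3))
```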

Bias in Interpreting Machine Learning

Adel Nehme: Okay, that is awesome. And I'm very excited to discuss and outline with you the solutions to these challenges. But first, let's talk about another big challenge that is really important within the realm of interpretable machine learning, and that is bias. You know, data is arguably one of the most important assets in generating interpretable and responsible machine learning systems, and bias is a big problem when it comes to data. So can you outline where bias in data comes from?

Serg Masis: Bias can come from two sources, and they trickle down. It can either come from the data generation process or from the truth, which is where the data generation process connects to the data itself. Many of the biases that infect the data generation process are sampling bias, coverage bias, participation bias, and measurement bias.

For instance, an old-school example is if you do a survey by phone to see what percentage of the population approves of a president. The problem is that your data is not likely to be representative of the population, because not everybody is reachable by phone and not everybody would answer an unknown phone number, not to mention that those who respond and are willing to accept the survey are certain kinds of people that may not reflect the view of the general population.

So that's going to be a problem. That's just an example of the kind of bias that infects the data generation process. There are also data entry errors; as I said, they're random if you're lucky, and those are present all over the place.

As for the truth, the data generation process may be unbiased, but it captures what I call ugly truths: instances of real discriminatory behavior. There are also fake truths, which is when someone does something deceitful to trick our IT systems. This happens all the time, but a lot of times it goes under the radar.

Maybe it's not done to sabotage a system; maybe it's just done to get some kind of benefit that the system provides. And then there are changing truths, which is when our data generation process captures data at one point in time, but even a short time afterwards, reality might be different.

Our data generation process might be biased in that sense.

Adel Nehme: Okay, that's really great. And on that last point around ugly truths, given that these ugly truths are unfortunately rooted in biases we humans produce in the data generating process, will we be able to solve this type of bias in machine learning through technical solutions?

Serg Masis: Machine learning is all about predicting the future, but there's this quote by the founder of Atari, and it's very clever: the best way to predict the future is to change it. And there's absolutely no shame in that. If you actually bias your models to counteract the bias present in the data, you'll eventually improve the outcomes.

So that's something that's part of the loop. People don't realize that they can do that; a lot of models have that property, that ability. Say you have a pricing model for, you know, real estate, and you realize, what if I make that pricing model so that it actually counteracts the bias people already have with prices, so they don't go out on a frenzy and buy all the available homes at the worst possible time?

You can do all kinds of things like that. Another example is if you have a biased data set that is about, say, criminal recidivism, and it's about detecting who could possibly go back to jail, who could commit a crime again after being in jail. You could argue we shouldn't have models do that, we should have human judges do that, but then you have to see, okay, well, how good are human judges?

If we can build a model that actually improves the false positive rate over humans, why not test it, and why not see how we can actually counteract the biases? You know, maybe in ways in which we benefit society. Say we see, okay, well, women are more likely not to recidivate, and they have children, and so by having them in jail, we're actually going to perpetuate some kind of endless cycle.

So why don't we bias the model in such a way that it's beneficial rather than perpetuating the bias to begin with? These are questions that I don't intend to solve. I think it's something for sociologists to get together with economists and the people working on these problems from a technical point of view. But the mindset shouldn't be let's predict the future; the mindset should be let's change the future for the better.

Evaluating the Risk of a Machine Learning Model

Adel Nehme: I really like that perspective, especially since you're taking the data generating process, the biased data generating process in the form of these ugly truths, and using it against itself.

And you're able to, as you said, engineer and change the future for the better. Harping on the book as well: one thing I love about the book is how grounded it is in many examples of interpretability in real-life machine learning use cases. Circling back to the earlier part of our conversation, where we discussed the why behind interpretable machine learning,

I think one thing that's also super important to take into account is that the degree of risk associated with a model is highly dependent on the use case, the industry, and the population affected by the model. For example, a credit risk model that is determining loan outcomes has a much larger playground for potential harm than a customer churn model which is driving retention strategies for a SaaS organization.

How do you evaluate the risk of a machine learning use case? How would you describe the spectrum of risk here for any given machine learning application?

Serg Masis: Well, I'd say that risk is determined by several factors. First of all, you want to know to what extent your algorithm and its decisions impact stakeholders, and which stakeholders' interests you want to “protect”; they're all weighted differently.

You know, one thing is the bank trying to protect its officials, or protect employees, or protect the customers. To what degree are you willing to have risk on every level? Because the goals aren't always aligned. You might think, okay, well, the short-term risk is with the bank, with its profits, but the long-term risk is that it will alienate the customers and those customers will go elsewhere.

So the idea is not to be shortsighted in the assessment of these risks. The best way to understand that is to understand it as a system, where stakeholders are not in isolation and there is not one magical metric. You might think, okay, well, I want profits to be my magical metric, profits for the next quarter, for instance.

But a lot of these things are measured on different time scales and with different actors, all interacting with each other. Once you start to see them as a system, you could develop, for instance, a causal model and try to understand how pushing one lever in one way will impact the other. But this takes an idea of experimentation, and not just inference or prediction.

So it's like taking it to the next level. My book actually has an example in that sense of how you use CATE, or conditional average treatment effects, to measure what the least risky option is for the bank and for the customer in a circumstance like that. Not all problems are like that, though. There are AI systems whose risks are elsewhere, whose risks are in misclassifications of a specific kind.

They might have instances of bias present in their model that they're not even aware of, or that they're aware of but brush off, thinking, oh, it's not a big deal. But the level of reputational damage that comes from something like that can sink the system completely, and nobody will want to use it ever.

So why would you risk that? That's something you should realize when you put a model out: at the very least, put out disclaimers, that it has a weakness with this kind of input, or that it was only meant to be used in these circumstances. And people should heed these kinds of disclaimers, because maybe an official from the company will say, oh, why don't we take this and repurpose it for that?

And then you get the sort of disaster you got in Brazil, where it was never meant to be used in that way. Then there are also risks around how robust the model is towards adversarial inputs. That's something that, as AI is being adopted in a lot of different ways, we fail to realize: how it can be gamed.

I connect it with my early experience with the internet, how naive we were with the internet early on. We didn't realize how data could be exploited or how our credit card information could be stolen. And so little by little, everything became more and more robust, first with SSL, and then, I think it was eight or nine years ago, Google came along and said, okay, websites with SSL will be featured, or will be ranked higher by our algorithm.

So now SSL is a must. But in that early journey, there was a whole level of awareness that grew: okay, this is a technology that can be used for dangerous means, so we have to protect it so it stays useful for the rest of us that are not using it like that. And I think people will become aware of this with AI too.

I just hope the outcome isn't painful for anybody, in the sense that, all of a sudden, everybody, not just celebrities, is getting deepfakes generated to game all kinds of systems, and we just won't want to use AI systems from then on. So I think a way to improve this is to gauge value in more than monetary terms.

Stop the single-metric mentality that we have not only in machine learning but in business, which is, okay, we have this metric and this is why we have to chase it, and all metrics are tied to this. I think it's something that would solve a lot of things. If we started to factor environmental problems, social problems, and other stakeholders of all kinds into our general equation of the wellbeing of a company or society, we'd be better off.

And so, yeah, it's going to take a very careful discussion of how to weigh everything, you know, like what is more important, and it's a very uncomfortable discussion, but I think it has to be done sooner or later.
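
Serg mentions using CATE, conditional average treatment effects, to compare options. As a point of reference, here is a minimal sketch of one common way CATE is estimated, a simple two-model "T-learner" on synthetic data. This is a generic illustration under assumed names and data, not the book's bank example.

```python
# Hypothetical T-learner sketch: estimate the conditional average treatment
# effect (CATE) by fitting separate outcome models on treated and control rows.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                  # customer features (hypothetical)
t = rng.integers(0, 2, size=n)               # treatment flag, e.g. offered a lower rate
# outcome depends on features, treatment, and a feature-dependent treatment effect
y = X[:, 0] + t * (0.5 + X[:, 1]) + rng.normal(0, 0.1, size=n)

m_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

# CATE per individual: predicted outcome with treatment minus without it
cate = m_treated.predict(X) - m_control.predict(X)
print("Average treatment effect (approx):", cate.mean().round(2))   # ~0.5
print("Effect for low vs high x2:",
      cate[X[:, 1] < 0].mean().round(2), cate[X[:, 1] > 0].mean().round(2))
```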

The Zillow Example

Adel Nehme: I love the holistic perspective here. And before we talk about solutions, do you wanna give maybe an explanation of the Zillow example?

Because I think it's highly illustrative of the problems that you're discussing for those who may not be aware of it.

Serg Masis: Zillow had come up with a pricing model. Hopefully I remember this correctly, because it happened many months ago. They had this pricing model that was just for the benefit of users. So users would come, and it would say, okay, the estimated value of this home is this much.

This of course had an effect on the users, because maybe it was overpriced or it was underpriced, but they seemed to think it was accurate. And this had no ramifications other than inflating or deflating a market. But then they figured, why don't we get into the game of buying and selling homes?

We have this metric, this number we can use to actually get a benefit, because we know what a home is estimated at, and we can negotiate a price based on that, and so on. Sooner or later they realized that their estimates were inflated, and it had probably been beneficial for them to be inflated, because who doesn't want to go look up their home and see, oh, it's actually, I don't know, 50,000 dollars more than I thought it was? That's good for their stakeholders, their customers, the people that come and see that. But for Zillow it was terrible, because they ended up losing a lot of money that way.

They couldn't flip the homes for what they thought they could flip them for, because they had, in a way, contributed to inflating the market. And that was very painful; they had to lay off a lot of people, because they used the model in the wrong way. They had never designed it to do that. And quants do this in finance all the time.

They don't necessarily use machine learning models for doing pricing optimization. They use all kinds of optimization methods that are purely math-based; at best, the closest thing to machine learning you'll see there is reinforcement learning. And yeah, I think it was just a bad idea.

Techniques to Drive Better Interpretability

Adel Nehme: Yeah, it's a very fascinating example. Given that, let's discuss the solutions to many of the challenges that we discussed. The book does an awesome job of breaking down a lot of different techniques available for practitioners looking to drive better interpretability. Can you walk us through these techniques and the crux of each of them as well?

Serg Masis: When people talk about interpreting models, well, everybody has interpreted a model at a very, very basic level, and that's evaluating performance. Especially once you break down the performance by cohorts, segments, and so on, you're already engaging in error analysis, and error analysis is an interpretation technique.

But beyond those traditional interpretation methods, there's also feature importance. And that's the, probably the first one people will learn and, oh, I didn't know this existed. And it allows you to quantify and rank how much each feature impacts the model. And depending on what you're dealing with, it, it may vary.

For a tabular data set it's every column, even every cell you can break it down to; for an NLP model, it depends how you tokenized it, it might be every word or even every character; and for an image, it's every pixel. That's how it's usually used. Feature summary methods, on the other hand, examine individual features and their relationship with the outcome. They include methods such as partial dependence plots and accumulated local effects, and the idea is to figure out, well, a feature might be important to the model, but how is it important? What is driving things? And so you start to see the relationship of that feature with the model. You might realize, for cardiovascular disease, as age increases, the risk increases.

So you're already seeing it in those terms: it's not just that the feature is important; at certain values it has more impact, or it has a negative or positive impact, or it actually goes up and down. And that's when we start to see, well, it's non-monotonic, because it's not going in one direction. And then feature interaction methods quantify and visualize how the combination of two features impacts an outcome. Usually we go bivariate, but you could even go to three levels if you wanted to, or four, but that would just be crazy. But yeah, you see how they work together. Sometimes you'll realize that a feature by itself has no impact on the model.

I tend to talk about it as if it were a basketball game: there are players that are never shooting the ball into the net, but they're assisting another player to do that. And that's how it often is in a machine learning model. Well, it depends on the kind of model, but features tend to work together.

There's also another kind I didn't mention. The kinds I mentioned are all global interpretation methods, but actually the vast majority of interpretation methods that exist, especially thanks to deep learning, are local interpretation methods. These are ones in which you're trying to understand a single prediction.

So you're not trying to understand the model as a whole; you're trying to understand a single prediction. And sometimes you can even take interpretation methods, apply them to single predictions, and put those into a group to say, okay, well, all these predictions that were misclassifications tended to be misclassifications because of this.

So you can also do things with a bunch of single predictions on that level as well, which can be very useful. 
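
To make a couple of the global methods above concrete, here is a minimal sketch using scikit-learn: permutation feature importance to rank features, and a partial dependence curve for one feature. The dataset and model are illustrative choices, not the book's examples.

```python
# Global interpretation sketch: permutation importance + partial dependence.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importance: how much shuffling each feature hurts performance
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")

# Feature summary: how predictions change as one feature varies
pd_result = partial_dependence(model, X, features=["bmi"])
print(pd_result["average"][0][:5])  # predicted outcome along the bmi grid
```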

SHAP Values Explained

Adel Nehme: That's really awesome. And I love the holistic nature of how you approach these different techniques. One of the techniques that we've heard a lot about over the past couple of years in the data science space, and that gets quite a lot of coverage in the book as well, is SHAP values, which is short for SHapley Additive exPlanations.

Can you dive into more detail about what SHAP values are and how and why they're useful?

Serg Masis: To understand SHAP, you first have to understand Shapley values. Shapley was a mathematician, and Shapley values are a method derived from coalitional game theory. Generally, I explain this with a basketball analogy.

So I tell people, imagine you're blindfolded at a basketball game, and a loudspeaker just announces when a player exits or enters the game by their number. They don't say their name, just their number. And you don't know if that player is any good, say like Michael Jordan, or bad, you know, like, say, I don't know...

I imagine Mr. Bean would be a bad player. Good players and bad players just hop in and hop out at any time, and the only way to tell if they're any good is whether their presence made a difference to the score. So with this, you can get an idea of which players are contributing the most, both positively and negatively.

For instance, you notice that when number 23 is playing, the score increases a lot, no matter who else is playing. So you get this idea: they must be really good. But imagine you're blindfolded and you're taking notes as well, so you start to quantify that. The difference in scores becomes a marginal contribution.

And this starts to make a lot of sense once you've quantified all the different marginal contributions. Once you run through all the possible permutations of players on the court, imagine it was an endless game, you can calculate the average marginal contribution of each player, and this is called the Shapley value. For a model, the features are the players.

Different subsets of features are called coalitions, and you can calculate all the average marginal contributions for each feature. Now, differences in predictive error are actually the marginal contributions, and you're blindfolded, of course, because it's a black-box model. That's why you're blindfolded.

Of course, the problem is that computing Shapley values is very time-consuming. As you can imagine, running through all the different permutations of all the different features, unless you're talking about two or three features, is just enormous. SHAP combines other methods in many cases to approximate Shapley values.

If not, it just takes a sample; it doesn't run the entire set of permutations. So it still adheres, to a certain degree, to all the mathematical principles that exist within Shapley values. And the thing is, SHAP is the closest thing we have to a principled way of calculating feature importance. This is important because it has advantages thanks to all the mathematical properties.

You know, it has symmetry, it has the dummy property, a whole bunch of principles. And since there are SHAP values for each feature and observation, another advantage is that it can become a local interpretation method as well, which is rare, because usually a method is just one or the other.
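
Here is a minimal sketch of what this looks like in practice with the shap library, assuming a tree-based regressor on a toy dataset; both are arbitrary illustrative choices. The same values serve a global view (ranking features) and a local view (explaining one prediction).

```python
# SHAP sketch: global ranking and local explanation from the same values.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)      # efficient approximation for tree ensembles
shap_values = explainer.shap_values(X)     # one value per feature per observation

# Global view: mean |SHAP value| per feature works as a feature importance ranking
shap.summary_plot(shap_values, X, plot_type="bar")

# Local view: the per-feature contributions that explain a single prediction
print("base value:", explainer.expected_value)
print("row 0 contributions:", dict(zip(X.columns, shap_values[0].round(2))))
```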

Adel Nehme: I love that explanation using the basketball analogy. I think it's very useful as a mental model for how this technique works. Now, given the proliferation of these interpretability techniques, what is the mindset that a data scientist needs to adopt when trying to understand the results coming from interpretability techniques on machine learning models?

Whether using SHAP or other techniques, what level of certainty do these techniques provide, and do data scientists need to inoculate themselves against a false sense of certainty when interpreting the results of interpretability techniques?

Serg Masis: Well, certainty with machine learning can never be too high.

I think that's a fool's errand in any case, and you have to start seeing models the same way you see another fellow human. If you did something and I asked you, why did you do that? And you told me, oh, because of this and that. I can't be a hundred percent certain; I just saw inputs and outputs, and then you gave me an explanation that I don't know whether to believe or not.

That's called post hoc interpretability, and it can't be a hundred percent certain. But what you can do is use different methods and then get a good idea. If several people saw you do something and then they asked you, your mother asked you, your sister asked you, I asked you, everybody around asked you, you tell them all different stories, but they have some commonality.

You can take that commonality with higher certainty. And that's what I expect people to do with these models. Don't rely on a single interpretation method and call it the truth; rely on several. That's always a good practice. And another thing is that a lot of them are stochastic in nature.

So they might give you slightly different results every time you run them. That's certainly the case for LIME, which is another very popular method. So why not average them out? Average the values or take the median; you'll get something closer to the truth in that sense, and it'll be a lot easier to interpret.

The more you run it, the better. There are also other cases in which methods have a number of steps; integrated gradients has a number of steps, so if you increase the number of steps, of course it's going to take longer, but it's going to be a better explanation. Or Kernel SHAP, which is one of the truly model-agnostic ways of doing SHAP, has a parameter for the number of samples.

And if you increase that number of samples, you'll definitely get something you can trust more. So there are ways of approaching better certainty within each method, but something I also recommend is not relying on a single method. And as I said, it's an interpretation. It doesn't have to be perfect, but it's better than nothing.

You know, it's better than blindly trusting the model and just never examining what it's doing.
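
As a concrete illustration of this advice, here is a minimal sketch of running a stochastic explainer, Kernel SHAP with its nsamples parameter, several times on the same observation and averaging the results; the spread gives a rough feel for how stable the explanation is. The dataset, model, and repeat counts are illustrative assumptions.

```python
# Averaging repeated runs of a stochastic explainer (Kernel SHAP).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

background = shap.sample(X, 50)                      # small background sample
explainer = shap.KernelExplainer(model.predict, background)
row = X.iloc[[0]]                                    # one observation to explain

# Each run samples feature coalitions, so results differ slightly;
# more nsamples and more repeats give more stable estimates.
runs = [explainer.shap_values(row, nsamples=200)[0] for _ in range(5)]
avg, spread = np.mean(runs, axis=0), np.std(runs, axis=0)

for name, m, s in zip(X.columns, avg, spread):
    print(f"{name}: {m:+.2f} ± {s:.2f}")
```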

Common Diagnoses about Machine Learning Models

Adel Nehme: Definitely. And given the results of some of these interpretation methods, we've discussed how to interpret them and the mindset that we need to adopt here. What are some of the common diagnoses you can make about a machine learning model that has suspect interpretability results, and how can practitioners remedy these diagnoses?

Serg Masis: You mentioned bias, and that's a big one. Fortunately, there are many bias mitigation methods that remedy this on three levels: you can remedy bias in the data, with the model, or with the predictions themselves. Another one is complexity. Models can get too complex, and that can lead to poor generalization.

So regularization is what I prescribe on the model side, but preprocessing steps like feature selection and feature engineering can also address it on the data side. And if you're dealing with something like images, there are all kinds of preprocessing steps you can do. Unless there's a reason, for instance, to have a background, you can remove the background of an image.

You can also sharpen it. You can do whatever needs to be done, but realize that whatever's done on the preprocessing side during training has to be done on the preprocessing side during inference. In NLP we also do a lot of preprocessing. Sometimes you realize a form of feature engineering and selection is actually stemming or lemmatization, you know, taking away pieces of the word to make it simpler for the model.

And you can certainly do that. That will make the model more generalizable, because there might be words that look like another word and mean very similar things, and just by taking only the stem, you're making it clearer and less complex. Then there's also robustness, and for that we can augment the training set data, or address the model with robust training methods, or even the predictions; there are many methods that can be used for that. And for model consistency, as long as you monitor data drift and retrain frequently, you can tackle it. But it's also good practice to train using time-based cross-validation when time is an important element, unlike in the typical cats-and-dogs scenario.

It's not like we expect cats and dogs to evolve that much during their lifetime; there might be slightly new breeds, but it might not make a difference if the images are two years old. But in a lot of cases, like I find in my own work in agriculture, every season is different, you know, different weather, different everything.

It's good to update the models a lot, and to use cross-validation to make sure that no matter what year you use, the model is going to be accurate and consistent.
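
Here is a minimal sketch of time-based cross-validation with scikit-learn's TimeSeriesSplit: each fold trains only on the past and validates on the "future", so you can check whether the model stays consistent across periods. The data is synthetic and illustrative, standing in for something like seasonal agricultural features.

```python
# Time-based cross-validation: train on the past, validate on the future.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
n = 1000                                   # e.g. daily observations (hypothetical)
X = rng.normal(size=(n, 5))                # weather-like features (hypothetical)
y = 2 * X[:, 0] + np.sin(np.arange(n) / 90) + rng.normal(0, 0.3, size=n)  # seasonal pattern

cv = TimeSeriesSplit(n_splits=5)           # expanding train window, later test window
scores = cross_val_score(GradientBoostingRegressor(), X, y, cv=cv, scoring="r2")
print("R^2 per temporal fold:", scores.round(2))   # a drop in later folds hints at drift
```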

Future of Interpretability

Adel Nehme: That's really awesome. And especially on that last point, I think the best example of consistency challenges has been through COVID, right? FMCG models that are highly time-series based, when it comes to, for example, predicting stocks of items like toilet paper, have a completely different dimension pre-pandemic than post-pandemic.

Yeah, for sure. Awesome. So as the industry evolves, machine learning is further adopted, and the need for interpretability grows, what do you think the future of interpretability will look like?

Serg Masis: I think there'll definitely be a change of mindset. I think it will come naturally, organically if you will.

But the problem right now is that model complexity is seen as the culprit of all ills in machine learning, and it isn't always. After all, the things we try to solve with machine learning are complex, or they should be. Maybe there are some novices out there taking a very simple tabular data set and throwing deep learning at it, but I don't think that's a big issue.

But I do wonder whether we're taking the brute-force approach too far. There's been an arms race with model complexity. It's not my area, but I have to wonder if we need to leverage trillion-parameter language models for natural language processing tasks. There has to be a simpler, less brute-force way of achieving the same goals.

After all, humans only have 86 billion neurons, and we only use a fraction of them for language at any given time. So I have to wonder what's going to come of this arms race, and whether it's going in the right direction, and that of course is going to change interpretability, whatever direction it goes in. If it becomes more brute-forcey and everything, it might hit a limit at which interpreting it is going to become impossible.

At least through traditional means; or we'll have to use more approximation-based things that will give us a less reliable interpretation. The other big issue we've discussed throughout the session is bias. Generally, the idea is to train machine learning models that really reflect the reality on the ground.

So the idea is, okay, go a bit back to basics. Think about what it's like to have a data-centric approach. Forget about the model; let's go back to the data and understand how to not only improve its quality in order to achieve better predictive performance, but, as I said, actually improve outcomes, change the flow of things, because a lot of the time we don't realize what can come out of the technology. It's like when social media came out: everybody said, oh, this is wonderful, we can finally communicate in these very rich ways no matter where you are in the world. But we didn't realize what could happen from that, and once the genie is out of the bottle, how do we put it back in? Or how do we improve things?

The same things can happen with AI, and I think a data-centric approach forces us to think about that. And then there are, of course, things that can improve with model interpretability; there are more and more methods coming out, methods that I find super promising and that will take a few years to reach the pipelines that are in production.

But I think something that will make a transformative change in the field is this: the nuts and bolts of machine learning right now are data cleaning, data engineering, training pipelines, the drudgery of writing all the code to orchestrate training and inference. In the coming years, new and better no-code and low-code machine learning solutions will displace these quote-unquote artisanal machine learning approaches.

And I believe the best ones will make interpretability prominent, because once creating a sophisticated machine learning pipeline is less than one day's work in a drag-and-drop interface, we can devote the rest of our time to actually interpreting the models and improving them. So there'll be more iterations in that sense.

Right now, the iterations are like, oh, let's achieve predictive performance, then better predictive performance. But once you have a no-code system maximizing predictive performance for you automatically, what will a data scientist do? I think the best thing we can actually do is interpret models, and we'll learn how to extract the best value out of the models and improve them in other ways.

We're not improving them in those ways right now: how to achieve better fairness, how to make them more consistent, how to make them more robust, how to package them in such a way that they outline their properties, their weaknesses, their strengths, data provenance, a whole bunch of things.

What needs to change?

Adel Nehme: That's really awesome. And going beyond the future of technical interpretability, you mentioned in the book that there are legal, social and technical standards and procedures that need to emerge to really realize the full potential of interpretable machine learning. What do you think needs to change in terms of regulation and how we organize ourselves as a species and a society to realize progress on that front?

Serg Masis: Yeah, there are a lot of things I would propose, and there are a lot of people thinking about these solutions. For one, I think certification is a must. Before deployment, you can certify models for things like adversarial robustness. There are only very few methods and standards for that right now, but they'll evolve.

Fairness, that's another one. Right now there are so many different metrics for fairness, but you could at least specify which one you use. And then even a level of uncertainty: there's a method called sensitivity analysis, which I discuss in the book, which can tell you what level of certainty you can have in the outcome; it's not so much about the interpretation methods, it's about the outcome.

Then the model card. That's another good one that already exists. A lot of people have criticized it because they think, okay, well, that's so simple, but it's an important step. So along with the model, you deploy a card which tells stakeholders important properties about the model.

Such as where the data came from, potential weaknesses, and what intended uses you prescribe for it, et cetera. Whenever we have a product out there, it just makes sense to have a label with all the ingredients and everything; I see it like that. I also advocate for abstention if, in a high-risk model, you have a lot of low-confidence predictions.

Like, for instance, telling a bank customer, okay, your loan has been denied because you're deemed 51% risky? No way, that's stupid; just leave it up to a human. I think that's ridiculous. There should be a band in which you say, okay, well, this is too close to call, have a human do this, and then you can improve the model.

People can look through these cases and say, okay, this is why the model thinks it's like that, because of this: this customer has slightly less collateral than we would expect. Or, I would give it to this person because they actually have this other thing, but that's not in our data. So maybe there are ways we can improve the model by actually catching this and sending it to a human.
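
Here is a minimal sketch of that abstention idea: when a classifier's probability falls inside a "too close to call" band, the case is routed to a human instead of being auto-decided. The thresholds, labels, and synthetic data are illustrative assumptions, not a prescribed policy.

```python
# Abstention band sketch: refer low-confidence predictions to a human.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)  # stand-in for loan data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
LOW, HIGH = 0.40, 0.60                       # abstention band (hypothetical policy)

decision = np.where(proba >= HIGH, "approve",
            np.where(proba <= LOW, "deny", "refer to human"))
referred = decision == "refer to human"
auto_ok = model.predict(X_te)[~referred] == y_te[~referred]

print(f"Referred to a human: {referred.mean():.1%} of cases")
print(f"Accuracy on auto-decided cases: {auto_ok.mean():.1%}")
```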

Then we have monitoring. I think it should include more than predictive performance. In addition to checking for data drift, we could continually monitor model robustness, feature importance, or fairness metrics; there are just so many things we could monitor for the model. Another thing I advocate is a manifest, something that the model has, that you save, that can let auditors trace model decisions, much like a black box does in a plane. I think that's super important. And then nobody can tamper with it. If there's one use case for blockchain, it's definitely that; well, there are a ton of use cases for blockchain, but I think that's definitely one. And then expiration.

I think one of the things that should be built in is that the model can auto-destruct. I definitely think models should have a strict shelf life, like milk that goes bad: it should be tossed out as soon as it reaches that date, no questions asked. And then, if a retraining procedure is in place, you'd aim to replace the model just before that date.

I think that's very important. And yeah, that's what I would suggest.
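
On the monitoring point above, here is a minimal sketch of one simple way to check for data drift: compare each feature's live distribution against its training distribution with a Kolmogorov-Smirnov test. The features, shift, and alert threshold are all illustrative assumptions; in production you would likely use a dedicated monitoring tool.

```python
# Simple per-feature data-drift check: training vs live distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train = {"rainfall": rng.normal(50, 10, 5000), "temp": rng.normal(20, 5, 5000)}
live = {"rainfall": rng.normal(65, 10, 1000),   # shifted: a much wetter season
        "temp": rng.normal(20.2, 5, 1000)}      # roughly unchanged

for feature in train:
    stat, p_value = ks_2samp(train[feature], live[feature])
    drifted = p_value < 0.01                    # alert threshold (hypothetical)
    print(f"{feature}: KS={stat:.3f}, p={p_value:.3g}, drift={'YES' if drifted else 'no'}")
```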

Call to Action

Adel Nehme: Yeah, a small list, definitely a small list, but I really appreciate this holistic perspective. I think you're one of the few interviewees I've had who lays out such a holistic perspective on the future of machine learning.

So finally, Serg, as we close out, do you have any final words before we wrap up today's episode?

Serg Masis: Nothing in particular. I think people should be excited about where we are in this field at this time. I think it's a good time to be joining, or even, as a user, to be watching what's coming out. As I say, I equate it to the mid-level maturity that the internet had, say, in the early aughts, like 2001, 2002.

Things were starting to work well. All of a sudden the internet wasn't the crap it was before that; people were excited about it, but they were like, these websites are so ugly, and this one never works, and this browser shows me this and not this other thing. And so I think that's where we're at. But on the development side, we're still going through growing pains.

People are still writing all their code by hand, as I said, every single line, and they think, okay, well, .fit and all the stuff we do. It just seems like it's going to look very antiquated very soon, and I'm excited for that as well. But one of the good things about being in this space at this time is that you get to see things at the ground floor.

You learn how to see things without all the abstraction you'll see them with in a few years, because once it's all drag and drop, it's going to open the floodgates for less technical people. And that's a good thing. But at the same time, it's not going to allow people to see the guts of things the way we see them right now.

Adel Nehme: That's awesome. Thank you so much, Serg, for coming on DataFramed and sharing your insights.

Serg Masis: Thank you.
