Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
We have a new technology that has arrived and we like to call it disruptive technology, right? It's reshaping all these different industries and sectors that it's touching on. And we're in a transition phase where it's the most dangerous and volatile because we don't know what to expect. We don't know how to use these systems properly. The development pace is incredible at times, again, for good and for bad, right? And all this uncertainty is causing a lot of problems and it's causing a lot of misuse, overuse, and over-reliance.
The IDF created a system known as Lavender. Lavender provided a risk assessment score to every single civilian or person living in the Gaza Strip. 2.3 million people. The IDF got an assessment on each one of them: how likely are they to be operatives of a terrorist organization? These lists would be transmitted to the Air Force or to the ground force in order to act on them and attack. Essentially what the system did was create a kill list for the military. What we know from internal testing is that the system will produce about a 10% false positive rate. So let's say you had a list of about 40,000 people, you know for a fact that 4,000 of them are false positives, and yet these go on the kill list.
Key Takeaways
Evaluate AI deployments as sociotechnical systems, not just models: map norms, incentives, hierarchy, and handoffs that will determine whether people follow the tool blindly or challenge it appropriately.
Design for responsibility up front using tracing: pre-assign who is accountable (blame), answerable (can explain), and attributable (owns the decision) when an AI-assisted decision harms someone, and don’t ship systems that create responsibility gaps by default.
Enforce ‘right answer for the right reasons’ with tracking by constraining or auditing features and rationales so models can’t make high-impact decisions based on absurd proxies (e.g., eye movement, dog ownership) even if they correlate in training data.
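As a rough illustration of that last takeaway, here is a minimal Python sketch of one way a team might hard-block absurd proxy features and audit which features actually drive a model's decisions. The column names, the disallowed-proxy list, and the model choice are all hypothetical, not something discussed in the episode.

```python
# Hypothetical sketch: enforce a feature allow-list and audit feature importance
# so a hiring model cannot lean on absurd proxies (eye movement, dog ownership).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

ALLOWED_FEATURES = ["years_experience", "relevant_certifications", "skills_test_score"]
DISALLOWED_PROXIES = ["eye_movement_rate", "owns_dog", "postcode"]  # illustrative only

def audit_and_train(df: pd.DataFrame, target: str = "hired"):
    # Hard constraint: refuse to train if a disallowed proxy is present in the data.
    leaked = [col for col in DISALLOWED_PROXIES if col in df.columns]
    if leaked:
        raise ValueError(f"Disallowed proxy features present: {leaked}")

    X, y = df[ALLOWED_FEATURES], df[target]
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Soft audit: report which allowed features actually carry the weight,
    # so reviewers can check the 'right answer for the right reasons' condition.
    importances = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    report = pd.Series(importances.importances_mean, index=ALLOWED_FEATURES)
    return model, report.sort_values(ascending=False)
```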
Transcript
Richie Cotton: Hi, welcome to the show.
Atay Kozlovski: Hi. Thanks for having me. Great to be here.
Richie Cotton: Great to have you here. To begin, what's the biggest ethical AI disaster you've seen so far?
Atay Kozlovski: Yeah, I'm an academic philosopher, so words for me are very important. The biggest AI disaster — I don't know if I could exactly pinpoint that, but over the past three years I've been working with a lot of different kinds of systems, from military technology to civilian uses, to individual private uses.
And in every single sector, we're seeing risks and dangers. Just look at the news coming out every day, right? We're seeing cases in which chatbots are promoting or motivating people to commit suicide. We're seeing misuses of these technologies by governments, so it's a bit overwhelming to keep up.
There's a lot of good, a lot of bad. Hopefully we'll be able to break some of these things down as we talk.
Richie Cotton: Absolutely. And there's definitely a lot of good happening with AI, but I agree there are also more problems, and there are new kinds of problems as well. So what do you think the most common failure modes are with AI?
So why do we keep having these AI ethical disasters?
Atay Kozlovski: If we take a step back, we have a new technology that has arrived and we like to call it disruptive technology, right? It's reshaping all these different industries and sectors that it's touching on. And we're in a transition phase where it's the most dangerous and volatile because we don't know what to expect.
We don't know how to use these systems properly. The development pace is incredible at times, again, for good and for bad, right? And all this uncertainty is causing a lot of problems and it's causing a lot of misuse, overuse, and over-reliance. As we look at specific systems, we notice some typical types of mistakes that users or developers tend to make.
And we can break those down if we like and go through them one at a time. Some of these will be very familiar to the listeners, some perhaps not, but they include overuse of these systems — what's come to be known as automation bias. This is the idea that once we have a system, we tend to go with whatever it tells us or recommends to us, even to the point where we discount our own intuition or our own thoughts about it.
That's one type of mistake we're seeing quite often. Algorithmic bias is another, right, where these really complex systems rely on huge amounts of data and we're not exactly certain why a system makes certain mistakes or leans in certain directions that become almost invisible to us as users.
We'll have to break these down into actual case studies in order to see where these come about. But those are typical types of cases that we see come up quite often.
Richie Cotton: Yeah. So the first one on automation bias. I had a case of this recently. So I was taking a flight and was going through passport control and there's a machine that reads my passport and it takes a photo of me and compares, am I the same as my passport photo?
And the machine said no. And the guard looked at me and told me, this isn't you. I'm like, yes it is, I'm obviously me. And we stood there for about two minutes. He had no process, so he was like, I have no way of dealing with this. And so there's a queue building up behind me and it got very awkward.
And eventually he said, okay, go on. It was a really stupid process, and the problem was that he didn't know what to do when the AI was wrong. I agree automation bias is a problem, but can you tell me how you go about resolving this sort of thing? What do you do when the AI is wrong and there's a mistake?
Atay Kozlovski: Yeah, I think there are two different aspects to it. One is when you're aware, right? Like in your case, there's obviously something wrong. Then you have to make sure, especially if you're a company and you have a lot of different employees in this part of your workflow,
that you have built-in protocols to deal with those situations, so that people don't come across this for the first time and are completely stumped: go prove that you are you. Now, there's a typical comedy that comes with tragedy, right, that arises in literature.
There was a book I read last year called I'm Not Stiller, and I think there's a movie coming out about it. It's basically the opposite case, right? It's about a person who decides to shed his identity, and everyone in society refuses to accept what he's doing: no, you are Stiller, we know that, right?
And he rejects this completely, but he constantly struggles to prove that he is not who they say he is, or that he is who he says he is. So these types of identity issues come up in weird ways when it comes to technology. So we said that's one side, right? When we know there's an error and that it's a mistake.
The more dangerous type, of course, is when we don't know there's an error, and we get into that tendency of just following the instructions blindly. This can be really benign sometimes, right? Imagine you're using ChatGPT to tell you where to go for lunch: where should I eat lunch today?
And it recommends a restaurant that doesn't exist. We've all come across these hallucinations, right? And yet you go to that place, you arrive there and you're like, oh, okay — the stakes are low, so you go to the restaurant next door. Fair enough. But if you're at the airport and you're suddenly told, no, this is not who you are,
and now you need to go to immigration and you're being held in a jail cell for two days until you can, I don't know, bring your wife or your in-laws to attest that you are who you say you are — it could be far worse. Maybe I could give an example of a system that I worked closely with.
Richie Cotton: Yeah, absolutely. I'd love to hear an example.
Atay Kozlovski: About two years ago now — oh, time flies — I worked on analyzing a system that was used by the Israel Defense Forces in the war in Gaza. And this was a recommendation system that they had created in order to enhance their intelligence capabilities. The idea was: we have vast amounts of data that we are collecting and we have a huge population.
We need to pinpoint specific enemy operatives within that vast population of two and a half million people, which is quite difficult. So they created this system known as Lavender. And what Lavender did was basically provide a risk assessment score to every single civilian or person living in the Gaza Strip.
2.3 million people — you get an assessment on each one of them and a score: how likely are they to be operatives of a terrorist organization? Then these lists would be transmitted to the Air Force or to the ground force in order to act on them and attack these targets. Essentially, what the system did was create a kill list for the military, so it would know where to attack.
At its height, we have records showing the recommendations the system was producing for the kill list. And this went out and helped the military accelerate its attacks in Gaza. That's what allowed it to solve the bottleneck that every intelligence agency knows: validating targets takes time.
It's very difficult. And so they went from a process where they could validate up to a hundred targets per week to a situation where you can do a thousand targets per day — orders of magnitude higher, right? With a click of a button, you can get recommendations. Now, what happened in this process as they implemented it — we were talking about automation bias, right?
So this was a new system that went into force, and reports are coming out that the traditional way in which they would evaluate these targets was abandoned. An analyst would get a target, and he or she would have only seconds to approve that target because of the scale and magnitude of the recommendations coming in. Within those seconds,
that's the due diligence they could do to approve the target. So basically what they could validate is: is this a man or a woman? Because they would have audio recordings, they might have visual recordings coming in. If it was a man, the system's probably right — okay, let's approve it, stamp it, send it out.
And essentially what we know from internal testing is that the system will produce about a 10% false positive rate, right? So let's say you had a list of about 40,000 people: you know for a fact that 4,000 of them are false positives, and yet these go on the kill list. And it's mind-boggling to think about that.
So you have this idea, you want to improve your intelligence system, you want to create a system that helps defend your country's security, and you're using it in a way that creates egregious human rights violations. It's almost senseless. So that was an extreme example of how, when you implement this type of system and you don't do it correctly, it has huge implications.
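To make those quoted figures concrete, here is a minimal sketch of the expected-false-positive arithmetic; the 10% rate and the 40,000-person list are the numbers cited above, and the code itself is purely illustrative.

```python
# Illustrative arithmetic only: expected number of wrongly flagged people,
# given a flagged-list size and a false positive rate among the flagged.
def expected_false_positives(list_size: int, false_positive_rate: float) -> float:
    return list_size * false_positive_rate

print(expected_false_positives(40_000, 0.10))  # 4000.0 people wrongly flagged
```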
Richie Cotton: My jaw just dropped. That's absolutely awful. I've got some background in machine learning, so I know you're always gonna get false positives, and in this case the consequences of a false positive are absolutely horrendous. So targeting an innocent person to be killed because of a problem in the model, and then not having a process to deal with this mistake — it's just a horrendous systemic failure.
It makes me angry. How do you go about stopping this sort of thing happening in the future?
Atay Kozlovski: A lot of my research has focused on trying to understand this notion of what type of control we need to maintain over an algorithmic system in order to ensure that it doesn't create, in the extreme cases, these horrible outcomes, and in less extreme cases, maybe irritating or bad outcomes.
So our focus has been on this notion of control, and we've worked with a theory called meaningful human control. And this sounds very plausible, right? We want to be in control of our systems. But there's something of an oxymoron here, right? We're creating autonomous or highly automated systems, and yet we wanna control them.
Those two concepts seem contradictory, right? It's a tug of war: the more autonomy you give, the less control you have. So in order to try and make sense of this paradox, what we started thinking is: okay, you can't have operational control over the system, right?
The system is running at a speed that we can't handle, using data that we can't analyze or understand. We can't physically control its operations. Instead, what we want is an indirect type of control, by designing the system to meet certain types of standards. This sounds plausible and intuitive.
What we were working on within that framework is trying to define or clarify what that actually means in practice. What kind of conditions would you need in order to say you have a good type of indirect control over the system? Let me go through three steps that we took in developing this theory.
The first is to shift our focus from only algorithmic analysis to what's known as a sociotechnical analysis, right? You have to expand your scope of analysis beyond just the algorithm or the software that you're using. You have to consider the bigger picture constantly. There are humans involved, there's a pipeline, there are multiple actors, and the algorithm is only one part of that.
So whenever I say a system, I'm talking about that holistic sociotechnical perspective, right? That's one. Do you want to comment on that, or should I go on to two and three?
Richie Cotton: Yeah, okay. Talk me through one of these sociotechnical aspects.
Atay Kozlovski: Yeah, so consider, for instance, the norms of practice wherever you're implementing this tool.
If you're in a military facility, you might have a very hierarchical system where people follow orders very clearly, and you can rely on that to ensure the system functions the way you want it to. On the other hand, if you're working at a call center — like how I look with these headphones — you might say discipline is not such a strong suit.
So we have to account for that when we're creating our algorithm, right? What I'm saying is we have external variables that indicate what risks we need to account for. Some of them can be normative — the behavior of the people involved. Others could be how many different actors you need to communicate between, right?
And a third might be the history of that domain of practice. Maybe later I'll tell you about a case far less egregious than the military one: I worked on a project at a hospital where we were looking at using LLM tools in order to reduce the workload for doctors.
One of the most curious findings that I had during that process was that there's a sort of culture that exists in a hospital, where the doctors have a certain level of expectation to be responsible for their patients. And that's something you can't expect, for instance, if you go to a tech company — it's a different type of culture.
And if you account for that culture when you're designing the tool, then you can sometimes take shortcuts, and sometimes it's the other way around in terms of what you need to implement. So for instance, in that case — I'll go into more detail later — we knew that the doctors know that the buck stops with them, right?
I'm in charge of my patient; regardless of what happens around me, I'm going to be held responsible for whatever happens. And that allowed us more flexibility when we were trying to think about which functions we can or cannot delegate. So that's what I mean by sociotechnical: accounting for the surroundings,
thinking beyond just the code and the algorithm itself.
Richie Cotton: I like the idea. So you gotta think about the people who are involved, you gotta think about the organizational structures, because otherwise having the technology in isolation is not gonna make sense.
Atay Kozlovski: Right, and that's also a risk when we use off-the-shelf products, right?
So a product can be designed with one process in mind, and then you implement it in a different context and it doesn't work as well as you would want, right? You always have to make these adjustments. And when you're designing these tools from zero, you bring that in and you adjust from the very beginning, from the get-go.
Richie Cotton: So you were talking about doctors and you said the doctors know that they are accountable. So is that something you always want? Do you always want a human who's gonna be accountable for whatever process you're running? Or would you ever want a case where the AI is in charge and makes the final decision?
Atay Kozlovski: This is interesting. Maybe this also leads to the second point that we talk about in the theory, which is called tracing. Tracing is the second condition, and it deals exactly with that question: at the end of the day, who is responsible for whatever outcomes come from the system?
So I'm of the opinion that a system might be able to make the end decision in certain contexts. That's okay. But responsibility is something that can only be attributed to a moral agent. A system cannot be deemed morally responsible for anything. It's not a moral subject; it's not appropriate to say that the system is responsible in a meaningful sense.
Responsibility is something that we only attribute to human beings in that sense. Now, we could break that down a bit. First, you can distinguish between legal and moral responsibility, right? Legal is, to a certain degree, arbitrary: whatever the law says in whatever jurisdiction you're in, that's the definition, right?
So you're liable in one jurisdiction and in another suddenly not. So I'm setting aside the legal side of responsibility, whereas moral responsibility allows us to do a more abstract analysis. And when we as philosophers come to evaluate this, we usually focus on three accounts of responsibility. The first is accountability, which is: who is to blame for something happening?
The second is answerability, which is the question of who can provide me an explanation for what happened. And the third is attributability: to whom do I attribute the decision or action that was made, and what does that reflect about that person? So let me give you an example. Let's say you have a hiring process at your company, right?
And you have Herbert, the HR executive, and he's always done all the hiring and vetting and everything. Let's say Herbert is a bit of a racist and he decides to exclude a person on the basis of their nationality, and that person later complains, and a board of inquiry asks Herbert: what were you doing?
You're obviously responsible, right? You are accountable, you're answerable, and you're attributable — we can attribute all three of these to you. You are to blame, you can explain to us why you were discriminatory, and this reflects your character as a person who is a bit racist.
Okay. Now, when we incorporate an AI system, suddenly we have problems which are often referred to in the literature as responsibility gaps. It means that one of these three accounts suddenly cannot be clearly identified. Now imagine that Herbert's a really good guy, but the system that he is working with has a bit of an algorithmic bias which makes it racist. Okay, a system can't be racist, right? Obviously we're talking hyperbolically, but the system tends to be discriminatory against minorities. Herbert receives the recommendation and he says, okay, exclude this candidate, shortlist that candidate.
Alright, move on. Now again, we have the same problem: someone was excluded, they complain. What can the board actually tell Herbert at this point? Is Herbert accountable, answerable, attributable? It depends very much on how you designed the system and how you implemented it. When we come and look at different types of systems, we say you need to think of these three aspects in advance when you're designing it.
And you have to create the system — not just the algorithm, the entire system — such that you'll be able to answer these three questions down the line when something goes wrong. That's the goal. We wanna be able to identify a human actor who can be accountable, answerable, and attributable, and if not, then we need to be aware that this gap exists.
For instance, if we're working with a black-box system, no one's gonna be able to explain to us why the system acted the way it did. If we know that in advance, perhaps we can work that in. But the point of the analysis is to take these into account in advance and design for them, essentially.
Richie Cotton: Okay?
Seems very important to have all three versions of responsibility. Built in then. So is there something you need to have designed right from the start, or is it something you can retrofit into existing systems?
Atay Kozlovski: Yeah, anything's possible. And remember, I'm not the technical expert, right? So you're not gonna get any deep technical answers from me.
I'm the philosopher. I give the normative analysis. Perhaps it's possible to make adjustments down the line. I think personally that the best way to achieve this is to come with the right mindset from the beginning. And again, this is an intuition of mine, right? If you have a system that's already out there and working, changing it drastically is gonna be way harder than if you built it in a certain way from the beginning.
That seems pretty obvious, right? So sometimes we can fix a broken system and sometimes not, obviously. But that also leads us to the third point that I mentioned within the meaningful human control framework, which is called tracking. Now, tracking is the idea that if we're gonna delegate a task — any task whatsoever — to a system, we need to be sure that the system can reason properly, essentially just like we would in that case.
Now, of course, the system might be taking into account way more information than we could. It might be able to come up with much faster answers and perhaps better explanations, right? Nevertheless, from a moral standpoint, we want the system to be able to arrive at the right conclusion for the right reasons.
This sometimes comes across as very funny. I came across a paper that was cataloging the weird biases and correlations that recommendation systems were making for hiring decisions. And one of them that intrigued me was that a system was providing hiring recommendations on the basis of the retinal movement of a candidate.
So depending on how quickly your eyes move or how many times you blink in a certain amount of time, that correlated, in the system, with whether you would be a good candidate or not for that position. This seems absolutely absurd, right? And even if the almighty came down and told me this is the correct answer, it would still seem absurd to me, because I can't grasp the basic idea of why those two correlate with each other.
So we don't just want the right answer, we also want the right answer for the right reason — in many cases, not always, but in many cases, especially in normative cases, those cases where values, morals, and ethics are involved, like the case of discrimination. For instance, another case that was discovered was that dog owners were deemed to be less suitable for certain jobs.
Now again, maybe in a huge corpus of data such correlations do exist. Does it sound reasonable to exclude someone from a job because they own a dog? I would say no. This sounds like a repugnant way to hire people, right? Something that should not happen. So these are the types of issues.
So when we talk about tracking, what we are talking about is the reasoning process, so that the system can identify the relevant reasons and identify how relevant they are — their weight — so that we don't suddenly have all the relevant reasons but the weighting is skewed, so I only hire this type of person even if that would be a relevant reason.
Okay. So those are the three issues that we look at. And what we say is that when these three conditions are met — now, it's not a yes or no, it's always to a certain degree — if they're sufficiently met, then we could say that even though we don't have any operational control of the system, even in theory if the system is fully autonomous, but it meets tracing, meets tracking, and we've done this from a sociotechnical perspective, then we would say we have meaningful control over that fully autonomous system.
So this paradox is solved by introducing these aspects into it.
Richie Cotton: Yeah. I can see how there are a lot of really silly features that can be introduced into models that have a significant effect when you're training them, but then you start to think about them: does this actually make sense? And it probably doesn't.
I mean, like dog ownership — unless you're working at a vet's or maybe a zoo, it's not gonna have a strong effect on the person's performance. So it seems like you're working towards a framework for determining whether you can have control of the AI system or not. And I know there are a lot of AI ethics frameworks out there.
So do any of these frameworks cover AI control?
Atay Kozlovski: No, it's not a high-level principled framework. The idea is that you would take these concepts, these categories, and systematically apply them to your use context. It has to be done from the beginning for every use context.
And the idea is that this will blend into the design process, because for every context, different issues come up, right? I'm not a fan of these documents of AI values or AI principles or constitutions for AI systems. I just think there's too much variability.
It's too context-dependent and situation-dependent — not just on the use case, but also on the specific location and timing — so it really requires a particular analysis of that specific case. Nevertheless, we can learn from other cases, right? As we gain more and more experience, like with everything, we come to expect certain types of errors.
Let me give one example. We know that these systems are often designed with one idea in mind, but when they go out into the wild, so to speak, people find interesting and weird ways to use these tools. So an example I came across, which I thought was really funny, at this hospital project: what they were developing there was a tool to help delegate the writing of discharge letters.
The idea is that instead of the doctor having to go through your entire file, summarize everything, and then give you that discharge letter when you leave the hospital, the AI tool will do that for the doctor — basically an LLM that we feed the data, your medical records and the doctor's notes, and it summarizes them.
Okay. Now they took a lot of really good steps to make sure that this is safe. They limited the types of interactions between the doctor and the system. They excluded certain types of sensitive information that they wanted the doctor to insert manually. So they did it really carefully to avoid these problems.
And here comes the twist, right? When we started talking with the doctors and interviewing them to see how they were using the tool, we learned in the very first interview that they had found a really cool use for it. If they were gone from the ward for a few days, when they came back, instead of looking at the patient's files, they would just look at the summary that the AI produced to see what had happened with their patient over the three days they weren't there.
And that basically bypasses all the guardrails they built in, because all the safeguards are designed around the day the letter goes out: a supervisor goes over the document, the doctor has to approve it first. A lot of real safeguards, and they just bypassed all of them completely.
So that's what I mean by context dependency, right? Sometimes you can't anticipate that, and when it's out there, people will find strange and weird ways to use it. Whatever you're planning as a project manager, once it gets into the hands of users — chaos, because they'll find new ways of using the feature that you haven't thought of.
Richie Cotton: I guess you have to have some kind of feedback loop to find out if these are good new use cases or if they're problems, and then you wanna feed that information back into the updates. Talk me through what you need to do to fix these emergent problems.
Atay Kozlovski: As a philosopher, and as a person who sees himself as a critic — not an anti-AI person, but a person who tries to engage with this critically to understand the limitations and the risks that are coming up —
that's the advantage of not being at a tech company: I don't need to sell anything. So, I think one of the things we see often is that people don't understand the systems they're working with. They go too quickly from the idea of what this is going to do to actually putting it out there in the field, and then learning retrospectively, oh no, it's not exactly working as we anticipated.
We're seeing this with employees being fired and replaced by AI and three weeks later being rehired because the tool can't do the job — with journalists, with call centers, with different kinds of places. What do we learn from that? We learn that hype is something that should be left to marketing, right?
And our job, either as consumers or as project managers, is to fight the hype, right? We have to be really clear-minded when we're working with these tools. And it's hard sometimes. There's extreme pressure to become AI literate, right, and to be at the leading edge of adoption, to be first adopters.
I get that. But if you do that, you have to expect risks, right, and punishment. Okay: being critical of the system, trying to combat the hype. And I think as you're using the tool, like with any other tool, create discourse. One of the egregious uses that I came across was with a system
that was deployed in hospitals and was meant to help doctors and nurses identify cases of sepsis — a very high-stakes issue. They wanted better sensors to be able to alert them in advance. What ended up happening is that the sensors would go off and create false positives, and the hospital had implemented punishments for nurses if they failed to act on these alarms.
And this caused unnecessary invasive procedures for patients, and the high cost of tests that had to be run because the system went off. There were conflicts between the experts on the ground and the system, and the hospital had built these protocols: go with the system, always. So that's a lesson we need to learn.
Richie Cotton: So that sounds like we're back to that automation bias problem we talked about before, where management trusts the system and then tells all the employees to trust the system, and then we're back to employees not having control over things. And a lot of the examples you talked about — healthcare, hiring, warfare —
these are all high-risk situations where there are big consequences when things go wrong. So are there any other areas where you think there's consistently gonna be a high risk of ethical issues with AI?
Atay Kozlovski: Yeah, the list is endless, unfortunately. Let's take a few actual cases that we've seen recently, right?
In the States we're seeing the alliance between the tech industry and government, specifically with ICE and immigration policies. We're seeing the use of facial recognition technology, decision support systems, recommender systems, social scoring systems — all these systems that have shown signs that they are brittle, that they tend to make mistakes, that they need clear oversight — being used,
to my mind, in the least risk-averse way possible. They're just being deployed straight from the creators — Palantir — to the officer in the field, right, without anything in between, and we're seeing the consequences. We're seeing these social scoring systems creating discriminatory action.
We're seeing people being persecuted without any reason. We're seeing people lose their privacy — and even if some might consider that a low-stakes issue, we know the risks of privacy loss. So this is a huge problem when military-grade technology trickles down to civilian uses like policing. These are what are known as dual-use systems, and maybe it's appropriate in warfare to use them in ways that set aside certain rights;
of course it's not appropriate in other circumstances, so we need to be aware of that. Another example, where we've unfortunately seen many errors, is the use in welfare systems. I don't know if you're familiar with these cases, but in Europe and in the US we've seen a number of cases where they've tried to develop these tools
in, typically, bad ways. This caused a lot of discrimination and misuse of these tools, where people lost their jobs, where people's families were broken up because they were accused of crimes they didn't commit, where they had to repay funds that they were actually eligible to receive. And we've even seen cases of people committing suicide after being accused.
So the consequences are enormous, right? And we see cases in Denmark, in France, in the UK. Amnesty International provides detailed reports about these cases — they do a really good job — so if anyone's interested in reading up about it, I recommend their website. They have great reports that really dig deep into the use of these novel technologies and the risks that we're taking on ourselves as we implement them.
What's common across all these use cases is that those who are most vulnerable will be hurt the most, and that's typical and unfortunate and horrible. And that's the reality, right? So it's the immigrant, it's those whom the state helps fund, it's those who are anyway without recourse to legal defense.
And so the vulnerable remain vulnerable, unfortunately. So that's another example. Finally, maybe a last example: we're seeing it in interpersonal relationships, right? We're seeing the use of these chatbots in mental healthcare and in relation to this loneliness epidemic that we are all living through.
And people are gradually relying on these systems more and more to find intimacy, to find a shoulder to lean on — metaphorically, obviously. And some of the results are good, a lot of the results are just boring, and a small percentage are very bad. And we're seeing cases in which some people are suffering very much from these.
We've heard about cases of teenagers going down this rabbit hole and falling in love with these systems, and finally it somehow motivating them to commit suicide — terrible and horrible. We're seeing other cases of people abandoning relationships in the real world in favor of these romantic AI partners.
Again, I'm not trying to be judgmental here. I'm just saying that we're seeing a disruption to how society has been functioning over the last years. The advent of this personal software use, and now this AI revolution of LLMs, is extremely disruptive in every single domain that we can see.
Richie Cotton: Oh man, that's just a big list of terrible things with AI. One thing that stands out to me from that is governmental issues particularly — you talked about policing there. Policing just seems like a very high-risk area where things can go horribly wrong. You've got high consequences of making a mistake:
either a criminal goes free or you're putting someone innocent in jail. You've got invasions of privacy as well. There's just lots of things that can go badly wrong there. And then beyond that, you mentioned chatbots, so that feels like a more recent problem. There've been problems with AI and technology in governmental departments for more than a decade now,
but chatbots are a newer thing. And I guess the next stage of that is around deepfakes. I know you've done a lot of research on deepfakes as well — talk me through your research in this area.
Atay Kozlovski: Yeah, gladly. That's the second stream of my research. So one stream is about this MHC stuff, this meaningful human control,
and the second is really about — not specifically deepfakes, deepfakes are one part of it — the field of trying to simulate human beings using AI. There are two issues we need to distinguish there. One is these AI companions that we come across, and these are basically just generic LLMs; they're not trying to simulate or represent a specific person — that's ChatGPT, Claude, whatever.
Yeah, that's an area that I don't do research on. What I'm interested in is the other half, where we take these LLMs and then train them on personalized data in order to enable them to create simulations of specific people — of myself, of you, of anyone we want. And that's where I do a lot of my research.
We can go through a few examples. I'm sure these are familiar to most people. What I did was create a kind of taxonomy to try and make sense of this phenomenon. Most people are familiar with deepfakes — most people know there's this problem of deepfakes, and recently there was this in the news, the idea of nudify apps and Grok creating deepfake pornography, and these really horrible use cases where the overwhelming share is abuse of women, right?
So they're the ones suffering from this. It's good that this is being dealt with, but I don't find it very interesting to talk about from my perspective, because I think our intuitions are clear, right? This is a horrible use case. This shouldn't be out there, there should be legal protection for these people.
This is just abuse, and from a philosophical standpoint it seems like there's nothing much more to say about that. So my research doesn't go into that side, but rather into all those edge cases where we might say: that's interesting, maybe that's okay. So for instance, we can think about a distinction here:
are we representing someone who is still alive, or are we creating a representation of someone who has passed away already? On the alive front, we might use these types of systems — I might wanna create a deepfake of myself in order to delegate certain tasks that I don't like to do. For instance, today I could have sent my digital duplicate to do this podcast.
Of course, why would I wanna do that? Because the fun is to come here and talk. But recently the CEO of Zoom, Eric Yuan, came out and said that soon every one of us will be sending our digital duplicates to Zoom meetings so that we can go and do what's important in life, which is go to the beach.
So those are the priorities there. But that's what we're thinking about. We're seeing professors creating these digital duplicates to assist students in answering questions about the course material. At any hour of the day, you can go onto their website and chat with the professor, whose duplicate is curated on data to answer specific questions about the course that you're taking.
A really interesting use case, right? It seems very positive; a lot of people will benefit from that. Other cases are starting to get a bit more borderline: we're seeing influencers creating digital versions of themselves so fans can interact with them and have quasi-parasocial relationships with them.
So one famous case was the influencer Caryn Marjorie. She created CarynAI, and for the low price of $1 a minute, you could talk to CarynAI and chat with it. And of course, these talks very quickly deteriorated into sexting and all the dark fantasies that everyone has.
She actually decided to cancel that and shut it off after a while, but now I think there's a reboot going on. In general, that's a trend that we're seeing: people trying to market themselves in different kinds of ways using these types of tools. And we might say there are advantages and disadvantages to that — what's your intuition?
What do you think?
Richie Cotton: I really like this, because a lot of the discussion around fake versions of people, deepfakes, is about the negative side of things, where it's celebrities saying stupid things, or it's nude versions of people, or it's some kind of dross. So I love that there are positive use cases as well.
Recently I was playing around with a platform called Delphi. They've got an AI Arnold Schwarzenegger. I had a great conversation with him about his movies. And yeah, it's a lot of fun just chatting with pretend celebrities. So that's a novel use case. I also like the professor example — having AI teachers is a huge thing,
something we are working on a lot at DataCamp. So talk me through how you make these positive use cases. Are there any philosophical criteria for determining whether you're gonna have a positive use case of a fake person?
Atay Kozlovski: Sure, yeah. So in the category I was talking about now, for instance, we focused on consent, right?
Does the person know that this is happening? Is this done with their consent? That establishes a prima facie reason to think that this is okay — it makes it morally acceptable if I agree to this happening to me. So that's one way we can think about it. On the other hand, we know that there are a lot of cases done without consent, and again, they don't have to be as terrible as the deepfake pornography or nudes.
A funny example — or an interesting example, funny is not the right word — that I came across: there was this case where Kanye West was coming out with all this antisemitic merchandise and antisemitic slurs, and a content creator decided to do a campaign against antisemitism. A very good cause, a very good idea, in my opinion.
I think most people would agree. What he did, maybe not so good: he created a video in which he used the image of celebrities giving the middle finger to Kanye. That was the idea. And then you see all the Jewish celebrities basically appearing in that video, all of them, from Scarlett Johansson to Woody Allen, to whoever —
but without their consent at all. And I think there's something to be learned there: we can create these images, and sometimes you might even have a good cause that you're trying to promote — fighting antisemitism, that's good, right? — but if you're doing it in a bad way, it ruins the thing.
And I came at this as a critic and said the cause doesn't matter if you're doing it in an egregious fashion: you should not be using the images of people without their consent. So that's one of these criteria that we've come across. Some colleagues of mine and I have worked on a principle which names five different conditions that you need to think about when you're coming to create these systems.
They talk about consent, transparency, and authenticity. As I'm naming these titles, I notice that I'm going back to doing what I said I don't like, which is listing principles, right? But in my defense, the colleagues made the list and then I did case studies evaluating the list, so I stuck to my guns on that part.
But yeah, it would basically be these same principles that we were trying to think through in different contexts — maybe even without consent. Let me just give one example of that, because sometimes people think consent is a basic must. We've actually come across interesting cases where we think consent might plausibly not be necessary.
So one example that colleagues of mine have worked on is called the PPP, the patient preference predictor. The idea is that we unfortunately have a lot of people who have car crashes or who are sick, and they go into comas and are unable to tell us their preference for medical treatment.
And then we have to rely on surrogates. So we contact the next of kin, or we contact someone who is related to this person, and we tell them: tell us what John would want to happen, right? Or Atay — would Atay want us to do this surgery or not? You have to decide. Now, there's a lot of data showing that these surrogates suffer from PTSD-like symptoms, that they suffer mental health problems, not knowing whether they made the right choice.
And so what my colleagues tried to do is create a system that would assist surrogates in making those decisions, by training a system on large amounts of personal data of the person in order to derive from that what the person's preference would be in this medical situation, and then using this as a recommender for the surrogate to help them make a decision in that case.
Now, first of all, there might not be consent in that case, right? If the person is in a coma, they cannot consent — maybe we could get that beforehand, as a prerequisite. Secondly, the system might make mistakes. But I use this as a borderline example where we can see the help it might offer, right?
Especially if people are suffering in those cases. We could also see where this might go wrong, I don't know. This brings me back to one of the lessons that I unfortunately learned in the hospital when I was working there: sometimes you have to ask the weird question of, is this better than what we have?
Is this good enough? And this is an annoying question, especially for a philosopher, because we deal with the abstract and the ideal. But if you look at what doctors are sometimes working with in the hospital — they have really old infrastructure, they have terrible working hours, they're under an extreme amount of stress —
if we can alleviate that even a bit, maybe in some circumstances that's worth certain risks that we take. I'm just saying that these are the types of trade-offs that we have to make as we use these systems. Okay, but I've gone off on a tangent a bit here. Maybe going back to the digital recreations, these deepfakes: the opposite side of this — again, a consent issue — is when we start using this technology to simulate people who have passed away already.
Now again, here we have a mixed bag of good and bad. So wouldn't it be cool if you're learning in a classroom — you're studying the Roman Empire or the Roman Republic, right — and Julius Caesar comes to class and starts talking with you, or he's helping you do homework about the culture?
Seems great, why not? That's really fun, it might be engaging for students — that's cool. Can we have the consent of Julius Caesar? It's almost a silly question, right? Obviously not. So again, we see situations where we say we need to adjust the criteria. On the other hand, let me give some more actual examples of cases where we're seeing the simulation of deceased people.
We're seeing two main case studies — three, but I'll focus on two. One is grief-related recreations. A lot of people who have lost a loved one wanna reconnect with that person or wanna maintain some kind of bond, and they find that by creating these simulations, they can talk again to that person — talk in air quotes, right?
And we're seeing this slowly moving from a niche to an industry. A whole industry is being developed around recreating our loved ones in simulation in order to help us, in some sense, cope with our grief. We can go down that path if you're interested, but I'll just explain the second example.
The second one is for certain kinds of projects or political acts, and there what we're seeing is the recreation of individual people in order to promote a certain type of act. For example, there was a tragic shooting — the Parkland shooting — where a lot of students were killed in a school shooting.
And the parents of one of the victims — his name was Joaquin Oliver — created a simulation of him after his death in order for that simulation to help advocate for gun control legislation. Now again, we can ask: is this okay or not? That's why it's interesting for a philosopher to work on this. But that's the type of use case: the parents are using the visual image of their son.
You can talk with that simulation, you can interact with it, and it will tell you why we need gun legislation in the US and why that's important. So those are the two types of cases, and I'm happy to go into either of them if you want.
Richie Cotton: For each of these, I think the tricky thing is gonna be how much of this is authentic to that person's voice, and how much of it are you adding on to them?
So if it's dead granny speaking to her descendants, then you're gonna want to have her talk as she would have when she was alive — these are relatives, you know what granny sounds like. So it's gonna be really easy to get into that uncanny valley where granny doesn't sound quite right and it gets creepy.
So, is that the main issue here, or are there other ethical issues around bringing dead people back to life?
Atay Kozlovski: I think that's one of the issues, right? How accurate do we want it? I'm working now on a paper trying to catalog the different types of inaccuracies that come up and what their implications are.
Sometimes we want a very accurate recreation — visually, in audio. Sometimes actually not; sometimes we think that our goal can be met by de-anthropomorphizing. An interesting example: there was an exhibition that I saw in Germany commemorating the life of Willi Graf. Willi was a member of a resistance group that fought against the Nazis, and he was arrested and executed during the Nazi regime.
He's a hero for his fight and struggle against that oppressive, horrible regime. And this commemoration exhibition used a digital simulation of Willi Graf where you could interact with it, and it's based on the biography that was written about his life — that's the data that was used for it.
But when you're interacting with it, you see a stick figure. So it's purposefully meant to highlight what is missing, right? It's meant to create that contrast: this is not Graf, this is just a representation of certain aspects. Now, with grandma, sometimes that's not what we want — maybe we want a perfect recreation of her.
And we're seeing a lot of companies that deal with hyperrealistic anthropomorphic avatars, where the goal is precision in terms of audio and precision in terms of visuals. My research focuses on gaps that I believe cannot be recreated. So here's an example: I think that data can capture only certain elements of our personality and our identity.
And I think that there will always be gaps. When you're creating these simulations, the whole point is for them to create novel speech acts. We don't want it just to quote me specifically — it probably won't have those sources — and even if it did, we want it to be interactive and engaging. You can't just use quotes all the time. But when it's producing a novel speech act, as a user, what am I supposed to think?
Is that what Atay would've said? Is that what Atay does think? And I argue in some of my papers that there's always a gap that we need to be extremely aware of: this is not what Atay is saying or would've said, it's what he might have said. And there's a huge gap there. A huge gap. And to emphasize this — and this is also a risk —
I often use a mythical creature from Jewish folklore called the dybbuk. The dybbuk is a type of malevolent spirit that inhabits a body and uses that body as its puppet for its own ends. It speaks through the puppet, using the puppet's voice.
And when I'm working on this technology, I always tell people: keep the dybbuk in mind and be careful not to become the dybbuk. And unfortunately, I think that with this technology we are often speaking through the mouths of the simulations that we are creating. And people can do whatever they want, right?
I'm not trying to tell people what to do. Talk to your dead relatives if that makes you feel good, fair enough. But be aware that this is not them, and be aware that what you're hearing is highly curated and, just like with sycophantic chatbots, is often meant for your engagement, not for anything else.
If you have that in mind, okay, then we might still see some positive things — but maybe not. My concern is a lack of awareness, right? That people might not know that this is what's happening.
Richie Cotton: Absolutely. And now I'm just thinking, what would an AI recreation of me after I die sound like? Most of the recordings of me are DataFramed episodes,
so dead Richie comes back to life as AI, and I'd be great at talking about data, talking about AI, and probably going to bore my descendants to tears. But yeah, there's a gap in what AI me could talk about when improvising. So the other thing you talked about was the idea of political advocacy —
for example, people who were killed in a school shooting, and then you recreate them for a political end. There they're definitely not talking as themselves, so it's gonna be someone feeding words to them, and that feels like a different ethical issue to me.
Atay Kozlovski: Oh, definitely. Here's a case that was discussed quite a lot in the news.
I don't know — maybe I'm so much in my epistemic bubble that I see it often; maybe others have never heard of this. The case is about a man named Christopher Pelkey. Christopher was murdered during a road rage incident, and the man who killed him was convicted and found guilty. The family petitioned the judge, during the victim statement segment before sentencing, to allow them to present a deepfake video of Christopher speaking to the judge and to whoever else was there, giving his own victim statement.
The judge allowed it, and we have the recording — it's online, anyone can find it if they want — of Christopher Pelkey providing his victim statement posthumously. And that raises exactly the issues that you mentioned, right? These are obviously not his words. The family — it was his sister who wrote the speech — used a hyperrealistic avatar of Christopher to speak these words.
But at the end of the day, these are not Christopher's words, and I found it egregious that the avatar constantly uses the first-person singular pronoun "I", so it's speaking as if it's Christopher. It doesn't create the kind of distance that I think would be more acceptable and suitable.
It tries to make all the other influences invisible, right? It tries to eliminate them so that we don't see them. At the end of the day, what we see is Christopher talking, and his words. But again, he was turned into a puppet in this sense. And he says certain things that are really crazy: he speaks to the murderer and says, in another lifetime we could have been friends.
That's what the avatar says, and it thanks the judge for certain things. So you ask yourself: how manipulative is this? Even if this is somehow faithful to what Christopher would've wanted to say, how does this impact the judicial system? Should we be having these types of influences coming into the courtroom?
A lot of difficult questions that need to be addressed, beyond the philosophical question of whether this accurately represents Christopher or not.
Richie Cotton: Absolutely. To me it seems like that's just a ploy to manipulate a jury rather than it being good evidence to put in a trial. Alright, so we talked about a lot of different ethical issues.
If you are an AI practitioner, what are the most important philosophical skills or ethical skills that you need?
Atay Kozlovski: Yeah, I don't know — that's a hard question to answer. I can speak more to myself and to my strategies, how I approach issues, and then maybe I'll share a project that I'm working on, because that might demonstrate some of this.
So as you've heard throughout our conversation, I've mentioned Jewish issues several times. That's strong in my identity and background. I'm descended from Holocaust survivors, and that's always been an issue of great interest and importance to me.
And when I came across this technology a few years ago, one of the first ideas that I had was: wow, maybe I can use this to help my grandfather, who had passed away, share his Holocaust survival story. And that got me into the whole field of Holocaust survivor testimony.
And there's a huge problem there, because there are very few survivors remaining, right? The last ones remaining are very old, and they were very young during the war. So we're losing that firsthand interaction with survivors, and educators in this field are trying to think: how will we adapt to a world without that?
And the question that I asked myself was: can we use this technology to help? I started working on that, and as I worked on it, I started coming across all these deep philosophical questions and ethical conundrums and principles. So, to abstract from that idea:
This is not very suitable for companies that are trying to market products, but I think slow and steady is the way to go. I think we have to be very reflective when we are using a delicate technology in a field like Holocaust remembrance. It's a very sensitive issue for many people.
It deals with a lot of difficult topics, obviously. I think you need to be very conservative in how you do this. I like to say sometimes that you need to wear two hats. As a philosopher, whenever I teach my students, I tell them: ask the most outrageous questions you can think of.
Push things to the extreme. What if, with Christopher Pelkey, we didn't just use the avatar in the courtroom, but then continued living with it? Ask those questions, engage with them. Think about that. That's when you're wearing the philosophy hat. When you're a policymaker or a company that's creating a product.
I think you have to switch that hat and be extremely risk averse. You need to be very careful. And if you come with that mindset of risk aversion and as a philosopher you bring this ethical background with you and you read about these topics, then I think you can anticipate a lot of the inappropriate or bad things that have come out.
I can give an example from my project. I'm working now with the son of a Holocaust survivor called Eva Kor. Eva lived in Indiana, and she became an advocate for Holocaust remembrance throughout her life. She passed away, and her son has continued her work of education.
They go once a year to Auschwitz on tours with students and athletes, and they do wonderful work. They have a museum that they created, the CANDLES museum. And what we've been working on is creating a digital version of Eva with these AI models. And what I thought was: sometimes these things are very clunky.
You can't really interact with them well. And I went and created a version of Eva together with Alex, working alongside him a bit in order to get good data sources and to make sure that he was agreeing to this. And as I was developing it, I started coming across difficult questions.
One silly example: it was replying to me in a ChatGPT style of ending its sentences. It would ask me back a question every time it answered something, just like you get when you interact with Claude or Gemini or whatever. And I wanted to edit that out, to make sure it doesn't respond in this way anymore.
And as I was doing that, I started getting these chills of, whoa, wait a minute. I'm making this Eva simulation speak in certain ways. I'm literally becoming the puppet master at this point. And that was really hard for me, even though I felt like I was doing something well intended. And yeah, so having worked on this for two years, I'm still not certain that it's good to do this or that it will be beneficial at the end of the day.
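To make the kind of adjustment described here concrete, below is a minimal, hypothetical sketch in Python, not the project's actual code, of two common ways a developer might suppress that habit of ending every answer with a question: an instruction in the persona's system prompt, and a simple post-processing filter. The names PERSONA_SYSTEM_PROMPT and strip_trailing_question are illustrative assumptions, not anything from the project described in the conversation.

```python
# Hypothetical illustration only -- not the actual project code.
# Two simple levers for steering a persona avatar's conversational style:
#   1) an instruction baked into the system prompt sent with every request,
#   2) a post-processing filter applied to whatever text the model returns.

PERSONA_SYSTEM_PROMPT = (
    "You speak as a survivor-testimony persona. "
    "Answer only from the approved testimony sources. "
    "Do not end your reply by asking the user a question."
)

def strip_trailing_question(reply: str) -> str:
    """Drop the final sentence of a reply if it is a question the user never asked."""
    # Rough sentence split; sufficient for a simple style check like this one.
    marked = reply.replace("?", "?|").replace(".", ".|").replace("!", "!|")
    sentences = [s.strip() for s in marked.split("|") if s.strip()]
    if sentences and sentences[-1].endswith("?"):
        sentences = sentences[:-1]
    return " ".join(sentences)

if __name__ == "__main__":
    sample = "That is part of the story I told for many years. Would you like to hear more?"
    print(strip_trailing_question(sample))
    # Prints: "That is part of the story I told for many years."
```

Either lever would sit between the underlying chat model and the user, and the deeper point raised in the conversation stands either way: each such edit is the developer deciding how the persona is allowed to sound.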
And obviously companies can't adopt this very slow attitude of really testing, really making sure you stand behind the product or behind what you're doing. But I think that's the way to approach this: be fully committed to what you're doing, endorse what you're making, and be aware of what the risks are and what trade-offs you're making.
So full awareness of that. Okay.
Richie Cotton: A lot of the principles here are around critical thinking about what could go wrong. So you need to be able to predict ethical disasters before you release things to the public. You don't want to deal with the consequences afterwards, so you want to deal with them upfront in the design.
Atay Kozlovski: So I'm working on a project now which is trying to consider: can we use these digital avatar simulations to promote political participation?
We want more people to be involved, we want people to be more aware of what's going on. And what we're working on is partnering with politicians to create digital versions of themselves. We're doing this in Switzerland, where I'm based, and Switzerland has a semi-direct democracy system, right?
So there are a lot of referenda, and you as a voter need to vote on a lot of issues that you don't know much about. What you usually get is a packet of information stating the positions of the different parties and the stakes involved, and then: what do you think? And it can be about weird things that you've never thought about in your life.
And so one of the things we're doing is giving these voters the possibility to interact with a digital version of the politicians on the specific referendum they need to vote on. Obviously, you can't engage with the politicians directly, right? Just because of the numbers in modern democracies, it's not possible.
The question we're researching is: can we use these tools to create that type of intimate conversation between citizens and their representatives? Now, we're testing this only internally, right? We're doing experiments with this because there are high stakes here. Will this basically manipulate voters, right?
Will it hallucinate? Will it go off track and suddenly spout the opposite opinion, which we don't want it to do? So we're playing around with that now. We see the potential benefit, but we also see the downside. So it's, just as you said, critical thinking: approaching this with an open-minded perspective, but well aware that it might, at the end of the day, not work out and not be successful, rather than moving fast and breaking things.
That's the mantra we're hearing coming out of Silicon Valley, and I believe that when democracy is at stake, that's a bad way to go. I'm hoping that this type of project might offer insights into one way to use this technology that might promote democratic values. And part of the problem is that I'm very concerned that the opposite will happen.
I think rather than increasing knowledge and access to information, what would more likely happen is that we would automate the voting process. We would say: I don't really need people to vote. I can just use the big AI to predict what you're gonna vote, and that's good enough.
So I see that as the extreme opposite end of the spectrum of democratic AI systems. Yeah, so I'm trying to combat that in that sense. But I think that gives some insight into how to approach these types of projects. Now, in reality, can everyone do that? Of course not. We have the privilege, again, of being in academia, not having to sell anything, just doing research.
And it's very scary that a lot of these companies today, Google, Meta, Amazon, are more lucrative for researchers, right? Researchers can get huge salaries and all the toys to play with, and then they abandon these ethical research principles in favor of going to these big companies.
I can understand that on a personal level, but as a phenomenon, I think it's extremely dangerous, and we're seeing the trend of publications coming out from these companies. In my opinion, there's always a little asterisk next to them: corporate research needs to be taken with a grain of salt. How else can you think about it?
Richie Cotton: Absolutely. I love the idea of using AI to educate people, to increase civic engagement, to have more informed voters. That's just an unambiguously wonderful thing. If it works, of course, and I guess that's the key, right? You need to make sure that you test things properly. You need to think carefully about the consequences before it goes live and you've accidentally broken democracy.
But I should say, I work for a tech company, so I have sympathy for tech research groups, but I have also worked in a university. I agree, it's a very different experience in how you approach building things.
Atay Kozlovski: And let's be honest, universities aren't perfect either, right? We've seen, since the new Trump administration, the amount of clashes between government and universities, and whether these are legitimate or not, I'm not going to get into that.
It just highlights that universities are not neutral, right? They have positions. So I've painted it as if, oh, these are the good guys, and it's obviously not like that. And I have a lot of colleagues working in these companies, and I think some of them do excellent stuff. Yeah, I'm just being a bit of a critic here, a naysayer for a second.
Richie Cotton: Yeah, you're a little bit salty. Of course, universities are a great place for fundamental research. That's a necessary component of a research ecosystem.
Atay Kozlovski: Yeah. I have a colleague who teaches at Texas A&M, and unfortunately he was recently told what he's not allowed to teach anymore: certain Plato texts in a philosophy lecture, because they conflict with the new, what is it called, the new regulations about diversity, equity, and inclusion, the anti-DEI regulations that came out. So yeah, censorship exists in universities too, obviously, and people who are more radical than I am say that I'm working for the man.
Richie Cotton: Yep. Get your information from different sources. And again, we're back to critical thinking being an important skill. Alright, to wrap up: I always want more people to learn from. So whose work are you most excited about at the moment?
Atay Kozlovski: I try to read a lot; that's my job also. But in terms of getting multiple sources, I try to get out of my own bubble, the one that I think we are all being algorithmically contained in. It can be very difficult sometimes, because you think you're seeing objective reality and then suddenly it pops, and whoa, what's going on?
There are so many other perspectives. So I think showing interest and actively seeking out other kinds of opinions is important. In my own domain, I have a colleague, a professor, who does excellent work, and I follow him and read all his new articles.
There are some media channels that I follow that do a lot of interesting tech-related exposés. There's a centre at Cambridge, the Centre for the Future of Intelligence, whose work I like very much, and I follow several of the scholars working there. So I think those are interesting places to look.
Richie Cotton: There's just a wealth of reading around ethical issues in AI. It's such an exciting topic. So, lots of ideas worth looking out for. Wonderful.
Atay Kozlovski: Maybe I'll just add to that that I think another important thing is humility. I try to practice that as often as I can: to recognize that I don't know everything and to be as self-critical as I can on these issues.
This is a really complex topic. A lot of the stuff goes over my head, even though I'm supposed to be an expert at it and am working on this day to day. So I think we need to acknowledge that, and asking questions, saying "I don't understand", or doing more research is crucial regardless of the topic we're working on.
And yeah, so engaging in these types of conversations I find to be wonderful, because I get other perspectives coming at me from outside the philosophy bubble. So I really appreciate that you invited me here and that I had a chance to talk today. Let me just say that I love when people contact me, so feel free to do that.
LinkedIn is mostly the only social platform that I use, or my email. If you hated what I said or you liked what I said, just let me know. It's all good. I love discussing this stuff and I'm always open to doing that. So really, Richie, thank you so much for having me here, and I really appreciate it.
Richie Cotton: Okay.
Thank you for your time.
