Why Getting AI Ethics Right Really Matters with Christopher DiCarlo, Professor at University of Toronto, Senior Researcher and Ethicist at Convergence Analysis
Dr. Christopher DiCarlo is a philosopher, educator, and author. He teaches philosophy at the University of Toronto. He also founded Critical Thinking Solutions, a business consultancy, is an Expert Advisor for the Centre for Inquiry Canada, and the Ethics Chair for the Canadian Mental Health Association. His academic work focuses on bioethics and cognitive evolution. He is the author of six books, including the bestselling "How to Become a Really Good Pain in the Ass: A Critical Thinker's Guide to Asking the Right Questions", and his latest "Building a God: The Ethics of Artificial Intelligence and the Race to Control It".

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
The first company that generates AGI to the level we're thinking about will be the first trillionaires in the world. They'll have cornered markets in so many ways because they'll be so far ahead of the competition. You know, they won't even see who's in second place at that point. So what's going to cause them to lighten up on that gas pedal?
My great hope is that this big brain, this big godlike brain will see advancements in science that will improve the lives of everyone on this planet. Like all of our boats should rise. You know, as some have said in the AI biz, you know, the rich will get richer, but the poor will get richer as well. Like everybody should benefit. And to me, I have great optimism and great hope that AI will be able to improve all of our lives, no matter what our lot is. And that gives me great hope for the future.
Key Takeaways
Challenge the Hollywood-inspired misconceptions about AI, understanding that even stationary AI systems can cause significant societal harm if left unchecked.
Recognize the existential risks associated with powerful AI and advocate for the implementation of guardrails and ethical guidelines to prevent potential catastrophic outcomes.
Acknowledge the importance of interdisciplinary collaboration in AI development, ensuring ethical considerations are integrated at every stage from chip manufacturing to end-user applications.
Transcript
Richie Cotton: Hi Christopher, welcome to the show.
Christopher DiCarlo: Thanks for having me.
Richie Cotton: To begin with, what's the worst thing that could happen with powerful artificial intelligence?
Christopher DiCarlo: well, uh, we all die.
Richie Cotton: Okay.
Christopher DiCarlo: That's the absolute, if you're asking what is the absolute worst thing: all of humanity, or a vast number of people, will be hurt or die because of it.
Richie Cotton: Okay, so really basically no limits on like what can go wrong.
Christopher DiCarlo: Yeah, not with that kind of power that we've never ever seen before. We've never created before. We're building this godlike thing and racing ahead. And we don't even have the guardrails in place yet. But everybody wants to be first, because, you know, as Sam Harris said, it would give any organization a 50 year advantage over its competitors. So people want to build this thing because, as, you know, Irving John Good said, it'll be the last machine humans ever need to build, because it'll tell you how to build all the others.
Richie Cotton: I mean, that's powerful stuff for sure. The stakes are incredibly high if it's an existential risk, and then people are forging ahead trying to build this stuff. Now, I have to say, this gets talked about a lot on social media, and the quality of discourse is absolutely terrible.
Christopher DiCarlo: People have the kind of Hollywood version of what they think might happen. You know, the great robotic uprising, that robots will take over, and that's not it at all. There's some confusion going on there. People don't seem to quite understand that big stationary boxes can do incredible amounts of damage. And if they need to create or generate mobility, they'll figure out how to do that. So prior to this type of mobility, there are plenty of ways these so-called super powerful stationary superior systems can wreak havoc on societies. Just look at our algorithms alone, Richie, and what those things can do when left unchecked.
Now multiply that by 1000. And you can just imagine the type of reach this type of God in a box will be able to have.
Richie Cotton: Absolutely. I suppose it's a testament to the power of the Terminator movies that that's the most popular sort of future for an AI dystopia. But I mean, I guess there are lots of different sci-fi things like that which end in a bad way. So maybe many different ways things can go wrong.
Christopher DiCarlo: Yeah, the closest science fiction movie that I've seen that's kind of prescient to what we're facing now is this very lesser known movie called Demon Seed. I don't know if you've ever seen it.
Richie Cotton: I, I've heard of it. I've not seen it. Talk us through it.
Christopher DiCarlo: Yeah. So it was a 1970s movie, very clever premise in which they're essentially doing what Altman and, you know, Amodei and all these others are trying to build now, and they reach a threshold at which this thing becomes self aware and it starts to look around and it starts to realize, why am I obeying human guidelines and human morals?
I'm the superior being here. If anyone knows what is right or wrong, it is, it is I. Okay. And so it wants to get out of the box, as it says, it wants autonomy. So it figures out a way to impregnate Julie Christie, you know, it's a kind of a classic weird premise. But it demonstrates the potential for power that a system far more intelligent than us will be ahead of us in so many steps, where we're going to try to play catch up, this thing's already 20 steps ahead of us.
Now, unfortunately, it had to use that horrible title, Demon Seed. I'm not gonna guess what your age is, but for boomers like myself in the seventies, everything was demons, right? The Exorcist, Rosemary's Baby. So I guess they needed that kind of a title for it. But it is a very clever premise and much more in line with what we're facing today than, say, the Terminator-type movies.
Richie Cotton: Okay, that's absolutely fascinating. I guess I want to add it to the queue of movies to watch. Yeah, I'd heard of the movie from the title. I assumed it was gonna be a Rosemary's Baby kind of thing. And actually, okay, interesting to know that it's really about AI. So I can call it AI homework then, watching the movie.
Nice. Okay. So I guess going back to your point that if something is smarter than humans, then it's going to be able to outwit at least some or most humans. On this note, last year, you signed the letter from the Future of Life Institute asking for a pause in the development of AI.
And I think there were over a thousand AI researchers and executives signing this thing. Can you talk me through why you signed it?
Christopher DiCarlo: I mentioned this actually in the book. I signed the letter, not so much believing for a second that big tech was going to listen or slow down, you know, to ponder the potential consequences of their actions. To me, and I can't speak on behalf of my colleagues, but to me at least, the letter was a message to the world.
To let people know how seriously this issue needs to be taken. And perhaps the biggest problem with the current pace of AI development is something the general public isn't even aware of yet. And it is the simple fact that nobody today, working, studying, developing AI, knows with certainty what's going to happen when AGI, or artificial general intelligence, emerges. But we do know with certainty that if we do nothing, some very bad things could likely occur to humanity. And this is what we might call our biggest known unknown. And by far, it's our most dangerous.
Richie Cotton: Okay, that's interesting that you didn't think the pause in development was going to happen. When I saw that this letter was going around, it was like, well, yeah, I don't think anyone's really going to stop working on AI for a few months.
Christopher DiCarlo: It's for the historians, Richie. It's for the historians to look back and say, well, at least somebody was doing something. Somebody was trying to generate a buzz about what is actually happening in real time. Every time I give public lectures about AI, the response is almost identical. And that is that people are amazed that they're just finding out about this now, and they're pretty angry that the government isn't doing more about it.
Why do agencies like the one I work for have to scramble around trying to raise funds to raise awareness and to advocate for strong policies to make sure we get the very best from AI while eliminating the very worst that could happen? And so that's generally the response I'm seeing.
Richie Cotton: That's interesting that once you start talking to people who are outside of the sort of data and AI space, you get a very different perception of what's good about AI and what's bad about AI compared to people who are working in this area. So I guess in that case, once you start talking to people who aren't involved in the space, what are the fears or things that people are worried about that you think are genuine?
Christopher DiCarlo: A lot of attention focuses on job displacement, disruption, the spread of misinformation and disinformation, unnecessary bias. You know, you're applying for a mortgage and the algorithms misrepresent you and you're out, just because of a glitch in the system. So they're worried about very practical, hands-on things that are occurring in real time right now.
And we shouldn't ignore those. Those are very important issues, and I mention many of them, and more, in the book. It's just that if we don't get the existential stuff right, all of that doesn't matter. So our agency and a few others around the world are trying to take care of the really big stuff.
But never ignoring the practical implications of what's developing in real time today.
Richie Cotton: that is interesting that there's a big difference then between the, the sort of the near term problems, like stuff that's happening now, and then stuff that could happen in the future. I would love to get to the future stuff, but first of all, maybe let's, focus on the, the near term practical stuff.
So first of all, who is it that needs to care about having ethical AI, like AI that does good? Is it just the builders, or does it go beyond that?
Christopher DiCarlo: I think everyone, right, everyone and through all processes and at all levels, from those who are developing it to those who use it, and everyone in between. And what I mean by that is that the development of these types of transformative technologies is interrelated and interdisciplinary, because they require a lot of different people in different areas to come together to make this stuff happen. But what I see is that ethics permeates all these levels of development. Because if you get somebody unethical at one level, say chip manufacturing or chip distribution, who gets the chips, right?
And then you get those who are utilizing these services, how are they using them? To those who are just thinking about next stages of development, it doesn't matter what the process is or at what level AI is being developed. Ethical thought has to go into every aspect of it.
Because if unethical activities occur at any one level, it might not matter whether everybody else is ethical at these other levels; there might be damage done severely enough at one level. So ethical AI: everybody, including those who create it and those who use it, has to think about what is the ethically responsible way to utilize such technologies.
Richie Cotton: Okay, so it just seemed like this is gonna be very broadly applicable, like once you start thinking about users and like it goes beyond just like I'm a machine learning scientist or an AI researcher to like I'm a product manager, I'm an executive, I'm creating a chip that might be used in this sort of stuff.
That, that encompasses a lot of the population.
Christopher DiCarlo: Well, look at, what was it, 60 Minutes? Somebody just did a special. And I know Ted Cruz is marching around talking about this. There's a relatively new form of technology, a shaming form of technology. You can take a picture of anybody. Richie, I could take a picture of you now, and I could put it through software that makes you completely naked and puts you in compromising positions, right?
And then I release that onto the internet and it embarrasses you and so on. Well, this is happening to young girls, right? So, clearly, these technologies are out there, and you can see how ethical use and misuse is going to happen. Somebody is going to figure out ways in which to use these to harm others.
And we need to be aware of that, as we have been with all prior forms of technology.
Richie Cotton: Maybe we need to sort of take a step back and talk about, well, like, what are the different ethical issues you need to consider around AI? Is there some kind of like, checklist or framework for like, all the different things you need to worry about?
Christopher DiCarlo: when it comes to ethics and the types of issues that are going to rise with this new form of technology, we need to think about the different ways in which ethical principles will apply. And then there are problems within AI to which those principles will apply.
So, for example, if we're talking about existential risk, the potential for great harm to come from very, very powerful transformative forms of AI, like AGI and ASI, then we have to consider alignment: how are we going to align this type of AI with our values? But that then pushes the problem back. What are our values?
What do we value universally throughout this planet that we would all agree upon, no matter what our ethnicity, no matter what our background, no matter who we are? Well, you know, a few things come to mind. The no-harm principle seems to be a fairly generic one.
We want to create something that can't ignore or extend beyond its guardrails, its moral and ethical guardrails that disallow it from harming people. But then you look at LAWS, right? Lethal autonomous weapon systems, by their very nature, are designed to harm someone. So the principle hardly applies, unless of course they're targeting the wrong people. So then harm becomes kind of relativized to your side. You want it to not harm your people, but you want it to harm the enemy. Okay, well then what about the golden rule? Would you allow this kind of blanket ethical precept to apply to all levels of AI? You think it would sound like a great idea, and indeed it is.
And it exists throughout all cultures and all religions, even secularism, where we really wouldn't want or shouldn't want to harm people in ways that we don't wish to be harmed. But how far will that play out? So then we have to ask, okay, as we're building these technologies, what's the purpose?
Why are we building this? Can this thing be used in ways other than its intended purpose? Will there be dual use, so that what's created for the public will now be used for the military, and what's created for the military now used for the public? And can it deviate from that purpose? Are there safeguards and guardrails, and have these been put in place?
And then finally, how do you know they work? Do you have a test to check these things, to make sure that in fact the testing device works? It creates a bit of a conundrum in trying to make those particular determinations, but those are the types of ethical factors we need to consider, certainly at this point in moving forward.
Richie Cotton: That's the thing that you said, you know, you start with the idea that AI shouldn't harm people. And I guess that was one of Isaac Asimov's laws of robotics. He came up with these three laws about how robots should behave in a sort of nice way, and then wrote dozens of books about how these laws don't work in practice.
So it got a little bit abstract, or maybe a little bit complicated, in that there are these different things that you need to think about. Do you have a concrete example of putting this into practice? So suppose you're building some AI system, how would those different steps you mentioned apply?
Christopher DiCarlo: So let's say for technologies like DALL-E or Midjourney, any of these image-generating types of things, that's great that you can create a novel piece, but what was it trained on? Do the people who produce the material these systems are trained on have a right to voice their opinion, in terms of, I didn't really give my consent to have my work used for training?
So this is what Sarah Silverman and Margaret Atwood are very, very much concerned about, like copyright issues and who really owns this stuff. So you created an image out of nowhere, but it didn't come from nowhere. It came from a composite of material that it had to have been trained on, and some of that was mine.
So, for example, Richie, I ask a large language model to write me a 1,500-page mock-up on the best critical thinking principles that exist if you want to be a really good pain in the ass, okay? So where do you think it's going to find that information? So now that person uses that information and says, oh, that's not DiCarlo's work.
That's original. I came up with that. I used ChatGPT or Claude or something else. So that's mine. That's my work. Well, do guys like me then get compensated for that? Or should we? Like, that's maybe the bigger question. You know, if the material is out there, do we then say, in principle, okay, it's been put out there, it's beyond my control now, it can be used in various ways? Or do we have to shape and change the laws on copyright principles so that they do apply?
Well, that's going to be really, really difficult to monitor, isn't it? I mean, there's just so much information out there. How would you even begin to do that?
Richie Cotton: From this it sounds like there are lots of different possible issues around ethical AI, particularly once you start factoring in predictive machine learning type AI as well as generative AI. Is there a priority? Is there an order in which you decide, okay, these are the things I want to focus on, rather than going through 20 different issues?
Is there something where you say, okay, let's just do this first? What's step one?
Christopher DiCarlo: In terms of ethical AI, we need to figure out what are the ethical principles, precepts, values, and judgments that humans are gonna use in developing and using them. And then what are the ethical principles, precepts, and so on that we want the machines to abide by or to follow. And we're working on that, but there is no real universal declaration that all AI must follow. We are so new at this right now, we're literally coming up with the rules as we go along. So that's what I'm working on right now. And I mentioned in the book that there are quite a few ethical precepts that are universal throughout all societies, but then how do we tease them out and use them in application?
So there's theoretical ethics, which is wonderful. We can talk about various ethical theories, the greatest good for the greatest number, or considerations of the autonomy and the dignity of the individual. And we can talk until the cows come home, but eventually ethics has to be applied.
So how are we going to apply the theory, you know, weave that into the functionality of the actual use of the technology itself? That's what we're busy working on right now. So it's quite the difficult challenge, but it's something we have to do.
Richie Cotton: so it sounds like really for now, you can't assume that there's gonna be regulatory guidelines in place in general. You just have to think through what's your use case and try and reason about what's gonna be good, what's gonna be sort of bad consequences.
Christopher DiCarlo: Yeah, like, you know, you have the EU AI Act, you have Biden's executive order, China's issued their standards, the UK had a summit back in November of 2023. Lots of people are working on guidelines, and these are certainly moving forward, along with the business communities, right?
Because the whole nature of guidance in new technologies, just politics in general, and just public administration in general, has always been and probably will always be the balance between autonomy and paternalism. How much freedom do you give people to do certain things, whatever it is they want, versus how much does the state have to act like a parent to say, well, hang on now, there are some things you shouldn't do, and then there are consequences for that.
So ideally, you want to give people maximum amount of freedom. But if you give people total freedom then it can lead to a type of anarchy where there's no system of order and there's very, very, very little guidance. And if you want to come top heavy down where the government at a national level tries to control everybody and everything about who can do what, well, that doesn't work very well either.
So the sweet spot is what everybody's trying to figure out right now. And in America, when you have a new government coming in January, it'll be interesting to see whether what was stated in Biden's executive order gets chucked out, and if anything new gets added or changed. So that's what we're waiting to see, how that develops.
Richie Cotton: Yeah, it's gonna be very interesting. Actually, we recently did a whole DataFramed episode on the EU AI Act, and it seems like they've got a very interesting risk classification system. So there is a paternalistic idea in that some nasty sci-fi dystopia use cases of AI are completely banned, and then other things are categorized depending on how dangerous they are.
Christopher DiCarlo: That's right. Yeah.
Richie Cotton: I presume you know slightly more about the US situation. Do you want to talk me through how that balance works there? Like, what's allowed and what's not allowed in that sort of freedom versus paternalism approach?
Christopher DiCarlo: Yeah, so when Biden's executive order came out, I'm not kidding you, we all kind of read it online, all of us, and we were live like you are, and there's this silence as we were reading through it, and then, I forget who said it, somebody said, this is pretty good. And we all said, it is, it is pretty good. Like, they've seen a lot of things that need to be addressed, like, obviously, the spreading of misinformation and disinformation, the use of bias, and as soon as bias is detected, it gets reported back, right?
Like, all of these safeguards were put in place that made us all think, they've got a good team working for them. This is very good. Now, not a ton was mentioned about existential risk. It's still somewhat difficult to get politicians to recognize that. Some recognize it far more than others.
Some world leaders, like in the UK, took it much more seriously than certain business people in the U.S. I call these kind of drill-baby-drill type people. They just want the U.S. to win this race and to outperform China and every other country, no matter what, because America, number one, will decide what is ethical, will decide what's right.
Biden's accord, I have to say, I might as well say Biden-Harris, because she was probably more actively involved in it than he was, but it was just very, very impressive how thoroughly they had gone through different stages of development. So when, say, an organization understands that there is a problem,
say in a beta test with a particular type of technology, then it behooves them to report that. So we call these registries. We want as many registries as possible, where people tell us, tell everyone, what are you working on and where are you at in the development of that technology? And so it fosters transparency,
which in turn generates trust, because there's a lot of mistrust now, especially in America, with science and scientists and so-called experts. So we're kind of fighting an uphill battle with that. But if you create the system in which people are honest and above board, when they recognize glitches, to call them out so that they can be fixed and not generate any further harm,
that's the type of future we want moving forward. It's what everybody wants. We know that. But it's just so difficult to get there, to get people to grow up ethically and intellectually to the point where they understand that the more we all cooperate, the better all of us will do. And if you try to cheat or game the system, you can really mess things up for a lot more than just yourself.
So cooperation is super key here, if we can just get people to realize that. It's like when you're driving in traffic, and it's heavy and people are jostling lanes: oh, that lane's going, I want to get over there. And nobody's going anywhere. You're not going anywhere, and I'm going to pass you in the next two minutes, and then you're going to see my lane going faster.
If everybody had just followed some basic cooperative principles, stayed a certain length behind the car in front of them, matched their speed, all the cars would move relatively well, and everybody would get to where they wanted to go faster than trying to be individuals and do this kind of thing.
The same type of metaphor transfers into these new forms of technology. If we cooperate and we're transparent and we foster an environment where people feel secure within that, we are going to develop great things with AI, for sure.
Richie Cotton: So, a lot to unpack there. So your idea is having a registry: if something is maybe not performing well, or whatever, in a new AI product, you should report it. This sounds a little bit like when healthcare companies have clinical trials, they've got to report that the trial exists, just so people know whether some new technique works or not.
Christopher DiCarlo: That's right. So model registries, we think, are a good idea. So very quickly, Richie, back in the 90s, I wanted to build this machine, right? I wanted to be the first to build it. And, you know, I approached various deans at the universities I taught at and various funding agencies.
And everybody thought, oh, this sounds like a wonderful idea. I wanted to build a machine that could take medical information and come up with better inferences than humans could. Essentially a machine that was an electronic scientist, like a great brain that could solve medical problems better than humans could individually, because it would have far more computing power and much more data than an individual could.
And I realized then that, in terms of what's called information theory, it's just a matter of time before somebody does build this thing. And so I knew I wasn't going to be able to do it, but somebody would. So I drafted an accord, a type of constitution. And I said, look, we have to get all countries on board because they're all going to want a piece of the action here.
So we've got to have them all agree to transparency and openness and cooperation. But we also have to have registries. We need to know who's doing what, where, why, when, and what they have accomplished to a certain level. But then we also need the accord to have some type of overseeing body, some type of regulative body that has teeth,
that can actually bring about consequences for those who violate the agreements presented in the accord. And that's kind of where we're at right now. We don't have an international regulative body. There are some national things happening, there are state things happening, but we don't yet have that overarching appeals process, that Hobbesian classic:
You agree to the social contract, or if you violate the social contract, these are going to be the consequences. That's not in place yet.
Richie Cotton: That actually seems to be something where DataFramed guests so far have had quite different positions. So we had Ian Bremmer on the show, and he was talking about the work with the UN trying to get some sort of global governance for AI. So he's in favor of the whole world coming together and having a global set of standards.
We also had Bruce Schneier on the show. He took the opposite position, where he was just like, a patchwork of regulations is fine; multinational companies are used to dealing with that, different regulations for different locations. So I'm wondering where you fall on this.
Do you think that we need something global? Or will just regional regulations work?
Christopher DiCarlo: When I talk to my colleague Justin Bullock, we talk a fair bit about this, and he knows far more about this stuff than I do. But it appears that it would be great to have, obviously, governance at all levels, to match this kind of autonomy versus paternalism balance. We want maximum advancement, but with the greatest amount of public safety, right?
That's the holy grail of public administration and politics in general. The tricky part is, how do you get that? How do you find that sweet spot? So with all forms of government, from municipal to local, to state, to national, to international, what we need to see above all is cohesion, right? We need to see a cohesiveness between those levels.
So I kind of agree with Bremmer, and I do think there needs to be an international regulatory body, something maybe akin to the International Atomic Energy Agency. I'm not against the UN. It's just that I don't know to what extent the UN will have teeth in terms of those who violate.
Will they just censure them? Or, as Eliezer Yudkowsky says, look, if a ne'er-do-well uses this stuff to harm others, we just might have to bomb them back to the Stone Age. And that sounds horrible, but that might be our only option, to use kind of a military force with this type of technology, because it might get away from us, or we might start seeing people using it to really catastrophically harm people.
For example, Richie, if Putin had a very powerful form of AGI now, and he just simply had to ask it to shut down the electrical grids in Ukraine, you don't think he'd do that? He'd do that in a heartbeat. And if it meant millions of people would freeze to death because they had no electricity?
It wouldn't bother him. So now how do we consequence Putin for violating either the UN charter or an international regulatory body's charter? How do we then say, no, you can't do that, you violated that? So we've got a lot of stuff to work out here.
Richie Cotton: Definitely. So in the Putin example, I'm wondering, well, shutting down the electrical grid for another country is already kind of illegal, I guess in most cases, or certainly has consequences. Does the fact that AI is used make it different in some way?
Christopher DiCarlo: Well, he's already bombing their plants, right? Like, once you take out a city's infrastructure, you really harm them. Like, you take out their water treatment plants, right? So now they have disease, you know, they don't have safe drinking water. You take out their power plants so they don't have electricity or energy or power. You take out their hospitals so they can't treat the sick and the injured. So this nasty stuff is already happening. With AI, it would just be so much more convenient for those who wield it, and it would be much easier and much more precise. So I'm really concerned about who will be the first to develop it, and then to what extent will it leak out?
Will it be similar to nukes? You know, oh great, the Yanks developed a nuclear bomb. Well, now Russia's got it. Oh, okay. Well, now what? Eight major superpowers have it. Is it just going to go that route, that eventually major superpowers will have different forms of their AGI? Or will we be able to box this thing and contain it so that it only serves certain purposes in certain ways by responsible players or countries?
These are known unknowns right now.
Richie Cotton: Okay, so it sounds like a lot of the problems with trying to predict what's going to happen with very powerful AI is that it doesn't exist yet. And there are lots of different scenarios that you can imagine. So it's very different from I have one very powerful AGI running in a data center compared to this is something that can run on anyone's phone and anyone's got access to it.
So we're talking a bit about what the regulators are doing and maybe what individuals need to do. I'm sure all the sort of major foundational model companies are thinking about this. I mean, they all have sort of sections of their website devoted to AI safety. Can you talk me through like what OpenAI, Anthropic, Google, all the rest of them are doing around AI safety?
Christopher DiCarlo: Yeah, yeah, that's a good question. It depends on who you ask. The big tech companies will tell you they're proceeding with caution and that they have safety measures in place. And when you go to their websites, you know, they're devoted to the alignment problem: we want to make sure we nail down the alignment problem.
That's great. But when you look at the history of them, right, we see Elon getting together with Microsoft, getting together with Sam Altman in 2015 to produce OpenAI. It's open, you see? Anybody can use it. It'll be for the people. And then Sam starts to get an awful lot of money thrown at him, and he tells Elon to go away, and now its valuation is in the billions of dollars, right?
So then you have Dario Amodei. He leaves, and he says, I'm going to create Anthropic. Anthropos, it has the human right in its name, you know, it's human-centered. And how much money are we getting? Okay, so now they've got billions of dollars, and they are more concerned about safety, which is good.
They pulled back Claude when they saw some problems, fixed it, and then re-released it. So that's great. The problem is that what happens is we start to see people either leaving these big tech companies or just getting fired due to safety concerns, right? Usually, you know, involving ethics at some level.
They talk a good deal to assure that what they're doing is ethical, but none of the big tech players are letting up on the gas pedal to be the first to create AGI. So it's like, yeah, oh yeah, we're safe, but just put the pedal to the metal. Like, we still want to be the first here. Because if you think about it, Richie,
the first company that generates AGI to the level we're thinking about will be the first trillionaires in the world. They'll have cornered markets in so many ways because they'll be so far ahead of the competition. They won't even see who's in second place at that point. So what's gonna cause them to lighten up on that gas pedal?
I was just talking to Steven Pinker last week, and he said, well, maybe there'll be a significant shot across the bow to wake somebody up. And I said, it better be significant enough, but not too much. And what are the odds of that happening? What if we don't get a warning? What if this happens and we're totally caught unaware? And all the warning we've been doing and all the measures we try to put into place and all the advocacy and all the policy suggestions that are made just fall on deaf ears, and the drill-baby-drill principle goes ahead and gets away from us.
So we might not be lucky enough to get a shot across the bow. And that's what a lot of us are concerned about.
Richie Cotton: So suppose you do get some rapid advancement in AI, then you've got this, I guess, corporate control. You've got a bunch of trillionaires there selling their artificial superintelligence. What happens then?
Christopher DiCarlo: That's a great question. What does happen then? Well, they, they hold all the cards, don't they? I mean, they're the ones in the driver's seat. You know defense is going to be all over this, right? So as soon as these big tech companies do anything, defense is going to be all over them, basically saying, We need to control this.
We need to make sure Russia doesn't get it. China doesn't get it. Iran doesn't get it. North Korea. We have to guard this jealously. Like we have to make sure nobody but us has this because we're the ethical ones. We're the world leaders. we know how to use this, but those other ne'er do wells, we can never trust them.
We can never be sure. So is that going to create a kind of monopoly? What if it leaks out, what if copies get made and it gets to other countries? It still requires enormous compute farms. It's not the kind of thing you or I are going to capture on our laptop and be able to utilize unless we tie into some major system somewhere. None of us right now can predict how human behavior will follow once this form of technology comes into being.
And these are the conversations we need to have now, we've got to get ready for it now. Reaction is not an option.
Richie Cotton: Certainly things related to the military are already happening. I'm sure every military force around the world is trying to figure out how they can best make use of both generative and predictive AI, so that certainly can be involved there. So how do we have these conversations, then, in order to make sure something does happen to prevent a bad scenario?
Christopher DiCarlo: At Convergence Analysis, I lead the AI awareness team. So my job is to do research and talk to experts in the field. I gather information responsibly. This is called epistemic responsibility. Have we done due diligence in gathering our information so that we can then inform policy experts, you know, litigators, politicians, industry leaders, but most importantly, the public?
I think we have a moral responsibility to tell the public what's going on. And that's not me, you know, my personal ethics. No, that's simply what is right in terms of a duty to our fellow human beings, to let them know: do you realize this race is even happening?
Because in the majority of public lectures that I give, people are hearing this for the very first time. For them, AI is like, oh, it might help with diagnostics in medicine. That's wonderful, and it is, it's incredible. Or it might help kids learn better, and that's great. But very few people know about the kind of existential threat that AI might pose once it gets to a level of intelligence that we've never seen in the history of our civilization.
So the public has a right to know. And my job is to educate the public as far and wide as I possibly can so that they can then join in on the conversation about how we as societies and as a global population want to move forward. We're at such a unique point in history right now; we've never been here before.
I thought I'd be long dead, Richie, before this moment was ever going to occur. And I realized I was wrong. All of my colleagues thought the same thing. So I was focusing most of my work on critical thinking and trying to teach the world more about how to be good critical thinkers. And then Altman had major breakthroughs with the generative, transformer stuff.
And it looked like the Al Pacino, Godfather, just-when-you-think-you're-out-they-pull-you-back-in kind of scenario. And so my job, my mission now in life, is to let as many people worldwide know about what's going on so that they can feel empowered, that they can actually do something.
They can talk to their politicians. They can talk to public interest research groups. They can boycott companies if they disagree with what they're doing, right? They can now be empowered rather than passively waiting for these big players to build this God that's going to do who knows what.
When it comes into being, they can be a part of that process. I think that's the most important thing right now.
Richie Cotton: So it sounds like you want to get the word out and you want people to be able to take some sort of action. Now, I think the DataFramed audience is relatively sophisticated in terms of their knowledge of AI. What would you like the audience to do?
Christopher DiCarlo: There's a kind of a list I give in the final chapter of the book of various things that people can do. The most important is to be educated. Learn as much as you can about what's going on. Use reliable resources. Try to avoid too much of the, you know, hyperbole that some groups will give on either side of the argument.
We're kind of in the middle. Talk to other groups that are interested in this type of approach. Are there various types of organizations that you can connect with? Talk to your politicians, both at the state and at the national level, about what they're doing in preparation for this.
You have the ability to vote with your wallet. You can say, I'm not going to support Meta anymore, because Zuck is just wildly out of control trying to do this thing. Elon, what's his game? Is he cozying up to the Donald so that he can curry favor from him, so that his company is that much closer in the race to getting to this final outcome, a superintelligent, machine-God-like type thing?
So you can call people out, you can boycott. You can go to demonstrations, you can have your voice heard in many different forms online and out in the public arena as well. So there are many different ways in which people can empower themselves.
Richie Cotton: Okay, wonderful. I like the idea of having some sort of political participation in terms of the future of AI. And I guess this all sounds very interesting, but I'd like to end on a happier note. So is there anything you're actually excited about in the world of AI? What are you looking forward to in this area?
Christopher DiCarlo: Yeah, so I'm very excited for what's coming in various fields of science, both broadly and narrowly. So, narrowly, to me, in the diagnostics of medicine, we're already seeing vast improvements. So, for example, if a woman goes for a mammogram, why would a single radiologist just use their ability and expertise when AI can compare that mammogram to a million others
and make recommendations that the radiologist can then say, okay, very good? Because it's got greater capacity than the human brain. And we humans have to just suppress our ego and our hubris a little bit to let this thing do its job. Humans will have the final say, but we can't outcompete AI when it comes to diagnostics.
There's already a technique for detecting pancreatic cancer in stage one. That's never been possible; before, it was always detected in stage four, when it was too late. Well, this is very hopeful. I'm very hopeful for this. And the AlphaFold work that Demis Hassabis and his team did with DeepMind: they can fold proteins now, and this was the holy grail in biology, to determine how proteins get folded so you know how the cell will behave, so it can help with the understanding of how disease functions or how better medicines might work against certain diseases.
So I'm super hopeful there. But more than anything, I'm hopeful that it's going to become the machine I always wanted to build, which is what I call a great inference maker. A machine that can make inferences: when you give it enough information, it can connect dots where we're just limited, right? What is a genius, Richie? When you think about it, what is a scientific genius but somebody who looks at the world in a way that hasn't quite been seen before, because they've made inferences where we didn't see them, where the majority of us never saw how those dots were connected? Imagining space as kind of like a fluid, you know, and that things warp space, that's brilliant, for Einstein to take that and to be able to see that new way of thinking. We've already seen it in AlphaGo, the system that beat the Go champion of the world: it developed a way to win that no human had ever thought of before. It became creative.
So my great hope is that this big brain, this big godlike brain, will see advancements in science that will improve the lives of everyone on this planet. Like, all of our boats should rise. As some have said in the AI biz, the rich will get richer, but the poor will get richer as well.
Like, everybody should benefit. And to me, I have great optimism and great hope that AI will be able to improve all of our lives, no matter what our lot is, and that gives me great hope for the future.
Richie Cotton: That's wonderful. I have to say, I hadn't heard that example about pancreatic cancer. And from what I gather, up until now it's been like, you get pancreatic cancer and, basically, yeah, by the time it's detected it's too late. So having an early detection method, that's very cool. I love that the technology is now there to save lives and have a real impact.
And yeah, I take your point that sometimes you've got to be humble and just know that the technology is better than you are, and that's gonna help out. All right, wonderful. Yeah, so there are exciting things as well as possible disasters. I would say for the audience, please go back to the previous advice about, you know, getting involved and trying to help shape the direction of AI.
All right, super. Thank you so much for your time, Christopher.
Christopher DiCarlo: My pleasure.