
Rebuilding Trust in the Digital Age with Jimmy Wales, Founder at Wikipedia

Richie and Jimmy explore the early challenges of Wikipedia, the importance of trust and neutrality, the role of AI in content creation, and much more.
December 8, 2025

Guest
Jimmy Wales

Jimmy Wales is an American-British internet entrepreneur best known as the founder of Wikipedia and co-founder of Fandom. Trained in finance at Auburn University and the University of Alabama, he began his career in quantitative finance before moving into early web ventures, including Bomis and the free encyclopedia project Nupedia. In 2001, he launched Wikipedia, which quickly became one of the most visited websites in the world. To support its growth, he established the Wikimedia Foundation in 2003, where he continues to serve on the Board of Trustees and act as a public spokesperson. He later co-founded Fandom in 2004, expanding the wiki model to entertainment, gaming, and niche communities. Wales has also pursued experiments in collaborative journalism, including WikiTribune and its successor WT Social. His work in open knowledge has earned recognition from organizations such as the World Economic Forum, Time magazine, UNESCO, and the Electronic Frontier Foundation. He has held fellowships and board roles at institutions including Harvard’s Berkman Center and Creative Commons.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

There is a huge opportunity for AI-based tools to support the community. But we're really quite adamant about human in the loop. Part of that is just simple practicality: large language models are amazing, but they're not good enough to write an encyclopedia article. It's amazing that we're getting such great results, but it does lead to hallucinations, and the hallucinations are just wrong. And in fact, they are worse in my experience.

Trust is really important in a business setting. It's a lubrication that makes everything a lot easier. I've seen stories that to me are just mind-boggling about companies where, when people work from home, they're using software to monitor that you're still moving your mouse and clicking on the keyboard. To me, that level of mistrust of employees is just deeply wrong. You don't want to work somewhere like that. They don't trust you, and that's a hostile environment.

Key Takeaways

1. Adopt a neutral point of view in data reporting to ensure balanced representation of all perspectives, enhancing credibility and user trust.

2. Leverage AI tools to identify gaps in data coverage and suggest improvements, ensuring human oversight to maintain accuracy and prevent hallucinations.

3. Encourage a culture of transparency and empathy in data teams to build trust, especially when addressing errors or delays in data delivery.

Links From The Show

Wikipedia

Transcript

Richie Cotton: Hi Jimmy, welcome to the show. 

Jimmy Wales: Ah, thanks for having me on. 

Richie Cotton: Yeah, great to have you here now. I'd like to start by going back to the early days of Wikipedia. Now, it wasn't immediately considered to be a really good idea. I know there were a lot of jokes around Wikipedia. So can you tell me like what was the harshest thing that was said about the site?

Jimmy Wales: Oh yeah, it's a very good question. I would say obviously Stephen Colbert quite famously made fun of us on his show, but that was actually, it was interesting because he played a character that was a right wing blowhard called Stephen Colbert, and he is Stephen Colbert. And so him making fun of you was not completely terrible. Because it was like, oh, he's playing a buffoon. And he talked about Wiki reality and if you don't like something in reality, just edit the Wiki and it changes, so it was a bit silly, but I think mostly it was more like people just weren't sure.

They're like, okay, I don't think that's gonna work, but okay, whatever, have a try.

Richie Cotton: yeah. I guess there are worse forms of publicity than having Stephen Colbert do a routine about your project. 

Jimmy Wales: Yeah. No, exactly. Although I would say the worst wasn't poking fun.

It was actually, there was an error in Wikipedia, and it was about John Seigenthaler Sr. And it said that he had been briefly suspected of having something to do with the Kennedy assassinations, which was absolutely untrue. He was actually a friend of the Kennedys. He was a pallbearer at Bobby Kennedy's funeral.

So he saw this and he called me up, and he's like, hey. I'm like, oh, I'm really sorry. And it was down within minutes after I got off the phone with him, and I checked the traffic logs and things like that, and almost no one had seen the page. So I was like, okay, that was terrible.

But then he wrote a scathing editorial in USA Today about us, which was a big deal, and really led to me promulgating a biographies of living persons policy, which says any negative statement in a Wikipedia entry about a living person should be removed immediately unless it has a good source.

I.e., what was happening at the time was people would see something and say, oh, that's weird, that doesn't have a source. And they would go on the talk page: oh, this doesn't have a source. And then people would discuss: oh, we're gonna get a source. Meanwhile it's there on the site and shouldn't be.

And that was when I would say we got pretty serious about human dignity, of saying if it's a biography of a living person, we have a higher level of responsibility. Screw something up about Julius Caesar? Okay. His family's feelings won't be hurt by it.

It's bad. We should fix it as soon as possible, but the duty of care is much higher when it's a living person. 

Richie Cotton: Absolutely. Yeah. And certainly if you've got bad things being said about you on a Wikipedia page, that's gotta be quite hurtful, I think.

So I love the idea of prioritizing: these are the really important things that we need to edit very quickly. Alright. Wikipedia is now one of the biggest sites in the world. Do you want to just gimme a summary of how you think you got this crazy success?

Jimmy Wales: I think there's a few things. So certainly in the early days we met a need that was not being met online. The idea of just very straightforward, factual information about everything under the sun is an obvious use of the internet, but it didn't exist yet. There were a few encyclopedia reference type of sites.

Some of them were behind paywalls, some of them were covered with ads, and they were small. But a lot of times when people go on the internet and they're searching for information, pretty much what they're looking for is a Wikipedia article. That's a thing that needed to be there.

So that meant as the small community got started writing things, they got traffic. It was interesting because it was what people were looking for. And so that was a big part of it. And then, over time, I think there are some of the early principles of Wikipedia which have been incredibly important to the ongoing success and trust.

As my book is about trust, I should talk about trust, including the idea of a neutral point of view: that Wikipedia should be neutral and shouldn't take sides on any major disputes. And that's obviously always something that's contentious, and lots to grapple with there. But I think that's been really important and something that we never should forget, and we should double down on as much as possible, because it means that anyone from any side of the political spectrum, or whatever belief system you might have, should be able to come to Wikipedia and get a clear explanation of the state of the debate about anything.

And so that's really important. And then rule three in the book is purpose. So I think one of the things that helped make Wikipedia successful is it has a really simple, clear purpose: we're gonna write an encyclopedia. It should be high quality, it should be neutral, it should be in all the languages of the world.

Everyone can understand that purpose. And so it really helped people come together, because we know what we're doing: we're here, we're gonna start working together, we know what we're trying to accomplish. If it were a little more mushy and vague, like, oh, we're gonna write a bunch of stuff, people wouldn't know what to do.

Oh, is it a joke site? Should we make money things? Should we make serious things? Should we have funny cat videos? No, we probably shouldn't. That's not really what we're here for.

Richie Cotton: Actually, I love the idea of just saying we're writing an encyclopedia, because it just really distills things down into a simple sentence.

If you're like, okay, we're gonna write a thousand articles on these particular topics, it gets overly specific and weird.

Jimmy Wales: Yeah. And in fact, actually one of the things is like the idea of a free encyclopedia for every single person on the planet in their own language. That's a massive concept, right?

That's a huge project. But actually I think we got more participation early on because that huge project is quite inspiring. If I had said, oh, I'm gonna write geographical history of the United States, I'm gonna have one article each, for each of the 50 states, people would've gone, eh, okay, have fun.

That's not that exciting. Whatever, yeah, sounds like a reasonable thing to do. Go for it. Whereas this was like, oh wow. Oh wow, I can actually take part in something that is super big and super interesting. 

Richie Cotton: That's a, an important lesson. If you give people like a big dream, then maybe that's the way to make things happen rather than give people realistic goals.

And they're like, okay, fine. Whatever. I'll hit those. Your book's all about trust, and it's such a hot topic at the moment, but one thing surprised me right at the start of chapter one: you've got this picture of a triangle. I'm like, oh, this is Aristotle's rhetoric triangle. I was like, oh, a two-and-a-half-thousand-year-old idea.

Talk me through how is that related to trust? 

Jimmy Wales: So the triangle, we actually use Frances Frei's formulation. She's an academic, a professor at Harvard, who also has a lot of business experience. So she went into Uber when they were experiencing a real crisis of trust and helped manage them through that.

In the interim period between Travis leaving and Dara coming in. And at that moment, internally, they had all kinds of trust problems. Talking to her was one of the most fun interviews I did for the book; she's a great person. So she said, one of the things people say about trust is that once you've lost it, you can never get it back.

And she said, that's just not true. There are things you can do to rebuild and get trust back. And so her triangle framework is empathy, logic, and authenticity. Empathy, authenticity, and logic. Empathy, authenticity, and... not logic. Yeah, I would've gotten there eventually. Read the book, people. I have, I promise you, many times.

And these three pillars, she says: whenever you have a crisis of trust or a problem with trust, at least one of these has gone wrong. And that provided me a great framework to start thinking. And that framework is her framework, but it's not unique.

There are actually a lot of academics working in trust who talk about a three-sided triangle. Those are just the terms she uses. Sometimes they use other terms. I thought that was helpful because I was writing about trust and thinking about trust and the various things I've learned, but I didn't have a systematic way of thinking about it.

And so bringing it together with that really helped me to go, okay, right now we've got a framework for thinking about this and talking about it. 

Richie Cotton: Okay. Yeah, that seems very useful. And you mentioned Uber. What was it, maybe around 2016, 2017, they were having so many different problems, with everything from people getting attacked in the cars to all sorts of horrible stuff.

And it seems like they have managed to fix a lot of it.

Jimmy Wales: Internally, they had a lot of trust problems. So the executives didn't trust each other, and the drivers didn't trust the company, and riders, passengers, were losing trust, and regulators definitely didn't trust Uber.

And they had to really start over. I remember this amazing story about them tracking a journalist riding in an Uber. That was a really bad idea, because you're not building trust with the media if you're behaving badly like that. And it's interesting to say, okay, yeah, you go from that point to being a reasonably trusted company.

Obviously everybody's always got questions about any big company, of course. But it really required them to take a step back and have a real think about trust.

Richie Cotton: Yeah. I love that. Step one: stop doing really stupid stuff, like tracking people in the cars.

Okay. So I guess the other side of things is, Wikipedia is, I guess, kind of a social media, but a lot of social media is horrible and angry, whereas Wikipedia seems like a mostly positive community. Can you talk me through how you built the community side of things?

Jimmy Wales: Yeah, definitely. So one thing that I would say is Wikipedia is not social media, but it is media and it is social. So it's in that category a little bit. One of the big differences is that purpose. So one of the things that makes social media so toxic is, whenever there's a box that basically says, what's on your mind?

Some percentage of people have something really horrible on their minds, right? And this is true actually of a lot of different formats, so it doesn't have to be a box saying that; it can be a place for uploading a picture, it can be video, YouTube, whatever. But that openness to creativity brings with it a lot of problems.

That doesn't mean we shouldn't have that, I'm not saying that, but the challenge is really hard there. And one of the things that made the challenge a lot easier for us is that purpose: we're here to build an encyclopedia. And one of the oldest rules, and this is in the book, the chapter on civility, "your mother was right", is no personal attacks.

The idea is we're working together to write an encyclopedia, and we shouldn't be attacking each other and insulting each other and behaving badly. We should have collegial debates. We may disagree, but let's treat each other with respect and kindness, 'cause it's really practical. It's the only way to get things done.

And in social media you don't see that. Particularly, I would say Twitter, sorry, X, is probably the worst, just in terms of, it's a pretty horrible place with a lot of people viciously attacking each other for no actually useful purpose whatsoever.

And because we have the purpose, then basically, if you come in and say, you're all a bunch of Nazis, okay, hopefully you don't get blocked immediately. Hopefully someone will go, hey, by the way, don't be a jerk. That's not what we do here, that's not really appropriate. But then, if you keep it up, then yeah, you're just gonna get blocked.

It's like, that's not fun and it's not what we're here for. We're a friendly project to build an encyclopedia. So that's a piece of it. And then obviously there was a lot of detail along the way, a lot of things to learn. One of the things that I would say about Wikipedia is that it is in many ways a child of the dot-com crash.

And at a moment when we were growing really quickly, there was no money and there was no possibility of raising money. Had we had money to throw at the problem, we probably would've gone down a different path that wouldn't have scaled; we just wouldn't have known what we were doing. It's quite natural, if you start a website and you see some social problems, to think, oh, we need to hire some moderators.

We need to hire community managers, and it's gonna be top down, and we're gonna make rules and we're gonna enforce them, and the company will be involved. This is exactly how all social media works: they've got moderators who enforce whatever rules. Instead, there's nobody to hire 'cause we've got no money.

So we have to work out, how do we do this? How do we manage this in a community? So we have the admins, who are elected by the community. And then, okay, but you can't let the admins become tyrants, so what are the checks and balances? So you can lose your adminship if you're not doing it the right way.

All your actions have to be transparent and you have to justify them. We have the arbitration committees in the larger languages, who are elected to be a higher decision-making body. And all of these elements had to evolve over time. So we had to evolve the institutions of a community, in the same way that, if you think about what makes a good municipal government, right?

You don't wanna get tossed in jail for criticizing the mayor. But you also don't want people accosting your grandma walking down the street. And so you need that sort of balance, and you need reason and thoughtfulness, and you want the police to be subject to checks and balances, right?

Yep. They're there to enforce the law, but they're not there to crack people in the head for no reason. And so all of those kinds of ideas and principles, which we fully get in general society, also make sense online. You can't copy everything word for word and line for line from the municipal code of conduct.

But there's a lot of lessons to be learned, and that's what we had to learn. So we just had to make it up as we went along and think about it in a practical kind of way, to say, okay, here's a problem, how do we deal with that? It's fascinating.

Richie Cotton: It sounds like you've invented a whole system of government within the site: the police and all the layers of adjudication and whatever. So I guess the other thing related to this is that it can be very difficult to know what is true online. And this is certainly the case if you're trying to write an encyclopedia; you need to know, is this correct or not?

Jimmy Wales: Yeah. So for this, we're pretty old fashioned about sources. We want the sources to be quality magazines, newspapers, academic journals, books. So when we think about the sourcing for articles, it's, okay, we're writing about cancer: which do we prefer, the National Enquirer or the New England Journal of Medicine? I think we'll go with the New England Journal of Medicine. So it's that kind of pretty basic, pretty simple stuff. And even that leaves a lot of room for discussion and thought and chewing on what really counts as a reliable source.

One of the problems we have these days online is things circulate on social media and they get promoted by algorithms, not because they're true, but because they're controversial, or they're viral, or they're funny, or whatever they are. And so it's very common on social media to see some infographic that turns out to be complete nonsense.

None of it actually makes any sense, or it's really cherry-picked and so forth. Wikipedia is pretty immune to that kind of stuff, 'cause we basically spend our lives debating the quality of sources and deciding, okay, what can we trust? How do we do it? The other thing is what I call, actually I don't call it this, but I like this phrase, I just heard it; Cory Doctorow said it the other day in a podcast or, I don't know where he said it.

Somebody told me about it. Epistemic humility. It's not epistemic nihilism, of we know nothing, we have no idea, and there's no such thing as truth. No, it's not that. But it's saying, actually, truth is hard and getting it right is hard. And we need to be very thoughtful and accept that we might get it wrong.

And we need to always be ready to chew on ideas, chew on criticisms of Wikipedia. Okay, we got a criticism of Wikipedia; we shouldn't become defensive and use PR strategies to combat it. We should go, okay, hold on, let's have a look. Tell me more. What could we fix? What are some sources that we should look at?

And obviously we are better or worse at that at different times. 'cause we're human beings and things happen and the world's a noisy place. But that's the spirit that we should always have is to say oh, how do we make this better? 

Richie Cotton: I love that. And it feels a good life lesson beyond editing an online encyclopedia.

It's, yeah, accepting that you can be wrong sometimes and trying to be better.

Jimmy Wales: And I think one of the core principles of Wikipedia is always assume good faith. And this is not a suicide pact; it doesn't mean you assume good faith forever and infinitely, no matter what anybody does.

But it's actually a good bet. It's a good starting point. You meet someone new. We do it every day in real life. You meet somebody new and you don't think, yeah, probably they're gonna stab me. If you do think that, you need some psychotherapy; you need some help.

So something's gone bad in your life and you're hurting and then that's unfortunate and let's see what we can do. But normally you meet somebody and you're like, oh oh, here's a new person. What are they gonna be like, maybe I'm not interested, maybe I am interested.

Maybe they're annoying. Probably they're not. And it's the same thing online, that kind of assume good faith, when we see people come new to Wikipedia, and this is the main application. So somebody comes to Wikipedia and they're doing something and it's not quite right. Okay, you can either assume this person is a plant, an operative, they're working for the KGB or the CIA or whatever crazy thing, or it's, oh, probably this person is trying to be helpful, they're trying to make Wikipedia better, but they just don't quite know what they're doing yet. Maybe we can help them, maybe we can give them a pointer and say, oh, actually, when you wrote this sentence, you used a lot of words that are pretty conclusively biased in a certain way; we would write it a lot more reserved.

We also need sources, like blah, blah, blah, and I'm teaching you to be a Wikipedian. And that's great, because probably, not always, but probably, the person's gonna go, oh, okay.

That's great, actually, that makes sense. Yeah, sure. Great. Let's do it that way. 

Richie Cotton: Okay. Yeah. So I love the idea like, just spend the time to teach people stuff before you start having a fight. Yeah. 

Jimmy Wales: No, exactly. And a lot of spaces online aren't really conducive to that. There are great places that are very conducive to that.

One of the things that I write about in the book is there's a subreddit called Change My View. And basically what happens there is people post whatever their view is on something and they say, change my view. And then what you're supposed to do is, whether you agree or not with that view, you're supposed to put forward the best argument you can to try to change their mind.

So even if you agree with them, you can say things like, I do agree with you, but here's the thing that I think is the most challenging for our shared view. And then at the end people will say, oh, you changed my mind, or, you didn't change my mind.

Or often it's, oh yeah, I changed my mind a bit on this part of it, and I still hold the general view. And it's great. And the people there, they've got a real sense of civility. It's not perfect, whatever, it's human beings, so people get angry and stuff like that.

But generally it's quite good. And I think that is great. And we need more spaces like that online. 

Richie Cotton: As a Brit, I find all these things are like, oh, this is how you have polite conversation. I'm very much in favor of this. 

Jimmy Wales: Yeah. No, I know. I think that's true everywhere. Certainly, Americans are very nice people, usually. Not as nice as Canadians, but yeah, I try to avoid stereotypes; I only keep the nice ones. Yeah, absolutely. Some Canadians out there: how dare you, we're not that nice. Okay, okay, most of the Canadians are very nice. Just not you. But okay.

Richie Cotton: Okay. You mentioned having civil discussion. So I think maybe the most vicious parts of Wikipedia are things like the talk pages, where you have people arguing about things, and then also you have edit wars, where people will change things and then they change them back.

When people fundamentally can't agree on things, how do you resolve that?

Jimmy Wales: Yeah, it's quite hard. One of the fundamental techniques is to go meta, to step up one level and say, look, we're not gonna agree about this fundamental issue, but we can characterize the debate; we can be fair about that.

We can describe the issue. So my classic example is, think about a kind and thoughtful Catholic priest and a kind and thoughtful Planned Parenthood activist. They're never gonna agree about abortion. They could fight about it until the end of time, but they can step back and say, okay, we're just gonna write about the controversy, and we're gonna explain what the different views are.

And it turns out people are quite good at that; quite often people can do that. Another one that I think is really important, and that lately has been one of my campaigns, is to avoid what we call wiki voice. In other words, don't say things in the voice of Wikipedia unless you've got pretty much overwhelming consensus in the community.

If any significant minority in the community disagrees, you should probably step back a bit and start to characterize the debate. Don't say it in the voice of the community. So there's lots of things you can say. It's quite easy to say Paris is the capital of France. That's fine. That's not in dispute.

Literally, I don't think there's a single person anywhere who could put forward any plausible disagreement with that. So you can just say it; you might need a footnote, but it's fine. That's not controversial. But loads of statements are things that you may believe very strongly, and other people in the community may not.

And I'm not even talking here about whether we disagree about the fundamentals. So sometimes maybe you and I agree about the fundamental thing, but I may think, wow, we shouldn't say this in the Wikipedia voice, we should talk about the different sides. So even though we agree, we should acknowledge that there's another side, and you may say, no, the other side's crazy.

In that case, I would say, look, if we can't agree, then it shouldn't be in Wikipedia's voice; it should just be describing the issue. So that's one of the techniques. Because that element of neutrality, I always say, serves two purposes. One is the epistemic or the cognitive purpose, the knowledge purpose.

I don't wanna go to an encyclopedia and only hear one side of the story; I wanna hear about the debate. But there's also the social purpose: how can people who disagree actually work together? This is a great technique, to say, okay, we're not gonna agree, but we can write about what we disagree about, and we can agree about the debate and how to characterize it.

Richie Cotton: That's interesting. So you've mentioned neutrality a few times and it seems incredibly important for an encyclopedia to have a neutral point of view. Do you think the neutral point of view is always a good idea? Or when would you want to be opinionated when you're putting things across? 

Jimmy Wales: I think the encyclopedia should be neutral.

Full stop. And that's top to bottom; that's the end of it. Other aspects of life? Absolutely not. Certainly, Tom Friedman, who I interview in the book, he's a columnist for the New York Times, and he's got opinions. His columns are about his opinions. Great. He should be opinionated.

He's actually an interesting analyst, and sometimes I disagree with him, but I always have to grapple with what he says, even if I disagree, because he's a smart guy, he knows a lot of people, he's talked to a lot of people, and he's lived through a lot of things. And certainly, in my life, I would say the Wikimedia Foundation, and me personally, as a person who's an activist for an open internet, a free software and open source advocate, an advocate for access to knowledge, yeah, I've got a lot of things to say, and they're quite opinionated. Basically, you shouldn't be censoring Wikipedia. You shouldn't block Wikipedia because you don't agree with what it says.

I've got an opinion there, and I'm ready to fight for it. So yeah, the idea of neutrality is something that even in those kinds of contexts is actually, I think, helpful to people. If you are an activist for something, like you really think, I wanna change the world in this way, let's say the environment, right?

It still behooves you, when you are learning about some particular sub-issue within the environmental cause, to take that sort of Wikipedian mindset and say, okay, the first thing I need to do is understand. I need to understand what are the debates, what's the argument here. And if I don't do that, I'm gonna be a less effective advocate, because I'm gonna be missing some of the facts.

I'm gonna get some things wrong, and therefore I won't be able to convince the people who disagree with me, 'cause they're gonna be like, you don't know what you're talking about. Yeah. I think it comes up in a lot of different places, where reason and thoughtful chewing on ideas is probably our most important human responsibility.

Richie Cotton: Absolutely. So yeah, I love the idea that if you've got a good opinion, then go for it, but in the context of an encyclopedia, stay neutral. You mentioned human responsibility, and there's been a lot of talk about community with respect to humans. Obviously we've reached a point with AI where some people say we're not far off having AI colleagues.

I know Wikipedia has long had a lot of automation; there have been bots for many years to automate stuff. How have you seen this progress over the last few years?

Jimmy Wales: Yeah, so it's really interesting. So with our bots, there's two things. One, most of the bots aren't even remotely AI related.

They're doing things like, if you see an article go from 20,000 bytes to four bytes, probably that's vandalism. So a bot might revert that instantly and notify a human, like, ooh, something weird's happened here. That's not AI; that's just basic automation of some things.
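To make the distinction concrete, a size-collapse rule of the kind described here can be sketched in a few lines. This is a hypothetical illustration, not actual Wikipedia bot code; the function name and thresholds are invented for the example:

```python
# Hypothetical sketch of a non-AI anti-vandalism heuristic: a sizable
# article suddenly shrinking to almost nothing is a strong blanking signal.
def looks_like_blanking(old_bytes: int, new_bytes: int,
                        min_old: int = 1_000, drop_ratio: float = 0.05) -> bool:
    """Flag edits that shrink a substantial page to a tiny fraction of itself."""
    if old_bytes < min_old:
        # Small pages legitimately shrink all the time; don't flag them.
        return False
    return new_bytes <= old_bytes * drop_ratio

# The 20,000-byte -> 4-byte edit from the example trips the rule,
# while an ordinary trim does not.
print(looks_like_blanking(20_000, 4))       # True
print(looks_like_blanking(20_000, 18_500))  # False
```

A real bot would pair a rule like this with an automatic revert and a notification to a human, the same human-in-the-loop pattern discussed throughout the conversation.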

We also have some machine learning tools and algorithms which are somewhat used in the community. So there's the ORES scoring system, which you can turn on to filter recent changes; I sometimes play with it. One of my favorites is edits that are probably bad but in good faith, from new users, 'cause I'm always interested in that. Can it accurately detect that? This is pre-large language models.

So it's just using some statistical things. And actually, yeah, I find it's not perfect, but sometimes you look at it and you're like, yeah, to me as a human, that looks like a good-faith edit that's not very good, by a new user. So then how do I respond to that? Okay, that's a whole category.

Oh I should probably go talk to them and this and that. And identifying spam, that can be automated pretty well. It's like spam is spam and it's not that hard to detect. In terms of what we're looking at now, I think there is a huge opportunity for AI based tools to support the community.

But I think it's really important, like we're really quite adamant about human in the loop. Part of that is just simple practicality: large language models, they're amazing, and I love playing with them, but they're not good enough to write an encyclopedia article. The way the technology works, and I understand your audience is a lot of machine learning and AI people,

so for them, sorry if I'm boring, and hopefully I don't get it too far wrong, but predicting the next token based on probabilistically guessing the most plausible next thing, it's amazing that it works at all, and it's amazing that we're getting such great results, but it does lead to hallucinations, and the hallucinations are just wrong.

And in fact they are worse in my experience, and I think this is uncontroversial to say: they're much worse on more obscure topics. So the further you get from very simple claims into more obscure things. I saw an example, which I haven't tested much, but if you ask, who is Tom Cruise's mother?

Most of the models get it right. If you ask about that person, I can't remember her name, who is her son? No idea. Or they make something up. And why is that? It's because they've read it one way lots of times, they've read about Tom Cruise, but they've never read it the other way.

And for a human and human reasoning, if I use this anecdote I'm gonna have to remember Tom Cruise's mother's name. But I know if I know who your mother is, I know who her son is. That's tautological. That's very simple.

But the models don't work that way. They're not really understanding, they're just predicting tokens. And so as a result, on less common topics they hallucinate more, which means for writing the encyclopedia, they're not good. But there's stuff they are really good at. One of the things I've played around with, and I hope to get back to it after the book tour and all of this stuff that I'm doing right now: I wrote a little script where I can feed it a Wikipedia entry, a short entry, 'cause it's not really good at doing a long one.

A short Wikipedia entry and the sources, maybe there's five sources. And I ask it, is there anything in these sources that should be in Wikipedia but isn't? Or is there anything in Wikipedia that's not supported by the sources? Make some suggestions. And you know what? It's pretty good. It's not perfect, but it's pretty good.

And I haven't even done any prompt engineering or playing around to try to improve it, but I feel like it's probably good enough to give some pointers. And so now you can imagine a more sophisticated system, this is just me banging out a script and I'm a very bad programmer, so whatever, but a more sophisticated system that uses multiple models to ask the same thing.

Let's cross-check it. Let's actually make sure that it's not hallucinating the sources. Let's do this and that, and let's tell it: don't make a suggestion unless you're really sure. That's actually one of the harder things to do. Then what if you could do that and run it massively over all of Wikipedia and leave little notes on the talk page saying, hi, I am your friendly neighborhood robot.

And I noticed that three of the sources mention this person's birthplace, but Wikipedia doesn't have their birthplace. And usually we do. What do you think? And then a human can go, oh. Because one of the things we know is a lot of Wikipedians, they come to Wikipedia and they're just like, I'm a Wikipedian.

What am I gonna do today? I'm gonna do something good on Wikipedia. And they're always looking for something to do, and they sometimes just click random article. People enjoy doing that: click random article and see if you can improve it. This would be a little bit better than random.

It'd be like, oh, here's a list of things where the AI thinks there's an easy fix. It'll take a few seconds for a human to approve. Have a go at it. I think people will enjoy it. If they don't, they won't use it. So that's what's great is let's just make some things that the community might find useful and see if they do.
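The script Jimmy describes can be sketched in a few lines. This is a hypothetical reconstruction, not his actual code, and the `ask_llm` parameter is a stand-in for whichever model client you'd use, not a real API:

```python
# Sketch of the source-checking idea: give a model a short article plus its
# cited sources and ask for discrepancies in both directions. `ask_llm` is a
# hypothetical callable (prompt -> reply); plug in any chat-completion client.

def build_prompt(article: str, sources: list[str]) -> str:
    numbered = "\n\n".join(f"Source {i + 1}:\n{s}" for i, s in enumerate(sources))
    return (
        "Here is a short encyclopedia article and its cited sources.\n\n"
        f"Article:\n{article}\n\n{numbered}\n\n"
        "1. Is anything in the sources missing from the article?\n"
        "2. Is anything in the article unsupported by the sources?\n"
        "Only make a suggestion if you are confident; otherwise say 'nothing found'."
    )

def check_article(article: str, sources: list[str], ask_llm) -> str:
    return ask_llm(build_prompt(article, sources))

# Demo with a fake model, just to show the flow:
fake_llm = lambda prompt: "Source 1 gives a birthplace not in the article."
print(check_article("Jane Doe is a botanist.",
                    ["Jane Doe, born in Leeds, is a botanist."], fake_llm))
```

The final instruction in the prompt reflects the hard part Jimmy mentions: getting the model to stay quiet unless it's sure.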

Richie Cotton: So I love that idea of having AI systems for inspiring humans on what to edit. And I guess some of our DataFramed listeners can probably help out building that sort of thing. So yeah, a challenge for the audience: go to Wikipedia and build some tools.

Jimmy Wales: Oh yeah, that'd be lovely. You can come to my user talk page and I'll chat with you about it, 'cause I'm very interested.

Richie Cotton: Wonderful. Okay. Challenge for the audience. But you mentioned you don't think a completely AI-generated encyclopedia is a good idea. In that case, I gotta ask, what's your take on Grokipedia?

Jimmy Wales: I haven't had time to do a proper analysis, so I'm always a little reserved in what I say, but that's 'cause I'm a Wikipedian and that's the way we are. But if you ask me to predict what it would be like, and then you put that up against all the news reports I've read,

I'm like, yeah, that's pretty much what I would've thought. So a lot of errors, a lot of hallucinations. Certainly, although Elon loves to promise that it'll be more neutral than Wikipedia, it seems to interestingly agree with some of his more curious political views. And that's not most people's definition of neutrality.

Like agreeing with Elon isn't the core of neutrality. Unless you're Elon, yeah. And it doesn't surprise me. I think it's an interesting concept, right? Could you use a large language model to generate quality content? And obviously we're all gonna be grappling with that over the next few years in lots of different ways.

So already we see in journalism there are bad uses and good uses taking place. I'll give two examples, but one I actually like is the earnings report story. I don't know to what extent it's large language models, but I know that a lot of the financial publications are using technology to semi-automate making those stories.

And that's fine, because basically those are very formulaic stories anyway. It's, oh, analyst expectations were $2.10 earnings per share, but it actually ended up at $2.12, beating expectations, and the stock responded thusly. It's pretty basic stuff.

And for humans to have to do that is really boring, so yeah, great, that's completely fine. I hope humans are checking it, 'cause it might hallucinate and get it all wrong. But another example that I would argue could be useful is a certain type of sports story, the less interesting ones.

In the sports section of the paper, you've gotta have both some pretty basic reporting, like here's what happened in the game last night, here's the score, here's the step by step, and that's actually pretty tedious, and then the more interesting thing: the human flair, the angle, the funny thing that happened, the interesting bit, like how did the players react?

What do the fans say? For that you still need proper journalism, but maybe pieces of it could be helped. And as we see that, we know the financial model of journalism is under pressure, has been for a long time. There will be publications who want to stretch it a little too far: oh, if only we didn't have to pay all these reporters, we could churn out twice as much content. And that's not gonna end well.

I think we're all gonna be grappling with that sort of thing. And then we can also talk about bad actors who actually don't care at all about quality or correctness, and yeah, I could imagine them churning out lots of stuff that isn't very good.

Richie Cotton: Yeah, it just seems like there's a sort of economic trade-off here.

So you've got cheap AI, and I guess a lot of AI is cheaper than humans for this kind of stuff. And then you've got different qualities of models, so the really cheap models get it wrong a lot more of the time, but then again you're saving money on all your tokens. And fact checking is just expensive in general.

So yeah, I guess then you pay a premium for truth.

Jimmy Wales: Yeah. No, I think that's inevitable, because it's an amazing technology and I use it all the time, playing with it, experimenting, using it for personal use. I think the key, as with everything, and as I say this it echoes things that I remember saying about Wikipedia when it was a brand new technology, is that it's not about do we use it or do we not use it.

It's about let's teach people how to use it. What are the good and the bad uses? Sometimes you used to hear teachers say to their students, don't even look at Wikipedia. And I'm like, they're a hundred percent gonna look at Wikipedia. Kids love it. Instead you should be saying, oh, here's Wikipedia.

It's super interesting. It can be very valuable, but by the way, when it says the neutrality of this article has been disputed, they mean something by that. When it says the following section doesn't cite any sources, they mean something by that. If you read something that sounds surprising, it might be vandalism, so check the sources.

Let's teach people the right way to use Wikipedia. Similarly with AI. One of the things I love using it for is cooking. I love to cook, I fancy myself a decent cook, and I ask it for recipes, and I also ask it for links to famous cookery websites. And it has in the past absolutely hallucinated the URLs and apologized to me that it doesn't know which websites are real or not. Great.

Okay. But I always warn people, I'm like, yeah, it's fun and useful to do that, but I wouldn't recommend it if you're brand new to cooking. I know how to cook. So when it gives me a recipe that has way too much salt, I have a bit of an instinct to go, Ooh, that seems a bit much maybe you're getting that wrong.

Can I challenge you on that? But it's very useful. I also use it for programming. I'm a programmer, but not a very good programmer, and I can absolutely see why Stack Overflow, for example, has, as I've read, 10% of the traffic it had some years back. It's because as a programmer, a bad programmer, you hit a lot of error messages and you're like, oh, how do I fix this?

And you google it and you get to Stack Overflow and somebody's discussing it and you get the answer. Whereas now Gemini, Claude, ChatGPT, they're all quite good at coding, and they're quite good at explaining a basic syntax error or even a more subtle error. It's just more efficient. As an end user, this is a better product for me.

Like I'm trying to write something and I'm getting it wrong, I need some help, and I wanna ask a question. It's decent. Although it is far too prone, in my opinion, to tell me the bug is in JavaScript. I'm like, no, probably it's my bug. Probably it's not a bug in JavaScript. That's probably not it.

Like, I don't know why you're telling me this. It's also telling me I'm a genius, so

Richie Cotton: it's nice to be heard that you're a genius, but yeah. You've probably made a mistake yourself. 

Jimmy Wales: Yeah. No, it's often, oh, that's a very insightful question. And I'm like, literally, I have enough self-respect to know that it was not. I asked a really dumb question and I'm okay with that.

You don't have to butter me up. Like it's fine. Absolutely. 

Richie Cotton: So actually, you mentioned that Stack Overflow traffic has decreased 'cause people use all these AI tools. So how has Wikipedia been affected by this? Because I guess people want to look up facts about Tom Cruise's mom, but do they go to ChatGPT instead of Wikipedia?

Jimmy Wales: Yeah, so there's a couple of things. I would say not necessarily ChatGPT; it's a different use case and a different paradigm, and it doesn't directly, well, it may compete, everything can compete with everything in some sense. I'd say Google AI summaries is probably the more impactful, potentially. So there was a Pew Research study not long ago that looked at, according to their study, and I can't really vouch for it completely, but it seems plausible, search results. On the first page, the classic ten blue links, Wikipedia is cited

3% of the time. That sounded low to me, but it depends on what you're searching for. In AI summaries, we're linked to 6% of the time, so we're getting twice as many links to Wikipedia, but people click through half as often. So why do you not click through? Because it answers your question. So you say, oh, who is Tom Cruise's mom?

And then it answers, and then there's a link to Wikipedia, but you don't need to click it 'cause you got the answer you need. On the other hand, according to Google, when people do follow those links, they're much more engaged. So they didn't just click into Wikipedia to find out the name of the mom and then bounce.

They're actually like, oh, that's his mom's name, great. Also, I was thinking about this, so I'm gonna go read about Tom Cruise. So they stay longer. We haven't verified that, but that seems plausible to me. So as a result, that's a wash: getting twice as many links but clicked through half as often is a wash.
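That "wash" is simple arithmetic: double the link rate times half the click-through rate leaves expected clicks unchanged. Only the 3% and 6% link rates come from the conversation; the 10% baseline click-through rate below is an invented placeholder:

```python
import math

# Link rates roughly as described: ~3% of classic results pages,
# ~6% of AI summaries, with half the click-through in the AI case.
classic_link_rate = 0.03
ai_link_rate = 0.06
ctr_classic = 0.10          # assumed baseline click-through rate (illustrative)
ctr_ai = ctr_classic / 2    # people click through half as often

clicks_classic = classic_link_rate * ctr_classic  # expected clicks per search
clicks_ai = ai_link_rate * ctr_ai

print(math.isclose(clicks_classic, clicks_ai))  # True: the two effects cancel
```

The cancellation holds whatever the baseline click-through rate is, since doubling one factor and halving the other always nets out.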

But we have seen, and we just had a blog post about this recently, an 8% drop year over year in human visits to Wikipedia. That doesn't mean our traffic has dropped. We try to analyze bots versus non-bots, and that's imperfect of course, but we have some idea. And so that's concerning, but not a disaster at the moment.

And then I think a part of that to understand is, for Wikipedia, we are a charity and we don't have ads. Our business model is: as long as the information you got is from Wikipedia, then when you do see that banner saying, hey, you use Wikipedia all the time, you should probably chip in, you'll respond.

And that will keep the lights on. That keeps us running. And so we're not so obsessed with page views. If we were funded by ads, obviously an 8% drop in page views is an 8% drop in revenue, but that doesn't really apply to us. What we really need is people's trust. We need people to love Wikipedia.

We need people to say, yeah, Wikipedia, that's one of the good things, we should chip in. And so that's what we have to defend, not so much the page views. Obviously you care about page views to a degree and all of that, but it's not our primary motivator.

Richie Cotton: Okay. That's reassuring. But you mentioned there's an increase in bot traffic.

I presume, is this all the foundation model companies scraping your site then? I know Wikipedia's text, like all of it, is in most of the large language models these days.

Jimmy Wales: Yeah. So all of the major models use Wikipedia quite heavily. And that's fine. That is what it is.

Like we're freely licensed and we're happy to be part of the infrastructure of the world. But one of the things we've come to understand, I've come to understand recently, other people got this long before I did, is that bot traffic versus human traffic is very different in terms of what it costs us.

Why is that? How could that possibly be? And the answer is, humans tend to visit the most popular pages, particularly if something's big in the news, where we'll get millions of views. So when the Queen died, obviously millions of people came to read about the Queen. What does that mean? That means we build the page out of the database and we have it cached in memory and we just blast it out.

Costs very little. Whereas if a bot comes and reads every obscure page of Wikipedia, we have to have more servers, more RAM for caching, more database calls to construct pages, and all that. It actually costs us a disproportionate amount. And there's something that feels awkwardly unfair about that.

It's not like it's wrecking us, but the people who are giving an average of, as I was told recently, $10.38 a year, that's the average donation to Wikipedia, the people chipping in their 10 bucks or 10 quid or 10 euros, whatever it is, they're thinking, great, I love Wikipedia.

I want it to survive. They're not thinking, I really wanna subsidize OpenAI crawling Wikipedia. That's not what they're there for. And so the fact that it costs us more, and it's being funded by ordinary people chipping in their money, doesn't seem right. So we have an enterprise product. Google is a good customer, so we're happy with Google, but we're still working on the other companies, and we're not yet blocking them or things like that.

We're quite easygoing and friendly, like, hey, come and do the right thing. So hopefully they will pay their way, and that seems fair. That's the fairness model of Wikipedia: you use it all the time, you should probably chip in. Your bot uses it all the time, you should probably chip in a lot.
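The cost asymmetry Jimmy describes, hot pages served from cache versus a crawler forcing a database build for every obscure page, is easy to see in a toy cache model. This is a sketch with made-up numbers, not Wikimedia's actual serving architecture:

```python
# Human reads cluster on a few hot pages that stay cached; a crawler touching
# every obscure page misses the cache and forces an expensive rebuild each time.

from functools import lru_cache

DB_BUILDS = 0

@lru_cache(maxsize=100)            # pretend the cache holds 100 rendered pages
def render_page(title: str) -> str:
    global DB_BUILDS
    DB_BUILDS += 1                 # expensive path: build page from the database
    return f"<html>{title}</html>"

# 100,000 human views of one popular page: one database build, then cache hits.
for _ in range(100_000):
    render_page("Queen Elizabeth II")
human_builds = DB_BUILDS

# A crawler reading 10,000 distinct obscure pages: every request is a miss.
DB_BUILDS = 0
render_page.cache_clear()
for i in range(10_000):
    render_page(f"Obscure topic {i}")

print(human_builds, DB_BUILDS)  # 1 vs 10000 database builds
```

Ten times fewer requests, ten thousand times more database work: that is the disproportionate cost in miniature.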

Richie Cotton: Yeah, I feel like Sam Altman can afford a $10.38 donation. A little proportionality would be good. Alright, it seems like negotiations with the foundation model companies are ongoing. I'd love to move a bit beyond Wikipedia actually, because it feels like a lot of the stuff we talked about with trust can be useful beyond just creating an encyclopedia.

So talk me through, how do you use these principles, or how can they be used, in day-to-day life and in your work?

Jimmy Wales: I think one of the things that's obvious to people is that in a business setting, trust is really important. It's a lubrication that makes everything a lot easier.

I'll tell you, here's an example. You've seen stories that to me are just mind-boggling, about companies where, when people work from home, they're using software to monitor that you're still moving your mouse and clicking on the keyboard. And I'm like, wow, that level of mistrust of employees is just deeply wrong.

And I think if you are an employee who works at a company like that, every time you're not jiggling your mouse to pretend to be at work, please be on LinkedIn and looking for a job. Like you don't wanna work somewhere like that. They don't trust you. And that's a hostile environment and it's just not good.

Whereas, if you can build a culture where you could say, to the staff like, yeah, work from home, I trust you. What really matters is we've gotta get the work done. We've got customers, we've got responsibilities here, we're gonna, do this. I'm not gonna watch every single thing you do, because that's like weird and annoying and wow, you know what?

You'll get much better work out of people because you've built a healthy environment and people are like, yeah, actually they treat me properly, like a decent human being, so I'm gonna treat them back like that. I'm gonna go the extra mile. We try to do the best for our customers, like we take pride in that.

All of that kind of good stuff really works in an environment of trust. And when you don't have trust, everything starts to break down. Certainly if you imagine, say, a business-to-business partnership, there's a lot that is not written into a contract that is part of the relationship and part of the implied agreement.

And that's good, right? Writing every single thing down and then ticking off to the letter, we are only gonna do what we're contracted to do, that's when the relationship's gone very wrong. That's pretty bad, and it means less success in business and so on. All of these things, this is why trust really matters.

And then, how do you build it? Some of it's obvious. We talk about transparency and make it personal and have a good purpose, so all of these rules are designed to be practical, to be things that you could check off. And one of our big recommendations in the book for companies is: take a trust inventory.

Like really sit down and go through the seven rules and go, okay, what are the things that we maybe could be doing better? Where are we failing? Like one of the issues a lot of companies have is, oh, we need to make this really hard change and it's really tough and maybe we're gonna have to let some people go and so forth, but nobody trusts us.

They think we're up to no good and so on and so forth, and okay, you're gonna have a big problem. But if it's, yeah, people will accept it, they're gonna be unhappy that they're losing their job, but the other people are gonna be like, yeah, it was fair, this decision was made in a way that was reasonable.

And obviously it's a shame, but we're gonna have to reconfigure the company. Wow, trust really matters a lot in those cases, that you aren't being malicious and capricious and all of that. And so you've gotta build it.

Richie Cotton: Absolutely. So it seems the big places to watch out for this are when things have gone horribly wrong or they're about to go wrong, and that's the point where you really need to think about how to make the process trustworthy.

Certainly if you're having to make people redundant, are you doing it 'cause the company finances are a mess? Or is it 'cause, oh, an executive needs to hit their bonus this year?

Jimmy Wales: Yeah, exactly. And that's employees and the management and the company.

But there's also like in a business to business situation where my company is supposed to deliver something for your company, but we're gonna miss the target. How are you gonna react to that? And if we have a relationship of trust, and I can go to you and say, yeah, unfortunately these things happen.

We're gonna be two weeks late. Are you gonna go, you suck, this is terrible, I'm outta here, I'm gonna find somebody else next time around? Or are you gonna go, oh, okay, I know you did your best, I can see the problem. Let's think about how we're gonna solve this so it doesn't happen again. Like you've got the trust of, oh, you're a proper partner.

Yeah, all of that, super important. Definitely better than just being like, okay, we're gonna miss it, let's not communicate with the customer for two weeks. And that's exactly where you get into a really big problem, right? Because you didn't follow a basic principle of transparency. One of the things is: be transparent, especially when you have something to hide.

That's one of the chapter headings. It's easy to be transparent when everything's going great. It's harder when it's like, ooh, actually you're gonna have to go hat in hand and say, yeah, something bad has happened and we need to tell you about it. That's harder. But it's actually the moment when trust is built.

It's the moment when people can see it. It's easy when everything's fine, but to come in and say it when you know you'd rather not have to say it, yeah, that's incredible. It's a great moment to build a better relationship going forward.

Richie Cotton: Definitely a bit of a test of character, but it seems very worthwhile. Alright, so I guess, are there any places where you think your rules about trust ought to be applied that aren't at the moment? Are there any areas where you think, okay, we really need to increase trust here?

Jimmy Wales: Yes, there are, but I'm hesitating just a little bit, but I'll tell you why.

So one of the areas where we have a big problem, and this is one of the side effects, we've been talking about the stuff I prefer to talk about, practical matters and business and personal relationships and all that. But we also know that politics is an absolute mess these days, and trust in journalism and trust in politicians are at all-time lows, and in many cases quite rightly.

And one of the issues there is, whenever people are voting and they think, you know what, I'm not gonna rank my vote in any way based on whether I trust the person, because they're all liars and they're all dishonest, so I'm gonna use something else, I'm just gonna go with my feelings about their claimed policies.

That's a problem. That gets us into a world of mess. We need to actually, at the ballot box, punish people who aren't trustworthy. Don't vote for people if you can't trust them. And the reason I hesitate is, well, hey, great, people might go, yeah, Jimmy, you're right, but that isn't a proactive, simple step to build trust.

It's just what we can do as consumers of politics. But it's an area where the breakdown of trust in society is a big problem. And I do think it makes sense to say, let's be a bit more demanding about trust. Let's demand from politicians the same thing we expect from a company like Amazon.

Okay, I love Amazon. Amazon comes all the time to my house. Do I have complaints, big abstract complaints, about Amazon? Yeah, of course. Everybody could have something, like how are they treating people, is it okay, all that. Fine.

Great. But you know what, when I order the thing on Amazon, it's gonna come. And if it doesn't come and I contact Amazon, they're gonna send me another one. You trust them in that sense. They just do the right business things. And I feel like in politics, we're not even sure if they're gonna do the right things.

In the US, there was just this longest-ever government shutdown because they couldn't make a simple agreement on the budget. And it's, okay, gang, you're not even doing the thing. It's the one thing you're there to do. And you're all squabbling so much and turning everything into a culture war and being so vicious, you're not actually governing the country in any sense. What about a little compromise?

A little give and take, a little trust. Trust each other a little bit to say, you know what, I'm gonna give you this one, but you're gonna owe me next time. Or I'll give you this, but you need to gimme that. Okay, we can find a path forward. No, it's just stonewalling and breakdown.

It's a lack of trust. It's not good.

Richie Cotton: Certainly, politics is very tricky in terms of trust. But I guess if Uber can turn it around, and you talk about ways to rebuild trust, maybe we can fix politics as well.

Jimmy Wales: There's a great quote in the book about a French leader, during the height of the Cold War, and I'm not naming names 'cause that's not important, who said of the US president: if the president says it, that's good enough for me. There was trust even though they didn't agree on lots of things politically. And I remember when it was Obama versus McCain. At the time, I wasn't thinking about trust and all that; I didn't have those words to use yet.

I didn't have those words to use yet. But I said to lots of my friends, I'm like, you know what? This is great. I'm really proud of the country. Like I have agreements and disagreements on policy with each of them. I know I'm gonna vote one way, but you know what, they're both proper people. They're both sensible.

They're not crazy radicals, they're not dishonest. They've got problems, whatever, it's okay. Whereas more recently I'm like, yeah, this isn't okay. This isn't just, oh, I have some disagreements on policy. This is not proper behavior. And I live now in the UK, and I'll say something about the last election, and I don't wanna bore people who aren't in the UK, but I will anyway.

In the last election the Tories lost, and I think they lost more than Labour won, because they had lost the trust of the country. Not about policy, not about this and that. It was about that party in Downing Street while we were all feeling for the Queen, sitting all alone the next day at her husband's funeral.

And we found out later. Meanwhile, the night before, the Queen is in isolation 'cause of COVID and they're having parties in Downing Street. This is not okay. You're not behaving like proper people. And I think that feeling was more important to voters. Maybe I'm wrong, maybe some political scientists will do surveys and prove me wrong, but I think I'm right.

It meant more to people than the specific details of policy like that. It was just like, I don't trust you anymore to be doing the right things. You're just having a laugh at our expense. It's not okay. 

Richie Cotton: Absolutely. Yeah. So I guess the lesson there is you've gotta behave as a trustworthy human if you want people to respect you and trust you.

Jimmy Wales: Yeah. When you go out driving during COVID, don't say, I had to go driving with my children in the back to test my eyesight. I don't know, that doesn't seem like a plausible answer to me. Sorry.

Richie Cotton: Maybe just, yeah, don't lie to the general public. Really good tip for building trust.

Alright, super. Just finally, I always want to give people someone new to learn from. Who are you following right now? Whose work are you interested in?

Jimmy Wales: I'll tell you who I'm interested in. I'm afraid to say I'm not sure I can follow his work directly, because he's too smart and all of that, but I really am a big fan and a friend of Demis Hassabis.

So Demis is the CEO of DeepMind, Google's big AI unit, and he just won the Nobel Prize in chemistry for the work on protein folding. But the stuff that DeepMind are doing goes beyond just the commercial large language models and all this stuff, which they're at the forefront of.

But I'm really fascinated by all of the amazing possibilities, the exciting positive things that are gonna come out of using AI techniques, using machine learning, to crack some really hard problems in biology and health. I think it's an amazing time to be alive, and it's an amazing time to maybe live a little longer and a little bit happier and a little bit healthier.

And so I think that stuff is really fantastic. And so it's the kind of thing where, although for my professional work I have to think a lot about large language models and stuff that's directly coming into my world, I'm keeping one eye on the other stuff. Driverless cars are another example.

Like I'm fascinated. I'll tell a funny little story. When my kids were little, I'd just gotten my brand new Volvo. This is several years ago now; it's now my not-so-brand-new Volvo. And the car beeps a lot. The kids call it a scaredy cat. But I said to them, yeah, actually, in a parking lot,

if I get too close to another car, it will slam on the brakes so I don't wreck into anything. And one of them said, but don't all cars do that? 'Cause they're little kids, they're like, obviously cars shouldn't wreck into stuff. Like, why would a car wreck into stuff?

It's got cameras. We know it's got cameras; a lot of cars have cameras. Why do they wreck into stuff? And it's, yeah, you're right. And so I envision, not that far away, not as quickly as Elon promised us, speaking of trust, but in a very near future, the idea of dying in a car accident, which is quite a common thing, or suffering permanent, life-changing injuries, is gonna seem really old.

It's gonna be like, wow, dying of scurvy or something, something that just doesn't happen anymore, 'cause we figured out how to solve it. And that's exciting to me. And so I do watch that as well. I'm really interested in the technology around automated vehicles and the possibilities for safety and better travel and transport.

Richie Cotton: I love that. I guess there's a common theme there in solving causes of death. That seems like a good thing. Yeah, let's do that.

Jimmy Wales: Let's have a lot more of that and a lot less yelling at each other on Twitter.

Richie Cotton: Alright, I think that's a wonderful way to wrap up then. Thank you so much for your time, Jimmy.

Jimmy Wales: Yeah, lovely. Brilliant. Thank you.
