
Harnessing AI to Help Humanity with Sandy Pentland, HAI Fellow at Stanford

Richie and Sandy explore the role of storytelling in data and AI, how technology reshapes our narratives, the impact of AI on decision-making, the importance of shared wisdom in communities, and much more.
Nov 10, 2025

Guest
Professor Alex 'Sandy' Pentland

Professor Alex “Sandy” Pentland is a leading computational scientist, co-founder of the MIT Media Lab and Media Lab Asia, and a HAI Fellow at Stanford. Recognized by Forbes as one of the world’s most powerful data scientists, he played a key role in shaping the GDPR through the World Economic Forum and contributed to the UN’s Sustainable Development Goals as one of the Secretary General’s “Data Revolutionaries.” His accolades include MIT’s Toshiba Chair, election to the U.S. National Academy of Engineering, the Harvard Business Review McKinsey Award, and the DARPA 40th Anniversary of the Internet Award. Pentland has served on advisory boards for organizations such as the UN Secretary General, UN Foundation, Consumers Union, and formerly for the OECD, Google, AT&T, and Nissan. Companies originating from his lab have driven major innovations, including India’s Aadhaar digital identity system, Alibaba’s news and advertising arm, and the world’s largest rural health service network.

His more recent ventures span mental health (Ginger.io), AI interaction management (Cogito), delivery optimization (Wise Systems), financial privacy (Akoya), and fairness in social services (Prosperia). A mentor to over 80 PhD students—many now leading in academia, research, or entrepreneurship—Pentland helped pioneer fields such as computational social science, wearable computing, and modern biometrics. His books include Social Physics, Honest Signals, Building the New Economy, and Trusted Data.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

Ideas and stories are valuable. Learning about things, hearing the other person's story. The relationships that work best across communities are ones that are essentially people trading, like in the sense of commerce, trading stories. I'll tell you about this thing; you tell me what's going on over there. Both of us come out better. We don't have to agree about anything else, but we're helping each other. If you ask people about folks where they have that sort of information-trading relationship, you find a great deal of trust and a great deal of value. You need to be a trader in stories that are relevant to what you're doing. And if you can do that, that's good for you, good for others, good for your community.

Everyone's very concerned about polarization. The people on the other side, they're crazy. We did a couple of big experiments, with 30,000 people, statistically balanced for all sorts of things. And it turns out that most of this polarization, people disagreeing with each other, is that they're just not familiar with what the other side actually thinks. There are so many loud voices, the people who may in fact be genuinely crazy, you know, that dominate and just saturate all of the airwaves and the social feeds, that you think about the other side as these loud voices, and you miss the fact that the vast majority of people don't believe that, even on the other side.

Key Takeaways

1

Promote the sharing of stories and experiences within and across communities to build shared wisdom, improving decision-making and reducing polarization.

2

Leverage AI to create dynamic, context-aware teams, potentially transforming traditional corporate structures and enabling more flexible work arrangements.

3

Implement AI-driven tools like deliberation.io to enable more productive discussions, reducing the influence of loud voices and fostering genuine communication.

Links From The Show

MIT Media Lab

Transcript

Richie Cotton (00:01.346)

Hi Sandy, welcome to the show.

Alex "Sandy" Pentland (00:03.823)

Pleasure to be here. Let's have some fun.

Richie Cotton (00:05.23)

Absolutely, I'm definitely looking forward to some fun. So I guess today we're gonna be talking a lot about storytelling. So to begin with, can you tell me, do you have like a favorite story about data or AI?

Alex "Sandy" Pentland (00:17.213)

Yeah, but first let me talk about stories, because this is about data. Why don't we talk about stories, right? So stories are like theories or facts, but modest. Because you look at the data and you come up with a story. And you maybe think it's a fact, but three months from now you might find out it's not a fact. So I like to use the word story, because it's an explanation of data, but you're not, like, 100% committed to it.

That's why we're talking about stories. We're just trying to remember that we're wrong sometimes, okay? And here's an example of a funny story. This is a Yann LeCun story. So here's a little puzzle for your LLM. A man and a sheep are on one side of a river, and there's a boat with room for a man and a sheep. How do they get to the other side of the river? This is such a stupid question.

Richie Cotton (01:10.679)

Okay, so...

Alex "Sandy" Pentland (01:14.099)

that nobody actually wrote it on the entire internet until December when he posted this. And so the LLMs couldn't answer it, because it wasn't in their database, basically. It's like, well, that's crazy. I mean, a five-year-old would know what to do with this, but not your brilliant LLM, right?

Richie Cotton (01:37.89)

Yeah, that's true. I mean, normally you have, like, a wolf in there, and you've got to work out who goes across one at a time. So it's kind of an anti-riddle, right?

Alex "Sandy" Pentland (01:42.655)

And that's what it was. Yeah, and that's what they did. They went crazy trying to think about all of that, but no, no, this question is stupid. It's obvious, and they couldn't get it.

Richie Cotton (01:47.98)

Ha ha ha ha!

Richie Cotton (01:55.073)

Okay, so I do like the idea that there's scope for human thinking in there, because yeah, often LLMs do get things wrong. All right, so I think one of the recurring themes in your book is all about how technology changes the way we tell stories. So I guess talk us through like how is AI impacting storytelling?

Alex "Sandy" Pentland (02:00.485)

Yeah.

Alex "Sandy" Pentland (02:11.677)

Yeah.

Alex "Sandy" Pentland (02:16.467)

Well, let's see, the book... I think a good way to introduce this is the question of, like, you know: we'd like to have good cultural change and all that. And the Enlightenment comes up all the time. It's like, in the 1600s, they had it right, right? Well, what did they do? Because there was already science and there were already all these things. But what they didn't have until the late 1500s was a postal system.

They couldn't send letters back and forth to debate things with each other, and nobody was keeping count. And the kings at that time made the postal system available to common people who were discussing questions of why is the world this way. And the names you know most, like Leibniz and so forth, would write one or two letters every single day their whole life to different scientists debating things.

And so the king set up a center, the Academy of Sciences (this is in France) to keep track of this, and the people who wrote lots of letters and were really popular, you know, got an appointment by the king. And that was really cool back then, because now you had a good income and you got the hoi polloi and stuff like that. That caused the Enlightenment. Now, what can we do today? Well, we've got people, we've got, you know, electronic things.

Can AI help us connect to each other better, the way letters back then connected people? And I think it can, because you can ask, well, who in your company is doing the same stuff as I am? Let's get together, right? Or who knows about this, or things of that sort. Also, we've built some systems, so a quick

pitch here: go to deliberation.io, it's all open source, etc. Just use it. What it is, is a way of people actually communicating with each other, deciding things without all the loud voices and BS that you get all the time. Right? That's what it's designed to do. And, you know, it uses AI to help do that. So...

Alex "Sandy" Pentland (04:23.987)

Yeah, I think we can. What we shouldn't be doing is, you know, just abdicating and saying to the AI, what's the answer? Because that's like, you know, asking the guy down the street; he'll give you an answer. You may not like it. It may not be right.

you should really try and get into it more and talk to the guy down the street, but also think about it yourself and other people and stuff like that. So that's the basic idea of the book. Let's use AI to help ourselves be smarter.

Richie Cotton (04:58.062)

Okay, so it's really interesting that you're pointing to the postal system as, like, the big driver of the Enlightenment. I think a lot of people, when they think of technology enabling things, think of the printing press, because then you could disseminate information one way. But this is about debating stuff, sharing ideas, and having back-and-forth conversations.

Alex "Sandy" Pentland (05:16.189)

That's right. Yeah, yeah. People think about the printing press, but you know what the printing press was mostly used for? It was used for pamphlets. And so, like, Luther, right, with the 95 Theses, caused 30 years of war. So it was like all the political BS you see on social media today. These, like, unregulated, not really a debate, just people spouting off.

That was the printing press for the first 30, 40 years. Yeah, they did some Bibles on the side. You know, yeah, we've had social media before, and it wasn't good.

Richie Cotton (05:56.399)

That's fascinating. And yeah, so maybe like the first wave of technology is kind of the bad wave where it starts off with like, I guess, politicization and misinformation, things like that. That's the first wave of stuff. And then you get the useful stuff afterwards, perhaps. Okay.

Alex "Sandy" Pentland (06:11.591)

Yeah, yeah, it's people, right?

Richie Cotton (06:14.414)

Is it just people? Yeah, okay, that's the motivation. All right, so I want to talk about... you've got, like, a three-step process for how stories are used to make decisions. Can you talk me through those three steps?

Alex "Sandy" Pentland (06:26.875)

Yeah, so, you know, the way we think comes from evolution and hundreds of thousands of years of people who didn't do it right getting eaten by the lions. So we have a particular way of doing things. And the title of the book is Shared Wisdom, and that's sort of the key. What we do is we try to build a culture, a set of things that

our work buddies, right, all believe. And we do that by sharing stories: Jimmy did this and got eaten by the lion, right? Or Mary did that and didn't get sick and it was good, right? We tell these things, and we end up with an understanding of the world through our stories. The stories have some facts behind them and some outcomes and things like that. But let's be modest again.

It's not like a fair sample. It's not like, you know, all those things that you need for science with a big S, right? And so we build this shared wisdom. And when we have that and something comes up, right, then we among ourselves can say: that's like this, and when they did X, Y happened. And we can choose that and go with it, and everyone will be on board, because we all sort of understand

the range of things that we do. So there are these, you know, shared sort of ideas about what you can do. There are then shared actions that people take to get to a good place. And we can have collective action, all working together, if we have that sort of shared set of things we believe, that we've heard in our stories. And then, for you yourself, there's a third type of story.

It's that you sometimes don't want to do what everybody else does. So you have to look at the culture, you have to look at what you know, what other people are doing, what works for them, and you pick something for yourself. So there are stories for you, there are stories for action, and then there's this sort of shared...

Alex "Sandy" Pentland (08:30.237)

worldview, and that's all constructed by listening to stories of other sorts of things that have happened and trying to estimate what works and what doesn't work and what, you know, is likely to be good for you. And so those are the three types of stories.

Richie Cotton (08:50.286)

Okay, that's fascinating. And I guess, yeah, the idea is, if you and your peers all have a kind of shared culture, that decides what's a good idea to do, and you can decide whether or not you want to accept that. Actually, it's kind of, yeah.

Alex "Sandy" Pentland (09:04.361)

Well then you can get people to work together, right? Because if everybody goes off their own way, the lion is going to eat you all. I'm sorry. It's just the way it works.

Richie Cotton (09:11.818)

Hahaha!

Richie Cotton (09:15.726)

Yeah, I suppose a lot of the time there's a sort of idea that you should be a lion, not a sheep, and do your own thing. But actually the sheep strategy is often successful; you want to do what other people do a lot of the time.

Alex "Sandy" Pentland (09:27.155)

Well, you know, the reason we had villages and the reason we put walls around the villages was because of the lions and we all had to agree to do that and it was good.

Richie Cotton (09:33.998)

You

Richie Cotton (09:38.19)

And then you ask the AI, like, what should I do, and it's not sure, because there was no lion in the story or something.

Alex "Sandy" Pentland (09:46.432)

Yeah, well, the AI actually... I mean, this is one of the real weaknesses of AI: it has no context. What's in the AI is the stuff that everybody says. It's not the stuff for, you know, where you work or where you live, or you as a particular person. It's not personalized, it's not contextualized, and so it gives the average answer.

It's like walking up to the random person on the street and asking a question. You'll get an answer. It may not be something that's useful to you, or maybe anti-useful. You really want something that's good for you and your community so that you can get things done.

Richie Cotton (10:25.518)

Okay, so I guess the key to this is having a culture where you are sharing ideas. Like how do you set that up? Like how do you get this story sharing culture?

Alex "Sandy" Pentland (10:38.675)

Well, you have to first of all understand this business of, you know, we build a shared culture by telling stories. This is good, right? And then we pick actions from the things that we've seen work. We pick our individual actions from the things we've heard. You need to really understand that it's not

logic and deduction, because we don't know all the facts. So that makes deduction not work. And it's not the boss saying jump, because then who knows why that's going to happen, and actually people will jump in different directions. So if you understand that, then you say, well, look, I had better go out and collect some good stories. And there have been a lot of classic studies, like the Bell Labs one. They took the highest

performing people, the ones that were innovative, the best engineers. And they asked, well, what's different about these people?

What was different is they had buddies in different areas, different technical areas, that they talked to regularly, so they could sort of keep up with what's happening and what's coming down the line. And so they had this network of stories that they had gotten from their buds. It was just, you know, have-coffee-together sorts of things. And that wealth of stories gave them greater insight about what's likely to happen.

So that's the type of mental model to keep in mind. And I know you were also asking about bosses. Bosses usually come in and say, here, we're going to do this. Well, you know, the problem is that then nobody really understands why we're doing this, or they have different theories about it, or they may disagree with it completely. You need to have a discussion with people, not just to get them bought in, but to educate them.

Alex "Sandy" Pentland (12:31.315)

So a meeting where the boss says, we're going to do this, is a bad meeting. Then the actual work happens out in the hall afterwards, where people say, what is this? And then they have these theories, these stories about what the boss is doing and why he's crazy, and it'll never work. And that's not going to be a good team for executing something. I'm sorry. You want people to sort of understand.

the situation and see that this is actually a plausible thing and that requires actually talking to people.

Richie Cotton (13:06.286)

So you're doing context engineering on your employees: you're giving them enough information so they can make good decisions.

Alex "Sandy" Pentland (13:12.517)

Exactly. Yeah, context engineering. I like that. I'll use that. Okay.

Richie Cotton (13:18.893)

That's cool. And I do like the idea that if you want people to do things, you're going to have to have some kind of back-and-forth. So a lot of, I guess, work success, or maybe cultural success, is going to be about enabling people to discuss stuff, rather than it just being one-way flows of information. Is that about right?

Alex "Sandy" Pentland (13:37.162)

That's right. Yeah. Yeah, absolutely. An interesting thing is what I see people doing about this, right? So I was on a panel with the CTOs of several of the leading companies and they were talking about how hard it is to get data for AI and all that. But everybody had built what they called AI buddies for every employee.

Richie Cotton (13:39.831)

Okay.

Alex "Sandy" Pentland (14:04.125)

So you know all those manuals that you don't read, and the newsletters that, you know, just get thrown in the trash? They put all of those into a customized LLM for each employee. So now...

The employee has an LLM that knows all of what's going on and what the rhythm is and, you know, stuff like that, and it makes it easier for people to get up to speed. It makes it easier to find the people that you want to talk to and learn from. Because, you know, nobody ever reads a manual. I'm sorry. Why do you even produce these things? But now you have something that is active, and that's the difference: the manual is now active through the LLM. You can actually

use the information that's in there. Right? And that's actually pretty good.

Richie Cotton (14:56.716)

I do like that. I mean, there are always so many notices from HR, or some other team that you vaguely care about has done a weekly or monthly update on their statistics or their successes. Well, I only kind of care; I'm probably not gonna read 20 pages of what you've been doing this last month. But maybe I'll ask one question for the key highlights. So I do like the idea of automating things to find stuff out better.

Alex "Sandy" Pentland (15:14.419)

Yeah, yeah.

Alex "Sandy" Pentland (15:22.963)

So: automating communication, helping people find the people to talk to. And then this deliberation.io thing I mentioned is about how do you get people to talk together better, without the screaming and yelling and loud voices and stuff like that.

Richie Cotton (15:41.118)

Yeah, not having yelling matches with the colleagues, that seems like a very good idea. I mean, I guess outside of work as well, like having ways to chat in a productive way.

Alex "Sandy" Pentland (15:50.974)

Yeah, well, we did an interesting thing. We helped Washington, D.C. decide what sort of AI should be in city government. It was a big town hall thing, but we did the online part of it. And do you know what people in Washington, D.C. wanted? Now, these are not, like, the brainy people, the powerful people. These are the people with two jobs and three kids, right? So, what they wanted: do you want to guess?

Richie Cotton (16:16.526)

I have no idea, I don't know where politicians...

Alex "Sandy" Pentland (16:18.591)

For AI, they wanted an AI for themselves, to help them deal with all the crazy stuff that government does. They wanted somebody that would help them fill out forms and notify them when things are happening and, you know, just keep them on course, because it's too crazy difficult. And that sounds like this AI-buddy type of thing, but it's a little bit different, you know; it's aimed at government, at keeping you square with the government. I think it's brilliant.

Nobody, none of us tech guys, thought about that. None of the government guys thought about it. It was the person with two jobs and three kids that thought of it. So that's an example of using AI to be able to define things that are actually just great ideas and not hard to do.

Richie Cotton (17:05.774)

Yeah, that's a really wonderful idea, and certainly, I mean, it's just an endless stream of laws. Okay, I guess I actually deal with US immigration quite a lot, because I'm British but live in America, and the laws keep changing. In between filing things, oh, something's changed, different forms. Yeah, hard to keep up. So I can certainly imagine, if you're a lawyer trying to keep up with these laws, it's a very, very difficult, time-consuming task.

Alex "Sandy" Pentland (17:17.651)

Yeah

Alex "Sandy" Pentland (17:32.723)

Yeah, but the average person... I mean, you, you're educated, smart, you've got some time. Imagine that you've got two jobs and three kids, right? And now you have to keep up with all this stuff? No way, right? You need some help, and you want it to be personalized help, and you want it to be smart. And it turns out that we can now build stuff like that pretty easily, and quickly too, right?

Richie Cotton (17:57.155)

Yeah, that does sound incredibly useful. Okay, so, I mean, since you brought up politics: I guess one of the big things is you talk about communicating with people within your own culture. How do you communicate with people who are in a different culture or a different community?

Alex "Sandy" Pentland (18:10.11)

Yeah.

Alex "Sandy" Pentland (18:15.635)

Well, the main thing, I think, is to keep in mind that ideas and stories are valuable, right? So learning about things, hearing the other person's story. And they would value your stories too. So the relationships that work best across communities are ones that are essentially people trading, like in the sense of commerce.

Trading stories. I'll tell you about this thing; you tell me what's going on over there. Both of us come out better. We don't have to agree about anything else, but we're helping each other. And if you ask people about folks where they have that sort of trading relationship, you find a great deal of trust and a great deal of value.

And so, you know, you need to be a trader in stories that are relevant to what you're doing. And if you can do that, that's good for you, good for them, good for your community.

Richie Cotton (19:17.942)

Yeah, I like the idea of having a trade in stories. You tell each other different things to get different perspectives. I'm like, yeah, this is my job. Actually, yeah, to make this concrete: do you have examples of when this can be beneficial to different parties and when you might want to do this?

Alex "Sandy" Pentland (19:24.383)

This is what you do. In a sense, right? Yeah, yeah.

Alex "Sandy" Pentland (19:40.072)

Well, so, for instance, I gave this example from Bell Labs, where people had developed networks by trading stories of what other people in other departments were doing. They'd go and have coffee once a month or something like that. And it was valuable for both parties, so it was well worth doing. And that's sort of the basis for a trusted sort of business-friendship type of a thing.

So I think that's... and the people who did that were uniformly much higher performing, much more innovative, much more sort of skilled in survival, right, than the ones that didn't. And, you know, you also see that in lots of experiments we've done. I guess one I like, that is salient to people, is that we set up an experiment where we had day traders, a couple hundred thousand

day traders, right? And if you can set it up where they're trading their strategies with each other (not exactly how much they spend, but: I did this and it worked out that way), everybody does lots better. Not just a little better, lots better. And so there you are: you make more money if you trade stories.

That's pretty good. And we've even done that with, like, expert traders. We're talking about people that trade $10 million a day for their companies, right? It turns out that if you can get them to trade stories, right? Not, like, their proprietary data, but they can say, yeah, I think that's gonna be long, or that's just BS, whatever, that sort of thing.

That helps them avoid disasters. They are much safer investors. They make far fewer mistakes, because they shared stories with other people.

Richie Cotton (21:31.214)

That's fascinating. I do like the idea of sharing stories in order to make more money. But is this an averaging effect, then, because you're sort of taking the consensus of what other people in the group are doing, or is this a different effect?

Alex "Sandy" Pentland (21:43.776)

No, it's not actually an averaging effect at all. That's what you would think, but it's not. What it is is, you know... like, if you're doing trading, let's say, you know, trading strategies in money or something like that.

It's hard to keep all the different possible strategies in mind, and people forget ones. They look at only a few. And so trading stories helps you remember: yeah, you know, I could do that. That would actually be pretty good, wouldn't it? So it's this sort of remembering the things that aren't in your sort of immediate portfolio. And then the other one, for the expert traders: the biggest effect was remembering that things screw up. So it was a tale.

It was, you know: yeah, but Johnny did that ten years ago and got wiped out, because this thing that you forgot about actually happens sometimes. So it's keeping people contextualized to all the possibilities.

So you're not just in your own personal echo chamber; you actually sort of know what could happen and what other people are doing. And then you make use of that. You make your personal decisions based on that.

Richie Cotton (22:58.156)

Okay, yeah, certainly. I mean, with a complex activity like trading, like investments, there are so many different possible strategies that, yeah, humans just can't keep all of them in their brains at once. I like the idea that just exposing yourself to a few other ideas is gonna minimize some of your own biases, then. So, yeah, okay.

Alex "Sandy" Pentland (23:06.803)

Let's try it.

Alex "Sandy" Pentland (23:18.685)

Yeah, that's right. And remind you that things sometimes really screw up and you don't want to be exposed to that, right?

Richie Cotton (23:27.246)

Yeah, definitely. I like the idea of stories to stop you doing stupid stuff. So we're back to, like, yeah, anti-stupidity; we're back to how do you deal with the lions and the wolves and whatever. So, yeah.

Alex "Sandy" Pentland (23:32.927)

There we are, Anti-stupidity, right, yeah.

Alex "Sandy" Pentland (23:41.245)

Yeah, same thing. Don't go there. Sometimes people get eaten.

Richie Cotton (23:47.883)

Okay, all right, so I guess the tricky part comes when you've got different people with different opinions on things and you need to make a decision. So how do you go about coming to a consensus when people disagree?

Alex "Sandy" Pentland (24:02.771)

Well, we did some big... everyone's very concerned about polarization. You know, the people on the other side, they're crazy, they, you know, eat babies, whatever, some horrible thing. And so we did a couple of big experiments. These are like 30,000 people, statistically balanced for all sorts of things.

And it turns out that most of this sort of like people being polarized and disagreeing with each other is that they're just not familiar with what the other side actually thinks.

There are so many loud voices, the people who may in fact be genuinely crazy, you know, that dominate and just saturate all of the airwaves and the social feeds and stuff, that you think about the other side as these loud voices, and you miss the fact that the vast majority of people don't

believe that, even on the other side. And so we did something (this is the deliberation.io thing again) where people would put in comments, and then we'd just graph them along a particular dimension. And you could see: yeah, at the edges there are crazy people, but almost everybody, on almost every issue, clusters in the middle. So those people on the other side, they're actually pretty much like you.

And sometimes it's surprising. Sometimes you think they're liberal, but they're actually more conservative about some things, and vice versa. And what that did is it changed people's opinions, because they stopped paying so much attention to the loud voices. And they started paying attention to the fact that most people actually are pretty reasonable. And so, I can be reasonable too. And we saw something like a 30% reduction in these polarization measures, and it stuck for a couple of weeks.

Alex "Sandy" Pentland (25:54.674)

Then people get caught up in the loud voices again. But it's amazing how just sort of having a picture...

of what's going on in the world helps you make much better decisions. And so that's what we're building. And that's what you need to do in companies too. For instance, a typical meeting: you get in there, you don't know what people think, and they're not about to say it, because they might be outliers. So all you hear is the loudmouths, right, including the boss. And you don't really know what most people are thinking.

And if you did...

you might realize that there's another path that everybody could get behind, or that it's really not as weird as you think it is. And so you need mechanisms, and you can have these digital mechanisms that help people do that. So what we do is we have people put in comments on things. This is on your mobile phone: what do you think of this? Bang, right? And then what the AI does is summarization. It doesn't add anything of its own. No content from AIs.

What it does do is reflect. It says, you know, most people think this, but a few people think that. That's all it says. And now you say: really? I didn't know people thought that. And who are these guys who think this? Why do they think that? And so you might say,

Alex "Sandy" Pentland (27:19.273)

Why do you think that? So it gets to be a much more productive thing, because it's genuine communication, stories among each other. And what the AI is helping you do is make sure everybody's story is heard, and it gets out there in a non-contentious way. And it works pretty well. We call it Socratic dialogue: you're just asking questions, that's all.

Richie Cotton (27:45.559)

So, just making sure that you understand what other people are thinking; then you realize that most people are actually reasonable. I guess if you are hearing the loud voices a lot, it's probably time to step away from social media for a while and speak to regular people.

Alex "Sandy" Pentland (27:59.54)

Well, you know, it's dominated. All the media today, the newspapers, the video, everything is dominated by people who are incendiary, right? You say crazy things and then you make more money because more people listen to you. And that feedback loop just destroys the ability to make decisions.

Richie Cotton (28:21.87)

Absolutely. So yeah, we'll try and say too many incendiary things on the show. Absolutely, we'll go viral. So you mentioned meetings. It seems like everyone spends so much time in meetings. Do you have any tips for how you can make sure everyone gets heard in meetings and all the appropriate stories are shared?

Alex "Sandy" Pentland (28:28.476)

No, let's go for it. We'll get the best ratings ever, right? Yes.

Alex "Sandy" Pentland (28:39.485)

Yes.

Alex "Sandy" Pentland (28:46.249)

Well, that's one of the things with this little digital app, right? This deliberation.io is: you don't have to have meetings. This is literally, you know, the thing, your phone rings. Here's the question. Yes, no, six, whatever the answer is. That's it. And then after a couple of minutes, you get to see a little graph about what everybody answered. There are other things like this for surveys, but the point is that this is iterative.

So you say, oh, people believe that. Then you put another little comment in. And what it is is this discussion that is curated by the AI. It doesn't add content. It just keeps you focused on the problem ahead of you. It doesn't allow you to curse and swear and do stuff, right?

Just a little bit, not a lot. But what it's doing is it's trying to keep people on the whole and not get crazy about particular comments. And it works.
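The reflect-back step Sandy describes (collect everyone's response, then report only the majority and minority views, adding no content of its own) can be sketched as a simple tally. This is a toy illustration; the function name and output phrasing are assumptions, and the real deliberation.io presumably uses an LLM to summarize free-text comments.

```python
from collections import Counter

def reflect(responses):
    """Summarize a round of deliberation without adding any content:
    just report the majority view and the minority views, the way
    described above ('most people think this, but a few think that')."""
    tally = Counter(responses)
    total = sum(tally.values())
    # most_common() orders views from most to least supported.
    (majority, m_count), *rest = tally.most_common()
    lines = [f"Most people ({m_count}/{total}) think: {majority}"]
    for view, count in rest:
        lines.append(f"A few people ({count}/{total}) think: {view}")
    return "\n".join(lines)

print(reflect(["ship it", "ship it", "wait a week", "ship it"]))
```

Because the summary only reflects what participants said, the minority view surfaces without anyone having to speak up against the loud voices.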

Richie Cotton (29:48.087)

Okay, I mean, that's the holy grail, not having meetings, I think, for every company. So yeah, yeah, I love it. Poll your colleagues and, you know, summarize what's going on. Yeah.

Alex "Sandy" Pentland (29:52.829)

Yeah, there we are. Yeah. Yeah, yeah.

Alex "Sandy" Pentland (30:01.479)

Yeah, most meetings, you know, like people always say, God, we could have done that in five minutes. Well, you could have done it in less. And that's the point: you know, if you have the right sort of tools. It's not like Slack, right? Because Slack is more like social media in the sense of being a continuum. This is, you know, commentary. It's deliberation.

It's iterative, has ongoing summaries, and that makes all the difference.

Richie Cotton (30:34.606)

I like that, having some commentary as well, just like someone gave this answer, like you get a bit of the why there as well. All right.

Alex "Sandy" Pentland (30:42.685)

And you can include all of the opinions, not just the loud ones.

Richie Cotton (30:46.838)

Nice, yes. Okay, so when we teach data storytelling, one of the big things is you've got to personalize stories for the audience so it's appropriate. But the idea of having a shared story, that seems like it's more of a central thing. So when do you want a standard story versus a personalized story?

Alex "Sandy" Pentland (31:08.627)

Well, I think...

One of the big things people forget: it's not either everybody or you. There's this thing in the middle called your community. So like you have a work community, you have a family community, you have the guys you play baseball with, whatever, right? All of those communities need to work together and do things together. And so they need to have a shared culture, if you like. I call it shared wisdom because hopefully it's a good culture.

And you're trying to make it wise so that you make good decisions. And so, you know, it's not like there's one truth. This is one of the things that screws people up all the time. In fact, there's very little that you can really call truth. There's what we think is going on, and sometimes we're pretty good at it, but often not, right? Particularly as time goes on, we realize, that really wasn't what was happening. It was this other thing.

And so each community needs to have its sort of worldview, its culture, and you make decisions that are about the community using the culture. So there are things you would do in a town hall that you wouldn't do around the family dinner and you wouldn't do in the baseball game. There are different sets of behaviors, there are different sets of norms. And so you need to have story sharing within each community.

to help establish the norms, the wisdom, as it were, and therefore the actions that the community can take, by sharing the right stories and having people sort of discuss it and react to it. So I'm sorry, that's a long answer, but it's not just everything's uniform or it's all personalized. Actually, you are the compound of a lot of different communities and you need to have things that are relevant to the

Alex "Sandy" Pentland (33:08.521)

community context.

Richie Cotton (33:11.682)

Absolutely, we're back to context again, but yeah, I definitely agree that like the way I chat on a podcast is different from the way I chat to my friends or my family. Like there's lots of different ways of communicating in different circumstances. Okay, all right, so again, like what's the role of technology in this? I mean, because AI is obviously very good at changing the style of stories. Do you think that...

Alex "Sandy" Pentland (33:21.705)

Yeah.

Richie Cotton (33:41.582)

it's going to become easier to personalize your stories for different groups.

Alex "Sandy" Pentland (33:48.096)

Well, we certainly hope so. So the big project that we have going here at Stanford is called loyal agents, all one word, dot org. It's with Consumer Reports. The idea is you ought to have your own AI, not just use those other guys' AI, right? And that AI ought to be loyal to you and your community. And so it's an idea of saying, okay, this stuff is open source. It's not that hard to build. Really what you want to do is you want to have

the community talk to each other and have answers, just like that AI buddy thing where there were manuals and so on. You want your baseball club to have things that are shared among them, and the AI should help the baseball club not only share stories but give everybody a good sense of what's going on with the club and the people in the club,

right, to be sort of the intelligence or the wisdom of that community. And you want a different one for your family, because you're going to do different things there. It's a different context. And so it's context sensitive. And the context are sort of the people you're interacting with, your community, right? And it's sensitive to that, and it has to give answers to that. And you should be controlling the data, not

some guy off in Seattle or wherever. And so that's the idea of this loyal agents thing. And then you can, if you have that, then you can count on the AI to begin giving much more contextually relevant, practical human sorts of answers to things. So that's the idea.

Richie Cotton (00:01.728)

So it seems that the key to this is having different stories for different communities. So can you talk about how technology is going to make this easier?

Alex "Sandy" Pentland (00:12.63)

Well, it's a challenge for technology because the economics of it, the way people have built things is either it's the same for everybody or it's just for you. But actually you want context. And so first it's like the AI buddy thing I talked about within companies.

You know, that's like the company menus, the company newsletters, the company's manuals, et cetera. And so it gives you context to the community that you're working in at the moment. Well, you'd like the same sort of thing for family where, you know, okay, so you have the trips you've taken and things about the kids' report cards or whatever. And so when it gives an answer, it knows something to give you a contextually appropriate answer.

An interesting thing is that if you look at, say for instance, Facebook groups, they mirror the physical places that people meet up. So people produce these community spaces without being encouraged, right?

And they do that, so everybody in the baseball club can know when the next game is or the next show that people want to see or everybody in the family can be reminded about the upcoming dentist appointment, whatever it is. So we already do that, but now what we want is we want AIs that know about the group.

so that when you ask it something or it reminds you about something, it's appropriate to the group. And you want the AI to be something where you own it, not somebody else. So that data remains private. It's under the control of the group or the individual. And that's what we're building with this loyalagents.org thing with Consumer Reports.

Alex "Sandy" Pentland (02:09.964)

Consumer Reports is a community that wants to not get ripped off by people selling junk, right? And so they share experiences, stories about, I bought this blender and it broke the next day, right? And...

So when you do things with Consumer Reports, it says, you know, 96% of people had a good experience with this blender, but 4% would like set fire to the company. And you know, that sort of thing. And that's really what you want. You don't want it to be influenced by advertising or things like that. Actually, let me just say something that's really surprising to people, but you should think about it.

Richie Cotton (02:50.418)

Okay, yeah.

Alex "Sandy" Pentland (02:56.632)

So one of the things that's likely to happen with all this agentic AI, so people will have AI agents; everybody out in Silicon Valley is busy doing this. Now most of them, the company owns the data, not you. Let's just leave that aside for the moment. But one of the things it does is it gets rid of advertising. Because if you say, I want to buy a blender, the agent goes out and finds blenders that are good for you, not,

you know, for clicks and stuff like that anymore. It's something where a lot of this stuff just happens automatically. So like, for instance, ChatGPT can now buy things for you, right? You can tell it, look, you know, I need milk again, and it'll find the best place to get milk, right?

You don't have to spend your time on it. People are going to love this. In China, they have something called WeChat that's a little like that already. And everybody is addicted to it. It's one of the biggest, most profitable businesses in China. And so that's what we're seeing. It gets rid of the advertising community because it's the agents that are finding things, not people like doom scrolling and stuff like that. So that's really interesting.

And then the purchasing decisions are either automatic, or the AI recommends something if it's a bit small. But if it's a big thing, it has to be much more like an advisor and it has to be trusted. And there needs to be somebody who rates how honest all these things are and how good they are. And that's what we're hoping to do with Consumer Reports, is actually have a rating system where if somebody says, yes, this is a really good car, you say, well, yeah, except that.

And that's what Consumer Reports is always about. So we figured that we're a good partner. We'll be trying it out on a couple million people near you very soon. So loyalagents.org. There we are. There's the pitch.
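The small-versus-big distinction Sandy draws (automatic purchases below some threshold, an advisory step back to the human above it, informed by a community trust rating) could be sketched as a simple policy. Everything here is hypothetical: the thresholds, the rating scale, and the function name are illustrative assumptions, not any real agent's API.

```python
def purchase_policy(item, price, budget_limit, rating):
    """Toy decision rule for a shopping agent: small, well-rated
    purchases go through automatically; anything expensive or badly
    rated comes back to the human as an advisory instead."""
    if rating < 0.80:
        # Poorly rated by the community: never buy automatically.
        return ("advise", f"{item}: only {rating:.0%} positive, review first")
    if price <= budget_limit:
        return ("buy", f"{item}: auto-purchased at ${price}")
    return ("advise", f"{item}: ${price} is over the ${budget_limit} limit")

print(purchase_policy("milk", 4, 25, 0.96))     # small and well-rated
print(purchase_policy("car", 28000, 25, 0.96))  # big: human stays in the loop
```

The point of the design is that the human sets the limits once, and the agent only escalates the decisions that actually need judgment.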

Richie Cotton (05:00.64)

OK, so yeah, I do like the idea of agents being able to perform these tasks for you. Shopping tasks, it does get interesting. Yeah, trusting an AI agent with your credit card. It's like, OK, maybe I'd let it buy milk, not buy a car. I'm definitely giving it a budget limit.

Alex "Sandy" Pentland (05:17.486)

Yeah, or drugs from wherever, or, you know, that prince in Nigeria really doesn't need the loan of $1,000. Okay. You can see where it's going.

Richie Cotton (05:30.848)

Yeah, yeah, there's definitely some problems. So actually, you mentioned that that would mean advertising like to humans is going away. That's going to break the business models for like a lot of websites, organizations. Yeah. So are we going to see adverts targeted to agents then? Like something that's going to like attract the agent to go and make the purchase there for you? Do you think that's a possibility?

Alex "Sandy" Pentland (05:43.01)

Virtually the entire net. Yeah, absolutely.

Alex "Sandy" Pentland (05:57.327)

You're going to see things that may or may not be human readable. But what they do is they have standard statistics about things like the price and sort of stuff, the specifications. But they also try to highlight the advantages of this thing, which is a little like advertising. You see it in...

Like if you go on Amazon, they'll have things about like what's interesting about this one versus all the other ones. And then they'll have the specifications. So the agent will look at that. But then what you want is you want something like Consumer Reports that says, this one's BS. Don't pay attention. Right? And for small things, it'll probably just buy things automagically. And for bigger things, it will.

Alex "Sandy" Pentland (06:48.76)

probably come back and you'd have this sort of advisory from the agent and then you sort of look at it yourself and you can say things like, well, what are the advantages of this and what are the... But these things just feed off of what's on the web. So you need to actually have human input from people who have bought this thing.

from people who have worked with a company before. So it's a little like Yelp, a little like Consumer Reports, a little like a couple of other types of things. You need to have that input to make wise decisions. That's shared wisdom, incidentally, right?

Richie Cotton (07:24.628)

Absolutely, I like that. It just seems like if you're working in retail, then you've got to think very carefully about what happens once shopping becomes more agentic. How are you going to attract the agents to your site? I guess, yeah, all the information on your site needs to be machine readable for sure. Okay.

Alex "Sandy" Pentland (07:42.617)

Yeah, and, you know, instead of just pumping out ads, now you actually have to have things that look like advisors. You know, it's like if you invest in something, you go to like Fidelity or something; if you have a lot of money, you know, they'll assign somebody to talk with you about it. So that's more the sort of character that companies are going to have to have, really as advisors. And they have to be honest at a certain level.

Otherwise, they're going to get a bad rating. So that's really key, is to have that rating of honesty that's accessible to everybody.

Richie Cotton (08:22.1)

Okay, yeah, I guess, yeah, telling stories about like what's good and what's bad then, yeah.

Alex "Sandy" Pentland (08:28.332)

Yeah, exactly, exactly.

Richie Cotton (08:32.243)

Alright, super. So one more thing I'd like to talk about from your book. You made this analogy between shared wisdom and Daniel Kahneman's ideas around thinking fast and thinking slow. I thought it was very interesting. So do you want to explain how it works?

Alex "Sandy" Pentland (08:44.227)

Mm-hmm.

Alex "Sandy" Pentland (08:49.41)

Yeah, so Kahneman, Nobel Prize winner, has this idea that human thinking has two pieces. There's system one, which is very fast and automatic; you know, things happen and you're not thinking it through. System two is a more deliberative sort of thing, where you're reasoning about stuff.

Well, LLMs are really much more like system one. It's just like all the things they've heard, and then if you push their button, they regurgitate. They're not really thinking about it. Even if you ask them, why did you do this, or what's the chain of thought, no, it's just that they're making up a story based on all the things they've heard.

But what you can do is you can embed them in a loop where you ask them, well, why did you say that? And where's the evidence? And give me the evidence. And you can approach it where there's bunches of LLMs all trying to give answers but with slightly different context and say, well, why do I want to believe you versus them? So those are the sorts of things that people do to winnow out the truth.

And yes, you can build those, and that's one of the hot topics in Silicon Valley today, is how do you take this sort of system one LLM, you push it, it says things, it's not bad, but it's just like the common conversation. It's not good either, right? How do you turn that into something that's contemplative and has some real wisdom in it?

It seems that the strategy that's most popular is: we're going to have several little answers from the system one thing and we're going to bang them together to see which ones we really believe. You mean, like people do? Yeah, like that. And people could be part of the process too, of course, right?
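This sample-several-answers-and-compare strategy is close to what the research literature calls self-consistency. A minimal sketch, assuming `ask` stands in for any LLM call that returns a short answer (a real system would also compare the evidence behind each answer, not just count votes):

```python
from collections import Counter

def deliberate(ask, question, n=5):
    """Sample the fast, system one-style model several times and keep
    the answer the samples most agree on, plus the agreement rate."""
    answers = [ask(question) for _ in range(n)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n

# Stub standing in for a stochastic LLM, for demonstration only.
samples = iter(["Paris", "Paris", "Lyon", "Paris", "Paris"])
answer, agreement = deliberate(lambda q: next(samples), "Capital of France?")
print(answer, agreement)  # Paris 0.8
```

The agreement rate is the useful part: a low score signals that the system one answers disagree and a slower, more deliberative check (or a human) is warranted.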

Richie Cotton (10:40.968)

Okay,

Richie Cotton (10:44.8)

That's interesting. Yeah, it sounds like getting to this system two thinking, the stuff where you are, I guess, reasoning about things, this is what happens with the deep reasoning tools then. There's quite a few of them around, from OpenAI and probably Google. Yeah, OK. All right. So yeah, do you have any more advice then on how we go about getting better thinking from AI?

Alex "Sandy" Pentland (10:57.218)

They're beginning to be that way, yeah.

Alex "Sandy" Pentland (11:07.98)

Well, I think that there's a number of things that are just sort of fundamentally wrong in there, right? One is that you can get better thinking, but it's still not contextualized. It doesn't know about your community, your goals, your values. It can learn a little bit about that stuff, but there needs to be this notion of conversation among people in the community to establish what are the community values, and that's the context you need.

One can imagine systems that do this, but you'd like to keep that sort of thing something that you own, and not OpenAI's, just to pick a company. You know, and so that'll be a battle in the coming few years: who owns all this stuff, who owns the data. And I think that there's other things, which are, for instance, you know, AIs are not physical. They don't have the...

the history of being a kid, going to high school. They know what people say about it, but that's really different than what it actually felt like. And so people have this context of their feeling and their experience and their sensory apparatus that LLMs don't have at the moment and are unlikely to have for quite a while. I mean, you can imagine eventually robots will be that way and so forth.

And moreover, you we don't really want AI making decisions for us, in part because it's not grounded in the human context. It's going to say things that are just sort of wrong for us, maybe subtly, but that's the danger part of it. So we'd like to have things where people are still very central to the decision loop.

And that's why we're building things like this deliberation.io. Yes, there's AI in there, but the AI is helping the humans be contextual.

Alex "Sandy" Pentland (13:09.902)

It's not helping, it's not giving them facts or telling them what to think. Same thing with loyal agents. It's not telling people what to think. It's giving them much more of an even view of what's out there so that they can learn and express their preferences. So it's not taking the human out of the loop. It's empowering the human to make more contextualized choices.

and to be more in control of their life. And I think that's something that's critical to do now, because we're at a sort of inflection point where it could go either way, and we need to push on the "let's make things for us, not things for getting rid of us" branch of that tree, just so that when it grows, it grows in a way that we like. And when I say grow, I mean when the technology grows.

It grows in a way that actually complements us, doesn't do things that we find oppressive.

Richie Cotton (14:17.024)

Absolutely. So I love the idea of using AI to make data and other people's stories more accessible, helping people make better decisions. And that's certainly, yeah, obviously a much more preferable situation to using AI for eliminating us or oppressing us. Wonderful. All right. So before we finish, I mean, we talked a lot about shared wisdom, but you've also got decades of very cool stuff you've been doing. I'd love to touch on a few of those things. So.

You ran the MIT Media Lab for decades. Can you talk us through some of your favorite projects that came out of that?

Alex "Sandy" Pentland (14:51.342)

Well, I was one of the first people at the Media Lab and I was academic head for a while, but I didn't run it for decades. No, no, no, people get touchy about that, right? Context again, right? And some of the projects that I think are most interesting, one of the projects we did is we built a very simple type of AI. It's a rule-based AI, but it runs on mobile phones.

Richie Cotton (14:56.677)

sorry, apologies, I'm inflating your job title there.

Alex "Sandy" Pentland (15:15.688)

And what it does is it gives health information, health advice, to people in very poor areas. So it's a little company called Dimagi. And it helps 400 million people in rural areas stay healthy. It handles one out of every 30 births in the world, right? Helping the mother and the baby do well.

That's amazing. It's very simple. It's a way to personalize and contextualize the medical advice that comes out of the capital city. So same type of theme, but it's really, really sort of spread. Another one that I'm proud of there is...

I was involved in helping data be used for better governance, to make policy that worked. And when the Sustainable Development Goals were set up, I was one of the UN Secretary-General's data revolutionaries, who were supposed to help the Sustainable Development Goals be realizable because they were concrete. They could have data. You could assess, how are people doing?

And that was the idea. It still is the idea. And the UN is actually now attempting to take the same ideas here and making AIs that are helpful for local people to be able to assess things and make it a much more sort of concrete type of thing. So I'm proud of that. That was a pretty good thing. And then...

The move here to Stanford is interesting because I can do things like deliberative democracy, right? That's pretty good. How do you get people to talk to each other better? Or the loyal agents: how do you make AI serve you and not somebody else? That's a question that industry is not so concerned about, except from a legal liability point of view. So, sure, we're doing things. We're trying to make the world a little better, one piece at a time.

Richie Cotton (17:26.88)

Wonderful. I mean, I do love the use cases there, talking about like healthcare, talking about sustainable development, talking about improving democracy. So these are things that will have real social impact as well. It's not just like messing about with data and AI just for the sake of it. It's like got real impacts there. Wonderful. All right. So just finally, I'm always interested in like new people to follow new ideas.

Who you, whose research or work are you most interested in at moment?

Alex "Sandy" Pentland (17:58.511)

Well, there's a guy by the name of Michael Bernstein here at Stanford who runs a social AI group. He's the one that's done, you know, what happens if you have lots of LLMs that think that they're villagers and they live in a village and they talk to each other? What happens to the social interactions? Or most recently, there's a book called Flash Teams, which is: can you use AI to help put together better teams?

And if you do that, what does that do to the corporate structure? And I have to say that that's a really interesting question. If you can actually put together teams on the fly, so this is the sort of social context to make them fit together, then a lot of the reasons to have corporations go away.

So that means we get back maybe to guilds, the way things were in the 1500s, where you're a full-stack engineer and they call you up to do something for some other project, but anybody in the world could call you up,

as opposed to you're supposed to show up in this building five days a week or three days a week, whatever it is. It just changes the structure of society dramatically. We ought to think about this before it gets off there and starts happening.

Richie Cotton (19:20.16)

Yeah, that's gonna be a huge change to the idea of work. I mean, I suppose you already see quite a few people who are fractional or whatever, and they'll work for several companies at once.

Alex "Sandy" Pentland (19:28.738)

Yeah, fractional CEOs, fractional CFOs, you get things like Upwork and Fiverr, and, you know, yeah, they come in and they do a project, and yeah, you can just do these things dynamically now. And so things that used to take you hiring a bunch of people and a bunch of capital to set it all up, no, no, you can just do it. And it takes, you know, a quarter of the time and a quarter of the cost.

Richie Cotton (19:56.64)

Okay, so yeah, the rise of the consultant or external employee then, I guess. That sounds fascinating. Or the death of...

Alex "Sandy" Pentland (20:02.936)

Well, it's the death of the consultants, because the consultants don't have a good role in this. But it is, I mean, it's going to be a little bit more like a gig economy, where people have particular credentials and they plug into different projects, but they're not part of a lifetime employment contract. So that's a change. We should think about that.

Richie Cotton (20:27.273)

Yeah, definitely something to think about. Yeah, want to ponder on that. Wonderful. Thank you so much for your time, Sandy.

Alex "Sandy" Pentland (20:34.915)

My pleasure. Good talking. Take care.
