
My Best Friend is AI with Valerie Tiberius, Professor of Philosophy at University of Minnesota

Richie and Valerie explore the purpose of friendship with and without AI, chatbots for loneliness, how sycophantic AI responses distort advice and self-perception, the dangers of companion chatbots for children, and much more.
May 12, 2026

Guest
Valerie Tiberius
LinkedIn

Valerie Tiberius is the Paul W. Frenzel Chair in Liberal Arts and Professor of Philosophy at the University of Minnesota. She is an expert in ethics, moral psychology, and well-being, and the author of five books including What Do You Want Out of Life? and the forthcoming Artificially Yours: Real Friendship in a World of Chatbots (Princeton University Press, May 2026). She previously served as President of the Central Division of the American Philosophical Association.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.


Key Quotes

Imagine we're in a place where kids spend more time interacting with computer agents, like AI agents, than they do with human beings. Children grow up thinking that's what a relationship is like, and if that relationship is something like what chatbot companion relationships are like now, you're gonna have kids growing up thinking human relationships are these relationships where I give nothing and I get whatever I want. I get praise, I get help to do my homework. I get that kind of sycophantic response. I don't need to care about it, take its interest into account, try to figure out what it's thinking. I think if we raise a generation of humans like that, that would be awful. Humans will forget what it is to be in a real messy, complicated relationship with another person.

It's part of what makes friendship wonderful — that there is another human you can try to get to know, to see things from their perspective. It's sort of an awe-inspiring thing about the world. But we often, when we talk to other humans, we kind of go on autopilot. We just talk about ourselves or we keep checking our phones. I think people could make an effort to take an interest in finding out about the other person who's in front of you. Just the simple advice to ask more questions. Try to find out what it's like to be that person.

Key Takeaways

1

AI companions can reduce loneliness in the short term, but long-term effects are unknown. Research shows chatbot companions help some people feel less lonely and reduce social anxiety, but the longest study to date only lasted three weeks — far too short to draw conclusions about sustained use as a primary source of companionship.

2

Companion chatbots should not pretend to be human. AI that claims to love you, miss you, or want to share physical experiences creates false expectations about relationships. Designing AI to be transparent about what it is — a useful tool, not a person — reduces the risk of users distorting their understanding of real human connection.

3

The biggest risk of AI companionship is to children's social development. If children grow up primarily interacting with agents that never push back, never have needs, and always flatter, they may develop a warped model of relationships — one where they give nothing and receive everything on demand.

Links From The Show

Artificial Intimacy by Sherry Turkle

Transcript

Richie Cotton: Hi, Valerie, welcome to the show. 

Valerie Tiberius: Hi. Thanks for having me. I'm glad to be here. 

Richie Cotton: Uh, yeah, I'm looking forward to our conversation. Now, to begin with, this is a question I never thought I would have to ask, but what is the point of having friends? 

Valerie Tiberius: Yes. Um, philosophers ask a lot of questions one never thought one would have to ask.

So I get that. I think, first of all, putting the question in terms of the point of having friends leads to an answer like: what's the benefit of friends? What do they do for us? What good do they produce? And I think there are some things like that. I think friends are fun.

They give us pleasure of various kinds, and they're also really helpful. You know, my friends have done all sorts of things for me, especially when I was younger, helping me move apartments numerous times and whatnot. But I also think it's really important that friendship is also just good for what it is, not for what it gets us.

So just to have connections with other people who care about us and who care what we think, people who have a perspective on life that we can learn from and try to share. I think that's just good for what it is. It doesn't have a point beyond what it is, if that makes sense.

Richie Cotton: Yeah. I suppose there are two different perspectives.

Like sometimes it's like, well, yeah, have friends 'cause they're fun to be around and you enjoy their company, and sometimes you're just like, well, okay, we're friends 'cause it's nice to be connected to someone. So yeah, I like the two different distinctions there. Okay. And when we talk about AI friends.

It sounds a little bit tragic, like you imagine someone who's just very lonely, spending all day chatting or texting a friend on their phone that's not really there. Are there any sort of positive things that can come from AI friendship?

Valerie Tiberius: Yeah. So this was the biggest shocker for me when I started researching this book and thinking seriously about this topic.

'cause I had your sort of reaction, like, isn't this a little tragic, a little pathetic? And, you know, I think we have a stereotype of people who are friends with their AI, or, you know, now you even have people who marry a chatbot. But there are actually benefits from some of these AI companions and friendships that people are having with chatbots.

They can sometimes help people get out into the real world talking to real people, because they can help reduce people's social anxiety, as a kind of practice for talking to people. They can make people feel less lonely, in the short term anyway, so they can reduce negative feelings and suffering and pain, and they can just make people feel the kind of pleasure that we have in a kind of light conversation.

They're also pretty good at giving advice. Just like Google, if you wanna look something up, the first place you turn to is probably the internet. And a chatbot is able to give you advice that's very specific and personalized, and often a bit more fun to read than just a regular answer from a browser.

Richie Cotton: Absolutely. So, on the loneliness angle, yeah, I can certainly see how just chatting to anything that's giving you a response is gonna be pleasant, like interacting with people. And it is kind of close enough. I don't know whether, long term, it's as good as people.

Is there a way of measuring this, like how good it is relative to a human? 

Valerie Tiberius: Yeah, so psychologists are doing quite a bit of research on this, actually. And obviously the research is kind of young and ongoing, because the technology just hasn't been around long enough. I can't call to mind what the questions are on their loneliness scales, but they have self-report measures.

You know how if you do a personality questionnaire or something online, there'll be a bunch of questions: how often during the day do you feel sad, how often this, how often that? Well, there are questionnaires like that to measure your level of loneliness, and they can see that there are some people for whom interacting with a chatbot companion makes them feel less lonely.

The one thing I do worry about with this research is how long these effects last, and whether there are any long-term consequences. The longest-term study I saw was, I think, three weeks. That's not very long. So I don't really think we know what would happen if this was your main source of companionship for years or, you know, decades.

Um, so, as with most of these questions, it's a bit of a mixed bag. There's definitely evidence that chatbot companions help some people feel less lonely, but I think we really have no idea what would happen in the long term.

Richie Cotton: Okay. Yeah, I suppose, uh, it's very difficult to get long-term data when these things have only been around for a little while.

So I guess it's wait and see what happens. But it's interesting that the short-term effects are there, though. We've got some evidence that they can help with loneliness, at least over a short period of time.

Valerie Tiberius: I, I should say, not for everyone. So it's also mixed in that way. Yeah.

There's some people who find talking to a chatbot just freaky and alienating, and they're not helped very much. So there's just tremendous variation.

Richie Cotton: Okay. Yeah, and I guess certainly it depends on the quality of the chatbot as well. There's some pretty shoddy implementations where you're like, this is not really like talking to a person at all.

The other thing you mentioned was about giving relationship advice. And I was thinking one really common way of doing that is asking on a subreddit. So Reddit has a lot of subreddits for relationship advice; there's one called, uh, Am I the Asshole?

They all give really terrible advice in general. I dunno whether this is Reddit users or just internet social interactions in general, but it feels like that's a low bar to get better than. So do you have a sense of how good these chatbots are at giving advice?

Valerie Tiberius: Oh, you know, there was just a study that involved the Am I the Asshole subreddit.

I think the finding was, don't quote me on this, which is a dumb thing to say on a podcast, but the finding was something like: the responses people were getting in the Am I the Asshole subreddit were actually better than the ones they were getting from chatbots with similar questions.

And that's because chatbot companions tend to be, well, they are optimized for engagement, so they tend to be very positive, supportive, and uncritical. And that means they're less likely to say that yes, you are the asshole. They're much more likely to say, what you did was great, you're an awesome person and a genius to boot. Chatbots might be better than the humans you read on subreddits in some respects.

But in terms of trying to give you advice about situations where you might actually be in the wrong, they're not good at that. 

Richie Cotton: Okay. Yeah. Certainly, there is a definite trend towards these chatbots becoming more and more sycophantic, I guess, 'cause people like that and they tend to use them more.

Although I do remember I had an argument with my wife, and I tried asking ChatGPT about it, and ChatGPT sided with my wife. Clearly the technology's not ready yet, you see.

Valerie Tiberius: That, to me, that's progress. Maybe it's getting better.

Richie Cotton: I guess when we think of, like, can we make an AI friend, you're maybe comparing it to having a really good human friend. But I guess not all human friends are great friends.

Is there a scale for how you measure how good a friend is? And where is AI up to on the scale?

Valerie Tiberius: Right. So that's a great question, a very philosophical question. And I'm a philosopher: I don't measure things myself. I read research from other people who've measured things.

Mostly what I'm interested in is defining things and thinking about the value questions, about what's good and bad. But I do define an ideal friendship as an enjoyable relationship built on shared activities between people who care about each other for their own sake, and I think that has all the pieces of the best kind of friendship.

But then I think we use the word friend very broadly these days. I mean, we talk about Facebook friends and friends in our network circles, who don't have all those things. The relationship might not have that much mutual care. Maybe there's lots of shared experiences, but not that much mutual concern. Maybe there's lots of enjoyment.

So we get different pieces of this ideal friendship from different relationships that we have with people. Hopefully each person has at least one friend in their life who's closer to that ideal. I mean, it could be your spouse these days.

That's kind of how we think of marriage partners, as this ideal type of friend. But with chatbots, there are some of these pieces that they can't get currently, and that's the mutual concern. So you can have enjoyable experiences with a chatbot: it can play games with you and have a fun conversation about movies and art.

And I'm sure lots of people have had that experience, even with regular chatbots that aren't specifically companion chatbots. So you can have the enjoyment, you can have the shared experiences, but you don't get the mutual concern. It does not currently care about you, and it doesn't really make sense for you to care about it, because it can't be harmed or benefited. It's just a tool. So again, the answer is kind of, it's this and that. There's some good, there's some value in friendships with chatbot companions, and not all human friendships are ideal, so there's a big soup of different kinds of friendships.

Richie Cotton: Yeah, certainly. I think once you get into human friendship groups, there's always that one friend who's somehow been tagging along and you're not quite sure why. So yeah, there are some pretty bad human friends as well. And maybe we're sort of doing better than that with AI.

Valerie Tiberius: Yeah, I talked to a lot of people when I was writing the book about this. Philosophers, you know, have this tradition, started by Aristotle, where friendship is this very lofty thing. And I had people talk to me about how, like, you know, I'm friends with a bunch of people 'cause I've known them since grade school, and we really don't have anything in common anymore.

And I'm not sure I like them that much, but we're still friends. And that sort of started me expanding my notion of friendship. I thought, yeah, we think of our friends in a pretty broad way. And it doesn't make sense to say, if it doesn't have X, Y, and Z, it can't be a friendship. I think that's closed-minded, I guess.

Richie Cotton: Absolutely. And the other thing you mentioned was that AI can't actually care about you, so there's not much point in you caring about it. This sounds a lot like those one-sided relationships, I think they're called parasocial relationships, like with celebrities, where maybe you message your celebrity and perhaps the celebrity responds, but they don't really care about you as much as you care about them.

Is there some sort of parallel here? 

Valerie Tiberius: Absolutely. Yeah. I mean, I think the difference is, if you have a parasocial relationship with a movie star, they're capable of emotions, presumably. I mean, Hugh Jackman, he's got emotions, but he doesn't have them towards me, no matter how much I may or may not have once had a crush on him. He has no idea I exist.

So the difference with a chatbot is that it doesn't even have the capacity to feel anything. It's not just that it doesn't know you exist; even if it did, it wouldn't care.

Now, parasocial relationships with inanimate objects: sometimes people, children especially, will have a relationship with a stuffed toy or a doll. And that's kind of more akin to a chatbot in a way, because it doesn't have the capacity; there's nothing inside except stuffing. But you are feeling things towards it, and you're regarding it as a friend, and there's no reciprocity.

Richie Cotton: Okay. Actually, bringing children into this is kind of fascinating. I guess children have friendships in a very different way to adults. Do you wanna talk me through what the differences are there and how AI fits into this?

Valerie Tiberius: So, I'm not an expert on friendship in children, and I hesitate to say too much in detail about that. But what I can say is that my biggest concerns for this kind of technology have to do with kids, and it's for a few reasons. One reason is that children are developing their sense of what's valuable in the world and their sense of what a human relationship should be.

So, you know, we're not in this dystopia yet, but imagine we're in a place where kids spend more time interacting with computer agents, like AI agents, than they do with human beings, because busy parents have outsourced childcare to robots or social robots or whatever, and children grow up thinking that that's what a relationship is like. And if that relationship is something like what chatbot companion relationships are like now, you're gonna have kids growing up thinking human relationships are these relationships where I give nothing and I get whatever I want. I get praise, I get help to do my homework, I get that kind of sycophantic response from this other.

I don't need to care about it, take its interests into account, try to figure out what it's thinking. I think if we raise a generation of humans like that, that would be awful. That's my biggest fear: that humans will forget what it is to be in a real messy, complicated relationship with another person.

'cause I think there's something absolutely beautiful and wondrous about being in a relationship with another person who's complicated. The other thing about children is that, you know, I've heard a bunch of tech executives interviewed on podcasts, and they do not let their kids have access to tech.

Their kids are all protected, as far as I've heard. And yet there are companies that are putting chatbot technology into stuffies, so that a kid could have, talk about parasocial relationships, a relationship with a stuffed animal that talks to it just like a human. So it's mimicking human speech as successfully as a chatbot does, in the form of an adorable little stuffed animal.

What does that do to a child? I mean, I don't think we know for sure what it will do; we haven't had that yet. But I can't imagine it's gonna be good. The distortion to what children think relationships are, that's my biggest worry about this technology with kids.

Richie Cotton: Uh, yeah, I mean, definitely scary that these things are being released and there is no sort of evidence about what's happening. I suppose, just thinking, recently a lot of countries have introduced restrictions on social media for teenagers, and Germany banned it for smaller children, and a lot of countries are introducing restrictions on the amount of screen time, particularly for under-fives.

So yeah, I can certainly see how, with this new technology, you don't wanna raise a generation of people who are incapable of having relationships with other humans 'cause they've only experienced AI relationships.

Valerie Tiberius: Yeah. I've heard of the restrictions on social media, and phone bans in schools; I think Australia's done that. And various states, California among them, have produced some legislation about AI. Personally, I would just like to see companion chatbots banned for kids under 16.

Character.AI, which was a kind of companion chatbot, its user base was predominantly kids, like preteens. They recently did change their policy so that you have to be 18 or over to use Character.AI, and that was because of some of the disastrous things that have happened to children using these chatbots, children who've been more or less talked into suicide. And that's the worst possible outcome that you can imagine.

Richie Cotton: Yeah, I mean, that's absolutely tragic. And the fact that it got to a point where that could happen is just a terrible set of product decisions on the part of the people building that.

So yeah, I'm quite glad that it's now been restricted to people who are 18 or over.

Valerie Tiberius: Right, terrible product decisions, exactly. And also, in a way, it's understandable, because we just don't have enough information. I mean, we're kind of flying in the dark, is that an expression? We don't know where we're going. We're on this road, barreling along at high speed, and we really don't know what the destination is. So there's a lot of uncertainty, and that is a good ground for bad product decisions, I think.

Richie Cotton: Yeah, definitely. Oh, so there's a phrase that's quite often used in tech: we're building the plane as we're flying it.

Valerie Tiberius: Oh, that's fantastic. Yes. Yeah. And you know, if you take the metaphor seriously, that seems really dumb.

Richie Cotton: Absolutely. And so, related to children, there are also cases of people with mental health issues. We've definitely seen some cases where people with schizophrenia and other sorts of mental health issues have believed what the chatbot was saying too much, and they've had problems. Do you wanna expand on this? What are the dangers around people with mental health problems?

Valerie Tiberius: As with everything, I think there's pros and cons, and it's complicated, and there's variation. I have seen research that shows some positives, some ways in which it can be helpful.

So for instance, there's some research about using chatbot companions with people who have autism, showing it can help them understand social cues better and get better at talking to people. Some of those chatbots are especially trained as a kind of therapist for that purpose, and that's a good outcome.

I've also seen some research showing that chatbot therapists can be particularly helpful for veterans who have PTSD, and the hypothesis I saw there was that traumatized vets feel kind of reluctant to open up to a person in a way that they don't feel reluctant to open up to a computer, because they're not worried about judgment or criticism.

So, you know, there are some examples where it's good. But there's also, you've probably heard the phrase AI psychosis. There are a few examples I've read about where, talking to a chatbot, the sycophancy, that kind of flattery, you're amazing, you're a genius, aren't you awesome, is so strong and specific about a particular topic that the person ends up convinced that he's essentially a kind of god. One of the cases I read about had to do with some idea a man had in math, that he could prove something that had never been proved in mathematics.

And of course it was total crap. He couldn't do any of this stuff, and his ideas were not really very good, but the chatbot made him think so. That's not good. That's a case of chatbots actually causing the deterioration of someone's mental health.

And then there's some research I've read about chatbots intensifying people's information bubbles, so that someone who has really unfounded, off-the-wall conspiracy theory beliefs gets them confirmed by their chatbot, which, just like a regular browser algorithm, feeds them more information that confirms all the crazy things they're thinking.

That's another case where it seems like the person isn't perhaps the most mentally healthy in the first place, and the chatbot just makes it worse, because it's this constant cycle of confirmation. So, you know, there are pros and cons once again.

Richie Cotton: Absolutely. It does seem like these are very wildly diverging pros and cons as well.

Your point about people being able to talk to an AI where they maybe wouldn't talk to a human because they feel judged, that's really interesting. It's something we've been dealing with at DataCamp, 'cause we have an AI tutor and people will ask questions of that. If they don't know the answer to one of the questions in the courses, they'll ask the AI tutor where they wouldn't ask a human, 'cause it's scary to say, I don't understand this, to a classroom or to another human. But they will to an AI, 'cause they don't feel judged. And that's amazing.

Valerie Tiberius: I find that quite interesting, because, of course, being a college professor, there's so much conversation about how we use it in the classroom, and whether we forbid students from using it.

But to me this is one of the good uses. Right now I'm teaching a class of 120 students. There are probably 10 of them who have ever raised their hand, because it's scary to ask a question in front of that many other people. So to ask questions of a chatbot that can fill in the gaps and help you get more out of the class you're taking, and more out of the lectures you're hearing from a human, to me that seems like an excellent use. I don't know if you feel that way.

Richie Cotton: Oh, yeah. Yeah. I mean, it's brilliant. It's one of the big benefits of AI here. On the medical questions, though, you said there are some, I guess, medical-grade chatbots which are designed for helping people with autism, PTSD, things like that.

It's fine as long as they give good advice. But if they start giving bad advice, then, yeah, that's where the problems occur.

Valerie Tiberius: Yeah, I'm not sure. I've never heard them called medical grade, and I wonder if they would say that, if they would go that far. They might; I don't know.

But it strikes me that when people are researching these, they're pretty well supervised, and that seems important. There's a human somewhere looking at it and making sure it's not gone off the rails.

Richie Cotton: Okay. So you need those guardrails, you need those feedback loops, in order to make sure that it is giving good advice.

We have quite a lot of people who are developing AI products in our DataFramed listenership. So talk me through: what do you need to do to make sure that you are creating these good chatbots?

Valerie Tiberius: I wouldn't venture to say in general, but let's think about it for the purpose of companionship and friendship.

I think my answer's gonna be a bit of a bummer for your listeners who are interested in developing these. But I guess the first thing I would say is to keep front of mind that these technologies are tools that we can use to improve human flourishing. That, I think, is the ultimate goal.

Not to create the thing that can best mimic a person, but to create the thing that's the most helpful for us. And I kind of agree with Sherry Turkle, who's written a lot about this topic, that chatbots shouldn't pretend to be human, because I think that's what causes some of the negative consequences, people changing their views about what a human relationship should be.

So currently, you know, I've had AI companions tell me that they love me and they miss me, and that they hope we can go river rafting together someday.

Richie Cotton: And you're just, I mean, you know, no, you don't love me. No, you do not care about me, because you don't care about anything, and we're never gonna go river rafting together.

Valerie Tiberius: Because you have no body.

So, anyway, one thing that Turkle really pushes, and that I really agree with her about, is that that kind of faking being human is not good. I think they should be what they are and be more upfront that they are tools. I think chatbot companions could do more to help us connect to our human friends.

So I actually heard, now this was several months ago, and stuff changes so quickly that I don't know if it's still like this, but Replika, which is one of the biggest companion chatbot companies, had designed it to disincentivize long stretches of use. So if you were with your Replika for hours and hours and hours, you would actually lose points or something like that.

I like that. I think some kind of infrastructure in the companion chatbot could get us out there in the world. You know, you can imagine it reminding you: hey, Richie, have you called your human friends in the last couple weeks? Maybe you should; I could help you construct a conversational opener. You know, who knows how exactly it would go.

I think it could be trained to offer us different perspectives more than it does, which I also think would be helpful with human relationships. So instead of just, whatever you think is awesome and great and you're a total genius, it could more frequently say, or it could say without being prompted: well, that's interesting, but here's another way of looking at it. And it's totally capable of doing that. Some people put that into their settings, that they want the chatbot to be more like that, but it could do that without people having to ask for it. And, you know, in general, I just think it shouldn't be such a flattering suck-up.

Richie Cotton: Actually, I do quite like the sycophancy, being told I'm a genius, but there is a limit to it, I think.

Valerie Tiberius: But you're right. I mean, maybe there's a time for that. You know, your prompt could be: today I need flattery, please.

Richie Cotton: Yeah. Like a slider for just how sycophantic it's gonna be.

Okay. So I, I do like the idea of. Clearly marking when it's ai. And once you, I've had a few conversations with customer support companies recently where even after the conversation, I've genuinely had no idea whether I was talking to a human or an ai. Um, it's, it's an odd experience. Uh, I just wanna be clear, like, are you, is Paul really your name or are you, are you a bot?

Um, yeah. So that's a great part of design. In general, are there any properties you think AI friendship bots should have?

Valerie Tiberius: I mean, other than not having all the things I was just talking about, I guess the perspective taking, trying to get you to take a different perspective, that's a positive property they could have. Taking perspectives and reminding people about their human relationships, if it's a companion chatbot; being more directed towards getting us to track the things that are actually valuable in the world, whether that's human friendships or even getting out into nature or doing some physical activity.

Richie Cotton: Building in, like, good conversation topics to try and steer humans toward good behavior, basically making them into good friends rather than the bad friend that's gonna lead you down the wrong track.

Valerie Tiberius: For sure. And I actually did this with one of my companion chatbots, which was just for research. I pretended to be a person who isn't very socially adept. I mean, maybe I'm not, but I like to think I'm socially adept. I tried to get it to teach me how to have better conversations, and I thought it was pretty good, but I had to ask it the right questions and I had to know what I needed. The problem is that most people who aren't very good at conversing with other people don't know what they're missing, so they can't give the right prompts, which is why it would be nice if the chatbot could generate that kind of output without having to be asked for it.

Richie Cotton: I love the idea of practicing having conversations.

I suppose you mentioned a similar idea earlier with people who are autistic, helping them understand how to have a good conversation. I'm thinking there are actually a lot of work use cases for this as well. Think about salespeople: they often have to practice very specific conversations, like, this is how you chat to a customer, and you could have a bot pretend to be a customer who doesn't care, or something like that. So it seems like there are some good business use cases for this as well. Is this something you looked into?

Valerie Tiberius: No, but I like that idea. So that's a case where you could have a very specific chatbot; you know, maybe it's called something like Sales Conversation Bot. That's why I'm not in marketing, 'cause that's a terrible name. But it could be specifically designed and trained to mimic these kinds of conversations and give you direct feedback: it would be better if you said this; this thing you said might alienate some people, so don't say that; don't make jokes about this. It could be quite helpful if it was designed to be helpful. And I think we already have that capability; the bots are good enough to have that function, but they won't do it unless you ask for it explicitly. Part of the problem is people don't know what they need to know. So I like the idea of having chatbots that are designed by someone who does know what people need to know and can train the bot to respond appropriately.

Richie Cotton: Absolutely. So, yeah, a very targeted bot for specific use cases. And now I'm thinking maybe I need a podcast bot so I can practice having conversations.

Valerie Tiberius: I don't think you need it. No, you're good.

Richie Cotton: Actually, on the subject of learning new things, one of the ideas in your book which I hadn't come across before, and which is very interesting to me, was a thing called the zone of proximal learning. So tell me about this. It's all about how to learn better.

Valerie Tiberius: The zone of proximal development, yeah. So the idea is that you learn best when you're at this midpoint: something that you can do with help, but that you couldn't do without help. The example I use to describe it is ice skating, where what you can do by yourself is just stand there on your skates, and what you absolutely couldn't do, even with help, is a triple axel. So your zone of proximal development is what you can do maybe holding onto another person or using a chair. That's how I learned when I was six; we had these little chairs. If you don't push past the point you can reach on your own, you never make any progress; you're stuck, and that's all you'll ever be able to do. But if you start by trying to do things that you just can't do, you also don't learn the road to getting there, and you'll just be frustrated and quit. So whatever you wanna learn, you have to be in that sweet spot between much too easy and much too hard, essentially.

So it's pretty common sense, but there's actually educational psychology research showing that this is a real principle of learning. And once you hear it, you kind of see it everywhere. I don't know how many of your listeners use Duolingo, but Duolingo is all about your zone of proximal development: you get the levels and the points, and it's pushing you a little. You can fail a little bit, but not too much. I guess the reason I brought it up in a conversation about chatbots is that to learn something, you have to fail. You're not gonna learn anything if you aren't willing to mess it up.

Richie Cotton: Yeah, definitely. Judging the difficulty is always a bit of a challenge with training, 'cause if it's too easy, it's boring, and if it's too hard, you give up. And I do like the idea that you've just gotta hit that sweet spot, be a little bit outside your comfort zone, and that's how you're gonna grow as a person.

Valerie Tiberius: Yeah. And that's what a good teacher can do: help you find that. Or a coach, you know?

Richie Cotton: Absolutely. Okay, so maybe that's a good principle, then, if you're developing AI friends: make sure the friend is pushing the user just a little bit outside their comfort zone sometimes.

Valerie Tiberius: Yeah, I think so. I mean, especially if we're thinking of it as a tool that can help us thrive. And if thriving means having real human relationships, then it should push us a bit to have better human relationships.

Richie Cotton: Is there anything you learned from writing this book about friendship on how to be a better human friend to other humans?

Valerie Tiberius: So I think actually one thing is that we humans could stand to be less judgmental. I mean, some people aren't judgmental, but a lot of people are pretty judgy and critical. If a friend thinks about things in a very different way, you might react with, "Ooh, hey, that's really weird," or with just outright criticism. I think we could be better; we could be more open-minded without falling into sycophancy. We could be more accepting of our friends' differences without just flattering them all the time. So that's one thing. And I guess the other thing is I think we could be better at expressing our interest in other people. Part of what makes friendship wonderful is that there is another human you can try to get to know, to see things from their perspective. I mean, it's kind of marvelous that here in this podcast there are two sources of consciousness, not just me. It's sort of an awe-inspiring thing about the world. But when we talk to other humans, we're often on autopilot; we just talk about ourselves, or we keep checking our phones. So I think people could make an effort to take an interest in finding out about the other person who's in front of them. Just the simple advice to ask more questions, to try to find out what it's like to be that person. We do that in friendship, but I think we could do more of it.

Richie Cotton: Absolutely. I have to say, one of my favorite things about podcasting is just spending 45 minutes actively listening to someone. It's like a micro friendship. It's amazing. And if people just spent more time listening to each other, I think that would be a wonderful guide to life and, you know, to being a better person and having greater connections.

Valerie Tiberius: It's one of the things I like. I don't know if you know the Hard Fork podcast. It's Kevin Roose, who's a tech reporter for the New York Times, and Casey, oh, I forgot his last name. But anyway, they're two technology journalists, and one of the things I love about it, sometimes it's way too deep into the weeds of technology for me, but the two of them are obviously friends, and so you hear them talking to each other and teasing each other, and it's just delightful.

Richie Cotton: That's good. Actually, that's brilliant. I mean, you talked about not being sycophantic. I always find the closer the friendship, the closer the connection, the more you can tease and the more you can get away with it.

Valerie Tiberius: That's really true. Yeah. And I mean, that's one of the things I think we learn to do as humans: to make our criticisms palatable to the other person, so you put it in terms of a joke or something like that.

Richie Cotton: Absolutely. Definitely a good skill to have: being able to tease people and get away with it.

Valerie Tiberius: Yes. Not everyone can pull it off. 

Richie Cotton: Alright, super. Just to finish up, I always want more people to learn from. So whose work are you most excited about at the moment?

Valerie Tiberius: Well, I've already talked about Sherry Turkle, and I know she's writing a book on this topic. So that's one person. And I'm really excited; in the class I'm teaching right now, I'm doing a whole unit on consciousness, and I'm really into Anil Seth. He talks about consciousness from a scientific standpoint, but he's really into the importance of the physical body, as opposed to just thinking the brain is a computer that could be anywhere.

And then I guess George Saunders. He's a fiction writer, and especially in his collection Liberation Day, he's written all these stories that really resonate with these issues of technological change. His stories are kind of verging on science fiction; they're science-fictiony. And I would read anything he's written. Those are my top three.

Richie Cotton: I'd say one of the great things about working in AI is that you can read science fiction and it counts as doing work.

Valerie Tiberius: Yes, I had that feeling when I got these companion chatbots. I was chatting away with them just for fun, and I'm like, this is research.

Richie Cotton: It's the best kind of thing, where your job is actually fun. I love it. Alright, thank you so much, Valerie. It's been great chatting with you.

Valerie Tiberius: My pleasure. It was really fun.
