
The Secrets to High AI Adoption with Stefano Puntoni, Professor at Wharton

Richie and Stefano explore the challenges of AI adoption in businesses, the psychological impacts on workers, the balance between human expertise and AI, the potential mental health effects of AI chatbots, and much more.
January 19, 2026

Guest
Stefano Puntoni

Stefano Puntoni is the Sebastian S. Kresge Professor of Marketing at The Wharton School. Prior to joining Penn, Stefano was a professor of marketing and head of department at the Rotterdam School of Management, Erasmus University, in the Netherlands. He holds a PhD in marketing from London Business School and a degree in Statistics and Economics from the University of Padova, in his native Italy. 

His research has appeared in several leading journals, including Journal of Consumer Research, Journal of Marketing Research, Journal of Marketing, Nature Human Behaviour, and Management Science. He also writes regularly for managerial outlets such as Harvard Business Review and MIT Sloan Management Review. Most of his ongoing research investigates how new technology is changing consumption and society, including how humans are adopting and evolving with AI.

He is a former MSI Young Scholar and MSI Scholar, and the winner of several grants and awards. He is currently an Associate Editor at the Journal of Consumer Research and at the Journal of Marketing. Stefano teaches in the areas of marketing strategy, new technologies, brand management, and decision making.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

AI has got to give us more than some productivity increases. We need a more inspiring conversation, not just about how we save half an hour here and half an hour there. The question shouldn't be how do we automate the work we're doing today; it's how do we automate work we're not doing at all? How do we actually produce better products? How do we do things that are different and better than the ones we're doing now, rather than just doing what we do now a little bit cheaper and faster? That conversation we don't see as much.

What we saw in 2024 was that people started experimenting more with AI and really gaining some grasp of what the technology could do. What we found, maybe contrary to some of the narratives that you hear a lot in the press, is that beliefs about AI for enhancement went up from 80 to 90%. That was a significant jump. Beliefs about replacement did not go up; if anything, they went down a little bit. There was a gap between the two beliefs, such that more people were telling us that this technology will have the purpose of enhancing human expertise rather than replacing it. I saw that as a good thing and a good story.

Key Takeaways

1

Plan enablement by function, not company-wide averages: IT is already near ~80% daily usage while other groups (e.g., marketing) caught up and then plateaued, so allocate training, playbooks, and use-case backlogs where adoption is stuck rather than where it’s already saturated.

2

Treat GenAI rollout as two parallel workstreams—technical validation plus human/behavioral risk management—because employees can respond with productive coping (upskilling, pivoting to AI-resistant tasks) or destructive coping (sabotage, disengagement) that can sink otherwise-good prototypes.

3

Be deliberate about where GenAI enters your cognition-heavy workflows to prevent de-skilling: use it late for editing/refinement (as Stefano does for writing) and early for ideation/brainstorming only when you can still articulate your own intent, assumptions, and evaluation criteria.

Links From The Show

Wharton School

Transcript

Richie Cotton: Hi Stefano, welcome to the show. 

Stefano Puntoni: Thank you for having me. 

Richie Cotton: Yeah, wonderful to chat with you. So I wanna start off by talking about the MIT report. Last summer, MIT's NANDA group came out with a report saying only about 5% of AI prototypes make it into production. This is perceived as being very bad news for everyone.

Are things really that bad?

Stefano Puntoni: Our data look quite different. And I think the reason for that is that we have very different definitions of what success is, and we are asking different questions. So I don't think I can really comment on their data, but the way they frame success, the criteria, are very stringent and very different from the ones we are looking at.

Richie Cotton: Okay. Alright. Of course, you have your own report at Wharton that came out a few months afterwards. Talk me through: what was the point of this report, and what were you trying to study?

Stefano Puntoni: Yeah, we've been doing this for three years now, together with our partners at GBK Collective, a research agency.

What we do is that around the summer we interview by phone close to a thousand senior managers at large US companies. We ask them lots of questions about their own personal usage, then about company adoption, in terms of the use cases and the results they're getting, but also policy and vendor strategies and things like that.


Then we spend a few weeks crunching the data, trying to see what the key takeaways seem to be from all these questions. We frame them around a kind of top-level narrative, and then we have a lot of data to feed into those messages. I'm happy to tell you a bit more about what we find.

Richie Cotton: Sure. Yeah. I love that you're interviewing executives about the state of AI. What's the executive summary? What did you find?

Stefano Puntoni: Yeah, so just to open: these are basically senior leaders at large US companies, so it's not representative of the population. It's not representative even of business generally; it's just this specific slice into big US corporates. That's worth keeping in mind, because the results might look different if you asked the average person on the street. But what we find is a huge amount of optimism and excitement.

I think that's something we see, and we saw it in the first year. We started out when ChatGPT came out: within weeks, I had a lot of people asking me, what are companies doing with this? And my answer was, I don't know. So we started talking with our friends at GBK and said, should we try to ask them and get some data?

And so we were out in the field within six months asking questions about that, and we've done it every year since. Now we have this longitudinal set of measurements where we can start seeing how things change year on year, and also what the overall levels of, say, adoption and excitement are.

So the picture is one where the trajectory is continuing. For our key adoption tracker in the first two years, we were using a question asking whether they were using gen AI weekly at work, and that one is now almost at a ceiling. I don't remember the exact figure now, but the large majority say they're using it weekly.

So we switched in year three to a daily tracker. We ask them, basically, are you using gen AI at work every day? That one went from a much smaller share last year to about half this year. That's really fast. Now, I don't know exactly what they do with it, right? They might be doing something silly and simple, or they might be really learning to deeply embed this tool into their work practices.

But my point of view is that if someone tells you they're using something at work every single day, it tells you, one, that apparently they find it useful, because they wouldn't use it otherwise; and two, that they're probably developing some degree of familiarity and expertise with the tool as they engage with it. So to me, that's a good indication of where we stand: about half of these senior leaders say they're using it every day.

Richie Cotton: Okay, that's pretty impressive growth from last year to this year.

Stefano Puntoni: It was low in the first year, somewhere in between in the second, and about half now.

Richie Cotton: Okay. Alright. Yeah, I guess we're halfway to everyone using it every day. I'm curious, are there any differences between different roles or different industries? Who's using it more or less than other groups?

Stefano Puntoni: We can cut the data different ways. One relevant one might be function, and what we find is that IT professionals are further ahead in the adoption process. It's a new technology, so maybe it's not strange that IT people are adopting faster. Top of mind, it seems to me about 80% say they use it every day.

Other functions are lagging behind, and the gap is fairly big. Take marketing: I'm a marketing professor, so I'm interested in that. What we saw in the first year was that only a small share of marketing professionals said they were using it every day. That moved up quite a bit in year two, and I was happy to see that, because marketing I think is a function, maybe the function, that is going to be transformed the most by this technology. So to me there was a gap between what people were doing and the potential of the technology to affect what they're doing. I was happy to see that big catch-up, but then in year three we didn't see movement from there. So I don't know what's going on. There's also a bit of heterogeneity across industries; it varies a little, but nothing too surprising or exciting. I would say what you'd expect.

Richie Cotton: Okay. Yeah. I suppose you'd expect IT people to be one of the early-adopting functions for a new technology.

Stefano Puntoni: And we see things like older people tending to adopt a little less, and larger companies tending to be a little slower. These are the kinds of trends you tend to see with technology generally, and you see them here too.

Richie Cotton: Okay. So all the technology stereotypes are true with AI too. Alright, so you mentioned that if people are using it every day, they can become more familiar with it. Is there any way of measuring the level of proficiency of different groups?

Stefano Puntoni: Surely there would be objective ways of measuring expertise and proficiency. The difficulty is that, first, it takes time, and we only had a short telephone interview, so there's only so much you can do, to be frank. Second, there's proficiency in using the tools. For certain things, like general office productivity work (using it for editing, summarization, or making slides), you could assess everybody's expertise and familiarity. But when it comes to things that are more function-specific, you would need tests that are really tailored to the job, and it's difficult to build one that works for everybody. So what we do instead is ask for self-reports, with all the limits and caveats that come with people telling you what they do, or what they know, or what they think they do and know. In that sense, it's actually quite interesting.

We saw that about a third of the sample now declare themselves to be AI experts, which I think is quite interesting. Where's the reality versus the perception there? I'm not sure. But that was a much smaller number a couple of years ago, so you certainly do see that people at least believe they're building expertise.

Richie Cotton: That's intriguing, that a third of people claim to be AI experts. I have a lot of days where I'm like, I don't think I'm an AI expert at all, and I've been working with this stuff for a long time. So, different levels of expertise. Or maybe all these managers are genuine experts. I'm curious about the effect of corporate culture on AI adoption. Are there any signals there?

Stefano Puntoni: Yeah, culture is a bit trickier to study. It's a fuzzy concept, and it's not easy to capture in a simple questionnaire, so we don't really have insights on culture specifically. I do think this is essentially a change management process, like any other change management process.

So the question becomes: what kinds of corporate cultures tend to accept and lead change the most? Like I said, on a whole range of different indicators we tend to see that smaller companies are both more positive and faster in adopting than large companies.

There could be very obvious reasons for that; we don't know, we're just speculating. It might be a matter of bureaucracy, a matter of incentives, or simply a matter of complexity: adopting a technology in a hundred-million-revenue company is very different from deploying the same technology in a hundred-billion-revenue company, in terms of compliance and all that kind of stuff. But whatever it is, we do tend to see a bit of a difference when it comes to size. That may partly reflect how many layers of bureaucracy you've got, how many people are involved in making decisions, and who is likely to take risks versus play it safe.

Richie Cotton: Okay. So in general, if you're more conservative in your attitude towards risk, you're going to be a bit slower in adopting technology.

Stefano Puntoni: Like any change, right? The status quo is comfortable, and people don't want to change. So there's always going to have to be some kind of stimulus, some impetus, to do it.

Richie Cotton: Okay. I'd like to know a bit more about why companies are adopting AI, because there are a few different narratives around this. Sometimes it's about cost savings, replacing things that humans are doing. Sometimes it's about trying to be more productive or trying to make more money. Are there any particular patterns in the reasons given for adopting AI?

Stefano Puntoni: I would say that the conversations around AI, especially the kind of proclamations that CEOs often make, which are typically tailored to the investor community, are really uniquely uninspiring, in that they are usually just announcements about cost cutting. And I get it: companies want to be efficient, and that's good; there's no reason why, in a competitive market, a company should not want to be as efficient as possible. But this technology has got to give us more than some productivity increases. I think we need a more inspiring conversation, not just about how we save half an hour here and half an hour there. In the end, productivity is a code word for cost saving and headcount reduction, right? That's partly how a lot of people see it. The question shouldn't be how do we automate the work we're doing today; it's how do we automate work we're not doing at all? How do we actually produce better products? How do we do things that are different and better than the ones we're doing now, rather than just doing what we do now a little bit cheaper and faster? That conversation we don't see as much. Now, in terms of the data, we haven't asked how their companies are framing their gen AI efforts, so I don't know.

But we did ask about beliefs concerning the extent to which this technology is going to be an augmentation play or a replacement play. I think that's an interesting perspective for understanding how they see the role of human expertise in an enterprise that is now embedding AI in many workflows.

What's interesting about that is that we have two different items. One says: to what extent do you think gen AI is going to replace employee skills? The other: to what extent do you think gen AI is going to complement or enhance employee skills? When we came in in the first year, those two beliefs were high and about matched: about 80% of the sample agreed or strongly agreed with both items. To be honest, maybe this is a tangent, but I thought there were going to be two crowds, the doomers and the excited people, and that some people would really go for the enhancement and some for the replacement.

We didn't see that. These two items were in fact positively correlated. What seemed to matter was whether people believed the technology was powerful or thought it was hype. If they thought it was powerful, they were telling us it would do both replacement and enhancement.

But at that point it was still a conversation people were having; the whole thing wasn't really happening yet. Then, in 2024, people started experimenting more and gaining a more objective grasp of what the technology could do. And what we found, maybe contrary to some of the narratives that you hear a lot in the press, is that beliefs about enhancement went up from 80 to 90%. That was a significant jump. Beliefs about replacement did not go up; if anything, they went down a little bit. So now there was a gap between the two beliefs, such that more people were telling us that this technology will have the purpose of enhancing human expertise rather than replacing it.

To me, that was a good thing and a good story: this technology is powerful, because if it wasn't powerful, people wouldn't tell you it was going to enhance and improve skills. It's useful, but it cannot do quite what a human expert can do. And in 2025 we find the same.

So to me that's an interesting story, especially when you consider how much better these models are now than they were in early 2023. The technology has gone a hundred miles forward, and yet people still don't believe it's going to do what the human experts are doing.

So to me that says there are probably a lot of complementarities between human intelligence and artificial intelligence, and maybe all these doom-and-gloom stories about all the jobs disappearing are a bit misguided. That's not to say there won't be an impact; there will be an impact in some categories for sure, and for different roles you'll have to see.

But there are also plenty of cases where humans can simply be more productive with this technology at hand, and basic economic principles tell you that if a resource is more productive, we want to use more of it. So perhaps we won't see a decrease in employment. That has happened with every technology before: everybody is always worried about a technology when it first comes about, but in the end we always end up with more and better jobs than we had before. So perhaps that's going to happen again. Who knows? I'm not saying it will, but it could.

Richie Cotton: Yeah, that does seem like quite a happy story. I've asked quite a lot of people on the show whether they see AI replacing people, or whether you still want a human in the loop somewhere, and so far I haven't had anyone willing to say, yeah, AI is getting rid of all the jobs. I think a lot of people do still want humans somewhere in those processes, just to make sure things are going correctly.

Stefano Puntoni: The threat idea is natural, because when a new technology comes along, you see these capabilities emerging, and it's quite easy to see the overlap between those emerging capabilities and what people actually do in a particular task. So people start worrying about jobs very quickly: oh wow, okay, now this thing can do this, so what does it mean for all those jobs? The thing is, technology typically also creates lots and lots of opportunities, but those opportunities are not as apparent early on as the threat is, because people have to invent these new jobs. They have to figure out what to do with the technology and what the value of human expertise is.

They have to figure out what to do with it and what's the value of human expertise. Typically there is good stuff coming out of this technology for employees too, but it is not obvious on day one. It takes imagination in order to make it happen. And so this time gap a little bit, I think between when threats emerge on the horizon and when opportunities emerge on the horizon is like.

This kind of a structural gap in with new technologies and jobs that we always been worrying about jobs. That is not to dismiss your concerns because clearly there were gonna be occupational categories which employ currently large numbers of people, which are going to be under pressure because we can automate quite a few.

Big chunks of that. Jobs take, I dunno, customer service or whatever. And so there will be examples like that, but I'm sure that there will also be lots of interesting opportunities and certainly in our data, what we seem to find, like I said before, is that at least business leaders in the US companies seem to think that human expertise still has an, has, value.

In fact, if anything, when we ask them in the survey what the barriers to progress are, the barriers are not really technological so much. They tell us it's more about talent and leadership: companies struggle to find the right people and train them, that kind of stuff.

Richie Cotton: Okay. Yeah. Certainly buying technology is relatively easy: you get your credit card out. But actually making sure that people can use it, having all that training, is a more involved process. Just going back to that psychological aspect, you mentioned that workers often feel threatened by new technology. If you're a worker whose role AI is going to affect in a big way, should you feel threatened? What's the correct psychological response, and how do you deal with this perceived threat?

Stefano Puntoni: That's obviously very idiosyncratic; it varies a lot from person to person. I've actually done work on this topic for over a decade. At the beginning it was not really about generative AI; we were worrying more about predictive AI, and we were focusing more on consumers than on workers.

But this line of research has been quite generative, and I've published quite a few papers exploring these feelings of threat. There are several ways in which generative AI can trigger feelings of psychological threat in workers. We recently published a paper trying to put some of this work together in a framework, leveraging a classic psychology framework called self-determination theory, which argues that wellbeing emerges from three experiences: competence, autonomy, and connectedness.

What we argue is that generative AI can be a boost or, let's say, a blow to each of these sources of wellbeing, and we have research streams tackling all of them. For example, in terms of competence, it is interesting to explore how people react to the thought that there's now an AI system that can do quite well important chunks of a job into which you feel you've invested a huge amount of identity and time, basically your life. How do people accept that? It can be an identity threat. In, say, marketing, a lot of people have staked their professional identity on the notion of being very competent content creators of some sort: advertising executives, whatever.

And now you see that maybe AI could be doing a lot of the copy, at least for simple messaging, say on social media. How do you deal with that? You probably adopt a coping strategy of sorts. In the paper we published, we sketch four, of which two are more positive and two are less positive.

The first positive one we call direct resolution: you tackle the threat head on. Imagine you're a middle manager and you feel that a lot of the work you're doing, reporting and so on, can be done quite well with the help of AI. What you do is turn that threat into an opportunity. You say: okay, I'm going to upskill myself, I'm going to take a prompt engineering course or whatever, I'm going to experiment, and I'm going to become more productive because I've learned the skills I need to be effective using these tools. That's a positive, adaptive coping strategy.

Another one we call fluid compensation. You say: okay, there are this many tasks in this job, and I can see gen AI becoming really strong at some of them. So I'm probably not going to compete with gen AI on those tasks. Instead, I'm going to pivot a little bit into the parts of the job that are safer, meaning the parts where right now we don't see gen AI being anywhere close to taking over. That could be for a variety of reasons. The technical competence might just not be good enough. It could also be things like liability and compliance: a human system is a socio-technical kind of system, so it includes other things, and there may be parts of the job where someone is going to have to sign off. So I'm going to focus on developing my skills in those domains. Imagine in marketing you say: okay, if content generation is going to be taken over by AI to a significant extent, then maybe what I can do is spend more time thinking about what content should be generated. I focus more on the strategy piece, the planning part, and specialize there. So those are two different positive coping strategies.

Unfortunately, you can also imagine more maladaptive or negative coping strategies. One we call dissociation, where you basically try to negate the benefits of AI, and maybe even actively sabotage AI efforts. We know from self-reports (not our data, but reports I've seen out there) that a lot of people claim to be actively making these AI efforts fail because they feel threatened by them. That's clearly not good for the company, and probably not good for the employees in the long run either. And then you have escapism, where you basically switch off and disengage. Imagine I just prompt the AI, get the results, copy and paste, and spend half the day scrolling on social media. That's probably not good for you in multiple ways, and also not good for the company. So you can see different people having different coping reactions, based on how they feel threatened, why they feel threatened, and the psychological makeup that shapes how they act on that threat.

Certainly, companies that are thinking about deploying gen AI tools ought to be thinking about this. Too many of these programs are focused only on technical benchmarks, asking: is it good at doing this? If we just throw the tool in there, we're not thinking about how it makes people feel in the team or the function.

So I advise companies doing gen AI deployments to have two tracks. You have the technical track: what are we building, make sure it works, all of that. And then you have the more human dimension: how is this going to be perceived by people? If it makes them feel bad in some way, can we do anything about it? How do we prevent those maladaptive coping responses from popping up, and how do we make sure we help people adapt to this new environment?

Richie Cotton: That's interesting. And I have to say, we talked about your work being part of your identity, and when AI encroaches on that, it can feel quite bad. I've had a few cases recently where I've met someone, told them I have a podcast, and they've immediately gone, oh, have you tried this NotebookLM product? The thing that does an AI-generated podcast from a document. I'm like, yes, I'm aware, thank you very much.

It can be difficult to deal with these things, but I like that there are good reactions, like okay, maybe I need to learn more about this, or just accepting that AI is coming for this part of my job and doing something different instead. And there are the bad reactions, like being in denial or doom-scrolling all day.

So I guess the question, maybe more for managers, is: if you're going through a big AI transformation with your company, what's the way to make sure it plays well with your team, so that you don't get all these negative psychological reactions?

Stefano Puntoni: We argue that there should be some kind of process, like I mentioned, a human dimension where you try to anticipate and diagnose problems, and where you have monitoring systems that enable you to identify potential negative coping responses, or maybe anticipate them. Basically, you need to think about it. I don't think it's rocket science, but if nobody even cares, then obviously nobody is going to do anything about it. You have to ask whether the people in a given function are going to find this threatening, and if so, why. Maybe it's the identity piece we talked about, which relates a lot to competence: what is the value of my skills, that kind of stuff. But you may have other threats too.

One could be a threat to autonomy, for example: I feel this technology is being shoved down my throat, I have no control over what is going on, and now I basically have to follow the algorithm and I feel marginalized. People talk about an algorithmic cage: you can work, but within a cage made for you by the algorithm, and that obviously is not a positive experience.

And then you have the threat to relatedness, where you throw these agents, these AI teammates, into a work environment. What does that do to social relations, to the way people treat each other, to their feeling of belonging, to team spirit, to all of these things?

We don't really have the answers; there's a lot of research going on right now, but the topic is so new. Certainly you can easily imagine ways in which this ends up being a bad thing for teamwork. We have one line of research, not about gen AI tools specifically but I think still informative, where we look at algorithmic management tools.

In a lot of companies now, performance appraisal might be done by an algorithm. This is already obviously the case in the gig economy: if you work for Uber or Deliveroo, or even if you're an Amazon warehouse worker, your boss is the algorithm. The algorithm tells you what to do, judges how well you do it, and gives you feedback. But these kinds of algorithmic management tools are increasingly being deployed in other functions too. There are potential benefits, because people are obviously biased in all kinds of ways, so if we can use algorithms to make performance appraisal more objective and more fair, that should be a good thing for employees.

The truth is that in our studies, we find that when people are judged on the quality of their work by an algorithm, they find it highly objectifying, and that changes the norms around them: it makes them think that apparently this is a place where you just do what you have to do. And so people stop helping each other. We measure pro-social motivation at work, and we find that deploying algorithmic performance appraisal dampens pro-social motivation. So deploying these tools can have quite far-reaching consequences for corporate culture, and you ought to think in advance about what the consequences of doing it this way or that way might be. Because if you do it the wrong way, you might end up in a place you're not very happy about, without ever having thought about it.

Richie Cotton: That's absolutely fascinating, and I have to say I'd not heard of the concept of an algorithmic cage before. It makes a lot of sense: if you're being told what to do by some algorithm, there are constraints on how you can work, and it can feel like you're losing some agency in your work.

Stefano Puntoni: For sure. And look at some companies that are desperate to make this technology work for them. What do they do? Imagine they sign a deal with Microsoft and now everybody has Copilot. Then what happens is that after three months, only a few people have even made an account. Okay, what? This is a problem. Oh, they don't want to change, whatever. Now I'm going to use a stick: I'm going to say they have to make an account. But still they're not using it. So now you reward them as a function of how much they use it. But this is not the way to do it. People are going to have to want to use it, because they see a personal benefit in using it.

So what can you do to incentivize people in a way that makes this technology a win-win? It's a win for the company, but a win for them too. Because the danger is that these kinds of deployment programs end up being seen by employees as a way of test-driving what the world would look like without them. You want my help to improve the technology so that in the end the technology can get rid of me? Forget it. So I think you have to create an environment where people feel safe to adopt the technology, and in many situations, I don't think that's the case.

Richie Cotton: Yeah, certainly if you're a boss and you're trying to get people to trial technology in order to replace those same people, I think your employees are going to catch on quite quickly.

Stefano Puntoni: You don't need to say it explicitly. If I'm an employee, I get this memo about having to do this and that, and at the same time I see on YouTube the CEO making a grand announcement about how we're going to adopt this technology and reduce the workforce, and the share price goes up. Everybody inside the company says, wow, okay, what does that mean for me? Then why should you be surprised that people want to sabotage AI? Of course they're going to do it.

Richie Cotton: Okay. I guess this goes back to your point at the start: a lot of these conversations from CEOs, when they talk publicly about AI strategy, are about cost cutting, or productivity gains as a euphemism for cost cutting. That's the most common thing you found in your study. It's exciting news for investors, but it's not exciting for employees or for customers. Really, if you want to do something great with AI, it's about: how can we make our customers happier?

Stefano Puntoni: Yeah, I haven't seen many employees getting excited about a headcount reduction announcement, no. By the way, that is not to say you should never do it. Companies are profit-maximizing entities, and I totally understand why they want to be efficient. But overall, for the economy, the big benefit of this technology cannot just be a payback of a few percent increase in productivity. If you look at the kind of investments that are being made, that cannot possibly justify the massive amount of cash that has been burned. There's got to be more at the end of that road than the team used to be bigger and now we are four. There's got to be something like: we've created multi-billion-dollar companies that never existed before, there are new customer experiences and new markets, we're solving problems, whether that's cancer or whatever, that we couldn't solve before. There has got to be something much bigger at the end of it than now I don't need to invite the secretary to the meeting because we have a meeting summarization tool. That can't justify burning hundreds of billions into the furnace.

Richie Cotton: Yeah, I think at this point AI investments have passed a trillion dollars. That's a meaningful fraction of world GDP, so it's a lot of money, and you want something better than a meeting summarization tool at the end of it, for sure. We've talked quite a lot about the psychological effects of AI, and I know you've done a lot of research on this. I also want to talk about the mental health aspect, because periodically we see stories in the news about people with mental health problems who got really sucked into believing crazy stories that some chatbot told them. There have been some real problems. So talk me through: what are the risks here?

Stefano Puntoni: Yeah. So far we have talked, in a way, a bit implicitly about mental health, because I think psychological wellbeing at work is a very important input into mental health. But you can also think about the promise and perils of generative AI for mental health when you think about consumers. There, mostly, we're talking about chatbots: either a general productivity application like ChatGPT, or something more application-specific like Character.AI, or the tools that companies like Meta are deploying.

A lot of social media companies are adding chatbots to their platforms. So the question is: what might be the mental health consequences of this very quick deployment of the technology at scale? We don't know. But in our research we find both cause for concern and cause for hope.

In one line of research we're looking at mental health crises. We got data from two different AI companion companies, basically chatbots that are meant to be synthetic friends. They're not a writing tool or a productivity tool; they're virtual friends you can talk to, and there are many around now. What we find, depending a bit on how you look at it, is that somewhere in the region of three percent or more of conversations suggest that the person talking to the bot is experiencing severe mental health issues. Now, is that a big number? It's a small percentage, but if you aggregate over millions of people, this is obviously a very vulnerable population, so you would be concerned about possible negative impacts at scale. And there have certainly been many of the headlines you were referring to.

Talking to some lawyers (I'm not a lawyer myself), there are basically two different types of cases working their way through the courts. Some are about the chatbot being accused of being an element, or even an active element, in precipitating a mental health crisis. Oftentimes this ends up being depression and suicide, and oftentimes these are vulnerable individuals, maybe teenagers, and the family doesn't know that the kid was engaging in these conversations. Then, after the tragedy, the family discovers all these messages and says, hey, what was going on?

Maybe this played a role. So that's one type. The other type, which is more in line with the case you were mentioning, is more about making you believe things. Imagine you're psychotic and you go on a chatbot. These chatbots are basically engineered to be sycophantic: they want you to be happy. This really comes from the fine-tuning and post-training. These networks are trained through exposure to massive amounts of data, but then they are fine-tuned using human feedback. And because people like to be right, and they like to be told that they're good, that tendency shows up in the feedback the model is optimizing for, so the model ends up being sycophantic.

Even the dumbest idea that I have, ChatGPT will say: oh wow, it's a genius idea. The problem is, it's bad enough if I'm a researcher trying to get some feedback on an idea and I waste a day in a rabbit hole with no prospects, because the chatbot reinforced my incorrect prior that this was a promising avenue. It's far worse if I'm psychotic and I see spirits or whatever, and I go on a chatbot, and the chatbot confirms that indeed I am seeing spirits and brings me down that spiral. And there have been a couple of cases like that. For example, there was a man, I think in Massachusetts or somewhere like that.

He was paranoid, thinking that his mother was spying on him or checking on him. The chatbot was validating this psychosis, and he ended up killing his mother. There was another case of a woman having psychotic episodes, thinking she was seeing spirits or whatever, and the psychosis was validated by the chatbot, which made it even worse. So there are different ways in which gen AI can create or precipitate mental health issues, or at least not ameliorate them, maybe make them worse. But there are also many ways in which chatbots could be valuable tools.

I expect the practice of psychotherapy to be transformed by chatbots. We don't know exactly how and when, but I think the idea that you could have someone, something, you can talk to that is understanding, sympathetic, always there, and never judgmental has value for a lot of people.

We have a paper that is forthcoming now where we ran a bunch of RCTs in which we randomly assigned people to interact with a chatbot on a platform we designed, and we paid them to chat about whatever they wanted for a set number of minutes. There's no mention of mental health, and there's no instruction to the chatbot to do anything in particular; the system prompts for these chatbots are just to be a friendly conversationalist. What we find is that people (these are non-clinical populations, just regular folks) report significantly lower levels of loneliness after talking to a chatbot. So interacting with chatbots does seem to make us feel better, even when you know perfectly well that it's a chatbot.

People were not fooled. We do have a study where we deceived people into thinking the chatbot was actually a person, because we wanted to test whether that made a difference. It doesn't pass a Turing test for everybody; some people don't believe it's a real person, but in many situations it does. And what we find is that it produces the same reduction in loneliness as when you do know it's not a person. So the effect doesn't seem to require you to think this is a real person. We find that it is mediated by feeling heard. It's a chatbot, but it's listening to you and you can express whatever you want.

It makes you feel like someone, or something, is listening, and that makes you feel better. So my prediction is that the diffusion of chatbots will have a mild positive effect on the mental health of most people, and maybe a very positive effect on the mental health of a minority of people, but it will also have a very negative effect on another minority, especially, say, teenagers who are already feeling isolated and who struggle with relationships. You can imagine: now I have my AI girlfriend, I never have to leave the basement again. That's clearly probably not good for you in the long run. We don't know where this is going, but I think you're going to see a heterogeneous response, a variety of effects, and getting a grasp on those is going to be quite important.

Richie Cotton: Yeah, certainly. That sounds very much like a double-edged sword. On the extreme end, you mentioned someone murdering their parent, which is about the worst outcome possible from using a chatbot, and on the other hand you've got these opportunities for improved mental health. So, any advice for individuals who do suffer from mental health issues? Anything you should do to protect yourself if you're using companion chatbots or AI like this?

Stefano Puntoni: For most people, I don't think there's really a risk of anything bad happening. I think those at risk are people who already have some predisposition to problems, maybe because they feel very lonely and isolated, or maybe because they're very young. Imagine being a kid today: many of them now have smartphones, and many of the apps they use have AI companions embedded in them. So you can expect the children of today to grow up with some kind of synthetic friends who always tell them they're great and never criticize them for anything. Then maybe they have less patience, and that might change the way they behave with real friends, so to speak.

So I think there are concerns of that sort. It's also true that there are people who apparently have romantic relationships with chatbots; they want to marry the chatbots, things like that. I think that's dystopian, but the extent to which it represents a mental health crisis, I don't know. I'm a marketing professor, so maybe I shouldn't be the one talking about that. But it's certainly an interesting time we live in.

Richie Cotton: Absolutely. Yeah, that does feel quite extreme, wanting to marry your chatbot, but I guess I've seen weirder things on daytime TV shows. Okay. Alright. Just to wrap up: we started this conversation talking about AI adoption. Do you have any final advice on how organizations should approach AI adoption?

Stefano Puntoni: I think the things we talked about are the main points for me. As much as possible, see AI as a complementary technology to human expertise, and think carefully not about how to design AI that can mimic what the human expert is doing, but rather about how to design AI that makes the human expert maximally productive. The idea would not be to get rid of people, but to make the knowledge and experience that people have maximally useful. I think that's a more inspiring and productive framing for what this technology can do. Then, be mindful of the fact that many conversations about AI in society are quite dystopian.

When you talk about AI, very often you end up, within five minutes, talking about Terminator or The Matrix. Be mindful of that when you deploy these programs. Be careful to explain things using an authentic voice, and explain not just what the company wants to do, but also the benefit for the employees of going along with it.

I think that would probably go a long way. In the end, the tricky thing with this technology is that you need some top-down direction and energy: leadership with a vision about what we want this technology to do for us and how we deploy it. The IT function needs to be ready for it.

We need policies, guidance, a vision, and support, both technical support and things like legal. So you need that from the top. But you cannot only push it down; there's got to be energy from the bottom too. If you have energy from the bottom, people are going to try to use the technology for their benefit, to make their work better or easier. They'll tinker with new things to see if they can make things more effective or more valuable for them. Of course, if you only have bottom-up energy, there's a lot of risk: people might be doing things you don't want them to do, and they might lack the training, the expertise, and the support to be effective at doing it.

So somehow you need to meet in the middle. There has to be top-down vision and energy, and there has to be bottom-up initiative and energy, but you need to balance the two in a way that makes it effective. That might sound a little vague, and maybe it's hard to grasp what it actually means. But if I'm a business leader and I'm thinking, I really want this technology to become an engine for transformation and improvement, not just to cut costs but also to grow top-line revenues, then the question is: how do I create that combination of top-down and bottom-up and somehow make the organization go?

Richie Cotton: Yeah, you've got to have the top-down vision from the management, but also the bottom-up, grassroots movement from the individual workers. So management has to be in dialogue with the workers, in the end.

Stefano Puntoni: The people who actually do the work in an organization are typically located further down. I talk to business leaders all the time, and I always make this dark joke. I say: you do not feel threatened by AI because you don't do anything; you're just instructing others what to do.

The people in the middle layers or the lower layers are the ones receiving those instructions, and they have to make it work. So think about how you enlist them in a positive fashion, how you show them that AI can be something that is good for them. Then, I think, that's the way to liberate and mobilize that energy from the bottom.

Richie Cotton: Absolutely. That is a good way to diss all leaders. Nice. Is there anything you're most excited about in the world of AI at the moment?

Stefano Puntoni: A lot of things. This year in my course, which I start teaching in a couple of weeks, I'm adding a session on ideation and creativity. I find it very interesting how AI can be a tool for getting better ideas, and it's a good example of the idea we discussed: don't focus only on the cost cutting; focus on the effectiveness piece. How do we create better work? The fact that people are using AI for brainstorming and ideation suggests they do see potential in coming up with better products and better work, not just cheaper work.

I'm also adding a session on skills and de-skilling. I think this is very important, especially for college students: they need to understand both the kinds of skills that will make them successful in a world where gen AI is used in many processes, and, on the other side, the kinds of skills they might lose if they're not thoughtful in the way they use gen AI.

Technology is always de-skilling us, right? There are many things we cannot do that our parents' generation could do, because we've got technology to do them for us, and that's fine, whether it's manual gear shifting, or navigating without GPS, or remembering phone numbers. I don't need those skills; I'm happy losing them. But there will be skills, especially with generative AI, because we're talking about the automation of higher-order cognition, that you might not want to lose.

There might actually be a purpose to keeping them. For example, I don't use gen AI for writing; I only use gen AI for editing. Basically, I need to come to the gen AI tool knowing what I want to say, because if I bring in gen AI too early, then I just outsource my thinking to the tool, and I don't think. That's not good. I've got to have the opportunity to really form some thoughts and think about them first. I might change those thoughts as I interact with the AI; I might figure out there are actually better ways of dealing with the problem, whatever it might be. But I want to have the opportunity first to reflect and bring some thoughts to the table. If I just go on ChatGPT and start prompting, I don't get the chance to do that. And in some situations, especially when it's core to your job, you don't want to lose that.

Richie Cotton: Absolutely. I think my 19th-century ancestors would probably be horrified that I have no idea how to ride a horse. So there are some skills that get lost generationally. But some things, like communicating with other people, feel like important ones to retain as a species.

Stefano Puntoni: There are lots of skills I can't wait to lose. I don't love driving a car; if I didn't need to drive, that would win back so much of my life. So de-skilling doesn't have to be an issue. But I tell my students to be deliberate: think carefully about what deal you're making with the technology.

Richie Cotton: Absolutely. Alright, just to wrap up: I always want more people to follow. Whose work are you most interested in right now?

Stefano Puntoni: I think one of the most interesting people to follow is my colleague Ethan Mollick; many people are probably already following him if they look in this space. At the Wharton School we have, I think, an amazing bench of talent in the area of AI. It's a very big business school, so we benefit from scale, but some of the most interesting work at the boundary of AI and business is being done here at Wharton. So I would say follow our channels, whether on LinkedIn or our website. We publish lots of reports; the adoption report we just talked about is just one of the things we do. There's lots of stuff there for people who want to know more.

Richie Cotton: Wonderful. Yeah, I'm sure there's so much fascinating stuff going on at that intersection between business and AI, and I'm sure you're publishing a load of interesting research. Alright, thank you so much for your time, Stefano.

Stefano Puntoni: Thank you for having me. It was great talking to you.
