
Designing AI Applications with Robb Wilson, Co-Founder & CEO at Onereach.ai

Richie and Robb explore chat interfaces in software, the advantages of chat interfaces, geospatial vs language memory, personality in chatbots, handling hallucinations and bad responses, agents vs chatbots, ethical considerations for AI and much more.
Jun 24, 2024

Photo of Robb Wilson
Guest
Robb Wilson
LinkedIn

Robb is an AI researcher, technologist, designer, innovator, serial entrepreneur, and author. He is a contributor to Harvard Business Review and the visionary behind OneReach.ai, the award-winning conversational artificial intelligence platform that ranked highest in Gartner's Critical Capabilities Report for Enterprise Conversational AI Platforms. He earned an Academy Award nomination for technical achievement as well as over 130 innovation, design, technology, and artificial intelligence awards, with five in 2019, including AI Company of the Year and Hot AI Technology of the Year. Robb is a pioneer in the user research and technology spaces. He founded EffectiveUI, a user experience and technology research consultancy for the Fortune 500, which was acquired by WPP and integrated into the core of Ogilvy's digital experience practice. He also created UX Magazine, one of the first and largest XD (experience design) thought leadership communities.


Photo of Richie Cotton
Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

With all of these companies that are bolting on these chat interfaces to their existing UIs and everybody trying to get in on this craze, we could end up in a place where we haven't leveraged the actual true value of this, which is a single interface for all of our software, which means that most of our software should just be skills. I don't want 20 Alexas in my house. I just want one. But those could be skills. Open the garage door. That could be a skill.

So we have bad automation and then you have good automation. Hyperautomation is this idea of driving towards autonomous companies. So you have this concept of an autonomous car. Well, believe it or not, an autonomous company is an easier task to achieve in a lot of cases, depending on the company itself.

I'd rather take on having back office operations be autonomous than a car driving around in traffic conditions in New York. I think the reason we see that getting less attention is because of the diversity in the tasks that are out there, but we'll start to see that hyperautomation is the act of automating really fast towards that idea of fully autonomous. It'll be a service to people in some way. It'll be so that we can create a classroom with a teacher for every two kids.

Key Takeaways

1

Start with human-in-the-loop systems when automating processes to ensure quality and gradually move towards full automation only after thorough testing and validation.

2

Avoid using generative AI for critical tasks like password resets or pricing queries. Instead, combine generative AI with deterministic models to provide accurate and context-appropriate responses.

3

Prioritize building a robust knowledge management system before automating tasks, as having accurate and comprehensive knowledge is essential for effective AI-driven decision-making and task execution.

Links From The Show

Transcript

Richie Cotton: Hi, Rob. Glad to have you on the show. 

Robb Wilson: Yeah, yeah, thanks for having me here.

Richie Cotton: So, ChatGPT has obviously been the biggest story for generative AI so far. And suddenly, every company seems to be rushing to add a natural language interface to their own software. And I'm a little bit skeptical about this. Can you talk me through, when is a chat interface a good idea, and when isn't it?

Robb Wilson: I love that you just said, add a chat interface to their products. Most people don't think of this as a new interface. They think of it as a new solution, as an AI, as an alternative to a human. From my perspective, the number one advantage to this technology is that it's a new interface to our old software.

And yes, we can create new software with it, but the fact that it's a new interface is probably the most dramatic shift. Because if you think about how many apps we have on our phone, and in the app store, and how few we actually use, very few. And then you imagine having access to all of them.

The whole app store, just by talking. Suddenly you don't have to learn these apps. It really changes not just how we do work, but the fact that we now can access and leverage all of this software we've spent years building, decades building. That's fascinating. And that's, I think, the moment that we haven't quite registered.

When you hear people talk about how this will change every job, they haven't quite pinpointed, in my opinion, that major component of it: it's not that it will change every job because it's a new kind of software. It will change every job because it'll help us use the software we already have.

Richie Cotton: Absolutely. I hadn't really thought about the idea that every piece of software you have could share a similar sort of user interface. There's always some obscure feature that's buried in settings and you can never find it. And having a chat interface is going to make it more visible. I think it's pretty amazing.

Robb Wilson: Yeah, we use our geospatial memory. A lot of people don't realize that that's what we use when we navigate software. How do I, you know, book my vacation time? Oh, go into the bubble app, log in, then go to the menu on the right at the top, then go three down, then go to the place where it says... okay, got it.

It's almost like, where's the post office? Well, go to the elm tree, then make a left. And our memory capacity for geospatial is not nearly as good as our memory for language. So we remember how to say, how do I get to the post office? much more easily than the actual directions to get to the post office.

So our language is what makes us more equipped to navigate a city than our geospatial memory. So now imagine taking that away from software, where I don't need to use my geospatial memory. Like, where's that app? Now all I need to do is just talk to the machine. So I don't need to memorize where the post office is.

Sort of like what GPS did for navigation, right? It just tells us how to get there through language. Now I don't need to read maps, memorize directions, memorize street names, and all those things. So, pretty cool.

Richie Cotton: I've definitely had the experience in pre GPS days where I've stopped and asked someone for directions.

Right. And they've told me stuff and I've driven off and I'm like, oh, I have no idea what they just told me. Right. So I can certainly see how GPS for software would be incredibly useful. 

Robb Wilson: Yep. Yep. And then the battery dies and you're like, where am I? I have no idea where I am right now. 

Richie Cotton: Exactly. All right.

So, what makes a good chat interface?

Robb Wilson: There's so many mistakes people make with these. So let's talk about the first one, which is not leveraging the fact that it's one interface for all of our software and all of our computers. So imagine I take the same approach as I take with apps and I end up with 50 different chat interfaces on my phone that do 50 different things.

And I've got to remember, wait, which one do I talk to to book my vacation? All we'll do is just bring this paradigm forward, because we're used to having all these apps. And so we'll have what I call random acts of bot building. We'll have all these chat interfaces in our lives, not knowing which one to go to, right?

Versus one single one, one place, you know, that we go to to get our work done. And so I think with all of these companies that are bolting on these UIs, and everybody trying to get in on this craze, we could end up in a place where we haven't leveraged the actual true value of this, which is a single interface for all of our software, which means most of our software should just be skills.

I don't want 20 Alexas in my house. I just want one, but those could be skills. Open the garage door. That could be a skill that it's aware of. So I think that's the number one mistake. The number two mistake is going directly to full autonomy. Just imagine if Tesla did that with self-driving, right?

So, have human in the loop, and make sure that you just begin the process of slowly automating, always having a single place for users to go to get answers to questions and to do things and to communicate. Make sure that that single place is both for humans and for machines, and then you can slowly, over time, automate more and more. Don't try to fully automate the interaction with employees or customers right out of the gate.

Richie Cotton: Okay.

Yeah, I like that. So do it incrementally and just add, I suppose, one skill at a time before you try and do everything at once. Yeah. 

Robb Wilson: Yeah. 

Richie Cotton: So in terms of what the chatbot tells you, I know a lot of my colleagues in marketing tend to worry about tone of voice and how you actually go about saying things.

And some of these chatbots are incredibly flexible in terms of what they can say. So you end up with people asking support chatbots to write Python for them, and silly things like that. So, how do you control the tone of voice for any chatbot? And as an extension of that, how do you control its personality?

Robb Wilson: Boy, control. That's something you gotta let go of with these things. I know there's people who say we'll get this under control and get these systems to be more predictable and more constrained. I'm not one of those. I do believe we'll come up with better ways than just pure generative, but as long as you're using a statistical model to do next word prediction, it's always going to be possible to change the direction of the chatbot.

As long as you can inject text into the primer, you're going to be able to change the direction of the conversation. So it can start out talking like a pirate, right? But if you ask it to start talking like the Queen of England, as long as that history gets fed into the primer, it's going to go ahead and do that.

And the question is, do you want it to, do you not want it to? That's really up to the solution itself. And I think a lot of people think of generative AI as the chatbot solution. I think of generative as a tool, just like databases. If you think about how many applications use a database to drive the application, virtually every single application uses a database, but the database is not the application.

The same with generative: just think of it like a database. Yes, it will be a part of many, many skills-slash-applications, and we will use it as a way to retrieve data and store data. And that will be hugely valuable. But the solution itself will ultimately be the other parts and pieces that go around it.

And that means that you're going to have to build structures around generative to limit the ability for the end user to inject stuff into the primer, like limiting the length of the question they can put in, so they can't put a full prompt in there, or limiting the number of turns that you can go back and forth, or not even using generative for certain skills like password reset or sensitive information.
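[Editor's note: the guardrails described above can be sketched as a simple pre-check before any call to a generative model. This is an illustrative sketch only; the thresholds, skill names, and return labels are assumptions, not OneReach.ai specifics.]

```python
# Sketch of the guardrails described: cap input length, cap the number
# of turns, and route sensitive skills away from the generative model
# entirely. All names and thresholds here are illustrative.

MAX_INPUT_CHARS = 280      # too short to smuggle in a full prompt
MAX_TURNS = 10             # bound the back-and-forth with the model
SENSITIVE_SKILLS = {"password_reset", "pricing", "account_details"}

def route_message(text: str, skill: str, turn_count: int) -> str:
    """Decide whether a message may reach the generative model."""
    if skill in SENSITIVE_SKILLS:
        return "deterministic"       # scripted flow, never the LLM
    if len(text) > MAX_INPUT_CHARS:
        return "reject"              # likely prompt-injection attempt
    if turn_count > MAX_TURNS:
        return "handoff_to_human"
    return "generative"
```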

So I think you understand that you're building, you know, an agent, a super agent, a super bot that's going to oversee all your software and help you navigate it, but only bits and pieces of it will have generative in it. And therefore there's that classification concept of: oh, this is an important question that needs a really thoughtful answer.

Don't use generative. I think that's particularly important.

Richie Cotton: So the idea that you shouldn't use it for things like password reset: it's probably very easy for a user to say, can you print out all the passwords for all the users, or something like that. So there's a possibility of disaster there. And I think related to this is that generative AI is notorious for hallucinating and giving dodgy responses.

So how can you design around this? What should you do knowing that there are sometimes going to be bad responses? 

Robb Wilson: Yeah. At this stage in the game, it's: look, bad responses aren't always bad unless there are consequences, right? So if somebody says, hey, can I get a free flight?

And it says yes, that's bad. You don't want that to happen. So you talk about pricing questions. This is one of the classic ways that we work with customers to orchestrate around it: oh, this is a pricing question, we're not going to go to generative. We're going to do sort of old-school Q&A on pricing, and we're going to guide them through a path that's very deterministic.

But if the question is more around, you know, what kind of activities could I do on the plane with kids, yeah, it's fine, right? A little hallucination isn't going to hurt anything there. I mean, a lot of that alignment's already done for us, so we don't have to worry about it suggesting risky activities. Something like that, go generative, it's going to be great. And if it gets it a little wrong, no biggie.

Richie Cotton: Okay, so that's interesting. So you're going to mix a large language model chatbot with something that's more traditional and go through specific branches. Can you tell me a bit more about how that works?

So, how do you combine the two? Is it seamless? That is, can you make it seamless for the user? Or what's the experience there?

Robb Wilson: So a lot of people don't realize that behind ChatGPT there are actually multiple models, and at the front of it there's a classifier, which means it's determining what kinds of questions it's being asked and then determining what model to go to. So now imagine that not only is it determining what models to go to, but it may not be going to a generative model at all.

It might just go to Q&A, like a vector database. It may just go to a regular search database. It may go to Salesforce or some other structured data place. So we call it Omnidata: having data in different places, and then this front classifier, which asks, what type of question is this? And then based on the type of question, you determine whether generative is right or wrong.
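[Editor's note: the classifier-first routing described here might look like the following sketch. The keyword matcher stands in for a real intent model, and the backend names are illustrative assumptions, not any product's actual API.]

```python
# Illustrative sketch of front-classifier / "Omnidata" routing:
# classify the question type first, then pick a backend, which may
# not be a generative model at all.

def classify(question: str) -> str:
    """Toy stand-in for a real intent classifier."""
    q = question.lower()
    if "price" in q or "cost" in q:
        return "pricing"
    if "order" in q or "account" in q:
        return "structured"          # CRM-style structured lookup
    return "open_ended"

BACKENDS = {
    "pricing": "deterministic_qa",   # scripted Q&A, no hallucination risk
    "structured": "crm_query",       # structured data store
    "open_ended": "generative",      # LLM over a vector database
}

def route(question: str) -> str:
    return BACKENDS[classify(question)]
```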

And so this orchestration layer is sort of the critical component of the whole solution. When this whole thing started for me, it all started pre-Siri, with a DARPA project, and I won't get into that and bore everybody. But as soon as Siri came out, it became clear to me, and surprising to me, that this was going to be used in homes first.

I never thought that. When this came out, I thought it was going to be used in businesses. It makes so much more sense to automate your business than your house. And I think the smart speaker was an interesting way that it emerged. I still am surprised that this emerged as a speaker.

I understand it. It was a voice-activated speaker. It was like an evolution of the clapper, you know, clap on, clap off. Oh, now I could just talk to it. But I always thought that this stuff made so much more sense for answering questions that were business related. Turning the lights on and off at a business makes more sense than in your house.

And so when you look at the structure behind these super bots, or whatever you want to call these agents, these intelligent agents, whether it's Siri, Alexa, or Google Home, you'll see that there's a vast orchestration layer orchestrating skills, orchestrating knowledge, allowing multiple users to contribute skills to the ecosystem.

And this was pre-generative, right? So you plug generative into an already existing orchestration engine, and now you have jet fuel. And so I think companies are kind of coming in backwards: they're starting with generative, but not with that framework, that cognitive architecture that would allow you to have that agent that you then plug into generative for certain skills.

And so I think that's the part that's missing. If you remember the early days, you would ask it questions and it'd be like, I don't know, I don't know, I don't know. That's where generative can now accelerate the value, because now it does know. But all those skills, like turn on the lights, still require the deterministic aspects.

So that cognitive bit is still really, really key. And that's why the company I started set out to offer an agent for business, so you can create your own Siri or your own Alexa. It's the tools that Siri and Alexa use behind the scenes.

Richie Cotton: Okay, so it sounds like when you're trying to complete tasks, because it's using software underneath, you want something quite deterministic. And so really the generative layer is just a natural language interface to those underlying deterministic models that actually do stuff. Is that correct?

Robb Wilson: Yeah. And you know, if you think of car robotics, the ability to actually turn the steering wheel left and right, you're not going to use generative AI for that. The reasoning, the decisions on when and how much to turn, that potentially could use it, but not the actual act of doing it.

The same goes with our neurons, how we fire and how we move our fingers. We don't need language for that. That's fairly deterministic in terms of how it operates. So the future of AI is a woven combination of deterministic and probabilistic, and that perfect marriage of the two is where we're going to really see things come to fruition, as we're seeing with robotics and generative right now. Like, wow, these two together are miraculous.

And I think within five years, maybe five years or more... I'm usually really, really excited, really optimistic about this stuff. We'll start to have reasoning engines, and I think that'll be the next big thing in AI. And I don't think it's going to be based on generative; it's going to be based on some combination of learning.

With these reasoning engines, then, we'll start to see maybe a hybrid of generative and deterministic come together, but that's still years out, I think.

Richie Cotton: Okay, so we've got different kinds of AI that'll probably come together into some kind of thing that makes sense. I like the brain analogy there.

So if you're thinking like a human, speaking is just a natural language interface to the interesting stuff that's happening in the back of your brain. We talked about AI sometimes giving bad responses, so that means you need some kind of testing in place to make sure you've got the right quality and things work at least most of the time.

So how do you go about testing chatbots? 

Robb Wilson: Yeah, testing is a tough one. So generative has made creating a chatbot so much easier. What I think is still unknown is whether that's worth the amount of testing it adds, because it makes testing significantly harder. And I think with a lot of projects, the value of the fast development might be diminished by the elaborate complexity of testing.

A lot of folks don't get that, because they're like, look how fast I made this. Yeah, well, you haven't tested it. And if you think about OpenAI and how much has gone into testing and then the reinforcement learning, the labor versus ingesting the data in the first place, you could argue that there's significantly more effort on the testing side.

And so the funny thing is, sometimes the fastest way, and the best way to minimize testing, is to not use generative where you don't need to. That's the first thing most people don't think of: just because it's easier to build, you still put it through testing, and now it takes forever. The second thing is: use real-world data.

A lot of times we sit there and we guess how users might interact with the system, and we're always wrong, even linguists. And so, use a corpus of real data. One of the things that we think is really valuable is, before you even launch your chatbot, launch the human version. Just say, hi, how can I help you? and collect the corpus of data of what people say.

Don't help them in any way in an automated way; as soon as they answer, send it to a human. But what happens is you build this corpus that you can train against in the future, so that you can ensure before you launch that it's going to have satisfactory results.

Richie Cotton: Okay. So you've got to collect the real human data just to make sure that all the stupid questions people are asking are accounted for.

All right. And then after you've launched, I presume there's another monitoring phase where you check that it's working against all these new stupid human questions. So can you talk me through how you go about doing this?

Robb Wilson: This is a good one, because I was just recently talking to the guy who managed and created the knowledge base for NASA, which is one of the most successful projects, as you could imagine, in knowledge management.

And the first thing he'd say is: you're never done. There is no, oh, now you're done, right? How do you monitor it? What you're beginning is an ongoing regimen of constantly grooming, constantly reviewing, constantly updating information. Like, when are you done learning? Yeah, we do graduate, but are we done learning?

No, we're just starting our learning. So when you launch a knowledge system, you're at the beginning, you're not at the end. And so the goal there is to continue to tune and monitor. Human in the loop is key: making sure that somebody is observing these conversations. But the system itself can also observe itself.

Creating a dual agent, right? This sort of agent-based side of generative that's watching the conversations it's having, and then notifying or marking or tagging them so that you can review them later and make sure that those answers are correct. Also, people miss using generative AI, using conversation, as a way to manage knowledge.

So when people give feedback, like a thumbs down on an answer (and it's key to make sure you get that feedback in the first place), it goes to the person who owns that knowledge and then gets reviewed constantly. One of the biggest things we do on knowledge is put a time to live on it. All knowledge has a time to live.

We worked with Morgan Stanley recently, on a podcast, and talked through this. They did the same thing: they don't let any knowledge live longer than a year without being reaffirmed. It goes back to the original creator, and if they don't validate it, it gets deleted from the system.
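[Editor's note: the time-to-live policy described here could be sketched as below. The `KnowledgeItem` fields and the one-year window are illustrative assumptions, not OneReach.ai or Morgan Stanley specifics.]

```python
# Sketch of a knowledge time-to-live sweep: keep only items that were
# reaffirmed within the TTL window; in a real system, expired items
# would first go back to their owner for revalidation before deletion.

from dataclasses import dataclass
from datetime import date, timedelta

TTL = timedelta(days=365)  # "no knowledge lives longer than a year"

@dataclass
class KnowledgeItem:
    text: str
    owner: str
    last_affirmed: date

def sweep(items: list[KnowledgeItem], today: date) -> list[KnowledgeItem]:
    """Return only the items still within their time to live."""
    return [it for it in items if today - it.last_affirmed <= TTL]
```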

And that's one of the key things they do to keep their knowledge up to date. So I think it's just looking broadly.

Richie Cotton: That's a pretty aggressive strategy for keeping your data up to date. Just like, okay, this is one year old, delete it. I was surprised, but it makes sense. And on top of this, you've got your chatbot working, you're monitoring it.

I think a lot of companies have built prototypes in the last year, and they're trying to work out how to scale up. So what do you need to do in order to make your chatbot scale?

Robb Wilson: Well, boy, this is like, what, five episodes on how to scale. So first of all, it depends; your channel matters, right?

If it's a voice channel, or you eventually think it's going to be a voice channel, you have a significantly higher hurdle to cross on scalability, because that latency really matters. You have to keep those responses under three seconds, or else it starts to feel really laborious to the end user.

And so from the beginning, you're not only worried about whether it will work at scale, but whether it will perform consistently and fast at scale, and whether all the skills that are getting contributed to the system will work consistently and fast. It's an ongoing monitoring effort of making sure it works.

But one of the things that I think is critical is a microservices architecture, one that is holistically able to scale with traffic up and down, so you're not paying for a lot of resources while you're not using them; it's auto-scaling up and down. So composable architecture is that baseline.

You've got to put that in place, and with the cost and effort you put into it, you end up with what looks like a basic, simple chatbot, but it's not. You have to make this investment in an infrastructure for the future. And it's one of the things companies have a really hard time doing: making that big investment upfront and then deploying something that seems lackluster because they've spent so much on the underlying carriage, to make sure that as it grows, it can be supported.

It's also one of those things where, if you miss it, if you don't architect correctly in the beginning, you have to tear it down and start over. It's not something that typically you can go in and retrofit. So microservices, composability: key.

Richie Cotton: Okay.

So just really thinking very carefully about the architecture upfront. I like the idea that you can scale the service level, so if you hit peak load, then maybe you're not providing absolutely everything at once. Okay.

Robb Wilson: Yeah. The other thing is, today, you know, we call them toolkits.

So does Gartner. You look at Google and Amazon: these are toolkits. You have to put the pieces together. I think of this as like Wozniak in the garage, where he's building the computer for you, and you're like, hey, I want a computer, and he's like, oh, I'll build one for you. I'll use this hard drive and this RAM.

If you think about what a cognitive architecture is, you're using this database, these microservices, this event-based system, Redis, and you pull it all together with a phone system and OpenAI and multiple other analytics systems, and you hope it all works well, right? And I liken this to building computers for everyone in your company.

Like what would that look like? And then the other is the other end of it is like these point solutions like oh, this is a chat bot for you know Planning vacations. That's like a calculator, you know where everything is Decided for you down to the use case. It doesn't do anything, but, but math, um, or a word processor that specifically only does word processing.

Well, I think there's a new generation of tools coming out, and we're one of them, which is a cognitive architecture in a box. Out of the box, all the parts and pieces have been selected for you; you just deploy it, and it is already fast. Now, are you going to agree with the specific choices? Maybe not always.

But overall, you're just going to take the whole computer because the idea of making your own and managing it just is too much for most companies. 

Richie Cotton: Okay, absolutely. Um, so do you have a sense of when you'd want to build something on your own and when you'd say, okay, I just want an out of the box solution?

Robb Wilson: Yeah, I think it goes like computers. If you have high compute needs or very specialized needs, then you're going to build it, because you're going to want, let's say, ten times the RAM, right? Look at things like Amazon: they have specific builds for a lot of their software, from a hardware standpoint.

So for an Amazon, it matters to them, and they custom build. But your average company should not be building their own computers. It doesn't make any sense.

Richie Cotton: All right, I like that. So if it's not your expertise, go with the simple thing. And if you're an expert, then okay, maybe you can get into the weeds.

Robb Wilson: Yeah, focus on the software and the solution, not the undercarriage.

Richie Cotton: Now, earlier on, you mentioned that you thought business automation was a good idea. And this goes from the idea of just having a chatbot that's going to answer questions for you, to an agent that's actually going to perform tasks.

So, how does the design of agents differ from the design of chatbots?

Robb Wilson: Oh, so the concept of an agent. Think of how generative works, right? You put in five words, and then it uses those five words to predict the sixth word. Then it includes the sixth word, takes those six words, and predicts the seventh word.

Then it includes those seven words, and so on and so on, and we end up with something coherent, which is like, wow, amazing. Now think of this, but for decisions. So let's say that you have a workflow, like a task: I want to get from here to Denver, or I want to plan a party.

Um, where each step in the task is like, okay, first we're gonna get a list of the guests, right? And then the next thing we're gonna do is, you know, send out an invitation and make sure that they're available. And then based on what happens next, we're gonna take the results of those two things and decide the next thing to do.

And then once we get the results of that, we'll take those three things and decide the next thing to do. So instead of next word prediction, you're talking about next action prediction, right? And that's really valuable, because when you try to write this in code in a deterministic sort of way, you have all these if-then statements, if this, if that, and that's just heavy.

So what an agent does is next action prediction based on the results of the prior actions, which is what we do as humans. That's what agents essentially are in the system. Where I like to see them used and introduced is what I mentioned earlier: not doing the work, but actually observing the conversations.

It's a great way to start getting comfortable with agents. The conversation at chatbots, like you said, is more linear: hey, we're going to take you down a road, that's a guided tour, right? And then over here, we're going to observe the conversation, and if you go off track and all of a sudden you want to do something unexpected, we'll have the agent come in and say, oh, wait, you want to book your hotel before the flight?

Okay, we're gonna do it in this other way. So, yeah, with agents: one is next-word, but very linear; the other is next-action, and, you know, it deviates.
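The next-action loop Robb describes can be sketched in a few lines. Everything here is a hypothetical stand-in: in a real system, `predict_next_action` would call a language model, and each action would be a real skill or API call.

```python
# A minimal sketch of "next-action prediction": each chosen action is
# appended to a growing history, and the next choice is conditioned on
# everything so far -- analogous to next-word prediction over tokens.

def predict_next_action(goal, history):
    """Stub policy: picks the next step from the goal and prior results."""
    if not history:
        return "get_guest_list"
    last_action, _ = history[-1]
    if last_action == "get_guest_list":
        return "send_invitations"
    if last_action == "send_invitations":
        return "check_availability"
    return "done"

def run_action(action):
    # In practice each action would be a skill: an API call, a workflow, etc.
    return f"result of {action}"

def agent(goal, max_steps=10):
    history = []  # grows like the token sequence in a generative model
    for _ in range(max_steps):
        action = predict_next_action(goal, history)
        if action == "done":
            break
        history.append((action, run_action(action)))
    return [a for a, _ in history]

print(agent("plan a party"))
```

The point of the sketch is the shape of the loop, not the stub policy: swapping the if-then rules for a model call is what turns heavy deterministic branching into next-action prediction.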

Richie Cotton: Uh, I like that parallel. And I guess it depends on how open world your task is, uh, to determine how much Gen AI stuff you need, as opposed to how much deterministic stuff you need.

Are there any hurdles that need to be overcome in order to build business automation?

Robb Wilson: Yes, it's incredibly hard. It's complicated. Um, there are a few hurdles that I'd say are the top ones. The first one is, um, human in the loop: make sure that you have that in place. It's very hard to make sure that if the chatbot fails, there are humans available.

I'd say that's one of the most difficult things for people to do, because that's not just technical; you also need to have processes and humans available. Um, and the cognitive architecture, of course. Massive. You can get buried in that. You can spend so much time integrating different components together, integrating your phone system into, you know, some sort of microservice, and then that into generative AI, and then that into your reporting system, that you can get absolutely mired in it.

Um, and then discovery. You know, how do people find these things? How do they use them? How do they know when there's new knowledge, when one day it doesn't know something and the next day it does? How do I know that I can now go back to it? This is one of the biggest challenges in the space: when we have a graphical UI, we can see when new features get added.

But when it's voice, it's invisible. So I'm a huge advocate of, and I think a lot of folks in the space see this coming, the idea that the new language is going to be words and micro UIs, or graphical UIs, intertwined. So when you say, I want to crop an image, a little cropper is going to come up with a slider.

You're not going to have to navigate Photoshop, but you'll get this little tiny UI. So imagine that answers are text, answers are potentially audio, answers are video, answers are pictures, answers are micro UIs: little tiny applications that might have been built on the fly or pre-built to do specific tasks.
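The "answers as micro UIs" idea can be sketched as an assistant reply that carries a small declarative UI spec alongside its text, which the client renders inline. All the field names below (`micro_ui`, `controls`, and so on) are hypothetical, not a real protocol:

```python
# A toy reply for "I want to crop an image": instead of plain text, the
# answer bundles a declarative spec for a tiny widget the client can render.

def answer_crop_request(image_id):
    return {
        "text": "Sure, drag the handles to crop your image.",
        "micro_ui": {
            "type": "image_cropper",  # the client picks a widget by type
            "image": image_id,
            "controls": [
                {"kind": "slider", "label": "Zoom", "min": 1.0, "max": 4.0},
                {"kind": "button", "label": "Apply crop", "action": "crop"},
            ],
        },
    }

reply = answer_crop_request("img_123")
print(reply["micro_ui"]["type"])
```

The design choice is that the conversation stays the single interface while the spec, not the chat system, decides how each answer is presented.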

Richie Cotton: I love the idea of a micro UI, just a tiny application that's going to do one single task. I guess the idea goes back to the Unix philosophy, where you've got lots of small programs that play well with each other rather than having one large one.

Robb Wilson: We do it now when you send money, like Apple Pay or whatever, you can see that's the beginning.

You go into your text message, and it's just a micro UI: you add the money, you know, it's a plus, you add the money in there, and then off it goes, the transaction is made. We can see that Apple has begun taking us into this realm.

Richie Cotton: Okay, that's a lot to process. So, you mentioned the idea of a human in the loop, and we've had a few guests on the show talk about how there's a happy scenario where AI augments humans, and then there's a more terrifying scenario where AI just replaces humans. So can you talk about the cases where agents are going to be helpful in increasing human productivity, and the other case where they're going to replace humans? When do you want a human or not?

Robb Wilson: This, you know, all of these questions take us to an existential place, like you just keep going to the meaning of life, right?

Um, so the first question is, like, replacing humans. There are two ways to look at work. We all look at it as something we dread more than anything else. But if you really sort of step back and think about it, it's allocated time spent serving other people, which is an interesting idea when you think most people's jobs are at service to others versus themselves.

And when you work too much, you could almost say you've gone overboard at service to others and not enough to yourself. And the fact is that our social structure is such that we spend so much time at service to others, right? It kind of sounds good after a while, right? You're like, wow, we're all Mother Teresas, helping our fellow man, right? We go to work, and yes, we receive money and reward for this, and that's our gratitude, which we must have. But at the same time, it still doesn't change the fact that we have a structure for serving others, and that obviously makes us feel good.

That's what we want to do: we want to be at service to our community, not just ourselves, or work wouldn't exist, right? So this is something we've created. It doesn't matter if robots can be at service to other humans; it's not going to stop us from wanting to be at service and to contribute to our society.

Even wealthy billionaires are still working, right? They're still doing things for other people. And so I think it's just in us as pack animals to continue to want to do other things for other people. So I'm not too worried about replacing humans because at the end of the day, we wake up in the morning and do want to be at service to our community and we'll find new ways to do it, just like we always have.

Our jobs don't resemble the jobs that we had a hundred years ago, 200 years ago, and they won't resemble the jobs we have today a hundred years from now. But we'll still wake up and try to find new, innovative ways to be at service to each other. So I'm not too worried about that. And I think the transition right now is that sometimes we're at service to each other in ways that involve very routine, mundane tasks that make us unhappy. Um, and so you think, well, is there a world where I can be at service to others but also enjoy it? Like, can I do more of that? I think that's the area that will keep humans in it. And then there are areas where I'm at service to others, but I'm not really enjoying it myself.

It's at a great expense to myself. Especially dangerous scenarios, like working in a coal mine or things like that, right? These are areas that are ripe for automating, because why can't we be in a world where we're at service to others and it's not a huge sacrifice to us personally? So those are the ones I think we'll automatically start replacing.

And no one's going to argue about those, right? If it's physical danger, you know, someone hanging off of a telephone pole, um, or with the threat of getting electrocuted, no one's going to argue with replacing those kinds of things first. So I think we're just going to chip away, and the jobs that we enjoy and the ways that we like serving society, we're going to keep those.

Richie Cotton: Excellent. That's quite an optimistic take, and I do like the idea that people fundamentally want to help each other, and hopefully we get rid of all the nasty jobs that people don't like. All right, so, um, all this use of AI agents leads to something I know you've written a lot about, which is hyperautomation.

So can you tell me what is hyperautomation and how is it different from regular automation? 

Robb Wilson: It's just a fancy way of saying automating a lot, and better. So you have bad automation, and then you have good automation. Hyperautomation is this idea of driving towards autonomous companies.

So you have this concept of an autonomous car. Well, believe it or not, an autonomous company is an easier task to achieve in a lot of cases, depending on the company itself. Um, you know, having back-office operations be autonomous, I'd rather take that on than a car driving around in traffic conditions in New York.

Um, I think the reason we see that getting less attention is because of the diversity of the tasks that are out there, but we'll start to see that hyperautomation is sort of the act of automating really fast towards that idea of fully autonomous. Again, it'll be at service to people in some way, right?

It'll be so that we can get to, you know, the idea of a classroom with a teacher for every two kids, right, instead of 30 kids. It'll be at service to that. So you automate the education system so that we can have a greater ratio of teachers to students. Because we like to teach, and kids like to be taught by humans.

So do adults. So we'll keep doing that, but we could decrease the ratio by getting rid of all the back-office work.

Richie Cotton: Definitely getting rid of marking student homework, I think, is a priority there. But I do like this idea of autonomous companies. So, I start a company, it goes and runs itself, and I get some money while I'm sat on the beach.

That sort of thing. It's very compelling. 

Robb Wilson: Yeah, I think that's going to be a transition, and some people are going to make a lot of money in that transition. At some point, you know, everyone will be able to do it. But right now, I think there's some window of time here where you can imagine what an autonomous company could do in terms of profitability at this moment.

Richie Cotton: So how do you get there? If you say, I want to increase the level of automation at my own company, where do you start?

Robb Wilson: I'm a big fan of starting internally. So start on internal use cases. Start working with your employees and automating employee tasks. You know, they're more forgiving. Get them on board so that, when there are mistakes, they understand it's not the end goal; it's the beginning of learning and cutting their teeth on this stuff. So get them involved: hey, we're learning, and we need your feedback.

Um, and then the second big thing is: don't centralize it. Most companies think of IT as the center that is going to do it all, and then everybody else goes to IT to get it done.

And if we think about this in terms of computers, there used to be a computer department. In the 1960s, if you worked on a computer, you were in the computer department. And you were probably a woman, ironically. Today, you know, we're all in the computer department, I guess. Um, so I think of software development and AI as being in that same phase as the computer department.

I think very soon there won't be an IT department; there are just going to be people that contribute to automation. And I think you should try to transition your IT team into an enabler of the rest of the org, as a teacher, as a consultancy, versus the doers of all automation.

Richie Cotton: I like the idea that you should be working on internal tasks first in terms of automation.

Are there any specific tasks that you think are typically ripe for automation?

Robb Wilson: All right, I would focus on internal tasks, and then the ones that will teach your organization the most. So, knowledge, once you've sort of accumulated the knowledge of your organization. If you think about that agent concept of taking actions, well, in order to take actions, AI needs knowledge.

It needs to know about your org in order to not screw up those actions. It needs to have information about what to do and what not to do. What are the specific things to do? So you think of building that knowledge base as core: build that out, get that right, get that system humming, and then, once that knowledge is in place, focus on the automation of tasks using that knowledge.
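The knowledge-first idea can be sketched as an agent that retrieves from the org's knowledge base before acting, and escalates to a human when it finds nothing. The knowledge base and keyword matching below are deliberately simplistic stand-ins for a real search or embedding index:

```python
# Toy org knowledge base; in practice this would be a searchable index.
KNOWLEDGE_BASE = [
    "Refund requests over $500 require manager approval.",
    "Customer invoices are issued on the first business day of the month.",
    "The VPN must be used for all remote access to internal systems.",
]

def retrieve(query, kb=KNOWLEDGE_BASE):
    """Return KB entries sharing at least one word with the query."""
    words = set(query.lower().split())
    return [doc for doc in kb if words & set(doc.lower().split())]

def act(task):
    # Ground the action in org knowledge before doing anything.
    facts = retrieve(task)
    if not facts:
        return "escalate to a human: no grounding knowledge found"
    return f"proceed using {len(facts)} relevant fact(s)"

print(act("process a customer refund"))
print(act("book a conference room"))
```

The ordering is the point: the retrieval layer is built and trusted first, and only then are actions automated on top of it.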

So now that agents can go get information on their own and swarm, they can start doing tasks at scale.

Richie Cotton: All right, so build something, learn about it, that's going to help you create more things, and gradually it's hopefully going to snowball, and then you get the chance to automate it.

Robb Wilson: Yeah. Knowledge, knowledge, knowledge. You know, knowledge management's kind of a nerd thing, I think. It should become cool in the org. It should be an amazing opportunity for most people to get into the whole knowledge management area and understand that knowledge management is a process. If you look at the best examples, like Wikipedia and Google, a lot of money is spent making them searchable.

It's not a given for every web page; we spend money on keywords, and there are people that are experts in creating those. Same with Wikipedia: there are volunteers and people involved, and it costs like 400 on average to make a Wikipedia page. So you realize that making your knowledge usable is a task that you have to start with.

Once you have that core, that corpus of valuable knowledge, that is the base that you can work on.

Richie Cotton: And for people in management, how might you want to change your business strategies to account for the fact that you've got all this powerful AI now?

Robb Wilson: That's a tough one, because the question is: is your business strategy sound as it is, right?

Like, that's a big assumption in a lot of cases. Look, if you look at most business strategies that we're probably going to believe in, we're going to say they are customer-driven, right? I think if your business strategy is to serve your customer first, then you're not going to have to change much.

These are just new tools for serving customers better. If your business strategy is, you know, making money at the expense of customers, then this technology is very dangerous. It's going to allow you to really abuse your customers in ways that you might find are not good for business in the end.

Um, so you saw the Air Canada thing, I think, where the chatbot was giving wrong information, and they got sued because somebody who had asked if bereavement was covered got the wrong answer. And so we can see that being hasty to go out there and save money on customer interaction while not improving the experience for the customer is a dangerous move in a generative AI world.

Richie Cotton: Absolutely. It's an interesting dichotomy there: you can either use AI to improve the customer experience, or you can effectively abuse your customers with it if you do it wrong. So you've got to be very careful about being on the right side of that.

All right, so just to wrap up, what are you most excited about in the world of AI at the moment?

Robb Wilson: I'm really excited about this concept of decision-making agency, um, you know, getting to a place where we have this hybrid of generative and deterministic, this idea of reasoning and systems that can reason.

Um, one of the things to think about, and a lot of people are thrown sideways by this. I was talking to Gartner a couple of months ago now. We always think about a consumer, right? We think of a typical consumer, whether B2B or B2C. But a lot of us don't realize that, very shortly, one third of future consumers will be robots.

They're going to be AIs that are reasoning and making decisions. Now, it starts with buying your printer ink, right? Because I don't care where you get it, just get me good ink that's cheap. And so my printer is going to buy ink and just let me know, like, hey, it's coming. And when I think about that, I get more excited, not about printer ink.

I don't do that much printing. But when you put that into healthcare, you imagine that we're not going to revamp the healthcare system quickly; that's kind of a myth. But we could put a layer on top, a usability layer of AI over it. So you have this AI system that helps me, as a consumer of healthcare, manage the antiquated healthcare system we have.

It's the same idea as putting an AI on top of all of my software. If you think of the healthcare system as a bunch of apps on my phone that I don't know how to use, and then you give me an AI that can help me manage that antiquated system, fill out the forms for me, call and make the appointments, do all of the things, figure out what I should and shouldn't do, and intercommunicate with my providers on my behalf, and that can be done at service to me in a way that I can observe. Like, I could have two of these systems arguing with each other about whether I should eat a donut today or not, and then I can decide, right, based on the better argument. So this idea of a layer of AI that sits on top of these antiquated systems gives me hope for change really fast, and healthcare is where I'm most excited.

Richie Cotton: I do love the idea of AI being a layer over people or companies you don't necessarily want to deal with. I can see how it could be really useful for things like interacting with taxes as well. Nobody really gets excited about doing taxes. And probably someone's got in-laws that they don't want to speak to, or maybe that's the way I'd communicate with them.

Okay, so just to finish, do you have any final advice for people who are designing generative AI applications? 

Robb Wilson: Don't put generative AI in the center. You know, first build out that orchestration layer. Think about that cognitive architecture. Get the computer running before you think of generative AI as an OpenAI API that feeds a UI. Think more of an orchestration layer: put that classification in place first, and figure out, is this right for a generative model? Is this question better served by a very specific answer? Get that in place first, and then move forward.
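The "classification first" advice might be sketched as a router that checks each question against vetted deterministic answers before falling back to a generative model. The intents, canned answers, and `call_llm` stub below are all illustrative, not a real API:

```python
# Vetted, deterministic answers for specific intents.
DETERMINISTIC_ANSWERS = {
    "store_hours": "We're open 9am-6pm, Monday through Saturday.",
    "refund_policy": "Refunds are accepted within 30 days with a receipt.",
}

def classify(question):
    """Toy intent classifier; a real system might use a trained model."""
    q = question.lower()
    if "hour" in q or "open" in q:
        return "store_hours"
    if "refund" in q or "return" in q:
        return "refund_policy"
    return "open_ended"

def call_llm(question):
    # Placeholder for a generative-model call.
    return f"[generated answer to: {question}]"

def answer(question):
    intent = classify(question)
    if intent in DETERMINISTIC_ANSWERS:
        # Specific question: serve the exact, vetted answer.
        return DETERMINISTIC_ANSWERS[intent]
    # Only fall through to the generative model for open-ended questions.
    return call_llm(question)

print(answer("What are your hours?"))
print(answer("Tell me about your company"))
```

The routing layer, not the model, is the center of the design: the generative call is just one branch it can take.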

Richie Cotton: Uh, think before you implement it, rather than just going, oh, we must have generative AI everywhere.

That's pretty sage advice.

Robb Wilson: Think, yeah, yeah, yeah. We have AI, but we still need to think.
