Unlocking Humanity in the Age of AI with Faisal Hoque, Founder and CEO of SHADOKA
Faisal Hoque is the founder and CEO of SHADOKA, NextChapter, and other companies. He also serves as a transformation and innovation partner for CACI, an $8B company focused on U.S. national security. He volunteers for several organizations, including the MIT IDEAS Social Innovation Program. He is also a contributor at the Swiss business school IMD, Thinkers50, the Project Management Institute (PMI), and others. As a founder and CEO of multiple companies, he is a three-time winner of the Deloitte Technology Fast 50™ and Fast 500™ awards. He has developed more than 20 commercial platforms and worked with leadership at the U.S. DoD, DHS, GE, MasterCard, American Express, Home Depot, PepsiCo, IBM, Chase, and others. For their innovative work, he and his team have been awarded several provisional patents in the areas of user authentication, business rule routing, and metadata sorting.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
I look at AI as a kind of mirror of humanity, because we're feeding it and then we're being fed from it, right? So it's an interesting dichotomy.
AI is in an early, nascent stage: it's a child that is getting to adolescence, and maybe it will grow up.
Key Takeaways
AI should be viewed as an active partner in your organization, requiring a shift in thinking from passive software to a resource that can be allocated tasks and responsibilities, similar to human resources.
Personal responsibility in AI usage is crucial; individuals should aim to use AI for positive outcomes and avoid outsourcing their critical thinking to ensure they maintain their value in the workforce.
Organizations need to develop a framework for AI governance, such as the CARE framework, which includes considering catastrophic risks, assessing opportunities, regulating AI use, and having an exit strategy for when things go wrong.
Transcript
Richie Cotton
Hi Faisal. Welcome to the show.
-
Faisal Hoque
Thanks for having me.
-
Richie Cotton
So your book poses a really big question, which is: if AI is powerful enough to do anything, then what's the point of humanity? What's its value? So I'm curious, did you get anywhere with answering this?
-
Faisal Hoque
Where we left it in the book, and I've had a lot of time to think about it since, is that we don't know. We really don't know the answer. It will depend on what we collectively end up doing. Because if you really ask what humanity means right now, at a fundamental level you can say, okay, it's our ability to be free, to choose.
-
Faisal Hoque
What do we want to do? What do we want to eat? Where do we want to go? What creative pursuits do we want to follow? Whatever the case may be, once you start outsourcing all of that to something else, you start losing yourself, right? So that's really the fundamental thing. And I look at AI as a kind of mirror of humanity, because we're feeding it and then we're being fed from it.
-
Faisal Hoque
Right? So it's an interesting dichotomy. But I think what will happen depends on what we end up doing.
-
Richie Cotton
Okay, that's a good point, that there is a lot of uncertainty around this and we don't really know what's going to happen. But I do find it interesting, the idea that if we're outsourcing bits of our lives to technology, particularly to AI, then we lose a bit of autonomy there. And you had a great analogy in your book, that it's like we're parenting AI.
-
Richie Cotton
Do you want to expand on that?
-
Faisal Hoque
Sure. Look, I don't know whether you have children, but I have a son, and I was once upon a time a child myself. As my parents got older, I've seen this progression where the parents become the child and the child becomes the parent.
-
Faisal Hoque
So I look at AI as being at an early, nascent stage, where it's a child getting through adolescence, and maybe it'll grow up. And what happens when the parent becomes the child is that you basically start depending on whatever your child tells you to do.
-
Faisal Hoque
Right? They become the parent. So I look at AI from that angle. My research team, which by the way includes people with a philosophy background, spent a lot of time batting this idea around. So what happens if, now, we're being told what to eat, where to go,
-
Faisal Hoque
when to sleep, how to take care of ourselves, what we should enjoy? Pretty soon it's not letting us drive because it's too late. All that stuff, right? Because they have taken over. And I think that's where we will end up, by the way. And that's not just me; a lot of much smarter people who have been working on AI for years think the same.
-
Richie Cotton
It sounds like a very dark idea, where AI controls us and we lose our sense of self-determination. But I suppose you're right. You mentioned the idea that AI might tell you you shouldn't drive. That already happens with insurance: you have the black box in the car that checks what you're doing.
-
Richie Cotton
And if you're speeding, then, yeah, it's going to affect your insurance.
-
Faisal Hoque
Like, three years ago, I was driving on the Brooklyn Bridge in New York, right? And all of a sudden, the car just braked itself. And I said, what just happened? It wasn't because there wasn't enough distance. It happened because it heard some noise, and it reacted to that. And I'm saying, this is crazy. And then it started to veer off to the left-hand side.
-
Faisal Hoque
And I don't drive one of those electric cars. It's a normal car; it's actually a Jeep. I'm like, okay, why are you taking over? I'm a completely capable driver. I know what the distance is and all that. I don't want you to.
-
Faisal Hoque
So you can start thinking about how far this can go, right? It can get very interesting.
-
Richie Cotton
Yeah. Actually, what do you think the endgame is there? Like, if you take it to an extreme?
-
Faisal Hoque
I mean, this is the conundrum, and this is the question, right? Just because we can, should we? That's really the question, and as we get deeper into our conversation, I'm sure we'll come back to it. We've seen this progression with many types of technology over the history of mankind. We have been in pursuit of creating something smart that will help us, as we talk about in Transcend, going back centuries. This is not a new pursuit, but we're finally here.
-
Faisal Hoque
Right? So now that we're here, the question is: how far do we want to take it, and should we? Those are the bigger questions. Because just because we can create a car that senses whether I'm feeling sleepy, and that I didn't have my espresso today, and automatically drives me to an espresso bar...
-
Faisal Hoque
No, I don't think so. I want to decide that for myself, right? And just because I bought a pair of shoes two years ago, I don't want to be told that it's time to buy a new pair. I want to decide whether I should or shouldn't. So that's the dark side.
-
Faisal Hoque
But the good side is, imagine... with my mother especially. She recently passed away, but she was suffering from dementia, and she needed a constant companion; she was in a nursing home. I could easily see AI as a companion that could have helped her, you know?
-
Faisal Hoque
She was originally from Bangladesh, and she spoke English because she lived here. But as time went by, she would start in English and then veer off into our mother tongue, and then she had a very difficult time communicating. You can easily imagine an AI assistant stepping in, translating, or guiding her, and keeping the communication going.
-
Faisal Hoque
So there's lots of good stuff along with all the bad stuff we can envision. As you've seen from the galley of Transcend, we're not painting a dark picture. I tried to take a pragmatic approach, whereby we say, look, we're pragmatically optimistic, but what happens to humanity depends on what we do in the years ahead.
-
Richie Cotton
So I think that's a very good point: people have been thinking about intelligent machines for hundreds, if not thousands, of years, and it's just become a lot more urgent in the last few years. And I really like the idea of having assistants for people with dementia. It's exhausting to care for loved ones like that for months or years at a time.
-
Richie Cotton
So, yeah, that seems like a brilliant use case. In fact, another one of the big themes in your book is AI affecting almost everything, but a lot of the impacts are relatively trivial, and there are only going to be a few very important use cases. Do you have a sense of what those use cases are?
-
Faisal Hoque
Or what those points are going to be? I think the use cases depend on who you are and what your interests are, right? We talked a lot about purpose, so it depends on what the individual's purpose and the organization's purpose is. I probably shared with you that the book supports cancer research, because I have a cancer-surviving son. So I see things like drug discovery, optimizing patient care, cutting the cost of services, those kinds of applications.
-
Faisal Hoque
Huge. And even as a researcher and a developer of technology, I constantly find myself using AI for on-demand research, even for the stuff that I've done over the years of my career. It's very easy now, having all these manuscripts in a large language model, or a library of code, to say: pull me this, pull me that. It used to take days and days to figure out even what I did, because I don't remember all the stuff I did.
-
Faisal Hoque
So there are a lot of use cases. National security use cases, where you can maybe come up with predictive models, not for anything fantastical, but from a defensive point of view, right? So there are all kinds of great use cases.
-
Faisal Hoque
I think there are major, impactful use cases that can be very relevant. I don't really find the idea of, okay, can I create a cutesy video or a fake video, very... I mean, to me, that's trivial usage. It's entertainment. We're already far gone as a society in how much we like to be entertained.
-
Faisal Hoque
And that's why you see the emergence of TikTok and Reels and all that. People are addicted to it. And AI is obviously feeding that, because it's learning your behavioral pattern through the algorithm and letting you create all kinds of entertaining things. I personally don't find those use cases very meaningful.
-
Faisal Hoque
Right. But the use cases that I talked about could be game-changing, for various reasons.
-
Richie Cotton
I have to say, I have actually been quite enjoying all these recent AI videos of cats cooking or whatever; it's a bit of a trend. But I'm right there with you that using AI to cure cancer is a slightly bigger use case, for sure. So there are different levels of things.
-
Richie Cotton
And you're right: there are some huge problems that humanity has been struggling with for decades, even centuries, and we need all the technological assistance we can get to solve them.
-
Faisal Hoque
Well, for sure. I mean, even things we take for granted. Once you start thinking about it, take the medication that you take. You and I may be in exactly the same age group, but the reality is that based on your DNA structure, based on where you are, based on your medical history, the same medication may not have the same effect. Yet we get prescribed the exact same thing.
-
Faisal Hoque
You've got blood pressure, so you take this; you've got that, so you take that. We're not at that level of custom service and custom care yet, but we're getting there. And I think AI can help us tremendously in collecting that data, synthesizing the data, recognizing the patterns, and giving you the right solution for the right problem.
-
Faisal Hoque
Right. That's what I'm most excited about. And you can apply that to just about anything, whether that's business model improvement, or a cancer cure, or geopolitical relations. You name it, you can probably apply the same basic fundamentals.
-
Richie Cotton
Okay. So, I do love a framework, and I know your book has two of them. I feel like this is where we get into the practical details of working with AI. So, do you want to tell me about your OPEN framework?
-
Faisal Hoque
We tried to be useful, but we also tried to be very simple, in the sense that anybody can grasp it. You don't need to be a management scientist or a computer scientist to understand how the framework works. And one of the things we did is call it a framework,
-
Faisal Hoque
and then we embedded a lot of sub-methodologies in it. We don't really call it a methodology, because we borrowed from and applied lots of different disciplines: at one extreme, philosophy for mindset, and at the other, management-science-centric tools like SWOT analysis and program management techniques.
-
Faisal Hoque
So if you look at OPEN, the idea is that we have to be open to possibilities; that's where OPEN comes from. And CARE, which we'll talk about in a second, is about caring enough to protect humanity. You have to be responsible. You need a governance model. That's where those two words, OPEN and CARE, come from.
-
Faisal Hoque
OPEN stands for Outline, Partner, Experiment, and Navigate. What it really boils down to is that you have to outline the purpose you're trying to achieve, which we touched on earlier in our conversation. What's important to you? Who are you? What do you want to do with your life? How you use these technologies really depends on that.
-
Faisal Hoque
So you basically come up with an outline of possibilities. That's the first step. The second step is to decide who you want to partner with and what you want that partner to look like.
-
Faisal Hoque
And by a partner, I'm not just talking about a vendor or a technology platform. Should you use GPT, or Microsoft, or Google? I'm not just talking about that. I'm talking about what you want your AI, as a partner, to be doing.
-
Faisal Hoque
We're already moving to agents, and we're talking about agentic AI. Obviously there's generative AI, there's analytical AI, and there's automation AI, a whole variety. So you have to decide what you want that AI to do, and you have to define that AI's persona as your partner. Say you want it to be a research assistant; that's actually one of my biggest use cases.
-
Faisal Hoque
Then that's a persona I define with some criteria, and it does things based on what I do. If you want it to be an assistant that does your scheduling and maintains your calendar, you can do that too, right? So you have to decide what kind of partner you want the AI to be, and also who helps you make that happen.
-
Faisal Hoque
And then you run a big experiment to see what's working and what's not working, and you narrow down what's really meaningful and what's going to give you the biggest return. However you want to define return: it could be time saved, or it could be that I want to quadruple my revenue over the next two quarters.
-
Faisal Hoque
So whatever the goal is, you decide based on that. Maybe you have ten ideas, you narrow them down, and then you experiment with three and say, okay, now I'm going to navigate: implement, measure, and track to see whether it's really adding value or not.
-
Faisal Hoque
Across those four segments there are many, many steps, which we don't have the time to get into here, but basically it takes you through a step-by-step process. And the reason we did it that way is because AI is unlike any other technology.
-
Faisal Hoque
Look at CRM, or supply chain, or customer service; pick any technology. Those are, to me, passive technologies. They were not active participants in your thought process and your engagement. Your Outlook calendar, or your Apple calendar, or whatever calendar: you still pass it your input, and then you look things up.
-
Faisal Hoque
Right. It's not telling you, oh, it's time for you to schedule a doctor's appointment because you haven't had a physical for a year. The last generation of technology wasn't doing that. This stuff is an active participant. So thinking about it from that perspective requires a different way of thinking, a different way of experimenting and executing.
-
Faisal Hoque
That's why we laid out those four steps.
-
Richie Cotton
That's very cool. I do like the fact that there are these sequential steps that take you through from trying to figure out what you want to do, the outlining and planning phase, right through to navigating how you're actually going to use the thing. You mentioned it was inspired by some existing project management frameworks.
-
Richie Cotton
You mentioned agile and a few others. Did you have to customize it to account for any particular facets of AI? What did you tweak to make it your own?
-
Faisal Hoque
Yeah, absolutely. The very example I just gave you: the active participant versus the passive participant changes everything, right? Because before, you would come up with a project plan: okay, I'm going to develop this, then I'll test it, then I'll refine the design, then I'll roll it out.
-
Faisal Hoque
With AI it's a bit different, because it also depends on how sophisticated and complicated the thing is that you're trying to implement. It can get quite involved, and in that scenario it's different. It's not really a sequential process, even though you can argue agile isn't really sequential either.
-
Faisal Hoque
You used to do waterfall, and you can get into all kinds of philosophical discussion about that, but the very nature of this technology is that you have to consider it a partner. And if you treat it as a partner, then you have to put up guardrails: how much of a partner do you really want it to be?
-
Faisal Hoque
Do you want to become the child, or do you want to remain the parent, with it still under your control? That's really the bigger question.
-
Richie Cotton
Can we make this a bit more concrete? Have you got any examples of how businesses might make use of the OPEN framework?
-
Faisal Hoque
Yeah. A lot of this OPEN framework, and the framework I've described, is a derivation of my previous work. I spent years in organizational and technology transformation. If you look at the old days of digital transformation, you used to say: okay, I'm going to do an assessment to see where we are, look at what the opportunities are, come up with an innovation portfolio, allocate resources and money, and then execute.
-
Faisal Hoque
In that example, you're doing a very similar thing here, but the biggest issue is that a lot is unknown, and you're really trying to create personas with some of these things so they become part of your organization.
-
Faisal Hoque
What I mean by that: now you have human resources and you have AI resources, right? You're allocating tasks not just to humans; you're also allocating tasks to this thing, whether it's an agent, a behind-the-scenes automation, or whatever the case may be. It's still a resource. So when you're doing these exercises, your thinking pattern is more about treating these opportunities as actual resources, versus a classic software development model.
-
Faisal Hoque
So that's a huge, huge shift. And you asked me whether any companies or organizations are using OPEN-type frameworks. I would say we are still at an early stage. There are bits and pieces of these practices that I have already seen, where people are looking at this differently, organizing themselves differently, assessing things differently, and allocating resources differently.
-
Faisal Hoque
But I wouldn't say it's at a mature level. Since you brought up agile: you can do a classic maturity model to see where an organization is, and in the AI development cycle, most organizations are probably at zero, maybe one. There's a lot of talk, but unless you're talking about something like the JP Morgan trading systems, or a handful of things Amazon is doing, or a couple of others, we're not quite there yet.
-
Faisal Hoque
Right. But the model I prescribe is based on the current practices of many different types of organizations, AI or not, and it also accounts for AI. I think it's going to become more important, because this is not just systems thinking; it's a thinking system that allows you to constantly adapt and change mid-stream as you find out what's really working and what's not.
-
Richie Cotton
So I'm curious: when you say resources, do you mean something like cloud computing resources, or people and teams, as in human resources?
-
Faisal Hoque
No, I'm talking about something like human resources. I'm not talking about cloud, and I'm not talking about system resources. In the classic network stack you can say: okay, here's my infrastructure stack, here's my system stack, here's my application stack, and these are resources, right?
-
Faisal Hoque
And I'm going to spin up a resource to execute a particular set of tasks. We've been doing that for years, from mainframes to LANs to cloud computing. I'm actually talking about treating AI as a resource that actively does work. What I mean by that: think about it this way; it's already happening. You call customer service,
-
Faisal Hoque
and there's a chatbot that answers your phone, takes your information, and digs up your details. I have just allocated my chatbot to do my first level of service. That's my level-one service, right? And then you say, okay, after the level-one service, if condition XYZ holds, I pass it to my level two.
-
Faisal Hoque
That could be another chatbot. And level three is actually a human, right? So I've consciously decided my level-one service will be done by chatbot one, or agent one, which is AI; level two will be done by another set of AI agents; and level three will be done by a human resource.
-
Faisal Hoque
Right. So if that is my organization, then I've structured it by saying my level-one service is staffed with basic chatbots or agents. Level two is slightly more intelligent: it looks at exceptions, does exception processing, and decides, based on this I pass it here, based on that I pass it there.
-
Faisal Hoque
So I'm consciously allocating work. It's not just about system design; it's also about organizational design and task allocation.
-
Richie Cotton
I guess taking that to the next level, you might just start applying human resource procedures to AI. You could give the agents performance reviews, set them targets, all that kind of stuff.
-
Faisal Hoque
It sounds hallucinatory and funny, if I can use that word, but yes. You might have seen that in the book we talk about coming up with RACI charts. You know what a RACI chart is: it's basically allocating responsibility based on who has the decision power, who is actually doing the work, who is consulted, and who is kept informed.
-
Faisal Hoque
So I do prescribe that if you want to implement agents, you create procedures, at the highest level, in the form of a RACI chart. And then you have to decide: okay, is it really doing the job or not? And in the context of performance, it's not just whether it's doing the task,
-
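A RACI chart for a mixed human-and-AI team can be written down as plain data. This is a hypothetical sketch, not the book's actual charts; the task names and assignees are invented for illustration.

```python
# Hypothetical RACI chart mixing AI agents and humans. In RACI:
# Responsible does the work, Accountable owns the outcome,
# Consulted gives input, Informed is kept up to date.
# All task names and assignees below are made up for illustration.

raci = {
    "Draft support reply":   {"R": "ai_agent_1",   "A": "support_lead",
                              "C": "ai_agent_2",   "I": "cs_team"},
    "Approve refund > $500": {"R": "support_lead", "A": "finance_manager",
                              "C": "ai_agent_2",   "I": "cs_team"},
}

def accountable_for(task: str) -> str:
    """The Accountable party must answer for the outcome, which is
    why critical decisions in this sketch stay with humans."""
    return raci[task]["A"]

for task, roles in raci.items():
    print(f"{task}: done by {roles['R']}, owned by {roles['A']}")
```

Note that an AI agent can be Responsible for routine work, but in this sketch the Accountable column is always a human, matching the human-intervention point Hoque makes next.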
Faisal Hoque
it's how accurately it's doing the task, right? You don't want an AI agent telling you: oh, you've got a headache, you've got a fever, and you've got a broken bone, so that means you need to go immediately to surgery. AI doesn't do that yet, but this is just a dumb example,
-
Faisal Hoque
if I can use that word, a simplistic way of looking at things. So I strongly prescribe that there is human intervention in critical decisions. You have to decide what's critical and what's not critical based on the situation. But I never prescribe that we completely give up our power,
-
Faisal Hoque
hands off. Like the car scenario: do I want my car to veer off in the middle of the road because it senses my blood pressure has gone up slightly from this morning? No, I don't. So you have to decide. Even though we're talking about this at a high level, we will have to get to that level of granularity. If we don't, we're not going to be successful.
-
Faisal Hoque
The AI will start doing its own thing, if you know what I mean. And here's the other reality, a simple mathematical reality: you as one person, or I as one person, versus a network of AIs that has every facet of learning, from biology to philosophy to organizational management to system development. That network will always be smarter than any one of us.
-
Faisal Hoque
So if you want it to allocate resources, and you also give it the power of resource allocation, it is going to come up with a far more efficient model. It may not be the right model, but it will always come up with a better and faster way of doing things than you or I individually could.
-
Faisal Hoque
So there is that danger.
-
Richie Cotton
Okay, yeah. So, as a human, you're not necessarily competing against a single AI; you might be competing against a whole network of AIs in terms of performance. AI is certainly dramatically more scalable than humans.
-
Faisal Hoque
Yeah, that's what it really is. Take my little example: I loaded up my ten manuscripts. On my own, I can only think, did I write this here, or there? Versus just asking: where did I write this? I'm only human; my brain doesn't function as fast.
-
Faisal Hoque
Even in that simple example of retrieving text from a bunch of PDF manuscripts: the combination of OCR and a generative way of interacting with the engine to retrieve the data is a lot faster than I could ever be.
-
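The "where did I write this?" workflow he describes amounts to indexed retrieval across documents. Here is a minimal sketch using plain keyword scoring rather than a real OCR-plus-LLM pipeline; the filenames and text are made up, and a production system would use embeddings or a language model instead.

```python
# Minimal sketch of "where did I write this?" retrieval across documents.
# A real pipeline would OCR the PDFs and use embeddings or an LLM; here
# we just rank documents by query-word overlap. All data is invented.

def score(text: str, query: str) -> int:
    """Count how many query words appear in the document text."""
    words = set(text.lower().split())
    return sum(1 for w in query.lower().split() if w in words)

def find_passages(docs: dict[str, str], query: str) -> list[str]:
    """Return document names ranked by query-word overlap, best first."""
    ranked = sorted(docs, key=lambda name: score(docs[name], query),
                    reverse=True)
    return [name for name in ranked if score(docs[name], query) > 0]

manuscripts = {
    "book1.txt": "a chapter on digital transformation and leadership",
    "book2.txt": "notes on mindfulness and purpose in business",
    "book3.txt": "frameworks for innovation portfolio management",
}

print(find_passages(manuscripts, "innovation portfolio"))
# → ['book3.txt']
```

Even this naive version illustrates the speed gap: the lookup is instant across any number of manuscripts, where a person would be paging through them for days.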
Richie Cotton
Yeah. I mean, is it ten books now? Very impressive. But also, I suppose an AI could churn out ten books in half an hour if you wanted. Okay, so there's one more thing I wanted to ask about the OPEN framework. It seems like one of the big themes is that you need to open yourself to the possibilities of AI.
-
Richie Cotton
You need to educate yourself. What is it you need to learn around AI?
-
Faisal Hoque
I think there are actually three facets of learning. One: you obviously have to learn the technologies that are becoming available, which is almost ridiculously impossible because every day there's something new, but you have to keep up with it. The second thing is that you have to hone your critical thinking skills, because that's how you're going to evaluate what's real and what's not real,
-
Faisal Hoque
and how you really use it for the purpose you're pursuing. And the third thing you need to learn is the real possibilities of application: how can you use this for that? It's not necessarily "I have this tool, therefore I can do this." It's a little more imaginative.
-
Faisal Hoque
Can I do this? I would like to do this; is it possible? So when I say you have to be open, that's what I mean. It's like the old saying: the more curious you are, the more learning you gain, and the more learning you gain, the more you realize how little you know.
-
Faisal Hoque
Therefore you just keep pushing yourself, and that's how you grow. It's the same model, except that here we're faced with the temptation of outsourcing our critical thinking. But here we are, right? The more you tell ChatGPT to craft your email, or say: I've got an interview,
-
Faisal Hoque
"so let me just give it the document to come up with questions." People are doing it, right? And more and more: I watched a show last night, and already my Netflix is suggesting what my next show should be, because I just saw that one.
-
Faisal Hoque
What do I really need to know, and what can I decide for myself? So there are various levels of openness, if I can use that term. We could probably spend a whole forum on the philosophy, but in the book, as you saw, we draw on a lot of Western philosophy as well as Eastern philosophy.
-
Faisal Hoque
Right? So the notion of knowing thyself from the Greco-Roman philosophical tradition, or being mindful and having a beginner's mind from Buddhist philosophy: we tapped into both of those thought processes, because that is humanity. Once you leave that behind, once you're done with it, then what are you? You're a mechanical being, right?
-
Faisal Hoque
Because you don't have original thoughts anymore.
-
Richie Cotton
I have to say, you mentioned Netflix, and this is something I'm definitely guilty of. Netflix recommends a show, I click on it and start watching it. Over the years, the quality of my Netflix recommendations has gone down and down, because I just used to click on the thing, and now it's all baking shows and shows about people with plastic surgery selling houses.
-
Richie Cotton
Just because this is what gets recommended. So certainly it's a good thing to know that you can ignore what the AI tells you and think for yourself a little bit.
-
Faisal Hoque
And in the end, again, we close the book by saying that you have to stay devoted to what's important, and you have to detach yourself from that level of convenience. Again, it comes from the detachment mindset of Eastern philosophy. Because if you don't stay devoted and force yourself, it's a discipline issue.
-
Faisal Hoque
Right? Human beings are very convenience-centric. It's easy, so let's do this; it's easy, so let's watch this; it's easy to order food from the same place, because it already has my order, just click a button and it delivers the same thing for you.
-
Faisal Hoque
So the convenience factor is what we have to detach ourselves from, and we have to stay devoted to critical thinking, forcing ourselves to say, "No, I don't want to do that; let me try this." It's a double-edged sword in the sense that human beings are very creative, but we can't really reach that level of creativity unless we force ourselves to learn something different and new.
-
Faisal Hoque
And it takes effort to learn. You can still do that, and you can argue that you can actually learn faster if you stay disciplined. So the trick is: can you stay disciplined about learning, and use this technology to aid your learning, versus letting it do it for you? Do you really want the AI to write your next poem, or do you want to write it yourself?
-
Faisal Hoque
It may be shitty, but at least you wrote it; there's the process of writing it, the research. There is poetry written by some of the greats, right? So it's that kind of discipline the book tries to take us to, I think.
-
Richie Cotton
Yeah, it sounds like a lot of hard work, I have to say, but incredibly important. And I do like the idea of doing things for yourself sometimes without the use of technology, particularly as you're learning, just to make sure it really sinks in. Okay, so let's talk about your other framework, the CARE framework.
-
Richie Cotton
Talk me through it.
-
Faisal Hoque
So CARE: as you know, there's a lot of talk around the responsible usage of AI, putting guardrails and governance around AI. We call it CARE because you have to ask: why do you want to be responsible, and for what?
-
Faisal Hoque
The fundamental point is that you want to do that so you can actually protect humanity. So you have to care about what you're doing. The first letter of CARE stands for considering catastrophic situations. It's really about coming up with a risk portfolio.
-
Faisal Hoque
So the idea is: you've got those ten ideas, and now you start looking at what catastrophic things could actually happen in the process of pursuing them. You have to look at short-term, medium-term, and long-term risk to see what those are. And we talk about four P's, which you might have seen in the book.
-
Faisal Hoque
It's the impact on those four things: planet, people, profit, and product, and considering it from that point of view. Just as with OPEN you let your imagination go wild about the possibilities, here you start considering the worst-case scenarios, which is healthy. I don't know how many psychologists you've talked to, especially organizational psychologists, but they'll tell you it's healthy to think about the worst-case scenario, because that's the only way you can become pragmatic.
-
Faisal Hoque
False hope is not the way to live, because if you don't consider any risk, you will dive into a situation that you will not get out of. So the first step is coming up with this notion of a risk portfolio and a risk matrix. That's number one.
-
Faisal Hoque
Number two is that, based on that risk matrix and those risk criteria, you have to assess the various opportunities you have. That's the second phase of CARE: assessing. And the third thing is that you have to put in some guardrails, or regulation; we call it regulate.
-
Faisal Hoque
You have to regulate whatever you're trying to do. In technical terms, that means asking: where do you want to put the guardrails around your data? Where do you want to apply, say, zero-trust architecture to protect your network, and so on? You have to put those guardrails in from a portfolio management and leadership point of view, but also from a technical point of view.
-
Faisal Hoque
And the last thing is what we call exit, which is the E in CARE. The last step, exit, is really figuring out upfront what the exit is when things really go bad. You have to have some sort of a kill switch, because things may actually go bad. How do you shut it down? For example, say you have some agent running around cataloging everybody's Social Security numbers for a particular purpose.
-
Faisal Hoque
And before you know it, it has gathered everyone's Social Security number, and then it gets pushed out somewhere. Do you really want that to happen? Or, say: a year or two ago I was working with one of my government clients, and we were talking about declassifying all the stuff that's no longer relevant.
-
Faisal Hoque
And people have the right to know these things. So can we use an AI engine to look at those documents and summarize them, so you don't have to read the whole thing but can get a sense of it? Today this is done manually, through a very labor-intensive process, and that's part of the reason a lot of these things never actually get declassified.
-
Faisal Hoque
So the question is: where does the data reside? And then, how do we decide whether the AI's reading of the data is actually accurate? How do we know it's not hallucinating? I don't know whether you've played with any of these models, whether it's Claude or ChatGPT or any other model.
-
Faisal Hoque
You put in a PDF and then it kind of veers off, telling you stuff that isn't true; it just doesn't exist, because it's tapping into other data sources. What I did with my manuscript is a little better: it's a closed model, my own little language model just for me, so it doesn't veer off as much as the publicly available models do.
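Faisal's point about models "veering off" when you hand them a PDF can be made concrete with a crude grounding check: before trusting a generated claim about a document, verify that the claim's content words actually occur in the source text. This is a toy heuristic added for illustration, not a real hallucination detector and not anything from the book.

```python
# Crude grounding check: what fraction of a claim's content words
# (4+ letters) actually appear in the source document?
import re

def support_ratio(claim, source):
    """Fraction of the claim's content words that occur in the source."""
    words = lambda s: set(re.findall(r"[a-z]{4,}", s.lower()))
    claim_terms = words(claim)
    if not claim_terms:
        return 1.0
    return len(claim_terms & words(source)) / len(claim_terms)

source = "The memo covers budget approvals for the 2019 fiscal year."
grounded = "The memo covers budget approvals."
ungrounded = "The memo recommends criminal prosecution."

print(support_ratio(grounded, source))    # high: terms come from the source
print(support_ratio(ungrounded, source))  # low: terms the source never uses
```

Real systems use entailment models or citation checks for this, but even this word-overlap heuristic illustrates why a closed corpus is easier to audit than an open-ended model.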
-
Faisal Hoque
So you have to have that discipline too. And you cannot do these things without recognizing that the bigger the opportunities are, the greater the risks are. So you have to have guardrails.
-
Richie Cotton
Compared to some other risk frameworks, the first step here is to catastrophize. My understanding is that a lot of risk frameworks just list every single possible thing that can go wrong, whether small or large, and then you worry about how big the problem is later. Why did you opt for going straight to the worst-case scenarios, these catastrophes?
-
Faisal Hoque
Because unlike other technologies, with this technology, just as there's an enormous amount of opportunity, there's also an enormous amount of risk. And it's not just my word: you can look at the godfathers of AI, the people at DeepMind, or you can listen to Yuval Harari, people from various disciplines.
-
Faisal Hoque
They all acknowledge that the risk of AI is nothing like we have ever seen. And by the way, this framework can be applied to an organization developing things, meaning using the models and infrastructure that are available.
-
Faisal Hoque
It could be a government agency; it could be an individual. In the book we show it for individuals, government, and business. But governments also have a huge responsibility to think this way, because you have to put in place a legislative framework that pushes for responsible innovation and utilizing technology for doing good.
-
Faisal Hoque
But also having the guardrails that allow you to secure national security, secure the population, secure people's identities, so that the agencies don't become Big Brother and tap into our privacy, so that rogue nations don't have access to these things, and so on. So the risk factor becomes big or small depending on what it is you're trying to do.
-
Faisal Hoque
So it is utterly important to think about that kind of catastrophic situation. With other technologies, the risk is something like: the system is down and people don't get access to their bank accounts for a couple of days. That's a risk, but it's not life and death.
-
Faisal Hoque
You'll still survive if you don't have access to your bank account for a couple of days. So that's why we looked at it from that point of view. As optimistic as we are, I also want to be as cautious as possible, because this is a double-edged sword.
-
Richie Cotton
So I'm curious. You mentioned that these risks scale right up to existential risks, and I know a lot of serious AI safety researchers do think there's a real risk of a powerful AI wiping out humanity. The probability of this happening is known as a "doom" number.
-
Richie Cotton
I'm curious: do you have your own doom number? What's the chance of things going that wrong?
-
Faisal Hoque
No, I don't have one. But the more I look at this, the more I play with it, the more I apply it... and I'm not just talking about playing with it as a consumer of generative AI;
-
Faisal Hoque
I'm doing enterprise-level work with it too. And I have serious concerns. You can look at short-term, mid-term, and long-term risk. The long-term risk is AI just taking over, because we have become children and it no longer needs us to make any decisions.
-
Faisal Hoque
It's making its own decisions, manipulating markets, manipulating which territory belongs to whom, starting wars. You can imagine. You can go full sci-fi: the machines say they're doing all this to protect humanity, but in the process they're really taking over and making us a secondary entity.
-
Faisal Hoque
But let's take a simple, near-term view of basic, fundamental things we can all relate to. Imagine a chunk of the population gets replaced, meaning their work gets replaced, because you have AI-slash-robots making your hamburger and self-driving cars driving the cars.
-
Faisal Hoque
So you don't have any taxi drivers anymore. You have drones delivering your packages; you've got drones and AI-slash-robotics filling up the shelves in the warehouses. All of a sudden, that share of people are unemployed. Now let's take it to the next level: the next level of people are basically knowledge workers.
-
Faisal Hoque
So you say: I used to need ten people to do the research work; now I don't need anywhere near that many, because the AIs can do so much of it. So all of a sudden, five or seven years from now, you have a share of the global population that is no longer doing what they were doing before.
-
Faisal Hoque
Now, some of them will find different jobs and whatnot, but the vast majority of people may feel completely marginalized. Look at today's unrest globally, which is human beings creating problems for human beings. Now add people who feel unfairly displaced by these things, and there will be massive unrest. So that's a dark picture.
-
Faisal Hoque
But I think that's where it will start: not AI taking over, but marginalized people no longer being fulfilled. You saw that in the book we talk about Maslow's hierarchy, which everybody knows. Even if everything material is fulfilled... unless we assume that somehow we don't need income anymore, because it's being provided,
-
Faisal Hoque
right, universal basic income and all that kind of stuff, and you don't even need to make a living. But there is this innate human need to create, to be productive. Once that's gone, you have a lot of unhappy people. And when you have unhappy people, that's where a lot of these disaster scenarios crop up all over the world.
-
Richie Cotton
Given all that, what's the next step after that?
-
Faisal Hoque
I think it's already happening. We keep talking about reskilling, and we have to prepare for part of this. The government framework, the legislative framework I talked about, preparing the workforce for this kind of change: that is critical. But it can only be driven by global leaders, because it's about the awareness and creating the infrastructure that allows you to do that.
-
Faisal Hoque
So if we don't do that... that's why I say what will happen is what we do or don't do. Human history has gone through a lot of ups and downs, so time will tell whether the upskilling happens or not.
-
Richie Cotton
Okay. Yeah. So that sounds like a fairly terrible future, and I'm hoping we manage to prevent it. Can you talk me through how we give people the skills to avoid this, to avoid being out of work?
-
Faisal Hoque
We have spent a lot of time talking about STEM, and we do spend a fair amount of time skilling people in STEM. I have a counterargument: I think the most critical skill we have to provide, to maintain the upper hand, is teaching people how to ask the right questions, and developing a set of critical thinking skills that allows people to take advantage of this technology, to be part of the process rather than marginalized by it.
-
Faisal Hoque
That takes a conscious effort by individuals, but it also takes an institutional effort: what you're teaching kids in school, what you're teaching people in universities, how organizations are preparing their people for the next level of change, and so on. It's a collective effort.
-
Faisal Hoque
Right. And as we talked about, if you give in to convenience, then you become duller and duller and you're not really honing your skill set. So that mentality has to shift. I mean, there's already nonstop research on how much time people spend on social media.
-
Faisal Hoque
And how much time people waste consuming things that make no sense. That's because we have shifted into a highly entertainment-driven society. Combine that with outsourcing even our thinking patterns, and that's where the disaster scenario becomes very real.
-
Richie Cotton
A lot of people tend to be worried about the Terminator scenario for AI catastrophe, but actually the thing we keep coming back to is AI just taking over our lives. It sounds more like that animated movie WALL-E, where in the future everyone's just stuck in their floating chairs,
-
Faisal Hoque
Yeah.
-
Richie Cotton
watching screens or whatever.
-
Faisal Hoque
Yeah. I mean, we're already talking about going to Mars. You could say: look how we can destroy the planet, so let's just take a handful of humanity and push them to Mars. But the AI comes along too. So even if you go to Mars, you're still being controlled and manipulated by it.
-
Richie Cotton
All right. Yeah. So, another sci-fi dystopia to worry about. All right, so I just want to talk about the E part of CARE, which is about exiting. This seems like a pretty radical idea: that if AI goes wrong, you need an exit plan; you've got to be able to turn it off somehow.
-
Richie Cotton
I guess the normal sort of risk management strategy doesn't talk about the need to pull the plug. Can you talk through why you decided that was necessary?
-
Faisal Hoque
There is no one AI, right? It's not like there's a centralized thing where you push one button and you're done. It's really in the context of this framework, because we talk about it from a very pragmatic point of view: how does the individual use it?
-
Faisal Hoque
How does the organization use it? How does the government agency use it, apply it, or develop it? You have to look at it from a microscopic point of view, not just a macro view. Each person will have to have the responsibility to pull the plug when things go wrong. And that's important, because if we don't, it becomes like a virus.
-
Faisal Hoque
Think about how a pandemic happens: when it's two people, it's not a big deal. Then they go into a mall and infect more people; those people infect a million people. AI is a network of systems, a global network of systems and a global network of intelligence. So you can't really pull one central plug.
-
Faisal Hoque
You have to be cautious about what you are individually building and putting out there, and then decide. So do we leave it all to individual responsibility? Well, that never works; that's why we have laws, regulation, compliance, and legislation. So you have to focus on that element of it as well.
-
Faisal Hoque
And in the book we talk quite extensively about governance models and who governs what. So it's the governing body's job to decide where to pull the plug and when to pull the plug. And if they don't, there have to be consequences, and those consequences have to be legal. For example, we started talking about DeepSeek earlier.
-
Faisal Hoque
Should we allow it to be in every marketplace, whether that's Apple's store or wherever, with everybody downloading it and using it? It's sucking up your data, and now all that data is in the hands of a very different kind of mindset, one that believes in very different things than we do in the West.
-
Faisal Hoque
As an example, should we allow that or not? That's not just a moral question; it's a legal and moral question, and an international geopolitical question. So it's much broader than that. But in the context of this framework, everybody needs to have that exit button, a switch-off button, for anything they do.
-
Faisal Hoque
Otherwise you're going to create a massive pandemic, and there's no point of return from that.
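The "exit button" Faisal insists on maps naturally onto a familiar software pattern: an autonomous loop that checks a kill switch on every step, with built-in guardrails that can trip it automatically. This is a minimal sketch of the idea; all the names here are illustrative, not from any real framework.

```python
# Kill-switch pattern for an autonomous process: the switch can be
# pulled by a human, or tripped automatically by a built-in guardrail.
import threading

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()
    def pull(self, reason):
        print(f"kill switch pulled: {reason}")
        self._stop.set()
    def active(self):
        return not self._stop.is_set()

def run_agent(tasks, switch, max_records=3):
    """Process tasks, halting if the switch is pulled or a guardrail trips."""
    done = []
    for i, task in enumerate(tasks):
        if not switch.active():        # check the switch on every step
            break
        if i >= max_records:           # built-in guardrail, not just manual
            switch.pull("record limit exceeded")
            break
        done.append(task)
    return done

switch = KillSwitch()
processed = run_agent(["a", "b", "c", "d", "e"], switch)
print(processed)  # stops at the guardrail: ['a', 'b', 'c']
```

The design point matches the conversation: the check is local to each process, because there is no single global plug to pull.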
-
Richie Cotton
Okay. Yeah. An AI pandemic sounds like a new kind of disaster movie as well. But I think the reason I said this idea was controversial is that it was one of the clauses proposed in the California AI safety bill: that a powerful AI must have this off switch. And that was one of the reasons the bill got rejected, because a lot of companies didn't want it.
-
Richie Cotton
Do you have a sense of why companies didn't want to include this?
-
Faisal Hoque
Because there's a conflict of interest: we're also in a race of innovation. The idea is: well, are we going to hamper the innovation if we put in these kinds of guardrails and exit points? My humble opinion is that we need to push the boundaries of innovation, but we have to be responsible innovators.
-
Faisal Hoque
And there have to be consequences for not following those guardrails. I'm a proponent of doing those kinds of things now. We don't do this yet, because it's not like nuclear assets or nuclear energy, where you can't just do whatever you want, because we have seen what it can do.
-
Faisal Hoque
We used it, and we demolished cities with atom bombs and whatnot. So we are now very cautious about who has access and what can be done with it. We haven't had that reckoning yet with information, networks, or computing technology.
-
Faisal Hoque
Because we haven't seen those disasters; we've only seen glimpses. Remember about a year ago, when airports started shutting down because the CrowdStrike engine decided there was malware and started shutting down systems? We haven't gotten to that point yet.
-
Faisal Hoque
And I hope we don't have to get to that level to decide where the off switch is and what the guardrails really are. If you look at the pharmaceutical industry, even there, there are guardrails, in the sense that not everybody can just set up a pharma factory and start producing drugs; that's why we have all those controls, right?
-
Faisal Hoque
And even with all those guardrails, we still have this drug problem all over the place. But even so, that industry is far more regulated than, say, information technology or computing technology. So why would we not want at least a similar level of guardrail?
-
Faisal Hoque
If we don't, then we're not being responsible. And if you look at the US versus Europe versus China or other parts of the world, there are different viewpoints. But I think we have to push the boundaries of innovation, and we cannot not be responsible. If we aren't, then as for those disaster scenarios we talked about:
-
Faisal Hoque
It is bound to happen.
-
Richie Cotton
Okay. Yeah. I like the idea that you've got to take responsibility for whatever you're creating or using. But we've had a lot of talk of possible disasters here, and I'd really like to finish on a happy note. Since your book is called Transcend: suppose everything goes well with AI. What can AI do to help humanity transcend its limitations?
-
Faisal Hoque
I think AI can give you a better quality of life, whether that's for a dementia patient, or providing better drug discovery for really bad diseases like cancer, or allowing you to be more productive and creative. I find myself much more creative and productive in my research work than I was even a year ago, because now I have fast access to material that used to take much longer to gather. It can be a better assistant; it can assist you in many different ways.
-
Faisal Hoque
It can make organizations more productive, and as a result people in the organization can focus on more creative pursuits. Because ultimately, going back to Maslow's hierarchy, once everything else is met, what's left is fulfillment. That's very human, right? We want to be fulfilled by pursuing our gifts and deciding to make a difference, or whatever it may be.
-
Faisal Hoque
So I really believe in that. And from all those points of view, you can think about climate, climate disaster, predictive modeling: the upside could be enormous too, if we apply AI in a responsible way, with the intention of doing good. Nuclear energy is a good example, in the sense that we can create nuclear war, or we can create an energy source that lights up an entire city at very low cost.
-
Faisal Hoque
So it's our choice. The difference here is that AI has democratic access: anybody can access it, and anybody can do whatever they want with it. And I don't think you can leave it up to just personal responsibility, because there will always be some people who want to do bad. So you have to have laws; civil society doesn't work without laws and legislation.
-
Richie Cotton
Okay. Yeah. So there's scope for really good things and really bad things; we somehow ended up going from happy things back to the chance of wars again. So, just to finish: one of the recurring themes seems to be personal responsibility, and also societal responsibility on a broad scale, through things like legislation.
-
Richie Cotton
Do you have any final advice for the audience in their interactions with AI?
-
Faisal Hoque
Look, I think you can look at social media as a use case. It has done a lot of good, and it has done a lot of bad. I think AI will only amplify that, because it's a mirror. So in terms of using it: use it to do something good; don't use it to do bad.
-
Faisal Hoque
That sounds so basic and so moralistic, but that's what it is. Because if we collectively start acting without any responsibility, you can do all sorts of things: identity theft, you name it. But you can also use it for all kinds of
-
Faisal Hoque
good, more creative things. So consciously think about what you are using it for. And then, for a very selfish reason: if you outsource yourself, you're not going to have any value. So even for selfish reasons, use it as an aid, not a replacement.
-
Richie Cotton
I like that: just be mindful about trying to do good with technology. That's an excellent note to end on. Lots of wisdom there. Thank you so much for your time, Faisal.
-
Faisal Hoque
Thanks for having me with you. I enjoyed our conversation.