How are Businesses Really Using AI? With Tathagat Varma, Global TechOps Leader at Walmart Global Tech
Tathagat Varma is the Global TechOps Leader at Walmart Global Tech. Tathagat is responsible for leading strategic business initiatives, enterprise agile transformation, technical learning and enablement, strategic technical initiatives, startup ecosystem engagement, and internal events across Walmart Global Tech. He also provides support to horizontal technical and internal innovation programs in the company. Starting as a Computer Scientist with DRDO, and with an overall experience of 27 years, Tathagat has played significant technical and leadership roles in establishing and growing organizations like NerdWallet, ChinaSoft International, McAfee, Huawei, Network General, NetScout System, [24]7 Innovations Labs and Yahoo!, and played key engineering roles at Siemens and Philips.
Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
I think we are for the first time removing the real physical divide, the digital divide, the cognitive divide between the haves and have-nots. We're creating a much more level playing field for everyone, where it's not just access to the information, but also access to the whole context. It's access to the whole ability to make far better decisions anywhere on the planet.
AI adoption has to be seen as a tide that lifts all the boats together. So if you really want to orchestrate this, you have to have the leadership setting the agenda. And I'm not saying that it has to necessarily be seen as a negative thing, that, hey, there is a top-down mandate sort of a thing. But I think it's really a question of making sure that you are watering all the plants at the same time.
Key Takeaways
Ensure active involvement from leadership to orchestrate AI projects, avoiding siloed efforts and fostering a unified approach across the organization.
Start with identifying pressing business problems and strategically align AI projects to address these issues rather than chasing the latest technology trends.
Choose initial AI projects that are probabilistic, have good data availability, or involve repetitive manual tasks to demonstrate tangible benefits and build momentum.
Transcript
Richie Cotton: Welcome to DataFramed. This is Richie. I'm sure you've noticed that recently there's been a lot of pressure to add AI to absolutely everything. Two years into the hype cycle, we're seeing two types of problems. The first is organizations that haven't done much yet with AI because they don't know where to start, and now they feel like they're behind.
And the second is organizations that rushed in and failed because they didn't know what they were doing. Both are symptoms of the same problem: not having an AI strategy and not understanding how to tactically implement AI. There's a lot to consider around choosing the right project and putting processes and skilled talent in place.
Not to mention worrying about costs and return on investment. Today's guest is Tathagat Varma. He's the Global TechOps Leader at Walmart Global Tech, where he leads the next-generation tech stack program for digital transformation and data quality across Walmart. On top of this, he's just completed a PhD with a thesis on how businesses can effectively make use of generative AI.
And that experience makes him ideally suited to explaining how you can make use of generative AI in your own organization. Let's hear his thoughts.
Richie Cotton: Thank you for joining me on the show.
Tathagat Varma: Hi Richie, how are you?
Richie Cotton: Excellent. So to begin with, I want to know, why are there so many failures in the adoption of AI?
Tathagat Varma: Okay, so that's like a trillion dollar question.
Okay. Now, that may manifest in many ways. It may manifest in the form that my hardware is not up to speed, or I don't have enough compute, or I don't have enough storage, or the performance cannot be scaled up because my data isn't there. All of those, to me, are manifestations, but the basic idea is that we solve the problem, or an MVP if you will, as a very, very bounded problem, and then we suddenly take it out of that context and want to implement it in the real world.
And it's not a very seamless process.
Richie Cotton: That's really interesting that you think the problem is in going from a prototype to having something that works sort of in real world use cases. Since every company is trying to make use of AI somehow at the moment, can you talk me through the role of leadership in making sure that it works?
Tathagat Varma: Yeah, I think like any other significant enterprise-wide change, especially technology change, AI and generative AI are no exception. And I don't think any of this can succeed with just leadership support; it needs active leadership involvement. In fact, as part of my own research, I've been speaking to a lot of subject matter experts from the field.
Some of those firms were actually very vocal about the fact that without leadership you will have, and we have seen this in the past in the classical digital and IT environment, a plethora of so-called shadow IT and sprawl. And I think we are beginning to see a semblance of that with the whole AI thing.
That means you have a lot of localized, siloed solutions that happen and operate within the context of what they're trying to do. But if you really want to bring up the whole thing, then to me, AI adoption has to be seen as a tide that lifts all the boats together. So if you really want to orchestrate this, you have to have the leadership setting the agenda.
And I'm not saying that it has to necessarily be seen as a negative thing, that, hey, there is a top-down mandate sort of a thing. But I think it's really a question of making sure that you are watering all the plants at the same time. Unless you do that, you're probably going to see very lopsided results.
And what will happen, even worse than that, is that some of the gains we have made in productivity in one pocket are offset by the drag on the system. It's like the classical physics analogy: if you have three pencil cells, or batteries, and two of them drain out, and you remove those two and put in new ones, the old one will still continue to drain the new ones.
So I think the impedance mismatch is going to be significantly higher if you do not take care of some of these things. To me, the leadership's role is really making sure that they can take a 30,000-foot view, really look at the systemic view of the whole thing, and remove the bottlenecks that are blocking system-wide or enterprise-wide progress, as opposed to only saying, hey, I can manage to do just the siloed one,
And that's all we need to do.
Richie Cotton: A lot of great analogies there. I love that idea of watering all the plants at the same time, so leadership's role is to make sure that everything is harmonized and working together rather than against each other. Just digging into this a bit more, I know in theory it should be a chief AI officer in charge of all the AI projects, but a lot of companies don't have that. So can you talk me through who in leadership needs to get involved in AI?
Tathagat Varma: That's a very interesting question, Richie, and I'm divided in my mind as to, number one, is that the right ailment? And number two, is that the right remedy? Because in my own research I've seen the pattern that a lot of organizations typically start, through no fault of theirs, with a quote-unquote small body of experts, like data scientists and AI engineers, who sit in a silo and start doing things. Then over a period of time, what they realize is that, hey, AI is not a plug-in module, if you will. It's not an afterthought, and it's definitely not something that can be stood up as an independent capability inside the organization.
So you have to start horizontalizing it rather than verticalizing it, which means you have to start embedding it into the work processes if you would like to really harness the true power of AI. And at that point in time, you may or may not need one, depending on the culture of the organization.
But I'll take both the pros and cons of that approach. It might be helpful to have somebody who's setting the common agenda rather than letting everyone go at their own speed, their own pace, and in their own direction. So to that extent, I take the analogy of the conductor of a philharmonic orchestra.
You have a very low-touch kind of approach, but you are still coordinating, because everybody is a world-class musician in a philharmonic orchestra. It's not that you're teaching them how to play the instrument, but you are still making sure that 50 or 70 musicians are all playing to the same tune.
In the same way, you have to make sure that you're orchestrating the efforts and that people are making the right progress. They're aligned to one common goal larger than any of their individual capabilities. And more importantly, you're the one who is able to look ahead 40 steps when any one of them cannot see more than five or seven steps at a time.
You are able to anticipate some of those big boulders and say, hey guys, we need to get ready for that. And even more important are the common causes, because what's going to happen in the absence of any central coordination is that you will continue to see localized solutions to what people think are localized problems.
Because if my visibility is only my department, I'm only going to think internally, and I'm only going to solve the problems within that. Now, unbeknownst to me, there could be five other similar fires happening inside the organization. And if nobody's really able to look at all of them, you're probably going to have a tough time, because you're going to repeat and waste a lot of organizational resources and time, and you'll never have the semblance of one single, common, seamless solution.
So I think a chief AI officer, to me, is a mindset. It doesn't have to be seen as one single individual, but it has to be seen in the shape and form of a common thought process that's stitching all these perspectives together. It could be a forum, it could be a platform, it could be a body, whatever you like to call it.
But obviously it has to have teeth. It has to have some way of enforcing the rule book, so to say, and it has to have the mandate that, hey, there is somebody whose KPIs are all about organizational deliverables as opposed to the functional deliverables inside. In some organizational cultures, that can be done by one individual who is a CXO-level executive.
So be it. In some other organizations, it could be an added responsibility for one of the CXOs with shared goals. So be it. And the reason why I'm saying this: for example, say I'm a people organization. Let's say developing people is my complete core competency.
I might be using software, I might be using a number of things, but I'm really a people organization at heart. As opposed to that, let's say I'm a marketing organization, and I'm really doing marketing for a living, while still obviously using people and everything else. Now, it might make sense to think that the HR subject matter expert in the first case, or the marketing subject matter expert in the second case, could be more aligned to the business context in which we are trying to solve the problems. Hence, with the right kind of training and the right kind of aptitude, they might actually be able to do a much better job of integrating the solution domain and the problem domain, rather than a third person who knows absolutely nothing about HR or marketing or anything else, and only talks about AI for a living, coming into the room.
So, just to summarize, my favorite humorous way of looking at it is this: there were two people in the room, and they were not talking to each other. So, in our wisdom, we decided to bring in a third person. Now, guess what? The three of them don't talk to each other. That, to me, is really the danger of somebody who has no idea about the business in which we are operating and only has a solution mindset.
So if that is the kind of prescription somebody gives to me, I'll be very, very wary of it, because I don't really want a solution from someone who doesn't understand the problem.
Richie Cotton: I definitely agree with you that AI is often a very cross-functional problem, so more horizontal than vertical. It's interesting that you said you'd have a business domain expert and an AI expert. Are there any other teams that tend to get involved in this? Is it just going to be those two teams, or is it going to be broader than that?
Tathagat Varma: No, I think it has to be, in my view, broader than that, because AI is not like icing on the cake. It is the cake itself. I mean, it's something that goes top to bottom and across the organization. So say you're public-facing, and you're building solutions that carry public health risks.
For example, let's say you're a health care company. You've definitely run into some of the risks that might lead to ethical issues in terms of, let's say, gender bias or racial bias or some other form of bias. You want to make sure that you're doing the right thing as an organization.
So you have to make sure that you're talking to your communications partner: hey, are we really aligned with the values that we profess to believe in? You probably need to talk to your legal team and make sure that we are aligned with the right ways in which you want to represent your organization.
Because I think the danger is that, of course, you cannot have a situation where the inmates are running the asylum, sort of a thing. I think the tech industry, for the longest period of time, has been criticized for exactly that syndrome, the inmates running the asylum, which is basically the whole technological determinism kind of thing. Hey, I can really do inside-out thinking: let's say I invent 3D printing. Now I don't really care what problem you have; I just go and slap it on top of everything else with a 3D printer.
Now, that may not be the right perspective. No, you probably need to have the outside-in perspective, and techies are really not the ones sitting closest to the customers on a daily basis. So we don't always know, for example, what's going on in the minds of customers when they want to return an item.
Unless you really talk to your customer support, you're probably not going to get a sense of it. So I think getting that empathy as a first-hand activity means making sure that you have all the people in the room and all of them have representation at the table.
Richie Cotton: In that case, it sounds like starting with the technical solution and then trying to figure out the business fit isn't going to be great. But when you're trying to come up with your AI strategy, how do you align the technical strategy and the business strategy?
Tathagat Varma: I think a lot of times, in my own analysis, both from primary data and secondary data, what I'm seeing is that success is far more correlated with organizations finding the right strategic intent. So here's the thing, right?
If I say, here's the big shiny thing, and it's doing wonders, and everybody is writing about it, all the airline magazines are carrying an article on generative AI, that may not be the best reason why somebody wants to look at generative AI. It also may not be the best reason simply because on the investor call or the analyst call they are saying, hey, you should do generative AI.
I think the most sound reason, in my view, will be to look at your business landscape and really ask, one, what is hurting you the most? What are some of the real issues that you are facing? You might want to start there. And then you apply the whole closed-loop management thinking to solving the problem as you would otherwise. I mean, why should things change just because there is generative AI?
You just have a better hammer. That's all I would say at this point in time. And simply because you have a better hammer doesn't mean that you start hammering everything that looks like a nail. You might run into a lot of problems. So I think you start with the outside-in perspective and say, hey, what is it that's changing outside?
What is it that's really either hurting us, or where is there a lot of money we're leaving on the table simply because we don't have the ability to, I won't use the term react, but respond to the positive risk? And the positive risk in this case is: there is this new opportunity, and there's a windfall waiting for you, if only you had the ability to go and capture it.
So you start from that mindset and say, hey, I think I have this great opportunity. Now let me go back and see what it really takes for me to deliver that ability. And technology is going to be one of the moving parts in that. At that point in time, you might even have to make a decision and say, hey, why do I really need an AI solution?
Maybe regular software is going to help me, because it's a very deterministic problem, the rules of the game are very clearly laid out, and it makes sense for me not to go with AI. So I would say that, to me, AI is just one of the decisions that's part of the solution design, and not something you adopt simply because you have a better tool or somebody claims it's a better tool.
Richie Cotton: So really, you start with the business problem and then you work out what is the best technology to solve that business problem, rather than the other way around. Alright, I'm curious as to how much companies should be investing in AI. Is it something where, particularly if you're just getting started, you should go in tentatively, or is it something where you want to bet the farm on AI?
Tathagat Varma: Obviously, everyone's risk appetite is very different, and everyone's internal technology quotient as a company is also quite different. For some companies, it may seem a very frivolous thing to do; they'll say, hey, we are into manufacturing, or we are making homes, or we are doing something that's only very remotely related to whatever generative AI is.
So yes, that may be a kind of challenge there, but I would look at it this way. A hundred years back, for example, Edison created the ability to generate, transmit, and supply electrical power. He created a system where everybody was able to become a consumer of electricity.
And over a period of time, what we are seeing is that there are literally only two types of industries left: the ones who use electrical power, and the ones who do not, which are extinct. They are gone, right? So the question that comes up is that you may or may not be in the business today, but over a period of time, you are a knowledge industry by default.
I mean, as simple as that. Today, I think we have all come to the point where the default state of any industry or firm is that you are a knowledge industry, number one. Number two, I'm also reframing it in a very radical way, and it might be a little provocative to a lot of listeners, by asking: what is the core business you are in?
And somebody might say, hey, I'm manufacturing jet engines, or I'm drilling oil wells. But I argue that if I reframe it, the majority of those companies are actually in the business of decision making. The manifestation of that decision could be where you put your money or how many people you hire.
But at the end of the day, we are all in the business of decision making. So these are two big reframings that a firm has to do: you are dealing with knowledge, and then you have to do the decision making. Now, if you are able to go with me so far on that, as a firm you might then be tempted to think and say, my investments have to be commensurate with the impact of making bad decisions.
So if my risk of making a bad decision is going to be 20 percent of my exposure of top-line revenue year on year, and I can find a way to put in a fraction of that and mitigate some of it, if I can bring that exposure down from 20 percent to 10 percent, then I think that's the right way to think.
But firms do not always think that way. The way I look at it, Richie, is that this spending is often seen as surplus or unnecessary, rather than as a genuine part of my business strategy to mitigate risk and fortify my business, because I think the ROI of these experiments and investments is going to have a substantial impact on my top line and bottom line.
So, to me, I don't want to offer a mathematical formula and say 2 percent, 5 percent, or 20 percent. It's really a mindset shift that needs to happen. If I shift the mindset and see that my business is exposed to making bad decisions, and that I can improve the quality of my decisions and judgment by suitably using the right kind of data and tools, then I think the answer will be in the question itself, in my view.
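Tathagat's framing of AI spend as risk mitigation can be sketched with some toy arithmetic. The revenue, risk, and investment figures below are hypothetical illustrations, not numbers from the conversation:

```python
# Toy illustration of weighing AI investment against decision risk:
# if bad decisions expose some fraction of top-line revenue, an
# investment that reduces that exposure can be compared to the saving.

def exposure_saved(revenue: float, risk_before: float, risk_after: float) -> float:
    """Revenue no longer at risk after improving decision quality."""
    return revenue * (risk_before - risk_after)

revenue = 100_000_000      # hypothetical annual top line
saved = exposure_saved(revenue, risk_before=0.20, risk_after=0.10)
ai_investment = 2_000_000  # a fraction of the original exposure

print(f"Exposure reduced by ${saved:,.0f}")             # $10,000,000
print(f"Return multiple: {saved / ai_investment:.1f}x")  # 5.0x
```

On this framing, "how much to invest" falls out of the exposure numbers rather than a fixed percentage of budget.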
Richie Cotton: If every company has to invest in AI, one of the big questions is going to be, well, AI projects often have a long delay from when you start to invest to when you get some sort of payback. So what sort of timeline should executives be expecting before they get their money back on those investments?
Tathagat Varma: So let me just add one more perspective in continuation of your question. I think the reason why everyone has to be there is the nature of AI as a field, and especially generative AI as a technology within that field. I think it's one of the most deserving candidates in recent years to be called a general purpose technology, and not for the wrong reasons.
A general purpose technology, by definition, is something that has a very horizontal, pervasive impact, not just at the point of application or in the functional area for which it was developed, but actually across a wide range of industries. Historically, we have seen examples like fire, electricity, computers, and mechanization.
A lot of these technologies are known as general purpose technologies, and incidentally, the acronym for that is also GPT, not the same GPT as we all know. So the challenge is that these are very foundational technologies, and they have wide-ranging effects.
In many cases, the effect will take time to percolate. By definition, they are very, very fundamental technologies, like the wheel is a fundamental technology, or electricity is a fundamental technology. You cannot simply take electricity and say, let me run with it, and tomorrow I'm going to have a result.
You are probably going to see first-order, second-order, and tertiary effects of it, and over a period of time people are going to learn a lot about what it can do. With AI and generative AI, you're probably going to build a lot of solutions that do data ingestion and data processing, and then you're probably going to see a lot of software that can use them and put them in a different context.
Once you talk about decision making, then you are asking: am I going to use decision making for, let us say, disbursing loans, or am I going to use it for deciding which patients need a certain medication? And then over a period of time, you will see those changes really rippling through whole industries.
So I would say it depends. One of the things that's also coming out of my research is that there is no single broad brush we can paint all industries or firms with. There is a very clear classification of firms, which I call the Gen AI value chain, that I'm beginning to describe in my research, where I'm seeing a continuum of players based on what they fundamentally do.
Some of them are sitting at the very foundational layer. For example, if I'm building cloud as an infrastructure, or AI as an infrastructure, then obviously I'm going to spend a lot of time building a platform like OpenAI's, because I'm ingesting billions and billions of tokens, and that takes a lot of time, effort, and money.
So the ROI period is going to take time, and we have seen that happening. At this point in time, the only monetizable model we have really seen is the $20 monthly subscription and the corporate equivalents. And some of those are still evolving. At the end of last year, we started seeing a lot of IP issues coming up.
But now we are beginning to see a lot of those IP issues getting addressed by the IP agreements that some of these platform makers are striking with publishers. And then you will have people who are bundling these together into some kind of usable thing.
Whether it is in search, or in a copilot kind of thing, these are cases where you are not having to spend time building the core technology components, but are using them to offer value-added services. And then you have the final consumption happening, where people are actually putting it in the context of specific companies.
Every company has its own, let us say, guardrails around what they want to do: the messaging, the domain in which they operate, the context, and so on and so forth. So the point I'm making is that if you are going to be the one building those foundational pieces, you will have to spend a lot of time, because you are building a very broad kind of model.
It's not very deep; you're building very broad, because you are trying to cover everything under the sun. It's obviously going to take time, because you have to have the right kind of accuracy in the data, and the right kind of data scraping or IP arrangements, whatever is the right thing for you to do.
If you are at the far end of the adoption funnel, let's say an individual user or a student looking at how to improve my essay-writing skills, then I'm ready to go with plug and play, because I haven't had to spend the time, effort, and money to build the whole thing.
So if I'm a firm that's looking at buying a very generic solution, something like an ERP solution of sorts, where I don't have to build the whole thing but just do a little customization and configuration and it's ready to go, then it'll take a much shorter time. But it's different if I'm going to build the technology as a first-class component, rather than using a tool as a point intervention, and integrate it into my processes.
Let's say I want to change the way I do supply chain across the firm, because I'm going to embed pieces of artificial intelligence at every touch point across the chain. That's going to take much more time, because I'm really reimagining the entire supply chain.
So the question, to me, is a very loaded question, because it doesn't look into the nuances of what it takes. Every firm has its own requirements. And if I'm just doing what we are also beginning to call AI washing, putting a veneer on something, for example just taking something and saying this is AI, then obviously I can start getting results from tomorrow.
I can build a chatbot literally on a Sunday afternoon and have it rolled out on Monday morning, but it won't have all the right guardrails, it won't have all the ethical considerations, and it won't have the right kind of training that my people need. It's obviously going to have challenges.
So I think one has to take it with a very, very measured amount of optimism. It's not the case that simply because the core technological piece is working, the solution for a firm is working; these are two different things. When you are scaling up the whole thing across the organization, across all the legal safeguards, across all the ethical concerns, across all the societal issues in which you are going to operate, I think it changes the game completely.
And if you are a firm that has a lot of exposure in terms of risk to public safety and health, for example, you have to indemnify yourself against that. So ROI is really going to vary based on a lot of those considerations. In fact, one of the things I'm unraveling in my own research is what we call the barriers and enablers to successful adoption of generative AI.
And one of my biggest learnings has been that it's not a single-pass event; it is actually a multistage process. You have to go through various stages of diligence to make sure that you are really checking all the boxes. Only then do you get to the point where you are defining the success.
And what is success? Again, as we know, there is no single definition of success for everyone. So the short answer is: it depends. The long answer is that there are too many variables that have to be vetted in order to make sure that we are fully ready with the proposed solution, and that we don't have to stare at a rollback simply because we have not done all the diligence.
Richie Cotton: Okay, so that's quite a broad range of timelines there. If you want to build your own LLM, you've got to collect the data, so you could be spending five years getting something fantastic. On the other hand, if you just want to write better marketing copy, you could be up and running almost immediately.
Tathagat Varma: Yeah, and I'm not by any means saying that one is better than the other. It's really up to the needs of every organization. Let's say my core business is in construction. I have a lot of data, but I don't think I'd be able to provide the right kind of motivation for a lot of data scientists to come and build core technology for which I'm just going to be a consumer at the end of the day. I think I might be better off finding a via media where I can still use some of the properties of open platforms, the broad, domain-agnostic knowledge they have, because they are covering something like 20,000 domains under the sun.
And then I use RAG or some other approach to do some fine-tuning, building a capability so that I'm able to focus and do that sort of retraining on my domain context. That may mitigate the issues without really hampering a lot of my growth. It's like the adoption of cloud, for example. I think the biggest opportunity with cloud, when it was introduced, was for, let's say, a startup trying to build an e-commerce platform.
Prior to the cloud world, I had to go buy the servers, I had to rent a space, I had to get air conditioning done, I had to get power supply with the UPS done, the whole nine yards of that, right? And it took me six to nine months just to... like, I was still in the business of doing the civil engineering and networking and plumbing and what have you, before I could even start writing the first line of code. And with cloud, all I had to do was, hey, somebody already was doing this for a living. All you had to do was swipe your card and you could focus on the business that you were in, which was writing the code to start building an e-commerce website. Now, the same kind of capability is coming to us with generative AI especially.
I think even predictive AI was limited, because you still needed a lot of quote-unquote data scientists and AI engineers and machine learning engineers to essentially build those complex models. But I think the whole democratization of AI has happened thanks to generative AI. It suddenly raises the opportunity that on day one you can actually start putting it through some kind of testing. Even though it may be a very rudimentary MVP that you build, you can actually start doing it, as opposed to saying, okay, let me get the data first.
Let me start doing it. So I think there's a substantial difference in how it has been able to change the game for everyone.
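The RAG approach Tathagat alludes to, reusing a broad general-purpose model but grounding it in your own domain data, can be sketched roughly like this. This is only a toy illustration: the construction-domain documents and the word-overlap retriever are invented for the example, the LLM call is stubbed out with a prompt string, and a real system would use embedding-based retrieval and an actual hosted model.

```python
# Minimal sketch of the RAG pattern: retrieve domain documents relevant
# to a query, then prepend them as context for a general-purpose LLM.
from collections import Counter

# Hypothetical domain knowledge base (e.g. a construction firm's notes).
DOCS = [
    "Concrete curing takes 28 days to reach full strength.",
    "Rebar spacing depends on the slab load rating.",
    "Site permits must be renewed every twelve months.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words between query and document (toy retriever)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents with the highest word overlap with the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved domain context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long does concrete curing take?"))
```

The point of the pattern is exactly the one made above: the firm stays a consumer of the general model while only its own small retrieval layer carries the domain knowledge.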
Richie Cotton: That's interesting that as technology gets more powerful, it means you need to worry about it less, and then you can focus on the business side of things more.
Tathagat Varma: Correct. Yeah.
Richie Cotton: So, back when you were talking about electricity a few minutes ago, you were saying that there are so many different possible use cases of electricity, it can be quite hard to figure out what you do first.
I think the same is true of AI. So, how do you know where to begin? What's a good first project?
Tathagat Varma: Yeah, and I think this problem in a way has been around in many ways, shapes, and forms with any new technology introduction. Some are natural candidates, because it's like you will put a generator or a light bulb only wherever there's electricity flowing, so it's a very easy decision.
But I think selecting a pilot is obviously fraught with a lot of risk, because if it is too trivial a use case, then obviously people are going to dismiss it. I mean, the status quo acts like the corporate immune system. It doesn't really allow firms to make a lot of changes to their way of working.
So it's not an easy thing for firms to look for. And in fact, most of the examples, if you see, are the ones that were actually forged in crisis. The burning deck was almost always like a dream come true for a change agent, because that's when all the barriers against change are down and that's how you are able to do it.
But obviously the strategy cannot be that we are hoping for a burning deck to make the changes. So more proactively, what you really want to do is look for opportunities that play to the strengths of generative AI, and AI in general. For example, a lot of times people are making decisions which are subjective.
It is not possible to always codify all the rules, because theoretically, if you could codify all the rules, then a software engineer could just go and write a solution. Why do you need AI for that? You could write a deterministic software program as opposed to a probabilistic one. So if it is too cut-and-dried a problem, you probably don't need AI for that. You just need the regular solution, because the business rules can be codified. If it is too abstract and too one-off-ish every single time, then you also have a problem, because the body of knowledge is not yet there
in the form that it can be codified to build even some kind of heuristics around it. And you don't have any patterns that are statistically significant for you to say, hey, if the sky is overcast, then it's going to rain tonight, right? I mean, if that's left to the vagaries of nature all the time, we are never going to be able to build the weather models, for example.
So I think: look for those patterns which have a recurring theme, where there's a reasonably high level of inefficiency in the process, because that makes it an easier candidate for showing whether the technology works or not. Look for problems where you have a lot of manual, repetitive processes, to the point that the people who are working on them might feel that, hey, this is too monotonous and boring.
This is busy work, sort of a thing. So for example, back in India, I would take the example of know-your-customer in banking. KYC is a big thing, right? And then we would typically take photocopies of our government IDs and submit them. And then there are, like,
literally, in a large bank, thousands of people sitting at some back office somewhere, and they are manually validating whether every single detail has been provided correctly or not. Now, that's something which in my view is a right candidate for looking at, hey, how can we really use AI for solving something like that?
And also, well, maybe if we have time, we can talk about the nature of jobs and the impact on that. But I think if you focus on what is the right thing for the organization, you're unlocking the efficiencies that can be made possible by a smart use of technology in a process that otherwise is seen as very boring by the people who are doing it.
And it's also very error-prone, because there are too many moving parts in it, and there is no way you are able to really keep up. I mean, in this particular case, one of the real examples was that it used to take one week for the KYC to happen, and they were actually able to bring in eKYC, which was able to do it faster. It may not be an apples-to-apples example, but it was able to reduce it down to like 15 seconds.
So if you are able to do that kind of a thing, you are actually bringing a lot of efficiencies into the process. Now, the thinking is the same. It's basically the decision-making frame that I shared with you earlier. If I take the same frame, you are making a decision at the point of looking at the application and saying, hey, is this my bona fide customer?
Are they good to go as far as my subscription services are concerned? That's a decision you make, and you are able to roll out the services to them faster. So it has a direct bearing on how the customers see it, or what kind of experience you're really able to deliver to them. So I would say multiple of these factors matter: have a right-sized problem.
Obviously, it should not take too much time. If it takes like one year just to deliver a simple success, people would have given up on it, because they will think that, hey, this is not worth the time. But if it is so trivial that you can come back and say, we'll save the company billions of dollars just by doing the work over a weekend, it's probably going to trivialize the whole value proposition in a certain way.
So I think it has to be the right kind of fit between the problem and the solution, seen in terms of timing, in terms of the problem, in terms of the impact on the people who are working on it, such that it really helps them solve the problem better, with results that are believable and commensurate with the firm's operating structure.
So yeah, it's not a clear rubric, but I would definitely look at these parameters.
Richie Cotton: Okay, so lots to unpack there. So, if I've understood this correctly, you either want something that is probabilistic and can't be solved by some sort of deterministic, if-else kind of logic, or you want something where there's a lot of good data available, or you want something that's very manual and you can automate things.
Tathagat Varma: I would also even say that it has to be seen as reasonably strategic. Now, obviously I want to be very careful in saying that, because strategic typically has a connotation that it's something that's going to take me three years and there's no way I can really demonstrate it. But something that's very, very tactical, which can be done over a weekend or one or two weeks, can often be seen as a very lightweight thing, and organizations will have a tendency to push back on that and say, hey, that was an easy problem.
I don't think so, because a lot of times the NIH syndrome, the not-invented-here syndrome, is a very strong thing in organizations, and like I said, the corporate immune system is very strong against changes, especially if they are perceived as coming from outside. And let's be honest, a lot of times organizations also have this whole challenge of, hey, the change is being thrust upon me, or it's being imposed upon me.
And a lot of times people don't know why we are making the changes. And there are a lot of misgivings about the change anyhow, right? I mean, there are a lot of misgivings in the popular media, and the depiction of AI in Hollywood movies makes one believe that the dystopia is a certain reality.
And next weekend we are going to have a dystopian future, that kind of a thing. So people are already on guard against it. In fact, one of the data points I had in my research was a firm that handled this very well. They actually said, let's have a hackathon, and we will say: you choose your own problem.
So they actually allowed all the employees to choose their own problems, where they see where they would like to use some of the generative AI and see how they can solve the problem better. Because a lot of the change resistance is also coming from the fact that people believe: I'm fine, but you are really bringing this change to do something, and I don't know the future.
Maybe my job is gone in the future. Now, if you are asking me how we are going to use it, then first of all I'm already showing that I'm positively aligned to doing some of these experimentations. I'm part of the whole thing. My agency is not being trampled upon. So I'm a partner and collaborator and co-creator of this whole solution.
I think that makes it easier. And we can have a system where we let the best problems rise to the top, because it's kind of Darwin at play there. So you are able to see that out of 10,000 employees, 700 participated, and they are bringing 700 different problems. And then you can create a system of competition, a treasure hunt sort of a thing, where you say the top 10 problems we will take up. And then you are automatically letting the employees drive the whole process improvement part of it, because they will be able to tell us better what is hurting in the trenches, as opposed to people sitting in the boardroom deciding for them.
So I think you cannot just cherry-pick one of these and take them out of context. I think to me it is really about building the right kind of culture inside the organization, where the leadership is really plugged into what's happening in the trenches.
There's a lot of trust-building happening. You are inviting the associates at large and basically saying, hey, what are your pain points? Let me not even talk about AI. Let me talk about your pain points. You tell me which are the ones that are hurting. Let me take the opportunity to train you.
A lot of firms in my research have actually taken the proactive step of not just expecting people to be at the receiving end of a change or even a pilot, but actually offering them training and saying, okay, firstly, we'll train you, no strings attached. We will train you.
We will give you the tools and then we'll encourage you to play with them. And then why don't you come up with the ideas? So I think that's also an interesting play, because in the beginning stages of your adoption you are not very clear about what is the right kind of problem that can be solved. And especially if you're not from the tech sector, you don't want to second-guess the whole thing, but you do want your employees in the trenches to be able to feed you the right kind of problems. And that could be a great collaboration between the leadership of the company and the workforce.
And I think that may work very well for a lot of cultures, because it has a lot of side effects in building a culture of trust, in addition to adoption of technology. So I would certainly advocate looking at it more holistically as a change exercise, rather than saying, let me cherry-pick something and do it, while glossing over everything else but expecting miracles out of it.
Richie Cotton: I do like that idea of focusing on what your pain points are and then saying, okay, how can we use AI to help out here? So back at the start of this episode, we were talking about, well, why do so many AI projects fail? I'm curious as to whether there are some kinds of projects that have higher success rates than others.
So, what tends to work?
Tathagat Varma: I think what tends to work, and again, when I say what tends to work, it doesn't mean it may be the most strategic, but what tends to work has to be seen at two levels: what tends to work for the firm versus what tends to work for the customers. So, for example, companies are seeing that customer service is one of the areas where there are a lot of opportunities, and they are seeing that it works, because most firms report some kind of statistic like 70, 80, 90 percent of the questions relating to the same one or two or three dozen queries.
Like, where is my order? What happened? I gave the order last week, it has not been delivered. I have a bad product. All those things. Now, a lot of that does not really need a human to listen to them. So they are looking at the opportunity of seeing, how can I really have some of my AI solutions cover this for me?
There are advantages on both sides. One is, I can extend my working hours. I don't really have to be dependent on my office hours in order to do that. I can be 24/7 and I can serve them, number one. Number two, I can have a lot of knowledge made available, because a lot of times there's a lot of tribal knowledge in some of these.
I may be a great customer service agent who has a lot of finesse in how I do my work, but there is no way I can train 20 people. So I'm able to actually find those patterns of successful customer service, and I can replicate them very fast. And then, obviously, because I do all those things using the technology, I'm also able to realize a lot of cost savings. Now, again, I want to be very careful here: the cost saving to me is actually the final-order outcome of that, and not the first imperative.
It's not the first order of business. If I go in only with the whole idea that I'm going to contain the cost, I think one may be in for a rude shock. The fact is, when you are doing the right thing for your workforce, you're doing the right thing for the business, you're doing the right thing for the customers, there might be opportunities that you are able to deliver.
And it's not about the monetizable ROI alone. There's a lot of non-monetizable ROI. And that's why I asked: better for whom? Is it for the firm or for the customers? Now, the customers will love the idea if it is a very well-designed customer service solution: I have the convenience, I don't have to speak to anyone, I can call up in the middle of the night and I can know where my order is, or I can book something.
So that gives me the added flexibility. But on the other hand, if that experience that is delivered by that solution is so broken that while the firm is saving money by doing it and the firm believes that they are able to do it, but the customers are unhappy with that, I think that's not really the desirable solution.
So I would be careful and say the firms probably need to look at both sides of the story in saying where it is working well. But in general, I think areas like internal process improvement, for example, are a great opportunity. These are things that do not have a direct bearing on the customer.
If the firm is very worried that there may be IP issues, or that I don't have enough safeguards in the system to put the technology in a solution outside, I can try something inside. So firms are using it, for example, for summarizing information reports. Who is going to read a 40-page report?
Can I condense it into four paragraphs? And then I can really cull out the meaningful information from that. But then the point is, if firms don't really understand the hallucination part of it, then they are going to sign up for something that might be peddling wrong information, and sign up for uninvited trouble.
So I think, to me, the answer lies in the fact that the firm has to really see which areas are meaningful, both internal and external, and not shortchange one over the other. Look at the balance there.
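The hallucination risk in the report-summarization use case can be caught early with even very simple grounding checks. The sketch below is a toy illustration only: the source text and summary are invented for the example, the check is a naive word-overlap heuristic, and a real pipeline would use entailment models or citation verification against the source report.

```python
# Toy grounding check for LLM-generated summaries: flag summary sentences
# whose content words never appear in the source document.

def content_words(text: str) -> set[str]:
    """Lowercased tokens longer than 3 chars, stripped of edge punctuation."""
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}

def ungrounded_sentences(source: str, summary: str) -> list[str]:
    """Return summary sentences sharing no content words with the source."""
    src = content_words(source)
    flagged = []
    for sentence in summary.split("."):
        words = content_words(sentence)
        if words and not (words & src):  # no overlap: possible hallucination
            flagged.append(sentence.strip())
    return flagged

source = "Quarterly revenue grew eight percent, driven by grocery sales."
summary = "Revenue grew on grocery strength. The CEO resigned abruptly."
print(ungrounded_sentences(source, summary))
```

Even a crude check like this surfaces the kind of "peddling wrong information" failure mode described above before a summary leaves an internal pilot.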
Richie Cotton: That is interesting, that some of the success metrics may not be directly just increased revenue or decreased cost. You also have to look at slightly less tangible stuff, like: are customers having a better experience? And internally, are your processes more efficient? So I guess those are possible to tie to money eventually.
Tathagat Varma: Correct. And I would also say that many times there's a time lag in the improved customer experience. Yes, causally, we can say that improved customer satisfaction will lead to repeat customers, will lead to more word of mouth, will eventually lead to more top-line revenue.
But there is a time lag between these steps. And it all has to start there, especially in the era of social media and everything. I think it's even more important: customers have choice. I mean, go to a supermarket and you have, like, dozens of types of products there.
Which one are you going to buy? So people are obviously careful about the experiences they get. And if you do not deliver the experiences and only look for the monetary ROI, you might be killing the goose that lays the golden eggs, because you are prioritizing your short-term interest over the long-term interest.
And it's possible that you may get those short-term successes by fluke, because you just were lucky, and you just continue doing it, but you haven't really uncovered all those things. On the other hand, you may be unlucky and it may be a failure, and you may end up blaming yourself, when really you haven't done the right use case.
So I think it's important to find that balance, I would say. And again, it's not an exact science, like a lot of management decisions. But I think it's a lot of finesse that one has to acquire and develop over a period of time.
Richie Cotton: I'd like to talk a little bit about skills. Are there any particular skills that you think are important for organizations to have internally, just to make sure that they can be successful with AI? Do you want to start with the technical skills?
Tathagat Varma: Yeah, I think the technical skills are absolutely essential. I think there's no getting away from that, in terms of math and statistics and computer science. I think computer science is kind of falling out of favor, because a lot of people think that, well, AI is also just software.
But AI also requires a deep knowledge of computer science. Mere programming is not enough, I believe, number one. Number two, I think the whole idea of math and stats is a little underrated by lay people. I think it's important for everyone to realize that it takes a lot of rigorous math and statistics to build those technical solutions.
Now, two caveats here. One is, generative AI seems to give the whole illusion of simplicity, as I call it, where all I have to do is write something in English, and if I've taken care of my Wren and Martin during my high school days, then my English grammar is good and I don't have to worry about things.
There are a couple of things with that. Number one, I think calling it prompt engineering is itself, in my view, detrimental to understanding what it really takes, because writing English by no stretch of imagination can be called engineering. Number two, a lot of us are non-native speakers of English, and we don't always speak English so well, and I may not be so good at being able to do that.
So I think that is another part of it, because most organizations are hiring people for tech skills. And I can say a lot about India, for example. In pretty much every technical school's four-year undergrad degree, I don't think there is any course on creative writing. Obviously you don't need to understand your Shakespeare or somebody else, but I think you need to understand how to write and articulate your queries in proper English.
That's not something that's a priority. I mean, when I did my master's in computer science, all I learned was Pascal, C, Fortran, Lisp, and a bunch of these languages, 35 years back. And in fact, there was no paper even on soft skills and social skills and human skills. I mean, I always felt that learning ethnography and sociology was far more important to building software than actually writing code.
So I think a lot of those mental models are being refined as we speak. So technical skills, I don't want to compartmentalize by saying that only knowing those technical skills is going to be enough. I think it has to be a package deal. Writing code is as important as talking to the people who are going to give you the insights on what exactly you want to write about, or even understanding the data.
Like, I believe data is a very, very value-neutral entity. Data by itself doesn't really judge you. But it's us as human beings who really provide the color to it and say, hey, this data means it's good, and this data means it's bad. Now, you need a very strong foundation in social sciences, and I think it's even more important now for the future generation to actually learn about things like philosophy, ethics, and morals. Because anything that is mathematically computable is anyway going to be taken care of by the algorithms.
And I mean, the algorithms are actually improving faster than any of us can individually grow. But then, as human beings, the applications that we are designing, the designs we are making, the context in which we apply and deploy and monitor and track, I think that is going to have a much bigger bearing on the eventual success of it.
So, yes, technical skills are important, especially if you are going to write those platforms. But a lot of people are going to be operating at a very high level, because the whole consumerization is creating the illusion of simplicity: that if I'm sitting at a chat interface and I'm writing something, then I'm actually doing engineering, because the prompt is known as prompt engineering, which itself, in my view, is a misnomer.
But that is not by any means engineering skill. There's a lot of heavy lifting that has to go on at the back end. Now, even if somebody is using generative AI, LLM kinds of solutions, or diffusion models, it does not mean that they can get away with not knowing the engineering. In fact, the more they know, the more they are able to appreciate how bias is creeping into their solutions, why the systems are behaving in a certain way, and so on and so forth. So there's a bunch of things I would strongly recommend. The way I look at it is, every generation of the industrial revolution has brought this with it. It's like when slide rules came. I was seeing one of the old advertisements, actually; I think it was in the 60s.
I won't name the company, but they gave this ad which said that buying one of their mainframes was going to be equal to having 150 engineers using slide rules. Now, obviously, getting that mainframe in the 60s at that point in time... I don't know on what basis they said it's better than 150 engineers using slide rules.
But what has happened in that bargain is that we got a lot of people who know how to use computers. We also got a lot of people who do not know how to use slide rules, not that that is needed anymore. So the point is that I think it's going to be kind of an irreversible tide. It's going to carry people along, and you are no longer required to have a lot of skills that you don't need.
But I think generative AI can give that sort of false feeling, that just by knowing good English I can actually solve everything and take my company to the next orbit. I think that will probably not work.
Richie Cotton: Lots to unpack there, and lots of interesting ideas. I like the idea that creative writing and philosophy are going to be core skills in the future if you're going to be working with AI. But also the fact that having some of that technical knowledge is important, even for people who don't have data or AI in their job role.
They're just using ChatGPT or Claude, or whatever, using these LLMs, but having those technical skills is going to help them understand where the biases come in, what they should do and what they shouldn't do. Alright, just to wrap up, what are you most excited about in the world of AI?
Tathagat Varma: Yeah, I think one is obviously... I mean, almost 35 years back, when I was in college doing my master's, I was working on AI, on fuzzy logic, which was one of the technologies under the AI field. And when I passed out of college, I didn't find that the technology had a lot of use in the industry.
Okay. So one is obviously my own interest, as somebody who felt that his journey was unfinished, and I'm getting a chance to live it now again. I'm very happy that we are now the closest that we have ever been in this field to making this a reality, number one. Number two, I think the idea of really creating some kind of democratization of knowledge, skills, and capabilities, and putting it in the hands of every single person on this planet. Look at it this way: there was this analogy that used to come up some time back, that more people probably have a smartphone than have a toothbrush, for example.
I mean, the point I'm making is, yes, 30 years back, 50 years back, 100 years back, having a home, or having shelter, or having a bicycle was probably the important thing. And the bicycle gave the sense of mobility, that people could actually go around anywhere. Then the Internet gave the flexibility that people could find information from anywhere.
But they were still, you know, passively consuming the whole thing; they were not actually a part of the whole ecosystem. I think especially with generative AI, and with the whole prospect of the new language models that are coming, which can probably sit inside your smartphone,
I think we are for the first time removing the real physical and digital divide, or, as I'd like to call it, the cognitive divide between the haves and have-nots, and basically creating a much more level playing world for everyone, where it's not just access to the information, but it's also access to the whole context.
It's access to the whole ability to make far better decisions anywhere on the planet. And we can raise the common denominator; the denominator itself is probably shifting, so that everyone is now like two steps above, as opposed to only a few people who are able to make those decisions better.
So I think, to me, that is the potential of this technology: that we are able to actually democratize the whole process and give the tools to everybody. Now it depends how I want to use the tools. If I am a farmer, I want to apply it to making decisions about when is the right time to water the plants and when is the right time to spray the fertilizer. Or I can be a doctor, and I can be in the remotest of villages where there is no access to quality health care, but I'm still able to actually bring it to them.
I think those opportunities are humongous. So I feel very optimistic about the benign power of this technology, that it can actually create a world of difference for the first time, that it can truly democratize the whole access to knowledge and give the tools and the power into the hands of every single person.
And thanks to the whole mobile, internet, and cloud revolution, we can piggyback on the ability to deliver those services to everybody. But obviously it comes with its own dangers as well. But I would rather concentrate on the positive sides, because history has shown that every meaningful technology has had both positive and negative sides.
We do not stop in our tracks simply because something has a negative power. But over a period of time, the society evolves around building the right kind of safeguards, whether it is the social safeguards, whether it is the legal safeguards, whether it is the regulatory safeguards. So I think that will happen as has always been the case.
But I think the positivity around it that I see is that we will actually be able to make a difference to the coming generations by creating a far more fair, transparent, and level playing field. That's what interests me.
Richie Cotton: That's a wonderful vision. Just getting rid of any sort of cognitive or digital divide, making it a level playing field for everyone, empowering everyone. I have to say, though, one statistic you mentioned, that more people have smartphones than toothbrushes: that's absolutely disgusting. Who's not brushing their teeth?
I hope none of our listeners... if that sounds familiar to you, please go out and buy a toothbrush. All right, just to wrap up then, do you have any final advice for organizations that want to make better use of AI?
Tathagat Varma: First of all, AI is, like I said, a general-purpose technology. Don't expect it's going to be plug-and-play, and don't expect that any of the effects of it, neither the benefits nor the pitfalls, are going to show up overnight.
You have to work towards it. So that's number one: it's not like something you go and buy, and from tomorrow you can be an AI firm that's benefiting from it. Number two, even if you're a traditional firm that does not really have access to the technology and does not employ techies for a living, don't worry about that.
It's not something that is going to deepen the existing digital divide further. I think the good news is that it's starting fresh for everyone. I always like to tell people, just like I used to say when COVID happened, that everyone had exactly the same experience in dealing with COVID.
It's just a surprise that we have seen overnight experts popping up everywhere, right? And it's the same thing, in a positive way, with gen AI: all of us have exactly equal experience with generative AI. It's just about curiosity. Some people suppress the curiosity and say, hey, I don't know anything,
I don't want to learn it. But some people say, hey, I don't know anything, so that gives me the license to actually make a lot of mistakes and get away with it, because I can learn something. So I would say: use the ignorance as a very, very genuine reason to build your curiosity about learning it.
Try out different things, play with them. Don't be shy of talking to the people who may know it better, because pretty soon it's just like the VCR analogy. Some of us are probably of the age to remember that as adults we never figured out how to program the VCRs, and the world over, all the VCRs were always flashing 12:00.
But the kids figured out how to use the VCRs all the time. The kids were smarter, and the elders who actually wanted to put them to use were smart enough to ask the kids in the family, hey, can you help me with this one? I think it's the same thing for us. Some of us who were born digital natives and are going to be faster on the learning curve can actually help a lot of people, and there's nothing wrong in finding people who can help you, age notwithstanding.
And do a lot of experiments. Find out for yourself; don't trust the hype. I would absolutely say stay away from the hype. The hype is both good and bad. It's bad in that it raises unnecessary expectations. But if all you are hearing in the media is negative stories about a dystopian future, then obviously a lot of people are going to get concerned about it, and they will probably do something about it.
So in some sense, hype also has positive value, because it's helping us stay alert about the potential issues. So let it act as your guardian angel, making sure that, hey, are we doing the right thing or not? Are we really staying alert?
Trust your employees at large, especially if you're a firm that does not have tech familiarity. Crowdsource the problem internally. Let everyone be a part of the solution. Don't just impose a solution. Don't bring in an outsider, some new consultant, who's just going to dump a solution on people, because that change pattern doesn't work even without AI. You have to bring people with you. You have to listen to the trenches. You have to work with them. You have to contextualize the changes. You have to articulate the why; that's more important than the how part of it.
A lot of firms mess up that whole process because they focus on "this is what you need to do," but they don't contextualize why we are doing it in the first place. So one is, obviously, treat it like a proper change management exercise. That's number one. Number two, also understand that the technology is not plug and play.
It's not like you just buy a word editor and it's going to work on its own. It's like I was saying to somebody today in one of the LinkedIn comments: it's almost like buying a new puppy rather than a fully trained service dog. If you expect that what you are getting is a fully trained service dog, you're making a mistake.
Instead, you recognize that you are bringing home a puppy, and you work with the puppy: hey, how do I give the right feedback? How do I train it? How do I make it better? How do I earn trust? And over a period of time, you also learn how the puppy grows into the right kind of service dog that's going to help you.
So you grow as partners, basically, because as much as it has to learn how to trust you, you have to learn what its capabilities are, what it can offer, what its idiosyncrasies are, and so on. So give it time, but work with it, and over a period of time you will come to the point where the combo of the human and the AI is really going to do better than either of them alone.
And that is really the opportunity you have. At that point, you have built the foundation for going to the next stage of evolution of your firm, because now you have a closed-loop system: a data-driven or data-informed culture, the right kind of tools, technology, and infrastructure, people who have learned how to work with the technology far better, all integrated into the business context and solving business problems. You're able to build that kind of flywheel, and now you're in a position to leverage it and say, okay, what's next for me?
So I think that, to me, is the true destiny of a firm that's really looking at applying AI. But obviously, don't be shy of taking the first baby steps, because it's only when you start taking baby steps at the base camp that you can actually get to the top of the mountain at some point in time.
Richie Cotton: That's wonderful. I love that analogy of the AI being your service puppy and you've got to train it. So that's brilliant. And I also like that it's quite inspirational, the idea that you should just be curious, play around with these things. If you make mistakes, that's absolutely fine, but just keep learning.
Wonderful. All right. Thank you for your time. Have a good one.
Tathagat Varma: Lovely talking to you, Richie. Thanks for the opportunity.