
How Generative AI is Transforming Finance with Andrew Reiskind, CDO at Mastercard

Adel and Andrew explore GenAI's impact on financial services, the democratization of AI tools, efficiency gains in product development, AI governance and data quality, the cultural shifts and regulatory landscapes shaping AI's future, and much more.
Mar 3, 2025

Guest
Andrew Reiskind

Andrew serves as the Chief Data Officer for Mastercard, leading the organization’s data strategy and innovation efforts while navigating current and future data risks. Andrew’s prior roles at Mastercard include Senior Vice President, Data Management, in which he was responsible for the quality, collection, and use of data for Mastercard’s information services and advisory business, and Mastercard’s Deputy Chief Privacy Officer, in which he was responsible for privacy and data protection issues globally for Mastercard. Andrew also spent many years as a Privacy & Intellectual Property Counsel advising the direct marketing services, interactive advertising, and industrial chemicals industries.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and VP of Media at DataCamp. Adel has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

An interesting exercise we went through earlier last year was finding out where our priorities were. Our priorities are still very much about data quality. The AI basically consumes much more data, much faster, with less human intervention. And so, the quality of the data, the understandability of the data has become that much more important. It’s this feedback loop that is just strengthening the need of just getting the basics right to feed the AI so that we know what the quality metrics are relative to completeness, to accuracy, that is mission critical.

The mindset I'm taking to responsible innovation here is, I want to create new tools. I want to create those better tools, but let's do it slowly. Let's do it in that responsible way. Let's make sure the controls are in place. Let's make sure our partners are there to make sure that they understand their responsibility in this ecosystem.

Key Takeaways

1

While generative AI hasn't unlocked entirely new capabilities, it has significantly increased efficiencies in existing processes, such as personalization and fraud detection, by making them faster and easier to implement.

2

Unhyped or boring use cases, like note-taking during meetings and desktop research, are driving significant value by reducing time spent on mundane tasks and allowing employees to focus on higher-value activities.

3

AI governance, data quality, and cultural adaptation are crucial for responsibly innovating with generative AI, ensuring that AI systems are trustworthy and aligned with organizational values.

Links From The Show

Mastercard

Transcript

Adel Nehme: Andrew, thank you so much for coming on the show. So we're speaking in 2025; it's January 10th today, over two years since the launch of ChatGPT kind of unleashed this generative AI hype that we're in right now. Maybe reflecting on the past two years, Andrew, how would you assess generative AI's impact on the financial services industry?

Andrew Reiskind: Both good and bad. We had actually been using generative AI techniques for building some fraud models and for testing fraud models. And so for us, when the LLMs came out, which are just the latest version of using generative AI, we were used to thinking about generative AI. For example, we had created fake transaction data to test fraud models.

And so for us there was sort of, okay, this is great, this is nice, but as we know, there are always gonna be challenges with adopting a new technology. And we're used to adopting new technology. So when it came out, what it really started doing was taking data science and AI technology and democratizing it.

And so I think that's really been the biggest impact, not necessarily just for financial services, but for society as a whole. It is now very transparent to a larger population what the potential of AI is. AI had been in place and had been used by society and financial services in many ways, but it was not transparent.

I think chatbots are an obvious example, and image generation is another obvious example, of where generative AI has made it much more transparent and available to users. And so within financial services, what it did was start sparking interest at all levels of an organization: how does it work, what can we do with it?

Everybody's been asking lots of questions and doing adoptions in very different ways in financial services, but more holistically, and that is what we are really seeing. I think we're at the beginning of a very long journey, I would say that, Adel.

Adel Nehme: So, definitely a long journey. And I like what you mentioned here on the democratization aspect, because indeed, generative AI and, you know, data science and machine learning technologies in general have been part of the toolbox for quite a few years now for many organizations. But it's interesting to see that ChatGPT has indeed democratized AI in the hands of the masses.

And when you look at the latest generations of LLMs, you look at even the commercial chatbots that you mentioned here: what capabilities have they unlocked for a company like Mastercard that simply weren't there before?

Andrew Reiskind: The interesting thing I would just say is, I don't know that it's unlocked anything that wasn't there before. What it's done so far is create efficiencies, whether the efficiencies are in some of the personalization work that we do. Taking an example, we have a Shopping Muse product for personalizing offerings on our e-commerce websites.

We've done personalization using AI before, but it now makes it that much easier and faster to take image to text, text to image, to turn product catalogs into something searchable within an LLM. That hadn't been there before; we did that, but we did it in different ways. Same thing with the fraud tools that we built.

We've been building fraud tools for over 20 years; this just made it easier to build some of them. Did it increase efficacy? Yes. But every year we've been increasing efficacy; every year we're deploying new technology and new data to get there. So I don't know that my answer to you is, yes, it unlocked something we were never able to do before.

But what I would say is it's enabled a lot of efficiencies within the product developments lifecycle within actual deployment of products and for our own teams.

Adel Nehme: Maybe focusing here on the last part, on your own teams: you know, how would you characterize usage of generative AI at Mastercard, even by frontline workers, folks in business functions? Thinking here about commercial tools like Copilot or ChatGPT or something along those lines.

How has that maybe unlocked value?

Andrew Reiskind: It certainly made all of our lives easier, I would say that, and I'll give you easy examples, right? Like many other companies, we've adopted it for coding purposes, for the engineering teams. So that piece has been unlocked. The marketing teams use it for creating taglines, for creating images. We have a chatbot available to us that can search a lot of our internal materials and summarize it, so that's unlocked efficiencies. One of the ones that I love is, you can walk in halfway through a meeting and the chatbot will actually automatically just offer up: can I summarize the meeting for you up to this point?

And do you want detail on a specific piece you missed? Okay, walking in halfway might annoy your colleagues, but okay. So I would say those are the kinds of tools that we've already adopted for those internal efficiency pieces, in addition to some of those product examples that I was laying out for you.

Adel Nehme: You know, I saw you recently talking about unhyped or boring LLM use cases that drive value in financial services today, and I think it's worth unpacking here a bit. Gimme examples of what you would define to be an unhyped or boring use case, and what percentage of total use cases should they represent as part of a company's portfolio?

A company like Mastercard, for example. I'd love to hear your perspective here.

Andrew Reiskind: On the unhyped side, it's these internal efficiency use cases, like the one of note-taking during meetings. The project and program managers are in love, I would just say that; all of them couldn't wait to get their hands on this. The ability to do desktop research has gone from four hours down to one hour.

And so it's just all of those pieces that are making our employees' lives better and easier, allowing them to not have to do as much, I'll put it nicely, grunt work, and enabling us to get the value out of them, right? We hire our employees not as much for their hands as for their brains, and this unlocks their brains, and that's what we want our employees to do.

Portions of portfolios: I think that is a journey most organizations are very much on, where we are unlocking that internal efficiency first; that is the core, but that's shifting over time. The next layer is: what are common processes across multiple organizations? An easy example is marketing.

You're seeing a lot of tools out there to create marketing campaigns, to make marketing campaigns automated and get automated feedback on a regular basis. As those tools become more commercially available and we don't have to build them ourselves, that again expands what our portfolio is.

The third portfolio is what I would call longer-term investment, where we're on the beginning of that journey and deploying pieces of it; our product onboarding assistant that we've announced, and a chatbot for small businesses that we're piloting, are two easy examples. That is slowly but surely growing.

And so I would expect, over time, the high-value piece being from that third bucket of net-new products, much improved product offerings, easier-to-use product offerings, product offerings with better UX. That piece of the portfolio has got to expand. One would hope that becomes the 80%; right now, it's less than 20%.

I think that's a journey. With previous technology, I would've said, oh, it's gonna take five years to get there. I don't know that that's the case. I think the rate of adoption, the speed with generative AI, is increasing at a much faster rate than expected. As we're seeing reasoning happening, as we're seeing the ability to do math happening, maybe within two years our portfolio shifts.

I think right now our perspective on this is we need to be nimble. We cannot have a set mindset with regard to this, and we have to be in a continual experimentation mode with regard to it, ready to adopt.

Adel Nehme: It's fascinating. And you know, as CDO of Mastercard, right, you're steering this massive data and AI ship, and you've been steering it for a long time. You've mentioned that you've been working on generative AI use cases even before this hype cycle that we're in right now. And I often think about what it means to be a CDO in November of 2022, when you're already in the thick of a data and AI transformation.

And then ChatGPT lands, and suddenly the board is asking you questions: what are we gonna do about generative AI? So on and so forth. Given this backdrop, and what you just described now about that portfolio of use cases, how do you as a CDO keep focused on making sure that you're making the most of this transformation right now?

And that you're aligning teams with creating value with these technologies. And what are some of the unique challenges of working with generative AI as opposed to other technologies that you've had to adopt? 

Andrew Reiskind: So as CDO, I'm responsible for the data itself, raw. And then I'm also responsible for making sure it is used appropriately within the AI tools, and that it is fit for purpose for the AI tools. So, for example, I have data quality, data sourcing, and AI governance on that usability side of it. But I also have data governance, data management, master data management, things like that.

So when I look at it from the data perspective, I have two issues. One of which is, I now have a new set of data, of unstructured data, that I need to govern and think about how I govern. And the other piece is, I have a new set of tools to do governance with. So on the new data assets to govern: fortunately, alongside structured data assets, we have unstructured data assets that are curated data assets.

So taking an example: we have many manuals and publications that we provide to our banks, issuers, and acquirers in order for them to connect to our network. We're a network-based business and most of our business is B2B, so I really have these curated, unstructured data assets that I could deploy readily.

I know how to manage them, and so we are in a fortunate position because of that. Relative to the unstructured data assets, I would say a lot of companies probably aren't in that same position; you're really only dealing with, let's say, emails, which range from good to bad, and even us, to a lesser extent, we need to level up. Relative to the tool set: I was just doing demos yesterday with new tools on how do I apply generative AI to my own data assets, to summarize them, to create the glossaries, to create categorization, to create classification.

It makes life so much easier, right? I can speed things up. I can get to data assets so much faster. You're hearing my excitement; this is really exciting for me, that I can now improve that which I do. Some of my data governance teams don't have to manually classify things.

No, no, no, I'm gonna feed it to the generative AI; it's gonna do a good job. So I would say that's my data side of the equation. On the AI side of it, I'm looking at it from that AI governance side and that usability side. And I would say an interesting exercise we went through earlier last year was finding where our priorities are, and the interesting piece is:

Our priorities are still very much about data quality. The AI basically consumes that much more data, that much faster, with less human intervention. The quality of the data, the understandability of the data, has become that much more important. So it's this feedback loop that we're getting that is just strengthening the need of getting the basics right to feed the AI, so that we know what the quality metrics are relative to completeness, to accuracy.

That is mission critical. I would say the other piece to this, though, on the AI governance side, though very much a work in progress, is: where are the standards for explainability? Where are the standards for transparency and accuracy? The English language is not an accurate language.

We all talk past each other on a daily basis; that is just inherent in the English language, and in any other language, honestly. And so as we're doing this, the standards that we used to apply in AI governance have to change, have to transform; the tools that do accuracy analysis have to transform.

I think you're probably aware that, from an accuracy perspective, people use QA.

It doesn't make me feel comfortable as the Chief Data Officer, but I also understand that we're at the beginning stages there. So I would say that is a big thing on the AI governance front that we are looking at, because six years ago we had launched our data responsibility principles and launched an AI governance program.

And so we're very much used to putting AI governance on our tool sets. But right now, on the AI governance side of generative AI, we're very much still feeling our way through it, understanding it, learning, working with others. We're happy to work with others in generative AI to build out the standards, build out the tools, and work with new partners to build that out.

But we're at the beginning of a very long journey there. So I would say that's a piece that feeds into your challenge question.
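The auto-classification workflow Andrew describes, feeding data-asset descriptions to generative AI to produce categories and glossary entries instead of manual tagging, can be sketched roughly as below. This is a minimal illustrative pipeline, not Mastercard's actual tooling; the asset names, category labels, and the keyword heuristic standing in for the model call are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataAsset:
    name: str
    description: str

def classify_assets(assets: list[DataAsset], classify: Callable[[str], str]) -> dict[str, str]:
    """Attach a category label to each asset.

    `classify` would normally wrap an LLM call that maps a free-text
    description to a governance category; it is injected here so the
    pipeline stays model-agnostic and easy to test.
    """
    return {a.name: classify(a.description) for a in assets}

def keyword_classifier(description: str) -> str:
    # Deterministic stand-in for the LLM, purely for illustration.
    text = description.lower()
    if "transaction" in text or "spend" in text:
        return "transactional"
    if "manual" in text or "publication" in text:
        return "curated-unstructured"
    return "uncategorized"

assets = [
    DataAsset("switch_feed", "Daily card transaction records"),
    DataAsset("network_manuals", "Manuals and publications for issuers"),
]
labels = classify_assets(assets, keyword_classifier)
# labels == {"switch_feed": "transactional", "network_manuals": "curated-unstructured"}
```

Swapping `keyword_classifier` for a real model call is the only change needed to move from this sketch toward the kind of glossary and classification automation described above.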

Adel Nehme: Well, what strikes me from what you're discussing is that there's a lot of foundational work that needs to be set in the governance, the quality, so on and so forth. Then there's also, sure there's a lot of pressure on, okay, how do we make sure that we operationalize this technology in the wild and like drive value from it?

How do you balance that trade off? 

Andrew Reiskind: That is my job. As I define my job, historically it was innovating responsibly with data; now it is innovating responsibly with data and AI. So my job has expanded, and there is a balance to this. So just taking some of the things that we've piloted out there, and even the conversation I was having last week, as we're using generative AI:

It is, where is the testing to make sure that this works? Where are the controls to make sure that this works? I am not stopping anybody from using it. So, just as an easy example, we as an organization made a decision: we're not blocking people from leveraging gen AI tools. However, we are setting controls to say, we're gonna monitor what you're doing to make sure your confidential information isn't going where it shouldn't. And now, slowly but surely, as we're launching products, it is a continual conversation with the product teams. So we actually have a couple of things out in pilot and in production where we are saying: what are the controls? And as a B2B2C company, sometimes the controls are easier to implement. So as an example, we offer a lot of macroeconomic trend information out to our customers.

One of them is called Market Trends, just to understand what your market is looking like as far as spending trends in specific categories among specific types of cardholders. And for that one, we've taken what was static information and put it behind a chatbot, so somebody could do English-language queries against it.

We did lots of testing, QA for accuracy, secondary models to monitor outputs. But at the end of the day, we're testing it out with real customers. The nice thing is, those are banks. They're responsible parties. They understand that. They can call us and say, there's a problem here. They can test it. Great, let them test it. These are controlled places to do the testing.

So I would say, I want to create the new tools. I wanna create those better tools, but let's do it slowly. Let's do it in that responsible way. Let's make sure the controls are in place. Let's make sure our partners are there to make sure that they understand their responsibility in this ecosystem, because they want the benefit out of it. My point about democratization:

It'll democratize the benefits of generative AI for the bank's employees if their employees can get access to the information more broadly, and they don't just have to use data analytics people to get at it. So they understand the benefit for them; they are equal partners in this conversation. So we'll just use that as one example of how we're thinking through it, how we're deploying it, and how we're doing this in this responsible way.
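Andrew mentions pairing QA for accuracy with secondary models that monitor chatbot outputs. One very simple form of such a guardrail is checking that every numeric claim in an answer actually appears in the source data the chatbot was grounded on. The sketch below is illustrative only; a production monitor would likely be a secondary model rather than a regex, and the function and field names are invented for the example.

```python
import re

def numbers_grounded(answer: str, source_rows: list[dict]) -> bool:
    """Return False when the answer contains a number that does not
    appear anywhere in the source data, a cheap hallucination flag.
    """
    # Extract numeric claims like "4.2" or "17" from the answer text.
    claimed = set(re.findall(r"\d+(?:\.\d+)?", answer))
    # Collect every value present in the grounding data as a string.
    known = {str(v) for row in source_rows for v in row.values()}
    return claimed <= known

rows = [{"category": "groceries", "yoy_growth_pct": 4.2}]
assert numbers_grounded("Grocery spend grew 4.2 percent year over year.", rows)
assert not numbers_grounded("Grocery spend grew 7.5 percent.", rows)
```

Answers that fail the check could be withheld, rephrased, or routed to a human reviewer, which is the kind of control layer described for the Market Trends pilot.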

Adel Nehme: What's interesting from what you're discussing here is the innovating, and leveraging the partner ecosystem to also get feedback, right? It connects back to what you mentioned about needing to stay nimble no matter how this technology evolves. And this actually connects to a report that I think your team put out on how to think about generative AI use cases across industries. The categorization that you have is informed AI use cases, perceptive AI use cases, and proactive AI use cases. In a lot of ways, this reminded me of the different levels of analytics in the early days of data science.

Can you expand on these different categories of use cases and how you think about them?

Andrew Reiskind: So, informed: I think the easiest way to think of it is a simple chatbot, asking these LLMs questions, and depending upon the subject, they become subject matter experts for you. For our use cases, we're building out tools so that people can come to us and just self-service information about our security manuals, our franchise requirements, so that they understand, in a much simpler way, here's how you need to work within our network requirements, instead of having to call up a customer service agent.

So you could do this at two in the morning when you wake up thinking about a problem. It also makes it more efficient for a customer and more efficient for us. So that is what we would otherwise call informed. Perceptive is multimodal: how do you take multiple kinds of information and feed them together to get to a more synergistic approach and a synergistic result?

And so in our space, I think this is still very much in development: how you take, for example, structured data, which we have billions of, to put it nicely, all of our transactional data, and combine it with an English-language kind of modality. Math and English talk past each other much of the time, and so you have to think of this as a little multimodal.

How do I do that to democratize structured data? How do I do it to come up with better analyses, but also do this in other ways, shapes, and forms, because we've tried this before, with varying degrees of success. But how do you tie in images, when you walk into a store, with human information, in order to make it a better shopping experience?

I will tell you, not something we did, but a personal experience that I'm absolutely loving: my supermarket instituted smart carts where, as you put items into your cart, it scans them and just tallies them. At the end of the shopping experience, you pretty much go through and get a QR code, and you just pay off the QR code. And when there are questions, there's a tablet at the end, and the customer service agent sees a video, it's your hands only, so it's totally, nicely private, and they can see what you put in and what you took out, to double-check if there's a question. Oh my, this is multimodal. The thing's smart enough to understand what I'm putting in and taking out. It's got a scale, so it's also pulling in scale information. It clearly has product information going in. This is a great example of that perceptive AI from the consumer experience standpoint.

I'm sure on the back end, for the store, it creates a huge efficiency. So those are the kinds of consumer e-commerce experiences that we wanna power with perceptive AI, and we're seeing examples of it. The next piece I would say is the proactive one: how do we take all of those experiences and create it so we actually guide you, or anybody, through where you want to go, leveraging multiple data sets? I think many of us are used to routing when you're driving, with some level of prediction as to where you should drive during rush hour, not just where you are today. But imagine that with travel planning. I'm personally going, for the first time, to somewhere warm.

I would love to be able to pull in, from my iPhone, all these pictures of the kinds of things that I like to do and see, so it would be able to do that and then tie that into images from Mexico City, of a consumer traveler experience, to then give me feedback: here are suggested places you should go.

And by the way, here's how to route it, because half of these museums are closed on Monday, because typically museums are closed on Monday, and here are your travel dates and here are the planes; pull that all together to give me not just an itinerary, but actually start booking it and getting there.

So I would say that is closer to what I think people are calling agentic, in certain ways, but to me it is a little broader than agentic, because agentic could still be within a single mode. The idea here is to get you to a multimodal approach. To my point, images capture so much about me as an individual, in the pictures that I'm taking, and that's probably true of others, and you do that with text and with other sources of information.

Adel Nehme: And a lot of people are talking about 2025 being the year of agentic use cases, the year of agents. And it strikes me from what you're discussing here that it's similar to the different levels of analytics, where you require some level of maturity along the spectrum to be able to operationalize different levels of use cases at scale.

And I think the majority of use cases I've seen today, from most startups and organizations, lie in the perceptive and informed AI categories, majority informed, some perceptive. How long do you think we need to wait for maturity on the proactive side, to start seeing real use cases that drive value in these categories?

Andrew Reiskind: We're already seeing the value, or already beginning to see it; my shopping cart experience, people are launching those kinds of products. I think for the proactive one, that's gonna take a little time, till we all trust how it works together. That being said, this kind of goes to part of the concern of why people are slow to deploy: they're concerned about accuracy.

They're concerned about reliability. There are all those levels of concerns that people rightfully have before you could actually deploy to market. But therefore, are there use cases where you're gonna have a human in the loop? My bet is those are the ones where you are going to actually see faster things happen, because I've seen some medical use cases that give information to a doctor.

And the doctor is that subject matter expert who can then take that information and work with it. And to me that's a good example, because you have a subject matter expert as a human in the loop to double-check those kinds of outputs before actually deploying them. So I imagine we will start seeing those in the next year or two.

I think true, full-on consumer experience stuff is more in the three-to-five-year plan. But that being said, as a guy who's lived through various generations of technology deployment, this could happen sooner than what I'm used to from prior generations of technology.

Adel Nehme: You mentioned the human-in-the-loop thing, and I think that's actually quite central as a pillar, right, in how you roll out generative AI within the organization. You've mentioned those foundations of AI governance and data quality, but I think people and culture is also a major pillar in succeeding with these technological transformations.

Maybe walk me through the cultural changes that need to occur, and the skills transformation that needs to occur, to be able to drive value at scale with generative AI in an organization like Mastercard.

Andrew Reiskind: I would say, and I will acknowledge this, that I feel very fortunate: at Mastercard as an organization, everybody has soaked this in, it is just part of our DNA, trust is our product. We have no product without trust. If you do not trust using our products, using your card or your device at a store, you're never gonna use it.

And therefore we have no business. So it really is a business imperative for us to have trust and engender trust. And therefore the AI governance conversation is an easy conversation to have within this organization, because everybody understands: oh, I need trustworthy AI. AI governance is the methodology by which you get there, because it is just an approach to thinking about deploying the AI in the right way, because it isn't just about compliance with laws.

For us, that's baseline. It really is about that efficacy, the non-discrimination kind of thinking. So that, sorry, I grew up in the Bronx: if I'm in the Bronx, I don't get different answers than now that I live in Westchester, because there are different geodemographics where I grew up than where I'm living now.

And so those are important to us because we don't want those kind of negative impacts to populations. And so I would say for us culturally, that was an easy thing.

We were doing AI governance and are doing it, so that now, with generative AI, people get it and they understand those concerns. I think part of the cultural piece, though, is different than structured data: it's for us to all reflect on our own language, and how our own language impacts people or how our own language changes people. I mean, it's funny being on a call with some of the British people; it's quote-unquote the same language, but we can still talk past each other, right?

There's terminology. One of my guys used the term "in train," and everybody's like, in train? Wait, where's the train of thought?

This is an example. And so some of those cultural things are not necessarily the obvious cultural things. With what's coming out from these chatbots, well, maybe it is geared to American English, but we as Americans wouldn't think about that. So I would think some of these cultural things we are all going to have to figure out together.

And one of the big things that we've actually been looking at, even from that culture perspective, is we operate in 200 countries, and as a result, English-language chatbots are not gonna work in 200 countries. How do we get to languages in multiple countries, and how do we get to a technical language that is used in the payments industry or financial services industry versus consumer language?

Those are two different things. So when you say culture, I agree; I just think every company, wherever you're situated, is going to face some different cultural issues. If you're a company that runs on "move fast and break things," your culture is gonna be completely different; your risk tolerance is gonna be completely different than where we are as a financial services company, where our brand is built on trust.

You are gonna be very different if you are just based in the US than if you are just based in France: very different cultures, very different perspectives you're gonna have to take to it. So I would say we have our journey, our experience, happy to share, but I do think anybody else is gonna have to internalize that and make it work for their own company.

Adel Nehme: Yeah, completely agree. And maybe, when you think about how you make the use of AI fit the culture of a company like Mastercard: how do you empower people to use AI in a responsible way that drives value for the organization and for their own workflows, but also make sure that you keep that notion of trust central while scaling the use of AI?

Andrew Reiskind: What we did, as I mentioned, is we allowed employees to have access, and said don't use confidential information; we then deployed internally a tool so they can use LLMs with confidential information. And so we are empowering our people. We're gonna keep adding in functionality, and one of the conversations is: okay, what are the platforms you're deploying to enable your people?

All of that is for allowing experimentation, and we are trying to enable as much experimentation as possible. What we then have is an intake process we built out to start getting all the ideas in, and so we had a couple hundred ideas in, as an example, and then we're filtering through: where's the value, where are the redundancies?

We can see the redundancies among these. Not surprising, right? And so when we get to those ten, we say, okay, who can work on them within our standardized product development process, which we call Studio, that starts looking at questions such as: do we have the data? Where are the challenges? Where's the market demand? What's the level of effort?

There's a standard list of questions that we go through, and also standard stages, because we first do some market research before spending time and energy on it. We also do some pilots: piloting internally, piloting externally. It doesn't really matter what the actual steps are; it is that you should have some formalized process that gets you to understand what you should actually be spending the time, money, and resources on, in order to actually get it to a full product.

That product lifecycle includes privacy by design, includes security by design, includes AI governance. Those are preexisting processes that we have built into product development, so we just deploy them as an ordinary course once it's in our product development lifecycle. So again, we are fortunate that we already have existing processes, and we were just trying to figure out how to take experimentation down through a funnel.

It would be too much to take a hundred things and put them into a full product lifecycle management process. So we just had that upfront piece of, okay, where are we focusing? We built that out and are deploying it more recently, to make it easier for all of us to track and get ideas in.

But my bet is most companies, a lot of companies, do have formalized product development lifecycle processes, and they should leverage them. That would be my suggestion. Let a thousand flowers bloom, then go figure out how you winnow down to which ones you actually turn into the bouquet.
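The intake-and-funnel step Andrew describes, scoring a few hundred submitted ideas against standard questions (do we have the data, what's the market demand, what's the level of effort) and keeping only a top handful for the formal product lifecycle, could be sketched roughly as follows. The criteria, weights, and names here are invented for illustration, not Mastercard's actual process.

```python
# Toy sketch of an idea intake funnel: score each idea against
# standard questions, then shortlist the top candidates.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    has_data: bool        # do we have the data?
    market_demand: int    # 1 (low) to 5 (high)
    effort: int           # 1 (low) to 5 (high)

def score(idea: Idea) -> float:
    """Simple illustrative score: demand per unit of effort."""
    if not idea.has_data:
        return 0.0        # no data, no project
    return idea.market_demand / idea.effort

def shortlist(ideas: list[Idea], top_n: int = 10) -> list[Idea]:
    """Rank all intake ideas and keep the viable top few."""
    ranked = sorted(ideas, key=score, reverse=True)
    return [i for i in ranked[:top_n] if score(i) > 0]
```

A real process would weigh many more questions, but the shape is the same: a cheap upfront filter so only a few ideas enter the expensive full lifecycle.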

Adel Nehme: We're talking about people here, and how you make sure people feel empowered, as one pillar. Another pillar that you mentioned is AI governance, right? We'll switch gears a bit and talk about AI governance. You come from a pretty heavy privacy and legal background, and I think not many CDOs know the regulatory and governance dimensions of succeeding with AI and data as much as you do.

You already mentioned some of the pillars of a strong AI governance program that you have at Mastercard: explainability, interpretability, and so on. But maybe, how did that background help you define the roadmap for success when it comes to gen AI and data science at Mastercard, and how does it inform your day-to-day decision-making when you think about the roadmap?

Andrew Reiskind: I think it gave me a lot of credibility in the conversation when I said we have to let everybody experiment, because I was actually a strong advocate where there were countervailing voices, as you would imagine, about, wait, we're gonna let people have access to these chatbots? I explained where we can build in the controls, how we implement controls to allow for that experimentation in a responsible way.

And then also getting people comfortable as we move to deployment and commercialization. All of that to be said, there's a certain risk tolerance, and part of my job is managing that risk tolerance for the organization on things like that. And so part of that is I question the risk tolerance at times, and how we're implementing the risk tolerance.

Security is an example of one of my big partners. So I'm constantly sitting there with them going, okay, where's your framework for security on generative AI? How are we doing? What kind of testing? Where are the tools to do it? What are the tools you need? So I would say my background actually built in that level of credibility and trust.

I wasn't gonna be the cowboy in the room. I think if you had somebody who came across as a cowboy, people would be very resistant to go along, because they'd be a little fearful. Here, there was not that fear factor. Instead it was very much, okay, you're gonna do this, but how much does that slow us down if we're gonna do the experimentation?

And that is always a balance. I don't wanna slow people down, but on the other hand, I can't launch a chatbot that was not tested for security purposes, right? That's just irresponsible on all of our parts. So I could have that as a balanced conversation, and people would listen to me. So I actually do credit having that background, having that level of trust within the organization, having the buy-in from my partners, who knew they could trust me on that.

Adel Nehme: Right, and we've partnered with a lot of organizations who don't have access to LLMs due to privacy risk concerns, and so on. And we always hear from them, from the frontline workers especially: I wanna convince my boss to give us access. What are some of the arguments that you've used in those conversations to make sure you're able to create a culture of experimentation within Mastercard?

Andrew Reiskind: I would say, first of all, we can start with the privacy one: we already have standards for data minimization, that personal data is never used in any process unless explicitly necessary. We use indirect identifiers, we do aggregation, we do a lot of things like that. So, okay, people just have to follow what they've already learned about data minimization.

We're not changing anything; we just have to reinforce what we're already doing. So that's one of the easier conversations. The other piece is, as I said, we have a product development process already in place. We're gonna rely on it. We're gonna double down on it. We're just gonna figure out what we actually have to do to strengthen it and where the weaknesses are.

So for example, like I was saying, the testing of a chatbot at the end of the day. Well, okay, how are we testing it? Is it just QA pairs? No, we're gonna have a secondary model monitoring the outputs to see if there's a problem. Okay, fine. Good. So it is those kinds of conversations. Okay, what's your concern?

Let's get the concerns out on the table, and then it's: how do I address each one of your concerns? Because you can probably go through that ordinary list: security, privacy, confidentiality, regulatory. Those are the top ones that come to mind, and you can address each one of them.

A hundred percent? At a hundred percent level, no. But I would tell you, with existing analytics, you couldn't address that at a hundred percent level either. If you walk out the door every day, you are assuming a risk, right? That's just living life, but you're gonna live life. And so in order to run a business, you're always accepting some level of risk.

Where is that level of risk? What's the risk tolerance? What's the risk you're trying to address? Here are the controls against the risk. I think those are healthy debates, healthy conversations, because different companies, different cultures will have a different point of view as to where that risk culture is and where that line of risk is.

And different levels of sophistication in how to manage the risk. Those are good conversations. Go have them.
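The "secondary model monitoring the outputs" that Andrew mentions is a common guardrail pattern: a primary chatbot's answer is screened by an independent monitor before it reaches the user. Here is a minimal, hypothetical sketch; the function names and blocked-term list are invented, and the keyword screen is only a stand-in for a real safety classifier or LLM-as-judge.

```python
# Toy sketch of the secondary-model guardrail pattern.
def primary_chatbot(prompt: str) -> str:
    """Stand-in for the production LLM generating an answer."""
    return f"Here is an answer to: {prompt}"

def secondary_monitor(answer: str, blocked_terms=("card number", "ssn")) -> bool:
    """Return True if the answer is safe to release.

    A real deployment would call a trained safety classifier or a
    second LLM; this keyword screen is only a placeholder.
    """
    lowered = answer.lower()
    return not any(term in lowered for term in blocked_terms)

def answer_with_guardrail(prompt: str) -> str:
    """Generate a draft, then let the monitor approve or block it."""
    draft = primary_chatbot(prompt)
    if secondary_monitor(draft):
        return draft
    # Fail closed: never release an answer the monitor flags.
    return "I can't share that. Please contact support."
```

The key design choice is that the monitor fails closed: anything it flags is withheld, which matches the risk posture Andrew describes for a trust-based brand.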

Adel Nehme: You mentioned something at the end there: one of the components is regulation. I'm sure you're following the state of AI regulations today. We have the AI Act in Europe, and we have many state regulations popping up as well in the United States. How do you view the regulatory landscape today when it comes to AI?

What should be strengthened and what should be relaxed in your point of view?

Andrew Reiskind: So I think we are at a very early stage relative to AI regulation. I think the regulators are figuring out what is needed, what is appropriate, what they can leverage from other places. So, for example, when they're concerned about discrimination from AI, a lot of the time they realize, oh, we already have anti-discrimination laws.

Why do I need new laws? So I would say the regulators are very much at that learning stage, and they're also learning what the risk with AI is, but how do I not shut off the innovation spigot? Because I actually do think in most cases the regulators are concerned about that innovation spigot.

Because they understand how AI can help their populations, help their economy. They don't wanna be an economic backwater. So I would say there actually is an interesting and very useful dialogue on that regulatory front. From our perspective, we believe that any regulation has to be principles-based, which actually works very well because we have principles.

Please, you know, take off from our principles. It actually allows for innovation, because the principles can change over time. We also think any regulation should be risk-based. What are the potential harms you're really worried about? Is it physical harm? That's one question.

Is it some kind of psychological harm? What is the question there? Let's get some of those harms defined and build the regulation around that. I would say that self-driving cars are quite different than my planning a vacation. Two very different kinds of risks. Let's make the regulations fit for purpose.

And then the next piece is, where are the standards? So, you know, Europe can say we need transparency, but what is transparency in the context of a chatbot? That's to be determined; that's the conversation we need to have. How do you define discrimination? For those of us who are used to the credit space, we are very much used to a standardized way of looking at discrimination.

There are like two or three kinds of tests, and there's a lot of math behind it that you can get to. I don't think that works with the English language, right? As we were discussing about how language can work. So what does an anti-discrimination test look like for a chatbot? Where are the standards that you could deploy for that? We are all, and I say this including academia, working on this. We partner with academia, we partner with thought leaders, to try to develop those standards, 'cause honestly, it'll allow us to innovate that much faster if we know where the guardrails are.

But it will also get to better consumer adoption, because honestly, you wouldn't get on a plane if you didn't think the FAA had standards for the safety and soundness of that airplane, right? As we've seen over the last year or two. And so we want consumers to have that full faith and confidence in the AI that they're being exposed to and that they themselves are using.

How do you build that trust? Some of the standards help get to that level of trust. So I would say those are ideas relative to where that regulatory landscape is. And the interesting thing, I would come back to, is that the regulators are aware of it. Colorado, when they did what they did, said, we likely have to come back and revisit this.

We know this is just a first stake in the ground, and it's probably gonna have to evolve and change. So it's interesting. I'm not used to legislators thinking like that, talking out loud like that, and acknowledging that. And it's actually a great dialogue for us as organizations, and probably many stakeholders, to have with the regulators: those kinds of conversations about, well, what does good look like?
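For readers curious about the standardized discrimination tests Andrew alludes to from the credit space, one widely used check is the disparate-impact (four-fifths) test: compare approval rates between a protected group and a reference group, and flag ratios below 0.8. A minimal sketch, with made-up applicant counts, assuming this is the kind of test he has in mind:

```python
# Sketch of the disparate-impact ("four-fifths") test used as a
# standardized discrimination check in credit and hiring contexts.
# The applicant counts below are invented for illustration.

def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants who were approved."""
    return approved / applicants

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    return rate_protected / rate_reference

protected = selection_rate(approved=30, applicants=100)   # 0.30
reference = selection_rate(approved=50, applicants=100)   # 0.50
ratio = disparate_impact_ratio(protected, reference)      # 0.60

# Under the four-fifths rule, a ratio below 0.8 flags potential
# adverse impact and warrants further review.
flagged = ratio < 0.8
```

Andrew's point is precisely that no equivalent of this clean, numeric threshold yet exists for free-form chatbot output, which is why standards bodies and academia are still working on it.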

Adel Nehme: I mean, you mentioned early in the episode how fast the rate of change is. We now have much stronger reasoning models, much stronger models that can do math and coding. And if that slope of progress continues over the next two years, right?

That puts a wrench in a lot of existing regulations today about what it means to have an intelligent system, for example, given current capabilities. Maybe as we end, Andrew, I'd love to ask you a few rapid-fire questions, if you don't mind, on how you think about the future of the AI industry, right?

I'd love to get some quick thoughts from you. First question: ten years from now, how will GenAI stack up compared to other technological revolutions in the financial services industry, whether the internet, crypto, or cloud? I'd love to see what you think here.

Andrew Reiskind: Crypto and cloud, I think, are examples of things that most people are not engaging with, right? They're on the backend side in many ways, or very specialized. I actually view it much more like where we are with smartphones and smart devices: these are gonna be things that we engage with on a daily basis, that will transform how we engage with each other, how we engage with information, how we consume entertainment.

So I actually think this will become much more embedded in each and every one of our daily lives and change how we live.

Adel Nehme: Okay. Second question. You mentioned that AI has been democratized, right? And in many ways AI is a democratizing force. A good example: I code much more often now that I have a chatbot that helps me code, right? I analyze data much more now that I have a chatbot that helps me analyze data. And it will change how we work.

So maybe over the long term, how do you see the future of work evolving and what will be the main skill that differentiates anyone in the job market?

Andrew Reiskind: I think each of our jobs hopefully is going to get easier and more interesting. So I don't write an email; I give it a couple of bullet points and it creates an email for me, and that saves half an hour of bulleting and structuring an email. Thank you. I mean, yeah, it'll just make my job that much easier. On the skill sets:

Therefore, like I was saying, creativity is going to be that much more important. Baseline critical thinking is that much more important. I just had somebody join my data governance team; I was talking to her yesterday and she said, well, should I feel out of sorts that I was basically an English lit major in college?

I said, no. Do you have critical thinking skills? That is what I'm gonna ask you to do: think through the problem, come up with a solution for the problem. I think as things get democratized, a lot of these technical skills are going to become less important. To your point, you don't need to learn to write R and Python anymore.

You can write in one and it'll translate for you. And you're gonna get to the point where you don't need to learn how to write any software code. So some of these technical pieces of knowledge are no longer gonna be required, and therefore the basics, like critical thinking, matter that much more.

Adel Nehme: Okay, great. And then third question: what is the future that excites you about generative AI, and what is the future that scares you?

Andrew Reiskind: As you could hear, the excitement is about simple things like that shopping cart, right? It's going to transform and make my life more fun, is what I expect, and more interesting. It'll make entertainment not something that I just consume, but something I interact with more. So I actually am very much looking forward to that side of it.

I think on the scare part, we're all scared about, you know, Skynet. I think that is always a bit of the fear: can somebody use technology in a nefarious way? I would say that's already the case. We already have debates about how technology is used today. Facial recognition is probably a great example of something that people are concerned about.

Is it too intrusive? My concern gets ameliorated as long as we are able to have those open and rational debates about what a society should adopt and what it should control. My fear would really only arise if it's all hidden behind the scenes and it's not an open debate for society.

Adel Nehme: Okay. And then final question, what are three books that helped shape your career and journey so far?

Andrew Reiskind: So, on this one, I had given it a little bit of thought, and I would just say, I'm a sci-fi geek, always have been. And I really don't like reading those management-advice kinds of books; those really just aren't who I am. I find them very shortsighted in the sense of a span of history, a span of time.

And so I would just say one of my favorite books growing up, and I like watching the movie but always feel like, nah, it still doesn't do the book justice, is Dune. It's about power structures. It's about lifecycles of history over a thousand-year period. It's about genetics. It's about limited resources.

It's about all these themes.

Adel Nehme: Yeah, incredible. An incredible set of themes.

Andrew Reiskind: And so you sit there and go, we should all be thinking about that and applying it in our own lives. And that's why a book like that excites me, interests me. I would read it a dozen times; I probably read it more than a dozen times as a kid, along with all the spinoffs from it.

But it is one of those things that you can still come back to, because it still raises ideas and broader themes for you. So I just use that as an easy example. I also love history books. A book that, again, I read a dozen times was The Rise and Fall of the Great Powers by Paul Kennedy, which was very much about how countries projected themselves, all dependent on economic power and how economies grew during the industrial revolution, and how that was expressed in military power and political power. So again, grand themes of history, which I actually think give you a broader perspective than just worrying about Who Moved My Cheese.

Honestly, for me personally, I'd rather think, to your point, about where we're gonna be in ten years. That's an interesting question. How do I get to where I wanna be in ten years, to that vision of interactive entertainment, just as an example? What are the building blocks, and where is the larger societal piece that gets us there?

I offer those up as examples of books that got me excited. I know other people will have different points of view.

Adel Nehme: Okay, perfect. And finally, Andrew, any final call to action before we wrap up today's episode?

Andrew Reiskind: Keep an open mind at all times. Just listen, learn, absorb. In my career, as you probably saw from my resume, I've changed jobs, roles, and what I do multiple times. It keeps life interesting. Generative AI will keep our lives interesting. Just be open to it.

Adel Nehme: Okay. Amazing. Andrew, thank you so much for coming on DataFramed. 
