Generative AI in the Enterprise with Steve Holden, Senior Vice President and Head of Single-Family Analytics at Fannie Mae
Steve Holden leads a team of data science professionals at Fannie Mae, supporting loan underwriting, pricing and acquisition, securitization, loss mitigation, and loan liquidation for the company’s multi-trillion-dollar Single-Family mortgage portfolio. He is also responsible for all Generative AI initiatives across the enterprise. His team provides real-time analytic solutions that guide thousands of daily business decisions necessary to manage this extensive mortgage portfolio. The team comprises experts in econometric models, machine learning, data engineering, data visualization, software engineering, and analytic infrastructure design.
Holden previously served as Vice President of Credit Portfolio Management Analytics at Fannie Mae. Before joining Fannie Mae in 1999, he held several analytic leadership roles and worked on economic issues at the Economic Strategy Institute and the U.S. Bureau of Labor Statistics.
Holden holds a Bachelor of Arts in economics from the University of Toronto, a Master of Arts in economics from Queen's University, and a Doctorate in economics from The Johns Hopkins University.
Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.
Key Quotes
Math at scale is what Excel brought us in some sense. Or from telephone to email to instant messaging. These are technologies that have changed how we work, how we engage, but they've also accelerated our work. And I think that that's what this tech is going to do for us. But it's going to change our work in ways that we still don't really understand. And so our approach is to try to get this technology into the hands of our employees so that they can tell us what it can do in the context of how they're operating.
In terms of a competitive advantage, all of our initiatives are really in service to making sure that the things that you end up doing are done safely and responsibly, and that the things that are beyond the pale in terms of risk, you don't do. And so how do you navigate that efficiently? How do you make sure that those innovative ideas that have real business impact and can be embraced responsibly make it through the system, whilst you stop those things that really have some challenges? And so the way we've thought about this is we have a group that we've stood up that's cross-divisional and cross-functional. So a lot of different expertise has come to the table. And what we're really doing is saying, how do we engage our governance systems efficiently?
Key Takeaways
Align AI initiatives with your existing tech stack and business objectives. This alignment ensures that AI projects are both feasible and impactful, avoiding unnecessary complexity or misalignment with organizational goals.
As AI tools become more accessible, it’s crucial to invest in improving data literacy and fostering curiosity within your teams. This will empower them to effectively use AI and data tools, driving better decision-making and innovation.
Adopt an incremental approach to AI development. Start small, learn from each implementation, and gradually scale successful projects to ensure responsible and effective AI deployment.
Transcript
Adel Nehme: Steve Holden, it's great to have you on the show.
Steve Holden: Great to be here. I'm a big fan. Very excited to join you today.
Adel Nehme: I'm a big fan of you as well. So you are Senior Vice President and Head of Single-Family Analytics at Fannie Mae. You are also in charge of Fannie Mae's Generative AI program and initiatives. So maybe setting the stage before we talk about Fannie Mae specifically, what are the opportunities for Generative AI if you are in a large financial services organization?
Steve Holden: Gen AI is actually sort of an incremental technological advancement that I think has unlocked a lot of really neat and interesting business possibilities that we're exploring, things that we're all familiar with, like translation and summarization and content generation.
But the things I'm most excited about, I put into two buckets. The first is just a much more powerful search capability. And the simple example I use to talk about this internally is when you go to a website and you're looking at the frequently asked questions, and you've ever had that experience of looking at the 10 FAQs and thinking to yourself, oh, those don't really apply to me, I actually have a very different question. What Gen AI does is it enables the context of your situation to be understood and then to be sort of melded with the content that's out there, on the internet or wherever you're searching.
The second thing I'll point out is this power to interrogate data. It's really going to upend, I think, the way that our business teams interact with information. And so a way I talk about this internally a lot is, we've made available a lot of reporting information, information about our business that gets updated automatically on a monthly or weekly or, in fact, daily basis, and our teams can self-serve.
They can go grab that information themselves. They never have to call up the analytics teams. But as soon as they have a question that hasn't really been contemplated before, that becomes a very bespoke and involved analysis. And I think what Gen AI is going to do is enable business teams to actually start to interact with the data directly and ask questions that we haven't really asked before, without necessarily having to engage a technology team or an analytics team to prepare the work.
And what that means is that business teams are going to actually have to become much more data literate. They're going to have to understand how data is a reflection of the business that they're trying to understand, and its strengths and some of its shortcomings. But I think it's going to be a new world and I'm excited for it.
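To make the "interrogating data" idea concrete, here is a minimal, hypothetical sketch of the flow Steve describes: a business user asks a question in plain English, a model translates it into a structured query, and the answer comes straight from the data. The loan records, field names, and the stub translator below are all illustrative; a real system would call an actual LLM where `llm_to_query` stands in.

```python
# Hypothetical sketch of natural-language data interrogation.
# A real system would call an LLM to turn the question into a query;
# here a fixed stub mapping stands in so the flow is runnable.

LOANS = [
    {"state": "VA", "status": "current",    "upb": 310_000},
    {"state": "VA", "status": "delinquent", "upb": 275_000},
    {"state": "TX", "status": "current",    "upb": 190_000},
]

def llm_to_query(question: str):
    """Stand-in for an LLM call that maps a business question to a
    row filter. The two entries below are illustrative only."""
    stub = {
        "how many delinquent loans?": lambda r: r["status"] == "delinquent",
        "total balance in virginia?": lambda r: r["state"] == "VA",
    }
    return stub[question]

def ask(question: str) -> dict:
    predicate = llm_to_query(question.lower())
    rows = [r for r in LOANS if predicate(r)]
    # Return simple aggregates the business user can read directly.
    return {"count": len(rows), "total_upb": sum(r["upb"] for r in rows)}

print(ask("How many delinquent loans?"))
print(ask("Total balance in Virginia?"))
```

The point of the sketch is the division of labor: the model only translates the question, while the numbers always come from the governed dataset, which is one way to keep self-serve answers auditable.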
Adel Nehme: Yeah, I'm excited as well. On the power to interrogate data, it's truly magical. As someone who's not a coder, for example, but has strong data literacy skills, when I work with a tool like ChatGPT or something along those lines with data, it really brings down the barrier to entry to working with data, right?
And I couldn't agree more, it puts a lot of pressure on data literacy, and we're going to talk about that. But I think it's safe to assume, you know, given what you just outlined, that there are a lot of nuances to using Generative AI, whether you're building use cases within your organization or you're rolling out a tool to do work, right?
And especially in regulated spaces like financial services, there are quite a few unique challenges to using Generative AI. So maybe walk us through those challenges and how you see solutions to them.
Steve Holden: Look, this is a really important question. We deal a lot with models at Fannie Mae, as you might imagine, and these models are fundamentally different from what we're used to, in very specific ways. So let me outline a couple of things to think about. First of all, we're used to models that have measures of accuracy, think confidence intervals.
We're used to models that are explainable. We're used to models that have, I'll say, handfuls of parameters, not billions or trillions of parameters. And we're used to models that provide consistent output, meaning that if you provide the same inputs, or close to the same inputs, you're going to get very close to the same outputs.
And on every one of those dimensions, Gen AI presents challenges: lots and lots of parameters, as I mentioned. You know, it speaks with the same level of confidence no matter whether its results are highly accurate or not so accurate. This is the concept of hallucination that's gotten a lot of press.
On explainability, you know, there's a really great YouTube video by Andrej Karpathy, and he talks about the fact that the engineers themselves that have built these models cannot explain exactly where these outputs come from. So, for all of these reasons, there are challenges in how we introduce these models into the enterprise.
By the way, I think all of these are surmountable. They're going to get addressed. I have a lot of confidence, based on just seeing how things are evolving, that we'll resolve these issues, but we're in the early days here. Some things that we think about a lot: making sure that we've got a so-called human in the loop.
So there's accountability that always traces back to a person. The computer is never responsible; it's the person, not the algorithms. We do a lot of testing to make sure that, specific to the use that this tool is being deployed for, it's going to do a good job in that specific, narrow use.
Obviously prompt engineering plays an important role. There's a lot of discussion right now about using LLMs in these sorts of challenging situations to challenge the output of other LLMs. I think that offers a lot of potential, and so we've started exploring that.
And also chain of reasoning: making sure that, you know, when you get a result, you've got some data upon which those results are based that you can go and reference, some sort of direct sources of information. And so understanding where that reasoning is coming from is important.
So those are all mitigants that one might consider.
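One of the mitigants above, checking that a result references direct sources of information, can be partially automated. Below is a hedged sketch, not any particular product: given the passages an answer claims to cite and the documents that were actually retrieved, it flags any cited passage that cannot be found in the sources, a crude guard against ungrounded (hallucinated) citations. The function name, documents, and citations are all made up for illustration.

```python
# Illustrative grounding check: flag cited passages that do not
# appear verbatim in any retrieved source document.

def check_grounding(answer_citations, source_docs):
    """Return the citations that cannot be found in any source
    document; a non-empty result suggests possible hallucination."""
    ungrounded = []
    for cite in answer_citations:
        if not any(cite in doc for doc in source_docs):
            ungrounded.append(cite)
    return ungrounded

# Hypothetical retrieved policy snippets.
docs = [
    "Servicers must evaluate borrowers for loss mitigation options.",
    "A loan is delinquent when a payment is 30 days past due.",
]

print(check_grounding(["30 days past due"], docs))  # grounded citation
print(check_grounding(["90 days past due"], docs))  # flagged citation
```

A verbatim substring match is obviously the bluntest possible version of this check; the same harness could call a second LLM as the judge, which is the "LLMs challenging other LLMs" idea Steve mentions.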
Adel Nehme: Okay, great. And there are a lot of challenges here to unpack. And, you know, you've been heading up the AI program. A lot of organizations today are thinking about how they can accelerate their Generative AI capabilities. They have a Generative AI lead, they have a Generative AI program.
And I think you have a very unique point of view. You know, we've talked a lot behind the scenes as well about the weekly blogs that you write internally at Fannie Mae, spreading the message, evangelizing Generative AI within Fannie Mae. We're definitely going to discuss those.
But to first anchor our discussion, what do you think are the key components of building a successful generative AI program within an enterprise like Fannie Mae today?
Steve Holden: I was asked to take this role on back in November of last year. So it actually hasn't been quite a year yet. And I spent a couple of weeks really thinking carefully about the approach I wanted to take. And I largely concluded that we needed a set of guiding principles. And so, I established three that I'll take you through, but what I'll say is, throughout this whole program that we've stood up, those three guiding principles will keep showing up again and again and they've proven to be really important in our approach.
And so they are as follows. The first is balance. And by balance, what I mean is, you know, I think there's a risk here of going too fast and of going too slow. Especially for a highly regulated financial institution such as Fannie Mae, we've got to get that balance right. We want to make sure we're making progress, but we want to make sure that we're doing so in a responsible way.
I think there are examples out there in the press that you can read about, of companies that went really fast out of the gate and ended up finding out that there are some real, legitimate challenges with this tech. And so striking the right balance is important. And again, this is on both sides of the equation: it's not going too fast, but it's not going too slow.
The second is around transparency. And you mentioned that I write a blog every week. That's one example. We also have a biweekly knowledge share where we have our engineers come in and share their work, so that as people are figuring things out, that information is finding its way across the organization.
We're sharing any decisions that we're making in very public ways. And then we set up advisory groups and councils so that knowledge flows through divisions in a very organized fashion. So transparency is important for us. And then finally, I'll say, and this is probably the most important thing, is humility.
This is such a dynamic area. Literally every week something's changing. You have to understand that what you thought was true yesterday may not be true tomorrow. And so we have very much a test-and-learn approach. We're trying to figure things out. I was at the AWS conference back in November, and there was a talk given by Jonathan Allen, one of their executives, who talked about one-way versus two-way doors.
And I found that a really compelling way to think about this, which is: make sure at these early stages that you've got the ability to back out of decisions. Don't make choices that are irreversible too early, because things are going to change and you're going to need that nimbleness, that flexibility, as you figure things out, as you move along.
And so humility, I think, is a way to remind everybody to stay in that learning mode, to keep testing and figuring things out, and not to declare really long-term, final decisions about how things are going to turn out, because you will inevitably be wrong.
Adel Nehme: That's really great. I want to unpack a lot of these, right? So you mentioned balance, transparency, and humility. And I love humility, and I'm going to discuss that in a bit more depth. But first, maybe let's chat about balance, because I think moving too fast or too slow is indeed a major consideration for many organizations today.
And it really depends on where you sit in the risk spectrum, the type of industry that you're in. So maybe Steve, outline for us kind of the key considerations or factors that determine speed to market, right, with these use cases. Like maybe what are the different considerations that you have when thinking about the speed of execution for these use cases?
Steve Holden: So what I would say is: this tech is new, and we manage a lot of different types of risk at Fannie Mae. And so probably the thing that really determines the pace we can move at is getting the governance right. There are a lot of individuals within the company responsible for managing different aspects of our risk.
And everybody's trying to figure out what generative AI means to our risk surface, how our risk surface is being impacted and changed. And so, you know, we're trying to engage our risk teams in an efficient way. And one of the ways we've accomplished that is by really establishing a prioritization for what goes through governance.
And the idea here is that if every idea went through governance all at the same time, all of our governance teams would just grind to a halt, because we couldn't possibly move, especially at this very early stage where there's just a lot of learning going on. And so we've been, I would say, fairly ruthlessly prioritizing the places where we want to engage our governance teams.
And we have four criteria that we think about here. The first thing we start with is: is this a low- versus high-risk use case? Because we really want to be in the realm of low risk to get started. I believe that a lot of what we're going to do out of the gate here is very much internally focused and likely productivity focused.
Although there are some things that go beyond that. The second thing is uniqueness. What we don't want is the same ideas being pursued in different parts of the company without awareness of one another. And so what we're really asking is: is this idea that we're going to engage our governance teams around something that we're already trying somewhere else? That way we can be really efficient on that.
And the third thing is what I call tech-stack aligned, which is: if you're going to investigate some new technology, let's make sure that it's nicely aligned with what we're already doing. Or, if it's going outside of our existing tech footprint or tech stack, let's be very purposeful that the juice is worth the squeeze, that the opportunity there is really pretty high, because we're going to have to engage a lot more of our technology partners if we're going to pursue those ideas.
So: making sure that, you know, it's low risk, it's a unique idea, and it's tech-stack aligned. And then finally, we ask that the teams are really clear on the business use cases, so that we understand what the business outcome is that they're expecting from pursuing these concepts.
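The four criteria above amount to a triage checklist. As a hedged illustration, not an actual Fannie Mae tool, they could be sketched as a simple filter over proposed use cases; the field names and example ideas below are paraphrased from the conversation:

```python
# Illustrative triage of use-case ideas against the four criteria
# Steve describes: low risk, unique, tech-stack aligned, and a
# clearly stated business case.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    low_risk: bool             # low- vs. high-risk use case
    unique: bool               # not already being tried elsewhere
    tech_stack_aligned: bool   # fits the existing tech footprint
    clear_business_case: bool  # expected business outcome is stated

def ready_for_governance(uc: UseCase) -> bool:
    """An idea engages the governance teams only if it clears all four."""
    return all([uc.low_risk, uc.unique,
                uc.tech_stack_aligned, uc.clear_business_case])

ideas = [
    UseCase("internal policy Q&A assistant", True, True, True, True),
    UseCase("customer-facing chatbot", False, True, True, True),
]
print([uc.name for uc in ideas if ready_for_governance(uc)])
```

In practice the criteria would carry more nuance than booleans, but the shape is the same: a cheap screen up front so the expensive governance review is only spent on ideas worth reviewing.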
Adel Nehme: Okay, that's awesome. And a couple of things we can follow on this trail of the conversation. You mentioned that when building out use cases, it's really important that a lot of the choices you make, and this comes back a lot to what you're saying, right?
Like the use cases being low risk and tech-stack aligned, a lot of the choices you make should also be reversible, because the space is dynamic and it's changing so quickly. Right. How do you make sure that you're making choices that are reversible?
Like what's an anti-pattern that you need to avoid here?
Steve Holden: Well, I think one of the places where we're going to see this play out is in which models we end up using. You know, we're looking at this concept of a model garden. We certainly aren't all in on one single model. We're testing lots of different models with a variety of capabilities.
And I think that ultimately our architecture needs to be able to swap in and swap out those models depending on how the tech is evolving and what the needs are. You know, there's a lot of discussion out there now about small language models versus large language models. It's super interesting.
There are times when we're going to want to go pretty deep into the tech stack, with things like doing RAG implementations or fine-tuning. In other places, maybe there's some capability that's just embedded in the software already that we've checked out and that seems to work pretty well. So, you know, I think it's just this idea of not being too committed to any one approach,
and just having an awareness of those decisions that you're making along the way.
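The swap-in, swap-out architecture behind the model garden can be sketched as a thin registry that keeps every model behind one interface, so callers never bind to a specific vendor. This is a minimal illustration under stated assumptions: the registry shape, names, and the two lambda backends are stand-ins, and a real implementation would wrap actual LLM clients.

```python
# Sketch of a "model garden" registry: model backends live behind a
# common interface so one can be swapped for another without touching
# the calling code.

from typing import Callable, Dict

MODEL_GARDEN: Dict[str, Callable[[str], str]] = {}

def register(name: str, backend: Callable[[str], str]) -> None:
    """Add (or replace) a model backend under a stable name."""
    MODEL_GARDEN[name] = backend

def generate(model: str, prompt: str) -> str:
    # Fail loudly if a decommissioned model is still referenced.
    if model not in MODEL_GARDEN:
        raise KeyError(f"model {model!r} is not in the garden")
    return MODEL_GARDEN[model](prompt)

# Two illustrative stand-in backends, a "large" and a "small" model.
register("large-llm", lambda p: f"[large] {p}")
register("small-llm", lambda p: f"[small] {p}")

print(generate("small-llm", "Summarize this policy."))
```

Because callers only know the stable name, retiring a model or routing a use case from a large model to a small one is a one-line `register` call, which is what keeps the early architecture decisions reversible.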
Adel Nehme: And, you know, a big part of this, you mentioned creating use cases, especially in the early running, around internal use cases focused on productivity. I assume that's really linked to the idea of making sure that it's a low-risk use case. But maybe walk us through in a bit more depth your thought process behind use case prioritization.
How do you determine which use case you want to pursue? I'd love to see how you think about use case prioritization.
Steve Holden: So I'll say a couple of things. First of all, on the engineering side, we've created what I call centralized governance with federated research. The idea is that, because engaging governance teams is going to be a big resource draw, we've created centralized areas for our engineering teams to operate.
So the engineering teams themselves could be in lots of different parts of the company, but there's one place where they can come together and do the work, and all that work is inventoried so that we're very clear on what's going on. There are, you know, user agreements and manager approvals and things like that, but we've got a place for people to engage.
We've separately gone out to every division in the company, and we've engaged with them around this technology, what it is and what it isn't, and really laid out in fairly straightforward terms the business capabilities that we believe generative AI is unlocking. And then what we've said is: you are the domain experts, you understand your area really well.
Now that we've explained this technology, what are the things this tech could help you accomplish as you think about your business objectives? And so we sat down in a follow-up session and had them do that ideation and really start to list out these opportunities. It's not that we're going to go pursue all of them, but we've got them thinking differently in terms of the tech and what it can do. And we're organizing all of that information, and we're in the process right now of reflecting it back to management, to say: here's what the teams are thinking about. Some of these things, by the way, are big-ticket items, things that have a more enterprise-type feel to them.
Other things are much smaller, but in their totality, they could have a pretty big impact. And so that's the process we're in the middle of right now. But what I'll tell you is that these sessions have been extremely engaging. There's been a lot of excitement across the whole enterprise.
And we're really excited to see what comes of it.
Adel Nehme: Yeah, I was about to follow up actually on the excitement element of it. What were your expectations coming into these meetings? Because I assume that in any organization there would be some folks that are excited and some folks that are not excited about AI, right? So I'd love to see how you managed these conversations: what were your expectations going in, and how did they end up panning out in terms of the level of excitement around these use cases?
Steve Holden: You're absolutely right. We've got a group of pioneers who are out there really pushing on the tech. We've got a lot of people that are really interested in the tech but don't quite know how to engage with it. And so we've tried to create mechanisms to enable people to just kind of lean in and see what's going on.
One of the things that we did really early on, actually in February, was I announced that we were going to set up an all-company internal conference. And I announced the date, which was in the middle of May. And by doing that, I signaled to our engineering teams: look, if you've got some concepts that you're thinking about or working on, here's a date that you can aim at, and see if you can bring this to fruition, and we can start to share it across the company. In the spirit of transparency, as we talked about earlier, we can help our employees see what it is that we're really thinking about here.
And so we did that. We had about 50 concepts, I think, that got registered on our sandboxes that people were actively working on. About 17 of them, by May 15th, were ready for primetime, ready for demoing. And so we ran that conference. We had about 850 people show up in person at our Reston office in Virginia, and we had about 1,300 people join virtually.
We had some other sessions as well. We brought in some technology companies, some vendors who have LLM models that they could put on display in a developer lounge, so people could go look at, touch, and feel the tech and get a sense of it. And it was, I think by all measures, a huge success.
There was a lot of energy and enthusiasm, and we brought a lot of transparency to what we were up to. There wasn't this thing going on behind the curtain; people could come and see what was going on. The other thing I'll just mention on this front is, you're right: on the one hand there is excitement, but on the other hand, there is concern.
And my view on this is, it's just like all the other disruptive technologies that have come through over the last many decades. The one thing that I'm pretty sure about is it's going to accelerate our work. Things are going to move a lot faster. The examples I often give are from pen and paper, to hand calculations, to Excel spreadsheets, right?
Math at scale, you know, is what Excel brought us in some sense. Or from telephone to email to instant messaging. These are technologies that have changed how we work, how we engage, but they've also accelerated our work. And I think that's what this tech is going to do for us. But it's going to change our work in ways that we still don't really understand.
And so our approach is to try to get this technology into the hands of our employees so that they can tell us what it can do in the context of how they're operating. And, you know, I always tell people I'm an optimist personally. I believe that the parts of your day that are probably more mundane and less interesting are the parts that Gen AI can come in and actually make a lot easier. That's my hope. That's my expectation. And some of the engineering work that we're going to be doing, internally focused, is going to be looking at things like that. You know, we talked about knowledge management.
I think that's a huge opportunity for us. You know, if you spend a lot of your time trying to understand a certain policy that you're trying to deploy in the context of where you're working, that information is going to be easy to get, easy to understand, easy to make use of. Those are the sorts of things I see in the days ahead.
Adel Nehme: Okay, I couldn't agree more. I'm very excited about Generative AI taking away the parts of my day that are the most mundane. And, Steve, you know, you've shared a lot: the blogs that you write, the culture of transparency, creating a conference internally.
And I think that's really wonderful. A big part of driving success with Generative AI is harnessing and building that excitement within the organization, getting people to become believers, creating a decentralized network of evangelists within your organization. Maybe walk us through your methodology for driving excitement a bit more?
How do you drive transparency? How big of a focus is it? I'd love to hear your philosophy on driving excitement as a Gen AI leader.
Steve Holden: I can promise you, in any organization, there are people throughout your company who really, really want to get engaged. And what they're looking for is just a little bit of structure. They're looking for: how do I take my energy and point it in a direction that's going to be aligned with where the enterprise is trying to go?
And so I view my job as sending those signals and helping those teams take that energy and use it for a productive good, moving in the direction that we're trying to go. Again, I see with Gen AI there's a lot of opportunity and there's a lot of risk, and the trick is going to be to strike that balance.
And so you've got to find those people, that community. And it's not a sort of organizational, direct-reporting, centralized structure; they're in lots of places. And so blogging is a way to communicate. And, you know, I very purposely don't email my blog out to a lot of people.
I put it out on an internal website, and I have a link. People who want to go read it can go read it. And I regularly have about 300 readers every week. Those are the people that I want to talk to. Those are the people that really want to engage around this tech and want to understand it.
And so they opt in, as opposed to me pushing this out and not really knowing who I'm talking to. And so now, as we start to engage the community, I know who I need to talk to, because I know that I've got a subgroup in the company who are going to help drive this tech forward.
Adel Nehme: You know, you mentioned the folks that are concerned, and certainly there's fear within any organization today about AI taking jobs, about the AI risks, et cetera. You talked about this, but I'd love it if you could expand a bit more, right? Like, how do you make sure that you're communicating a message that inspires, given the fear around AI?
Steve Holden: When we did our conference, one of the things that we did was we didn't just do concepts. We brought in specific teams in the organization where we felt this tech had a pretty important play. And so, for example, let's say that we did nothing.
Let's say that we sat back and this tech evolved out in the ecosystem, but we didn't really lean into it. From Fannie Mae's perspective, that creates its own risk. And the risk that it creates is that fraud or cyber attacks are going to get better with this tech. Others outside of our company are going to be using this technology also. And so expressing the importance of all of us leaning in and trying to understand it, because it's literally affecting all of us, is a point that's really important to communicate.
And so the way we did that was we invited business leaders from across the company. For example, our chief economist came in and gave a talk about the disruptive nature of the technology on the economy broadly. At Fannie Mae, we sit on 17 million single-family mortgages, right?
And these are everyday people across the country who are employed in a variety of jobs that are going to be impacted in one way or another by this technology. We brought in the cyber teams to talk about cyber risk. We brought in the fraud teams to talk about fraud risk.
We brought in our HR teams to talk about persona-based curricula in an era of advancing AI tech. And so the idea here is: how do we create the mechanisms for our teams to lean in and really think about their own skill sets, as AI potentially changes the way they engage in their day-to-day?
And so really being open about what we're up to and sharing things as we know them. And then, again, putting the tech in the hands of our employees and having them lean in and tell us what they're seeing. Those are, I think, all the ways that we navigate this. We don't have all the answers.
I believe we can't ignore this technology. I think there's a lot of risk in ignoring it, as I've mentioned. And so I think we just have to embrace it in a responsible way.
Adel Nehme: Yeah. And this is a great segue to my next question, you know, embracing Generative AI in a responsible way and making sure that you're aware of the risks and able to navigate them. One thing that you wrote in one of your internal articles, which I was privileged to read when you shared it with me, is how governance in financial services is a competitive advantage when it comes to AI. So maybe walk us through your thinking in a bit more depth here. I love this notion of governance as a competitive advantage, so I'd love it if you could expand on that notion a bit more.
Steve Holden: Yeah, I mean, as I mentioned earlier, the whole nature of the risks we're managing is changing. And so, you know, our governance teams are all learning this as we're learning this. We've got a set of policies and procedures that were written in a time that predates the arrival of this technology.
We have new regulations that are coming from our regulatory environment. We have a set of AI ethical principles that we've embraced as a company, which largely look like what you'll see from any company embracing AI responsibly: things like transparency and, you know, accountability, and privacy and reliability, et cetera. So these are all pretty standard. But in terms of a competitive advantage, what I would tell you is that all of that is really in service to making sure that the things that you end up doing are done safely and responsibly, and that the things that are beyond the pale in terms of risk, you don't do.
And so how do you navigate that efficiently? How do you make sure that those innovative ideas that have real business impact and can be embraced responsibly make it through the system, whilst you stop those things that really have some challenges? And so the way we thought about this is we have a group that we stood up that's cross-divisional and cross-functional.
So a lot of different expertise has come to the table. And what we're really doing is saying, how do we engage our governance systems efficiently? And what I will tell you is the first, the first tools that go through the system, it's difficult, right? It's challenging because we're all learning.
But having that mindset of continuous improvement, doing those after action reviews and saying, how can we go through this process next time? a little bit faster whilst not compromising any of this safety that we're embracing. So that's how we're thinking about it.
we've got a system that we've stood up and it just gets better week over week over week. And I believe that, you know, we run this through a few cycles and we're going to have a very efficient way to start to bring this tech in that's going to be responsible and safe.
Adel Nehme: I love that. And I think it's a great segue as well to switch gears and talk about the skills transformation agenda, because, you know, folks within the governance program need to know key AI concepts and how AI use cases are being built. But you also mentioned at the beginning that one of the main use cases within an organization like Fannie Mae is the power to interrogate data at scale, which really puts pressure on data literacy, right?
So maybe, how big of a moment is this for skills transformation overall, whether it's the need for AI literacy, to be able to understand how these systems work, or for data literacy, to be able to understand, you know, how to work with data at scale as well?
I'd love to understand your perspective on the importance of skills transformation today, and how you think the skills agenda will evolve over the next few years.
Steve Holden: This is a super interesting question, and I'll tell you some things that are on my mind. Definitely data literacy. I touched on this earlier, but we sit in the secondary mortgage market. We actually don't directly interact with borrowers, but we have a huge stake in understanding that borrowers who take out mortgages in the U.S. market are being set up for success, that these are sustainable mortgages, that these borrowers are going to be able to be successful at home ownership.
And so we understand that through the data. Understanding the information content of that data, understanding how to navigate the data, that's becoming, you know, increasingly important in an era where, again, business teams can self-serve. But I would say that beyond that, things like curiosity are going to become really important:
asking good questions and then knowing how to get those answers. Because those answers are now going to be at your fingertips. I think those are going to be important. This prompt engineering skill that's gotten a lot of discussion, I think that's going to be a legitimate skill, both in terms of asking questions the right way, but also because these tools are expensive.
You have to think both about how you architecturally design the tools in your ecosystem, and then about how you train your teams to engage with them in a way that manages costs responsibly and gets answers that are coherent, accurate, and usable. So those are some things that I think about.
I'd also say that aspects of what we do are becoming more automatable. But there are elements of what we do that, in my view, are not replaceable. And one of those things is the human element of this, right? The trust, the rapport that you build through human interaction. I think one of the huge opportunities with this technology is that it can free us up, so that the parts of our interactions that are automatable can happen in the background.
And the parts of our roles that aren't automatable but are incredibly important can really be focused on. So, for example, you know, there's a lot of discussion in the gen AI space about how this technology can transform call centers. And think about what a call center is: you're having an engagement with another human being, and it might be in a variety of settings.
It might be someone who's in a mortgage and having trouble making a payment. It might be originating a mortgage, where, you know, the borrower is being evaluated for credit. And if you think about the ways that humans can build trust, while that information gets captured in a more automated way in the background,
I think those are opportunities that present some interesting ways that we're going to evolve in terms of how we operate.
Adel Nehme: Okay, great. And I want to unpack a few of the notions you just mentioned, doubling down on data literacy, for example, as an essential skill when working with data, right? Like you mentioned, that Fannie Mae use case of working with borrower data and mortgage data, I think, is so foundational.
Do you think technical knowledge will be important, or will it be subject matter expertise about the business data that matters here? Or is it both? For example, for me, when working with an AI coding tool, knowing Python and knowing that, okay, I'm able to edit that particular line of code is quite important, but knowing the data is important too.
How important do you see these skills becoming over the next couple of years? I'd love to understand.
Steve Holden: I think this is going to be an area of active debate, and here's why: some of the skills that have typically been thought of as technical skills and professional disciplines can now, in certain situations, be handled by the technology.
And so there's this discussion around low code and no code, right? This idea that you don't need to write code anymore because, you know, you can interact with the system. And the question I think this raises is: just because you can, should you? I'll give you two examples, and I don't have the answers, so let me say that up front.
I don't have the answers, but let me give you two examples that I've thought about. One is that we've introduced GitHub Copilot into our environment here at Fannie Mae, and we have been gradually rolling it out to our developers. We have a lot of developers here; I think 4,000 employees who would be classified as a developer of some sort: a software engineer, a data engineer, a data scientist.
And so we've made this capability available. It's a code-generation support tool. And, you know, when we were first contemplating rolling this out, one of the questions I was challenging the team with is: which job functions are we going to make this available to?
Are we making this available to professional developers as a productivity capability? Are we providing it to all employees as a way for anybody to start doing development work? And we very squarely landed in category one: we felt this was a place to enable productivity for our employees. The second place I see this showing up is in the data science area, where, you know, you've got a lot of pretty advanced data mining capabilities.
You can really open these up to almost anybody, because the barriers to entry have come down to almost zero. You can start asking questions like, hey, go run me a random forest model, and you might not actually know what you're asking. And again, I think this presents a lot of risk, because you have to understand how these models work in order to understand some of the risks you might be introducing into your environment by using them.
And so again, you know, I think these professional designations, data scientist, statistician, econometrician, aren't going away. They're going to be incredibly important. But this is a frontier that's going to have to be navigated, because these barriers to entry are coming down.
And so we're going to have to be really purposeful about how we handle that.
Adel Nehme: Yeah, and I love that example of GitHub Copilot within the organization. I think providing a tool like GitHub Copilot to folks who may not necessarily have the skills to challenge the output could be less than ideal; it could in fact erode trust in data down the line, right? Like, if people don't trust data because of hallucinations.
So, you've been in this role for a while now, being the generative AI lead at Fannie Mae, and in a lot of ways you are building the plane as you fly it, right? So what have you learned about leading a generative AI program, or about leading the generative AI program at Fannie Mae, that you wish you knew before you started?
Steve Holden: I think I knew this intuitively, but it really played out: it is really easy to demo something that captures people's imagination. You can get people really excited about something with this technology, and you can set it up very quickly. That's actually not that hard. What's really hard is to scale it in a responsible way across an organization.
That's really hard. And I think reminding people of that over and over again is important, so that there's an awareness of what it really takes to deploy this technology at scale at a company like Fannie Mae, in a way that's going to harness that productive opportunity but do so responsibly, without creating unnecessary risk.
Adel Nehme: A couple of final notes, what would be advice you would give other leaders who are in a similar boat or position?
Steve Holden: I've always been a big fan of taking a real test-and-learn, incremental approach, and I think that's paid off for us thus far. So I would say that's worked really well for us. And then also, sometimes there's a tendency to try to control every aspect of a program like this.
And I think that's folly. If you want to really create an innovative environment where great ideas are going to have their moment, I think you have to, again, put the tech in the hands of employees. You have to set up some structure and some basic programmatic capabilities, but then really get out of the way and let the people who know how to innovate go innovate and show you what they can do.
And if you can do that, I think you'll be off to the races and you'll do some really interesting work.
Adel Nehme: And it's not only folly, it's also really good for your mental health. And maybe one final note as we wrap up, Steve what are trends that you're excited about in this space?
Steve Holden: Look, I go back to some of the modeling comments I made earlier. I do think explainability is going to improve, and I'm looking forward to that. I do think some of the issues around confidence in the output, and being able to measure that, are going to get better.
I'm looking forward to that. And honestly, starting to put generative AI into our workflows is on the roadmap ahead, and I think that's where we're going to really start to see the value show up. You know, there's a lot of talk out there about agents and agentic frameworks, and we've started exploring that and are seeing some early, interesting opportunities there.
So as this stuff starts to come to fruition, I think it's going to become real for people. And that's what I'm excited about.
Adel Nehme: Okay, that's awesome. Steve Holden, thank you so much for coming on DataFrame.
Steve Holden: Thanks for having me.