Self-Service Generative AI Product Development at Credit Karma with Madelaine Daianu, Head of Data & AI at Credit Karma
Dr. Madelaine Daianu is the Head of Data & AI at Credit Karma, Inc. Before joining the company in June 2023, she served as Head of Data and Pricing at Belong Home, Inc. Earlier in her career, Daianu held numerous senior roles in data science and machine learning at The RealReal, Facebook, and Intuit. Daianu earned a Bachelor of Applied Science in Bioengineering and Mathematics from the University of Illinois at Chicago and a Ph.D. in Bioengineering and Biomedical Engineering from the University of California, Los Angeles.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
The most important form of data is the user data. That's essentially a user's behavior in the app or in a marketing channel or any other channel that we are engaging with a user
We have to constantly and critically rethink our stance, our infrastructure, our nimbleness. For instance, one thing that we are doing right now, especially with systems that we built internally that leverage GenOS, we are modularizing those as much as possible so that it's easier for certain components to be swapped in and swapped out.
Key Takeaways
To successfully implement AI, start by treating data as a product and focus on building a strong data foundation, which includes clean, high-quality data and robust data lineage tracking.
Utilize a modular approach in AI system design to allow for easy updates and integration of new technologies, ensuring that your AI infrastructure remains adaptable and scalable.
Integrating generative AI into fintech applications requires a balance between deterministic and non-deterministic approaches to ensure both personalization and compliance, especially in sensitive areas like finance.
Transcript
Richie: Hi, Maddie. Welcome to the show.
Maddie: Hi. Thank you for having me.
Richie: Cool. So, I know you're building a lot with generative AI. Can you tell me, what's the AI application you are most proud of?
Maddie: So, a bit of context: I lead the data and AI organizations at Credit Karma, and yes, we have been very publicly invested in gen AI for a few years now. There's a lot of investment at various levels — platform investments that we'll likely touch on, as well as the applications that arise from those platforms.
So, Richie, before we go into the details, maybe I'll give you two buckets of gen AI applications that we are proud of. One is the embedded experiences that essentially use LLMs or LLM application capabilities to plug in with, say, Credit Karma offerings. So we want to be able, for instance, to contextualize what we give and show members.
You might know this, that really the bread and butter of Credit Karma is being able to guide members in their financial journey with a wide range of offerings — recommendations, ultimately, is what they are. So being able to plug gen AI into those recommendations and explain why we show members what we show them has been tremendously impactful.
The other bucket of work is chat. A lot of people are trying to do chat; it's a bit more complex to get product-market fit for chat, but it is an avenue that we are continuously exploring. The success that we've had so far is especially in the first bucket.
When you see an offer — a credit card, a personal loan, an auto loan — as you navigate the Credit Karma app, we have a little button at the top that essentially says "See Why." Upon clicking on that, you can see an explanation or the context around why we are showing you this: based on your interests, based on where you are in your financial journey, based on what the offering is and how it can benefit you as a member. That has been very helpful not only to educate our members, but also to help them make better decisions in their financial journey.
Richie: That's really interesting, because I think a lot of the time you think, okay, well, it's using artificial intelligence, it's just some kind of black box, we don't understand why it's happening. But you're saying actually the most important feature is this explanation of why you've got this personalized result.
Maddie: Absolutely, especially in the fintech space, this is becoming one of the areas that is catching on the most, mainly because doing your finances is so difficult, and it's so personal and so specific to you. Richie's finances will be very different from Maddie's finances. So being able to contextualize and hyper-personalize — holding your hand along the way using both AI and some of the amazing LLM-based capabilities from the gen AI landscape — has been one of the best ways that we've been able to essentially advance the business at Credit Karma.
Richie: I mean, there are so many different applications for AI, and I don't know whether that's a good place to start or not. So if your company's like, okay, we need to get in on the AI bandwagon, do you start with personalization, or is that a more advanced use case?
Maddie: I will say there's two main categories that I can talk to you about in the landscape of AI. Credit Karma has been around for many years now, and one of the biggest differentiators for Credit Karma is in its data and being able to treat data as much as possible as a product — which we still have a long way to go on.
Nonetheless, it's very, very important for us to understand our members well, meet them where they are, understand their goals, and be able to harvest their information in a way that we can serve them really well. And we've been collecting data for about 10-plus years now, which is really critical for Credit Karma to then apply that data to fuel AI applications.
Without it, there's only so much you can do. So a big bucket of work that my team owns is around recommendations. This is essentially a system of recommendations that you can see on what we call the front door of Credit Karma, where you can see offers across credit cards, personal loans, amongst other use cases, as well as insights that can guide you and educate you along the way.
And then we have marketplaces that go even deeper. If you really want to do a deep dive on a particular offer, you go into the marketplace, and that's where we also use personalization to serve something that the member really wants to see at that particular point in time. So that's something that has been around for a long time now — recommendations — and we continue investing in and democratizing that, especially now that we are part of Intuit.
But then the new frontier has been, of course, gen AI. So now that we have recommendations, how can we contextualize them is the next frontier of work, and it has also been the biggest product-market fit in the application.
Richie: Okay, that's fascinating. So a lot of the things you described, things like recommendations, feel like a standard retail experience. Any time you go shopping, you're gonna have these sorts of recommendations, like, oh, customers who looked at this product also bought these other things.
So it sounds like this is basically retail for finance, but you're saying, again, it's this level of personalization and, I guess, the explainability that's kind of the new frontier there. Okay.
Maddie: Exactly, and like you said, Richie, I think with gen AI we are able to unpack a little bit of that black box that used to be very hard to unpack, in terms of why we are showing members what we are showing them. Now we can not only extract, for instance, what are called the most important features that lead to a recommendation being shown based on the member's journey — we can take that and summarize it in legible text, to be able to explain to you, and bring you along, in a way that is understandable to any human being, why this is being shown.
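As a rough illustration of that step — turning a model's most important features into a prompt for a plain-language explanation — here is a minimal sketch. Every name in it (build_see_why_prompt, llm_complete, the feature names) is hypothetical, not Credit Karma's actual code:

```python
# Hypothetical sketch of the "top features -> legible explanation" flow.
# Nothing here is Credit Karma's actual code; names are illustrative.

def build_see_why_prompt(member_context: dict, top_features: list[tuple[str, float]]) -> str:
    """Turn a recommendation's most important features into an LLM prompt."""
    feature_lines = "\n".join(
        f"- {name} (importance: {weight:.2f})" for name, weight in top_features
    )
    return (
        "Explain in plain, friendly language why this offer is being shown.\n"
        f"Offer: {member_context['offer_name']}\n"
        f"Member goal: {member_context['stated_goal']}\n"
        "Most influential signals:\n"
        f"{feature_lines}\n"
        "Keep it to two sentences and do not mention internal feature names."
    )

# Example usage with made-up values:
prompt = build_see_why_prompt(
    {"offer_name": "Travel rewards card", "stated_goal": "improve credit score"},
    [("on_time_payment_streak", 0.41), ("credit_utilization_trend", 0.27)],
)
# explanation = llm_complete(prompt)  # llm_complete stands in for any LLM client
```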
Richie: Absolutely. So it seems like customer experience or user experience is kind of at the forefront of your goals there? Actually, yeah, do you wanna talk about what your goals around making use of AI are, then? Is it just customer experience?
Maddie: Customer experience is first and foremost — we are a very customer-focused (we call them members, so member-focused) company, and making sure that we satisfy our members' goals is, first and foremost, our top priority. Along with this, there's quite a bit of investment in more of this evolutionary phase of AI and gen AI, in the context of how we can really identify what else works beyond contextualization.
So there's research focus in the chat space, amongst other areas. But I will also say that one of the big investments that especially my team is leading right now is around data. Data is not a solved problem. What I mean by that is, especially as we grow as a company, there's a lot of fragmentation in our data, because not only do we acquire more data, but how we keep that data critical, clean, and high quality determines whether our AI systems, gen AI or not, will be successful.
And especially now that we are part of Intuit, we have a huge opportunity to think about the member — in fact, what we now call the consumer — across both the Credit Karma and TurboTax landscape. Combining data about your finances with your tax information can be a huge unlock in the types of personalization capabilities that we can show you, either when you do your taxes or whenever you want to advance your credit score or any goal that you may have throughout the year.
So that is one of the biggest areas that I'm most passionate about and focused on right now, and it can really unlock, I think, the next evolution of AI.
Richie: Okay, so that's interesting — having this larger data set on individual users is gonna give a more powerful experience. So can you just talk us through — I mean, you mentioned before that having data is kind of the secret sauce of this. What user data or other data do you need to make AI work?
Maddie: So the most important form of data is the user data, and that's essentially a user's behavior in the app, or in a marketing channel, or any other channel that we are engaging with a user with, whether it is in Credit Karma or TurboTax. But there's also a lot of information about what we call content.
Like, what is the recommendation or the offer, per se, that we are showing? Is it a credit card, a personal loan, a TurboTax offering? That information also has to be featurized — in other words, it has to be enriched — because at the end of the day, you need to have the user and whatever the offering is really well embedded in what we call this user journey. And then there's a lot of other data that is pertinent to our systems that is also important, to be able to track the cleanliness and the lineage of our data stack end to end. But at the end of the day, these two pieces are critical: the data about the user and the data about the offering.
Being able to pair those effectively is key, and being able to ingest them effectively into machine learning platform systems and gen AI systems is also very critical, because without that being clean, we will not be able to serve a summary about a context that is trustworthy. We might be getting data that's not your data, or not the representation of Richie's information. So those aspects are actually harder than one may think.
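A minimal sketch of what pairing a user record with a featurized offer might look like; all the field names here are invented for illustration:

```python
# Illustrative sketch of pairing user behavior data with featurized offer
# content, as described above. All field names are invented.

from dataclasses import dataclass

@dataclass
class UserFeatures:
    user_id: str
    credit_score_band: str
    recent_products_viewed: list[str]

@dataclass
class OfferFeatures:
    offer_id: str
    product_type: str          # e.g. "credit_card", "personal_loan", "auto_loan"
    headline_benefit: str

def pair_for_model(user: UserFeatures, offer: OfferFeatures) -> dict:
    """Join the two sides into one record for a recommender to train/serve on."""
    return {
        "user_id": user.user_id,
        "offer_id": offer.offer_id,
        "credit_score_band": user.credit_score_band,
        # A simple cross-feature: has this user shown interest in this product type?
        "viewed_similar_product": offer.product_type in user.recent_products_viewed,
        "headline_benefit": offer.headline_benefit,
    }
```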
Richie: Okay, so this is kind of fascinating, because I've read a few things recently about how, okay, you've got all these giant neural networks, you don't need structured data, just throw any old unstructured data in there and it's gonna give you an answer. But a lot of the stuff you were talking about there sounds like proper feature engineering — the real bread and butter of traditional machine learning, making sure you've got high-quality structured data to feed into your models.
Do you have a take on this? Like, what's your position on these sorts of more traditional machine learning techniques?
Maddie: You are right that when it comes to, say, training an LLM, you feed it the internet. In fact, we ran out of the internet — we ran out of the content that has been created in our time that we can feed a large language model, which is what it's trained upon. So that can be unstructured, but that is essentially text.
It's not the structured information that this has to be paired with to be able to understand the member's financial landscape — where they were a year ago, where they are now, where they're intending to go. A well-trained LLM has to be paired with effective behavioral, interest-based, preference-based context about a user to be able to ultimately identify product-market fit.
Otherwise — and this is one of the biggest pitfalls I'm seeing in my experience of why it's often hard to make gen AI work for some of these use cases in the fintech space and beyond — we have to meet the member where they're at, and we can only do that if we know exactly what they're looking for, where they've been, and the context that we can offer them.
Richie: Okay. So it sounds like a lot of businesses are making mistakes in their journey of AI strategy, because they're sort of diving in, trying to build stuff. So what's your blueprint? How do you approach it?
Maddie: I'm not sure if many businesses are making mistakes — I hope that they are not — but it's certainly not easy to get right. And I do believe that there's a strategy that we have to set forth in this era, where we as technologists are playing a critical role and have a huge responsibility. You have to think about foundational pieces: what are the foundational data and infrastructure pieces that have to be sound, that have to be scalable, that have to be governed properly, that are foundational to anything else you build on top of them?
An AI platform or a set of gen AI applications will only thrive when your foundation is sound. So I think many companies have been doing this and rethinking how they do this.
We know that, for instance, some of the biggest companies do this from the get-go: they build their companies around ensuring that that data foundation, that ML platform architecture, is built well into their genesis. And then there are, of course, many other companies who migrated to the cloud in recent years — maybe 10 years ago, that was a very big wave.
Furthermore, we have to keep refining this, especially as we get more data and especially as we open new product lines. That is, I think, foundationally critical for most companies, and in my journey I've seen it be one of the least sexy pieces to talk about.
But it is so critical for being successful.
Richie: Yeah, I think data engineering has never quite been as cool as it should be, 'cause it's so important for a company to get this right. Maybe let's indulge ourselves and talk about these foundations, then. What do you need to do to get things right — tools to get the data engineering sorted, or what do you need?
Maddie: It really depends on where you are as a company in your lifecycle. Something that we've done at Credit Karma is we built a lot of legacy systems initially, and then we migrated to the cloud. We brought in managed services that can also help with scalability, so right now we have a combination of both legacy and managed services that operate in the cloud.
That's likely very common for most companies at this size and stage. Having said that, one of the most critical things that I've been doing as a leader in this space is, on a yearly basis, a gap analysis — essentially understanding, for instance, the health and the opportunities in our data stack end to end. That starts with the raw form of data that we have, all the way through that data point being derived for various use cases, either because we show it to a member in a display format in the app, or because we use it in the background as input into a model.
We really want to track the lifecycle of that data and have really good lineage around it, really good tracking around it, really good explainability and metadata tagging along it, so that we know what we have available to us and can harvest it really well. That's honestly one of the hardest things to do. It takes a lot of piping and a lot of hard work, and it does take a while until you can see and reap the fruit of what can come out of this work, which oftentimes spans multiple engineering organizations in any company. But it's critical for us to then be able to plug this in with, say, a well-summarized answer that can help you make meaningful changes in your financial journey, because we then know exactly where you've been and where you want to go, and can really meet you there.
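As a toy illustration of the lineage tracking described here, the sketch below has every derived dataset record what it was derived from, so any field can be walked back to its raw source. A real system would use a metadata catalog rather than in-memory objects:

```python
# Toy lineage tracker: each derived dataset records its parents so any field
# can be traced back to raw sources. Real systems would use a metadata catalog.

from dataclasses import dataclass, field

@dataclass
class DatasetNode:
    name: str
    owner: str
    quality_checked: bool = False
    derived_from: list["DatasetNode"] = field(default_factory=list)

    def lineage(self) -> list[str]:
        """Walk back to raw sources so a derived data point is explainable."""
        if not self.derived_from:
            return [self.name]
        return [self.name] + [n for parent in self.derived_from for n in parent.lineage()]

raw_events = DatasetNode("raw_app_events", owner="data-eng", quality_checked=True)
user_features = DatasetNode(
    "user_behavior_features", owner="ml-platform",
    quality_checked=True, derived_from=[raw_events],
)
print(user_features.lineage())  # ['user_behavior_features', 'raw_app_events']
```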
Richie: It sounds so simple when you say it out loud — we just need to know what data we have and how it moves throughout the organization — but I can imagine that it is an absolute nightmare to figure out in most cases. Okay. So sorting out your data lineage seems incredibly important, and I guess the provenance — where it's coming from.
Now, I know you've built a lot of different AI applications. What have you done to make the process scale?
Maddie: That, again, goes back to where you are in the lifecycle and how mature the AI application is. So I think there are maybe two approaches that one would take, depending on the maturity and the lifecycle of the company. For classical ML, as I like to call it — machine learning models that fuel things like recommendation systems, identifying someone's intent, and targeting for that process — scaling can be a bit more straightforward, because we've been at this for a while. We know what the components of being successful are, what the metrics are by which we measure success, and what talent needs to be in place to execute on this work. Oftentimes we also have a really strong strategy. It ultimately goes back to what I like to call the three-legged stool of any organization: process is a big component, but then you have the people — the talent — and then you also have your strategy.
So I think it's been a bit easier to scale processes with classical ML. I think where it gets gnarly is when you are in a zero-to-one space, which is what gen AI is. Being able to scale something that you are still paving the path for is not only complicated — maybe it's not even the first thing that you should do. Scaling something that is just zero-to-one across the organization might be putting the cart before the horse. So I think you want to create the safe space for the right talent to explore, to make progress, and to show initial value in a POC format. That is critical for our members, that is critical for our business — and then you think about scaling.
Richie: So, a very different approach, then, when the processes are a bit more mature, as with traditional machine learning. You mentioned you've gotta pick good metrics, get good talent in place, and have good processes — we definitely have to get into the weeds of those and find out what the answers to those parts are. But then for generative AI, because it's zero-to-one, you start simple and see?
Maddie: Absolutely. I think where I've seen innovation stall is when you push for scalability too early, especially as a leader — it becomes very hard to then empower teams to be successful and deliver what they really can deliver, if you're pushing for something that takes away from the time spent on innovation and disables them from having the wide space to think big. So I think it's very critical timing that leaders have to think about as they think of scale.
Richie: So in that case, your product people and your engineers really need to have some playtime, and just have a chance to think about what they're doing. Alright, so, just going back to the machine learning case — you mentioned you need good metrics. So what do you track? Like, what constitutes success for a machine learning project?
Maddie: Absolutely. So for metrics, you want to think about a few different ecosystem-level flavors, if you will. A machine learning model — classical, traditional ML — is only successful when you think about the whole ecosystem it operates in. So the first piece would be: what are the core member goals that we are advancing?
Typically you would have some kind of metric that tracks whether a member is meeting or seeing what they're expecting to see with that application in mind, and that can be done explicitly or implicitly. What I mean by that is, you can very crisply collect explicit feedback — like thumbs-up/thumbs-down information from the members — or you can see if someone interacted with something: clicked on, say, an offering, or, if you target somebody, whether they engage with whatever you're targeting them with. That is implicit information that somebody is potentially interested. But that is also a little bit tricky, because you have to have guardrails — something can be clickbait, and you want to be thoughtful about how you interpret that engagement to make sure it reflects member-level goals.
So the machine learning models are, first and foremost, meeting members where they're at, but also advancing certain business-level goals, and these could be various forms of goals. Engagement goals are critical for the company in terms of driving retention for our members, and oftentimes retention correlates with revenue, so revenue goals are also important. But first and foremost, we index on retentive behavior driven by models. So those are the business- and member-level metrics that are critical.
And then there's a lot of, I would say, capability metrics that we track — for instance, is the model's predicted value meeting the actual values that it should be predicting? That's the delta between predicted versus actuals, which is more of a machine learning metric that we track to see whether our recommendations are calibrated.
So there's a wide range of these ML-level metrics that we track on a very regular basis that tell us how well we are training our models. And then, finally, there are system metrics, in terms of the cost and usability of the systems that really propel and power the ML models.
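The predicted-versus-actuals delta mentioned above can be made concrete with a standard calibration check — bucket the predictions and compare each bucket's mean prediction to the observed rate. This is a generic sketch, not Credit Karma's metric code:

```python
# Generic calibration check: bucket predicted probabilities and compare each
# bucket's mean prediction to the observed rate. Bucket count is arbitrary.

def calibration_deltas(predicted: list[float], actual: list[int], buckets: int = 10):
    """Return (mean_predicted, observed_rate, delta) per probability bucket."""
    rows = []
    for b in range(buckets):
        lo, hi = b / buckets, (b + 1) / buckets
        idx = [i for i, p in enumerate(predicted) if lo <= p < hi]
        if not idx:
            continue
        mean_pred = sum(predicted[i] for i in idx) / len(idx)
        observed = sum(actual[i] for i in idx) / len(idx)
        rows.append((mean_pred, observed, mean_pred - observed))
    return rows

# A well-calibrated recommender keeps each delta near zero.
print(calibration_deltas([0.1, 0.15, 0.8, 0.85], [0, 0, 1, 1]))
```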
Richie: Now I'd like to talk about one of your bigger projects. This is GenOS, which I believe is like a generative AI operating system, but it sounds more like a sort of platform. Can you tell us a bit about it?
Maddie: So first and foremost, I am a consumer of GenOS, not a builder of it, though my team does contribute back to GenOS as much as possible. Just to orient you a little bit: we are part of Intuit — we are Intuit, essentially — and there's a central organization within Intuit that has very intentionally and proactively built GenOS, especially as the hype around gen AI started.
What GenOS is intended to do is democratize gen AI applications for all of Intuit, including all of its business units. It has about four key components that the teams use. One is called Gen Studio, which is essentially a sandbox where developers and data scientists can go in and check out and test various commercially available or open-source LLMs that they want to experiment with. Then there's a GenOS runtime environment, where you can start coupling those with data, with orchestration systems, or whatever architecture you want to productionize and run at runtime.
There is also a unit called Gen SRF, which is very critical for us, because that is what enables us to safeguard, put guardrails in place, and assess risk. As we know, there's a lot of non-determinism that comes with LLM applications, so this is essentially a set of guardrails that we put in place to ensure that anything going out is meeting all the regulatory, compliance, safety, and quality guardrails that we attest to. And then finally, there's Gen UX, which enables designers and front-end engineers, amongst others, to go in and couple LLMs in the backend with widgets, and to design and build frontend capabilities more quickly than they would otherwise.
So it's a really exciting suite of applications that really speaks to a wide range of needs. And in fact, using GenOS is how we were able to successfully launch the first use case I mentioned to you earlier, See Why — again, that contextual explanation for why we are showing you, say, a credit card at a particular point in time in Credit Karma. We use GenOS in the background, and we were able to spin that up in a matter of weeks, versus what would have taken us months to build if we had to put all this scaffolding in place business unit by business unit.
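To make the "weeks instead of months" point concrete, here is a hedged, purely illustrative sketch of the pattern — a business unit supplies only the use-case-specific prompt and checks and plugs them into shared plumbing. All class and function names are invented; this is not Intuit's actual GenOS API:

```python
# Invented sketch (not Intuit's actual GenOS API) of why a shared platform
# speeds launches: the business unit supplies only the use-case-specific
# prompt and checks, plugged into shared model/guardrail plumbing.

from typing import Callable, Optional

class GenAIPlatform:
    """Stand-in for shared platform pieces: model access plus guardrails."""

    def __init__(self, model: Callable[[str], str],
                 guardrails: list[Callable[[str], bool]]):
        self.model = model
        self.guardrails = guardrails

    def run(self, prompt: str) -> Optional[str]:
        response = self.model(prompt)
        # Shared safety/compliance checks run on every use case.
        if all(check(response) for check in self.guardrails):
            return response
        return None  # blocked by a guardrail

# The use case itself becomes thin configuration:
see_why = GenAIPlatform(
    model=lambda p: "You are seeing this card because ...",  # swap in a real LLM call
    guardrails=[lambda r: "ssn" not in r.lower()],           # e.g. a crude PII screen
)
print(see_why.run("Explain this credit card recommendation."))
```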
Richie: Okay, so it sounds like you're almost building AI applications like constructing things with Lego — is that kind of right? Like, you pick the model and all the different pieces, the user interface, and so on. Are you able to talk through how that's impacted your ability to create these applications?
Maddie: The biggest thing is velocity. Being able to stay competitive as a whole company is very important for us, especially with gen AI being such an important investment for all of us, not only in the tech industry but more broadly speaking. To enable our teams to go fast, we need to have the tools in place to do that. So prototyping is further enhanced by having GenOS, and then corresponding lead teams — such as mine within Credit Karma — contribute back to, say, Gen SRF, the unit I mentioned that is critical for guardrails and safety.
Because at the end of the day, GenOS is set up in a generic way, but applications of LLMs, to be successful, have to be business-specific. An application for Credit Karma is a little bit different from what somebody in TurboTax might want to see, and therefore the guardrails and the underlying infrastructure sometimes might differ too.
The evaluation systems will differ. So we oftentimes contribute back to GenOS with these modules that can further enhance it, but we also have to build a ton within the business unit to make sure that we check all the boxes before we launch something.
Richie: That last point about guardrails seems incredibly important in finance. I can imagine if a financial application is giving you a wrong answer, that's gonna have some pretty serious consequences. So talk me through what kind of guardrails you need. What kind of testing do you need to do to ensure quality?
Maddie: Yes, absolutely. Not only would we lose the trust of our members, but we could also impact the brand of our company or our partners. So we take this extremely seriously. We have a multi-layered readiness framework that we employ before we launch something. And if you remember, I mentioned there are two flavors of applications in the gen AI space. One is this combination between deterministic and non-deterministic gen AI — this enables us to put some controls in place, versus letting the generative aspect of LLMs just respond at its liberty. And then there's chat, which is a bit less constrained but still has a lot of guardrails in place.
So there's an evaluation framework, and also additional pieces that we have to think through that are very use-case specific. First and foremost, we have table-stakes benchmarking: like most companies that employ LLMs, we look at industry-level benchmarking to test, in a generic way, whether the LLM is making sense. That's table stakes — everybody who is really publishing or productionizing LLMs likely does that.
But then things get interesting when we think about the use-case specificity of every single application, like Credit Karma's. So the first level is, initially through automation — and oftentimes employing other LLMs as judges, which is a concept from constitutional AI — we are checking whether a response from an LLM application, say a summarization of an explanation for why you're seeing something, is meeting the point of view of Credit Karma, whether it is using language that is aligned with our brand and our partners, and whether it is also meeting the guardrails that we have in the background. Sometimes we have just classical ML checking if we have PII in a response — we don't want that, so we make sure we strip it out. As part of this step, we also want to check for hallucinations as much as possible in this automated pass.
But oftentimes this takes us to the second level, which is human review. To this day, humans have to be coupled in with any form of automated evaluation, mainly because the generative aspect can produce a wide range of answers, and we need humans to interpret at least a subset of those answers to ensure that all these checkpoints I just mentioned are sound. There are also aspects like math, or other implications around hallucinations, that can go sideways — so we bring in customer success and member success experts who really understand what a good answer would look like.
All of those are initial steps that we go through. Then we have this system check, where we check whether the LLM that is ingesting information from the system is doing so correctly, because no LLM is going to operate alone — it needs to know its context. We typically have tools ingesting data about what widget the member is in, or ingesting de-identified data about the member so that it can provide a more personalized answer. We check: is it doing that well? Is it summarizing the information effectively? So the integration of the system is critical.
And then some of the final steps are around compliance, and that is very much applicable use case by use case, depending on what the solution is. We check, for instance, whether the benefits of the partner whose offer we are serving are correct — what that partner would be looking for. Those are actually very hard to do, because we work with hundreds of partners, and they have very different benefits and brand postures, so we need to make sure that we honor that. So we have a very in-depth compliance, safety, and quality check that is both automated and, especially, human-checked, to make sure that before we launch something, we are really meeting the standards that we set for ourselves and that we promise our partners.
And then finally, we do a ton of stress testing, which means we try to break it. We try to break the application internally to see where its weak points are, and then we come back with solutioning that can really fortify that application before we go live.
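A simplified sketch of how such a layered readiness check might be wired together — the PII pattern, judge prompt, and threshold below are illustrative assumptions, not Credit Karma's actual checks:

```python
# Illustrative layered readiness check: automated screens first (including an
# LLM-as-judge), then route to humans. Patterns, prompt, and threshold are
# assumptions for the sketch, not Credit Karma's actual checks.

import re
from typing import Callable

def pii_check(response: str) -> bool:
    """Classical check: crude screen for SSN-like patterns."""
    return re.search(r"\b\d{3}-\d{2}-\d{4}\b", response) is None

def llm_judge(response: str, ask_llm: Callable[[str], str]) -> float:
    """Have a stronger LLM score brand/tone alignment from 0.0 to 1.0."""
    verdict = ask_llm(
        "Score from 0 to 1 how well this answer matches a helpful, compliant "
        f"financial-guidance tone. Reply with just the number.\n\n{response}"
    )
    return float(verdict)

def readiness(response: str, ask_llm: Callable[[str], str],
              judge_threshold: float = 0.8) -> str:
    if not pii_check(response):
        return "blocked"            # hard guardrail, no LLM needed
    if llm_judge(response, ask_llm) >= judge_threshold:
        return "approved"
    return "needs_human_review"     # low judge confidence goes to a person
```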
Richie: It's really interesting that you have AI checking AI as the first line, and then humans only come into it after that. I'm curious what sort of models you use to check. Are these off-the-shelf LLMs, or do you have to have something custom? What do you use here?
Maddie: So we didn't use to do it this way. We used to have humans checking the responses first, but that would take months, which also wasn't enabling us to move quickly. Within those months, though, we were able to collect enough labeled data from humans to get a sense of the distribution of answers, so that we could have an LLM — like a more advanced GPT model — do the check instead of the human, and then route the human to wherever it had a low level of accuracy for determining whatever level of readiness we were tasking it with.
In other words, we started with the human, but then, to scale — going back to your point around scaling processes — it's not sustainable to have humans review the hundreds and thousands of answers that capture the distribution of potential responses any LLM application could give you. That's essentially how we ended up scaling.
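A hedged sketch of that scaling pattern — measure the LLM judge against accumulated human labels, then route only the judge's weak spots back to human review. The categories and data here are made up:

```python
# Sketch of the scaling pattern described above: use accumulated human labels
# to measure where an LLM judge agrees with humans, and keep humans in the
# loop only where the judge is unreliable. The data below is made up.

from collections import defaultdict

def judge_accuracy_by_category(labeled: list[tuple[str, str, str]]) -> dict[str, float]:
    """labeled rows are (category, human_label, judge_label)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for category, human, judge in labeled:
        totals[category] += 1
        hits[category] += int(human == judge)
    return {c: hits[c] / totals[c] for c in totals}

def route(category: str, accuracy: dict[str, float], min_acc: float = 0.9) -> str:
    """Trust the judge where it has proven reliable; otherwise use a human."""
    return "llm_judge" if accuracy.get(category, 0.0) >= min_acc else "human_review"

acc = judge_accuracy_by_category([
    ("tone", "pass", "pass"), ("tone", "pass", "pass"),
    ("math", "fail", "pass"),  # judge misses math errors, so humans keep math
])
print(route("tone", acc), route("math", acc))  # llm_judge human_review
```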
Richie: So, it sounds like things are kind of working fairly smoothly now, but what do you think has been the hardest part in creating all this?
Maddie: I would say the balancing act of integrating the ever-newer versions of LLMs — GPT, Gemini, commercially available capabilities, but also open source — within the systems that we have built internally through GenOS, and through all the systems that a product like Credit Karma has to build, is difficult to keep up with, because you do have to swap out an LLM when you make a change. And adapting it to our use case safely, with high quality and rigor, is actually not easy.
Richie: I can certainly imagine that's a challenge. I think it's something that a lot of businesses are facing. Do you have any advice for how you can keep up with ever moving technology platforms?
Maddie: I think we have to constantly and critically rethink our stance, our infrastructure, our nimbleness. For instance, one thing that we are doing right now, especially with systems that we built internally that leverage GenOS, is modularizing those as much as possible, so that it's easier for certain components to be swapped in and swapped out.
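One way to picture that modularization is an adapter interface that hides each model behind a common contract, so a swap doesn't touch the rest of the system. The vendor classes below are invented stand-ins, not real SDK wrappers:

```python
# Minimal sketch of the modularization idea: hide each LLM behind one
# interface so a model swap doesn't touch the rest of the system. The vendor
# classes are invented stand-ins, not real SDK wrappers.

from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAClient(LLMClient):
    def complete(self, prompt: str) -> str:
        return "response from vendor A"  # would wrap vendor A's SDK here

class VendorBClient(LLMClient):
    def complete(self, prompt: str) -> str:
        return "response from vendor B"  # would wrap vendor B's SDK here

def explain_offer(llm: LLMClient, context: str) -> str:
    # Downstream code depends only on the interface, so swapping models is
    # a one-line change where the client is constructed.
    return llm.complete(f"Explain this offer simply: {context}")

print(explain_offer(VendorAClient(), "travel rewards card"))
```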
But that takes a constant, keen eye — always rethinking where you stand and what you've built, because sometimes some of the work has to be thrown away. And you have to allow yourself — an organization has to allow itself — to actually say, okay, it's now past the time when we can use this.
We have to move, for instance, from a monolithic approach to a more modular approach to be able to move more quickly. So that's one piece. The other piece is really staying on top of what's working and what's applicable to you, and ingesting all this information. Having someone who is an expert at research and can really relay the information to you as a leader and as an organization is critical.
It's quite overwhelming to keep up with everything that's happening externally, and to know what's high-signal and applicable to you as a leader or as an organization. So having someone who can really do that, in the form of an architect or a chief scientist or something of the like, is important for those companies that can position themselves that way.
So I think those educational pieces and nimbleness pieces are key — all while identifying product-market fit, right? Because that is ultimately what success means and where you drive value for members. So it's quite a bit of work.
Richie: That's interesting — the idea that you have dedicated research people, because, I suppose, researching new ideas is a very different skill from implementing stuff. So keeping those researchers separate from, I guess, the more engineering roles — that's an interesting approach. Alright, so who is involved in creating these AI applications, then?
Maddie: Yes, I can share a little bit about my team's makeup, and I work with many of my colleagues across the company to make gen AI and AI happen. Right now I lead a few different flavors of what a data organization entails: data science, machine learning engineering and the platform within that, business intelligence, and data teams that are tracking and pipelining online and offline data. The experimentation platform is also within the umbrella that I own.
Within this team, the people working on gen AI and AI applications are very much the first few teams I mentioned — data scientists and machine learning engineers are at the forefront of this. But these teams have to be coupled with product engineering teams, who then take these applications and integrate them into a front-end product like Credit Karma, as well as marketing teams that take this and implement it within, say, an email campaign. We also have a lot of engineering teams that build the scaffolding around, say, GenOS, which is essentially a hosted platform, and its integration with the product.
There's yet another engineering team that has to think about the SDK or the tooling that we can put in place for developers to utilize, and build an end-to-end system that can be productionized for our members. And of course, there are a lot of safety and compliance experts, either in the product space or in the compliance and regulatory space, who are critical. There are legal counterparts who are extremely critical as well, because there are a lot of implications around the legality of all these applications. And there are also the product folks who are thinking about product-market fit and working tightly with all these people to really advance the landscape.
So it really takes a village of people.
Richie: Like almost every team within the company.
Maddie: Yes, it does. It does take a village, because it is, as mentioned, a zero-to-one space, but also all these teams are critical to be able to advance the landscape and integrate well with systems that we are building from the ground up.
Richie: No, it was good — that was like an Oscar speech where you mention everyone you've ever met. I can see how you've got the core there: your data scientists and machine learning engineers who are building this core application, then the software engineers and product people to put it in application format. Do you think there's been more of a crossover between those two roles recently — the data roles and the engineering roles — or are they still completely separate at Credit Karma?
Maddie: They are not separate. They absolutely need to be in sync, because typically data teams that generate data are feeding into a consumer, and that consumer can be a product, or another data platform, or a machine learning platform, or a data scientist, or a product leader. So it's very, very critical to make sure that there's integration and visibility into why these teams exist, who they serve, and why they're important.
In fact, that's one pitfall for organizations — when there's not enough visibility and education about the end-to-end stack. Putting something in front of our members takes a lot of teams, and being able to streamline those teams is critical.
The other piece I'll say is that sometimes these teams are organized under different leaders, under different organizations, and that can create silos. Data teams, especially as a company grows, can be fairly large, so making sure that they talk to each other, no matter how the organizational structure is set up, is important. What I've seen, and what we are tackling now at Credit Karma, is that otherwise different best practices get developed, different tracking mechanisms get developed, different applications of the data get made, different trade-offs get made — and that typically slows down velocity and innovation.
So there's a lot of fluidity in the collaboration, in terms of skill interchangeability, that's important too. I'm making sure that my data practitioners understand what ML means, and that the ML practitioners have a deep appreciation and empathy for data. And now that gen AI is important, we want to make sure that people have access to tools and to training that takes that even further, because you want to ensure our talent continues growing and understanding the landscape that they're operating within.
So there are quite a few collaborative pieces that are critical, but also: how do we make sure that we continue evolving our talent, not only by keeping them in touch with each other, but also through skill transferability within the organization and externally? How can we bring in ways to teach the talent and expand their skillset?
Richie: Okay. Yeah, so it seems like the two biggest challenges there are cross-team communication and making sure that you have the right skills for all these different people. On that latter point, are there any skills that you think are most important for your staff to know?
Maddie: I think some of the biggest and most important skills go back to what it means to be successful, and that relates to driving and landing impact, as we discussed in the beginning. So I would say, first and foremost, any form of accountability and ownership outshines honestly any other kind of skillset, because someone who has the drive and self-motivation, and takes the ownership and accountability to go outside of their comfort zone and learn and apply themselves, can go much farther than otherwise.
Other than that, of course, skillsets are applicable by function, and you need to have someone or a team that's an expert at data, at machine learning, at data science, at research. All that is important. But at the end of the day, what I strive to impress upon my team is that this self-motivation, and the constant self-reevaluation of how we can best serve our members and how we can drive impact, is, I think, the most critical.
Richie: Okay, so the technical skills are almost table stakes, and then it's the self-motivation, the ownership, that takes people to the next level. Do you have any advice? Is this something you can instill in people, or train people to do — to take more ownership?
Maddie: It's certainly a trainable and coachable skillset, so I have seen a wide array of responses to it over my career. Having said that, I think as any leader of an organization, it's important to set the tone and role model that as much as possible and being extremely intentional to express what you are trying to do.
Like: I'm trying to build the highest-performing team there is. I'm trying to ensure people maximize their impact, really love their jobs, come to work enjoying what they do, grow their skills, and drive impact for the overall ecosystem of products and members that we have.
So being able to take that stance and inspire people by role-modeling what you stand for is important, but critically important is spending time with your best talent. Being able to coach and grow them is absolutely critical: being explicit about goals, being explicit about their aspirations, thinking big along with them, helping them to think bigger, and then carving out an almost milestone-based path to getting there has been, I think, the most important toolkit for enabling people to really go farther.
Richie: Okay, nice. So you just coach people — maybe it takes, maybe it doesn't, but you just keep trying to make those best people better. Alright, nice. Cool. So, just to wrap up, what are you most excited about in the world of data and AI?
Maddie: I think we are at such a pivotal time, where technologists have such an important role to play in shaping the landscape of not only our companies but honestly the world, our country. And that comes with a huge responsibility, because there's a lot happening in the evolution of gen AI and agentic workflows, and the applications of that can also be scary. But I am more excited about this landscape than intimidated by it, as many of us are by some of the risks that come with it. I think we are at a time when we, as people in technology, can really define a future that can be transformational, more so than ever before.
So it's a special era to be part of, and I'm humbled and grateful for it.
Richie: I think there are a lot of people who are a bit intimidated by AI, but yeah, there's a lot of cool stuff happening, and hopefully the benefits will outweigh the scary bits. Wonderful. Nice. Thank you so much for your time, Maddie.
Maddie: Thank you.