Adel Nehme, the host of DataFramed, the DataCamp podcast, recently interviewed Dan Kellet, Chief Data Officer at Capital One UK.
Hello everyone. This is Adel, data science educator and evangelist at DataCamp. What I absolutely love about hosting DataFramed is that you get to glean insights from the best minds in data science on how they've been able to build high-impact, high-leverage data teams. This is especially the case when it comes to building high-impact data science teams in very specific, regulated industries like financial services and banking, and it doesn't get a lot better than the data science team at Capital One.
This is why I'm excited to talk with Dan Kellet today, Chief Data Officer at Capital One UK. Dan has been with Capital One for about 22 years now and leads all the data functions for the bank across the UK. Throughout the episode, we talk about his background, the hallmarks of a high-impact data team, the importance of skills and background diversity when building great data teams, how to deliver impact with data in financial services, and much more.
If you enjoyed this chat, make sure to subscribe and leave a review, but only if you liked it. Now, let's get started. Dan, it's great to have you on the show. Thank you. It's great to be here. I'm excited to talk to you about your background, leading data at Capital One, building effective data teams, how to scale the impact of data science, especially in large enterprise banks, and more. But before we do, do you mind telling us a bit about yourself, your background, and what got you to your current role at Capital One?
Dan Kellet: Of course. Yeah. So I studied mathematics. My background is in mathematics: I did my degree at the University of Nottingham in the UK, graduated back in 2000, and was really keen to apply mathematics to real-world problems, to move away a little bit from the academic side and really use what I'd learned.
I joined Capital One as a graduate statistician back in 2000, and I've been here ever since. When I joined Capital One, the organization had only been in the UK for about three years. It was a very rapid-growth, fast-moving business, and it was a great place to join as a graduate, pick up all these skills, and work on all these different projects.
Initially, my role was around building and maintaining the models that we used for direct marketing. But over the next five years, I worked on a variety of different models and analyses across the whole customer life cycle, from marketing through acquisition and through to customer management as well.
I also worked on some operational use cases, and even some people analytics work as well. I stepped into a small team leadership role, providing statistical support for some test European businesses we had at the time, and back in 2012 took over leadership for the UK statistics team as Director of Statistics, responsible for team strategy, delivery, recruitment, and development. About two years ago, I moved into the UK Chief Data Officer role, which encompasses all the different data job families.
The aim in bringing those different aspects of data together is that we're able to provide more focus and greater opportunity to leverage the great skills that we've got within the team.
Adel Nehme: That's really great, and I love that story of ascending the ranks at Capital One. That must have been a very exciting journey.
Capital One is often hailed as one of the most data-mature banks out there, and this is a credit to the awesome data teams working there. So I'd love it if you can break down, in your own words, what are the hallmarks of a high-impact data team?
Dan Kellet: I think team is the key word here, right?
I think having a really diverse mix of skills and experience is what makes a great data team. But for me, there are three real areas that I look for in terms of building that team. Firstly, a real consumer and end-user focus. I think it's very easy to build statistical models or machine learning models that go nowhere, and they go nowhere because there is no actual consumer need or business need for what the model delivers.
And so I think a great data team really has those consulting skills to be able to understand what keeps accountable executives awake at night and what the worries of consumers are, and then to build a solution that meets that need. So I think that's one area. The second is technical excellence.
And I think, you know, you see these posters that have the million and one different skills that a technical data scientist needs. You do need all those, but you need them across your team; it's very much a team game. But you do need that technical excellence in terms of statistical rigor, in terms of software engineering skills, and in terms of machine learning and algorithmic knowledge.
So technical excellence is definitely the second bucket. And then the third is strong teamwork and communication. Again, you could build the best model in the world that actually is pointed at solving a real-world business problem, but if you're not able to talk about that model and really discuss the benefits, then again, it's not going to get used.
And so the ability to work in a diverse team, and then to be able to communicate results and understand different perspectives. Those, I think, are the three areas that make the most impact for me.
Different Skills You Hire for on Your Team
Adel Nehme: So I'd love to unpack these elements even further, starting with the skills component. You mentioned: one, subject matter expertise and user experience; two, technical expertise and a high level of skill density; and three, communication and collaboration skills.
So let's focus on that second element. You slightly alluded in your answer to the diversity of technical expertise in the team, and how skills are distributed across the team rather than concentrated in an individual. And I love that perspective, since I definitely agree with the notion that there is no such thing as a unicorn data scientist. So can you walk us through the different skills you hire for on your team and how they're distributed?
Dan Kellet: Absolutely. So if I think specifically about the data department, we have four key areas within the department, each having their own team and distinct skills. Firstly, our data analysts.
Our data analysts are great at explaining what has happened. They're the people we go to to understand trends in the data, and when something's gone wrong, how do we unpack that and make it right? But they are also the team that implement the data-driven customer decisions, so things like campaigns that we might look to run, or the strategies that we might look to implement when we're thinking about SMS contacts.
The skills that are most important for those data analysts are a really good understanding of the data domains: what data is held where, and how do I use it. Their coding level needs to be really good, but they also need those great consumer and stakeholder skills to be able to deliver campaigns, for example, that some of our analysts might have designed. So that's the data analysts. Data scientists, secondly, are hopefully great at predicting what's going to happen next. For me, they have to be experts in algorithms, uncertainty, and prediction. We place a lot of faith in our data scientists to be able to take historic data and roll trends forward.
So I think about a great data scientist like a bookcase: you should have all this knowledge of different algorithms and approaches on your bookcase, and the real skill is knowing when to reach for which book. In my mind, no one algorithm is better than another, and actually sometimes an arithmetic mean is good enough. Being able to trade off complexity against predictive power, and really understand the caveats of certain algorithms, for us, that's a big skill for data scientists. The third group are the data stewards, and they're really the gatekeepers for making sure that we have high-quality data.
Not only that we have that data, but that we can find it easily, the right people have access, we know where it's stored, and we know when it's going to be deleted. That's really important. It's a relatively small team, but very highly skilled in terms of, again, data domains; it's also probably where we have our deepest coding skill in terms of data manipulation, and that data governance piece.
The final group within the department is a bit of a mixed bag, but it encompasses both our data product side and what we call analytic engineering. This is probably where, within my department, we have the deepest software engineering skills, and this team really looks to help design and maintain the platforms and tools that underpin all the analysis that we carry out within the business. These are the people who really understand the benefits of well-designed, well-maintained code in order to make everybody's lives easier.
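As an aside, Dan's bookcase point, that sometimes an arithmetic mean is good enough once you trade complexity against predictive power, can be illustrated with a toy sketch. This is illustrative only, not Capital One code; all data here is synthetic, and the "complex" model is a deliberately naive nearest-neighbour memorizer.

```python
# Toy illustration: when the data carries no real signal, a simple
# arithmetic-mean baseline can beat a more complex model on holdout data.
import random
import math

random.seed(42)

# Synthetic data: the feature carries no real signal, only noise around 5.0.
train = [(random.random(), 5.0 + random.gauss(0, 1)) for _ in range(200)]
test = [(random.random(), 5.0 + random.gauss(0, 1)) for _ in range(200)]

def rmse(preds, actuals):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(actuals))

# Simple model: always predict the training mean.
mean_pred = sum(y for _, y in train) / len(train)
simple_rmse = rmse([mean_pred] * len(test), [y for _, y in test])

# "Complex" model: memorize the training set (1-nearest neighbour on the feature).
def knn1(x):
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

complex_rmse = rmse([knn1(x) for x, _ in test], [y for _, y in test])

print(f"mean baseline RMSE: {simple_rmse:.2f}")
print(f"1-NN model RMSE:    {complex_rmse:.2f}")
```

With no needle in the haystack, the memorizing model chases noise and the humble mean wins on the holdout set, which is exactly the trade-off Dan describes.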
Adel Nehme: That's really awesome. And I love how these skills cover the entirety of data projects from ideation to deployment.
You know, one common theme or skill set that emerges across these different roles is subject matter expertise. So I wonder, for technical roles, as you're finding candidates that come from different backgrounds and industries, how do you embed this subject matter expertise into your people over time? As you said, it's relatively easy to create statistical models, but not that easy to create great data products that have an awesome user experience within a specific data domain.
Dan Kellet: Yeah, and I think there's no substitute for experience here. One of the things we try to do when we bring new people into the team is very quickly get them working on real-world issues, real-world problems to solve. And so they start to build up that knowledge and that expertise from day one. Key to that is making sure you're partnering people up with good mentors and slowly looking to build out expertise and knowledge.
Diversity and Inclusion
Adel Nehme: So of course, beyond just hiring for diverse skill sets, hiring for diverse backgrounds and lived experiences is highly important for building a high-impact data team.
Do you mind expanding on how you embed diversity and inclusion as part of the hiring process and how you're able to leverage that diversity in the data products?
Dan Kellet: Yes, definitely. There's a whole raft of things that, as an organization, you can look to do to try and increase the diversity of backgrounds within your team. A big area for us has been to look at what our recruitment channels are, to make sure that we've got a mix of different backgrounds within the team.
I think it's fair to say that if you go back five or ten years, we had a very narrow range of channels, historically very focused on mathematics graduates. But over the last four or five years, we've dramatically expanded where we get our great talent from. One particular example that I feel really proud of, actually:
We have put in place a program to bring people who are currently working for Capital One in the operations into data-focused roles. This has been a great success, and one of the things I like about it is that it was not leadership-led; it was an initiative that was brought to us by the team, with a lot of passion and a lot of thought about how it was going to work.
And over the last 12 months, we've had four people from the operations move into roles within the data department, where we've looked to upskill them in technical skills, whether that's data science or data analytics roles. That's been really successful, and what we found is they have brought with them a whole raft of knowledge.
Knowledge about what it's like to be a Capital One customer, for example, because they have spent all their time working with or talking to our customers, but also some of the challenges that might be in place within the operations: some of the process challenges, or knowledge of how things actually work, rather than how we think they work. That's been really, really useful.
It's definitely been great, firstly, to expand the diversity within the team and the ways of thinking, but also just the passion and the enthusiasm that these people have brought with them is infectious.
Adel Nehme: That's brilliant. I love this story around upskilling, and it segues to my next question fantastically.
So we discussed the importance of skills diversity in a high-impact data team. Given the difficulty of finding those unicorn data scientists with the right combination of skills, where does upskilling fit within the data team? And, harping on that last notion, where does upskilling fit within the overall strategy of the company to level up the entire organization's ability to work with data?
Dan Kellet: Yeah, I'm a strong advocate of shaping roles to individuals' career development aspirations, and I think that's something we try and stay true to within the team. We encourage all the members of the team to take control of their personal development. A big aspect, especially at the start of the year, is to spend a lot of time thinking about what you want this year to look like.
What are the things that you want to be working on, whether that's new skills or new areas of the business? And then how do we shape your role to allow you to pick those things up? Now, a big part of that is how you help people pick up those skills, and for me, that's around multiple things.
We need to make sure that we have good mentors in place. It's really important that if you want to develop your machine learning skills, for example, not only do you need to be in a role that gives you the opportunity, but you need to be surrounded by people who are going to help you learn those things.
We supplement that with training where appropriate. If there are opportunities to go externally and pick up some of those skills, again, that's something we try to do. The other thing that we look to do is get that external inspiration and support.
Typically we do that through strong links with academia, and with the University of Nottingham in particular, given that's where we're based. We have a long history of strong links with the university, whether that is providing master's projects, lecturing, or PhD co-projects. Really, that's a way for us to bring more of that outside thinking in and allow people within the team to almost fast-track their knowledge on particular topics.
Scale the Impact of Data Science
Adel Nehme: That's great. And I'm a huge fan of the attention to creating career goals and pathways for the people at the company. Now, outside of building high-impact data science teams, I want to segue to discuss how to scale the impact of data science itself within the organization. Data culture is obviously a huge element of creating a successful data-driven organization.
Do you mind walking us through the importance of data culture at Capital One and how you've been able to scale it?
Dan Kellet: I think it's not an understatement to say that data and analysis are really at the heart of the key decisions that we make at Capital One in the UK, and it is part of the DNA of the wider organization as well.
That very much runs throughout the business. I'll give you an example. I think one of the most important things to get right around your data culture is this concept of data registration and data ownership. Too often, I see examples elsewhere of data transformation projects where everybody's got really excited about the technology: a data lake is set up, for example, and everybody piles in and throws in their data in a way that isn't necessarily organized or structured; it just is. I've seen examples elsewhere of hackathons where the goal of the hackathon is to get as much data in as possible, and I love that enthusiasm; I think that's great. But where that ends up is with a data lake where you can't find anything, you don't know who owns what, and when something goes wrong or you need to find that key piece of information, you're stuck.
And so having that really thorough data registration and ownership model, while not the most exciting thing, just saves you so much time and makes things so much better going forward. I'm really happy with the approach that we've taken at Capital One, which is: let's nail that data registration process, so that any data transformation we make in the future is much more organized and more likely to succeed.
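The registration-and-ownership model Dan describes, where every dataset has a known owner, location, deletion date, and access list, could be sketched as a simple registry. All names, paths, and fields below are hypothetical illustrations, not Capital One's actual schema.

```python
# A minimal sketch of a data registration record: every dataset has an
# accountable owner, a known storage location, a deletion date, and an
# explicit list of roles that may read it.
from dataclasses import dataclass
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    owner: str            # an accountable person, not just a team alias
    location: str         # where the data is stored
    delete_after: date    # when the data must be removed
    access_roles: tuple   # which roles may read it

# Hypothetical registry entry (owner and location are made up).
registry = {
    "card_applications_2023": DatasetRecord(
        name="card_applications_2023",
        owner="jane.doe",
        location="warehouse.lending.apps",
        delete_after=date(2030, 1, 1),
        access_roles=("data_analyst", "data_scientist"),
    ),
}

def can_access(dataset, role, today=date(2024, 1, 1)):
    """Access requires both an allowed role and data not yet due for deletion."""
    rec = registry[dataset]
    return role in rec.access_roles and today < rec.delete_after

print(can_access("card_applications_2023", "data_analyst"))
```

The point is not the code but the discipline: because ownership, location, and retention are recorded at registration time, "who owns what" and "when is it deleted" are lookups rather than investigations.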
I think the other example is actually just the role of the CDO itself: the acknowledgement that this is an important role, that there are key financial and business benefits in having strong data leadership, and in promoting individuals who value data. That's another kind of example.
Adel Nehme: That's great. And following up on that, how do you, as a data leader, manage the conversation with the rest of the C-suite around data culture and its importance? Now, I'm definitely not asking for any trade secrets here, but what are general best practices on how to approach this conversation?
Dan Kellet: I think it's all around really understanding what keeps them awake at night. I have this approach that I talk about with new graduates around asking why. As a data scientist, a big part of our job is to ask why, and then to ask why, and then to ask why again, because the first reason is never really the reason. And so if you can be curious and keep asking: why do you care about that?
Why is that important? Why do I need to solve that? You're more likely to get down to the actual thing that needs resolving, and more often than not, that's something that data and algorithms can help with. And so if you are using data in a way that really solves the things that matter, you're going to build that buy-in; you're going to build that culture across the C-suite.
Adel Nehme: I definitely agree. And do you think organizations starting off on their data journey should opt for a low-hanging-fruit type of proof of concept when it comes to working with data, or go for a full-fledged project and try to demonstrate value?
Dan Kellet: Yeah, it's tricky. I think it's all about a blend, actually.
I'm a big fan of quick wins. If there are things that we can go out and get that build some momentum, then that's great, because I think what you don't want is an executive standing at the front with a great PowerPoint going, "Well, we'll be back in three years and we will deliver all these amazing things."
On the other hand, I think if all you do is quick wins, there's a real risk that you never get to some of that transformational stuff. So it is all about a really nice blend, and I hope that part of my role as CDO is to try and find that balance: to help paint that longer-term strategic view and make sure it continues to stay on track, but also make sure that in the weeks and months that follow there are lots of great news stories.
There's lots of good stuff coming out.
Adel Nehme: And as a CDO, where does self-serve analytics and empowering the rest of the organization to work with data fall into the team's priorities? What are the ways you've enabled people who are not necessarily data scientists to query and work with data that is relevant for their day to day?
Dan Kellet: This is a key challenge for a lot of organizations, and definitely for us as well. It's something that we've spent a lot of time over the last year or so grappling with, and I think there are a few approaches to tackling it. Firstly, you've got to have some training in place that builds those core coding skills and data knowledge.
What we found historically, especially in that data analyst function, is there's a real risk of it becoming a bottleneck, because they're the only people who, it's perceived, can get that data or pull that information, and that just creates tension and frustration everywhere. So if you can get some basic training in, that's probably going to serve 75% of your needs.
For me, that is all around knowing where to get the data, knowing how to use that data, and then some really core coding skills just to get people started. The other thing that we've tried to do is bolster that with some additional support. So, for example, we have a range of different Slack channels internally that are staffed to answer quick questions about data.
So if I'm working on a particular table and I don't know what a particular field is, I can hopefully ask a quick question. Some of those channels are staffed intentionally; some of them are more community-staffed, and it's just a way of unblocking some things. The other thing that we've found really useful is some actual in-diary sessions.
So we have an "Ask a Data Analyst" session twice a week that people around the business can just sign up to: come along, bring their query, and our data analysts will help them out, hopefully making things run more efficiently or pointing them in the right direction if they're looking in the wrong place. I think the other bit on self-serve analytics is to really think about what you open up to whom, and the key here is not to overwhelm people. There's a real risk if you go, "Well, here is all the data we have across all our different systems and all our customers. Go for it." I have this view that it's really easy to add a lot of hay without adding any new needles. What you really want to do is focus in and say, "Okay, for these types of roles, what you really need are these data sources; don't worry about all these other things. If it gets too complicated, we can help you with some of those additional things, but for the majority of your queries you're just going to need this table, or this small number of sources."
Adel Nehme: I love how you end it there, by making it simple for the user. Do you mind expanding on what the iteration process looks like and how you continuously integrate feedback from your stakeholders?
Dan Kellet: Yeah. And again, I think that's the consulting skill set I talked about before; that's where it really comes in here, actually, because I don't want to get to a point where the data team is seen as a blocker, whether it's because functionality doesn't exist, or the data is in the wrong place, or there isn't the right access.
So we continue to actively seek feedback from users of the data. The other thing, as we look to build tools and platforms, is almost looking for net promoter scores on those tools and platforms; it's a great way of seeing: are we making progress here? Is this becoming a tool or platform that people enjoy, that is actually solving their problems, or are there real issues? So definitely that feedback loop is key.
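The net promoter score Dan mentions for internal tools is simple to compute. A minimal sketch, with made-up survey data rather than anything from Capital One, looks like this:

```python
# Net promoter score: the percentage of promoters (scores 9-10) minus the
# percentage of detractors (scores 0-6), on a 0-10 "would you recommend?" scale.
def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical quarterly survey responses for an internal data platform.
q1 = [9, 10, 8, 6, 9, 7, 10, 4, 9, 8]
q2 = [9, 10, 9, 8, 9, 7, 10, 6, 9, 10]

print(f"Q1 NPS: {net_promoter_score(q1):+.0f}")
print(f"Q2 NPS: {net_promoter_score(q2):+.0f}")
```

Tracked quarter over quarter, a rising score is the "are we making progress?" signal Dan describes; a falling one flags that the platform is creating friction.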
Adel Nehme: You mentioned here the importance of training and upskilling. Where do you view the role of the CDO in scaling organizational data literacy? And can you comment on how the role has evolved over the past few years, from one that's just been around leading the data team to one leading massive transformational projects?
Dan Kellet: Yeah, I think that area of data literacy is definitely one of my key concerns. There's a real risk that you build a bit of an ivory tower, where people look at the data team and go, "Hey, these guys are the only people who can get to this data, the only people that can do this analysis; they bamboozle us with their algorithms and their charts."
And if that's the outcome, then we're not winning, in my mind. I think the role is to help people better understand data and to demystify both data and machine learning. And I think the way to do that is to make it relatable and, if possible, to make it fun. So one of the things that we've done in the past is a bit of a roadshow around what machine learning is.
"Hey, I read about it in the newspapers, I see these articles about these things. What is it? How can we use it? What are some of the risks?" Can we present it in a way that's fun, that doesn't make it scary, and really raise everybody's knowledge up a bit? We've looked to do all kinds of stuff there, but I think the more you can make it hands-on, and maybe even a bit ridiculous, the better people are going to engage with it.
Challenges With Data
Adel Nehme: I love that, especially creating a community around data culture. Now, of course, given that we've been talking about data science at Capital One, I'd be remiss not to talk about some of the data work Capital One has done, and the challenges in being impactful in the financial services space. So I'd love it if you can walk me through the challenges of working with data within an industry that is extremely regulated, and how you ensure that you're consistently innovating.
Dan Kellet: There are different ways to look at this. For me, the level of regulation in some ways really helps provide focus, and actually ensures that innovation not only helps push the business forward, but does so in a way which takes into account consumer impact.
So in my mind, working within that regulatory framework really helps give that focus. Part of that is making sure that you're building strong links both with the regulators and with consumer groups, to try and help shape the future. One of the initiatives that I'm most proud of over the last couple of years is working as part of the Bank of England and the FCA's Artificial Intelligence forum, which has brought together a range of experts across finance to better understand the implications around data, model management, and governance when it comes to artificial intelligence, really helping to raise the level of knowledge and to have insightful debate about where we want this to go next. It's been a great forum, a really good way of learning different perspectives: getting different views from across the industry and across regulators, and also helping to educate and engage. I think the links with academia also really help here.
Because again, it's building out knowledge. I go back to my bookcase metaphor: this is really helping us build another wing onto that bookcase. The more you go out and talk to different people, the more ideas you're going to bring back in.

Adel Nehme: That's great. Then in terms of prioritization, how do you balance between quick wins and transformational outcomes, and how does the regulatory dimension of working in financial services impact that prioritization?
Dan Kellet: I think ideally you want to have a real mix of different things that you're working on. If I think about it, there are three buckets: your longer-term strategic delivery, your quick wins, and then your almost must-dos, because of the regulatory nature of things. You want to make sure you've got a real mix of those three things going on.
Key to that is active and regular prioritization. One of the benefits we've really seen from bringing all the different data families together is the ability to prioritize across the data department. That's given us a lot of flexibility to move quickly and to reprioritize if something urgent comes in, whilst keeping a focus on our long-term goal, because it allows us to build a strategy alongside delivering those rapid wins.
Adel Nehme: Of course, in financial services, the impacts of AI could make or break someone's ability to buy a home or receive credit, or even accidentally lead to predatory behavior: for example, a loan recommendation algorithm giving a recommendation outside of a consumer's capability. So I'd love it if you can walk me through how you've embedded responsible and ethical use of AI in the development process, and how you've been able to minimize harm.
Dan Kellet: Yeah, this has been a big focus for us over the last few years and has led to a lot of different things. The framework I always try and use when I'm thinking about responsible or ethical AI is to think about the different audiences it is for, and I end up with three distinct audiences that I try to bear in mind when we're doing these things.
One is the consumer. We need to bear in mind that the decisions we make have a real consumer impact, as you said, and we need to think through not only the business impact of making some of those decisions, but also the consumer impact. And so there's an ongoing push around how you start to make your decision frameworks more understandable to a consumer.
I don't think that's necessarily an easy thing to do, especially as you increase the complexity of your algorithms, but it is something we always need to bear in mind: how do I justify the decisions the framework is making in the consumer's and the regulator's eyes? The regulator is that second audience that we need to think about.
So if the consumer is all about understanding those decisions at the micro level, from a regulatory perspective it's all about how we justify the fairness of the systems and the understandability of the way they work. And part of that is being very direct around the trade-offs involved in making decisions.
Your choice of algorithm is a great example here: you may choose a really black-box algorithm that's really complicated and actually will get more of your decisions correct, or something really simple that is much easier to understand but perhaps not as effective. I think as a data scientist, as a CDO, you need to be working with regulators to understand whereabouts on that spectrum you should be. And then I think the third area to focus on is your key business accountable executives. They're the people who are going to be using the output of these models to drive their business decisions. The area of interest there is perhaps around bias: how do you understand the potential biases that might creep in through your data or your choice of algorithm, and how does that play through into the decisions that are going to rely on those models?
Adel Nehme: That's great. And I really appreciate how you break it down into multiple components between regulators and stakeholders. The last thing you mentioned is how to manage the stakeholder relationship and AI governance. Do you mind walking us through what that governance model looks like at Capital One?
What are the checks and balances embedded in the development process to avoid and minimize the harms of machine learning models?
Dan Kellet: Yes. We have really strong model governance processes in place throughout the model's life cycle, not only in the model build phase, where there's some really good structure around understanding: what is the need for this model?
Where do you get your data from? How do you know your data is correct? How do you understand the algorithmic fit? That's one area of focus. Then, moving on to deployment: to be honest, this is where you'll typically find most of your model breakages actually happen, at the deployment phase rather than the model build phase.
And so there's a real focus on: how do you know that the data feeds are going to flow through in the way that you expect? How do you know that your algorithm has been implemented in a way that you can evidence and test? And especially in those first few hours or days, what testing have you got in place to make sure that you're getting the results you expect? But then it doesn't stop there.
Once a model is deployed, you need to make sure that you're continuing to monitor its usage, and whether that usage changes. Maybe you're going to expand the usage, or maybe the regulatory landscape changes, so you need to make sure that you're continuing to reevaluate that model. A really strong model governance center and set of processes is key. Another part of that is clear roles and responsibilities within the governance process.
And so we make sure we're very intentional as to who is the person who really knows this model inside out, who is the person who is on the hook for the decisions these models make, and who is the person who is going to validate the model in an independent way. By having really clearly delineated roles and responsibilities, that's another way you make sure you've got those checks and balances.
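The ongoing monitoring Dan describes is often implemented as drift detection on model inputs or scores. One common tool is the Population Stability Index (PSI); the sketch below, including the ten bins and the conventional 0.25 alert threshold, is a generic illustration rather than a specific Capital One practice:

```python
# Hypothetical drift monitor: compare today's score distribution against
# the distribution seen at model-build time using the PSI.
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples; larger = more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def fractions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Additive smoothing so empty bins don't produce log(0).
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                 # scores at build time
today = [min(i / 100 + 0.3, 0.99) for i in range(100)]   # shifted scores
drifted = psi(baseline, today) > 0.25                    # rule-of-thumb alert
```

When `drifted` fires, the governance process above would route the alert to the named model owner for reevaluation.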
Adel Nehme: So as a follow-up, given the amount of interlock and collaboration needed, would you say it's accurate that a data culture and a common data language are needed to have fruitful collaboration?
Dan Kellet: Definitely. Definitely. I come back to what I said at the beginning around the importance of team. You've got these different roles across the entire build and deployment life cycle of a model.
And if any of those roles fails, then you don't end up with a model that's making the decisions you want. So yeah, really deep teamwork and collaboration is key there.
Adel Nehme: That's great. Now, Daniel, before we wrap up, I'd love for us to think about the future of AI. What are some of the trends and advances in data science that you're particularly excited about, that you think will have a big impact in financial services?
Dan Kellet: One of the trends that I'm really interested in is the open banking initiative: the ability for consumers to allow financial organizations to access their banking trade-line data. Open banking as an initiative has been in place for quite a few years now, but we are now starting to see some really impactful,
real-world applications of it. I think there's a lot of momentum there, whether that's in things like income verification or in helping with credit risk. There's a whole lot of opportunity in open banking. The other trend that I'm really interested in is, well, more generally, the hype cycle around
data science and machine learning. I think we're at a point now where the rubber really hits the road, and people are needing to see a real return on expenditure when it comes to machine learning. For me, the key to that is simplification and focus: doing a smaller number of really well-focused executions.
And actually, I think you'll start to see some real forward leaps there in the financial sector.
Call to Action
Adel Nehme: That's really exciting. Now, finally, Daniel, before we wrap up, do you have any final call to action before we end today's episode?
Dan Kellet: I think there are a couple of things that have popped up, especially over the last two years, as lessons from going through a pandemic response.
And one of those is that short-term versus long-term investment in your foundations really shows at times of stress. Going back to the quick wins: if all you're doing is quick wins, and then the world changes, I think you find it really difficult to react, because that investment in the foundations is not there.
You can find yourself behind. So I think that's been a really key lesson for me over the last couple of years. The other thing is that this period has been a real reminder that changes that happen in the world will find their way into your data and your models. That has to be at the top of your mind: if you're obsessed about customer-impacting models and analysis, that's just got to be top of mind.
Adel Nehme: Thank you so much, Daniel, for coming on DataFramed.
Dan Kellet: Great. Thank you.