Scaling Experimentation at American Express with Amit Mondal, VP & Head of Digital Analytics & Experimentation at American Express
Amit Mondal is the VP & Head of Digital Analytics & Experimentation at American Express. Throughout his career, Amit has been a financial services leader in digital, analytics/data science, and risk management, driving digital strategies and investments while creating a data-driven and experimentation-first culture at Amex. Amit currently leads a global team of 200+ Data Scientists, Statisticians, Experimenters, Analysts, and Data experts.
Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.
Key Quotes
We are in the golden age of experimentation, especially in the digital ecosystem. Many things are going to change, but the ability and willingness of businesses to adopt experimentation, run experimentation, and deliver results, I think that's just going to keep improving over time.
Customer reaction functions have become highly nonlinear in today's world. Think of random things going viral. Similarly, companies need to test a lot of different ideas. Some of them will succeed, many will not. But there is no way of predicting success ahead of time. Even the best-placed intuition is going to go wrong. So the only way companies can get to this level of success that they want and desire would be to expose ideas in front of customers, let customers adopt those ideas, let ideas find the right product market fit, and voila, that will be success for companies. And therefore, in today's world, you really need to experiment and experiment hard.
Key Takeaways
Successful experimentation requires collaboration across product, technology, design, analytics, marketing, and legal teams to ensure comprehensive and effective results.
Explore the use of large language models (LLMs) for qualitative data gathering and customer feedback to enhance the depth and scope of your experimentation insights.
With the decline of third-party cookies, use marketing mix models and geo-based experimentation to measure the impact of your media strategies and understand customer responses.
Transcript
Adel Nehme: Amit Mondal, it's great to have you on the show.
Amit Mondal: Thank you, Adel. Really excited to be here.
Adel Nehme: You are the VP and Head of Digital Analytics and Experimentation at American Express. So maybe let's set the stage for our discussion. Let's start with the fundamentals. Why is experimentation so important today in any modern organization, and how does it drive innovation and value? I'd love it if you can anchor that in a couple of examples as well.
Amit Mondal: In today's business world, it's become increasingly important for companies to scale ideas quickly, find the right ideas, and make sure that they get in front of customers for their feedback as soon as possible. And the only robust way to do that is through an experimentation program. There is one more critical reason.
Customer reaction functions have become highly nonlinear in today's world. Think of random things going viral. Similarly, companies need to test a lot of different ideas. Some of them will succeed, many will not, but there is no way of predicting success ahead of time. Even the best-placed intuition is going to go wrong.
So the only way companies can get to this level of success that they want and desire would be to expose ideas in front of customers, let customers adopt those ideas, let ideas find the right product market fit, and voila, that will be success for companies. And therefore, in today's world, you really need to experiment and experiment hard.
to sign up for the same product. We love this way of acquiring new customers because existing customers can be your best friends when it comes to going out to the marketplace. And it's great for customers, because they get the benefit of both their friends getting the card and some points and value for themselves.
Now, we took a very hard look at what would drive incremental success for an already well-established program like MGM (member-get-member). And in this instance, we decided we needed to figure out how quickly our pages load across each stage of the customer interaction life cycle. Now think of it, there are two stages to this.
One, an existing customer decides to refer a friend. And second, the friend or family member then decides to accept that offer. And we needed to figure out if some stages of the journey were more important, and if we had a speed issue in one or more steps of this journey. And we experimented into it in a way that allowed us to gain that knowledge.
And we found a really nice improvement in our overall conversion rates at the end of a series of experiments across multiple markets globally. So this is, again, a great example to me. If we had not experimented, maybe it would still have been the right thing to do. No one argues for slower pages. So yes, faster pages are good.
But the value of that work, the incremental value of more investment in improving page load speed, we would not have been able to know. And therefore, this was a really successful program in my mind, one that explains why, in today's day and age, you need to experiment your way into change, even obvious change.
Adel Nehme: And what's really interesting about the example that you're mentioning here, on page load speed, is that it's not something that is just spearheaded by the analytics team, right? There's an engineering component. There's a user interface and experience component. And you know, we've been talking about this behind the scenes.
What's really impressive about the experimentation culture at American Express is that it's not just confined to a single team, individual, or function. It's an organization-wide endeavor. So maybe, what are the key components of a successful experimentation strategy that allow for this level of decentralization and effectiveness to be able to drive success here?
Amit Mondal: I think it goes without saying, it depends on the size of the company. Amex is a fairly large company, a financial services company with a lot of regulatory and legal requirements that we need to make sure we are following. And in a company of our size, with tens of millions of customers and tens of billions of dollars in revenue, there is just no way an individual can drive something like experimentation across the company. We need multiple different teams to come together. In the previous example, it would have required product owners, technology owners, designers, and analytics people to come together with our marketing as well as legal partners to drive this agenda of change.
First of all, any program that needs to succeed in this kind of complex environment needs to start with the business problem. And by starting with the business problem, you can then connect where and what you are experimenting to questions that are really critical to the business, whether that's surviving, growing, or thriving, whatever the objective is at that point in time.
In addition, having the right platform makes it easier for people across the company to experiment into things that they want to experiment into. It requires training. It requires people with the right level of knowledge and experience to drive this culture of experimentation. It requires leadership, leadership to accept that not all experiments are going to be successful.
The experimentation program can be very successful even when individual experiments are not. And a successful experiment is not only one that results in something successful. It can be something that results in a lot of knowledge and helps you avoid a big pitfall down the road.
I consider those successful experiments as well. So this ability to accept change, accept knowledge, and learn from experimentation is also a critical part of a culture of experimentation.
Adel Nehme: Okay, that's really great. And there's definitely a lot to unpack here. You know, you mentioned having the right platform, having the proper skills within different team members, and the culture of experimentation, right? And even looking at experimentation as a capability within the organization where you're learning; it's not necessarily just about driving value on the bottom line, but also about learning about pitfalls down the line.
One thing I want to get into before we discuss all of those is ownership, and who spearheads the experimentation effort, right? We mentioned the decentralized nature of experimentation at American Express, from engineering to data to business functions.
Maybe who should spearhead the efforts of experimentation, and how do you coordinate diverse functions to work cohesively on the experimentation agenda? I'd love to learn how ownership is assigned on experimentation.
Amit Mondal: Experimentation itself doesn't have a singular owner, but experiments definitely do. And when it comes to experiments, you know, it really depends on the objective function you're trying to influence. And that usually results in the right set of people taking ownership of that particular experiment.
If we're trying to improve conversion rates, it's probably going to be the digital product teams. If we are trying to improve the technical infrastructure, it's going to be the technology teams, and so on and so forth. And I think it really also depends on the size of the company. If you have a very large company, you almost always, by definition, require multiple levels of leadership from different teams. If you are a smaller startup, perhaps you can have one singular owner of all the experiments that run in the company. I think the last thing I would want to call out is the area where you are experimenting. Now, most of us are very familiar with digital experimentation, but we experiment with all kinds of things.
It could be pricing. It could be how you answer phone calls. It could be how you send out advertising. And those require other sets of leaders in the company to then take ownership of the particular process where the experiment is going to have the biggest impact. And that's the way we end up deciding ownership and leadership of experiments.
But again, it requires the entire village to come together to drive a successful experiment.
Adel Nehme: For the entire village to come together, there needs to be buy-in from the wider organization, and that's where the culture aspect comes into play: a culture of experimentation where people want to try these things, right? So maybe, how do you ensure that everyone within the organization is bought in and culturally aligned on the importance of experimentation? Is it a top-down thing? Is it more decentralized? I'd love to see your approach here.
Amit Mondal: Again, there's not going to be a single answer. A lot of planning, a lot of patience, and a lot of making sure you understand and put yourself in the perspective that your partners are going to have, right? Because not everybody looks at the same problem in the same way.
Not everybody has the same set of priorities. And with the right planning and process, I think this is exactly the way to go. There is almost no way you can force people to experiment. You know, it's really something that must evolve organically. And I think we are lucky that we have buy-in right from the very senior level of the company, CEO down, in terms of using experimentation as a lever to drive the evolution of the business and how the business is going to move forward.
Adel Nehme: You mentioned buy-in from leadership here. What does sponsorship from leadership look like in an experimentation setting? I'm sure there are a lot of leaders within organizations looking to inspire other leaders, or other members of their organization, to drive more experimentation.
I'd love to learn from you what role executive sponsorship and leadership plays in driving a culture of experimentation, and what that looks like in practice. So I'd love it if you can expand on that.
Amit Mondal: There are two sets of questions you're asking. One, is the executive leadership aligned with the principles of experimentation? And second, what do people actually experiment into? And they're not necessarily the same things. Because yes, you could have a situation where leadership is very aligned with the principles and the values of experimentation, but they may or may not agree with a specific experiment being conducted, or they may not even agree with where, when, or how a particular experiment is being conducted.
And I think that's where the different levels of leadership become really important. To the extent that I, as an individual, care about my functional area, I should have a backlog of ideas and experiments that is already aligned with the key business questions that my functional area can then drive success in. And if you start with those business questions, you will find that experiments that link back to successfully answering those business questions get prioritized in a rapid fashion. Areas where you are not able to establish a connection between the experiment results and the business outcome will always be challenging in terms of finding the right resourcing across the organization.
I think the last thing to call out there is that experimentation needs to have a vision. It's not just an individual experiment that you're running. You are usually running a series of experiments to answer a set of questions which are all linked together.
And to the extent that you can link back specific instances or specific experiments to these hypotheses that you've already pre-aligned across the organization, that, in effect, leads to easier experiments when the rubber meets the road.
Adel Nehme: There's something that you mentioned here that I'd love to expand on: aligning on and defining exactly the scope of the experiment. What is exactly the metric that we're trying to move the needle on? What is the business outcome that we're trying to move the needle on? And whether you frame that correctly or not will determine the level of executive sponsorship you get.
So, walk us through the framework of a successful experiment. How do you define a target for an experiment? How do you go about actually building an experiment that is rigorous? I'd love to see your methodology here so we can unpack it and dig a bit deeper.
Amit Mondal: Again, starting with the business question, you now know the particular objective that you have, the outcome that you want to drive. The next step in that process is to really come up with hypotheses, because a particular objective can be driven in many different ways. There's no one right answer when it comes to experiments.
So you need to come up with a set of hypotheses that could potentially take you to that same goal. Once those hypotheses are in place, I think then the team goes back to figure out what's feasible and what's not. Which one can be implemented quickly? Because again, time to market is really important.
There is an opportunity cost of experimenting. You don't want to experiment unless you think that there are no other ways of getting the same answer. And having multiple sets of eyes on the cost of an experiment and the time it takes then leads to further winnowing down of the hypotheses into maybe a couple of absolutely key hypotheses that you want to test.
Once all that is done, it needs to feed back into a backlog, and we have a quarterly process, a very agile process, where we look into all the ideas and figure out where the business wants to go. And then we prioritize among those ideas using, again, business value, time to market, agile tools like weighted shortest job first, and various ways of winnowing down candidates into either very specific hypotheses or very specific experiments.
Ultimately, it also depends on buy-in and inputs from very senior leaders, which is also critical. Sometimes experimenters may not have all the context, and it's always helpful to make sure that alignment and inputs from senior leaders help guide the right decision.
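The weighted shortest job first prioritization Amit mentions boils down to ranking backlog items by cost of delay divided by job size. A minimal sketch of that scoring pass follows; the backlog items and all scores are hypothetical illustrations, not Amex data.

```python
# Weighted Shortest Job First (WSJF): priority = cost of delay / job size.
# Cost of delay here is a simple sum of business value, time criticality,
# and risk reduction scores (all made-up illustration values on a 1-10 scale).

def wsjf_score(business_value, time_criticality, risk_reduction, job_size):
    """Higher score means run this experiment sooner."""
    return (business_value + time_criticality + risk_reduction) / job_size

backlog = [
    {"name": "page-speed test", "bv": 8, "tc": 5, "rr": 3, "size": 2},
    {"name": "new-offer copy", "bv": 5, "tc": 3, "rr": 1, "size": 1},
    {"name": "checkout redesign", "bv": 9, "tc": 8, "rr": 5, "size": 8},
]

ranked = sorted(
    backlog,
    key=lambda e: wsjf_score(e["bv"], e["tc"], e["rr"], e["size"]),
    reverse=True,
)
print([e["name"] for e in ranked])
# → ['new-offer copy', 'page-speed test', 'checkout redesign']
```

Note how the small, quick experiment outranks the large redesign even though the redesign has the highest raw business value; that is the "time to market" pressure Amit describes.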
Adel Nehme: Kind of doubling down on this particular point, when you're making sure that you're building a rigorous experiment and adopting this framework, a couple of things you mentioned are really important considerations, such as sample size, or making sure that you have comparable populations within your experiments.
Can you delve a bit into the nuances and explain how you approach building robust experiment designs? And what are the pitfalls to avoid as well?
Amit Mondal: Again, the objective function determines the type of experiment the team wants to run. So, for example, we run A/B tests, we run multivariate tests leveraging design of experiments principles, we run multi-armed bandit tests, and really they are tools in your toolkit to be leveraged as per the need of the business question you are trying to answer.
There is no one-size-fits-all. So that's the starting point. Once you've decided which type of experiment would help answer the question in the most effective way, then you go ahead and decide specific areas that link back to running that experiment. For example, sample size. What is the minimum sample size you require?
What is the expected lift? We run a lot of frequentist statistics-based testing in this company. We want to make sure that we leverage our robust framework for levels of significance and power, which we keep consistent across different experiments. Those really help us come up with the right sample size.
And, finally, we also leverage the ability of a large company like this, where we look at other business units that may perhaps have run similar experiments, learning from prior sets of experiments to inform how long to run, as well as the pitfalls that can come up when running experiments.
So all of those play into our ability to look into sample size and into the window of time that would be needed. And this is sometimes important because, again, alignment with stakeholders is important. If a particular experiment is not going to provide the kind of results you want, you want to be able to kill that experiment ahead of time.
And, you know, you also want to set up those boundaries in terms of, if you have really negative results, how quickly you want to practically stop a particular experiment. So yeah, those are some of the key things that determine the framework we leverage to experiment.
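The significance-and-power framework Amit describes feeds directly into a minimum sample size. A minimal sketch of that arithmetic for a two-proportion test follows, using the standard normal-approximation formula; the baseline conversion rate and lift are made-up numbers, not Amex figures.

```python
import math
from statistics import NormalDist

def min_sample_per_arm(p_base, p_variant, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    delta = p_variant - p_base                      # minimum detectable effect
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Hypothetical: 5% baseline conversion, hoping to detect a lift to 6%.
n = min_sample_per_arm(0.05, 0.06)
print(n)  # → 8155 visitors per arm
```

Halving the detectable lift roughly quadruples the required sample, which is why the "expected lift" question comes before the sample-size question.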
Adel Nehme: That's really great. And when you look at the timing or duration of the experiment, or the sample size, do you have any recommendations on a minimum duration or sample size to look out for, for an experiment to be robust or useful?
Amit Mondal: Absolutely, we have a minimum duration requirement, but that also depends on the type of experiment we are running. We know the seasonality and traffic patterns across many of our products. We know that a lot of our traffic changes depending on the day of the week. So we want to make sure that we take into account those kinds of confounding factors that can impact the results.
The other thing to keep in mind is that the experiment result should also be something that we can take forward. So, in a way, if you are experimenting between the 20th of December and the 31st of December, I doubt those results are going to scale for the rest of the year. So we avoid specific instances of strong seasonality that we see in some of our journeys and products.
Adel Nehme: You mentioned here confounding variable, I think this is probably a pitfall that a lot of teams fall in pretty quickly when they design an experiment. Walk us through, for those who are not aware, what is the concept of a confounding variable and how do you avoid designing an experiment where there's many confounding variables at play?
Amit Mondal: Well, you really need statisticians to answer that question, which we do have on our team, because when we set up an experiment, we want to make sure that both logically as well as mathematically, we don't have confounding variables. And we also challenge our experimentation results to make sure they were not influenced by confounding variables that can potentially creep in.
There is no one clear answer. Again, knowing these confounding variables depends on the type of experiment and the area of the business where we are running a particular experiment. We have built up that set of knowledge over multiple years, so confounding variables in different parts of the business are generally well understood. But we do not take that for granted, because again, when you run a new experiment, you could potentially have confounders, you can potentially have changes in the traffic pattern, you can potentially have issues external to the experiment itself that end up impacting the way we see results.
Some very obvious ones, which probably impact every single experimenter globally, show up when you launch a new experiment: you can have a recency effect, you can have a primacy effect, where just because something is new, customers might be clicking through on it at a higher rate. So your initial reaction may not be the steady-state reaction of customers.
That's one factor to absolutely keep in mind. The other thing, which I'm sure impacts acquisition teams across companies, is that maybe you are running an experiment on one part of your funnel, but if your advertising strategy changes at the top of the funnel, and somehow the advertising strategy change is not well randomized, you end up in a situation where a nonrandom set of customers goes through different arms of the experiment, again potentially creating issues with the results themselves.
So we try and keep our eyes very, very open to these kinds of challenges. They are constant, they are ongoing, and that's part of the art of experimentation. It's not just running an A/B test; it's also making sure you understand randomization, and you understand all of these things in the ecosystem that can potentially muddy the experimentation results.
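One common guard against the randomization breakdowns Amit describes is a sample ratio mismatch (SRM) check: compare the observed arm counts against the intended traffic split with a chi-square statistic. A minimal sketch, with hypothetical counts:

```python
def srm_chi_square(observed, expected_ratios):
    """Chi-square statistic for observed arm counts vs. the intended split."""
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Intended 50/50 split; observed counts are made-up illustration values.
stat = srm_chi_square([50_800, 49_200], [0.5, 0.5])

# Critical value for 1 degree of freedom at alpha = 0.001 is about 10.83;
# exceeding it suggests the traffic split is not behaving as designed.
print(stat, stat > 10.83)  # → 25.6 True
```

A very strict alpha (0.001 rather than 0.05) is the usual convention for SRM checks, because with large traffic even a tiny imbalance becomes detectable and the check runs on every experiment.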
Adel Nehme: I think this segues great into my next question, because there's a lot you need to know here. You mentioned earlier in our chat that experimentation is a capability that is decentralized at American Express: different teams spearhead different types of experiments that impact their function.
And now we're discussing that there's a lot you need to know about running a successful experiment, which segues to my next question: how important is it to have a common data language, data literacy, or maybe statistical literacy within the organization to be able to scale experimentation similar to what you've done at American Express?
So I'd love if you can maybe comment on the role that data skills, data literacy plays when it comes to being able to scale this culture of experimentation.
Amit Mondal: Absolutely. I think you're asking a very long question and asking me to comment in a short time. It's almost unfair.
Adel Nehme: I'll let you comment for a long time if you want, I mean, so I'll let you go, go into detail.
Amit Mondal: Yes, of course. Look, data literacy is absolutely basic to an experimentation culture, and if different parts of the company have different definitions for what is in effect the same data, it's very hard to then compare results of experiments.
If different parts of the funnel define the same item differently, which is not an unknown problem, that can be its own confounder, because sometimes the experiment that you're running spans teams that operate in different parts of the funnel. So practically, you need to be able to bring different teams together and have them speak the same language.
I think it's also important to have very robust statistical standards. As I said, we usually leverage frequentist statistics for this purpose. The statisticians on our team have looked at the corpus of experiments that we have run, and they have a set of standards that we enforce across all the experiments across the company.
But it's also up to individual teams, because sometimes you have sample size issues, especially when you're running experiments in a completely new product or a completely new channel; you just may not have as much sample size. So would it necessarily make sense to have the same set of standards that you have for your primary product or primary channels, where traffic is usually not a problem?
That's probably a question to be answered by individual companies and individual teams. But if you do not have that singular language for assessing experiment results, it can cause its own issues later on. So I would strongly recommend that teams, before they get into widespread experimentation across a large company, set up those standards and make sure everybody aligns, because that really defines which experiment is considered a success and which is considered inconclusive, and you do not want to change standards in the middle of a particular experiment.
That can cause a lot of relationship issues, I can tell you.
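A shared readout standard of the kind Amit describes can be as small as one z-test function that every team calls with the same significance level, so "success" means the same thing everywhere. A minimal sketch for two-proportion conversion tests, with hypothetical counts:

```python
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical company-wide standard: call success only if p < 0.05.
# Control: 500 conversions / 10,000 visitors; variant: 585 / 10,000.
p_value = two_proportion_z(500, 10_000, 585, 10_000)
print(p_value < 0.05)  # p ≈ 0.008, below the shared 0.05 bar
```

Freezing alpha in one shared function, rather than letting each team pick its own threshold mid-experiment, is exactly the "do not change standards in the middle" discipline described above.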
Adel Nehme: Yeah, I can imagine. You know, you're talking about these standards, and I'm sure your team is the one building these standards and advocating for them within the organization as part of that overall statistical literacy we've discussed.
Walk us through what those standards look like in a bit more depth. And if you had a magic wand, what does statistical literacy look like within an organization that is firing on all cylinders when it comes to experimentation?
Amit Mondal: I mean, I think the statistical standards don't necessarily deviate much from the textbooks that you can pick up on design of experiments. But I do think a few things change. I can give an example, and this is an example that's perhaps a little hypothetical, but hopefully makes sense.
What is the cost of reading an experiment result inaccurately? Say, for a financial services company, for an e-commerce company, versus a pharmaceutical company. If you are a pharmaceutical company, and by the way, I have no experience working for a pharmaceutical company, so this is a hypothetical example.
People could die if you have looked at the data using statistical standards that are not aligned with the standards regulators require. You could, as a company, be sued out of existence. Now, if you are trying to improve conversion rates in a funnel, that's a somewhat different type of impact.
Yes, you could have a financial loss, but you'll not be impacting people's lives in a way that's extremely detrimental. So your ability to accept errors in an experiment readout depends on which part of the funnel you're operating in, what business question you're trying to answer, and how critical it is for you to have the right answer. The degree of confidence you need ultimately determines the standards that you put in.
So there's probably not one single answer that I can give. We understand the different needs that different business units and different parts of the funnel have. And based on that, we have set up standards.
Adel Nehme: You know, you're hinting here at different standards across different teams, and you've also mentioned previously that depending on the area, the channel, or the product, the standards of experimentation may be different, and so may how you approach experiments.
I remember Brian Chesky, CEO of Airbnb, once mentioned the limits of experimentation, especially when venturing into new areas: there's not a lot of data you can work with if you're in a completely new area to run experiments on. Can you walk us through the limits of experimentation?
At what point is there too much reliance on experimentation, or too much experimentation happening? I'd love to hear the counterpoint here.
Amit Mondal: Personally, I love Airbnb, and it's amazing, the business that Airbnb has built over such a short time and the brand it has created. I have to say, I don't necessarily agree with the statement that there can be too much experimentation. But I do think what Brian wanted to say is that experimentation cannot answer every question, and not every situation can be converted into a specific experiment.
So there is always this need to rely on business intuition. But, in effect, business intuition also helps you choose the path you want to take. And some of those paths are very difficult to decide ahead of time or experiment into. In certain cases, I certainly believe it is absolutely critical for business leaders to take a stand, especially when they are thinking of completely net new markets, net new products, disruptive processes, disruptive products.
It's hard to experiment into that level of specificity and gain enough knowledge without significant investments. Yes, it's sometimes possible to experiment into disruptive change, but usually that is very difficult. You have taken a decision that you must follow through on. And it's the path that you take that determines success there.
So yes, I would have to agree with him that there are certain areas where it is not possible to experiment. But in almost all other areas, where you know what you want, you have some prior knowledge, and you have the infrastructure and the willingness to experiment, it's usually a better way of answering questions than taking a shot in the dark and relying entirely on intuition.
Adel Nehme: And this may not necessarily be an experimentation question, but how do you approach this as a leader? When there are situations where experimentation is not feasible, how do you maneuver these types of situations, and how do you make decisions effectively?
Amit Mondal: Partnership, taking inputs from experts across the company, that's table stakes. You want to make sure everybody understands your idea and plan so that everybody can provide their opinion. You may not have exact knowledge of what's going to happen, but there must have been similar situations that you can draw parallels from.
And the more people you involve in some of this ideation and decision making, the greater the likelihood you're going to get inputs that can help you avoid pitfalls. Certain areas, you know, you have to stay away from: ethical issues, compliance issues, legal issues. Those are not areas you want to experiment into.
They absolutely must be avoided, because you do not want to put the brand, the company, or the business model in danger of being adversely impacted by any of these issues.
Adel Nehme: There are additional components of experimentation here that we haven't touched upon yet, which relate to the elephant in the room: large language models and generative AI. Benn Stancil, the CTO of Mode, discussed on the DataFramed podcast how LLMs could drive a new era of experimentation
and customer feedback gathering. So I think, for example, using a chatbot to gain qualitative information about pricing: what do you think of this price versus that price? I'm sure everyone will prefer a lower price, but I'm using this as an example. What do you see as the real value of LLMs in the context of experimentation?
Amit Mondal: Again, I'll divide that question into two parts. One is, as companies adopt LLMs, I do think there is a need for an experimentation roadmap, because, as business people, we really don't know how customers are going to react to LLMs answering their questions, or to LLMs, instead of humans, taking over the ability to understand and really empathize with the customer's needs. So at American Express we have adopted multiple POCs involving LLMs, and we are experimenting into those new areas, where we take very measured steps in terms of adopting certain changes. We use LLMs to drive those changes, we see the customer's reaction, we see the impact on the objective that we had at hand, and then we decide whether it makes sense to adopt LLMs, or, yes, we have learned what we wanted to learn, but other ways of doing the same work are better, and then we move on.
The second question is a little bit more hypothetical. Today you might have two designs, and you want to expose them to human beings, let human beings respond to those designs, and see if one is better than the other. Could you instead replace the human beings with AIs with different types of personality, and see if that is feasible?
This is totally hypothetical. Perhaps someone will take this idea and run with it. But in effect, can you replace human beings with LLMs, and use the variation in how you set up those personas to assess the value of the change that you're driving?
It sounds like a very interesting idea, and I'll keep my ears and eyes open to how that evolves over time. This is a space that's evolving so fast that anything I say is probably already four weeks too late. So maybe the next time we have a conversation, we will have an answer to this question, but for now,
a lot of this is speculative. People are really still finding new ways and means of using LLMs and leveraging them for business outcomes. So, yeah, lots to learn in this space.
Adel Nehme: Maybe if we have the conversation next week, we'll have the answer, given how fast the space is evolving.
Amit Mondal: Absolutely.
Adel Nehme: Yeah. Maybe, as we close out our conversation: you mentioned LLMs and their potential use in experimentation.
If you look at trends that you're looking at in the experimentation field, what are trends that you observe that are exciting here that you think will be impactful in the field to come?
Amit Mondal: I mean, some things are already impacting the way we experiment. One is privacy. For example, in Europe, a lot of customers are no longer giving us the ability to look at what they're doing on our websites. How does that impact experimentation? The experimentation results that we get once we take out customers who no longer want to be tracked:
are they still relevant? Can those learnings be generalized? That's one set of things I'm looking into deeply, in terms of how that space evolves. Another change that is already coming, or is already here, I should say, is the death of the third-party cookie. Especially in the advertising ecosystem, the ability to track conversion has been going down over time.
This impacts the ability to simply see the result of a particular experiment. How do you then assess how different changes that you've made to your advertising or advertising channels impact customer response? What is the ROI of some of those changes? So the demise of the third-party cookie is making it harder to run experiments of that nature.
So those would be a couple of things that have immediate resonance with the area where I spend most of my time, which is digital experimentation. I'll see how these areas evolve and how they impact experimentation as we get into 2025.
Adel Nehme: , you're mentioning here the death of the third party cookie. I think this is on the mind of a lot of people. Maybe what are some initial thoughts on how folks should adapt here to be able to continue running experimentation, to be able to drive efficiency and their ad spend?
Amit Mondal: We have started leveraging marketing mix models as one way of measuring the impact of our media strategies. That's an obvious answer, but not an easy one, because it requires a lot of infrastructure to be set up: modeling capabilities, data science capabilities, and a clear understanding of ROI and customer lifetime value, just to measure the impact of media activities at the top of the funnel. There are also other cookieless activities and types of experiments that you can set up. Geo-based experimentation is one way of really understanding how different customer groups react to different inputs in the advertising ecosystem.
People-based marketing is another mechanism through which you can try to understand who is responding to your advertising campaigns. And you certainly need to keep working closely with the walled gardens, who have very good data within their ecosystems, and use that data to understand the impact of the different campaigns and advertising strategies that you're running.
So I'm hopeful that with these, and I'm sure many new things to come, maybe we'll not be able to replace the third-party cookie, and that is okay, because there were severe privacy implications to tracking customers across the web, but we will be able to get to better answers in terms of running marketing and advertising campaigns.
Adel Nehme: One final question, Amit, is what is one piece of advice that you have for other leaders looking to scale experimentation within their organization?
Amit Mondal: I think partnership. Nothing in a large company can be done without strong and robust partnerships. As you think about experimentation and everything we discussed today, it requires a village to come together. It requires close coordination and partnership across different teams, and the willingness and ability to trust each person on that team to be doing the right thing.
So I strongly recommend starting by making sure that the partnerships you have within the company are really strong; that will help you scale the impact of experimentation across the company.
Adel Nehme: I think this was a great discussion, Amit, I really appreciate you coming on the show. Any final closing notes before we wrap up today's episode?
Amit Mondal: First of all, thank you, Adel, and the DataFramed team for giving me this opportunity. It's been lovely talking to you and thinking about experimentation, a little bit removed from the day-to-day need to deliver results. It is a subject very close to my heart. I do think that we are in the golden age of experimentation, especially in the digital ecosystem.
Many things are going to change, but the ability and willingness of businesses to adopt experimentation, run experimentation, and deliver results is just going to keep improving over time. So I look forward to that evolving ecosystem, and best wishes to all your listeners.
Adel Nehme: Thank you so much, Amit, for coming on the podcast. Really appreciate it.
Amit Mondal: Thank you, Adel, and the DataFramed team. Have a great day.