How YPulse Built an AI Application for Market Research with Dan Coates, President at YPulse
With over 30 years of experience in marketing, media, and technology, Dan Coates is the President and co-founder of YPulse, the leading authority on Gen Z and Millennials. YPulse helps brands like Apple, Netflix, and Xbox understand and communicate with consumers aged 13–39, using data and insights from over 400,000 interviews conducted annually across seven countries. Prior to founding YPulse, Dan co-founded SurveyU, an online community and insights platform targeting youth, which merged with YPulse in 2009. He also led the introduction of Globalpark's SaaS platform into the North American market, until its acquisition by QuestBack in 2011. In addition, Dan has held senior roles at Polimetrix, SPSS, PlanetFeedback, and Burke, where he developed cutting-edge practices and products for online marketing insights and transitioned several ventures from early stages to high-value acquisitions.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
I've been running this company now for 17 years, and I used to weep over the data that we collected in any given year that wasn't consumed by enough customers to sort of make it worthwhile. Collecting data longitudinally is a real chore. And I used to think about it like perishable produce or fruit or vegetables, right? And now I don't look at it that way. Now I look at it like we've created a catalog or a music library.
I'm excited about the future of industry because the folks that are coming into it now will be AI-first thinkers, taking everything that we're doing and rethinking it.
Key Takeaways
Offering AI as an upgrade option can lead to a positive ROI, as demonstrated by the successful sales of AI upgrades that exceeded the initial investment within the first year.
Utilizing retrieval augmented generation helps prevent AI hallucinations by ensuring that the AI only generates responses based on existing, relevant data.
Regularly testing AI outputs with benchmark questions and involving editorial teams in the process can help maintain accuracy and build trust in AI-generated content.
Transcript
Richie Cotton: Hi Dan. Welcome to the show. Brilliant. So just to kick off, tell us: why did you decide you needed to build a chatbot for market research?
Dan Coates: We do an annual customer engagement survey, and when we ran it in late 2023 we found that a new problem was emerging for our customers. Corporate America has been downsized, and people are expected to do a lot more with a lot less. What our customers were telling us was: you are producing too much information, and we can't keep up with it. We ourselves did some time trials and found that it took 10 to 12 hours a week to keep up with all of our stuff. So we hit this point where we couldn't ask for any more time from our customers. We had to start saving them time.
Meanwhile, we wanted to continue to be prolific and to cover as many topics as we could within our content. That's when the idea arose that we could solve this problem with an AI chatbot. The previous use case was: in order to get the value from YPulse, you'd have to read everything that we produced. The new model is: find what's most relevant to you and just read that.
Richie Cotton: That's absolutely fascinating, that the problem is just that people don't have time to read all the content you're putting out, and that's why we need AI. I'd say this is something I struggle with as well; I have a lot of things to read as part of my job.
So do you have a sense of when your customers want to use AI to consume content?
Dan Coates: There's kind of two use cases now. The old one was, we were using breadcrumb-like techniques to lead people to the content that was relevant to them. Every day we produce a newsletter, and contained in that is a brief description of the daily article. The purpose of the newsletter is to get you to read the article. The purpose of the article was to get you more interested in reading the report. The purpose of the report was to get you to wanna dig into the data itself. So we'd always had that breadcrumb trail, but we found that some of our customers were telling us they were worried that they were missing things.
They knew that there was value in what we were creating; they just didn't wanna miss it. And so the old advice was: let the content wash over you. Take a look at the headlines in the daily newsletter, and if nothing there is interesting, ignore it. If the daily article is interesting to you, great, read it. If not, ignore it. So we have this: on the one hand, ignore the stuff that doesn't seem relevant to you based on the breadcrumbs we're building; on the other hand, come use an AI-powered tool to find what's most relevant to you.
Richie Cotton: That's interesting. I like the idea that you've gradually eased people into getting in depth into the content, so you can start off with just an overview and decide whether you like it or not. Okay. So I'm curious as to what constitutes success for this. How do you know whether the chatbot is working or not?
Dan Coates: One of the first things we noticed when we were in beta over the course of 2024, and we had the chatbot in a free open beta for about three months across two different beta opportunities, was that during those betas, engagement with our content went up, not down. So from the engagement stats, we feel like AI is another tool that leads people into our content, as opposed to driving people away. Other metrics we look at are sales metrics. I have to admit we made a healthy six-figure investment last year in AI, so one of the metrics was: could we sell a healthy six-figure amount of AI?
We introduced AI as an upgrade, so it was really rewarding when we were able to sell more than we'd spent in the first year. I'd say it was a pretty positive ROI. And all the usage reporting, the report downloads, everything that we do in Pendo indicated that this was getting people more interested in the traditional products.
Then there's internal feedback. We're the number one user of our own tool; our writers use it to find previous content they can refer to in today's article, for instance. That helps us understand whether the tool is working. And they're the harshest critics, by the way; my own internal team beats us up all the time in terms of what we're doing and how we're doing it.
And then the last point is just customer feedback. At multiple points along the way through the deployment, we would ask people questions. Post-beta, we would ask them what they liked, what they didn't like, whether they would pay for it, how much they would pay for it. So all of our decisions were grounded in their feedback.
For instance, they said that they would pay for an AI upgrade but thought a 20 percent increase was fair, so we landed on a 20% surcharge to have AI tacked on. That went so well last year that at this point new customers don't get the option, the "would you like fries with that?" Starting on January 1st, 2025, all new customers had AI just baked into their product, and we increased the price significantly. We made sure that there wasn't an opt-out on the AI, and we're feeling so bullish now that our current customers won't have the option as of January 1st, 2026. You know, we may have to leave a few customers behind if they really don't wanna have AI, but as of the first day of next year, it won't be an option.
Richie Cotton: That's a pretty radical shift, then, if this is going to all your customers. But I thought some of those statistics were quite interesting: a six-figure cost for this thing, but you got that money back in revenue after, what was it, less than two years since it's been released?
Dan Coates: We've been live now for about 10 months, so we broke even in under a year.
Richie Cotton: Oh wow, less than a year. That's pretty impressive. Okay, cool. And you said the main thing you're looking at is whether your customers are engaging with the AI? Like how much time they're spending with it?
Dan Coates: There's two parts to it. The behavioral side: everything that Pendo can observe, all the clicks, all the downloads. And then the other side is the attitudinal, which is: what do you think? What do you feel?
Richie Cotton: Yeah, obviously there's no point in people spending a lot of time with the bot if they don't actually like what they're doing. So I like that you're tracking some customer satisfaction metrics there as well.
Dan Coates: Interestingly, everybody that I talked to when we were going down this path said: are you out of your mind? You're gonna do a customer-facing AI implementation first? I guess it makes some sense, the idea that if you're gonna stink, stink off Broadway, right? But we really felt like, by engaging our customers and bringing them into the process through the beta periods we put into it, if we had a dog, we'd know about it pretty soon. Our customers are definitely vocal and express their opinions freely.
But the other thing is we wanted to show them that we were trying. We do a lot of studies on innovation amongst young people, and young people certainly feel like it's okay to fail; it's not okay to not try.
So we just wanted to try. We wanted to bring them along in the process, explain what we were doing along the way, and I think we were rewarded for that approach. It wasn't technology for technology's sake.
Richie Cotton: Yeah, certainly. I like that: you might as well try, even if you're gonna fail. On the other hand, it does seem quite risky, going straight from "we've not worked with AI before" to "this is our flagship product, going to all our customers." So I wanna know how you pulled this off.
Let's get into the weeds a bit on how you built this thing. To begin with, what sort of tools and technologies did you use as part of the AI?
Dan Coates: We kicked off pretty much a year ago, late February, with a vendor. But prior to that, we were inundated with all of these quote-unquote experts that said they could help us. And we had this standard response; we responded to all of them. We just said: can you give us an example of a production version of something that you've created?
That winnowed it down dramatically. But basically we managed to find one. I don't know if you know Professor Scott Galloway; he's an NYU Stern professor, pretty prolific. He created something called Prof G AI about 15 months ago, basically querying his body of content. He's prolific, we're prolific; that's the same thing that we wanted. So in about November of 2023, we tracked down the people that had created Prof G AI. That led us to Haystack, which was what they used, an open-source platform, and that in turn led us to deepset, the company behind it.
So in late February of 2024, we had our first kickoff meeting with the deepset team. deepset is kind of one of those intermediate layers. They create your vectorized database from your content. They allow you to have a modularized approach, so we could toggle between various LLMs at will, and that intermediary approach suited us well.
They had a lot of expertise, quite a few webinars actually. We watched a few webinars before we even engaged with them. They had a white paper on how to structure our team in order to successfully deploy. We were so naive, we just thought: these people seem to know what they're doing, 'cause they've done it a bunch of times. They could show us multiple production examples of what they'd created. So let's just follow their lead. And I have to admit we didn't try to figure it all out ourselves. Instead, on their side, there were a couple of engineers that were always involved, and some account-handling kind of people, but their core engineers got us up to speed really, really quickly. Within the deepset platform, we use GPT-4 Omni as our LLM, and now we're using Sonnet 3.5 for text-to-SQL.
The very first version of the product just looked into our repository of reports, the 4,000 reports and articles that we've produced. The second version now taps into 15 billion data points that we've collected: survey answers, responses from young people across seven countries. That's all sitting in Snowflake.
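To make that architecture concrete, here is a minimal sketch of a Haystack-style RAG pipeline of the kind deepset builds on, assuming Haystack 2.x and an OpenAI API key. The documents, prompt wording, and model name are illustrative assumptions, not YPulse's production configuration; the point is the modular pipeline, where the generator component can be swapped to toggle between LLMs.
```python
# Minimal Haystack 2.x RAG pipeline sketch. Illustrative only: the documents,
# prompt, and model choice are assumptions, not YPulse's production setup.
from haystack import Pipeline, Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

# Index a few reports (a production system would use a vector database).
store = InMemoryDocumentStore()
store.write_documents([
    Document(content="Gen Z report: short-form video drives brand discovery."),
    Document(content="Millennial report: streaming fatigue is rising among 25-39s."),
])

# The prompt grounds the LLM in retrieved chunks only.
template = """Answer using only the context below. If the context is empty or
irrelevant, say you cannot find anything related in the corpus.
Context:
{% for doc in documents %}- {{ doc.content }}
{% endfor %}
Question: {{ query }}"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store, top_k=5))
pipe.add_component("prompt", PromptBuilder(template=template))
# Because the pipeline is modular, this generator is swappable: toggling LLMs
# means replacing one component, not rebuilding the pipeline.
pipe.add_component("llm", OpenAIGenerator(model="gpt-4o"))  # needs OPENAI_API_KEY
pipe.connect("retriever.documents", "prompt.documents")
pipe.connect("prompt.prompt", "llm.prompt")

question = "How does Gen Z discover brands?"
result = pipe.run({"retriever": {"query": question}, "prompt": {"query": question}})
print(result["llm"]["replies"][0])
```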
Richie Cotton: Okay, that's very cool. And it sounds like, to begin with, you didn't have that many skills in house; you were looking to partner with someone who had built something like this before. Can you talk me through what you had internally? Where were you starting from? Did anyone know anything about AI at all?
Dan Coates: We have a team: a couple of developers, a product manager, and people who administer our Snowflake database. Basically, we all knew our data really well, we knew our customers really well, and we really didn't know much about AI at all. And of course the developer types thought, well, let me look into this. But we felt like we could spend a long time wandering around, even on the debate of which LLM to use.
In the overall scheme of things, it really doesn't matter which LLM, and we were really happy to be able to toggle between LLMs, because that's how we discovered it really didn't matter much which one. The important thing was to get started and to move quickly. We asked our customers what their pain points were; they told us, and we didn't wanna wait a year or two to show them something.
That was when I made the strategic decision: we're gonna get some help here, and we're not gonna argue with them; we're just gonna follow them until we get the first version of the product out in the world. Then, based on its success or failure, we'll either ditch those people or keep going with them.
So when we got really positive feedback from the first version, the one that was focused on our documents only, we knew that we could stick with deepset and get to the finish line on the second part.
Richie Cotton: I'm right there with you. There are so many different tools now; the AI tooling has just grown exponentially in the last couple of years, so it's very easy to get that paralysis by analysis and not be able to get anywhere. So I do like the simpler approach of: okay, we'll get some people who know what they're doing, build something, and see what happens.
And I'm sure if they're listening, they're breathing a sigh of relief.
Dan Coates: I mean, it's funny. If they're listening, let me take a shot across the bow, based on the fact that they built on Haystack, which is an open-source platform; deepset is the organization that created it. When we talked about it internally, we were like: listen, once we know what we're doing, we can always revert from the full-featured corporate offering back to the open-source version and do it ourselves. And listen, I would say that's always a future possibility. But what we've learned is that after each new iteration of the product, there's just so much more we can do, and so much faster we can move, that the idea of retrenching back to something that's fully internalized, as opposed to relying on an external partner, isn't really that appealing to us.
Richie Cotton: Yeah, I think there's definitely a trade-off between using free open-source tools and stitching things together yourself, cheaper software but more time spent, versus paying a partner who's done it before.
Dan Coates: I can only imagine the war for talent right now. We couldn't hire the people on the deepset side for any amount of money; they wouldn't come work for an organization like ours. You have to understand what your value proposition is, and I don't think we would be the kind of organization that could attract that talent.
Richie Cotton: That makes a lot of sense. Yeah, I think certainly the top-end AI engineers are making bank at the moment, so they're difficult to get hold of, for sure. And so how long did this process take? You said you started building this, what, a year ago, and it's been live 10 months. Did you build it in that time?
Dan Coates: So for the first version, we had a kickoff meeting at the end of February and we launched V1 on June 15th. That was about 120 days; actually, we launched on June 15th and had our first paying customer by day 120, which was a good sign. In those 120 days we had the kickoff meeting and we moved to prototyping really quickly. We just loaded a subset, not even all 4,000 documents, into their drag-and-drop vectorized database, and I was able to show board members. I went on tour with a number of significant clients to say: this is the direction we're heading. One of our clients is Apple, and it was really nice.
They said: listen, we're Apple, and we're never gonna use your AI. But what we would say, from our point of view, is: you're doing the right things, you're not doing the wrong things, so keep going in that direction, even though you can count on us wanting to use our own LLM. And by the way, that vectorized database you just built, it'd be great if you could deliver that to our doorstep.
So, all within 120 days: we had a kickoff meeting, we prototyped, we toured it with clients, got some feedback, and I was able to show my board what we were doing. Even internally, a lot of our team members were nervous about us doing this, so the prototype really helped me show them: you don't have anything to fear.
We felt like the prototype had about 70% accuracy, and being at 70% with absolutely no prompt engineering, no tweaking, no custom nodes built by the deepset team, just by dragging and dropping some documents into a bucket, seemed like a good start. We then went through the process of developing it, and what was really helpful through that process was that we had a set of benchmark questions, let's say 20 to 25, and we made our own editorial team answer them.
We had our editorial team say: how should an AI answer this? So we had this reference framework, and then, as we were going through the prompt engineering and the custom node development process, we'd see how much closer the product got to the benchmark answers we had.
So then we finally got through development, launched a beta in late May, and gave customers about a month to interact with it. We took their feedback through the beta process and re-engineered the product. So that V1 was 120 days. Then, you have to understand, deepset's based in Berlin.
Over July and August, one of the deepset team members had a baby, so we really didn't get onto version two until late August. At that point it was again another 120 days: we connected everything to the Snowflake database. What happens is, the customer asks a question; the system runs off and gets an answer from our repository of documents, runs off and gets an answer from our Snowflake database, and then combines those two.
Again, within 120 days we were in beta mode, and we decided to stay in beta mode for a long time; we spent a month and a half, to the end of the year. At this point we've incorporated all that feedback and we're doing a slight UX touch-up. We feel like we could do some interesting things from an AI design perspective. We learned a few things, like that people kinda like to see what processes are going on. They feel a little bit nervous; the more black-box it is, the harder it is for them to trust the results. So even giving status updates on what we're doing, through the course of the minute or so it takes for the AI to answer, gives them that comfort of: okay, I can see the process. It makes it a little less black-box.
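A rough sketch of the fan-out-and-combine flow described here: the same question goes to a document RAG pipeline and to a text-to-SQL query over a data warehouse, and the two partial answers are synthesized. All three helper functions are hypothetical stand-ins, not deepset or YPulse APIs.
```python
# Sketch of the fan-out-and-combine flow: one question, two retrieval paths
# (document RAG and text-to-SQL over a warehouse), one synthesized answer.
# All helper names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def answer_from_documents(question: str) -> str:
    """Hypothetical: run the document RAG pipeline (see earlier sketch)."""
    return "Docs say: Gen Z renters favor temporary decor."

def answer_from_warehouse(question: str) -> str:
    """Hypothetical: have an LLM write SQL against the warehouse, run it, verbalize it."""
    return "Survey data: a majority of 18-24s say they rent."

def synthesize(question: str, doc_answer: str, data_answer: str) -> str:
    """Hypothetical: one more LLM call that merges the two partial answers."""
    return f"{doc_answer} {data_answer}"

def answer(question: str) -> str:
    # Fan out: the two retrievals are independent, so they can run in parallel.
    with ThreadPoolExecutor(max_workers=2) as pool:
        docs = pool.submit(answer_from_documents, question)
        data = pool.submit(answer_from_warehouse, question)
        return synthesize(question, docs.result(), data.result())

print(answer("What do young renters want from home decor?"))
```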
Richie Cotton: Yeah, that does certainly seem to be an important thing: trying to find ways to gain trust with your users. A lot of people are wary when it's just "the AI gave an answer, not sure why this is the correct answer." And I suppose, since a lot of your work seems to be retrieving facts, there's a real danger around hallucinations here.
So talk me through: how do you make sure it doesn't hallucinate?
Dan Coates: Retrieval augmented generation, generally speaking, should be straightforward: go check the repository, then feed that through an LLM to synthesize and summarize. It's a series of chunks from various reports that get fed, on a zero-day-retention basis, through an LLM. If it can't find anything relevant within our results, it doesn't give the LLM a chance to make something up.
It basically returns: sorry, we can't find anything related to your question in our corpus. So retrieval augmented generation really helps, in that if there's nothing there to work with, it says so instead of pretending and carrying on. In the early days with Sonnet, we had a couple of times in the text-to-SQL where it would hallucinate a little bit on the SQL queries, but we managed to figure out where and how that was happening and stamp it out. Again, retrieval augmented generation really helps, 'cause there's a fixed corpus we're drawing on.
Basically, nearly 90% of the time, whatever comes back from the LLM matches identically what our corpus said. We find that that extra 10% is interesting. We haven't stamped it out completely, because sometimes the LLM will make suggestions. One of the great use cases is: can you suggest some campaign ideas? Based on the information that you've just seen, can you make some campaign recommendations? And it'll suggest different ways to engage young people. You can also ask: write a creative brief for my ad agency. There are no creative briefs in our corpus; it's pulling that from the LLM, what a creative brief is, what needs to be in one. So we've left that last 10% in, to open up some workflow and productivity enhancements: taking our information and creating new documents out of it.
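A minimal sketch of the refusal behavior Dan describes: if retrieval finds nothing relevant, the LLM is never called, so it has no chance to make something up. The retrieve/generate helpers and the score threshold are illustrative assumptions.
```python
# Sketch of the "say so instead of pretending" guard: with no relevant chunks,
# skip the LLM entirely and refuse. The helpers and the 0.35 threshold are
# illustrative assumptions, not a real vendor API.

NO_ANSWER = "Sorry, we can't find anything related to your question in our corpus."
MIN_SCORE = 0.35  # tune against benchmark questions

def grounded_answer(question: str, retrieve, generate) -> str:
    hits = retrieve(question)  # -> list of (chunk_text, relevance_score)
    relevant = [chunk for chunk, score in hits if score >= MIN_SCORE]
    if not relevant:
        return NO_ANSWER  # fixed corpus, nothing to work with: refuse
    context = "\n".join(relevant)
    return generate(f"Answer only from this context:\n{context}\n\nQ: {question}")

# Toy usage with stub components standing in for the real retriever and LLM:
fake_retrieve = lambda q: [("Gen Z prefers short-form video.", 0.8)] if "video" in q else []
fake_generate = lambda prompt: "Short-form video dominates."
print(grounded_answer("What video formats work?", fake_retrieve, fake_generate))
print(grounded_answer("What about quantum computing?", fake_retrieve, fake_generate))
```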
Richie Cotton: That's good, that the retrieval augmented generation is working as it should, and that on top of the fact-based retrieval you're letting some useful creativity come through.
Dan Coates: We did have one issue, a debate we had internally: recency versus relevance, right? With youth information, something that's 10 years old probably isn't quite as good as something from yesterday. We found that with retrieval augmented generation, every once in a while we'd have this recency-versus-relevance tug of war, where something older would score as more relevant than something newer. So we were really trying to get the retrieval augmented generation model to favor recency over pure relevance.
Richie Cotton: Yeah, I can certainly see how youth information dates pretty quickly. Okay, so what's the solution to that? Is it that you're prioritizing reports that are more recent over something that might have a higher match score? Is that the case?
Dan Coates: Basically, retrieval augmented generation drives off of the match score: how relevant is this to the query? And we're just training it that years matter, that something from last year is more relevant than something from 10 years ago. And every once in a while, it's funny, because the underlying LLMs, you'd think they're static, but they're not. In fact, you might still be on the same version of GPT-4 Omni, and all of a sudden they change something back at the ranch at OpenAI, and you're getting a different result, which is where those benchmark questions have really come in handy.
In fact, every Monday morning we have a staff member re-ask those 25 questions, just to see what the results are and whether they're changing at all, and if so, how. I'll give you an example. When we first got started, we'd get answers back from the LLM that were kind of unstructured, so we asked for more bullet points, right?
We wanted things to be punchier, with more takeaways. And then at some point through the first version, it seemed like the LLM decided that it liked bullet points as well, so we were getting bullet points in our bullet points. We had to back the bullet-point request out of the prompt engineering, because they'd somehow figured that would be a good thing to do on a standard basis.
So we find that the prompt engineering never ends. You're constantly tweaking it, checking the benchmark questions, seeing what answers it produces, seeing how it's tilting in one direction versus another.
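Going back to the recency-versus-relevance fix for a moment, one common way to implement it is to decay each retrieved chunk's match score by the age of its source report. This is a sketch under that assumption; the half-life and data structures are illustrative, not YPulse's actual tuning.
```python
# Sketch of recency-weighted retrieval: decay each chunk's similarity score by
# the age of its source report. Half-life and Hit structure are illustrative.
from dataclasses import dataclass
from datetime import date

HALF_LIFE_YEARS = 2.0  # a 2-year-old report scores half as strongly

@dataclass
class Hit:
    text: str
    similarity: float   # raw match score from the vector index, 0..1
    published: date

def recency_weighted(hits: list[Hit], today: date) -> list[Hit]:
    def score(h: Hit) -> float:
        age_years = (today - h.published).days / 365.25
        decay = 0.5 ** (age_years / HALF_LIFE_YEARS)
        return h.similarity * decay
    return sorted(hits, key=score, reverse=True)

hits = [
    Hit("2015: teens love Vine", 0.92, date(2015, 6, 1)),
    Hit("2024: teens love short-form video", 0.85, date(2024, 6, 1)),
]
# The older chunk matches slightly better, but decay pushes the newer one up.
for h in recency_weighted(hits, date(2025, 1, 1)):
    print(h.text)
```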
Richie Cotton: I think we've all seen that PowerPoint presentation where someone's got bullet points nested in bullet points, nested in bullet points, forever on the slide. But yeah, that's interesting about your testing process. If I've understood correctly, you've got a fixed set of questions, but when you run them, a human has to look at the answers and judge whether or not they're good.
So there is still a human in the loop during the testing process.
Dan Coates: Indeed, and we've even started archiving these things, so we can see how the answers have changed over time. They're in no way static.
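A sketch of what that weekly routine could look like in code: re-ask a fixed set of benchmark questions, archive the answers, and flag drift against the previous run for human review. The file layout and the ask() helper are illustrative assumptions.
```python
# Sketch of a weekly benchmark routine: re-ask fixed questions, archive answers,
# and flag drift against the previous run. File layout and ask() are assumptions.
import difflib
import json
from datetime import date
from pathlib import Path

ARCHIVE = Path("benchmark_runs")

def run_benchmarks(questions: list[str], ask) -> None:
    ARCHIVE.mkdir(exist_ok=True)
    answers = {q: ask(q) for q in questions}
    runs = sorted(ARCHIVE.glob("*.json"))
    if runs:
        previous = json.loads(runs[-1].read_text())
        for q, new in answers.items():
            old = previous.get(q, "")
            ratio = difflib.SequenceMatcher(None, old, new).ratio()
            if ratio < 0.9:  # the answer drifted: route to human review
                print(f"DRIFT ({ratio:.2f}): {q!r}")
    # Archive this run so next week's run has something to diff against.
    (ARCHIVE / f"{date.today()}.json").write_text(json.dumps(answers, indent=2))

# Toy usage, with a stub standing in for the real chatbot:
run_benchmarks(["How does Gen Z discover brands?"], ask=lambda q: "Mostly TikTok.")
```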
Richie Cotton: Is this something that you think you'll automate eventually, or do you think you always want a human checking this out?
Dan Coates: I think what I'd like to do long term is move this into the hands of our editorial team, so that the human in the loop is the creator of the content, as opposed to a product team or a technical team. The reason why is that we've found our editorial team has been a little skittish around the AI; they're not so sure whether it's a good thing or a bad thing for them long term.
And I understand their hesitancy completely, but I think giving them the reins of the beast and letting them ride it around would be smarter, because they'd feel a little bit more in control.
Richie Cotton: Yeah, I can certainly see how editorial people might feel like they're competing with the AI in some sense, so there's a conflict there. How do you manage that as a manager? What happens when your employees are freaked out by the AI?
Dan Coates: At first, of course, we said what everybody else says, which is: this isn't going to replace you; it allows us to just do more. And I think we've been true to our word on that front; nobody's been let go. At a core level, AI, and particularly the retrieval augmented generation versions, is good at repurposing an original idea, but it's not really good at the original idea itself.
So the thing we told our editorial team is: we need you to keep doing what you do, which is coming up with the original idea. The AI allows that original idea to go further.
Richie Cotton: Yeah. Okay. That's actually one of the common things I've heard from guests on the show. When you ask, well, what's happening, is AI gonna take those jobs, the standard answer is no, it's making people more productive. Though I can see how, in some cases, it is gonna take people's jobs. So I like the idea of saying humans should be focused on the creative side of things and then using the AI for repurposing stuff; that works pretty well.
So I'm curious: one of the big advantages of AI is you can personalize content, you can make the responses different for different audiences. Do you have to do that with your AI, or are market researchers a homogeneous crowd, a single persona that you have to deal with?
Dan Coates: We have multiple personas for sure, but we've aligned our products around those personas. So we have three SKUs, as it were. One is Daily Intelligence, and that's meant for the casual user who just wants to keep up. We find that a lot of executives just wanna not sound stupid and stay culturally relevant, or "with it." That tool is really meant for people at the top of their organization who manage processes that involve younger people and don't wanna appear to be lame. It's a lightweight daily article that talks about young people in a certain way, informed of course by all of our data and reports, but in and of itself it's just an article. So that's one persona.
The second persona is marketers. Marketers don't wanna dig into the data, they don't wanna slice and dice things; they just want the big idea so they can go off and implement it. That aligns with our reports: a little more than an article, but not nearly something you'd want to dig into from a data perspective. And then the last product we have is Pro, which gives you not only the data dashboards that we deliver via Tableau but also the ability to go in and manipulate the data.
You know: I don't care about 13 to 39, I really care about 18 to 24, and, by the way, 18-to-24-year-old females. So they're digging in. In implementing AI, basically what we thought was: okay, we'll have the AI for Daily Intelligence just use the articles as the input, the AI for the middle tier just use the reports, and then AI Pro additionally has access to the data.
That's worked out pretty well, in that our products align with our personas. But trust me, a bunch of clever folks in the marketing department have decided to cross the streams, which we've resisted to date. But why shouldn't executives have access to the core data through AI?
It's really fascinating. Again, we were curious as to whether deploying AI would mean people would read the reports less and dig into the data less, but it turned out to be just the opposite. If you give them a really fast path to answering a question, then of course they're gonna wanna fact-check and dig a little deeper, and then go in and slice and dice the data.
So far it's working well: our personas align with our product SKUs, and our product SKUs limit what's being fed into the AI. We've resisted the temptation to cross the streams, and I think I'm gonna stay there a little bit longer, strategically, before I let executives tap into the data.
Richie Cotton: So I'm curious as to whether the existence of this AI has changed how you create content. Do you create these reports now knowing that they're gonna be fed into AI? Does that change how you make them, or their structure?
Dan Coates: So far, again in order to keep the sensitivities of the editorial team in check, we've used the same process to create the original content. But to the editorial team's credit, once it's created, they're doing some really interesting things beyond that. One of the things that really vexed them, or seemed kind of low value, was sorting out the SEO keywords for every article and every report.
We're using AI to do that now, which they love; it takes the drudge out of it. I love the quote: I didn't want AI to write poetry and music for me so that I could do the laundry; I want AI to do the laundry so that I can write music and poetry. So, what can we do to take the drudge work out? Developing the SEO keywords for each article was drudge work, and AI is doing a great job of it now. We also do mass repurposing for variations. You write an article; now gimme that article as a LinkedIn post.
Gimme that article as a listicle that we can put in a carousel on LinkedIn, gimme that article as a series of 20-second videos that we can put out into the world. That's been really helped by AI. So far we've kept this notion of: the original content, the original creation process, we're not gonna touch that; then everything that follows thereafter, we're gonna see how AI can help.
Certainly AI has been embraced by our marketing team. They used to have to beg somebody on the editorial team to repurpose something for marketing purposes; they're like kids in a candy store nowadays, using AI to take and repurpose stuff for marketing reasons. So the editorial team doesn't have to worry about marketing content anymore; marketing just goes to the source material and develops it from there.
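A sketch of what that mass repurposing can look like: one finished article fanned out into derivative formats through per-format prompts. The format list and the complete() helper are illustrative assumptions, not YPulse's pipeline.
```python
# Sketch of mass repurposing: one finished article fanned out into derivative
# formats via per-format prompts. FORMATS and complete() are illustrative.

FORMATS = {
    "linkedin_post": "Rewrite this article as a punchy LinkedIn post under 150 words.",
    "listicle": "Turn this article into a 5-item listicle for a LinkedIn carousel.",
    "video_script": "Write three 20-second video scripts based on this article.",
    "seo_keywords": "Extract 10 SEO keywords from this article, comma-separated.",
}

def repurpose(article: str, complete) -> dict[str, str]:
    """complete(prompt) is a stand-in for any LLM call; the article is the source."""
    return {name: complete(f"{instruction}\n\n{article}")
            for name, instruction in FORMATS.items()}

# Toy usage with a stub LLM that just echoes the instruction line:
variants = repurpose("Gen Z treats thrifting as a core hobby...",
                     complete=lambda prompt: prompt.splitlines()[0])
print(variants["seo_keywords"])
```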
Richie Cotton: I do like the idea of using the AI to create all the secondary content, the social posts, the marketing copy, all that kind of stuff, because it's not super fun work, taking an existing report and, okay, condensing it into a LinkedIn post. I mean, I'm sure there are people who enjoy doing that, but it just seems like AI can help you a lot with it.
Dan Coates: Say it a third way, say it a fourth way, give it to me a fifth way. I mean, I'd imagine, in the world of copywriting and editing, these poor guys: Don Draper sitting at a typewriter coming up with yet another tagline, right?
Richie Cotton: Alright, so, having built this thing, are there any things you wish you'd done differently during the process?
Dan Coates: I've been running this company now for 17 years, and I used to weep over the data that we collected in any given year that wasn't consumed by enough customers to make it worthwhile. Collecting data longitudinally is a real chore. And I used to think about it like perishable produce, fruit or vegetables, right?
Basically, once I'd asked those questions that year, if there wasn't a customer at the receiving end, it was all for naught; it was gone. And now I don't look at it that way. Now I look at it like we've created a catalog or a music library. The data we collected five years ago, 10 years ago, is valuable, if for no other reason than to create these trend lines of which direction things are moving. And to be honest, we could have done it in the past without AI, but I would've had to sit some data analyst in a room and make 'em sit there for five weeks, combing through past data reports, taking the numbers for this question year by year to build the trend, and I like our data analysts too much to do that to them. Now we've got AI, which is perfectly happy to comb through historical data at record rates of speed and give us that trend line over time. It's really valuable seeing the directionality of things. Yesterday's information is great; yesterday's information as the latest data point in 10 years of data is even better.
So if I had a time machine, I'd go back and say: listen, don't worry about that. Just collect as much data as you can; you'll thank yourself for it later. And even now we find ourselves accelerating the types of data that we're collecting, so that 10 years from now we'll have something really interesting to show people.
Richie Cotton: That's cool, I like that: the use of AI has now unlocked new insights, because it makes the data analysis so much easier. Particularly the stuff that you thought maybe wasn't so exciting, the older data, the stuff about people who are no longer teens, they're now in their thirties, and how things have changed. So yeah, I like that it's unlocking new use cases.
Have you got a favorite insight from the AI, then?
Dan Coates: So one of our customers is a paint company, and I was giving them a demo, right? I still, even as president of the company, love to give our clients demos of the AI and chat with them about the results. We were asking the AI about trends in home decorating, especially amongst Gen Z, the younger half of the age range we deal with.
Renters have certain needs for temporary design that homeowners don't. Paint is a tough thing for a renter, because if you paint, there's a chance you have to repaint, or that you're gonna get charged money because you painted something. And again, this is where the LLM kicked in: it was talking about removable wall decals, where, if you think about those Fathead sports ones, things like that, you could dress up a room without applying anything permanent, any paint. It even talked about temporary flooring, and you could just see the lights going on at the other end with the client.
It's like: wait, I totally get it. We think about paint as being something that everybody wants to apply, but a renter may not wanna do that. A renter does wanna make their home look better, more individualized, more personalized. So what can we do? Paint that you can put on and then immediately take off, or at some point apply some other non-toxic chemical to it that removes the paint you used to make that space your own.
Richie Cotton: Fantastic. They just unlocked an entirely new business idea from having a little conversation with the AI and seeing what the market wanted. Okay, fantastic stuff. Cool. Alright, so tell me: is there anything else you're particularly excited about in the world of market research?
Dan Coates: As we move from the "what" to the "so what" to the "now what," it's really exciting for me to be able to tell clients: here's what you should do next. You should come up with a series of removable wall decals; you should come up with a flooring concept that you can put down and that will stay in place until you unlock it.
Giving clients future directions and business opportunities changes the seat we occupy in their organizations: instead of being the folks out there answering questions, we're something you wanna have in your hip pocket as you're designing the future of your organization. That's pretty helpful. And in our industry, you know, the big fish eat the little ones.
So, YPulse is the fifth company that I've created. What I'm super excited about is how AI is unlocking all this latent value in something that I could sell to a bigger competitor. We're small and nimble enough that we've made these steps and moves with the scarce resources we have.
Much larger organizations than us haven't even gotten started on this front. So I could see us selling our organization into a bigger organization that has even more data than we have and needs these techniques even more. I'm following in the footsteps of some really great entrepreneurs that have created business value and then harvested it.
A guy named Patrick Comer sold his company Lucid to another company in our space called Cint for a billion dollars. Or Amy Pressman and Borge Hald, who sold their company Medallia; well, first they took it public, and then they sold it to Thoma Bravo for $6.5 billion. Or, I don't know if you've heard of Ryan Smith of Qualtrics.
He sold Qualtrics to SAP for $8 billion, but then they spun it out and took it public at a $21 billion valuation. So data and content are certainly valuable, and these are companies that have unlocked tremendous value without the benefit of AI. Right now, in some dorm room at MIT or Harvard or Princeton, or some other university, maybe Oberlin College, there are some young people that are gonna use these tools on everything that we've created and unlock even more value. So I'm kind of excited about the future of the industry, because the folks that are coming into it now will be AI-first thinkers, taking everything that we're doing and rethinking it.
Richie Cotton: That's very cool. I like the idea of having these business insights at scale, so it's not just a paint company you're helping; every single customer is getting these new business insights. And yeah, you're talking billion-dollar companies here if you can help all these people at scale.
So, that's pretty exciting. Just to wrap up, I'm always interested in follow recommendations. Is there anyone whose insights you're particularly keen on at the moment?
Dan Coates: So I thought about this from a few different angles. Let me recommend a book: a guy named Jack Skeels wrote a book called Unmanaged, about not over-managing smart folks. That's been really helpful.
Then Kyle Poyar, from Growth Unhinged, has really helped in terms of account-based marketing. We've got a great product; what we really need to do is make sure that as many people as possible get a chance to think about whether it could help them with the challenges they're facing today. So Kyle Poyar is really helpful.
And then the last thing that I feel really strongly about is, especially as a CEO, you need a community; you can't just talk to your staff. It's really helpful to have a community of like-minded people who are facing similar issues, so you can talk to them about how they're dealing with it.
So I belong to an organization, led by Ryan, who was the founder of Contact. Now there are about 400 of us SaaS CEOs who meet at various levels of ACV: greater than $50,000 a year, and so on. Me and the other folks with greater-than-$50,000-a-year ACVs get in a corner and talk about what we're seeing.
And I can't tell you how many problems I've solved just by talking to somebody who has a similar business to mine, about how they faced issues maybe a year or two ago, and how I might leverage their experiences for my benefit.
Richie Cotton: Unmanaged sounds like a good one to pass along to your boss, whoever you are.
Dan Coates: The idiot, right.
Richie Cotton: Nice. And yeah, the other thing, the group for SaaS CEOs: in general, whatever your job is, find people who are doing something similar to you, have a work therapy session, gripe about whatever, and solve some problems together. It's a great idea. I love it. Alright, thank you so much for your time, Dan.
Dan Coates: Okay, great. Well, thank you.