Perplexity & the Future of AI with Denis Yarats, Co-Founder and CTO at Perplexity AI
Denis Yarats is the Co-Founder and Chief Technology Officer of Perplexity AI. He previously worked at Facebook as an AI Research Scientist. Denis Yarats attended New York University. His previous research interests broadly involved Reinforcement Learning, Deep Learning, NLP, robotics and investigating ways of semi-supervising Hierarchical Reinforcement Learning using natural language.
Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.
Key Quotes
The Google search engine is the most sophisticated system humanity has ever built. It's just insane. But on the other hand, I think there are certain things that can be done better. Specifically, you can save a lot of time when you don't have to sift through 10 links and do a lot of manual work yourself. If you just have a question, you just want to get an answer. You can imagine that in the future this is going to evolve into much more sophisticated workflows and pipelines, where you can get those answers and execute on tasks rather than just simple questions, and the search is going to do the work for you.
I'm a big proponent of open source. I think it has a lot of benefits, and I can see why Meta is so eager about this; many people don't realize it. Setting the standard and having the one architecture that everybody's using is very massive. Some of the other arguments: the community is just so much larger than any company, right? And people are so excited about it, so if they can figure out a trick or discover some bugs in the model, I think that's super valuable to the original creator of those models. In terms of how all of this is going to play out, I feel like right now we're still at a very, very early stage, where people know what the difference is between GPT-4 and Claude or Llama. So that's why we focus a lot on designing a product where people don't even need to know what kind of model you use, as long as it gets the job done, right? That's why there is so much opportunity to leverage different models, both open source and closed source, and create a system where they can work in symbiosis together, very efficiently and very quickly.
Key Takeaways
Prioritize solving a few core problems with high quality and accuracy rather than spreading resources thin across many areas.
Anticipate the evolution of AI into more complex workflows and tasks, such as booking flights or providing real-time educational support, and start preparing your infrastructure accordingly.
Embrace open-source AI models to benefit from community-driven innovations and improvements, which can significantly enhance your product's capabilities and robustness.
Transcript
Adel Nehme (00:36):
Hello everyone. I'm Adel, Data Evangelist and Educator at DataCamp. And if you're new here, DataFramed is a weekly podcast in which we explore how individuals and organizations can succeed with data and AI. Arguably, one of the verticals that is at once most ripe for disruption by AI and hardest to disrupt is search. We've seen many attempts at re-imagining search using AI, and many are trying to usurp Google from its throne as the top search engine on the planet. But I think no one is laying the case better for AI-assisted search than Perplexity AI. Perplexity doesn't need an introduction. It is an AI-powered search engine that lets you get the information you need as fast as possible. This is why I am so excited to be speaking with Denis Yarats. Denis is the co-founder and CTO at Perplexity. In his previous roles, he was a research scientist at Facebook AI Research and a machine learning engineer at Quora. Throughout the episode, we spoke about the process of building Perplexity from the ground up, how Perplexity ensures veracity and citations for all its search results, how he sees the future of search evolving with AI, his views on how LLMs will evolve in the years to come, and a lot more. If you enjoyed this episode and the DataFramed podcast, make sure to rate it wherever you get your podcasts. And now, on to today's episode.
Adel Nehme (02:04):
Denis Yarats, it's great to have you on the show.
Denis Yarats (02:06):
Hey, thanks for ...
Adel Nehme (02:10):
Awesome. So you are the Chief Technology Officer at Perplexity AI, one of the fastest-growing AI startups today. So something I want to know about here: I think Perplexity was first launched in August 2022, if I'm not mistaken. How has it been being on this rocket ship over the past couple of years?
Denis Yarats (02:27):
It's been definitely very exciting. I wouldn't say it's work, it's more like a lifestyle. It takes a lot of time, that's why I say it's like a lifestyle, but it's very exciting. Every day you wake up and there's something new. Super cool. I haven't had this experience before, so I'm very excited about it.
Adel Nehme (02:46):
Yeah, it must be very interesting being at the center of a really big moment in the technology space, and I think we can jump right into that. I want to first talk about what makes Perplexity different from any of the other platforms we see today. What I really appreciate about using Perplexity, and I'm a Perplexity user myself, is that you can see the thoughtfulness put into making sure that any answer that is provided is truthful. There are citations, it's not biased. So maybe, as someone who's overseeing the technology side of things at Perplexity, what goes into making sure every answer I get when I type in a query on Perplexity is as truthful and as relevant as possible?
Denis Yarats (03:25):
So Aravind and I come from an academic background, so citing sources when we write papers is a very natural thing for us. And when we started, we very quickly realized that in this LLM world this was going to be a very essential component, because, especially in the times we're living in right now, there's a lot of misinformation and it's very hard to trust what you read. So that's why this is probably one of the most fundamental aspects of our product and something we focus a lot on. In terms of technology, it's definitely a very hard problem to solve. I think it requires a lot of coordination across multiple teams, multiple technologies, and such. But more than anything, I think it has to be a mindset of the entire company. If there are only a few things we can focus on, this is going to be the very first thing we focus on.
(04:17):
Speed and accuracy, I think, are essential. And from the get-go we organized ourselves around that. We decided that we are not going to spread ourselves too thin; we're just going to focus on a few things, but we'll try to do them as best as we can. It turns out, as we learned more about this problem, that it is very challenging and there's a lot to do in terms of technology. It's basically a combination of having very good models, having a very good ranking system and search engine, and then making sure the information you retrieve from web pages is handled carefully: if there are multiple sources, they may say different things, so you have to figure out who contributed what. If it's hard to resolve, you can present several opinions; you don't want to be biased, and you definitely want to make sure that you cover all the ground. Then you essentially need models that can not only generate the answer but also self-verify, see whether they made a mistake or not, and if they made a mistake, see how things can be improved. The most important part is to establish a data flywheel, learn from those mistakes, and keep getting better.
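To make that loop concrete, here is a minimal, hypothetical Python sketch of the retrieve, generate-with-citations, and self-verify steps Denis describes. All function bodies, URLs, and data are illustrative stand-ins, not Perplexity's actual system; a real pipeline would regenerate or flag answers that fail verification and log the failures for the data flywheel.

```python
from dataclasses import dataclass

@dataclass
class Source:
    id: int
    url: str
    snippet: str

def retrieve(query: str) -> list[Source]:
    # Stand-in for a real ranking system / search engine.
    return [
        Source(1, "https://example.com/a", "Perplexity launched in 2022."),
        Source(2, "https://example.com/b", "It is an AI-powered answer engine."),
    ]

def generate_answer(query: str, sources: list[Source]) -> str:
    # Stand-in for an LLM call instructed to cite sources as [n].
    return "Perplexity launched in 2022 [1]. It is an AI-powered answer engine [2]."

def self_verify(answer: str, sources: list[Source]) -> list[str]:
    # Naive check: every sentence must carry at least one known citation tag.
    tags = {f"[{s.id}]" for s in sources}
    problems = []
    for sentence in (s.strip() for s in answer.split(".") if s.strip()):
        if not any(tag in sentence for tag in tags):
            problems.append(f"Uncited claim: {sentence!r}")
    return problems

query = "What is Perplexity?"
sources = retrieve(query)
answer = generate_answer(query, sources)
# Any issues found here would feed the regeneration step and the data flywheel.
print(self_verify(answer, sources) or "All claims cited.")
```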
Adel Nehme (05:29):
And you alluded to this earlier when you were talking about the principles that you wanted to focus on when building Perplexity, and I'd like to dive into that a bit more. Since the advent of ChatGPT, which really mainstreamed chatbots and AI-assisted search engines in a lot of ways, there have been quite a few players in this space, but I think not a lot of them have done as well as you have in building that flywheel, getting so many users, and putting yourselves on the map as a main player. When you think about the Perplexity culture and the things that you focused on, what are the main principles you've been following to make sure you're building a differentiated product that works, versus the rest of the space?
Denis Yarats (06:08):
I think there are several key factors in terms of the product itself. We wanted to make sure from the beginning that it's very simple, very easy to use, very intuitive. That's also why we realized early on that a plain chat interface is not what we wanted to have, so we spent a bunch of time thinking about what it needed to be, and from that point we decided this was going to be one of the differentiating factors. The other one was simplicity and quality. Search in general is AI-complete, and Google is obviously by far the best search engine so far, but there are still certain things that can be done better. And because this problem is so monumental, we just decided, okay, let's try to solve this small piece, but do it as well as possible, and see how long it takes and where it takes us.
(07:04):
And yeah, it turns out it's actually a much harder problem than we initially anticipated. We're still preserving that mindset: let's do the simplest things, but do them right and at high quality, and keep doing that, instead of going after many things at the same time and doing them not as well. Because of that, we decided what kind of team we needed to assemble. We definitely needed people who care deeply about those things: about very good infrastructure, moving fast, building things with high quality, and building things that work fast. Essentially, Google taught everybody that you have to get instant answers; you cannot wait. And especially now that we've launched into enterprise, if you want to save people time, we not only have to provide very high-quality answers but also do it very fast. Doing those things together is very challenging. So essentially, doing small things but going very deep and at very high quality is what we are trying to do, and I think we're going to continue to do that.
Adel Nehme (08:14):
And you mentioned here doing small things but doing them quickly and at high quality. I think that culture of fast iteration and operational excellence at Perplexity comes through even when looking at Perplexity as an outsider myself, seeing the velocity of features being added and the velocity of new releases and launches. You mentioned enterprise. Maybe walk us through that culture of excellence a bit more in depth: what can you attribute as the key success factor behind this culture of excellence when you're building out a team and hiring? Walk me through that as a leader of the organization.
Denis Yarats (08:44):
I think it was very important from the beginning to get a very strong foundational core team. So Aravind and I started, and we were so happy to have Johnny, whom I used to work with before. He's this legendary coder, and he was a world champion, so this guy just does everything very fast and at very high quality. And we decided, okay, let's try to hire slowly but focus on very good people. From the beginning, until maybe we had hired 10 or 15 people, we actually had these trial periods, so it wasn't a normal interview. We would invite somebody and they would work with us for a week, or sometimes even longer. It's one thing to do interviews, where you can miss certain things, but when you work with a person for several days, it's very clear. And even though it was time-consuming, I think it turned out to be a great idea, because we were able to get those first 10 people who are very trustworthy, aligned on the mission, very fast, and very strong.
(09:47):
And the reasoning behind that was: each of those 10 people is going to bring 10 more people, so you need a solid foundation. That has basically paid off, because when new people come now, it's already ingrained in our culture that we have to move fast. If you can do something today, you have to do it today rather than tomorrow or next week, and keep this insane pace. Momentum is one of the things I now spend a lot of time maintaining as we get bigger: how do we not lose this momentum? Because it's natural that if you stop pushing, things slow down; you have to keep pushing. In my mind, the most successful companies are those that were able to delay this loss of velocity as long as possible, because eventually it's going to happen: once you become a super big company, organizational things just become very hard. But yeah, we're trying to push it as far as possible.
Adel Nehme (10:51):
Yeah, it's really interesting, and I want to expand here, because I definitely agree with you wholeheartedly: having a relentless pace is absolutely essential when it comes to building a company as successful as Perplexity. You mentioned bringing people in for the first week and having a trial period, right? What is the evaluation of success after a week? Is it time to value in driving iteration on the product within the first week? What do you look for in that first week?
Denis Yarats (11:15):
A few things we paid attention to: first of all, alignment on the mission. Then, can these people figure out ambiguity? You just give them a high-level goal or task and you see if they can execute on it. But honestly, it's surprisingly easy: when you work with somebody, you can literally sit next to them and see how they write code, or you go to lunch and just talk to them. It actually does not take a lot of time to understand whether you want to work with this person for a long time or not, whether they're capable or not. With the best people, literally the best people, I can tell in 30 minutes that this person is very good and we should get them. And the same story is true on the other end.
(12:02):
You can also tell relatively quickly that maybe it's not the right person, for whatever reason. But usually, the way it worked, because at the time we were a very small company and didn't have anything, is that maybe you spent the first day understanding whether you wanted to hire them, and the remaining couple of days trying to sell them, if you had found somebody good you wanted to hire. Obviously those people have a lot of options, so you were trying to figure out how to convince them to join us. So that was also a big part of that process: not only assessing people, but also trying to sell them on joining us.
Adel Nehme (12:41):
And on that selling aspect, there are not a lot of people on this planet who can build large language models, or build systems that operate at the scale of Perplexity. So what have you found to be effective models for competing for talent? When you're competing with the Metas and the OpenAIs and the Googles of the world, what is the differentiating factor for joining Perplexity when you have this conversation?
Denis Yarats (13:01):
Yeah, I mean, it's definitely very tough. I think, for whatever reason, those companies are more established, they have bigger clusters, they
Adel Nehme (13:08):
Have compute,
Denis Yarats (13:09):
Compute, and to be honest, they have way more resources than we had, so they can literally pay as much as they want, which is something we cannot afford. So the strategy is to discover people who are maybe not already established research scientists there. If you try to get somebody like that, they're going to be very expensive, they already have ideas of what they want to do, and maybe they're not going to be very aligned. Instead, try to discover people who have a lot of potential. Yes, maybe they'll require a little bit of teaching and learning, but they can ramp up very quickly, within a couple of months, and actually be very productive. That worked super well for us: getting people from adjacent fields who are maybe strong engineers, or have a very strong math background or competitive coding experience, and then teaching them.
(14:09):
And a few other people on the team used to be research scientists as well, so we have experience with this model and we can teach them. They have other skills, and most importantly, they have the desire to learn and ramp up in the field, which is honestly very important. That's how we've been doing things. Obviously, at some point you also need to get experienced people. Now that we've gotten a little bit bigger, with more resources and a more recognizable brand, we can try to get more senior, more experienced people. But yeah, I think it's still very tough to compete with OpenAI and places like that, because they just attract all the best people.
Adel Nehme (14:51):
Speaking of talent here, when we're talking about technical talent, something you alluded to is taking in strong engineering folks and trying to give them AI chops. Do you find that has been a successful model, being able to quickly upskill people on AI and get them up to speed on the space? Maybe I'll reframe the question: do you value research skills or technical engineering skills more when you're looking at an early hire?
Denis Yarats (15:14):
Yeah, from my experience, and from working with many famous researchers in AI, the best people there are usually both good researchers and good engineers. It's very rare to find a strong research scientist who doesn't know how to code. The best people can do both very successfully: they can do engineering very well, but they can also do research. It's more about general intellect. Especially with transformers, yes, there are lots of secrets, but you can teach people how to do this. In the end, a lot of it is still very hardcore engineering, especially once you start training large models; it's a lot about distributed systems and stuff like that. Obviously you need to know some fundamentals of machine learning as well, but those things are not rocket science.
Adel Nehme (16:05):
Definitely. A lot of people tuning in right now are aspiring AI engineers or want to work in AI. What are the technical skills that you recommend people develop at the beginning, and what are the cultural traits that you want them to have if they want to break into the field?
Denis Yarats (16:19):
Yeah. Yeah, I think you definitely have to be very comfortable with coding. You have to know Python at least, and ideally, eventually, C++ and stuff like that. But it's even more important to have curiosity and a desire to learn things, and to be very, very proactive, because right now there's a lot of very useful information you can find on the internet, and there are many people you can talk to and learn from. It's about trying things and iterating quickly. I think it's very important to iterate very quickly and get results; that's how you learn the fastest. It's about trying things yourself: you can read a paper and think, oh, this makes sense, but if you haven't implemented it yourself, you're honestly probably not going to have a very deep understanding, so trying those ideas quickly is very important.
(17:12):
So that's why it's very important to be a good engineer: so you can implement things very quickly, test hypotheses, and see if things are going well. And I think the other very important quality is not to be afraid of things that seem impossible. You might say, oh yeah, Google, it's impossible to compete with Google. But you don't actually need to build Google; you have to build something else. A lot of people who have a lot of experience are maybe less flexible in their thinking about which things are possible and which are not. Let's say they work at Google: they know how complicated it is, how much infrastructure Google has, and when you talk to them, they say, oh, this is impossible to build. But then it's clearly possible, right? OpenAI, for example, showed that you can do something very cool without having Google's infrastructure. So be flexible, or have this trade-off where you have the necessary skills but you're also not afraid to try new ideas and see where the path goes.
Adel Nehme (18:19):
Pardon my wording here, but it's a healthy dose of being a bit delusional and wanting to break through that barrier; that's very important. You see this also with world-class athletes, for example: you need to be a bit delusional to think that you can be the best in the world at something, right?
Denis Yarats (18:32):
I think it's very important. Yeah, I remember clearly, early on at Perplexity, especially when thinking about doing a search engine, thinking to myself: oh dude, this seems like an impossible task. But if you have that mindset from the beginning, then probably you shouldn't even start. Luckily, we didn't have that mindset.
Adel Nehme (18:52):
I want to talk to you about search, but before that, I also want to ask you about how your career trajectory has changed. You mentioned you came from research: you worked at Meta in research, and you also worked at Quora. What was the biggest challenge for you in transitioning from research to leading a large-scale technical organization at one of the biggest AI startups on the planet? Walk me through the leadership experience that you gained and what you had to learn.
Denis Yarats (19:12):
Yeah, I did make that transition. Initially I started as an engineer; I actually worked at Quora, so I was an engineer, but I did a lot of machine learning and ranking. Then in 2015, when I first went to NeurIPS, the biggest machine learning conference, I became convinced right away that deep learning was the way to go and was going to be the biggest thing. So I joined Facebook AI Research, as an engineer initially, and then went to get my PhD and converted to a research scientist. That's why I had a very strong engineering background: if any algorithm or training system needed to be implemented, I was able to do it relatively quickly and learn from that. And then, through the PhD and being a research scientist, I gained more high-level problem-setting and problem-solving skills. So I basically had a very deep understanding of how things needed to be done, but also of what needed to be done.
(20:06):
That helped tremendously in creating Perplexity and running it, because early on, Johnny and I were able to build everything from the beginning, but then, as we started bringing in people, I was also able to provide them guidance, organize them together, and set directions and goals. And if they got stuck on some technical problem, I could also unblock them. At least for me, it was very important to not only operate at a high level and know how things should work in theory, but also, if needed, to just go implement them myself or help people do that.
Adel Nehme (20:46):
Yeah, definitely. That's super helpful. Walk me through the people management side of things as well. I'm sure the past few years have stretched your capabilities on that front too; I can imagine going from being a researcher to a CTO.
Denis Yarats (20:56):
Yeah, so this is actually surprising: we're a very flat organization right now, so there are literally no engineering managers yet. Actually, we just hired one person a week ago. Before that, basically half of the company reported to me and the other half reported to Johnny, and it's not really a manager role but more a tech-lead type of thing. And that was very important. What happened, as we hired through those trial processes, is that we got very aligned people who basically don't need a lot of babysitting or anything like that; they're just excited to work. It's kind of wild, now that I think about it, that for almost two years we haven't had to deal with this. There is no time wasted on people management, talking about career growth, what should I do; we just work, we have exciting projects, and we try to do our best to get there. I think it's very unusual. For example, when I was at Quora, I joined when there were 30 or 40 people, and at that time there were already at least seven or eight managers. Now we're, what, 65 people I think, and we don't have managers yet; we only just got one. So that was very unusual, I would say, and it helped us move very fast.
Adel Nehme (22:22):
That's pretty great insight. It's interesting: it's a new technology paradigm, but you can also see the DNA of a lot of old-school tech companies being brought in here. So maybe shifting gears, you mentioned competing with Google and having that mindset of "we can compete with Google," so the main competitor of Perplexity is not necessarily ChatGPT, it's Google. Maybe my question, for setting the stage, is: why do you think search needs disrupting?
Denis Yarats (22:45):
Yeah, so there are a few things happening. From the beginning, Google had the very brilliant idea of PageRank and things like that, something that really values high-quality information, something that is very helpful. I mean, it still is, but over the last 10 years, maybe more, you've seen the system get gamified a little bit, mostly through ads and SEO, and people figured out how to break the original PageRank. Google is still tremendously useful; I think it's still going to be very useful, and it's an amazing system they've built over the years. To me, the Google search engine is probably the most sophisticated system humanity has ever built. It's just insane. But on the other hand, I think there are certain things that can be done better. Specifically, you can save a lot of time when you don't have to sift through 10 links and do a lot of manual work yourself.
(23:42):
If you just have a question, you just want to get an answer. And you can imagine that in the future this is going to evolve into much more sophisticated workflows and pipelines, where you can give these systems tasks rather than just simple questions, and they're going to do work for you. I wouldn't say this is necessarily directly competing with Google; I think it's just a different market. Even right now, ChatGPT happened and got a lot of traffic, but Google didn't necessarily lose any traffic itself; new traffic was created. We see people use Perplexity, people use ChatGPT, people use Google; it's all different tasks. And because people can do things much faster now, they still have time to do all of this. We're basically trying to be complementary: we're trying to create a segment where we can provide a best-in-class experience, and hopefully this segment is going to be large enough that you can create a successful company.
Adel Nehme (24:48):
It's really interesting, because you're mentioning the emergence of a new category of products. Right now, I would say a lot of AI search engines, and you can correct me if I'm wrong, Perplexity included, are mostly synthesis: here are 10 links, synthesized, here are the citations, et cetera. But you hinted that there is a future with a lot more agentic use cases to explore. I can easily see a future where I have a question about Python or C++, and an AI shows me a lesson in real time and lets me execute code. That's an education use case, but there's also "book me a flight," something along those lines, right? How do you see the future of agentic use cases in search, and when do you think that will happen? I assume you guys are working on it already.
Denis Yarats (25:33):
This is something that's definitely going to happen; I think it's already underway. I feel like the technology is not quite there yet, but that doesn't mean you shouldn't be preparing for it, because it's definitely happening. That's why we don't have to rebuild the Google search engine: that's not exactly what we need. We need something different, something that is actually going to support those new use cases, those agents. It's not necessarily a ranking system that returns the top 10 results, because that's not going to be useful. It requires different infrastructure, a different system, and that's something we're definitely working on. And the most important thing is that there's so much to do: as you mentioned, you can book flights, you can do lessons, you can do tasks. That's why I don't think it's going to be winner-take-all; a lot of this is going to be product differentiation. That's why we're very confident that we have a very good shot. OpenAI is going to be there, Google is going to be there, and hopefully we're going to be there as well.
Adel Nehme (26:40):
And another question that I always think about when it comes to search and AI search is that search represents a large amount of the internet's GDP. If you disrupt the business model of search, there are really large-scale cascading effects on publishers. We talked about flight bookings: Expedia's business model would radically change if search is no longer the way we're used to it, and the same goes for publishers and creator services. How do you imagine the business model will look if AI-assisted search becomes the norm, and how do you see that changing over time?
Denis Yarats (27:12):
Yeah, definitely things will have to change there. We'll have to figure out some monetization strategies. Right now, the way it usually happens is that Google, for example, sends traffic to you, and that's fine. Perplexity right now also sends traffic; it's not as much as Google, but I think at least 10% of our requests result in people clicking links. The key observation here is that each click is not equal in terms of intent. In Google, you might click and then bounce back because the result is just not relevant. Here, if you click, the intent is so much higher, so if you weight by intent, maybe it's not as much of a drop-off as it currently seems. I think that's how you can reward publishers and other actors, but there could also be other models where we work together. Content generation is still going to be very important, I think. We'll have to figure out ways to work with those companies, with those providers, so everybody's happy.
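A toy illustration of that "weight clicks by intent" idea: counting raw clicks treats a bounce the same as a high-intent visit, while an intent-weighted sum does not. The publishers and intent scores below are made-up numbers for illustration, not anything Perplexity has published.

```python
# Hypothetical referral clicks, each with an estimated intent score in [0, 1].
clicks = [
    {"publisher": "news-site", "intent": 0.9},  # click from an answer citation
    {"publisher": "news-site", "intent": 0.6},
    {"publisher": "blog", "intent": 0.2},       # low-intent click, likely a bounce
]

raw_counts: dict[str, int] = {}
weighted_value: dict[str, float] = {}
for c in clicks:
    # Raw counting: every click is worth 1, regardless of intent.
    raw_counts[c["publisher"]] = raw_counts.get(c["publisher"], 0) + 1
    # Intent weighting: a click's value scales with its estimated intent.
    weighted_value[c["publisher"]] = weighted_value.get(c["publisher"], 0.0) + c["intent"]

print(raw_counts)      # {'news-site': 2, 'blog': 1}
print(weighted_value)  # {'news-site': 1.5, 'blog': 0.2}
```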
Adel Nehme (28:23):
Think about, for example, the state of media and journalism: the business model there has already been shifting for the past 10 years, and it will continue to shift, especially with this technology. And as we close out our conversation, I want to shift gears a bit and talk about how you see the large language model space, and the AI space in general, evolving. Perplexity lets you choose which model to use through an LLM interface, whether it's an open-source or closed-source model. This week we saw releases of two, actually three, big open-source models: Phi-3, Llama 3, and now Arctic from Snowflake. Walk me through how you look at the open-source versus closed-source AI debate, and how you think the future of AI will play out here.
Denis Yarats (29:03):
Yeah, I'm a big proponent of open source. I think it has a lot of benefits, and I can see why Meta is so eager to do this, because many people don't realize it: setting the standard and having the one architecture that everybody's using is very massive. Some of the other arguments: the community is just so much larger than any company, and people are so excited about it. So if they can figure out a trick or two that saves on inference, or discover some bugs in the model, that's super valuable to the original creator of those models. In terms of how all of this is going to play out, I feel like right now we're still in the very, very early stages, where people know what the difference is between GPT-4 and Claude or Llama.
(29:51):
Ultimately, once you go to a very big market, most people are just not going to know what any of that is; what are you talking about? That's why the main thing, and something we focus a lot on, is designing a product where people don't even need to know what kind of model you use, as long as it gets the job done. And that's why there is so much opportunity to leverage different models, both open source and closed source, and create a system where they work in symbiosis, very efficiently and very quickly. In terms of how it's going to go: different models are going to require different compute considerations for inference, and different tasks require different compute considerations as well. So you'll have to figure out how to utilize resources efficiently, and if somebody needs more compute for a given task, you can figure out a way to offer that as a service. If somebody needs very quick answers or summarization, you can use a very small model, but if somebody needs something super expensive, like agentic behavior or some complicated stuff, then maybe it's going to cost a little bit more.
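As a rough sketch of that routing idea, here is a minimal, hypothetical Python example that sends cheap tasks to a small model and expensive ones to a large model. The model names, prices, and the keyword heuristic are all invented for illustration; a production router would presumably use a learned classifier rather than keyword matching.

```python
def classify(task: str) -> str:
    # Stand-in heuristic: treat action-like or multi-step requests as complex.
    complex_markers = ("book", "plan", "compare", "step by step")
    return "complex" if any(m in task.lower() for m in complex_markers) else "simple"

# Hypothetical routing table: small/fast vs. large/expensive models.
ROUTES = {
    "simple": {"model": "small-fast-model", "usd_per_1k_tokens": 0.0002},
    "complex": {"model": "large-reasoning-model", "usd_per_1k_tokens": 0.01},
}

def route(task: str) -> dict:
    # Pick the cheapest model that is likely good enough for the task.
    return ROUTES[classify(task)]

print(route("Summarize this article"))     # -> small-fast-model
print(route("Book me a flight to Tokyo"))  # -> large-reasoning-model
```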
Adel Nehme (31:03):
Interesting. So you're alluding to this, and I think it connects to my next question, because different tasks require, you could say, different raw intelligence capabilities within LLMs, right? When you're looking at the next generation of LLMs, I often think about the different levers at the community's disposal here: better data quality, more compute, more data, scaling parameter size. What do you think will need to be true to get significant stepwise improvements in the performance of today's state-of-the-art LLMs?
Denis Yarats (31:31):
I think it's basically figuring out synthetic data. That's probably going to be the biggest one. I feel like we're already passing the point where pre-training compute is the biggest cost; synthetic data generation is becoming the biggest cost, and I think that's definitely going to be true for the next generation of models. Data generation is super expensive because it's inference, so you amortize things together and try to figure out how models can generate data and improve themselves. If you can enable this, it ultimately means a basically unlimited amount of intelligence. It's clearly very difficult to do, but a lot of people are trying, and I'm very excited about that.
Adel Nehme (32:19):
And we're already seeing the precursors of that to a certain extent, like Phi-3 released this week: a really small model, but trained on synthetic data.
Denis Yarats (32:26):
Yeah, Phi-3 is a very good example where pre-training compute is much lower than data generation compute.
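To make that cost structure concrete, here is a minimal, hypothetical sketch of a synthetic-data loop: a large "teacher" model generates textbook-style examples, and the resulting corpus is what a small student model would be pre-trained on. The call_teacher function, prompts, and topics are invented stand-ins, not a real API or the Phi-3 recipe.

```python
import json

def call_teacher(prompt: str) -> str:
    # Stand-in for an expensive large-model inference call; in a real
    # pipeline this step dominates the cost, not the student's pre-training.
    return f"A textbook-style explanation for: {prompt}"

topics = ["sorting algorithms", "photosynthesis", "compound interest"]

dataset = []
for topic in topics:
    prompt = f"Write a clear, self-contained lesson about {topic}."
    dataset.append({"prompt": prompt, "completion": call_teacher(prompt)})

# Persist the generated corpus; this becomes the student's training data.
with open("synthetic_corpus.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```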
Adel Nehme (32:36):
We've been talking about raw intelligence, but an equally important, if highly underrated, aspect of succeeding in building a generative AI product like Perplexity is that it's not just about calling the API and getting an answer, right? It's about a great user interface and user experience as well. Maybe walk us through some of the lessons you've learned in building a generative AI product with excellent UI and UX. What are the UI/UX principles you've built around?
Denis Yarats (33:08):
Yeah, yeah. I think it's maybe an underappreciated area; people say, whatever, UI is not important, but that's clearly not true. A lot of credit goes to our designer Henry, who from the beginning said, as I mentioned, that we shouldn't do a chat interface, we should do something else. I think that was very important. And the bigger point is that as this technology emerges, there is going to be a lot of room for creativity in the UI/UX space, which I'm super excited about. It is super important to have a UI that is intuitive, that is a pleasure to use, that is fast and responsive; I just cannot stress this enough. I'm not much of a generator of UI myself, but I'm a very good discriminator, so I can tell whether I like a UI or UX or not, and we have very good people who can generate it as well.
Adel Nehme (34:06):
Yeah, that's pretty great. You talked about this as well: a pleasant experience to use, but also getting the best responses out of an LLM, right? I'm a firm proponent of the idea that good design will eliminate the need for the skill of prompt engineering, because if you have a great UI, you'll always be able to get a good response. How do you see that intersection of getting the best out of an LLM with great UX and UI?
Denis Yarats (34:28):
Yeah. Yeah. I feel like LLMs are getting better and better, but still, they're not ideal, and I think it's going to take some time. That's why it matters to really think through how people are going to use the LLM and how to minimize the time where you're unhappy with the result: either provide some tools or UI elements that can eliminate those pain points. That's why it's very important to have your product team and your AI team work as closely as possible and brainstorm things together. This process should not be disconnected; otherwise you won't end up with a delightful product.
Adel Nehme (35:05):
Denis, as we close out our chat, do you have any final call to action to share with the audience, or predictions about the data and AI space? I'd love to hear your final thoughts here.
Denis Yarats (35:14):
I guess I cannot stress enough how early we are in this journey. Even though it's been, I guess, one and a half years since ChatGPT happened, it feels like ages. But on the other hand, there is so much more to come, so I'm just excited to continue this journey, excited for the next generations of models, and excited to see how we can solve synthetic data. I was thinking to myself: what else could you be doing right now that would be more fun than AI? And there is nothing.
Adel Nehme (35:45):
That is awesome. A hundred percent. Thank you so much, Denis, for coming on DataFramed.
Denis Yarats (35:48):
Cool. Thank you so much for having me. It was a great chat.
Adel Nehme (35:51):
Likewise.