Effective Product Management for AI with Marily Nika, Gen AI Product Lead at Google Assistant
Marily Nika is one of the world's leading thinkers on product management for artificial intelligence. At Google, she manages the generative AI product features for Google Assistant. Marily also founded AI Product Academy, where she runs a BootCamp on AI product management, and she teaches the subject on Maven. Previously, Marily was an AI Product Lead in Meta's Reality Labs, and the AI Product Lead for Google Glass. She is also an Executive Fellow at Harvard Business School.
Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
In traditional product management, we think, okay, we're launching, we're getting hundreds or millions of users. Awesome. Whereas in AI product management, success may be: we made hypothesis A, we tested it out, we figured out that it's not gonna work, and we pivoted. Being able to get an answer to the hypothesis is success.
When someone asks me, is the PM role going to be replaced by AI? I say, of course not. But PMs that do use AI are likely to replace you if you do not use it.
Key Takeaways
When working on AI products, it’s crucial to accept that AI behavior will vary with each use due to its probabilistic nature. Prepare stakeholders and users for this variability and build a culture of experimentation within your team.
AI product management involves constant hypothesis testing and pivoting. Build teams that are comfortable with ambiguity and celebrate learning from failed experiments, as these often lead to crucial product insights.
Success in AI products isn’t just about user adoption; it’s also about continuous learning and model improvement. Focus on metrics like model accuracy, false accept/reject rates, and user satisfaction over time to guide iterative development.
Transcript
Richie Cotton: Marily, thank you for joining me on the show. Before we get into any of the AI stuff, let's talk about product management. So can you just tell me at a high level, what does being a product manager involve?
Marily Nika: Every single person you ask is going to give you a completely different definition. I have a little definition, which is that a product manager helps their team and their company build and ship the right product. And I focus a lot on the word right product because that's the heart of it. We're figuring out the what, we set the vision, we set the mission for the team, and we're trying to find product market fit.
Richie Cotton: So we'll maybe get into product market fit in a bit more depth later, but for now, can you tell me, is there anything unique to being an AI product manager that's different from regular product management?
Marily Nika: Yes, absolutely. Absolutely. And it's interesting because when I first became a PM, I was an AI PM on day one. So I thought that all the challenges I was facing were challenges everyone else was facing. But you know, I was doing it before it was cool. So when you work with AI, there's just so many little things you need to consider.
Number one, the probabilistic nature of AI. This means that every single time you use a feature, you're going to get slightly different behavior. And when you're building a product where you hope to create a specific experience for the users, you can't really guess what that's going to look like.
Let me give you an example. I was part of a product review, and I was demonstrating this AI feature I was launching. Leadership was trying it out, and they said, hey, I don't think this works well, because it's not consistent; every time I use it, I get a different answer.
And I said, well, that's the probabilistic nature; this is working as intended. So that probabilistic nature creates a lot of challenges for you as a PM, because you have to make the people around you comfortable with this uncertainty, if you will. But it's also kind of the beauty of it. For example, if you use Google Gemini or DALL-E to generate an image with the same prompt, you're going to get a different image each time.
Or if you're driving a car and you use self-driving, there's always a little probability that comes with it. Nothing is 100 percent safe, nothing is 100 percent accurate. So that's challenge number one. Challenge number two is the experimentation culture that comes with AI products: just because of this uncertainty, you need to run way more experiments than in classic product management.
You're making hypotheses at all times; you say, well, I believe that if I deploy this model into this experience, we're going to get outcome X, but it's not always the case. So there's a ton of experiments and a ton of pivoting. And if you have a team that's very much used to, okay, I'm going to do A, B, C and then achieve Y, they don't really get that. They get demotivated. They don't know if they're on the right track. They don't know if they're progressing in their career. So fostering a culture of experimentation across your team, engineers, and stakeholders is important. In the past, I've had to figure out what success meant for a product.
And in traditional product management, it's, okay, we're launching, we're getting hundreds or millions of users. Awesome. Whereas in AI product management, success may be, hey, we made hypothesis A, we tested it out, we figured out that it's not going to work, and we pivoted. Hitting the milestone of getting an answer to the hypothesis is success.
So it's more of a milestone-based progress, if you will. I could speak about AI products forever, but I think these are the things that stand out the most.
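To make the probabilistic point concrete, here is a minimal sketch (not from the conversation) of why a generative model can answer the same prompt differently on every run: it samples from a probability distribution instead of returning a fixed answer. The vocabulary and probabilities below are invented for illustration.

```python
import random

def sample_next_word(distribution: dict[str, float]) -> str:
    """Sample a word according to its probability, rather than always taking the most likely one."""
    words = list(distribution)
    weights = [distribution[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical probabilities a model might assign after the prompt "The turtle is ..."
next_word_probs = {"green": 0.5, "slow": 0.3, "swimming": 0.2}

for run in range(3):
    print(f"run {run + 1}:", sample_next_word(next_word_probs))
# Three runs of the identical prompt can print three different words --
# the "inconsistency" leadership flagged is the system working as intended.
```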
Richie Cotton: Okay, so certainly that first point about things being probabilistic is really interesting because if you come from a data science background, a lot of stuff is probabilistic, but if you come from a software engineering background, like almost everything is deterministic. So I can see how there's a bit of a culture clash there.
Okay, so your second point about experiments was really interesting. Can you go into a bit more depth about what these experiments look like, who performs them, and what you're expecting?
Marily Nika: Let's assume we work for a startup and this is totally hypothetical, by the way. And let's assume that the startup has created an app which essentially replaces the native keyboard on your phone. And let's assume that the whole premise is, hey, this keyboard learns from your texting behavior so that it can actually predict what you want to say.
And it's gonna kind of speak for you, on your behalf eventually, because it knows how you speak, it knows the words you use, and you don't need to type everything. It will just say, hey, did you mean to say, Hey, how are you doing today? with the two question marks, and it pops up and gets prefilled. So let's assume that's it.
Let's assume you've launched, you have a certain number of users, and people like it. Now in the back end at the startup, you likely have a partner, a research scientist, who gets a lot of the data that comes in from users and logs. And they can use this data in the back end to improve their model.
So the scientist will come to you and say, hey, we see the usage. We have some metrics to measure success; these metrics can be, you know, correct prediction of what the user meant to say, satisfaction, all those things. But I have a model that I think, if we launch it and replace the model that's currently live, is going to improve those metrics.
So we're going to run a little experiment. We're going to say, all right, for 50 percent of the population we keep the model that's currently out, the one that can predict what Marily wants to type as she types. And for the other 50 percent, we silently launch the new version of the model, and we measure our North Star metric, which can be, let's say, the ratio of accepted suggestions prompted to the user.
If this metric goes up for the version we're testing, which is the improved model, awesome, we'll start rolling it out to the rest of the population. If not, we'll roll it back and discard it. So there's a ton of that. And sometimes I've launched the new version of a model even when it was performing worse, because I knew that over time it would get more data and improve.
But as a user, I've also been on the other side, where I had a product that I loved and it was perfect, and then suddenly I upgraded to the new version and I hated it. But that's the probabilistic nature of AI: you need to accept that there's a learning curve while the system learns you.
And you need to be careful, and you need to be comfortable as a user knowing that, hey, this thing listens to me, it understands me and how I behave, and it's going to improve the product for me. But yeah, that's a very simple, hypothetical experiment and A/B test; there are a ton of them.
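Here is a minimal sketch of the 50/50 silent-launch experiment described above, assuming a hypothetical event log and metric name; a real system would also run a statistical significance test before any rollout decision.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically split users 50/50 by hashing their ID."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "new_model" if bucket < 50 else "current_model"

def acceptance_rate(events: list[dict]) -> float:
    """North Star metric: share of shown suggestions that users accepted."""
    shown = sum(e["type"] == "suggestion_shown" for e in events)
    accepted = sum(e["type"] == "suggestion_accepted" for e in events)
    return accepted / shown if shown else 0.0

# Invented logs for each arm of the experiment.
control = [{"type": "suggestion_shown"}] * 100 + [{"type": "suggestion_accepted"}] * 30
treatment = [{"type": "suggestion_shown"}] * 100 + [{"type": "suggestion_accepted"}] * 36

print(f"user u42 gets: {assign_variant('u42')}")
print(f"control: {acceptance_rate(control):.0%}, treatment: {acceptance_rate(treatment):.0%}")
# Roll the new model out further only if the treatment arm's metric is
# convincingly higher (e.g. via a two-proportion z-test); otherwise roll back.
```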
Richie Cotton: Yeah, it's such a simple but powerful technique: rolling out features to some users, seeing whether the new thing is better than the existing one, and rolling it back if not. Although I imagine having to roll back a feature you just launched is going to be kind of disheartening.
How do you deal with that sort of thing?
Marily Nika: Very good points. Well, I've worked both in startups and in big tech. In big tech, there's massive bureaucracy, if you will, and processes and operations. So you need to account for that and say, okay, if we have to roll back, here's who's going to do what, and here's the timeline.
I would likely need to write something we call a postmortem document. I don't know if you're familiar with it, but essentially it says, here's what we tried, here's what went wrong, and here's how we'll prevent an experiment from going that badly in the future.
But in big tech you kind of expect it, because it's something that might happen more often. Also, in big tech you're not going to open up to 50 percent of the population; it's going to be more like 1 percent trying it. So the damage, if you will, if something doesn't go right, is not that big.
In a startup, though, things are way faster, and you don't really care about a polished experience. It's more like, okay, let's learn: if yes, move forward; if not, you're good. So, less operations, less bureaucracy. But I have to call out that for AI, another very important challenge is ethics: being responsible, responsible AI, fairness.
So for every experiment, you need to make sure you can log specific data and process specific data. You need to notify the user and say, hey, this data is going to be collected if you want to opt in to improve the product. So there's a lot there around notifying users. But yeah, in big tech it takes way longer, though you expect that; in a startup it's way faster.
It's part of the hustling nature of a startup, I guess. And I just love that startup culture. I miss startups so much.
Richie Cotton: All right, so you mentioned ethics, which is an incredibly important skill set to have as a product manager, particularly with AI. What are the other important skills you need in order to be a good AI product manager?
Marily Nika: I posted this on LinkedIn the other day, but every time you come up with ideas and brainstorm, even on day one, you want to ask not just, hey, can we do this, but also, should we do this? Are you affecting users in a way that is not ethical? Are you going to create a product that's not inclusive, or that generates images that aren't diverse enough, or anything like that?
So the skill set there is, number one, understanding the risks, and I can provide a list of the risks out there. Number two is understanding the regulations; the EU's first AI Act actually went through a few months ago. Just being aware as a product manager of what you can and can't, and should and shouldn't, do is so important.
So staying informed is another skill that's just so important, and having the empathy to solve for your users without affecting them in a negative way is very important.
Richie Cotton: Okay, yeah, so certainly understanding legislation and empathy seem like incredibly important skills, I guess, for almost every role. But yes, certainly being able to understand what your users want is going to be incredibly important. How technical does the role get? How much in the way of AI skills or other technical skills do you need in this role?
Marily Nika: So I have a PhD in data science, and by the way, I never knew I was going to be a PM. I didn't know what the PM role was all about, so I kind of figured it out. But in my case, I had to tone down the technical dial, because when I became a PM and was working with data scientists and research scientists, my inclination was to say, hey, don't do it this way,
do it this way, this is what I would do. And then I realized, hey, this is not my role anymore. So I needed to tone it down and learn how to partner with people, because that's not what I do anymore. That was a different and interesting adjustment. For folks who are not technical, I tell them they need technical awareness, meaning they need to understand how AI works.
They need to understand what that probabilistic nature looks like. They need to understand the end to end process from data to training to human in the loop to the outputs. So they need to understand how things work. And then another thing, which is technical influence, meaning you need to be able to have technical conversations with the scientists and engineers.
I'm not saying you need to say, oh, I'm going to use backpropagation or a specific model. But when a scientist comes to you and says, hey, for me to create this music recommendation system, there are two implementations I can do, you need to be able to understand the trade-offs, and you need to be able to make a call as to the best way forward given those trade-offs.
To understand these, you just really need to understand how the architecture works. Let me give you an example; let's take the most basic example, which is ads, right? Imagine a trade-off slider, where at one end privacy is going to be very high for users, but the ads are not going to be personalized at all.
So it's kind of like, okay, is that the best experience, versus still having privacy measures and consents there for users, but them getting more personalized ads? That's a trade-off. So you need to understand how things work, what privacy means, how much of the user's data to use, how to inform them, all these things.
And the software engineer or scientist will come to you and say, well, okay, what do we do? How do I do it? How private is it? And we need to be able to make these calls and decisions and accept the risks.
Richie Cotton: All right. Now, you mentioned you had a PhD in data science. I'm wondering, where does data fit into this role?
Marily Nika: My PhD was all about predictive analytics, and it was actually a very important and interesting multidisciplinary topic. I looked into BitTorrent downloads and YouTube views for songs specifically, and I was trying to predict how popular a song would become over time. I used epidemiological models, so I was able to map online data to epidemiology, and I showed what going viral means; no one else had formalized that phrase. If you read my PhD, I said, oh, it's a virus.
Content does go viral, and it follows the same curve that influenza does. Interestingly enough, about 10 years after my PhD, when COVID came out, people started reaching out and asking, can you predict COVID? I'm like, I'm not touching that. So, where data comes in:
When I first joined Google after my PhD, I was focusing on speech recognition, and we wanted to launch voice search for so many different languages. I had to figure out, how do you collect data for things that don't exist yet? There's no concept of mining or logging data. How much data is good enough? What is the relationship between the amount of data and quality?
What is the MVQ, the minimum viable quality, for you to launch something? A lot of strategy came in there too. For example, if you're the first one in the market to launch a recognizer for Greek (I'm Greek, by the way), do you care about quality as much as you would
if two or three competitors had already launched one? In that case you'd likely want to match or beat them. So I was exploring all these different ways to collect data: a ton of experiments on quality, and a ton of figuring out MLOps, how to clean this data and how to do it in an efficient way that doesn't take months or a year.
So data comes in because the better the data, the better the quality of what you're going to build. And I wish I could talk to myself 11 years ago, because now we can synthesize data with gen AI. There's less and less need to collect, acquire, and clean up data; you can just synthesize it.
That's partially what I work on in this project at Google in Geneva, synthesizing data. The potential of it is just so fascinating.
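As a hedged sketch of the idea behind that PhD work: cumulative attention on "viral" content often follows the same S-shaped curve as an epidemic. The data below are synthetic, and a simple logistic curve stands in for a full epidemiological (e.g. SIR-style) model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: K = total reach, r = spread rate, t0 = day of fastest growth."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic daily cumulative views shaped like an epidemic curve, plus noise.
days = np.arange(0, 30)
views = logistic(days, K=1_000_000, r=0.5, t0=12) + np.random.normal(0, 10_000, days.size)

# Fit the curve to the observed data to forecast total reach and the peak-growth day.
(K, r, t0), _ = curve_fit(logistic, days, views, p0=[500_000, 0.1, 10])
print(f"estimated total reach ~ {K:,.0f}, fastest growth around day {t0:.1f}")
```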
Richie Cotton: Okay, so it seems like there's a lot of data involved in deciding what users want. So I guess traditionally you're doing user research, surveys, interviews, things like that. But you suggested that you can also synthesize data. So is this synthetic data going to replace those user surveys and things like that?
Or is that a complementary thing?
Marily Nika: Yeah. So I was actually referring to the first step of training a machine learning model, which is collecting data in order to feed it into the model and train it. Now, you mentioned a very interesting point, which is, in the product development life cycle, hey, we need to do user research and figure out, would users like a product that can generate a video out of text?
In that case, in big tech, of course, we have UXR departments, user experience research, where you can go to someone and say, here's our hypothesis: would users want this? What kind of user segments do we have? Can you create a focus group and ask them? The thing with AI is,
there are two things, right? There is AI product management, which is working with scientists to train the models and leverage AI. And there is AI for product management, which is using all these awesome tools out there, like Google Gemini and Claude and, of course, ChatGPT, to enhance you as a PM.
Specifically for the research side, I'm using this app called Como.ai. It's amazing, because I can ask something like, hey, should I build a smart fridge for families? And it will literally give me Quora links and Medium links and Reddit threads and YouTube videos from creators answering that very question.
Not only does it provide the actual links, it also summarizes the TL;DR for me: well, according to families, here are the things that are most interesting, and here are the links if you want to click through. So it's fascinating to see how much faster, and how much less costly, the PM's job can be. And of course, disclaimer, you shouldn't trust these tools 100 percent, but they give you a head start that's immense and significant.
So when someone asks me, is the PM role going to be replaced by AI? I say, of course not, but PMs that do use AI are likely to replace you if you do not use AI. Yeah.
Richie Cotton: That's fascinating. I like that you're using AI to do the job to make more AI products. It's a good sort of recursive thing. Nice. Yeah, so certainly those sort of generative AI tools for doing research into almost anything, that seems incredibly powerful. All right. So, I'm curious as to who you work with.
So, I'm presuming there are some sort of engineering teams for actually building things. Can you just talk me through, who are the different teams that you work with as an AI product manager?
Marily Nika: I get this question a lot. I created a nice little graph in my newsletter that I'm happy to share. But essentially there are multiple stakeholders. The most important ones are research scientists. These are the ones who will tell you, hey, for this experience to come out, we need to train models X, Y, Z.
Here's the data we're going to need, here are the trade-offs we need to discuss, here are the risks, here's what quality could look like. Then at the same time, you have a software engineer who is there to put the experience together. You also have UX, which is the designer. Let me give you an example.
I was using Zoom the other day, and there was this new feature, an AI assistant that summarizes the entire conversation. After the conversation is done, it sends you an email saying, hey, here's what was discussed. For this to get created, you have the research scientist who needs the data, which is voice data converted into a transcript.
They take the transcript as input so that they can process it and summarize it, using gen AI, of course. But at the same time, you need to work with a designer who's going to create the little widget that pops up for the user and says, hey, do you want to activate the AI assistant (I think it's called AI Assistant)?
Here's how it works. Do you want to turn it on for all the speakers or just for you? So the designer will create the actual UI. You also need to work with UX writers, who are the ones that choose the right wording, because if you ask the scientists, hey, what should we call this, the scientist is likely going to use more technical terms.
We want to make it more user-friendly, so the writer tells you what to write, and the designer creates the actual design. After you have the designs and the models, the software engineer blends all this magic together and integrates it. You also have ops involved, MLOps, because in order to train these models you need a ton of training data.
Figuring out how to collect all that data is a whole other story. But assuming we have the data and can train the models, there are also privacy and legal teams that you as the PM need to work with, to say: all right, here's the data we collect, here's how we process it, here's when we delete it,
here's what stays, here's how we notify the user, and here's how we get consent to put this together. So that's another party. Sometimes you have yet another stakeholder, a third-party vendor; if you want to clean up the data before training the model, you're going to use them.
If it's a third-party app, or if there's hardware involved, there are OEMs and manufacturers you may need to work with. And of course the user, right? You're going to ask the user, hey, is this something you want? How should it work? And so on. So there's a ton involved. But between classic product management and AI product management, the bubble that's missing, I guess, is scientists,
and maybe privacy, because data may or may not be involved.
Richie Cotton: Okay, you just listed pretty much every team in the company there. It sounds like it's very cross-functional.
Marily Nika: Oh, it's highly cross-functional, and there's a lot of risk and a lot of alignment. So I often tell people, if you want to be an AI PM, you should start by being a traditional PM before you convert, because you need to know the craft before you add the extra layer of AI on top.
Richie Cotton: That actually brings up another question: how do you get into this field? How do you become a product manager, and then how do you become an AI product manager?
Marily Nika: There are different ways, and different things I see. For AI specifically, traditionally you'd find more technical people like myself, who used to actually do the work and train the models. And if they have an interest in coming up with ideas, solving for the user, figuring out success, presenting, and getting alignment,
I keep saying alignment because it's a big part of the job, then you see a lot of people converting from the technical side. I also see standard PMs who learn the AI intricacies, the technical awareness and influence I discussed before. But what I tell the people who want to convert is, the best way to become an AI PM is doing it at the company you're at right now.
That literally means going to your manager and saying, hey, I love AI; perhaps I can upskill through a training program. And of course I teach AI product management and certify people, so I get all those people who get reimbursed by their employers, I teach them, and they get the certification. Another example is going to your manager and saying, hey, for 20 percent of my time, can I work with this scientist on this idea?
Just do a little pilot and see how things go, so that you get that AI experience and launch an AI product, which, as I said, is different from traditional. Another thing I see is that now, with AI, all companies want to try it out, see what it's like, see if it applies to them. And I tell people, why not go to your leadership and propose a little labs department in your company, where you can be the part-time PM, work with the scientists, come up with ideas for infusing AI into some team in the organization, and perhaps launch something if it's good.
So I guess, TL;DR: don't wait for permission. Start doing it so that you have some AI experience, and then be like, okay, I've got it, I am an AI PM.
Richie Cotton: I like that there are a few different routes into this. It seems like a lot of it is just, make sure you've played around with the AI so you know what you're doing, make sure you've got experience with the product you want to build, and then you can dive in and figure out the management side of things afterwards.
Okay, cool. So, right back at the start of this, you mentioned the idea that you have to build the right product, so you've got to get product-market fit. Can you tell me, what is product-market fit, and how do you go about determining whether you have it?
Marily Nika: So imagine three bubbles. Bubble number one is building something that's desirable to users: it's going to solve a use case, address a pain point, and add value for the user. Number two is something that's viable from a business perspective, meaning it's going to make money or add value to your company's business metrics.
Number three is something that's feasible from a technical perspective. I'll give you an example of a company that I love, and they definitely found product-market fit in AI. I really admire Adobe; of course, they have their own way to generate images, and I know they only use licensed data to do that, which I applaud.
They have this suite of products; I think it's called Adobe Creative Suite. And let's assume they have a user base that is mostly designers. Their business model is that the designers pay per month to use this product, right? And a product can be, you know, an image generator.
So let's imagine a client asks a designer to create a custom invitation for my kid's birthday party, and the kid loves turtles: make a turtle with these colors that's called Bob. The designer can go into the system, generate this awesome, very custom turtle, add it to the little invite template, and then send it to the client with a couple of tweaks.
The designer has created this awesome, very custom work super quickly, so they can take on even more clients, and the customer is going to be so satisfied. So, of course, they make money out of it. This ticks all the boxes. User desirability, because the designer is more than happy.
Business viability, because the designer, of course, is going to pay per month for the subscription to use this tool. And then number three, this is something that's absolutely feasible from a technical perspective, because now we can generate all these images. They've infused these generators, I think, in their different apps.
So, I think that's a great example, and Adobe has done an awesome job.
Richie Cotton: Okay, I like that. So it's really about you've got to make sure that you're testing things against customers and seeing are they actually satisfied with what you're doing, and then you sort of iterate on that and just make sure that you keep increasing that sort of customer satisfaction. And I guess making some money in the process.
Okay.
Marily Nika: It's important to have a viable business, right? This is a big problem: demos are great, but at the end of the day, building AI is not the point in itself. You need to make sure that something comes out of it that's viable, that you can make money from, and that can sustain a business.
Richie Cotton: you mentioned this idea of like making sure that customers are happy or users are happy. Can you just go into a bit more depth on like what that sort of user research involves? I think this is something especially like people in the data world ought to know about, but in general don't.
So, yeah, talk us through it.
Marily Nika: Metrics in product management are so important. I don't know if the folks listening are product managers or not, but there are certain buckets of metrics. Probably the most important bucket is called product health, which asks: if you have a product, how are people using it? So number one, are people satisfied?
There are things like NPS scores and CSAT that show us user satisfaction. It's similar to going through an airport: after the security lane you see these little buttons, an angry face and a happy face, asking, how was your experience? So that's what that looks like. But you also have things like the acquisition funnel.
For example, of all the people who become users, how many come back, and how many are actually engaging with the product? How many are sharing a story or liking a story? How many are commenting? So what does the interaction look like? What is the engagement? Then, do people come back to it, right?
If people come back, it means you're doing something right; retention is going up. I think my favorite metric is called stickiness, which is, of your monthly active users, how many are actually coming back on a day-to-day basis. Then there are other metrics, of course, like monetization.
How much money are you making from each user, if money is involved? Do they get ads? Do they actually pay? If you're playing Candy Crush Saga, let's say, and the user runs out of lives, they get a pop-up that's like, okay, to continue for $0.99, press continue. So you can figure out the user lifetime value, or just how much money you make.
There are more metrics, like referral: how many more users are you getting from someone, and are there any referral schemes? DoorDash tells me that if I refer someone, I get $20 and they get $20. But in AI there are more metrics. There's also accuracy; there are model-specific metrics you need to be aware of, like, how often am I getting something right?
What is the probability of a false accept or a false reject? But yeah, I hope this answers the question of what metrics exist for users and how you measure the success of your overall product.
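A minimal sketch of a few of the metrics mentioned above, with invented numbers; the NPS thresholds (promoters 9-10, detractors 0-6) follow the standard definition, while the data are purely illustrative.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 survey."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU: what share of monthly users come back on a given day."""
    return daily_active / monthly_active

def false_accept_rate(false_accepts: int, impostor_attempts: int) -> float:
    """Model metric: how often the system wrongly accepts (e.g. the wrong voice)."""
    return false_accepts / impostor_attempts

print(f"NPS: {nps([10, 9, 9, 7, 6, 3]):+.0f}")           # three promoters, two detractors
print(f"stickiness: {stickiness(46_000, 200_000):.0%}")  # 23% of monthly users return daily
print(f"false accept rate: {false_accept_rate(12, 1_000):.1%}")
```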
Richie Cotton: Yeah, I love that. So there's some very simple stuff, like NPS, which is just a short survey asking users to score you from 0 to 10, and then more advanced stuff, like measuring how often people come back to your product and how much money you're making from it. There's tons you can track, and I guess the key is deciding which are the most important metrics to follow in order to measure your success.
Okay, so speaking of success have you got any success stories from products you've been involved in building that you can talk about?
Marily Nika: Wonderful question. Yes, yes, yes. I joined Google in 2013, and we launched voice recognition; it's just incredible. I think we launched 200 voice recognizers across all these countries. That means literally adding the little microphone next to the search bar, on phone and on desktop.
And the next day you go and see people using their voice to search, getting images and text back. You just unlock this ability for people to browse the web by voice. And you can imagine the accessibility use cases: people who don't know how to write can now search and get the images they want, right? The impact is amazing.
Eventually I also owned Voice Match, which is the ability of the Google Assistant to recognize that it's Marily speaking versus someone else. That's an incredible product that I led for multiple years, because there's something about identity that I love.
Let's say you have a communal device in your kitchen and you walk in. It's different to just ask it to play music versus asking it to play music and it knows, oh, Marily will want to hear Coldplay, right? So this personalization element of the experience, I think, is very, very good. I have a ton of examples, but these are the ones that stick out the most, because of the impact we've had.
Richie Cotton: That is very cool, and yeah, voice interaction is just an incredible accessibility feature. People who can write and have access to a keyboard might think, well, I don't really need this, but there are so many situations where you can't write things down or type; driving in particular seems like an obvious one.
So yeah, the impact of that must be very satisfying, getting this capability to lots of people. On the flip side, has anything gone wrong? What do you wish you knew at the start of your AI product management career?
Marily Nika: Well, I wish I had known that it was AI product management; I wish I'd known what things looked like in classic product management. What happened with me is that I became very specialized early on. There's this thing called the generalist PM, where whatever they throw at you, you can thrive and do very well.
Whereas I started with AI, so for seven or eight years I was just doing AI, which is amazing. But after a point I was looking for the next step, trying to figure out what my identity is as a PM, and I realized there was only a finite set of things I could do.
So I wanted to flex the PM muscles, let's say, on something that didn't involve AI. And when I did that, it was kind of tricky for me, because I had this experimental culture, and people would say, no, no, we care about launching as soon as possible; no experiments allowed.
It was interesting to get used to that. So I guess I would have wanted to know that there are different types of PMs, and that adding different flavors as you get shaped as a professional would be good. But I will say that in the current market, with everything going on in tech, specialized PMs are the ones more likely to get a job faster.
Don't quote me on this, but I do feel, and I see from all my students in the bootcamp, that people who do get the specialization have a better chance of landing a job than the generalist PM. That's what's happening now; in the past, people wanted generalist PMs because there was just more diversity in jobs, let's say.
Richie Cotton: Okay, I feel that's one of those things that swings back and forth as the economy shifts, whether people want specialists or generalists. So, can you talk a bit about the tech stack involved? What sort of technologies do you need to know about as an AI product manager?
Marily Nika: Well, you need to understand the different ways a model can be trained. There is supervised learning, which is: imagine I have a four-year-old and a two-year-old. I walk down the street with my kids, and we see a squirrel and I say, oh, that's a squirrel. Or if there's someone with a dog, I say, that's a dog. Squirrel, dog, bird.
So essentially you provide a lot of labeled data: when you see something, here's its label; this is the squirrel, this is the cat. Eventually the kid gets it, and they say, oh, that's a cat, that's a dog. I don't tell my kids, hey, if the animal is smaller and has a fluffy tail, it's more likely to be a squirrel.
I let the kid figure that part out. So supervised learning is exactly that: you provide a ton of labeled data, you tell the system, okay, now do your thing, learn, figure out the patterns, and eventually the system says, yep, got it. So for every new animal that comes in without a label, the system will say, I'm 65 percent sure this is a squirrel and 1 percent sure this is a cat, so I'm choosing the squirrel.
That's how it works. There's also unsupervised learning, which is more like, hey, here's a bunch of data, I'm not going to tell you anything about it, you figure out the patterns, you figure out the clusters, and it can create clusters on its own. Use cases for this are more like news outlets, where it's like, oh, I think this article is about sports,
so it gets the sports label; I think these articles are more about politics, or pop music, and all these things. I actually created a graph that I posted on LinkedIn with the landscape of AI, because I was looking for something like that. There are more things, like semi-supervised learning.
And of course now, with LLMs, I wanted to call those out as a different bucket, because it's just a different technology: a different way of training, different models, transformers, and all these things. So, taking a step back, people need to understand this stack: the different ways of training, and the data requirements, metrics, risks, challenges, and trade-offs for each of them.
They also need to understand ops very well. I tell people this: you cannot have AI if you don't have an ops org staffed, because you will need to move data around, and you will need a human in the loop to say, yep, good or not good: trust, ethics. You definitely need the human to ensure everything gets there.
And the other thing is you need to understand what it takes to productionize AI. The AI can work perfectly in a lab and in an experimental server. But then when you actually take it through all the hoops of putting it through a system, launching it in the cloud or the web or whatever, or on device, whatever you do, the quality is significantly going to drop.
So you need to realize, okay, this perfect data I had to train this model works very well, but when it's out in the wild, am I going to get this data? Am I going to get the same quality? Productionization, and the challenges that come with it, is a huge and very important part of deploying AI, and something people need to be aware of.
Every company does it differently, whether a startup or big tech, so there's no cookie-cutter approach. But I will say, when you productionize, the quality does change.
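A toy version of the squirrel/dog example above: supervised learning from labeled data, with probabilistic predictions rather than certain ones. This sketch uses scikit-learn; the features and data points are invented, and a real model would learn from far richer inputs.

```python
from sklearn.linear_model import LogisticRegression

# Each animal is [body_size_cm, tail_fluffiness from 0 to 1]; the labels play
# the role of the parent saying "that's a squirrel", "that's a dog".
X = [[20, 0.9], [25, 0.8], [22, 0.95],   # squirrels
     [60, 0.3], [70, 0.2], [55, 0.4]]    # dogs
y = ["squirrel", "squirrel", "squirrel", "dog", "dog", "dog"]

model = LogisticRegression().fit(X, y)

# A new, unlabeled animal: the model answers with probabilities, not certainty.
new_animal = [[24, 0.85]]
for label, p in zip(model.classes_, model.predict_proba(new_animal)[0]):
    print(f"{p:.0%} sure it's a {label}")
```

An unsupervised variant would drop the labels `y` entirely and hand `X` to a clustering algorithm such as KMeans, letting the system discover the squirrel-like and dog-like groups on its own.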
Richie Cotton: So that's actually a pretty broad set of things you need to know about: a lot of different machine learning techniques, the MLOps side of putting things in production, the data quality and governance side, plus all the rest of the actual product management stuff.
So yeah, it seems like it can be incredibly broad, and presumably a very interesting kind of job. So, talk me through: what are you most excited about in the world of AI product management at the moment?
Marily Nika: You just don't know what's going to launch the next day. There may be something that completely destroys the vision you had, because there may be this whole other way of doing things that's going to make you pivot. And there's something my friend Hans said: hey, maybe we should consider that we no longer pivot, we evolve.
So I'm just very excited about the evolution of AI and its techniques. The thing is, the use cases remain the same, but the technology is what changes. So I'm very excited to explore different ways I can deploy technology for a specific use case. As an AI PM, you never get bored, just because of this.
And this has a negative side too, because the next day something may launch that's going to kill your entire business. I was coaching this incredible woman; she had a translation agency, and people would come to her, she would translate and certify, and that's it.
Then it was like, hey, gen AI arrived and no one is using my services anymore. And I said, yeah, but you have a unique opportunity here. Maybe you can create your own chatbot that can translate, but then you can also say, hey, I review it and certify it manually. So you can be the human who checks it.
So she started doing that, and actually her business is thriving and she works way less. I think it's a win-win. So we have to figure out how to evolve in this era of AI.
Richie Cotton: I really like that: making use of the AI technology, working less, and having your business thrive. That seems like a perfect outcome. It also speaks to what you were saying about having to pivot a lot; you don't know what those experiments are going to say, so you have to continually change what you're doing.
Just to wrap up, do you have any final advice for aspiring AI product managers?
Marily Nika: Things do get pretty technical, but don't let that overwhelm you, because it's not rocket science. You will understand it, you will learn it, and you don't need to learn all the math and all the crazy tech behind it; you just need to be aware, you need to have influence, and you need to be open-minded.
So please don't be afraid to embrace it. If you embrace it now, I guarantee you'll be ahead in the future, because it is coming and it's not optional, really. And if anyone is interested, I'm running my bootcamp; I have a cohort-based course that is very popular. I'm happy to have people, teach them, and certify them on how to be AI PMs.
Richie Cotton: Nice. All right. I like that: just go and embrace the change and keep going. And yeah, I'm sure many people will be interested in your bootcamp. Wonderful. Thank you so much for your time, Marily. That was great stuff.
Marily Nika: Thank you for inviting me, Richie. Have a great rest of the day.