Eryn is a future of work evangelist. She is the co-creator of a tool for assessing AI maturity, and regularly advises companies on how to assess and improve their AI maturity. Eryn is also the Editor of the Weekly Workforce newsletter and the Principal at the Startup Consortium consultancy. Previously, she was the Global Director of the Association for the Future of Work, and VP of Marketing at Andela.
Iwo Szapar is a serial entrepreneur with a passion for creating impactful solutions that enable people to work smarter, not harder. He is the co-founder of several innovative initiatives, including Remote-how, Remote-First Institute, AI-Mentor, and the Saudi AI Leadership Forum. Throughout his career, Iwo has helped transform how over 3,000 companies—including Microsoft, Walmart, and ING Bank—approach the future of work.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
One of the best tactics that really works for organizations is to organize hackathons. Hackathons can be just a few hours' session where you're using no-code, low-code tools, or platforms like Cursor, Replit, et cetera, where you build some prototypes as you go without the need to understand how to code or even to work with the data. What is really cool here is that you're building solutions for your own problems, for your own challenges, right? So you can immediately see the potential value that it can create, which then immediately creates buy-in for such tools.
Every company is a tech company now. If you think of anyone in the world, whether it's a veterinarian's office or anything else, they're all tech enabled. And this is really the era that we're entering with AI: everything is going to have some level of AI integrated in the tools, systems, processes, and everything that we're doing within business.
Key Takeaways
Identify and leverage AI tools that align with your organization’s North Star goals to ensure that AI implementations provide measurable value and align with business objectives.
Conduct a comprehensive AI maturity assessment to understand your current state and benchmark against industry peers, which will help in setting realistic goals and tracking progress.
Balance top-down and bottom-up approaches in AI adoption by forming interdepartmental committees to ensure both strategic alignment and practical implementation.
Transcript
Richie: Hi Eryn. Hi Iwo. Welcome to the show. Excellent. So, just to begin with, I'd like a bit of motivation. So it's often not clear what constitutes an AI mature organization. So do you have any examples of companies that you think really have done AI maturity very well? Iwo, do you wanna go first?
Iwo: So, I think there are quite a few companies leading the AI revolution from the software side that have really figured out AI internally. I'm talking about companies like Replit and Cursor. So on one side they're building amazing AI products. Both companies are enabling no-coders to build apps, build software, and I call those users software composers.
So they're growing very fast while keeping a fairly low headcount, which means they're efficient in scaling their internal operations using AI, both for automation and augmentation. But it's not only the small guys that stand out, and I know Eryn has some examples about the big guys.
Eryn: Definitely. I think for some of the smaller, more AI-native companies, it's very easy to adopt a lot of these changes and to try things out. And sometimes the bigger the ship, the harder it is to steer. But we are seeing really great examples out in the world of companies that are doing different aspects of AI maturity and readiness.
Well, one of my favorites is actually Deloitte. They're a huge, huge company, but they have this really great AI academy, and they make digital and AI literacy just a core part of everyone's role and responsibility and their learning.
Then you can always explore new tools that are easy to just load up, safe and secure in their own environment. So there are a lot of companies making not just knowledge accessible, but tools too, and even building things like governance and culture around it really effectively.
Richie: Okay. So it's kind of nice that some of these larger organizations are getting in on it. I mean, obviously if you're a new startup, you've got no legacy processes, so it's a little bit easier to be AI native. But if you've got decades of history around things, then yeah, I imagine it's a lot more effort to change things around.
So can you clarify: what are the real benefits to organizations from putting more effort into their AI maturity?
Eryn: The way I talk about it is: every company is kind of a tech company now. Like, if you think of anyone in the world, whether it's a veterinarian's office or anything else, they're all tech enabled. And this is really the era that we're entering with AI: everything is going to have some level of AI integrated in the tools, systems, processes, and everything that we're doing within business.
So I think the real benefit of taking this transformation seriously, of trying to get ahead of the curve, is that you can really embed it within your organization and make this digital transformation a little less painful than it could otherwise be. You can set up the right systems, the right processes, training, governance, everything that you might need, and really take control of it in the early days to experiment while you still have time, before it's too late and you're going, oh gosh, we should have done this six years ago and now it's harder to change.
So I think getting ahead of the curve is part of the game.
Iwo: And I would add on top of that: AI is so powerful when we're talking about leveraging data to change things or to implement things. We've been talking about a data-driven approach for many, many years, right? I mean, we're on a data-related podcast, so probably all of our listeners would agree, right?
But then when it comes to reality, it was not always the case. Too often we've been talking about data, but when it came to actually using data to make certain decisions, or to have data as a part of our day-to-day life, even for those of us not working in data science, working on something that in theory has nothing to do with data, it didn't happen. But data is everywhere.
I see AI giving us the opportunity to finally start measuring things that we were not measuring before. So in, I would say, phase one, it's about understanding where you are right now: what are you doing, how are you doing? Then, obviously, step two, once you know where you are: what are the things that you can improve? And then step three: how can AI, with the data, actually create totally new areas that were not possible before, because previously you needed a human to move this file between software A and software B and think?
That unlocks business potential which was not possible before, right? And this is just one of many, many, many aspects. I think it's really important right now to understand that this is the time for data to shine, which is great if this stuff becomes data.
Richie: Okay. So it seems like it's really a mix of being able to do the same things more efficiently and being able to unlock entirely new projects or entirely new business use cases. Actually, I like that idea of it finally delivering that data-driven dream we've been talking about for, well, at least a decade,
because it's a bit more accessible to more people now. You mentioned that one of the use cases there is just understanding the state of your own business. So, related to that, there's the problem that you're maybe not sure what your state of AI maturity is. I mean, I think about the DataCamp case: in some cases, yeah, we are very sophisticated in our use of AI; in some cases I'm like, ooh, that process is still a little bit manual. So overall, how do you measure your AI maturity?
Iwo: We've been asking this question ourselves since last year, and that was one of the reasons why we created the AI Maturity Index. Basically, if you look at how AI becomes a part of your day-to-day work, it integrates with aspects like productivity, collaboration, innovation, and creativity.
So it's not just that "I use AI at work." No: I use AI for A, B, C, D, to work with others, to work with myself, for brainstorming. So this is the mental model that really helps you understand where you are. And that's what we have in the Maturity Index, where, just through a 15-minute conversation with AI about how you use it, what the use cases are, what the tools are, how you feel about it, what the challenges are,
what the barriers are, what the opportunities are, how much time you save, et cetera, you then get an understanding of how you score in different areas, and you get recommendations. So I would say this is area one: getting a snapshot of how you operate as an individual.
Then, to your point, when you're looking at workflows, at how some of the work is done, it goes to the team level. So: how do we collaborate with each other? How does data or information flow between software, or between human touch points, between our meetings or asynchronous communication, et cetera, right? So then you're moving from just how you use it to how you use it as a group of humans. And then, of course, you can extrapolate it to a company, and then it becomes really big and complex. So it's not easy, but you should look at it from different angles, and there is one very, very simple, actionable way.
It's not just about assessing it, but about understanding how we use it right now and what the potential is. There is a framework called STEP. It stands for Segmentation, Transition, Education, and Performance. And this is a pretty complex framework for adopting AI.
But what is key is this first S, which stands for segmentation, and segmentation means that you look at all of the tasks that you're doing. You write them down, which is already a very interesting exercise to do. Most of us have maybe never even done it before. Like, hey, okay, so I do this, and then you end up with like 30 things that you actually do on a daily basis, recurring, or maybe monthly, et cetera. And then you do the following: you group them into three buckets. You group them into automation, so the stuff that can be potentially automated, maybe not even today; maybe tomorrow a new model will enable it, or maybe next month, et cetera.
You never know. Then you have a group for augmentation, and then you have a group for what should remain human-only. And with this very, very, very simple approach, you can start thinking about, A, where you are right now and what you're already doing. Then you can see the potential of what is possible.
So then you can start planning this, talking with your team, seeing if you have the necessary software, processes, et cetera. And you can also understand which things should not be touched by AI, because they require empathy and critical thinking.
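The segmentation exercise Iwo describes can be sketched as a small script. The task names and bucket assignments below are hypothetical examples, not part of the STEP framework itself:

```python
# A minimal sketch of the STEP "segmentation" step: group your recurring
# tasks into automation / augmentation / human-only buckets.
# Task names and their classifications here are invented examples.

tasks = {
    "draft weekly status email": "automation",
    "summarize meeting notes": "automation",
    "write marketing copy": "augmentation",
    "review candidate portfolios": "augmentation",
    "give performance feedback": "human",   # requires empathy
    "negotiate vendor contract": "human",   # requires judgment
}

def segment(tasks: dict) -> dict:
    """Group tasks by bucket so the automation potential becomes visible."""
    buckets = {"automation": [], "augmentation": [], "human": []}
    for task, bucket in tasks.items():
        buckets[bucket].append(task)
    return buckets

buckets = segment(tasks)
for name, items in buckets.items():
    print(f"{name}: {len(items)} task(s)")
```

Even at this toy scale, the output makes the point of the exercise: you can see at a glance how much of your week is a candidate for automation versus what must stay human.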
Richie: Okay, wow, a lot to unpack there. So I like the idea that you start with just working out, what do I do myself, how am I using AI there? And then you go to a team level, and you're gradually aggregating across larger and larger groups of people across your organization. And the idea of just making a list of all the things that you do during the day sounds like a terrifying bit of self-reflection. Like, how much of the time is just: well, I made a cup of tea, drank a cup of tea.
Iwo: I was in meeting one, then I was in another meeting, and another...
Richie: Okay. Yeah, so, having a chat on podcasts, I don't want to automate that with AI, but maybe some of the surrounding stuff, like writing marketing copy and things like that, that's very well suited to automating with AI. Okay. So, yeah, I like that idea of just taking an inventory of what you do and seeing which bits can be automated.
Eryn, did you have anything to add to that? Have you got any advice on the measurement side of things?
Eryn: Yeah, I think a big part of it is not to confuse AI adoption with AI usage either. So sure, you might use AI very frequently, or someone might be very technically advanced; let's say they've been an AI/ML engineer for the last decade. If someone actually fears that by using this new tool they might lose their job,
they're not going to adopt it, and they're not going to use it regularly. So I think it's really important not only to measure the tasks that you're doing, but the sentiment and the culture within your organization as it relates to these tools. If there's fear that something is going to take your job, or, you know, discomfort around the ethics of data management with these tools,
there are a lot of different reasons why someone may or may not use a specific tool within their workflows. So I think we have to look at all aspects of this, not just the technical side, but really the human elements as well.
Iwo: To mention another layer, I think we can then also go a bit deeper into the specifics. Then you're really looking into: okay, should this be an AI-powered workflow? Should this be an autonomous agent? Should this be a multi-agent framework, right, with many agents talking to each other and then producing the output?
So it's kind of like a decision tree, where you're defining things and then looking at different aspects. So it becomes a project that basically never ends, because the environment changes so fast. But as I keep saying to myself, AI is just another digital transformation.
It's just another revolution, probably the most disruptive that we've had as knowledge workers. But we had many before, right? So we will go through the phases of a typical change management process. Right now we're just starting, although it's been almost two and a half years since the ChatGPT moment, when it went mainstream.
But when we're speaking about the insights from the AI Maturity Index at conferences on different continents and on podcasts like yours, et cetera, we still see that we're just starting when it comes to the adoption of what's really possible, and of what people on X, on Twitter, are showing that they're doing, which is a bubble within the bubble,
within the bubble, when you compare it to the rest of the world.
Richie: Certainly, with some of the cool things that people are showing off on social media, it's like, well, yeah, okay, you're an AI engineer, you're gonna be able to do something a little more sophisticated than what's in mainstream practice at the moment. So, Eryn, I like your point about how it's not just about measuring adoption, like frequency of use.
You also have to take the cultural temperature and see whether people are happy about using AI, because that's gonna, I guess, predict your future adoption as well. Alright, so, I mean, we're talking about AI maturity, and that's one of the hot things right now. But it was preceded by people worrying about data maturity for a long time.
So what's the relationship between data maturity and AI maturity?
Eryn: Yeah, I mean, we all know the saying by now: garbage in, garbage out. And so I think it's really time to ask, in order to get the most gains, whether we have really great data sovereignty and maintenance, and just governance in general. Are we finally cleaning up all of our data lakes and everything else that we have, in order to, for example, make our own LLM?
But I think that it is sometimes a little bit chicken-and-egg, because you can use AI to help clean your data, and you can use AI to help produce data, and then all of this can feed back in. So it's a complicated relationship between them, but a relationship that is very tightly knit. Iwo, I'm sure you have some things to add there as well.
Iwo: I think what is pretty cool for the data community is that right now you're getting so many people within your organization, or among your clients, who are starting to understand how important data is and will be these days, especially the proprietary data that the LLMs were not trained on, especially the type of data that you're putting into your vector database and using RAG on for additional context.
So I would say that the importance, and the seat at the table, of data people just grew exponentially. Second of all, more and more people are starting to experiment with data, from using tools that are purely for data scientists through just using general chatbots, uploading some files, and talking with their data, et cetera.
And that shows people what's possible, even at this basic level. But it also shows people that you can get started. You can be a junior data person, you can learn quickly and also be a conversation partner to someone senior, right? So even though you might not have had skin in the game for many years in data science, being instead, I don't know, a sales, marketing, or product leader, et cetera, you might start to have a conversation with someone who is an expert, and understand, and then experiment, et cetera.
So I think it also creates a different level of synergy and exchange of information. It's not just, okay, you're the data people, you do this, and I just want the result. You can be a real partner in this conversation. And then lastly, you might also be building things for yourself. You don't always need to wait for someone. You can start experimenting, building some prototypes, and then, if you need to scale it to production, if it needs to become something serious, you're coming in with some sort of pre-tested thesis. So I think it's super good.
It democratizes access to data, working with data, and leveraging data for various business contexts. So I find it very helpful. And I've started to call myself a junior data scientist. Junior, junior. So yeah, I'm also loving data exploration myself.
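The retrieval idea Iwo mentions, putting proprietary data into a vector database and using RAG for extra context, can be illustrated with a toy sketch. Real systems use embedding models and a vector store; here plain word-overlap similarity stands in for embeddings, and the documents are invented examples:

```python
# Toy retrieval-augmented generation (RAG) retriever: rank internal documents
# by similarity to a question, then prepend the best matches as LLM context.
# Real systems use embeddings + a vector DB; Jaccard word overlap stands in.

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    return sorted(docs, key=lambda d: similarity(question, d), reverse=True)[:k]

# Hypothetical proprietary documents the base model has never seen.
docs = [
    "Q3 churn rose 4 percent after the pricing change",
    "Onboarding guide for the new CRM workflow",
    "Vendor contract renewal dates for 2025",
]

context = retrieve("why did churn change after pricing", docs, k=1)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: why did churn change?"
print(context[0])
```

The point is the shape of the pipeline, not the scoring function: company data the model was never trained on gets retrieved and injected into the prompt at question time.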
Richie: It's kind of interesting that it works both ways. So, Eryn, you were talking about how if you don't have data maturity, you're gonna have garbage being fed into the AI, and the AI outputs can be terrible. So you need to build the data maturity while you build the AI maturity as well.
But I also, Iwo, like the reverse: if you've got decent AI usage, then that's gonna help the people who are less technically inclined to be able to analyze data, and having better AI maturity might in fact help your data maturity as well. So I like this complex interaction between the two things.
Okay, that's cool. Alright, so let's talk about how you go about implementing something to increase your AI maturity. First of all, I presume this needs some kind of executive-level buy-in. Who needs to be in charge of the initiative to increase your AI maturity?
Eryn: Everybody. And this is not the easy answer, but it's the true one. So if you think about it, everybody's a really important stakeholder, right? You usually have your CEOs or COOs, who have these North Stars and goals, saying, what is the problem we're actually trying to solve by implementing this type of technology? You have the legal team, who probably have some level of governance and framework that they want to follow.
You have the procurement and finance teams, who are saying, gosh, how much are we gonna spend on tokens this year? You have the HR team, who want to make sure that everyone has the right learning and support, or that you're filling in skills gaps on teams where it's needed. You have all different types of technical teams, from implementation to software developers, who can make sure that you're going about the right integrations with what you've already got.
And of course, the data team, who are going to be working with all different types of data sources, whether that's coming from your marketing teams or wherever your data is coming from in general. So everybody has a really important role to play, and everybody should have a seat at the table at some point within the decision-making process and the implementation process, just because it is gonna touch every part of the business.
Richie: I can certainly believe that. With AI sort of invading everything, everyone's gonna need to be involved in some way. Does that mean you are advocating for a federated approach, where every team just goes and implements it? Or do you need some kind of top-down C-suite strategy, a command-and-control type situation where everyone thinks in harmony?
Eryn: It's a great question, because if we go too far in either direction, then it's an absolute mess. So imagine almost like a matrix where we have two planes: one is really low-control versus very high-control environments, and one is a bottom-up versus a top-down approach.
And we can see some of the issues with each of them. If you have just the executive team making decisions on what should be implemented, sometimes they can be out of touch with what is actually needed to do the job. But if we have everyone saying, hey, I'm gonna use this tool and I'm just gonna use it tomorrow,
it's a lawless place of a million different tools and a million different renewing subscriptions for all of these. So you really need to have balance in this direction. And it's the same with the level of control. With too low-control an environment, now you have to worry about data and security risks. With too
high-control an environment, sometimes if you put in too many controls and people don't have the tools that they need, they're gonna have shadow AI, and they're just gonna find the tool that they need anyway: go on ChatGPT on their personal device and do the work that they need to do that way.
So when it comes to implementation, it needs to be that kind of balance in the middle, both of control and of bottom-up versus top-down.
Richie: Do you need some kind of like committees then, or like, someone defining policies on what's possible and what's not?
Do you have any more implementation ideas on like, who needs to be in charge of this stuff?
Eryn: Definitely, interdepartmental committees are helpful. And I think defining clear roles in almost like a RACI structure seems to work well: who actually can make the final decision. Because, you know, not everything's going to be a true democracy, but a lot of people need to have input into what's happening, and at least give you extra context on what's gonna work well and what doesn't.
So I think it is important to have some sort of a committee that pulls in representatives from these different departments, which will make sure that this goes the most smoothly. And we all know it's change management: it's better to have people involved early rather than late, so that they can be a part of this change rather than having to simply react to it.
Iwo: I agree. Various teams, different levels. I think this is what can really make a difference when it comes to doing this right. You can then address the potential fears, the potential barriers and challenges that people have, right? Because you're doing this together. So it's really about engaging, listening, hearing, and then also addressing these concerns.
One of the best tactics that really works for organizations is to organize hackathons. So hackathons have been known as software engineers building software between Friday and Sunday, in the middle of the night, eating pizza and just writing code, right? But these days hackathons can be just a few hours' session
where you're using no-code, low-code tools, without the need
to understand how to code or even to work with the data. You just experiment with this. But what is really cool here is that you're building solutions for your own problems, for your own challenges. So you can immediately see the potential value it can create, which then immediately creates buy-in for such tools.
But of course, to Eryn's point, it needs to be managed well, according to internal policies, the tools that you can use, et cetera. But this element of, hey, we will be going through this together, those are our objectives, those are the business challenges that we would like to solve, let's do it together:
it will be way more efficient and also create way more good vibes than just going from the top to the bottom, saying this is what you need to do, this is how we do it, there's no discussion, you just go. Because then there will be resistance, et cetera. So co-creation here is extremely important.
Richie: It seems like it's a very good idea to get people trying stuff, building stuff. I like that hackathon idea, where you get people doing stuff, and then, as long as you start to see some little successes to begin with, that's gonna snowball and make sure that more people are excited about this.
Alright, so, in terms of the skills you need throughout your organization, what are the most important skills you need to make use of AI in the first place?
Iwo: Yeah. So, I think the most important one is to be open-minded. If you have that, then AI can be your guide to what to do next. So that would be one: experiment, try things out, and if this didn't work, then try out something else, et cetera. The second one, which is extremely obvious, and all of the L&D people have been saying this for many, many years:
we all need to be lifelong learners. And it was true 10 years ago, 50 years ago; it's true today, especially with AI advancing so fast and capabilities changing basically every day. If you don't keep up, then you're just falling behind. So it's extremely, extremely important for the organization, but also for ourselves, to understand that.
I would even go with a very pragmatic approach: have a specific slot during the day or during the week in your calendar when you're just learning AI, when you're reading about AI, when you're actually experimenting with new tools, new models, new use cases. Ideally, if you're part of an organization, it should be mandated by your organization:
that one hour a week, everyone just does whatever they want with AI. And then also have some sort of outcome from this. You can just write a short memo, or share your biggest win from this experimentation, what you learned, what you read, et cetera. So then it also creates this flow of information between each other:
hey, I did this, and John from marketing did this, and Mary from marketing did that. So it's the simplest way: you just give people some time on a recurring basis and give them a platform to share. So that would be the lifelong learning thing. And then, last but not least, and of course the list is very long and I would love to hand it to Eryn, is to stay curious and question the current status quo.
Because what is impossible today might be absolutely doable within a year, or six months, or maybe in a week, right? A lot of people go to an AI chatbot, ask a question, and get their response: ah, no, this doesn't make sense. And then they never go back, right? And this is exactly what you should not do. Instead, you should correct the AI, or you should be like, oh, but why did I get this?
Maybe my context wasn't enough. Maybe I didn't mention A, B, C, D, and then I would get the right response. So, be curious, and question how things were done previously, because knowledge work as we know it is going through a huge revamp.
Richie: Okay, I really wanna highlight your idea there of spending at least an hour a week doing some reading, doing some learning about stuff, because, yeah, things are moving quickly, and having that regular schedule of doing some learning is gonna help you keep up. But it sounds like, for a lot of those skills you talk about, for a lot of people it's just, as you mentioned, playing around with some prompts and seeing what response you get.
So simple prompt engineering is actually an incredibly valuable skill there.
Iwo: Super simple. Really, there's no magic. And maybe another highlight: it should be more about building and experimenting, and even less about learning by watching. I mean... okay.
Eryn: I don't know...
Iwo: I mean, you could be learning by building, you know.
Eryn: Yes. But I think we can't ignore the fact that there are things that people should learn before they even start.
Iwo: I agree. No, no, I agree. I agree. I agree.
Eryn: When we talk about things like AI ethics, or at least data awareness and how these tools function at a theoretical level, right? It's great to say, let's pick up tools and try them, and I fundamentally believe this as well.
I think you're better at it than I am; I have to remind myself to try things and build things, to just do things. Yes, but I think also it's not responsible in a lot of instances, especially in a work environment at larger companies, to say, go and try this tool, and maybe not be aware of where that data is going, or not be aware of how they're training these different models and where that's coming from, whether it's being trained responsibly and with permission on the data that has been collected.
So I think some of that theory does have to happen beforehand and alongside your learning, to understand how these tools operate, and really that safety and security side of things. But yes, prompt engineering: super important. Getting it to wear different hats, and getting more advanced.
I think output evaluation is another one that goes hand in hand with that: not just saying, do I like this or not, but is this hallucinating? Is this accurate? Am I using my critical thinking skills to see if this actually makes sense? So, I agree with everything you said, and then a "yes, and" on top with some of the theory too.
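Eryn's output-evaluation point can be sketched as a crude grounding check: flag answer sentences whose content words never appear in the source context. This is an illustrative heuristic, not a production hallucination detector, and all the example text is invented:

```python
# Crude grounding check for the "output evaluation" skill: flag answer
# sentences whose content words rarely appear in the source context.
# An illustrative heuristic only, not a real hallucination detector.

def ungrounded_sentences(answer: str, context: str, threshold: float = 0.5):
    """Return sentences where fewer than `threshold` of words occur in context."""
    ctx_words = set(context.lower().split())
    flagged = []
    for sentence in answer.split("."):
        # Only consider words longer than 3 characters as "content" words.
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        grounded = sum(w in ctx_words for w in words) / len(words)
        if grounded < threshold:
            flagged.append(sentence.strip())
    return flagged

context = "revenue grew twelve percent in the third quarter"
answer = "Revenue grew twelve percent. Headcount doubled across every region."
print(ungrounded_sentences(answer, context))
```

A real evaluation workflow would add the human steps Eryn names: reading the flagged sentences critically and checking them against the source, rather than trusting a score.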
Iwo: Right, because just watching videos and then not applying what you just saw in real life doesn't help. So I totally agree that some of the things we need to learn, we need to watch, et cetera. But then you have to bridge this gap by actually putting it into practice. Even if you won't build a tool for your day-to-day work, because of, I don't know, security, or you have a different tool stack, whatever, at least you'll leverage this and try to build and create something. And this will open your eyes to what's possible.
Richie: So, I like that you were kind of arguing, but I also agree with both sides of this. It is very important to do learning by doing, because you're gonna figure out what you need to know much faster; but having some conceptual ideas around responsible AI is also a good idea before you get started.
I can certainly say you don't want someone from HR uploading a list of all the employees' names to ChatGPT and then saying, pick 10 names for me to fire. That's gonna be a terrible idea. So, yeah, there are ways of doing things badly with AI; things can go horribly wrong. So having a bit of training beforehand is gonna be a good idea. Okay. So I guess we talked a bit about the talent side of things. In terms of tooling, do you have a sense of where you need to get started with this?
Iwo: So I would start with the big guys, to experiment with what OpenAI has to offer, what Anthropic has to offer, what Gemini has to offer. All of them have, on a basic level, very similar performance, but then there are differences. For example, Claude 3.5 and 3.7 Sonnet by Anthropic are way better at coding or at writing.
I find Sonnet a way better writer than other models. Then, for example, OpenAI's Deep Research feature is extremely good at running research that can go on for even up to 30 minutes, researching dozens of pages and giving you a final output with the answer that you're looking for,
creating a report as if a junior McKinsey consultant produced it, right? So I think you should experiment with different models on this initial aspect. So that's area number one. Then area number two is: what kind of tasks are you doing on a recurring basis that you should create assistants for, right?
And this is again where you can either go back to these models and use, for example, Projects by Claude or Projects by OpenAI, where you keep your context about the specific project or task that you're doing, and then you keep coming back whenever you need to do it again.
So they become like your brains for specific areas or tasks of what you're doing. Or you can use a separate platform that is dedicated to one specific area as an assistant in area A, B, C, D, but that becomes more of a standalone software, right? That would be the second point.
The third point is if you really wanna start using the data between different software as well. A good analogy here, in the pre-AI-boom era, would be the Zapiers of the world, the Makes of the world. Right now these platforms have also added AI features. But there are also platforms where you can build your own agent that takes the data from your call transcript and puts it into your HubSpot, and it does it every time.
And then you can also ask questions about the call, et cetera. So those are the pre-made assistants or agents that can connect with your software and interact with it. And then, maybe last but not least, 2025 is the year when autonomous agents are finally becoming a thing. For the past two years, a lot of people have been calling "agents" something that was really just an AI-powered workflow, where you had steps 1, 2, 3, 4 and the AI knew exactly what it needed to do.
But right now, with autonomous agents, you just give it a task and it'll keep trying, keep trying, using different tools, maybe even coordinating with other agents, to execute it. This is the moment when it's becoming possible, so you'll see more and more use cases like this. There are open-source frameworks for that, like CrewAI, or LangGraph by LangChain.
So you can start experimenting with building these; even if you're not a software engineer, you can still start playing around with these tools.
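As a rough, hedged sketch of what such an autonomous-agent loop looks like under the hood: the task, the tools, and the tool-choice policy below are all invented stand-ins, since a real framework like CrewAI or LangGraph would delegate the "which tool next" decision to an LLM.

```python
# Illustrative sketch of an autonomous agent loop, in the spirit of
# frameworks like CrewAI or LangGraph by LangChain. The tools, the task,
# and the tool-choice policy are invented stand-ins; a real agent would
# ask an LLM which tool to call next.

def search_docs(query: str) -> str:
    """Stand-in tool: pretend to look up a fact."""
    return "HubSpot API supports contact upserts"

def summarize(text: str) -> str:
    """Stand-in tool: pretend to summarize a finding."""
    return f"Summary: {text}"

TOOLS = {"search": search_docs, "summarize": summarize}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Keep trying tools until the task looks done or we run out of steps."""
    memory = task
    for _ in range(max_steps):
        if memory.startswith("Summary:"):
            return memory  # task complete
        # Fixed search-then-summarize policy standing in for an LLM decision.
        tool = "summarize" if "HubSpot" in memory else "search"
        memory = TOOLS[tool](memory)
    return memory

print(run_agent("Find out what our CRM API can do"))
```

The property the conversation highlights is the loop itself: the agent re-plans after each tool call instead of following a fixed step-1-2-3-4 workflow.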
Richie: Okay, wow, so a lot of things. Just to summarize what I got from that: you start off with just your chatbots, the large language models, and then you gradually work up to making use of AI agents, so there are these different levels of maturity in terms of your process or your tooling as well.
Okay. And I guess it's gonna have some knock-on effects on other software ecosystems as well. As you upgrade your AI tooling, is that gonna affect your data tooling too?
Iwo: I think it was Satya Nadella, if I'm quoting him right, who said that we need to rethink not just how we work, but also what kind of tools we're using. And obviously, how does that affect the data? Because you can use the logic that most SaaS software is just a wrapper on a SQL database with a nice UI, right?
So why do you need that UI? You can just talk directly with the database. And what is that database worth? Maybe I can have my own, right? So I think there's a lot of paradigm shift happening, both in the software/SaaS world, but also when it comes to data. When it comes to data, what is really happening, and will keep happening, is that more and more unstructured data is being transformed into structured data that AI can use, that we can use at work, and that software can finally interact with.
So it's not just a bunch of random text, but something that starts to have business value. And I think an interesting example here is what we're doing with the AI maturity index interviews, right? When the user talks with the assistant, it's just an unstructured conversation, as if we were having an interview with the user.
But then the ETL pipeline, together with AI, structures it into a JSON that can then be analyzed, and the report for you as a user, with your score and insights, can be produced. So it's a very simple use case of transforming unstructured data, which could also be a meeting transcript, into something structured that can bring business value.
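A minimal sketch of that unstructured-to-structured step. Nothing here reflects the real pipeline: in production an LLM would do the extraction, and the field names below are made up for illustration; a couple of regexes stand in for the model.

```python
import json
import re

# Illustrative sketch: turn a free-form interview transcript into a
# structured JSON record. In a real pipeline an LLM would extract the
# fields; these regex heuristics and field names are stand-ins.

def structure_interview(transcript: str) -> dict:
    """Extract a few structured fields from unstructured text."""
    # Capitalized word following "use"/"uses"/"used" -> a tool mention.
    tools = re.findall(r"\buse[sd]?\s+([A-Z]\w+)", transcript)
    return {
        "tools_mentioned": sorted(set(tools)),
        "mentions_automation": "automat" in transcript.lower(),
        "word_count": len(transcript.split()),
    }

transcript = (
    "We use ChatGPT for drafting emails and Zapier to automate handoffs. "
    "Most reporting is still manual."
)
record = structure_interview(transcript)
print(json.dumps(record, indent=2))
```

Once the record is JSON, it can be scored, aggregated, and fed into a report, which is the business value the unstructured conversation alone doesn't have.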
Richie: it feels like there are many, many possible things you can do to increase your AI maturity. How do you go about deciding where to start? How do you prioritize things?
Eryn: Yeah, I think it's important to look at the North Star of the business, of course: what are we actually trying to accomplish? We shouldn't just be using AI for the sake of using AI, although, you know, we love to tinker; sometimes that's just what we do on a Saturday. But from an organizational standpoint, that doesn't always work.
So I think it's about how we can look at the value that this implementation is going to give us, and whether we're gonna be able to measure against that. Are we using AI simply to increase productivity, and that's what we should be doing, so we're looking at time savings and cost savings? Are we looking to improve our innovation so it can help us provide more value to our end customers?
Is it gonna have a market impact? What is the purpose of this implementation, and what type of solution is gonna get us there fastest? That's really how we should be prioritizing these types of initiatives.
Richie: Okay. Yeah, I suppose that makes sense. You gotta think about what your business's North Star is and then make sure that whatever you're doing is gonna align with that. So, yeah, think about what your business goals are. But in general, is there any low-hanging fruit that typically arises?
Like, what's a common easy first step?
Eryn: I think Iwo mentioned a lot of really great tools that people can start to dabble with, right? Start to look at some of those small tasks, as you mentioned in your task-analysis exercise: what am I doing every single day? What can be augmented, what can be automated, and what is something I'm gonna continue to do on my own?
And you'll find that there are a lot of these small little tasks that most of us do, right? Stuff like communication or having meetings. Maybe you could try out some meeting notetakers that can automatically file actions for you into your Notion or your Asana or your task tracker. So, small things that might help with automation or lighten some of the administrative burden that you might experience every day.
I think that's a pain we all feel, so just start looking at some of your tasks. I think there are a lot of small, quick wins that you can get with these tools.
Iwo: I think one of the moments when I personally had that aha, wow kind of feeling was when Claude introduced Projects, with the ability to give the AI context, and then different files to expand on that context, and then keep coming back with different topics, questions, kind of like mini-conversations related to it. That was a game changer, right?
So I have, I don't know how many right now, but like 12, I guess, different projects where I keep coming back, and I also keep updating that overall context. I'm hoping that at some point AI will be smart enough to update the context itself. I know that OpenAI has a memory feature, but it's not as good as it could be.
But basically everyone has certain areas of their work that are distinct and that they keep coming back to, right? It doesn't need to be a specific task; it's kind of like an area, you can call it a project, whatever. And then if you invest time into bringing in this context about the area or project, the AI can actually even help you write it, right?
So you can just start a new project in Claude and say, "Hey Claude, I would like you to help me write the context for this project." Even the AI can help write it. That's the game changer: using AI in a way where you don't need to provide the context every time, because from the beginning of the conversation it already has that context.
So this is magic, but of course it requires you to do some extra work in the beginning to build this context, and then to keep updating it. But I think this is the simplest thing when we're talking about the tools that people already have access to. OpenAI and Anthropic have these features, and they're not the only ones.
And we can immediately see the value that they can create.
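The pattern being described can be sketched in a few lines. Everything here is hypothetical: the `Project` class, the example context, and the prompt format merely stand in for what Claude Projects or OpenAI's Projects do behind the scenes, namely prepending stored context to every request.

```python
# Sketch of the "project" pattern: persistent context that ships with
# every question, so you never re-explain yourself. The class, fields,
# and prompt format are invented for illustration; build_prompt() stands
# in for what a real LLM API call would receive.

class Project:
    def __init__(self, name: str, context: str):
        self.name = name
        self.context = context

    def update_context(self, extra: str) -> None:
        """Fold new background into the stored context."""
        self.context += "\n" + extra

    def build_prompt(self, question: str) -> str:
        """Every question is sent with the full project context attached."""
        return f"[Project: {self.name}]\n{self.context}\n\nQ: {question}"

newsletter = Project(
    "Weekly Workforce",
    "Audience: HR leaders. Tone: practical, no hype.",
)
newsletter.update_context("House style: short paragraphs, one CTA per issue.")
print(newsletter.build_prompt("Draft a subject line about AI upskilling"))
```

The up-front cost is writing and maintaining `context`; the payoff is that every later conversation starts already informed.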
Richie: Yeah, so I like these ideas about just automating simple bits of your work that are gonna give you an immediate productivity benefit. And yeah, making use of projects sounds like a good idea. There's this thing about how AI is supposed to be like an intern for you, but it's an intern that forgets everything you told it yesterday.
So, uh, being able...
Iwo: An intern, yes, but it's an intern with a PhD.
Richie: Cool. Alright, so let's talk about how you measure success. What constitutes a successful AI maturity project?
Iwo: AI does everything for you, and you can just chill by the pool.
Richie: I've been hoping for this for decades.
Iwo: Still not there yet? No. No, we're still not there yet.
Richie: Uh, yeah, I feel like my boss is always gonna throw more work at me whatever happens, even if it's all fully automated. Anyway, talk me through it: how do you know you've done something good?
Iwo: Yes. So I think it's really important to still have a framework that will help you understand how you're doing, and that's the framework we mentioned initially, the STEP framework. We've been talking a lot about the first letter, S, the segmentation of your tasks, but what happens after that is really important too.
So there is a T that stands for transition, and then there's an E that stands for education: typical stages where you need to have proper change management in place, to understand how you're then adjusting these tasks and these areas, and then educating employees, your team, on how to do it, and you keep improving.
But then the key is in the last letter, P, for performance: you need to actually be measuring this, right? So from the get-go, from day one, you need to understand what kind of goals you have and how you will actually measure them. Whenever the project starts, you need a snapshot of where you are right now when it comes to your AI maturity.
And then you need to be doing these measurements on a monthly, quarterly, or yearly basis to see how things are changing, because you'll need to be adjusting things and implementing the new advancements in technology, et cetera. So what really helps is to have a structured framework and things that you're measuring.
So then you can call it a success.
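A toy illustration of that P step: take a baseline snapshot of a few metrics, re-measure each quarter, and report the per-metric change. The metric names and numbers are invented for the example.

```python
# Illustrative sketch of the "performance" step: a baseline snapshot
# plus a periodic re-measurement, reporting the change per metric.
# Metric names and values are made up.

def maturity_delta(baseline: dict, current: dict) -> dict:
    """Per-metric change since the baseline snapshot."""
    return {k: round(current[k] - baseline[k], 2) for k in baseline}

baseline = {"tool_adoption": 0.20, "hours_saved_weekly": 1.5}
q1_snapshot = {"tool_adoption": 0.45, "hours_saved_weekly": 4.0}
print(maturity_delta(baseline, q1_snapshot))
```

The point is only that the baseline must exist on day one; without it, later snapshots have nothing to be compared against.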
Eryn: It's also important to call out the difference between project management and change management, and when you should be measuring different things. A lot of these implementation projects take a very, very long time. And I think those of us who are maybe more on the data and engineering and technical side go, okay, the project's over as soon as the last Jira ticket is filed.
Right? So we're done: project over, success, right? Like it's just a checkbox, and that is the measurement. But in reality, change management is about when it's being used as business as usual. So I think the baseline that you measure there is potentially the tool usage and the productivity and cost savings.
But then we start to look at more of the value-creation or risk-reduction or innovation metrics that go alongside it as well. So what you might measure at the beginning of the project, in terms of tickets or implementation, is gonna be very different from what you measure when the tool is actually shipped,
and very different again when it's being used as business as usual. So think about what those different indicators are throughout your entire process.
Richie: I like the idea that it's only done once the new tool's been adopted and is business as usual, rather than just, yeah, it's been implemented and shipped. Okay, so how long does this take, then? What's the sort of timeline? I know a lot of companies don't think past the next quarter, 'cause, you know, that's what the target is.
How long do you have to wait to get good at AI?
Eryn: I think there are lots of small wins that you can get right away. With some of the out-of-the-box tools and solutions that we've been talking about today, these third-party tools, you can start seeing a lot of gains pretty much right away: the benefits of having a really great brainstorming partner, or some of that time saved.
But it's unrealistic for us to say, hey, everybody's gonna get value on day one, especially for a large custom solution or project that someone's building in-house at a large enterprise or something like that, right? Rome wasn't built in a day. But we can say, okay, what does that leading indicator look like?
So what is the immediate quick win that gives me that big satisfaction and motivation to keep going with this project or with my AI learning? And what are those leading indicators as well, right? I'm gonna put in the work today to learn these skills, which means that when I use AI in a couple of weeks or months, it's gonna be that much more valuable or give me that much more output.
So we have to accept that not everything's gonna give you an immediate payoff, and you're not gonna be able to measure all of it right away, but there are lots of building blocks you can lay now that are gonna pay dividends in the future.
Richie: I like it. Yeah, so you've gotta have some quick wins, but expect it to be an almost never-ending process of just getting better at using AI. Okay. Alright, do either of you have any final advice for organizations wanting to improve their AI maturity?
Iwo: I think it all starts with understanding where you are right now. So get this snapshot of how you're using AI today across different areas: productivity, collaboration, decision making, innovation. Really figure out what tools you're using. One of my favorite areas is use cases.
So look at the very, very specific use cases that people are relying on, either to automate or to augment things. That would be one. Then see how others are doing: understand the benchmark, how you benchmark against your peers in the industry or in your profession. Get inspired; a bit of gamification.
Those are the things that really help motivate us. On the individual level, it also really helps to sign up for a few AI newsletters and to follow a few people on LinkedIn or Twitter, whatever you're using, to be inspired. It doesn't matter if you're an individual contributor or a CEO; stay up to date with what's happening.
So once you know where you are, and once you keep being fed updates on a day-to-day basis, then either go with the idea of the hackathon, if you already have certain tools that are approved internally and people are using them, and think about what can be built during such a hackathon, right?
Or look for some specific use cases where you need to build custom solutions, then form cross-functional teams that will work on them, or get external partners, et cetera. Those would be a few of the things that come to my mind.
Eryn: Yes. I think mine all comes down to the human side of things. Of all the AI conferences and meetups I've been to in the last year, believe it or not, very little has to do with the technology and all of it has to do with people and change management. So my final advice for businesses is really to bring people along on the journey as early as possible.
So how can we really think about how this is gonna benefit the members of our team? Who can we identify as the change champions, the ones who can really pioneer and innovate in their own area? And how can I pull together a full list of stakeholders, which is probably gonna be more comprehensive than you think, and make sure that they all have a seat at the table for all of these important discussions and decisions?
So really, don't forget that AI isn't just tools and technology; it's about who's going to use it and who's going to benefit from it. That is the most important part of this discussion.
Richie: Alright, tons of great ideas there. Yeah, Iwo, I like the gamification idea; I'd love to beat colleagues at random internet points. And Eryn, yeah, I do like the idea of focusing on the people side of things and concentrating on change management, because that's the way things are gonna work.
Alright. And Iwo, since you mentioned that it's a good idea to follow people on social media just to see what's going on: I'm always in need of follow recommendations. So, do you have any advice on whose work to follow? Who should I follow?
Iwo: Yes, two people come to my mind. One gentleman, his name is Dan Shipper. He has a company that's like a media company, and he does a lot of experimentation with AI. He also runs a podcast about AI, AI & I. I'm not getting paid by him, but I'm kind of a fan. And then the other one, not only from an AI perspective, would be a guy called Greg Isenberg.
He talks a lot about micro-startups and entrepreneurship. So if someone is interested in building something that doesn't need to be a unicorn, that can be a sustainable business, he has a lot of thought-provoking ideas on building a micro-startup. So Dan Shipper and Greg Isenberg, both on LinkedIn.
Richie: Eryn, do you have anyone, uh, I should follow?
Eryn: Oh, there are so many. I really like The Rundown AI; it's a great newsletter, and I follow them. They talk about all different types of happenings in the AI industry. We've also worked a little bit with Ben from Ben's Bites, and he does really great step-by-step tutorials on actual use cases.
So it feels really tactical and hands-on; you can roll your sleeves up and try some the same day. So I'm probably not as inspirational and futuristic as Iwo sometimes; I'm like, tell me what I can do right now.
Iwo: Seriously. Yes, you are.
Richie: Nice. Alright, so, uh, yeah, lots of people to follow. And, uh, yeah, I always get that tension: I've gotta be patient and think about the far future, but I do want a good thing straight away. Wonderful. Thank you so much for your time, Iwo. Thank you so much for your time, Eryn. Great to have you both on the show.
Eryn: Thanks.