
Data Science & AI in the Gaming Industry

Marie and Adel discuss how data science can be used in gaming and the unique challenges data teams face.
May 2023

Guest
Marie de Léséleuc

Marie de Léséleuc is an accomplished game industry professional with over a decade of experience. Marie started her career as a data analyst and has since risen through the ranks to become a data leader in the gaming industry. She's worked at companies such as Ubisoft, Warner Bros., and most recently Eidos, the studio best known for games such as Guardians of the Galaxy and Tomb Raider.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp where he has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

We are living in a very exciting time because we are just beginning to scratch the surface of everything that could be done with data science in games and this is still a very new kind of adventure for a lot of studios and I feel like there is so much potential, so many things that could potentially be done to make it better for everyone.

No one can predict the future, but I think any studio that is a little bit serious about data and how they want to use them to improve their game is going to use generative AI because it's so powerful. One of the things it could be used just to start would be generating dialogues for the NPC, barks for the enemies. You know, any of this kind of stuff that is usually painful to do by hand and could just be automated so, you know, people that are developing the narrative of the game can focus on more interesting stuff. There are so many ways it could give the feeling of a more vivid world through the kind of experience it provides. 

Key Takeaways

1

Data Science is a versatile tool in game development, encompassing many different applications: creating challenging enemies by analyzing how efficiently players battle them, improving NPC dialogue using AI, optimizing for different consoles, and scraping early reviews to gauge sentiment towards a release. With so many opportunities for data collection, data science helps make games better in a variety of ways.

2

When working with other departments on large collaborative projects, ensure a better flow of communication through the use of custom dashboards for each team. Frame the data you have to be specific to the function of each department to reduce friction.

3

When your data team is working in the ‘middle’ of several other teams, think about using agile methodologies to keep your communication and release cycles shorter. Utilise sprints, and include other departments in key meetings to ensure there is alignment across teams and information is well circulated.

Transcript

Adel Nehme:

Marie de Léséleuc, it's great to have you on the show!

Marie de Léséleuc:

Nice to meet you. I'm very happy to be here.

Adel Nehme:

Okay, so you've been at the intersection of data science and gaming for more than 10 years now. So maybe to set the stage for our conversation, let's say I'm playing Guardians of the Galaxy on my PS5. What role did data science play, or is it still playing, in making sure that I'm having a great experience?

Marie de Léséleuc:

Okay, so of course I can't reveal anything that would be NDA related, but on a general basis, what is going to be happening is that the data generated by players when they are playing is collected pre-launch and after launch to help developers understand how people are actually playing the game, and whether what they had in mind is actually what is happening in the game. So there are a lot of different ways data science can be used to make sure this happens. You can use data science to optimize the systems of the game, for instance. You can use AI to test the game automatically to see if there are any kinds of issues. And of course, there were also data analysts, and they were working with data scientists to provide all kinds of reports and dashboards to better understand what we were observing in terms of behaviors.

Adel Nehme:

So that's really great. In a lot of ways, there are a lot of different use cases or areas where data science provides a lot of value in gaming. You've led quite a few different data teams in gaming, so maybe walk us through the different areas within a game where data science and machine learning play a big role in delivering value.

Marie de Léséleuc:

Okay, so what is very interesting about data science is its versatility; it's encompassed in so many different applications. In my previous job, we actually had an AI team, we also had people inside the development team working on those kinds of things, and we had our machine learning people in my team working on different things related to the business. So in a game, we can discuss, for instance, the AI that is being developed to help a player have fun by engaging with NPCs that are well defined, but also by fighting enemies that are an actual challenge. There is more and more data being used during development to be able to achieve those kinds of goals. There is also optimizing anything like the sound of the game or the graphics, and how to use machine learning and data science to help with those kinds of challenges. And on our side, we've also been using data science to determine how people were reacting to everything related to the game. So with reviews, with journalists publishing different reviews, we were able to extract information from those and say, okay, this is what people are actually talking about and this is the sentiment associated with it. So what kind of things could you do based on that? There are so many possible applications, and different people specialized in those specific areas.

Adel Nehme:

That's very cool. And as a gamer myself, it's very nice to see the applications of data science across the chain of gaming from development to the actual AI behind the NPCs. And even after release, because the community aspect of gaming is such a huge, important element that makes or breaks the success of a game. So understanding the pulse of the fans, what they want, et cetera, is really important. So I want to maybe pause at a few use cases that you talked here about. Let's maybe discuss early in development. I'm very keen on discussing what goes into AI development for an NPC, but first let's talk about development. There's a lot of data science use cases that go into QA testing, into making sure that the game is running smoothly, it releases bug free. We've seen a lot of disasters in the industry, for example, where games were released prematurely with a lot of bugs. We've seen a lot more attention nowadays being applied towards QA testing for games. Maybe walk us through an example of how that looks like in action. What is a process of setting up a data-driven QA process?

Marie de Léséleuc:

Ok, so for this particular aspect of things, we were actually working with the AI team in our studio to develop those basic agents. So it was not my team, but we were working with them to be able to provide the dashboards and everything. But the idea is you can basically teach an agent to do different kinds of actions, or to learn from its environment and find stuff by itself. And then you can deploy them in a portion of the game or any kind of place you would like to improve. So they would just play, and they will generate data, basically. And using this data, you will be able to determine, oh, it looks like this seems to be a problem. So for instance, if you see your agents are always falling in the same hole again and again, okay, maybe there is a problem and we should just get rid of this because it's very confusing. And the idea is you can generate so many of them, and they can do almost anything in the game, so it's easier for them to find very specific issues that would be difficult for a human to find. And you can make them work during the night when you're basically sleeping. So there are a lot of very interesting things that could be done to improve the game based on these kinds of agents just playing it, finding edge cases, and providing us with all of the data they generate when they play, so we can analyze the data after that and say, okay, this is what we observe. So there are more and more companies, I think, that are going to use this kind of QA. It's not going to replace a QA professional, because a QA professional is absolutely necessary to understand and interpret the data and see if they're seeing the same kinds of issues, but it's a tool at their disposal, basically, to help.
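The "agents falling in the same hole" analysis Marie describes can be sketched as a simple telemetry aggregation. Everything here is hypothetical: the event names, map-grid cells, and the "three distinct agents" threshold are invented for illustration, not from any real studio pipeline.

```python
# Hypothetical bot-agent telemetry: each row is one failure event
# ("fell", "stuck") tagged with a coarse map-grid cell.
import pandas as pd

events = pd.DataFrame({
    "agent_id": [1, 2, 3, 1, 2, 4, 5, 3],
    "event":    ["fell", "fell", "fell", "stuck", "fell", "fell", "stuck", "fell"],
    "map_cell": ["B7", "B7", "B7", "C2", "B7", "B7", "C2", "A1"],
})

# Count *distinct* agents failing per cell: a spot that traps many
# different agents points at level design, not one buggy agent.
hotspots = (
    events.groupby(["map_cell", "event"])["agent_id"]
          .nunique()
          .reset_index(name="distinct_agents")
          .sort_values("distinct_agents", ascending=False)
)

# Flag cells where three or more distinct agents failed (arbitrary threshold).
flagged = hotspots[hotspots["distinct_agents"] >= 3]
print(flagged)
```

With this toy data, only cell B7 is flagged, four distinct agents fell there, which is exactly the "same hole again and again" signal described above.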

Adel Nehme:

That's very fascinating. And in a lot of ways you mentioned here, I think two main use cases or value drivers behind this is one, you uncover potential bugs within a game, right? By looking at that data, but you also make a game better by uncovering potential issues in the level design. You mentioned here, if a bot continuously falls in a hole, then there's probably something wrong with the level design that needs to be improved. Maybe unpacking a bit of the methodology, are these reinforcement learning agents that are being used or are they programmed relatively more strictly? What is the methodology behind these tools?

Marie de Léséleuc:

Ok, so from what we were able to see working with this team in particular, they started with something very simple, just to be able to implement the agent and see how it was reacting in the world, because the reality is it's very complex to deploy an agent in a game. So first it was very much rules that were predetermined for the AI, saying, okay, we would like you to learn how to, I don't know, take four steps and hit something. But the idea was, in the long term, for this to actually be reinforcement learning, so bots being able to learn from their environment and take actions by themselves.
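The progression Marie describes, from hand-written rules toward agents that learn from their environment, is the territory of reinforcement learning. As a toy sketch only (a five-cell corridor, nothing like a real game-engine integration), tabular Q-learning looks like this:

```python
# Toy tabular Q-learning: an agent in a 5-cell corridor learns to walk
# right to the goal. A sketch of the RL idea, not a real game integration.
import random

N_STATES = 5            # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]      # 0 = step left, 1 = step right
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for _ in range(2000):                                   # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action choice
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)  # walls clamp movement
        r = 1.0 if s2 == N_STATES - 1 else 0.0          # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: 1 ("step right") everywhere if learning worked.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The appealing part for game QA is that nothing corridor-specific is hard-coded: swap the environment and reward, and the same loop explores whatever you deploy it in.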

Adel Nehme:

That's very fascinating and you know related to this agent design in AI is actually the NPC AI that is developed You know, let's say you're playing Call of Duty or playing, you know your favorite action game how the enemies attack you where they hide the level of difficulty that is all predetermined as well, maybe walk us through that type of AI When it comes to applications in gaming, what's the process of that?

Marie de Léséleuc:

Okay, so this one is not, I would say, in my particular bag of expertise. This is usually taken care of by the development team itself. So there are people specialized, like programmers, scripters, and so on and so forth, that are dedicated to doing these kinds of things. But I think more and more there is going to be collaboration between those specific people and data scientists, who would be able to say, okay, instead of scripting a certain number of things, you can actually use a reinforcement learning model to dictate, basically, the behaviors of all of those. But I don't think we are there just yet. I haven't seen a lot of people doing it just yet, but I know it's something that is being discussed more and more because, obviously, there are so many possible applications.

Adel Nehme:

That's very fascinating. And I do see the reason why you don't necessarily want to have a reinforcement learning agent, because you do want some level of predictability over what the NPC does. And you mentioned another set of use cases here, actual commercial use cases: being able to understand what the community engagement has been like for a game, being able to understand how the game is being played after release so that there can be further optimization. Maybe walk us through these use cases in a bit more detail. What does it take to set up these types of use cases?

Marie de Léséleuc:

Yes, exactly. Absolutely. At pre-launch, what is interesting, I was discussing, for instance, the opportunity to optimize the systems of the game. So there are a lot of systems in a game, for instance the economic system, the combat system, all of those kinds of things. And the idea is, before the game is released, it's always interesting to get people to actually play and generate data, so we have an idea of how to analyze those. So we had people coming in pre-launch and doing exactly this. And the idea was also for us, at least what we had in mind, to use those bots, the ones I was talking about, to generate as many of them as we could and let them play and see how they were interacting with the game and where the systems were actually failing. So it could be, for instance, they are collecting far too many resources and there is no challenge for them to purchase anything. It could be, oh, you wanted the boss to be defeated in no less than 10 minutes, and most of the time the players and those bots are able to defeat it in like three minutes. So there is a problem there. So those would be inside of the game, like the optimization of the game. But there were also, as you are saying, business aspects to what we were doing. One of these was, for instance, developing a tool that allowed us to scrape the reviews from journalists and people reviewing the game on Metacritic, for instance, or Steam, or any of this kind of stuff, and being able to do some topic modelling on it to determine what kinds of things were being discussed in the game and what sentiment was associated with that. So this is very important for marketing, for instance, to have an idea: okay, what are people saying at the moment? What are they discussing? What kind of sentiment is associated with that? And using it, we had a dashboard and everything.
The model had been deployed and we were able to use it.

Adel Nehme:

I think it's very fascinating. From your perspective, you sit as a leader within the industry, and it seems like you need to have strong collaboration with almost everyone within a gaming organization, whether that's the engineering department, the creative studio who wants, you know, a boss battle to be 10 minutes or around that vicinity of time, where you need to be able to provide them good direction over what makes a challenging boss battle, but also the commercial teams, finance and marketing, and be able to be supportive to them. So maybe let's start on the development and creative side of the coin. What does good collaboration look like for a data science leader in this space, to be able to understand what the true requirements are for a game to be successful? How can I inform the agenda? Walk me through the challenges there and how you've seen modes of success.

Marie de Léséleuc:

Yeah, so this is very much the art of everything. You need to have everyone on board with you, and you need to prove to all of those people that what you are doing is actually useful to them. Because developing a game is difficult; it requires a lot of time, a lot of effort, a lot of money. And basically what we were trying to convey is: we are there for you. We need to collaborate, and anything we do is to answer the questions you have and help you meet your objectives. So the team we had was very much in the middle of, as you were saying, a lot of teams. We were working with user research, we were working with marketing, with PR and the game development team. And to be able to keep an open communication with all of those people, we were basically using the agile methodology. So we were working in two to three week sprints, and during the sprint we were collecting any requirements, making sure they were actually met at the end of the sprint, and so on and so forth. And what we were also doing, for instance, to make sure we were aligned with the development cycle of a game, was asking to be part of some of their more important meetings. So being able to say, hey, could we be there when you are having your feature-development meetings, and so on and so forth, and making sure that the information was actually circulating between everyone.

Adel Nehme:

Maybe as someone who's managing a team, how do you split your time between coordination with other teams versus actually working with your team and being a contributor yourself? What's the time split there? Because that strikes me as there’s a lot of coordination that needs to happen there?

Marie de Léséleuc:

There is a lot of coordination, and this is why you have all of those different roles in a team. You are going to have data analysts and data scientists and so on and so forth. Those people are actually doing the bulk of the work, and they are transforming the requirements into actual deliverables. But everyone was participating in this collaboration. So basically, we had the data scientists and data analysts directly embedded into development. This way they were able to collect any kind of requirements and work on them as soon as possible and as efficiently as possible. On my side, I was trying to coordinate as much as I could, and it was taking a lot of time. It was basically one of the core activities: making sure, okay, what has been discussed, where are we in terms of what we need to deliver, are these people aware of what we are working on at the moment, and so on and so forth. So as a director slash manager, it's a lot of time that you have to dedicate to this, but on my team at least, the communication was not just me coordinating everything; it was everyone talking to the teams directly and reporting back what kind of new information they had so we could coordinate together. I don't know if that makes sense, but yeah, that's the way we were doing it.

Adel Nehme:

No, it definitely makes sense. And you mentioned something, that making games is hard. It's very hard, right? For anyone who's not necessarily aware of the gaming industry, for context, the gaming industry is bigger than the movie industry and the music industry combined, right? And games like your favorites, Grand Theft Auto or Call of Duty, or Rockstar titles like Red Dead Redemption, they sometimes take eight to nine years of development time. It's quite a lot of effort to create a game. And oftentimes there are creative direction changes that happen mid-project, throughout the development cycle. How does that affect the data team? How are you able to be agile and flexible when there are major shifts in the creative direction of a game?

Marie de Léséleuc:

We were affected by this a lot, obviously, because it impacts everything. It impacts the planning of the project, when we are going to have play tests, what kind of features are going to stay or be cut, all of these kinds of things. So this is why, first and foremost, I was saying I want people to be part of the meetings where the game designers and others are actually having these discussions, so we have a chance to adjust as soon as the information is delivered, as soon as it's known. And we basically have, this is where we have our sprints and this is where we are agile, those meetings where we say, okay, so this is the new situation, this is what is going to remain, what is going to go away, what's the new priority for each of those. And of course, as you are saying, all of this is not just the dev team, it's everyone, including people working in the background. So when we have to do those kinds of things, it impacts our metrics. It impacts our documentation. It impacts which metrics are being implemented and will have to change, but also which ones are going to be decommissioned and which ones are going to need a new implementation, which requires, you know, dev time to happen. So we are very used to this, which is why it's not necessarily impacting us too hard, because we expect it. We expect those kinds of things to happen. And this is why we also have this coordination with the dev team to say, okay, during a sprint, we need devs to spend time for us to, for instance, implement the metrics, and so on and so forth.
For the data science aspect of things, it's also very hard, because when the game is changing, suddenly your models are not working as they should, and you sometimes have to start again from scratch, because new features are suddenly appearing, others are going to go out of the game completely, and so on and so forth. So basically you have to keep track of all of that and you have to be able to adapt, retrain your models, ask people to play the game, and if you have them, set up some bots and see what's happening. It's very challenging, and it's costly also. It's quite expensive, because training a model and putting it in production is costly in terms of how much time it takes, but also in terms of using the data to train the model, and so on and so forth. So what we do is, there are phases in a project where this is more likely to happen than others. We know, for instance, we're not going to develop a very complex data model for anything that is at the very beginning of a project. We are going to wait for it to be more mature and more stable before starting any of this kind of stuff. Before that, it's going to be more about seeing how we can even deploy anything, putting everything in place in terms of pipelines, putting everything in place in terms of being able to work with the data.

Adel Nehme:

This is extremely fascinating, especially given that two of my favorite fields are data science and gaming, so I'm really enjoying this conversation. In a lot of ways, the creative aspect and creative changes add an additional wrinkle to the data science challenges, because there's not necessarily a lot of data on a new feature. When you create and deploy a model, how do you solve that challenge of not having a lot of data? Let's say early in the development process and the QA process, you want to create a machine learning model that's trained on the environment or something along those lines. What do you do when there's a lot of data scarcity?

Marie de Léséleuc:

Absolutely, this is the biggest challenge we had, because data science and all of those models require a lot of data to be able to start doing something. And when you have only like 10 to 30 people playing during play tests, it makes it very difficult. So this is why we had this idea, although it was not deployed yet before I left, to actually use those bots, you know, those AI agents that are able to play the version of the game that is available in a way that is relatively similar to a player. You can generate a lot of them, and all of those are going to send metrics, and you can use those metrics to determine, okay, what seems to work, like the most important features, and so on and so forth. But yes, data science in pre-launch is not always easy if what you're looking for is improving anything related to players playing the game. It's easier for stuff like graphics or audio or any of those kinds of things, because you can generate data easily. But yeah, at least that's what we were trying to achieve, and I have hope that it could work in the future.

Adel Nehme:

So that's really great. And we've been talking quite a lot about data science use cases, specificities, and challenges pre-launch, when a game is being developed. But maybe let's talk about it post-launch as well. One aspect that I imagine is extremely difficult and challenging when setting up these types of data science use cases is getting a lot of data from players, when millions of players around the planet are playing at the same time, and trying to understand and unearth insights from that data, right? So the way I see it is that there are challenges on the data collection side, like how do you actually set up that streaming platform to get a lot of data in, to be able to ingest it and transform it, but then also, what do you look for to actually improve a game? So let's maybe start with that first puzzle: what are the challenges related to data collection when you have millions of players streaming data to the cloud and you need to collect and store that data?

Marie de Léséleuc:

So data collection is actually not so much of an issue, because we have pipelines. We have a system to collect the metrics, define them, implement them, and this is tested before the game is released. So we are ready when suddenly we have millions of people coming and providing all of this data. We decide which metrics we are keeping, which ones we are not keeping, this kind of stuff. And then we have access to them directly. We're using Google BigQuery and Google Cloud services to achieve this. So it was not us, it was the data engineers and magicians on the DevOps teams. Those people do stuff I don't understand, but I'm very happy, because it works. And the idea is, unlike a lot of teams for which, when the game is released, it's released, it's done, congratulations, you did your work, for us, like for marketing and those teams, the work is actually just starting. Because suddenly we are looking into all of those KPIs to make sure the game is performing as we were expecting. We are looking into, okay, what we saw during pre-launch when we were trying to improve the systems, has it effectively been corrected or not in terms of player behaviors? Are we seeing new things that we have not identified? We are also doing data science, for instance segmentation of players. So we are going to try to determine, fine, who is playing our game? What categories of players do we have, based on what they are actually doing inside of the game? So for instance, there is a category of people that looks like they are very engaged, they are doing absolutely everything in the game, while others are barely touching the surface, and how many of those people do we have? And we are going to try to predict stuff like activity, how much we can expect it to change or stay the same based on, you know, whether we are going to do any kind of event.
There is also, of course, discussing and working with the sales team to see, okay, could we have a very simple prediction model just to have an idea of what we can expect, as a baseline, in the coming weeks, for instance, and then we adjust based on that. So there are a lot of things we were looking into in terms of the business, but also player behaviors and understanding how the game was performing.
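The player-segmentation idea, grouping players by what they actually do in the game, is commonly done with clustering. A minimal sketch, assuming invented per-player features (hours played, share of side content completed, sessions per week); real telemetry would feed something similar:

```python
# Sketch: segmenting players by in-game behaviour with k-means.
# Feature names and values below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = players; columns = hours played, % side content done, sessions/week
players = np.array([
    [120, 0.95, 9], [110, 0.90, 8], [95, 0.85, 7],   # "doing absolutely everything"
    [15, 0.10, 2],  [10, 0.05, 1],  [20, 0.15, 2],   # "barely touching the surface"
    [45, 0.40, 4],  [50, 0.45, 5],  [55, 0.50, 4],   # in between
])

# Standardize so hours played doesn't dominate the distance metric.
X = StandardScaler().fit_transform(players)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

Each block of three players lands in its own cluster here; in practice the cluster count and features come from exploring real telemetry, and the segments feed the kind of activity predictions mentioned above.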

Adel Nehme:

And if you had to choose, which is the more challenging time for the data science team, pre-launch or post-launch, in terms of the types of use cases as well as the level of technical expertise needed to unearth these insights?

Marie de Léséleuc:

I would say everything should be in place and everything should be working for the release of the game. So all of your pipelines, all of your models, all of your things, they should be ready, up and running, even if after that you need to make modifications, retrain stuff, and so on and so forth. I would say it's more challenging pre-launch, because you don't have a lot of data and you are basically putting everything in place, hoping it's going to work. Once the game is released, there are new challenges, true. You suddenly have a lot of data, so you need to be sure your models are able to absorb all of that. It's going to cost more money, because suddenly you are training and deploying models on millions of people's data, so maybe you will have to sample for these kinds of things. There is obviously the fact that with more data there are new behaviors that maybe you didn't catch pre-launch, and so on and so forth. But once you have everything in place, the kinds of issues you will have are usually easier to tackle than when you are building everything up from scratch.

Adel Nehme:

That's very fascinating. And you know, you mentioned here a lot of insights that you uncover from player usability tests, from player tests after the game's launch. How does the communication flow look like then with the creative teams? How often do they update the game after that? What does that process look like?

Marie de Léséleuc:

Yeah, so it all depends on the kind of game, obviously, whether it's a live game or a game you put in a box and just sell over the counter. But usually what's going to happen is we are going to have a live dashboard with all of those KPIs that are needed to follow up on the game. There are going to be reports also. So for instance, for Guardians, we made a series of six reports, and those were presented to the high-level executives but also the producers of the game and all those kinds of people. We also make custom reports for people having specific questions regarding one aspect of the game or two, and this way we are making sure people can follow up on the game and on what's happening inside of it, and answer questions. And sometimes those kinds of questions can come long after the launch of the game, which is very interesting: if someone is developing a game that seems to be similar to a game we developed in the past, they are going to come to us and say, hey, could you do some specific reports? Marketing, obviously, is going to want to know what's happening in terms of player activity, for instance, if they decide to do any kind of event. So for instance, you're putting your game on Game Pass: okay, how many more people did you get? And it allows you to say, okay, maybe this is how many of those people would have paid if we had not made this specific move. So we are trying to keep everyone informed, because this is extremely important. People working on the game took time to discuss with us and provide time for us to have metrics in the game, and so on and so forth. For us it's super important that they know: okay, this is the result of everything you did. You can now see how the game is working and what people are doing.

Adel Nehme:

This is really great. There are a couple of things that you mentioned here that I would like to further unpack. You mentioned that marketing question of, let's say the game is on Game Pass, should we have kept the game out of Game Pass, yes or no? When it comes to the data coming in there, how important is the console maker and developer ecosystem? How do they provide you that data? Where do you get that data from, Microsoft, right? What does that look like?

Marie de Léséleuc:

Yeah, I have to say, I'm always extremely astonished at how unwilling most of those console developers are to actually provide data in a format that would allow us to go further into the analysis we would like to do. On our side, we get the data from the game directly, so we are going to get information about how many people are doing what, and so on and so forth. And it's completely anonymized, it's completely GDPR compliant and everything. For the sales, it was not us collecting that information, it was marketing or finance, and from my understanding, most of the time that information came in a different kind of format, in an Excel extract, and it was not even at the level of "this person has made this purchase". It's going to be an aggregate, like, each month, this is how much you have sold, which was kind of a challenge for doing predictions and stuff like this.

Adel Nehme:

Interestingly, Netflix follows a similar model. Not a lot of people know how much their movies have been streamed. For example, let's say you are a comedian on Netflix: you don't know how many times your movie or your special has been streamed.

Marie de Léséleuc:

Yeah! Which puzzles me so much, because those are our players, basically ourselves. We should be able to collect information at the player level, because this is important for us. But in the console industry, it's not working this way. On the PC side, I think it's much easier, if I understand well, because Steam and the other platforms let you get access to this kind of information. But yeah, on console, especially for the sales, it's a challenge.

Adel Nehme:

It's an interesting space. The other question I wanted to ask you, and I think we can go on a tangent here and discuss quite a lot, is something very specific you mentioned: it depends on the type of game, right? Whether it's a live game or an off-the-shelf game. And you can go into even more detail here. Is this a sports simulator? Is it a first-person shooter? Is it a game like Hades, for example, that is developed over time? That also adds a lot of wrinkles to what type of data science you do. So maybe walk us through the relationship between the type of game and the type of data science you do on it. What does that relationship look like?

Marie de Léséleuc:

Yeah, absolutely. So a live game, by definition, will be updated as it goes. There are going to be events, but there are also going to be a lot of patches, a lot of constant development past the official release date, which means there is far more opportunity for a data analysis and data science team to actually have an impact on the game. They can provide those reports and use those models to give information to the devs, answer their questions, and make recommendations to improve certain aspects of the game, in terms of whatever you want: it could be the revenue, the retention, the acquisition, a lot of different aspects, based on what the PM or anyone concerned needs and wants. When a game is released as a box, basically, it's a bit different, because the new iterations of that game or the DLCs you're going to release are sparse, very far apart in time. You can still learn a lot of things, because it's going to inform any similar project, and also tell you what kind of return on investment you had, simply based on the number of people playing, how long they are staying, whether they are abandoning the game, and so on. But the impact on the game itself is not necessarily going to be as immediate, let's say.
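The retention measure Marie mentions for a boxed release, how long players stay and whether they abandon the game, is often summarized as day-N retention. A small sketch of how it could be computed from anonymized session logs; the player IDs and dates are hypothetical:

```python
# Sketch: day-N retention for a premium ("boxed") release, computed from
# anonymized session logs. All players and dates below are hypothetical.
from datetime import date

def day_n_retention(first_play, sessions, n):
    """Share of players who played again on or after day n from first play."""
    retained = 0
    for player, start in first_play.items():
        offsets = [(d - start).days for d in sessions.get(player, [])]
        if any(offset >= n for offset in offsets):
            retained += 1
    return retained / len(first_play)

first_play = {
    "p1": date(2023, 5, 1),
    "p2": date(2023, 5, 1),
    "p3": date(2023, 5, 2),
}
sessions = {
    "p1": [date(2023, 5, 1), date(2023, 5, 8)],   # returned on day 7
    "p2": [date(2023, 5, 1)],                      # churned after launch day
    "p3": [date(2023, 5, 2), date(2023, 5, 10)],  # returned on day 8
}
print(day_n_retention(first_play, sessions, 7))  # 2 of 3 players retained
```

For a live game the same metric would feed back into patches and events; for a boxed game it mostly informs the ROI picture and the next, similar project.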

Adel Nehme:

Exactly. In a lot of ways, that will also determine what goes into a sequel, in case that sequel is being developed. For example, I played God of War and then God of War 2 directly after each other, and you could see there were some quality-of-life improvements on really small things that would most likely have been informed by data science and analyzing player data.

Marie de Léséleuc:

It's interesting because it might be less immediate, let's say, but it's still very important, because very often a studio is not going to have just one game in development, finish it, and then start a new one. There are multiple games being developed at the same time. So what you learn from one of those games you just released, you can use to determine what could possibly happen for the others, especially knowing that most studios specialize in a certain type of game. Even if it's not exactly the same IP or the same game, there are still things you can learn from what you just did.

Adel Nehme:

This is super useful. Now, Marie, as we close out our conversation, I'd be remiss not to talk to you about generative AI. If you look back two years ago when GPT-3 was released, one of its first striking use cases was actually a game: a Dungeons and Dragons-style role-playing game, played through text that is auto-generated on each playthrough. We've seen a lot of cool use cases in game design, from creating assets in Unreal Engine, to creating audio assets, to even creating player interactions and dialogue agents. So do you see this becoming more and more used in gaming in the next few years?

Marie de Léséleuc:

No one can predict the future, but I think any studio that is a little bit serious about data and how they want to use it to improve their game is going to use this kind of tool, because it's so powerful. One of the first things it could be used for would be generating dialogue for the NPCs, barks for the enemies, any of this kind of stuff that is usually painful to do by hand and could just be automated, so the people developing the narrative of the game can focus on more interesting work. There are so many ways it could give the feeling of a more vivid world through the kind of experience it provides. I'm already using it for a lot of "explain it to me like I'm five" questions, and it's impressive; you feel like you're talking to someone, even if, obviously, that's not the case.
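The bark-generation idea Marie describes can be sketched as a prompt template plus whatever text-generation model a studio uses. Here `generate` is a placeholder callable, not a real API; its name and signature are assumptions, and the example runs against a stub so no model is required:

```python
# Hedged sketch of generating enemy "barks" (short combat lines) with a
# text-generation model. `generate` stands in for whatever LLM client a
# studio uses; its name and signature are assumptions made for this sketch.

def bark_prompt(enemy_type, situation, tone, n=5):
    return (
        f"Write {n} short one-line combat barks for a {enemy_type} "
        f"who is {situation}. Tone: {tone}. One bark per line, "
        "no numbering, under 10 words each."
    )

def generate_barks(generate, enemy_type, situation, tone, n=5):
    raw = generate(bark_prompt(enemy_type, situation, tone, n))
    # One bark per non-empty line of model output.
    return [line.strip() for line in raw.splitlines() if line.strip()]

# Stubbed model so the sketch runs without any API access:
fake_model = lambda prompt: "You can't hide!\nOver here!\nTake cover!"
print(generate_barks(fake_model, "space pirate",
                     "searching for the player", "menacing", n=3))
```

In a real pipeline the generated lines would still pass through narrative review; the automation saves the first draft, not the editorial pass.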

Adel Nehme:

It's very impressive indeed. And I think what's even more exciting is what this means for the future of indie gaming in a lot of ways. If a two-person developer team is able to create incredibly high-quality dialogue and incredibly high-quality assets with the use of AI, then that only means better games for everyone in the future. So this is a pretty interesting space to follow.

Marie de Léséleuc:

And it's not the only application, because you could ask it to generate code for you. It's impressive. I was looking at it, and you can just say: I have this question, I want to do these things, I would like you to generate some code in such-and-such a language. And it actually does it. So I have to say it's so much faster and easier than going to Stack Overflow and finding a bit of code that's going to help you. I feel like it could reduce development time by providing help to programmers and anyone who needs that kind of thing.

Adel Nehme:

I couldn't agree more. Now, Marie, as we wrap up our conversation today, do you have any final words for our audience?

Marie de Léséleuc:

Yes, I would say that we are living in a very exciting time, because we are just beginning to scratch the surface of everything that could be done with data science in games. This is still a very new kind of adventure for a lot of studios, and I feel like there is so much potential, so many things that could be done to make it better for everyone. But I think there is still the barrier of habit, there is some inertia, because it's very new; it's not something people are used to thinking about when they are developing a game. So I think studios will have to consider it seriously and provide the resources and time needed for these things to be implemented.

Adel Nehme:

That is very great, Marie. Thank you so much for coming on DataFramed, I really appreciate it. I had a blast; I love talking about gaming any day of the week, even more so when data science is involved.

Marie de Léséleuc:

Thanks to you, it was a very good time speaking with you. Thank you.
