
AI and the Future of Art with Kent Keirsey, Founder & CEO at Invoke

Adel and Kent explore intellectual property and AI, open- vs. closed-source models, the future of creative teams and GenAI, the role of artists in an AI world, the future of entertainment, and much more.
Sep 23, 2024

Guest
Kent Keirsey

Kent is a creative technologist who has served as a Product and Business leader in startups across B2B, B2C, and Enterprise SaaS. He is the founder and CEO of Invoke, an open-source Enterprise platform built to empower creatives to co-create with custom/fine-tuned AI products.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and Evangelist at DataCamp, where he has released various courses and live trainings on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations, and about the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

I'm a big advocate of making open models. And I do think that there is an argument to say if you train on public data, you have to make the model open as well, because it's just fair you're contributing back to the commons. But either way, I think that that's the better path for innovation. It's the better path for humanity to go down.

The jobs change because the technologies change, but we still need people and we still need creatives who have got an eye and have the artistic intent in order to create something that matters. And I think that's where with these tools, more people have access to become artists. I don't say that they're artists just by using the tool, but they have the pathway to making bigger and more, I would say pronounced artwork.

Key Takeaways

1. As generative AI becomes more prevalent in creative workflows, developing ways to guide and control outputs through tools like regional prompting will be crucial for ensuring that human expression remains integral.

2. AI tools can drastically reduce the time required for tasks like ideation and concept art, allowing teams to focus on higher-level creative decisions rather than manual production processes.

3. Open-source AI models allow more control and customization, empowering businesses and creators to fine-tune models for specific needs without being locked into proprietary solutions.

Links From The Show

Transcript

Adel Nehme: Kent, it's great to have you on.

Kent Keirsey: Yeah, thanks for having me.

Adel Nehme: You are the CEO of Invoke AI, a generative AI platform for creative teams. There's a variety of angles we could start our chat with today, from the state of image generation and AI art, to what it means to be an artist in the 21st century, to the ethics of AI art. But I actually want to start on intellectual property, because I think that's counterintuitively one of the most interesting aspects of this space at the moment.

There's a variety of conversations happening on the state of IP and AI. So how should AI practitioners and leaders be thinking about intellectual property when it comes to AI?

Kent Keirsey: Yeah, I mean, I think when we're talking about intellectual property, it typically comes back to the training of the model as well as the outputs that come out of these models. The models themselves are in some sense a new form of intellectual property. So, we're kind of having conversations about licensing on the weights, what you can do with the weights, how you can fine tune them and train them, but I'll kind of break these down into a couple of different pieces.

First, we're talking about the inputs that go into training these models. There's a lot of controversy right now around whether utilizing data that is pulled from the open web to train a model is considered fair use. Most of the intellectual property law that this falls under is really copyright law, and, you know, I've had conversations with a lot of experts in the field. In fact, we just recorded a video with Matt Sag, who in July of last year sat on the Senate hearing committee around this topic.

But the way that he frames it, and I think this is a really interesting way of thinking about it, is that a lot of prior cases have been settled with this notion of non-expressive use. And really what that means is, when you're using the data and creating a copy for training, you're not using the expressive qualities of that image.

You're not copying it to produce something that replicates the content. You're not trying to create an overfit model. You're utilizing it in a non-expressive way, and that typically does fall under fair use. And fair use would allow us to effectively train models on anything, so long as it is publicly made available.

We're not breaking the law with access to the data. We're not taking pirated copies of books, for example, and training models on those. This is just general web data. And I think that's the big area that matters right now: focusing on how models are trained and getting some coverage there, because we don't want to train models that are created illegally.

But I think most legal professionals are moving towards the understanding that this is a fair use activity. And I think the fact that we've seen large companies, even Apple, create models that have been trained on scraped data effectively means that the large majority of the American corporate populace, if you will, has made the decision that this is likely the way it's going to go.

And I think there's a whole host of geopolitical reasons that it has to go that way. Those are the areas that we focus on when we're talking about inputs. When we're talking about outputs, and how you get protection on the outputs, both the model as well as the things that those models generate, we're getting into an entirely different problem space.

And I think with models, you have to focus on the license and what you can do with it. We recently saw with Stable Diffusion 3, for example, in the image generation space, that the license itself will actually cause most enterprises and corporations to back away if it's not a friendly license. Being openly licensed and permissive, allowing for fine-tuning, training, and owning the asset: that is key for open models.

With the outputs of generative models, we're asking questions right now like, how do we get copyright protection on the things that we create with AI? What can we protect from a patent perspective if we co-create something? And the Copyright Office right now has basically been very conservative about what they're going to grant copyrights over.

Their argument is that at least the early technologies in this space lacked sufficient human expression to merit a copyright. So if you go into DALL-E and you type "show me a picture of a dog," that dog is not copyrightable by you, because the amount of expression you could put in with "show me a picture of a dog" is minimal. That's where I think today we're talking about control and privacy as big focuses. How do I control these models such that I can get some amount of human expression into the process, and then claim that expression is copyrightable? And then from a privacy perspective, when we create assets, let's say a fine-tuned model:

How are we making sure that we're the ones who benefit from that and not the AI companies? Are they training on our data? Are they training on our IP? When we put it into their system, is it making their model better or is it making our model better? Are we the owner of our own destiny?

Adel Nehme: That's great, and I think there's a lot to unpack in what you just mentioned, so I'm actually going to focus on the input side first. You mentioned non-expressive use, and that publicly available data can generally be used as long as you're not breaking the law, right? Now we actually see a lot of content licensing deals: OpenAI and The Atlantic, and OpenAI and the New York Times, where I think there's a lawsuit there instead. So how should we think about what content is illegal to scrape versus legal to use, given that web data such as The Atlantic, for example, is available publicly?

 Is it considered publicly available?

Kent Keirsey: Yeah, here's where things get a little fuzzy when we're talking about LLMs. When you're taking data from the New York Times, let's say, or any other website, you're putting it in a big pool of data in order to train a model to be good at generalized LLM tasks, right? I want to ask it questions about this writing and have it help me rewrite it.

That use case is very different from what the New York Times was putting out an article for, right? The expressive piece of the New York Times' work is not found in the LLM that's helping me do all this random stuff. Now, where it gets a little fuzzy is when you're using the LLM as an assistant.

And that assistant is going to get you facts that were just published by the New York Times. And the New York Times might be the only one with that information, right? Maybe they're the first one, they had an exclusive, whatever. If I'm pulling that information from their site and actually referencing the material itself in a new interface, that's where you start to get into the same problem Google was getting into: going and getting content from the site and cutting out the ultimate end beneficiary of the traffic. And that's where I think the content deals are maybe a little bit more focused. I don't think the New York Times licensed its dataset because it's sufficiently good for training a model; it's specifically because the content itself is valuable and they want to show that content to the user.

And that's where the licensing deal, I think, is mostly benefiting OpenAI. Now, there are plenty of licensing deals being formed. The way that I like to describe this is that there are two potential futures ahead of us right now when it comes to how you must, or practically can, get data for training these models.

There is a world where it is fair use. And there's a world where it's not, where you must have a license to the data in order to train a model. The world where you must license the data would have a whole host of business opportunities associated with it. You would have data licensing, you'd have all kinds of auditing, you'd have a whole business industry that spins up to manage that.

And there are certainly a lot of companies that want that, because that's the business they're building and they want to support it; they want to be the middleman. The challenge that I have with that is that I think ultimately the people who benefit are not the creators of the content.

It's not the artists. It's not the musicians. It's not anyone who's actually really impacted by AI. It is the corporations that already had the IP. Let's take Firefly, for example. Firefly was created off the Adobe Stock set of data that Adobe had purchased about six years before, almost at a fire sale, I want to say, right?

They just had a stock website and they were kind of like, let's figure out how we can do this business, it's a nice adjacent business. And all of a sudden, GenAI hits and they're like, whoa, we've got a lot of data, let's go train a model. They didn't get consent. They didn't ask anyone in that dataset whether they wanted to grant rights, or whether Adobe could buy rights, for GenAI training.

They already had the rights, and they were like, well, we're going to do it. And they paid the people who had created that content a one-time bonus. And here's the really funny part. Normally on Adobe Stock, to take money out, you have to hit a threshold in your account of $25, so that, you know, it covers the transaction costs and all that kind of stuff for Adobe.

Adobe was like, this one-time payment is going to be so low that we're going to waive that $25 threshold. We'll give you a couple of bucks for this one GenAI model, Firefly, that's going to be used forever, right? So the world that we live in with licensed data is really one where these big corporations, these aggregators, are the ones who are going to benefit.

There are going to be all these licensing arrangements. There will be less innovation, fewer AI models, ultimately worse for everyone. And so that's why I'm a big advocate of making open models. And I do think that there is an argument to say if you train on public data, you have to make the model open as well, because it's just fair: you're contributing back to the commons.

But either way, I think that that's the better path for innovation. It's the better path for humanity to go down.

Adel Nehme: Yeah. Maybe describe that open-model world in a bit more depth, because you mentioned how in that first world a lot of the power will be concentrated with large corporations and data aggregators, right? In a lot of ways similar to how data brokers operate today in the digital advertising sphere.

What does that alternative world of open models look like? How does that benefit creators, in a bit more depth? I'd love to learn the dynamics that you see there.

Kent Keirsey: Yeah, so I think what we're starting to see with models, especially generative models, is that these are a category of software that is dynamic and flexible, and we can train these things to do specialized tasks. These are now tools, right? So almost in the same way that you might take an open-source bit of code and rewrite it to do something useful for yourself, as a creator you can take an open model and train it, tune it, and make it your own, so it solves a specific problem for you. But you have to have that foundation model in order to be able to do that. You have to have access to the weights if you actually want to be able to own the output on the other side of that training process, right? When I look at what's happening right now, I'll speak specifically about artists, because I think that's where a lot of the controversy is.

Artists are very much being impacted by this. And I think the reality is it's not going back into the bottle; the genie's out. So what artists have to realize, at least the large majority of commercial artists, is that they're going to be asked to use AI in their work to compete effectively, right?

Everyone's going to be doing work in 10 percent of the time it used to take, so they need to actually do that as well. Now, if you want to use AI in a practical way as an artist, the only way you're going to do that is if you have a significant degree of control and you have a model that understands what you're going for. And that's where training comes in.

An artist can actually train a model on their style. If there's a foundation model, they can tune and train it to understand their particular aesthetic for the project they're working on. Whether it's a character or a world they're trying to build, they can teach it that. And then they can use it and control it with things that help guide the inference process towards what they're trying to output.

There's a whole host of innovations around putting in a sketch or putting in a render and having the AI system controlled by that during the generation process. So in that world, they actually have their own IP which they're building, which is that model. They are able to own that model.

They're able to use it freely for their work. It is a tool that they can use. So it's almost like saying, in this world of AI, you're giving somebody the core raw capacity to generate and to build the tools they can use to compete in the market. And that's effectively what an open model would allow for.
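To make that ownership idea concrete, here is a minimal sketch of what running your own fine-tuned style model can look like with the open-source diffusers library. This is a hypothetical illustration, not Invoke's actual stack; the LoRA path and prompt are placeholders for an artist's own assets.

```python
import torch
from diffusers import StableDiffusionPipeline

# An open foundation model: the weights are downloadable and run locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Layer on a LoRA fine-tuned on the artist's own work (hypothetical path).
# The LoRA file is a small, separately owned asset on top of the open weights.
pipe.load_lora_weights("./my-style-lora")

image = pipe(
    "a castle courtyard at dusk, in the studio's house style",
    num_inference_steps=30,
).images[0]
image.save("courtyard.png")
```

Because both the base weights and the LoRA live on the artist's own machine, nothing about the tuned model depends on a vendor's hosted service.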

What it would also allow for is businesses and enterprises who want to train their own model, do something internally, and build a core business around this; they can take the model and train it on their IP as well, right? And I think we're already seeing significant appetite for these types of open models on the LLM side.

We see very popular ones like Mistral and Llama; there are probably 25 or 30 different flavors of open model that have been released, and more continue to be released. And I think the reason businesses are gravitating towards that type of open model is because, when you're building a business, you don't outsource the core competency you have as a business in producing value for others.

And with a lot of these AI-powered systems that people are thinking about internally, customer service, building information flows inside of an organization, you don't want to outsource the core capabilities that generative AI models might be performing inside of those processes.

You want to own that. You want that to be something that you can really rely on for business continuity. And so all the way up and down the stack, whether you're an individual creator or a big enterprise, you want control of the models that are going to be trained on your IP, producing something of specific value to your organization, something you want to rely on in perpetuity.

Adel Nehme: You know, you hinted at this, and I think it's a great segue: how businesses are using generative AI. I do want to talk about the state of AI and artists and what that will look like, but first let's talk about how creative teams are using generative AI. You mentioned that open-model angle.

I think it's really interesting to think about how organizations can leverage AI for creative workflows, because a lot of what we see online is proofs of concept, use cases of image generation, a lot of community-oriented content.

Can you walk us through how creative teams today are using image generation or video generation services? I'd love to see what the use cases of AI in creative workflows look like for creative teams today, and what you've seen given your position at Invoke AI as well.

Kent Keirsey: Yeah, I mean, I think it really is contextual; it depends on your industry. But I'll go through a couple of industries and how you can imagine them using it, and then I'll share a little bit about what I see potential future industries doing. So today, I would say the large majority actually using this in production right now are going to be the media and entertainment companies, right?

You've got game studios, you've got VFX studios, you have these organizations that have significant costs around creating visual content, and they have adopted AI more quickly than any other industry, I would say by far. They are actively using it in their workflows all the way from early ideation, so concepting and visualization of ideas.

If you think about storyboarding, for example, storyboarding becomes just generating different assets based on quick ideas, right? And now you've got great visuals that can help communicate the idea you're going for. But we're actually seeing this further down the funnel as well.

I think the line of where its use ends is not dictated by the technology, but by the risk appetite of the organization, because there still are some questions around the IP that's generated from this. Once we have clarity on copyright, I think you'll see stuff that you produce using AI going into games. But a lot of times what ends up happening is you'll have artists, you'll have creatives, using AI with sketches, iterating on and compositing those outputs.

So it's not that I put in a prompt, get a picture, and take that into the game. It's that I put in a prompt, I put in a sketch that I drew, I regionally control where I want certain things in that image by pointing words at certain areas, and then I generate. Maybe I generate 10 or 12 of those, and then I take that into Photoshop, I composite it, I edit it, I do some overpainting. It's not 10 seconds of work; it's taking a process where it used to take me 100 hours and bringing that down to 3 or 4, because I don't have to do a lot of the rendering. I can really just get the core concepts together, create the visualization I want, and then pass that on.
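As a rough illustration of that sketch-plus-prompt workflow, here is a minimal sketch using the open-source diffusers library with a ControlNet conditioned on a scribble. It is a generic approximation, not Invoke's actual pipeline; the sketch filename and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Sidecar model trained to steer generation from a scribble/sketch input.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

sketch = load_image("my_sketch.png")  # hypothetical hand-drawn sketch

# Generate several candidates to composite and overpaint later.
images = pipe(
    "concept art, armored knight in a ruined cathedral",
    image=sketch,
    num_inference_steps=30,
    num_images_per_prompt=4,
).images
for i, img in enumerate(images):
    img.save(f"candidate_{i}.png")
```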

So in video games, that might be a very lean concept art pipeline: you're creating concept art to pass to the technical artists, taking a process that might take weeks down to a single week of iteration and back-and-forth with a team. When you're talking about VFX, you're talking about things like backgrounds.

I mean, look at something like The Mandalorian, which was shot largely in front of a green screen; now you can create entire worlds at the click of a button. And so you've got a significant degree of control, trainability, and tunability.

And so you've got a significant trainability and tunability. And you've got this demand for creating more and more content today, but trying to keep the costs low. And that's where AI is kind of like really being applied right now. You've got ad agencies, media agencies, any type of studio that's doing graphic design or visual work, using this as an assistant and creating those intermediary assets that you can composite together.

You can create really compelling visuals and some weird stuff, especially with video. Toys R Us just did the first video ad from Sora, I think it was like 80 percent Sora or something like that. And it was still a little weird, right? You're still looking at it and you're like, this is a little weird.

But it also works to an extent. I mean, it's definitely an ad that you wouldn't have seen otherwise. I think we're still at that exploratory phase for a lot of video, but at least conceptually, and especially for 2D ad creative, it's very, very powerful. Where we haven't seen as much immediate application, but there's interest, is around retail and e-commerce.

So if I can train a model to understand a product, do I need to actually go to the beach and take a photo of that product on the beach? Or can I generate an image of the product on the beach? We have the technology to do it; we've actually done this a couple of times for clients, where we train a model on a product and now I can generate that product in a variety of contexts.

But where that's still a little bit hesitant is: are people going to be mad if they realize this is an AI-generated image, even if it's high fidelity and true to the product? Are they going to be mad that it's AI-generated? And there's this whole host of questions around what is ethical.

What is responsible? What do people expect of us as a brand? And that's where the retail and e-commerce space is still a little bit hesitant. But some brands are throwing this into their social creative today.

Adel Nehme: I mean, we use generative imagery, for example, on our blog, right? When we do a tutorial on something in data science for Python, you're going to see a lot of weird Python imagery on our blog. And you mentioned that Toys R Us commercial; I actually watched it as well.

It does look uncanny; it still looks a bit weird. And one thing you mentioned when talking about the creative workflows of teams: I take an image, I do multiple variants of the image, I do regional prompting.

You mentioned regional prompting, which I think is really important when it comes to actual creative workflows. And I think your run-of-the-mill Midjourney or Stable Diffusion is not necessarily going to get you there, right? So maybe walk us through the technical innovations needed, both in product experience and in the underlying models, to make generative models really effective for creative teams to work with.

Kent Keirsey: Looking at how the space has evolved, most of the tools that we use are built on top of Stable Diffusion. Stable Diffusion was originally released in August 2022, and there was a ton of research on top of it. In 2023, we had innovations like ControlNet and IP-Adapter.

These were research papers from third parties that built sidecar models trained to guide the generative process. And here's where I think there's a lot of misunderstanding outside of technology circles, where people don't really get what's happening in the AI space: not all the magic happens in the model. The tool you're using to run inference on the model can unlock new things. It can bring in sidecar ML models. It can run inferencing differently. It can be more performant, running the same inferencing job in half the time, or it can control the attention mechanisms of the inference process to do certain things, right?

And that's where, in Invoke, and this is open-source innovation as well, we have effectively an attention control that allows certain regions of the image to have higher activation rates for certain tokens, right? So you're saying certain prompts are activated in this region and not activated in the other regions.

And that's where we get a lot of this regional guidance and control. That's the type of stuff we're really focusing on: how do we guide attention in a way that is useful and controllable by humans? And that's where you either get into changing the attention mechanism itself in the inferencing process, or creating these sidecar ML models that you can inject into that process so that they control the attention.
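For intuition, here is a toy, self-contained PyTorch sketch of the general idea of regional token activation: boosting cross-attention scores for a regional prompt's tokens only inside a spatial mask. It is a deliberate simplification for illustration, not Invoke's implementation; every name in it is made up.

```python
import torch

def regionally_boost_attention(scores, region_mask, token_ids, boost=2.0):
    """Toy regional prompting: raise cross-attention logits for the tokens
    of a regional prompt, but only at positions inside the region mask.

    scores:      (num_pixels, num_tokens) cross-attention logits
    region_mask: (num_pixels,) bool, True where the regional prompt applies
    token_ids:   1-D tensor of token indices belonging to the regional prompt
    """
    scores = scores.clone()
    rows = region_mask.nonzero(as_tuple=True)[0]           # pixels in region
    scores[rows.unsqueeze(1), token_ids.unsqueeze(0)] += boost
    return torch.softmax(scores, dim=-1)                   # attention weights

# Tiny demo: an 8x8 latent (64 "pixels") attending over 6 prompt tokens.
scores = torch.randn(64, 6)
mask = torch.zeros(64, dtype=torch.bool)
mask[:32] = True                                           # top half of image
weights = regionally_boost_attention(scores, mask, torch.tensor([2, 3]))
print(weights.shape)  # torch.Size([64, 6])
```

In a real pipeline this kind of masking would be applied inside the denoiser's cross-attention layers at multiple resolutions, rather than on a standalone score matrix, but the principle is the same.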

Adel Nehme: We talked about this earlier, and I'd love to deep dive into it: how you see the future of art in society, and how you see the work of artists evolving. Naturally, there's a lot of distrust, fear, and anxiety within the art community when it comes to generative AI.

I mean, we just talked about the Toys R Us commercial; that's one actor having one less gig as a result of that particular commercial. So maybe give us your bird's-eye view: walk us through how you're thinking about what it means to be an artist in the 21st century.

Kent Keirsey: Yeah. I mean, I think this is a big question. And I know, I don't want to get like philosophical on your podcast

Adel Nehme: No, no, I'm inviting you to get philosophical, yeah.

Kent Keirsey: I mean, you have to really take a step back and ask what is art? And I think we had the same challenge with digital art when that first came around.

So when Photoshop was first released, traditional artists had learned how to, you know, control the drip of the brush, or, if they were using airbrushes, to make sure you didn't get drip on the canvas. There were all these techniques that you really had to learn, and there were specialists who had built a discipline and a craft that, almost overnight, was basically being discarded and dismissed: well, we don't really need to control the drip anymore, because we have a digital canvas.

And so that's what we're going to focus on now, because it's easier. And justifiably, people felt threatened by that. There was definitely a lot of upset, because now you didn't really have to learn, to the same extent, how to be perfect with the initial stroke. You have an undo button, right?

You don't have to do it right the first time. You've got these affordances that make it easier to create the work. But I think this points to the divide that we're facing again with AI, which is craft versus art: the craft of doing something well, of playing music well, of being able to do something with your hands, physically, that other people cannot do, and that you are disciplined in and have practiced.

That doesn't go away. It's still valuable; it's just valuable in different contexts now. Art can be created without that particular craft of making the art real in specific ways, right? I think there's a new craft being formed, which is using AI tools effectively to create something in which you have high intent and expression being demonstrated.

But that's a different craft. Art is kind of above this. It's meta, if you will. And it is being a conduit for human expression: making us feel something as a viewer, expressing something that we want to say, basically a voice for ourselves that is communicated through a different medium than our own voice, right?

I don't think you can draw clean lines around it and say that AI can't do that, because when you really look at some of the work that artists do with AI, AI is not doing the art. The art doesn't come from the AI; the art comes from the human controlling it and crafting it and composing it together to say something. And I don't think everything that AI produces is art.

I don't think that everything humans put out is art, either. Art is almost this ephemeral quality that happens when somebody is able to create something that speaks for itself. And I think that can be done with AI. And I believe that with the tools we're building today, what we're going to see is that, yes, some parts of the creative process, at least commercially, are done differently.

Do I think we're going to have the same jobs in 10 years as we do today to make a movie, to make a game, to make anything? No. They're going to be different jobs, just like we had different jobs 20 or 50 years ago to make animations, to make movies, to make anything, right? The jobs change because the technologies change.

But we still need people, and we still need creatives who have got an eye and have the artistic intent to create something that matters. And I think that's where, with these tools, more people have access to become artists. I'm not saying they're artists just by using the tool, but they have the pathway to making bigger and more pronounced artwork.

If you think about movies, you hear the story in Hollywood of directors who are told, hey, you can't go make the movie you want to make. We're not going to finance it, we're not going to fund it, because it maybe doesn't have mass appeal; it's not what we think we want to back. And those directors today, with AI, once these tools really get going, will be able to create the movie they wanted to make.

And what that means is I think we'll start seeing less homogeneous media in American culture, and we'll really start to see some very, very interesting stories get told that wouldn't otherwise have been able to be told, because the budgets will be a lot smaller than most of the AAA movies and games we see today.

Adel Nehme: There are a few things I want to unpack in what you mentioned. There's the element of craft versus art; I think that's very relevant. The reduction of the barrier to creating high-quality media is going to be very interesting; I'm a huge video game fan myself. And then there's the potential for disruption. Video games are extremely expensive to create, and the creation of assets and games using engines such as Unreal Engine takes a lot of time. If, as a small studio of 10 developers without a big budget, you're able to create something as high quality as, let's say, God of War, I think that's

really useful for the ecosystem. So taking that example, and focusing on how you see the media landscape evolving, assuming that this barrier will be drastically reduced: how exactly do you view the media landscape being disrupted in the future?

Kent Keirsey: Yeah, I mean, I think it raises the bar for the big game companies and media companies to create experiences that the smaller studios are unable to. What I think that means, if we use today's media landscape as the benchmark, is that we'll start to see more independent creators able to create the types of games and movies that we have today.

But the AAA game of tomorrow maybe goes beyond that. Maybe it's bigger in scope and more ambitious in how dynamic and real it is. And I think that's where more of the focus is going to be for large studios: creating these kinds of, I mean, I hate to use the word metaverse, because it's been so tainted at this point.

But when you look at what is successful over long periods of time, it is games that have a community. And when you have a community around an IP or a game, they stick with it for a long period of time, because their friends are there and they've got a sense of self attached to that thing.

And I think that's where more IPs at scale are going to go: supporting those types of experiences, building very, very immersive games and movies that you can feel very connected to over a long period of time. I also think, from that angle, when you're looking at the machine learning models that are going to support that type of development, you've got organizations that absolutely need to own and control that IP, because you're going to be relying on it to generate the assets and experiences over the duration of this large franchise. That's another angle that I think matters a lot in that context of future media. But all in all, I think we're going to see, and I think we already see this today:

indie games are doing really well. A lot of really cool indie games come out when you've got distribution platforms like Steam, and you can really see that there's a lot of passion in building these types of experiences. And I think the types of games that come out of the indie scene, the types of movies that come from indie directors, are going to slowly creep up, or maybe quickly, given the rate of technology that we're seeing, to meet what we're used to from very high-quality film and creative experiences.

And the AAA game studios are going to have to innovate; the movie studios are going to have to innovate. The big question is whether they do, or whether we see some of these upstarts in the new media landscape build themselves up into that future company. They get a good success with an indie game.

They show the world that they can make something awesome on a shoestring budget, and then they take the revenues from that and invest into making even bigger and more expansive experiences. And that's how you get the next big company, the next big studio.

Adel Nehme: One additional thing on the craft involved with producing art or producing content. You mentioned the Toys R Us video; there's also an additional one that I saw from Sora, which is the music video. I forgot the name of the band or the...

Kent Keirsey: I know which one you mean.

Adel Nehme: the director that was

Kent Keirsey: It was kind of hallucinatory, almost like you're flying through things.

Adel Nehme: really, really nice video. That one made me feel something. I would say when I watched it, that was interesting about it. But I was reading an interview with the director and one of the prompts that he shared, one of the prompts he used during creation process, outside of the fact that it took six weeks of editing and splicing through a lot of different clips, one of the prompts was 2, 200 words long.

So that's definitely not a simple matter of creating a video using one single prompt and that's it. How exactly do you see the craft involved in building art changing? What do you think are going to be the main skills needed to be an artist? And is it a good thing that this is the new craft?

Because if I'm an artist, right, not playing the guitar sounds like a punishment to me, right?

Kent Keirsey: I don't think prompting is a craft, necessarily. I mean, there are some aspects of it that are, but I don't think that's the craft. I don't want to say to an artist, hey, you know, your old craft is using your hands to make art and the new craft is using words, because that's not really what I am advocating for.

And I don't think that's likely. Prompts are one mode of input, of control over the generative model, to get your intent ingested and processed for the output. But this is where the innovation is really pushing us to understand: what can we actually do here to control that process?

The way that I like to describe the model to an artist is that it's almost like a pre-baked dictionary of what these words mean, right? You have a model, and it has a very specific understanding of the word chiaroscuro: this is what it means. But the artist, when they type the word chiaroscuro, may mean something completely different.

When they type a specific style, they've got something in their head that isn't necessarily in the words they're putting in. And this is where tuning comes in. Tuning a model is basically showing examples alongside text descriptions; you're kind of reverse prompting, saying, this is how I would describe this. And what that's doing is altering the model's dictionary.

It's changing it so that it understands what you mean. That, I think, is going to be part of the craft: teaching models how to understand what we're looking for.
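One concrete realization of this "editing the dictionary" idea is textual inversion, where a new token embedding is learned from a handful of an artist's reference images. Here is a minimal, hypothetical sketch of loading such a learned concept with the open-source diffusers library; the embedding file and trigger token are placeholders, and this is an illustration rather than Invoke's own tooling.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A textual-inversion embedding trained on the artist's reference images,
# binding a new "word" to what they mean by a style (hypothetical file).
pipe.load_textual_inversion("./my-chiaroscuro.bin", token="<my-chiaroscuro>")

# The new token now carries the artist's meaning, not the pre-baked one.
image = pipe("portrait of an old sailor, <my-chiaroscuro> lighting").images[0]
image.save("portrait.png")
```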

The other piece that I think is really important is the ability to ingest other modalities. We now have the ability to take in pictures and say, give me something that's styled kind of like this, and we can regionally control that type of picture ingestion. One of the examples that I have on our Invoke feature page around control layers is that you can take a picture of a certain material and say, I want that material to be here on the armor; I want this character's armor to have this kind of aesthetic.
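Invoke's control layers are its own feature, but a rough open-source analogue of feeding a reference picture in as a style or material input is IP-Adapter. Here is a minimal sketch with the diffusers library, assuming the published IP-Adapter checkpoint; the reference image is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# IP-Adapter: a sidecar encoder that injects image features into the
# cross-attention layers, so a picture acts as part of the "prompt".
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference steers the output

material = load_image("rusted_metal.png")  # hypothetical material reference
image = pipe(
    "character concept, ornate plate armor",
    ip_adapter_image=material,
).images[0]
image.save("armor.png")
```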

So it's not just words now. It's using all of these different tools at our disposal to convey what's in our head: what do we think we want to see? And that is where I think the craft comes in, understanding that piece. But even then, you still need the same tools that you had in the past as an artist.

You still need the same raw skill set, because very rarely are these things perfect representations of what you want to see. You might want to change some details. You might want to go in and edit some of the texture; you might fix some things. And all of those skills of compositing, editing, overpainting, and understanding texture, depth, and light, those all still matter.

You're just applying them in a different way and using a lot of these as support systems to get assets in and parts of your composite image produced.

Adel Nehme: Maybe I'll ask this next, because I think there's an interesting thread in what you mentioned on the craft and the art, and the ability to lower the barrier to entry for creating high-quality content, whether video games or videos or audio or imagery.

What do you see the economics of being an artist looking like? I think that's the biggest question, because if the cost of creating high-quality art decreases and the supply of people capable of working with these models increases, what does that mean for the labor market, and how do you protect artists from disruption here?

Kent Keirsey: This is a bigger question than just artists, right? This is every industry. Supply of labor and demand for labor is an unknown in 5 to 10 years. I think it's easy to get into a world where you assume that we lose a lot of the need for labor, that labor demand goes down, right?

And I think that's true if we believe that the same number of companies and the same amount of productivity as today is all that is needed, that we only need what we have today, and now we're just figuring out how to make it a different way. Which I don't necessarily believe is true. Especially, let's look at media consumption, right?

It's easy to say, oh, well, I only have so many hours in a day, so there are only so many movies I can watch and so many games I can play, and so there's going to be a fixed demand for content. I would argue that while that's true, I don't necessarily enjoy every movie I watch or TV show I watch or game I play.

I don't enjoy them all the same. I wish, for example, that there were better movies, because I am tired of seeing Marvel movie 95, right? It's the same formula over and over again. I want something new. When Dune 2 came out, I was just like, oh my gosh, this is different. This is bold.

This is unique, right? And I want more of that. And I think we get more of that in a world with these types of tools. So I guess the difference here is maybe we have less demand for labor, but we have more opportunities for people to become the creative center of gravity. They are the ones doing the entire project.

They are the ones putting their voice out. And I think you can imagine a world where there are smaller-scale creative projects that have a small audience, but those audiences value those creative works very highly. And I think that's a viable path forward. But fundamentally, I also think we need some large-scale conversations about the social contract. I think universal basic income has been floated for a decade, ever since Bernie was campaigning, right? And there's an argument to be made that, hypothetically, if we're able to produce all the things that we need as a society with a significant degree of automation, we should be figuring out ways to provide that baseline human experience and cover that, so that we can then figure out: what is it that we want to do with the rest of our time?

What is it that we want to do above and beyond that? Some people maybe don't want to do anything. I know plenty of people who'd rather not be working; they'd rather just be doing their hobbies and having fun, right? There are some people who are active workaholics, right? And maybe they would be figuring out ways to go add more value to the world.

I think we just have to have a different perspective in a world with AI than we have today. And that's more of a broad social conversation, I think, than anything any one industry is going to solve on its own.

Adel Nehme: I completely agree, and I like how you thought about the individual becoming the creative center of gravity. I think the main risk with that is that you're going to have the gig-economification of almost any creative role, which definitely has a lot of dark sides. But re-evaluating the social contract is definitely something very relevant here.

Switching gears for a final discussion here: you mentioned that the craft of art, even when you're using generative AI, of editing, compositing, all of these things, is still going to be the same, just used slightly differently. Do you see models getting so much better in the next few years that a lot of that work gets abstracted away as well?

Kent Keirsey: I certainly think it's possible. But I think it's always going to be better controlled and guided by somebody who has the raw generative talent themselves. I think, fundamentally, what makes these models powerful is that they're able to create this very useful average of things that have already existed. That's one of the big complaints that artists have about these tools: they think they're all derivative. And to an extent they are. They are able to do things that have never been created before, but at the same time, you're not going outside the boundaries of what is known by the model. There are certain things it cannot do, but humans can. And that's where you need humans to be able to create that initial concept to train the model. And I always think that a human is going to be able to do more new things than models are. But yes, I think the technology is going to get better. Right now,

it is certainly not an ergonomic experience for many artists; it's a lot of work to figure this stuff out and use it. And I think over time that's going to continue to get better and better. I do think that there are always going to be some elements of this process where you really want to have the raw skill set.

I just fundamentally think that's true. Do we need everybody to have it? Do we need 100 percent of the people who are working on art to have it? Maybe not. I do think there's a world, for example, where an artist has a very useful tool that they've developed by training a model. They've got a model, let's say, that does trees perfectly.

They've crafted the model; they've taught it all the things it needs to really understand how to create trees. I'm just using a random example. And again, this goes back to the open model, having that as a foundation: they might be able to license access to the model to do that job for people.

So now, rather than me creating trees any time someone comes to me as an artist, it's: I have a model that I've created and that I continue to make better at doing this one task, and I have a model that you can use on your projects when you need to do this thing. And I think that is an avenue that has not yet been explored very seriously by artists, but which I think will be very, very valuable.

Or at least will give them an avenue for monetization of their work.

Adel Nehme: Okay, I think this is a great place to end today's discussion. Maybe one final question, Kent. What is one final call to action that you have for the audience today?

Kent Keirsey: my call to action, especially since I imagine you've got a fairly decent size audience in California there is active legislation right now that would curtail open source research and AI. It would really, really hinder the development of the technology and would actively set not just California, but our entire country back.

There are a couple of links on the site, and I can include a link for you to put in the show notes. But the call to action would be to support open-source AI and the open-source work that a lot of companies are doing to help make this technology accessible. Support that, advocate against the state bill, and help us keep pushing open technologies forward.

Adel Nehme: awesome. Kent, thank you so much for coming on DataFramed.

Kent Keirsey: Yeah, you bet. Thank you.
