
What Science Fiction Can Tell Us About the Future of AI with Ken Liu, Sci-Fi Author

Adel and Ken explore the intersection of technology and storytelling, how sci-fi can inform AI's trajectory, the role of AI in reshaping human relationships and creativity, how AI is changing art, and much more.
Jul 7, 2025

Guest
Ken Liu

Ken Liu is an American author of speculative fiction. A winner of the Nebula, Hugo, and World Fantasy awards, he wrote the Dandelion Dynasty, a silkpunk epic fantasy series, as well as the short story collections The Paper Menagerie and Other Stories and The Hidden Girl and Other Stories. His latest book is All That We See or Seem, a techno-thriller starring an AI-whispering hacker who saves the world. He also translated Cixin Liu’s seminal book series, The Three-Body Problem.

He’s often involved in media adaptations of his work. Recent projects include “The Regular,” under development as a TV series; “Good Hunting,” adapted as an episode in season one of Netflix’s breakout adult animated series Love, Death + Robots; and AMC’s Pantheon, with Craig Silverstein as executive producer, adapted from an interconnected series of Liu’s short stories. 

Prior to becoming a full-time writer, Liu worked as a software engineer, corporate lawyer, and litigation consultant. Liu frequently speaks on a variety of topics, including futurism, machine-augmented creativity, history of technology, bookmaking, and the mathematics of origami.


Host
Adel Nehme

Adel is a Data Science educator, speaker, and VP of Media at DataCamp. Adel has released various courses and live training on data analysis, machine learning, and data engineering. He is passionate about spreading data skills and data literacy throughout organizations and the intersection of technology and society. He has an MSc in Data Science and Business Analytics. In his free time, you can find him hanging out with his cat Louis.

Key Quotes

What unifies literature, computer code, and legal code is this notion of making human mental patterns manifest in the world. This is the most foundational way to think about technology. What is technology? It's a discourse about skill, about craft.

It's surprising that we have been able to develop AI without understanding how it does what it does, even at such a fundamental level that we're discovering that there are features that we could not have imagined, that are very much analogous to biological brains.

Key Takeaways

1

Consider the role of AI as a medium for art, not just a tool, by exploring how AI can facilitate new forms of storytelling that emphasize intersubjectivity and interaction with other consciousnesses.

2

Explore the potential for AI to disrupt creative industries by identifying roles where the highest ideal is fidelity to an original, as these are more susceptible to automation.

3

Be aware of the risks associated with commoditizing emotional labor and human relationships, as this trend could lead to increased reliance on AI for tasks traditionally fulfilled by humans, potentially impacting societal structures.

Links From The Show

Ken’s Books

Transcript

Adel Nehme

Hi everyone. Adel here. We're at an interesting time in the AI space right now. CEOs of major frontier labs, rockstar researchers, and people in the know keep telling us that we are headed towards some form of intelligence explosion.

If I look at coding and programming, which is one area where AI is making the most progress, we are not far from a world, and I think we'll be there in three to six months, where AI is writing 90% of the code.

Whether it's this decade, the next, or the one after that, we will most likely see AGI within our lifetimes. And the implications of that are so important, maybe the most important thing we can be thinking about right now. It's always hard for me to visualize and build a mental model of what a post-AGI world could look like.

That is, up until I read Ken Liu's The Hidden Girl and Other Stories and watched Pantheon, the show adapted from these stories. The show and the book depict what would happen if we're able to scale uploaded intelligence, the ability to upload human intelligence to the cloud, and present a really compelling story of what a post-AGI world could look like.

Now, if you're not aware of Ken Liu's work, it's best to describe him as a speculative fiction author. His books have won him awards, including the Hugo, Nebula, and World Fantasy Awards. He authored the Dandelion Dynasty, The Hidden Girl and Other Stories, as well as The Paper Menagerie and Other Stories. His latest book is All That We See or Seem, a techno-thriller starring an AI-whispering hacker who saves the world.

Prior to becoming a full-time writer, Ken was a software engineer, corporate lawyer, and litigation consultant, and you can really see that subject matter expertise bleed into his work. Through our conversation, we talked about how sci-fi and speculative fiction can inform the trajectory of AI, his views on the state of AI today and how it's been evolving, the impact of using AI in domains such as medicine and therapy, and how that can change our approach to communication and connecting with one another, the impact of AI on art, and a lot more.

This has been one of my favorite conversations of the year, and I'm really grateful to Ken for his generosity of time and spirit. I hope you enjoy it as much as I did. Now, on to today's episode. Ken Liu, it's great to have you on the show.

Ken Liu

Hi, there. Thank you for having me. 

Adel Nehme

I'm a huge fan of your stories, so I'm super grateful for your time and for joining us today.

Um, maybe to introduce you to our audience, um, you are an award-winning speculative fiction author. You've authored many books including the Dandelion Dynasty series, The Paper Menagerie and Other Stories, The Hidden Girl and Other Stories, and a lot more. Um, you're also a thought leader in all things futurism, science fiction, and storytelling.

So maybe first, expanding on that background, Ken, uh, I'd love if you can go a bit into your story and what got you here. Um, from my understanding, you grew up in China, uh, you moved to the US when you were a child, and then ended up at Harvard studying both English literature and computer science. And then you also went to law school and became an attorney.

So maybe, how have these experiences shaped your writing, and how have they led you on the path that you are on today?

Ken Liu

Oh, that's a great question. Um, so I wanna start by saying that this is one of those interesting questions you ask writers, you know: what is your story? Right? Um, and I find it interesting to note that writers, who are in the business of telling stories, are not very good at crafting their own stories.

And the reason for that is actually very simple. Um, I think it's because when we craft a story, when we tell a story, we know what the ending is. But the story of your life, the one that you live, you don't know what the ending is; it's the one that you're working on, um, as you go. Um, so I don't wanna give the impression that, you know, my life is some sort of master plan that I had in mind the whole time.

I think sometimes people tell stories like that, and I think it's often misleading. What happened to me is basically this: um, I try to do things that are interesting to me at the moment and then see how it goes. I think it's not just true of writers, but of everybody. You sort of live your life, and there's this thing called your nature. There's something about you, who you are, that determines what you end up doing and what you accomplish and what you find meaningful. The problem is, for many of us, this is something you have to discover; figuring out what your nature is requires discovery. And the only way to figure that out, I feel, is for you to try different things, to experiment, and to just sort of do the thing that's interesting to you at the moment and see where it leads.

But that's kind of what happened to me, right? I started out, um, being interested in literature, and then I was fascinated by computers. And then I wanted to understand how the law worked. Each time I just pursued the thing that was interesting to me along the way until, you know, at some point I realized that there's an underlying, unifying principle behind all of them.

Um, all of these are professions involving the manipulation and the construction of symbols. Uh, we are crafting artifacts out of symbols in every single case. And it turns out that what unifies literature and computer code and legal code, um, is this notion of making human mental patterns manifest in the world.

So this is what I think of as, um, the most foundational way to think about technology. What is technology? It's a discourse about skill, about craft. And it turns out I'm deeply interested in craft, in human craft, um, the way humans are technological creatures, that we cannot help but manifest our imagination in the real universe, whether it's in the form of stories or software or legal constructs or physical things built upon imagined symbols.

Um, and so that's kind of how I make sense of my journey so far. Um, my journey was, uh, this very interesting survey of the types of modern symbolic technology that humans build and that we rely on. And, uh, my deep passion in terms of storytelling is this engagement with human craft. What is it about human craft that reveals our nature?

How do we manifest who we are through the way we speak, through our craft? I find that endlessly, endlessly fascinating.

Adel Nehme

Okay. And you know, there's a lot to unpack here. I'm definitely gonna unpack a lot of that, kind of the nature of how we are technological creatures, and talk about AI. Um, but one thing that I saw you talk about that caught my eye, which I think really informs your writing, um, is your experience as an attorney.

So, uh, you were, um, a corporate attorney focused on technology patents, if I'm not mistaken, and that led you to become, uh, maybe, correct me if I'm wrong here. Yeah.

Ken Liu

Oh, it's, uh, it's my legal practice. Um, so, for a number of years I was a corporate attorney, um, and I practiced tax and corporate law.

Um, later on, when I, um, shifted away from law firm work, I went into working as a litigation consultant. So I was basically an expert witness for patent, trade secret, uh, you know, copyright cases. And I ended up, you know, becoming sort of an amateur historian of technology, because that turns out to be super important in those kinds of practices.

Adel Nehme

So that's the point that I was about to get to: how you've developed that technological, that historical understanding of technology, and how that informs your writing. When consuming your writing, what truly impacted me personally is really the world building surrounding the technologies you're exploring, right?

The worlds really feel lived in, and the way these technologies manifest themselves in your stories really feels grounded in the human experience. For example, reading, uh, The Hidden Girl and Other Stories, uh, I felt I was really able to build a strong mental model of what transformative AI could look like in the real world.

So maybe first, walk me through your approach to speculating about the future. How do you approach world building, especially as you explore different technologies, you know, whether VR, uploaded intelligence, or blockchain?

Ken Liu

So, if I were to, you know, examine my own world building practice, I would say that, um, I do many of the same things as other speculative authors and futurists, from what I know of their practice. But there are two things I do that are slightly different. So one is that, you know, all of us try to read as widely as possible. We try to absorb what's happening, and then we try to extrapolate from current trends into the future, right?

This is a foundational part of futurism and speculating about the future. One thing that I do slightly differently is, because, you know, I come at this from a technologist slash historian perspective, I try to keep in mind the contingent nature of technological development, and I try to avoid the narrative fallacy as much as possible.

There is a very, uh, strong tendency in the way we think about technology and technological development, where we think of the particular path we did walk as in some ways a determined, predetermined path, right? So this is the idea that, because we happen to have developed technology A after technology B, that sequence and that causal effect is some sort of universal law.

This is why we end up doing things like, in games, we construct things like technology trees, um, which creates the impression that that is how technology always evolves. But that's absolutely not true, right? There's a huge amount of path dependency and contingency in the way technology evolved. Um, we could easily have had a present in which electric cars are the dominant form of powering vehicles. Um, because in fact, um, you know, back at the turn of the last century, you could see electric cars, steam cars, and internal combustion engine cars competing for dominance. And it was not at all obvious that, uh, gas-powered cars would come out on top. We easily could have had a present in which electric cars were dominant.

And it's a number of factors, some of which are completely based on culture and fashion, that ended up leading to the dominance of, um, gas-powered cars. And if you want to make predictions about the future, you have to be very careful about not letting a good story get in the way of the fact that this stuff is just deeply unpredictable.

The history of technology shows that whenever there's a problem that a lot of people are trying to tackle from around the world, it's very, very difficult, looking ahead, to see which of those approaches will succeed. However, once one of the teams makes a breakthrough and, you know, they start to come out ahead,

um, everybody else follows that particular success. And so you go down that particular path, and looking back at it, it's very easy to construct the story for why the triumph of that particular approach was inevitable. But it's anything but. You look back on history over and over again, and it turns out that success was contingent.

It's by chance, it's by luck; it has very little to do with inevitability. And so when we're speculating about the future, I try to keep that in mind, that nature of contingency, and, and the fact that things are actually just fundamentally unpredictable. So that's one thing I keep in mind in my world building.

The other thing that I do, I think, that's slightly different from a lot of people is I put a lot of emphasis on subjectivity, and I do what I call consciousness-centric world building. So what does that mean? Um, in a lot of world building, uh, approaches, um, the emphasis is on working on the coherence of the entire system.

So you figure out the geology, you figure out the geography, you figure out what the weather patterns are, you figure out how culture might arise and lead to, uh, philosophies, lead to certain approaches towards, um, awareness and, uh, attitudes towards the past. Um, you work out everything in this very fundamental-principles-first, then-derive-the-consequences sort of way.

And everything is sort of described in this very objective manner, as though you are really working out everything from fundamental principles. I think that approach has its place, but it's not my favorite approach. I do some of it. Um, my favorite, uh, way of world building is very subjective. What I do is I try to remember that, you know, a foundational aspect of human nature is we have this deep, deep tendency to think: as above,

so below. We have a tendency to map the macrocosmos to the microcosmos inside. We like to map the universe out there to the universe in here; we like to map what is in the collective unconscious out into the world. What I mean by that is our understanding and construction of the universe is not objective at all.

It's deeply subjective. Um, fundamentally, we do not understand the universe in an objective way, but as stories, right? This is where mythology, this is where religion, this is where true spirituality comes from. Um, and we have to acknowledge that and accept it. The way we shape the world around us is deeply motivated by the collective unconscious that we share and our internal feeling for how the world ought to be.

We map the objective universe into ourselves, but we also map our subjective understanding of the universe outward. The two are not separable. So when I do world building, one thing I emphasize is trying to see the world I'm trying to construct from specific vantage points and from specific subjectivities.

So for example, if I'm trying to write about a future technology, I think about how does this technology look from the perspective of the person who invented it? How does it look from the perspective of someone who hates it? How does it look from the perspective of someone who criticizes it? How does it look from the perspective of someone who hacks it?

How does it look from the perspective of someone who tries to sell it? All of these individuals relate to that technology in a different way, and they see it differently, and they will understand the world differently. As a result. All of them have an aspect of the truth. And so if I were to do the world building correctly, I have to account for all of their subjectivities and try to really see it from their perspective.

So I think that's, you know, a trademark of how I write my stories and construct the future. I do it in a very subjectivity-driven manner. I am very, very conscious of, and I try to emphasize, the way the future looks different depending on who you are. Something that looks like a utopia from one perspective will look very dystopic from a different perspective, and something that looks horrible from one perspective will have its justification from a different perspective.

And I try to, you know, make that part of the world building and really emphasize that.

Adel Nehme

That's really wonderful. And now that you put that in perspective, I can really see those subjectivities come into play when I reminisce on the stories that I've read, uh, that you've written. Um, you know, I would love to discuss more about speculative fiction and how effective it is at predicting, uh, the future.

But I do wanna talk to you about AI, and I wanna be mindful of your time, Ken. Um, so we talk a lot here on this podcast about the present of AI, right? But I think, uh, given that we have you here, it would be, uh, great to try to speculate about the future, even though technology is contingent and it's really, uh, hard to predict the future.

And I'm gonna talk to you about the impact of AI on art shortly, which I'm very excited to get your take on. But first, I want to get kind of your intuition, or kind of, um, your understanding, or maybe evaluation, of the current state of AI today. Um, you know, you've been following the space well before the rise of LLMs.

Um, what has surprised you, if anything, about the current crop of large language models and, you know, image and video generation models today? 

Ken Liu

Oh, yeah, hugely. I mean, there are surprises literally every day. Um, you know, every day there are new papers coming out, and there are revelations being made left and right, you know. As someone who has been observing AI and, and deeply interested in the topic and trying to understand it for decades,

I will say that this moment is surprising to me, largely because it's one of those moments where the technology has outstripped the science. So, you know, it's very commonplace, um, in science fiction to think that science drives the technology. So you have a fundamental understanding of a scientific principle, let's say atomic theory, or how atomic, you know, fission and fusion work, and then you try to work out the practical technologies that you can construct on those scientific principles. That's largely the sort of thing that drove the golden age, and I would say the silver age, of classical sci-fi.

What we're living through is a slightly different moment, uh, but it's not that uncommon historically. We've had many periods where the technology has outstripped the science, meaning we know how to build something and we know how to make it work, but we do not understand the fundamental principle by which it works. We know how to do something; we just don't understand how it works, not from a fundamental scientific perspective. That's where we are with AI, which I think is deeply fascinating.

It's also what makes writing sci-fi in the current moment, uh, somewhat untethered. So I'll give you a concrete example, right? So there's this question of what is AI, right? Um, if you read popular commentary on this, you'll see that there's a huge amount of backlash against even the term AI. There are people who are saying, this is not intelligence at all.

This is nothing more than, you know, fancy statistics; it's just machine learning using statistics. Which I understand as a perspective, but I think this is one of those perspectives that, because we don't understand what's really happening, gives too little credit to what is happening, that doesn't acknowledge the true magic of what actually is happening.

Fundamentally, we have a moment in which we are able to construct these machines that are capable of incredible feats. Um, and we have to acknowledge that, uh, these machines are now capable of doing things that could not have been imagined even, you know, five years ago. And to say this is nothing more than fancy pattern matching is silly.

This is sort of like saying, well, the human brain is not all that interesting either; it's just doing fancy pattern matching. Yes, you can say that, but it's not very insightful and it doesn't lead to anything. Saying that, you know, all AI does is just statistical correlations? Well then, who cares? Okay, then your brain is also doing nothing more than statistical correlations.

Who cares? It's a very, um, reductive and sort of dismissive view that doesn't really get at the heart of it. What is interesting to me is the degree to which folks who are experts in the field are in deep disagreement about what is really happening, right?

So on the one hand, you've had Apple researchers publishing a paper not that long ago, basically arguing that large language models do not do any kind of reasoning in the sense that we humans understand it. Um, these things are in fact doing some sort of high-level pattern matching. But, you know, if you feed the language models certain math problems and give them irrelevant information, you can see these models make mistakes.

This is evidence, according to this paper, that these models do not, in fact, reason in the way we understand it; they're doing some sort of pattern matching that can be easily thrown off. So that's one part of it. Um, now when this paper came out, I remember it being very controversial. Half the people were like, well, this is obvious.

This is what we've been saying all along. And the other half are saying, this is nonsense. Like, this is an example of Apple not understanding what is actually happening; this is deeply flawed. The fact that experts cannot even agree on the fundamental thing, like, do large language models reason? To me, it's fascinating. You know exactly what I'm talking about: technology outstripping the science.

Now, on the other hand, Anthropic just put out some new papers where they apply some techniques from biology to probing these neural networks. They're doing things like using a proxy network. They're doing a lot of, like, sort of reconstruction of what is really happening. So you have to take all of this with a grain of salt. But if you read the papers, you see that these neural networks seem to be exhibiting features that are very analogous to human brains. There are nodes and clusters activated by specific words, phrases, types of patterns.

But there is also a level of, um, what I call circuitry, I guess, right? There are components that appear to be doing some sort of reasoning. There are nodes that are able to abstract the concept of opposites or antonyms. There are nodes that are able to abstract the concept of A contains B. There are nodes that are able to understand concepts such as state versus country versus city.

Um, and if you actually follow the paper, it's very clear that, whatever else you wanna call this, it is a form of thinking, a form of reasoning. It's not just pattern matching, right? So if you read these papers and you really absorb them, these neural networks are functioning very analogously to our understanding of biological brains.

That, to me, is also incredibly fascinating. So again, it's surprising that we have been able to build these things without understanding how they do what they do, even at such a fundamental level that we're going in there and discovering that there are features in these things that we could not have imagined, that are very much analogous to biological brains.

So I can only imagine the kind of discoveries that have yet to be made as we probe deeper into these models, even as we keep on constructing smarter and smarter models. So, you know, if you ask me what is surprising, it's the fact that we don't really fundamentally understand what is happening, and we're catching up.

And that to me is really cool, that we are able to build something without really understanding the fundamental principle behind why it works.

Adel Nehme

Okay. And I wanna expand a lot on how you imagine, you know, transformative AI playing out in society. But I wanna focus in here and pause a bit on this concept of the technology outstripping the science.

And in a lot of ways, you talk about here how many experts in the field fundamentally disagree about the state of LLMs and AI. And, um, this really hearkens back to the discourse within the AI community today, which feels like it's centered around these two factions: accelerationists and decelerationists, or doomers, depending on who you ask.

And you see a lot of AI experts, right, falling into either camp, right? Like, you see folks like Geoffrey Hinton or Yoshua Bengio, for example, being highly worried about the state of AI and where it's headed, right? Um, even kind of playing up the existential risk, uh, scenario. Um, what do you think of AI discourse today, especially when taking into account kind of these different factions, and especially taking into account the history of many technologies that you've read about and seen, and the discourse surrounding them, whether it's the internet or cinema and so on and so forth?

Ken Liu

This is a really difficult question. It's a great question, but it's a difficult question to answer. I think it's one of those moments where many of us need to remember to keep an open mind. The fundamental fact is we don't really know what we're dealing with, right? If you're very dismissive of this and you say, well, this is nothing more than fancy pattern matching,

there's no point in worrying about something that doesn't exist. I mean, why worry about your toaster taking over the nuclear codes? It's not a relevant thing. I think that's being very dismissive of something that could fundamentally be, um, important. Again, if you read these papers and you agree that there's some form of thinking that is happening in these LLMs, then it's not a very big stretch to think that something that thinks can get to the point where it thinks about its own improvement, its own place.

It's not insane to believe that it will be very difficult, if not impossible, to keep control over something like that. So I think the idea that somehow AI could be an existential threat is not an out-there idea. It's an idea that has some plausible, um, support and some evidence for it. And the thing that I also wanna emphasize is that,

even if you don't believe AI will become self-aware, and we're gonna have, you know, a machine intelligence and a machine, you know, uh, superintelligence or self-aware AI or anything like that, you still have to worry about the fact that humans are disguising the degree to which they're relying on AI.

That, I think, is far more important and a bigger threat to us in the immediate timeframe. Already, we're seeing things like scientific papers being published that contain evidence of being AI generated. Already, we're seeing textbooks being published that contain phrases like, as a large language model, I am incapable of blah, blah, blah.

Yeah. It just shows the degree to which a lot of people who are in positions of power are lying about the degree to which they are using AI. You know, I won't belabor this point, but you know, we've already seen some evidence in contemporary politics of the leaders of some of the most powerful nations in the world perhaps using AI to make important policy decisions without disclosing that being the case.

Now, people treat this as some sort of joke. It's not; this is actually horrifying. The degree to which humans in positions of power, lawyers, administrators, decision makers, are using AI as the fundamental tool for making judgments without disclosing that to be the case, that is a very big threat, almost existential, I would argue.

And we need to be much more critical and hyper-aware of this sort of thing, without even getting to the point of worrying about killer AI being out of control. Um, the alignment problem, you know, or whatever you wanna call it, to me is even one step removed from the immediate crisis of getting humans to be aligned with humans.

It seems that we have plenty of humans in positions of power who are not aligned with the interests of humanity, who are willing to be lazy, and in that laziness abdicate our responsibility to think for ourselves. Um, regardless of whether you think machines are thinking or not, some humans are already using them to substitute for thinking, and that is a huge problem.

Adel Nehme

Yeah. And we're gonna discuss that shortly. But you know, you mentioned here, uh, the short-term risks of, uh, AI today, and I couldn't agree more on the, uh, autonomy risk, right, and kind of the letting go of human agency to these AI systems. Um, but you know, reflecting back maybe on the past three years since ChatGPT was released, and how it's been used, and how LLMs and AI systems have been used, what are possible scenarios or probable scenarios that you have in your head that could play out in the next few years?

Right? And what are you worried about the most? What are you excited about the most? 

Ken Liu

I think exactly how much AI will impact, uh, our economy and our society is actually a very difficult question to answer, right? So let me try to play this out for you. So again, we don't fundamentally understand what is happening, right?

On the one hand, there are absolutely incredible things that AI has already done that I think people don't seem to understand or appreciate. I mean, for instance, you know, concurrently, AI that's very similar to large language models, or at least using the very same kind of principles in its construction, has solved the protein folding problem.

This is such a monumental change that I'm not sure a lot of people really understand what has happened. The protein folding problem was a thing that I had understood as a teenager to be unsolvable computationally. Um, and we have actually solved it. This is a fundamental change. It's such a big change to the way we think about the construction of medicines, about engineering biology, that, um, you know, there's nothing you can say about it that would overstate the impact of how big it is.

And yet, if you look at the popular discourse about ai, almost everybody is talking about. Large language models helping students to cheat, and no one is talking about this actually impressive feat. Um, similarly, AI has made tremendous, tremendous, uh, strides in helping us understand how brain works. You know, this mapping between neurons and functions, neuro structures and functions, tremendous progress.

Again, I see virtually no discussion of any of that. It just seems like we're so obsessed with the most trivial, unimportant stuff in some sense, uh, whereas these huge monumental changes are, are just not being discussed at all. Um, but then I wonder, maybe I am overestimating the impact of ai because if you go back at historical evidence of, of what has happened, it's actually very hard to predict just how transformed the importance of technology is.

So again, if you look at sci-fi written in past decades, they make a huge deal about the advent of the Atomic Age and the Space Age. Space technology and atomic technology were supposed to be transformative for our society, and I would argue that they have not been; their impact has been much less than anticipated. Take atomic energy, right?

That was seen as a transformative technology that would fundamentally change the way human societies fulfill their energy needs, and how we would all be different. But the reality is that atomic energy is still a very small part of the world's energy supply, and in fact I would argue that atomic energy has not really advanced very much in the last few decades; it has basically stagnated.

It has not been the transformative game changer we thought it would be. Similarly with space travel: space technology has certainly made some impact, I would say, but not to the degree that science fiction writers imagined. It, again, sort of stagnated and did not ultimately have as much impact on your average person's life as, say, the internet has.

So I don't know what AI is going to be. Is AI going to be another atomic moment, or is it more like an internet moment? It's very hard to tell right now. Again, there are experts who fundamentally disagree about the impact of AI. Some people think AI will cause many of us to lose our jobs.

Some do not think that will be the case. Some think that AI, in its current architecture, has plateaued or will plateau very soon, and we're not going to achieve the singularity. Others think it's right around the corner. So I think it's very hard to predict in this moment what's going to happen, again because we don't fundamentally understand what these models are doing.

I don't think it's possible for me to predict whether it's scenario A or scenario B that we're looking at, but I think it's an incredibly exciting moment, and I look forward to more understanding of what these models are doing, so that we can in fact speculate in a more responsible manner about what's happening.

Adel Nehme

Okay. And coming back to that point on the degree to which AI will impact the GDP or the economy, for example: oftentimes I think it's sector-dependent, right? Increasingly I feel like many fields are headed towards a fundamental transformation. A good example comes from your former line of work, the legal profession,

and how the billable-hour model is set to no longer be viable if things continue the way they're going. Maybe coming back to that point you mentioned about leaders using LLMs and generative AI to abdicate decision making: how do you view the social contract and the human condition evolving in a world where a lot of the thinking, what makes humans valuable

and what makes them feel meaningful, is done by machines?

Ken Liu

That's a really deep philosophical question, and a really interesting one; I think about this a lot. So let me try to walk through some interesting things I've discovered around this topic that I don't quite know how to think about.

It used to be, right, that when we speculated about machines gaining intelligence and the ability to do a lot of tasks that humans used to do, the belief was that humans would want to interact with other humans, that humans would fundamentally trust other humans more. In practice, we have not found that to actually be the case.

So, for example, one of the more surprising things is that a lot of humans are very happy with, and in fact prefer, interacting with machines as therapists over actual humans. This has been surprising to a lot of people, myself included, but it turns out machines are much more capable of, say, being empathetic

and deeply supportive, in a way that even human therapists are not capable of all the time. And to speak of the practice of law or medicine, I think that's true too. We used to think that the human touch doctors and lawyers have, the way they can make clients feel comfortable, was going to keep us around.

And it turns out that's not necessarily all that important. Patients and clients are perfectly happy talking with machines. If you look online, the number of people who are happy that ChatGPT is able to solve a legal problem for them is overwhelming.

It either speaks to a need that has not been met by the legal profession, or to a fundamental distrust or even dislike of human lawyers that human lawyers have been neglecting. And similarly with human doctors. So I think we need to fundamentally rethink a lot of these models.

Perhaps humans don't necessarily enjoy having other humans do these things for them; they may enjoy interacting with machines a lot more. It's not clear. So let me speculate on this a little bit. I think overall there's a trend in the modern world of doing something that I call commoditizing emotional labor, right?

So what do I mean by that? I mean there's a messiness to traditional human relations that in the modern world we've tried to routinize, to proceduralize, and to reduce to a form that can be more neatly and cleanly classified. In traditional societies, there's a huge amount of

acceptance of the messiness of relationships. In the past, it was not very clear what the various roles were; professional and personal relationships were intermixed in a way that was incredibly messy and problematic, and we just sort of accepted it as the way it had to be.

But obviously this sort of messiness is bad for productivity, and in a late capitalist society there's a lot of pressure to move away from that messiness and to clarify roles. That's why people entering the workplace now demand things like checklists; they demand jobs with clear descriptions of roles and responsibilities.

We frown upon workplace relationships, because we want workplace relationships to be professional. We want, more and more, to clean up the messiness of human relationships. So my view of this is obviously different; I'm telling you a different story than the traditional narrative. The traditional narrative is that this sort of thing is good.

I'm sort of telling you that this is an evolution driven by capitalism, and whether it's good or not depends on your perspective, right? So we have much clearer roles, much clearer job descriptions, much clearer requirements, much clearer chains of command, a much clearer separation of the personal from the professional.

But simultaneously, we've also moved to a place where we're now describing a lot of emotional messiness, the sort of thing that humans used to do for each other, as emotional labor. Now, emotional labor is a very interesting term, because there's a Marxist tinge to it, right? Why describe something as labor unless you're willing to pay for it?

And we do. We have turned emotional labor into something that we pay for. So, for example, it's now deemed not a good thing for men or women to rely on their friends as a source of support, right? We often say, well, you should not be talking about that to your friends; go pay a therapist to do it.

Right? That's a very interesting idea: supporting your friends, trying to comfort them, trying to offer them this sort of unconditional love, if you will, is seen as the kind of emotional labor that we should not do for free. So if you want it, you'd better pay someone for it. We're professionalizing it.

But the consequence is that if you're turning something into something you should pay for, something that is in fact capable of being exchanged in the market, then you also subject it to all of the risks of automation, of mechanization, of this general trend to reduce humans into components of a machine.

So I have a much less positive view of this move to separate, to clean out, to neaten up relationships into specific roles, into specific checklists. Because if you can professionalize emotional labor into something that your therapist should do for you, then why not automate that? Why not have a machine do it for you?

Now, I'm not necessarily saying this is good or bad; some people think this is a very positive change, and it may very well be. But the idea that humans will always prefer other humans for relationships is not something we should just take for granted. I'm not sure that's true. There's a lot of evidence that,

as a result of our relentless push to routinize, proceduralize, and commoditize human relationships and emotional labor into something you can pay money for, we're also pushing these tasks into something that machines can do better than humans. So this is one of those interesting things: the relentless drive of capitalism to reduce us into components of a machine

also paved the way for machines to take over. I don't necessarily think it's good or bad; I'm just describing it as a thing that is actually happening. And then we need to think about, back to your original question, what this means for human agency, for meaning, for how we find meaning in our lives.

Well, it's not a thing that's unique to this age of machines, right? How do you think about friendships and intimate relationships in an age where we're supposed to dump our problems on our therapists? That already is a challenge to your ideas about what relationships are.

We already seem to be saying that relationships need to have a component of exchange to them, that some things are not appropriate for human relationships in the normal course anymore; they have to be turned into something that can be counted as part of the GDP. So that, to me, is the beginning of this question of what we mean by finding meaning in human existence, in what we do.

Similarly with the way we think about art. It's always bothered me a little bit that we think it's entirely okay to treat artists as just another form of labor in the market. In communism, they treat artists as members of the worker class, and they're supposed to simply be mouthpieces for propaganda.

But I'm not sure in capitalism we're doing things any better. We seem to be treating artists as just another form of worker whose output has a market price and needs to be valued in that way. Well, if you apply that metric to the things that artists do, then you're fundamentally saying that the meaning of an artist's work is how much it can sell for. You'll have reduced human

meaning to something that you can put a number on, that you can buy futures on and invest in, right? That, to me, is already a deeply problematic shift. So before we can even talk about machines replacing artists or whatever, I think we have to examine this idea that art is, in our conception, increasingly seen as just another product or service whose value is to be measured by money.

I'm not sure that is the best way, or the only way, to do it, but that seems to be the direction we're going in. So even before we talk about AI, we sort of need to come to terms with that.

Adel Nehme

Yeah, lots to unpack here. You mentioned something that really caught my eye, when you talk about relationships today and the productization and commoditization of human connection in a lot of ways.

And my mental model completely agrees: this is the byproduct of being in a late capitalist society, as well as of the currents of technology driving us in this direction. And that's leading to a society that's quite atomized, quite individualistic, that feels very lonely in this world.

Do you feel like AI is a force that will accelerate that movement or that trend?

Ken Liu

Yeah, what a great question. I absolutely agree with you. I think the overall relentless pressure of late capitalism is to make individuals more atomized, because the atomized individual is far better for GDP, right?

If all you're concerned about is maximizing GDP, then individuals who are not tied to other individuals, who are just packets of talent that can be easily relocated and allocated to the most efficient use, are exactly what you want. So if you're looking at this as an ideal capitalist, you would think the best future is one in which humans have no human connections whatsoever.

They have no roots anywhere, no families, no attachments. They can just be deployed in the most productive use anywhere in the economy, and we will have a wonderful, maximized economy with the best GDP ever. That's the utopia.

Adel Nehme

Which is really the promise of AI, if you think about it.

Ken Liu

Yes, yes it is. Right?

Yeah, it is. I mean, whatever needs you have, the AI will cater to you; AI will be your best friend, your therapist, your spouse. What do you need other humans for? Now, I think a lot of us instinctively feel there's something deeply wrong about that, something deeply not

right about that future. I'm not sure many of us want to live in that future. That, in fact, is just a more realistic version of the future of The Matrix. I mean, that's basically what we'd be. And I don't think many of us would really enjoy that, and yet we seem to be marching down that road step by step.

I just don't like it. So I want to try to tell a different story, which is: I think that's not inevitable. Again, like I said, I don't think anything in the history of technology is inevitable. This idea that something can be inevitable, I'm deeply resistant to. There's a Marxist narrative of historical determinism which I've always been deeply against.

I think it's one of those religious-like beliefs, but it's not true religion; it's an ideological belief, which is not the same thing as a matter of faith. It's a sort of pseudo-faith: a belief in the inevitability of the Marxist march toward progress. But humans do not, in fact, live according to these plots.

So I don't think that's inevitable. I think there is a future in which something very different will come about, a future in which AI can be a promoter of genuine, deep human connections and relationships. So let me give you one example. I think a lot about art, right?

About the place of art in late capitalist society, and in a future in which AI can easily generate entertaining art for people. A lot of the problem with art today is that many of our most powerful, important cultural artifacts are created with a lot of money invested, and they have to make a lot of money.

So we have this situation where it's very difficult to take risks and try to put out something new. If you're going to invest so much money in it, it's much better to invest in something you can predict will generate some sort of return for you. So, you know, more Marvel Cinematic Universe movies, that's a good bet, right?

More remakes, more sequels, more things we have figured out will work; let's just keep doing that. But at the very same time, we're also seeing an opposite trend: the increasing fragmentation of the media landscape, because people are now interested in micro-niches. The younger generations spend at least as much time watching YouTube and TikTok as they do TV and movies.

I would say many of them probably do it more. They find TikTok and YouTube Shorts far more entertaining, interesting, and useful for their entertainment needs than the great big, expensive prestige TV shows and films. And that's something that I think the entertainment industry does not know how to respond to.

They don't know how to deal with it, and they just pretend it's not happening. But it's happening. And I think there's actually a positive outcome in this for most of us writers. Many of us don't sell that many copies, and we're barely hanging on. And beyond the money aspect, there's a certain loneliness to what we do, because we're creating for a very small fan base, for very few people who actually enjoy what we do.

But I'm not sure that's true. I often think the problem is connecting artworks with the people who will really, really enjoy them. I think there's a vast number of people, not as many as a bestseller, but certainly more than the few hundred copies being sold every year for most books, who would actually really love these books if they only knew they existed. And connecting readers with books they would actually love is a problem that has not been solved.

It's something that BookTokers and BookTubers and Bookstagrammers are trying to solve, but they can only do so much. I think about the possibility of AI solving that problem. Today our recommendation engines are based almost entirely on popularity: the idea that if you like this thing, and somebody else also liked this thing, then the other things they like, you might like too.

And we end up doing a lot of promotion of things that are already popular. But I don't think that has to be the case. You could imagine a future in which an AI model is trained specifically on your taste, where you have deep conversations with the AI about why you like something and what it is about that thing that really excites you.

Now, one of the great things about AI is that it can go out and read all the books that are published every year, the millions of them. So you can imagine talking with an AI, really going in depth on exactly why you like something and what it is about it that resonates with you,

and sending the AI out to look at all the books being published. The AI will find the one or two books that have no reviews on Amazon but which are perfect for you, which you would never, ever discover any other way. And that can be a really beautiful future, one in which you're also connected with other fans and other writers; part of why we read is that we want to interact with other fans.

But here is the possibility of AI actually building up communities, sizeable communities, around works that are otherwise completely dismissed or completely forgotten, building up these little micro-fandoms in which people can have conversations around the works they absolutely love, which they never would have discovered any other way.

I think fostering these kinds of micro-communities, these connections, these interest groups, if you will, using AI to spark the kinds of connections that are not otherwise possible, would be a great use of AI, sort of the opposite of the kind of dystopian social media landscape we have.

Instead of driving engagement where people are just arguing with each other, what if we had an AI whose premise is to connect people over the fact that they all love something and just don't know it yet, or even to find things they would love if only they knew they existed?

That kind of deep connection would be wonderful to craft, and I don't think it's impossible to imagine AI doing that.
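For technically minded listeners, the contrast Ken draws above, ranking by popularity versus matching against a personally distilled taste profile, can be sketched in a few lines of Python. Everything in this sketch is invented for illustration: the titles, reader counts, "embedding" vectors, and function names are all hypothetical, and a real system would learn embeddings from book text and from conversation with the reader rather than hard-coding them.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A toy catalog: reader counts stand in for popularity signals, and the
# vectors stand in for learned content embeddings of each book.
catalog = {
    "Mega Bestseller":  {"readers": 1_000_000, "embedding": [0.9, 0.1, 0.0]},
    "Obscure Gem":      {"readers": 200,       "embedding": [0.1, 0.8, 0.6]},
    "Midlist Thriller": {"readers": 50_000,    "embedding": [0.5, 0.4, 0.2]},
}

def recommend_by_popularity(catalog):
    # Popularity-style ranking: surface whatever most people already read.
    return max(catalog, key=lambda title: catalog[title]["readers"])

def recommend_by_taste(catalog, taste_profile):
    # Taste-style ranking: compare each book to a profile distilled from a
    # conversation about *why* you like what you like; popularity is ignored.
    return max(catalog,
               key=lambda title: cosine(catalog[title]["embedding"],
                                        taste_profile))

# A hypothetical taste profile leaning toward the themes of "Obscure Gem".
my_taste = [0.0, 0.9, 0.5]

print(recommend_by_popularity(catalog))       # -> Mega Bestseller
print(recommend_by_taste(catalog, my_taste))  # -> Obscure Gem
```

The point of the sketch is that the popularity ranker can never surface "Obscure Gem", while the taste ranker ignores reader counts entirely, which is exactly what would let a book with no reviews find its perfect reader.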

Adel Nehme

That's a direction I actually hadn't considered for the potential impact of AI. I've had a much more negative view of the potential of AI, really close to your story "Real Artists," and The Hidden Girl and Other Stories.

That's been kind of my view: AI systems creating content to maximize engagement and emotional reactions. And I wondered, how do you assign different probabilities? Maybe walk us through that other eventuality, what that potential future looks like, and how you weigh the different probabilities.

Ken Liu

Right, there is the very dystopian possibility, in which individual human creators cannot outcompete the machines in terms of what the machines are able to craft. This is the vision in which, let's say, I call it the "you want exactly the piece of entertainment that's crafted perfectly for you" model.

It's the holosuite model of entertainment, right?

Adel Nehme

And if taken to the next level, it could be personalized to the individual, and then you even lose intersubjectivity, right?

Ken Liu

Exactly, exactly. This is the narcissistic vision, where Narcissus looks into the water, right?

You tell the computer: I'm in the mood for a romance; here's the romance I want you to make for me; the hero or heroine looks exactly like me; and this is what I want to have.

Adel Nehme

Exactly. Yeah. 

Ken Liu

In fact, create a character like my ex and make him really suffer. Do all that. Yeah. And there is a possible future in which that's exactly what people want.

And then people actually love this, and they're like, well, I'm not going to read any more Jane Austen, this is way better. I don't want to watch Bridgerton, this is way better for me. You can imagine a future where that's the case. Now, I have written stories where that's the case, where people really love this sort of narcissistic entertainment and human art is just disregarded.

I also think, though, that that future is not all that likely, and here's why. I sort of look at myself and at what pieces of art I enjoy. And I will admit, sometimes I enjoy reading fanfic, and I look for those tags that are exactly the sort of things I want. And I read it and I'm like, oh, this is exactly what I wanted.

But if you told me that's all I ever got to read, I think I would be very unhappy. I also enjoy reading things that are completely surprising, because part of what I'm looking for is this sense of engagement with another mind, this intersubjectivity I keep talking about. Humans are fundamentally not solitary creatures.

We are not content with our own brains as the entirety of the universe. We're just not. We like the idea of interacting with other people. We like the idea of being challenged in our thoughts. We like the idea of being shown a new way of looking at the universe. I have had so many wonderful experiences of reading books that completely surprised me.

I read the back blurb and think, okay, I know what to expect, and then I read it and, no, it's nothing like what I thought it would be. I remember reading Gone Girl and just being utterly delighted by what Gillian Flynn was able to do and how she surprised me. I remember reading Some Desperate Glory,

this novel that came out last year, and I was just like, oh, this is beautiful, this is so delightful. It's surprising, it's interesting, it's not what I expected at all, and now I see the universe differently. I remember reading Paradise Lost, a very opinionated piece of art if ever there was one.

It does not cater to you. Milton is not a writer who caters to you at all; he's extremely opinionated and he wants to force his view on you. And that can be a huge turnoff for some people. But it's challenging, it's interesting, precisely because it is different. It's not there to please you.

He's there to please himself, and God, and then you just go with it or you don't go with it at all. And if you make the effort to go with it, you will learn something. I find that interesting. That's what people are really looking for, and that requires a deep subjectivity: it requires somebody who wants something, somebody who has a message they want to get out into the universe.

And I think humans will always want that in their art too. So my vision of the future, probabilistically, is that both will be true. There will be a place for the kind of art produced by AI just to please you. We all want some of that sometimes, and that's entirely okay. It is entirely okay to say, I'm just in the mood for something like that,

and I'm not ashamed of it. Why not? This is a beautiful part of what art can do too. But that's not the only thing I want. I also want the sort of things human artists create that will challenge me, surprise me, make me angry, but ultimately also make me see the universe in a different way, because I'm engaging with a different consciousness.

And in fact, you can even see a future in which AI, if we achieve AI consciousness, if we have an AI that is actually a self-aware entity engaging with the universe on its own terms... I would be very interested in reading a novel written by such an AI. I would be very interested in reading, for example, what a self-aware AI controlling a US aircraft carrier thinks about the universe.

If it wrote a novel or a memoir, that would be amazing. I mean, imagine reading the memoir of the USS Enterprise or something. That would be freaking awesome. I would love that.

Adel Nehme

Yeah, that would be awesome indeed. Maybe one last couple of questions on AI and art, because I'd be remiss not to talk about it, given that this is actually why we connected in the first place: your essay on Big Think, where you talked about AI as a medium of art, not just as a tool for artistic creation.

And this really comes into play with our conversation on subjectivity as well. So in the essay, and I'm going to quote here: "Just as the cinématographe," which refers to the movie camera, "transformed our relationship with actuality, the noemagraph will transform our relationship with subjectivity."

Maybe, what is the noemagraph, and how will AI change our relationship with subjectivity?

Ken Liu

Right. The noemagraph is a neologism I created, based on noema, the Greek word for thought or idea. And my theory here is that just as the cinématographe is a machine for

capturing, or writing down, motion, the sort of AI we have now, and the way it's trained, is really a machine for capturing thought, or subjectivity, the presence behind each thought. So what I mean is this: when the cinématographe was originally invented, people didn't quite know what to do with it. In fact, the Lumière brothers, the inventors of the first commercially viable movie camera, didn't really think this thing would have much of a future.

They thought it would be used for scientific investigation or something like that; they didn't really know what the film camera would do. It took somebody like Georges Méliès, a stage magician, to really figure out how to use the cinématographe to tell a story. And look at the earliest, quote-unquote, movies, the first motion pictures.

They were very different from what we now do with video. Those were just actualities, meaning they put the camera in place and had people move in front of it. The earliest motion pictures were almost like stage plays being filmed. So it's an example of a new medium whose potential is not understood,

and people just use it to imitate something that already exists. It's not until people figured out the language of film, the language of cinema, of reaction shots, of closeups, of montages, of using dollies, of spinning the camera around, of the J cut, the L cut, and all the rest of the vocabulary and grammar of the language of cinema, that we became able to tell stories that are not capable of being told in any other way.

I mean, in terms of the dramatic arts, cinema is very different from stage plays. Modern cinema has taught us to comprehend stories that do not follow the Aristotelian unities; there's no unity of action, place, or time, and we just sort of go with it. There are so many wonderful films that are just completely incomprehensible if you don't know the language of film.

And you can see young people today, with their TikToks and their YouTube Shorts, applying even more interesting techniques to the way they tell a story. Some of these are done in 60 seconds, but they pack so much in, and they use edits and all these amazing techniques to tell their story.

It's really incredible. I think AI is in a similar moment. The ai, you know, the AI we have, it is a no autograph. And right now the discussion around AI and art is obsessed with AI being used to imitate human production to basically, uh, pretend to be a human. I think that's just like using a movie camera to film a stage play.

It's ultimately just a transition state and not all that interesting. We don't, we are not interested in movies that look essentially like stage plays being filmed with a fixed camera, and I don't think humans will be interested watching. Movies or reading novels written by ai. In fact, I know that's the case.

And even if AI got to be much better at it, so long as AI is not its own consciousness, I'm not convinced that many human readers will enjoy reading AI-written novels, because why read an imitation of something when you can just go look at the real thing? It's just not interesting.

But if humans can figure out how to use AI as a medium to tell stories that are only capable of being told in this format, then I think we have something. And I don't really know what that looks like. My essay is an exploration of what AI-native art will look like, and my argument is that AI-native art will look nothing like pre-AI art, in the same way that actual cinema looks nothing like stage plays.

When artists figure out how to use AI as a medium, I think one of the foundational aspects of that form of art will be this playfulness with subjectivity, in the same way that motion pictures are playful with the sense of presence, the sense that you are somewhere else.

I think fundamentally the most interesting forms of AI art have to give you the sense of interacting with another subjectivity: captured in some way, fractured in some way, transformed. I think it's that interaction with another subjectivity that makes so much AI art, or artistic experience involving AI, so interesting.

The most interesting AI-art-adjacent activities are things like this: I was reading a New York Times article about a woman who loved to use ChatGPT to craft an ideal boyfriend. And essentially she uses this ideal boyfriend as a way to craft these fictional narratives about desire.

She's using AI to capture and refract our cultural narratives about feminine desire and about relationships, and she's using this machine as a way to author her own story. And it's that interaction with this subjectivity, this idealized form of desire, that is so fascinating.

Right now, one of the things people are using AI for is as a way to instantiate political cartoons, political commentary. A lot of people are using AI to generate these images, and what's really interesting here is that they're obviously AI. Everybody knows that they're AI, and yet real humans are interacting with each other through these images.

It's the captured subjectivities being instantiated in the art, as well as the subjectivities of the other members of the audience, that we're really enjoying this interaction with. We don't contemplate these images as artistic works in themselves. We don't think of them as great art. It's the fact that they facilitate that kind of interaction with our fellow human beings that we enjoy.

So I think AI, as a medium for art, will fundamentally be about intersubjectivity rather than anything else. And so I'm very excited to see how that plays out.

Adel Nehme

Yeah, that's also a really hopeful vision of how AI could play out, and I'm very excited for AI as a medium. Maybe let's focus in on another, more pressing issue: AI in the creative industry.

How do you see the future of jobs in the creative industry, and what worries you about the potential of AI to disrupt this space?

Ken Liu

This is a very timely question, and we have to work out exactly what we mean when we talk about creative jobs or jobs in the creative industry.

We have to be careful about what we're talking about here. Historically, technology has been extremely disruptive to creative industries; this is certainly not the first time it's happened. For example, when the camera was invented, a lot of artists lost their jobs. Now, there's a popular narrative that the artists who lost their jobs were portrait painters, because cameras now take portraits.

That's actually not true. Portraits have always been something you had to be very wealthy to commission, or you had to be a very famous person to have your portrait painted. Middle-class individuals did not really hire artists to paint portraits; most often it was the daughters who did this, because that was something women learned as part of a middle- and upper-middle-class education.

Jane Austen will give you a good representation of that; it's the sort of amateur work. So yes, photography did in fact replace portraits, but it's not like portrait artists lost their jobs. There were no people being paid to do this for middle-class families. They did it for very wealthy people, and they still do it today.

The artists who actually lost their jobs are often forgotten entirely by today's audiences. They are the lithographers, the people who carved the printing plates. You may be surprised to hear that in the pre-photographic age, the only way for a painting or an image to be mass disseminated was to have a person, an engraver, whose job it was to translate the painting into a printing plate, and then that plate could be used to print a mass number of copies.

Now, these engravers were artists in their own right, but here's the crucial point: they are great artists, but their artistry is judged by fidelity to an original. So let me repeat that, because this is a very important point. Engravers are artists. What they do is deeply creative. They translate from one medium to another, and what they do is incredibly difficult.

It requires a huge amount of skill and judgment and artistry. However, the highest ideal of their art is fidelity to an original. An engraver who innovates is not praised. An engraver must copy the original as closely as possible, even though it's being done in a different medium. That's the key. Now, as many people have noticed, if the highest ideal of an art form is fidelity to some original, then machines will ultimately replace it.

It's just a matter of time, because no human can do better at copying than a machine, and that's exactly what happened. Photographic printing processes were invented, photographic reproductions of paintings took over, engravers lost their jobs, and the entire industry was wiped out. Okay, so if you apply that analogy to today, what are the creative jobs that are at risk?

I think artistic jobs where the highest ideal is proximity or fidelity, the ability to approach some ideal original, those jobs are at risk. So what do I mean by that? Well, I think graphic design is one of those examples, because who do graphic designers please? They please a client who has in their mind some idea they like, and the client will say, no, no, I don't like this.

I want you to do this. Or, no, no, it doesn't feel right, do this. If your highest ideal is to please the client in that way, to approach the ideal in the client's head, I think you are at risk, because the machine will do it better. The machine will do it more sycophantically.

The machine will do it cheaper and faster. I think that's just inevitable. The same goes for other jobs where you are judged by how closely you can adhere to some idealized original, whatever that is. Take translation: if you're being judged by fidelity to some original, or if you're an actor and you're judged by how closely you realize the idea in the director's head, then I think you are at risk.

But the crucial point here is how you define that. What are the jobs in which your highest ideal is adherence to an original? Some literary translators would argue that what they do is not at all about fidelity; fidelity is not the point. They're there to excavate a new truth. Well, if that's the point, then perhaps machines will not replace those literary translators.

Similarly with actors who say: my job is not merely to implement and realize the idea of the playwright or the director; my job is to create a character. I am actually a creator. I'm going to create a character who is nothing like the one in the director's head, or only sort of like it, but is ultimately going to be a co-creation between me and the director.

If that's your view, then perhaps your job is not at risk. So I think it depends on how we conceive of these professions. Some directors who believe that the actor is not there to be a fellow artist, but merely to realize the director's vision, will perhaps enjoy working with AI actors more than human ones, but other directors will not.

Similarly, some authors believe that the job of the translator is merely to faithfully translate what they have written into a new language, even though that's not possible. If that's their view, then yes, a machine probably will do better. But there are other authors and other publishers and other readers who understand that translations do not in fact do that.

So-called fidelity is not interesting to them, in which case human translators will still have a place. That's how I think about which artists are at risk. You have to think about what it is that you do. Is what you do merely trying to realize an idealized original in someone else's mind?

Or are you trying to do something original yourself? 

Adel Nehme

Wow, a lot to unpack here. I wish we could dedicate an entire episode just to that, but I do want to be mindful of your time, Ken, so maybe I'll ask a couple of final questions. While reading many of your short stories, and watching Pantheon especially,

and I'm not sure how intentional this is, a lot of the stories you create leave a sense of awe and grandeur. It's a beautiful feeling, but very overwhelming: you leave with a sense of feeling very small, especially given the vastness of the ideas being discussed and explored.

And I often feel a similar form of being overwhelmed when looking at how quickly the pace of change is accelerating in this hyperconnected technological world. So, as someone who's so immersed in speculating about change and the future, how do you find stillness in your present?

Ken Liu

I wish I had a good answer for that, and I don't. I myself often feel overwhelmed by all that's happening. I guess fundamentally I'm an optimist. I think humans are quite incredible; as a species we're amazing. We are, in fact, imagination machines. The universe existed for billions of years without something called airplanes in it.

And then humans imagined such a thing, and then we made them real. Not only that, but between the first powered flight and the moment the first human stepped on the moon, only 66 years passed. That's pretty amazing, that as a species...

Adel Nehme

That is incredible. 

Ken Liu

...we can do that. That feat alone, to me, is the equal of the Iliad and the Odyssey and everything else we've ever done.

It's an incredible artistic creation that humans, through practice and craft, transformed the universe and ourselves in this fundamental way. I think it's beautiful. And because of that, you know, humans are at the same time so large and wonderful, and also so petty and silly. I'm endlessly fascinated by what humans are capable of and what humans can do.

And my stories are fundamentally about human nature and all the various ways in which it's wonderful, and all the ways in which it's cruel and horrible. I think those aspects are all true. That's who we are, and that is the most interesting story of them all. I try to tell those stories, and I try to remember that even on the days when the news seems most despairing, horrible, and hopeless, there's still hope, because we as a species are capable of great, incredible feats.

We always are. Even in our smallest, darkest moments, we are still capable of redemption, of doing great things, and I try to tell stories that remind us of that.

Adel Nehme

Okay, this is, I think, a great place to wrap up. Maybe, Ken, a couple of final questions: what are you currently working on, and where can people find your latest work?

Ken Liu

I am currently working on the second book in my techno-thriller sci-fi series, the Julia Z series. Julia Z is a young woman who's a hacker with a specialty in working with AI. The series is made up of near-future thrillers; they take place fifteen minutes into the future, the very, very near future.

Julia Z is somebody who has to use the very specialized set of skills she's acquired over time to bring down all sorts of terrible people in this world infused with AI, in which humans are questioning their own place. So a lot of my ideas about the future of art and AI, and the future of emotional labor in the age of AI, are in these books.

I wrote these books so that they can be standalone. If you read them in sequence, obviously you'll get more out of them, but you can read them individually in any order you want. And the very first book, called All That We See or Seem, is being published by Simon & Schuster.

It's coming out October 14th in the US, and it's being published by Head of Zeus in the UK, where it comes out on October 9th. I'm super excited about introducing these books to readers, and I think people will have a great time with Julia. Right now I'm working on the second book, and I'm having even more fun, just because now I know the world better.

I know her better. I think she's my favorite character since the Dandelion Dynasty. I'm having a lot of fun with her, and it's just really cool to speculate about and write about her adventures. And I'm very hopeful about the future of the human race.

Adel Nehme

I am very excited to read it, and everyone, definitely do check out Ken's Substack to get the latest news as well.

Maybe, Ken, a final question: whose work are you reading and consuming right now?

Ken Liu

Great question. Right now I'm actually going back to read some older books. I've found that when I'm working on a book, I need to read something that's very different from it; I don't know, it just helps my brain.

Reading something very different helps me stay creative. So I am reading Pride and Prejudice again, for perhaps the fifth or sixth time. And after that, I'm going to go back to Paradise Lost, because these books are so different from what I'm trying to write.

That's incredibly helpful. They remind me of the importance of human relationships. They remind me of our own place in the universe, our relationship to God, our relationship to each other, our relationship to the cosmos, and our relationship to history. These are important things to think about when you're writing something that is very in the moment, very much about the present or the near future.

I think it's important to keep in mind the long, long arc of history. So that's what I'm reading and thinking about.

Adel Nehme

Okay. Wonderful. Ken, thank you so much for coming on DataFramed. Really, really appreciated the chat. 

Ken Liu

Thank you, Adel. That was really fun.
