
From Deep Learning to SuperIntelligence with Terry Sejnowski, Head of Computational Neurobiology at Salk Institute

Richie and Terry explore the current and historical developments in AI, the NeurIPS conference, AI and neuroscience, AI’s shift from academia to industry, creativity in AI, superintelligence, and much more.
Oct 28, 2024

Guest
Terry Sejnowski

Terry Sejnowski is one of the most influential figures in computational neuroscience. At the Salk Institute for Biological Studies, he runs the Computational Neurobiology Laboratory and holds the Francis Crick Chair. At the University of California, San Diego, he is a Distinguished Professor and runs a neurobiology lab. Terry is also the President of the Neural Information Processing Systems (NIPS) Foundation, and an organizer of the NeurIPS AI conference. Alongside Geoff Hinton, Terry co-invented the Boltzmann machine technique for machine learning. He is the author of over 500 journal articles on neuroscience and AI, and the book "ChatGPT and the Future of AI".


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

We are in a period very similar to that of the Wright brothers 100 years ago—we've just gotten off the ground. There's still a lot of interesting issues and problems that we face to get to the point where we can trust AI and have a safe way of using it.

The ethics of creating an AI is actually no more complex than the ethics of creating a baby. Right? A baby is an intelligence that can go either way. You have babies that become extremely useful, devoting their lives to society in many different ways. Some babies go off in a bad direction: they become criminals, they become mass murderers. The ethics that we apply to humans have to be applied to AI, right? That's going to be absolutely necessary.

Key Takeaways

1

Instead of relying solely on massive, general-purpose language models, companies can gain a competitive advantage by focusing on smaller, more specialized models tailored to specific industries or datasets.

2

As AI products become more common, businesses need to focus on building systems that prioritize fairness, reduce bias, and address the potential for negative societal impacts. Incorporating ethical considerations into AI development from the start is crucial.

3

AI is still in its early stages, comparable to the Wright brothers’ first flight. Expect rapid, unpredictable developments that require constant adaptation and an openness to new possibilities.

Links From The Show

NeurIPS Conference

Transcript

Richie Cotton: Hi, Terry. Thank you for joining me on the show.

Terry Sejnowski: Hi, Richie, great to be here.

Richie Cotton: So today we're going to be talking a bit about the past of AI and also its future, but I want to start with where we are now. What's one of the most exciting, cutting-edge things you've seen from AI at the moment?

Terry Sejnowski: Well, AI seems to have something exciting happening every couple of days. So it's hard to say which is the most exciting. But to put everything in perspective, we're going through this very rapid period of development, and it's hard to predict where it's heading. Very hard to predict. But there's something that I think will put this in perspective, which is that I think we are in a period very similar to that of the Wright Brothers a hundred years ago.

That is to say, we've just gotten off the ground. And there's a lot of interesting issues and problems that we face to get to the point where you can trust AI and have a safe way of using it. By the way, I read a biography of the Wright brothers. Very, very thoughtful. And one thing I learned was that before their first flight, they crashed and burned many times, right, but the first flight reached a height of 10 feet and traveled 100 feet.

Now, it took a long time for them to go from that to something that was controllable. That was the most difficult problem: how you could make a reliable turn, for example. AI is going through exactly the same kind of teething problems, right? How do you make it reliable? How do you regulate it? This is all part of technology development that has happened over and over and over again.

Richie Cotton: That's quite a strong analogy, saying it's like the Wright Brothers moment where we had the first flight, because that was just such a huge moment for humanity. But at the same time, that also makes it sound kind of unimpressive. They only flew 10 feet high, for a hundred feet, and no turns.

But yeah, I suppose it shows the power of what's to come. Yeah, exciting things in the future, but maybe we'll talk about the past first. So, I mean, artificial intelligence, I think the term was coined like 60 years ago. I know you've not been working in the field quite that long, but can you talk us through some of the major developments that you think have got us to this point?

Terry Sejnowski: AI, when it was first developed, used computers that by today's standards are like watches, or actually not even as powerful as watches, right? They took whole rooms, vacuum tubes, and the only thing they could do well was logic. And so AI tried to develop the right programs using logic and rules and symbol processing to try to replicate human intelligence.

And I think it's fair to say that they vastly underestimated the complexity of that problem. So, the real advance, in my mind, that made modern AI possible was learning. In the early days, you had to write a program and take into account every possible complexity or detail that could possibly happen, which, of course, mushroomed combinatorially.

But if you can learn from examples, then all you need to do is have more and more data. And so learning algorithms for neural networks were developed in the 80s, which overcame some of the early limitations. And I was involved in that with Geoffrey Hinton mid 80s. But what we didn't know back then was how well it would scale.

And in computer science, the way that you classify algorithms in terms of their complexity is how they scale with the size of the problem as the problem gets bigger and bigger and bigger. And it turned out that neural networks scale beautifully. In fact they had to, because nature discovered long ago that more neurons are better: you have bigger brains, you have better capabilities.

Richie Cotton: Absolutely. Yeah, so, the sort of, the switch from all those sort of expert systems to machine learning, that, that was absolutely huge. And yeah, just having models that can learn has been obviously world changing. Okay I'd also like to talk about the NeurIPS conference since you've been involved with NeurIPS, I think for many years now.

What sort of role do you think this conference has had in shaping the field of machine learning?

Terry Sejnowski: So, the NeurIPS conference began in the 80s as a way to bring together people from many, many different fields. A tremendous diversity, ranging from people interested in mathematics, engineers trying to solve difficult problems like computer vision, statisticians who were dealing with large problems and large data sets. It really brought together an incredible number of different backgrounds, languages, and I have to say, during the first few years, it was very difficult because we didn't have a common language.

We didn't have a way of talking to each other. The neuroscientists would get up and give a talk about the basal ganglia and everybody would just glaze over. The mathematicians would get up and start putting up equations, and the neuroscientists' eyes would glaze over. The only ones that actually got through to everybody were the engineers, because they were very explicit: here's the problem I have, I have to understand this image, and here's how I'm going about it.

But overall, and I think we're having the 38th conference coming up in December, that community came together over decades and developed a real common interest, which in retrospect is how you do computing in high-dimensional spaces. High dimensional in terms of the number of parameters, if you're a statistician.

High dimensional in terms of the data sets that you need in order to be able to tweak those parameters. And like I say, nature was there ahead of us, right? We have a million billion synapses. Those are parameters in our brains, and we use those every day to deal with the complexity of the world.

And that's what it takes. And that's what it took.

Richie Cotton: Absolutely. So, it is interesting how there's a lot of overlaps between natural intelligence and artificial intelligence, but you're saying there's like, different groups of experts who just couldn't talk to each other for a very long time. So I'm glad that, well, actually, has it got better?

Like, are there AI people able to talk to the neuroscientists these days? Is there a better flow of communication?

Terry Sejnowski: There's a whole new field called NeuroAI. And next month I'll be going to a meeting sponsored by NIH. This is exploding. So back in the 20th century, an AI person would come to the neuroscientist and say, look, we know that intelligence is manipulating symbols. You should look for symbols in the brain.

And the neuroscientist would say, what do they look like? You know, there was no connection there. Well, now we have a basic architecture which is a very simplified version of the brain. It's far, far simpler in terms of the complexity of the units and the synapses, but nonetheless, the architecture is massively parallel,

a lot of highly interconnected units, and learning, right? So the basic principles are there, and now they can talk to each other. And so it's really exciting to see that happening, because I'm actually right in the middle of that. My training was in physics. Then I worked with Jeff Hinton and got a feeling for how a computer scientist looks at complexity.

And then my career was in neuroscience, and I recently won the Brain Prize, which is the highest honor that a neuroscientist can get, and founded a new field called computational neuroscience. So now I'm right in the middle of this because I can see both sides. And by the way, the transition that occurred in NeurIPS was machine learning.

In other words, what developed was all kinds of algorithms over decades. And I contributed to that too, independent component analysis. But there were graphical models, support vector machines. It was really exciting, because these were all algorithms that were really helping all these fields analyze their data, and there really was an explosion, right?

That, that without data, you can't do any learning. So, as the data became available, we had the algorithms, and we could, really take advantage of them.

Richie Cotton: That's very cool. And certainly something like support vector machines is a standard tool for any machine learning scientist or machine learning engineer. It's so commonplace, and it was born out of the whole NeurIPS thing. That's pretty impressive stuff. One thing I'd like to talk about is that who is doing work in AI seems to have changed a bit.

Like I think historically AI was very much an academic research field, but these days a lot of the big news is coming out of technology companies. Why do you think that's happened? Or, in fact, do you believe that's happened, that there's been a shift?

Terry Sejnowski: Yes and no. So, it's absolutely true that in order to build a large language model that's competitive, it's really expensive. It's like hundreds of millions of dollars, which of course no academic has access to. However, it turns out that a couple of companies have opened up their models: in particular, Meta has open-sourced its Llama 3, and Mistral, which is a startup company in France, has a very, very nice model that they've opened up.

And so what academics are doing is using these open-source models, and actually what academics are really good at is analyzing, tinkering with, and trying to understand the complexities of how these models work. So I think it's an advantage that these big companies have taken over production of the actual models, because it makes possible the next phase, which is going to be, as we analyze these models and figure out how they work, we should be able to improve on them. And it's already happening, because there's a big movement going on, not towards larger and larger language models, but towards small language models: ones that are built from smaller databases and are more focused on a particular application, the database for a particular company, a database of medicine, and so forth.
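Editor's note: as a rough illustration of what building one of those small, specialized models can look like in practice, here is a minimal sketch of continuing the training of a small open-weight model on a domain corpus. This is not anything described in the episode; the model name and the data file are placeholders, and a real project would add evaluation and likely parameter-efficient fine-tuning.

```python
# Minimal sketch (not from the episode) of adapting a small open-weight model
# to a domain corpus with Hugging Face Transformers. The model name and the
# "company_docs.txt" file are placeholders for whatever model and data you have.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "HuggingFaceTB/SmolLM-135M"          # placeholder small open model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token     # needed for batching
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text domain documents, e.g. a company's internal manuals or medical notes.
raw = load_dataset("text", data_files={"train": "company_docs.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-slm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```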

So, what I see is that there's going to be a broadening in the field. There's going to be a tremendous number of niches where academics and small startup companies can work with enterprises to help them get into the game. And that's already happening. By the way, do you know how many startups there are in AI?

Richie Cotton: Oh, I lose count. Do you have any sense of what the number is?

Terry Sejnowski: It's a hundred thousand around the world. I mean, this is like the Woodstock of AI, you know, everybody is out there.

Richie Cotton: that's pretty impressive. So, last year I tried to make, well, I made a cheat sheet just listing all the different startups in different AI fields. it was weeks of research and, you know, a few weeks later, it's like, well, it's out of date. There's just 10 times as many companies. Yeah. So, the field is just incredibly vibrant.

It's impressive stuff. Okay. So, I do like the idea that you've got companies and academics collaborating and working on different things. It seems like it's important to have this sort of diversity of opinions. So you mentioned the idea that language models are actually getting smaller sometimes.

Do you want to talk me through what's the trade off? Like why you want very large language models or why you might want something smaller?

Terry Sejnowski: Like I say, it's all about data, what data you have access to. And the large language models have gone for omniscience, in other words, knowledge about everything, and that's way beyond what any human is capable of. We become experts in very narrow fields. Not every field, not everything; some people can master two fields, but that's very rare. So in a way, the large language model is really not a good way to go if you want to model a human brain. It's a good way to go if you're trying to create one solution that fits all. But if you look at nature, it's really fascinating.

In fact, nature has not created one superintelligence, even though humans think they are one, they're not. There are millions of different species out there that have all adapted to their niche. And in order to survive, you have to have intelligence. You have to be able to understand what's going on in your niche, and you have to be able to reproduce, and you have to really be autonomous.

And I'll tell you, there is no large language model that's autonomous right now. It's like a big brain in a vat; it just depends on humans for constant power, energy, constant programming, constant tweaking, hundreds and hundreds of machine learning people hovering around these large language models, you know, coaxing them to get better.

That's not the way nature does it. Nature has a very, very sophisticated way of doing it: there are hundreds of areas in the brain, for example, that help us intelligently deal with the world. The social world, for example, is very complex. And right now, the large language models are just modeling one part, which is the cerebral cortex, which is, of course, an important part for intelligence, but it's not the only important part.

Richie Cotton: Yeah, that's a fascinating analogy. So what are the other bits that are missing? What are the other parts of the brain that we're not modeling?

Terry Sejnowski: It would take me literally days to go through this. And by the way, I have a whole chapter in my book ChatGPT and the Future of AI on this specific topic: what is missing. But I'll just give you one example. Okay? Under the cortex there is a structure which receives input from the entire cortex and then projects back to it.

It's called the basal ganglia. I mentioned this before, right? It's a confusing word, a Latin word. But really, it turns out, we know what its function is. It's a very interesting function. Here's what it does. It's important for a different kind of learning than you find in the cortex.

It's called procedural learning. The declarative learning in the cortex is something you're consciously aware of. You're not aware at all of how the procedural learning system works. It's in the basal ganglia, and what it does is it learns the sequences of actions that you need in order to be able to get future rewards.

And it's used, for example, if you're learning how to play tennis. Well, you start out, you're uncoordinated; gradually you start coordinating the muscles for the serve, and gradually you figure out how to hit the ball back. And that takes years and years and years of practice. And that is all done in the basal ganglia. If you ask some tennis player, how did you hit the ball? Well, you just put your racket out there, you swing it, right? You know, they have no clue. The same thing is true about musicians. They're just doing it automatically. It's all automatized by the basal ganglia.

Now, it turns out that this is called reinforcement learning in AI, and it has been used. It was used by AlphaGo, for example, to become the world Go champion. And it's all based on predicting future rewards. And what was amazing about AlphaGo was not necessarily just that it won the championship. It did it in a very creative way, in ways that no human had ever imagined.

It just shocked Ke Jie, you know, the world Go champion. He wasn't even in the game. It wasn't like, you know, it was close or anything, and he lost face. And in Asia, that is a really bad thing, because it wasn't just him, it was all humans.

He said that when AlphaGo won, it showed us that humans were wrong in thinking that we understood this game, that AlphaGo was much better than humans. And it took a lot of real self-insight to see what had happened and, you know, not to give any excuses.
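Editor's note: the "predicting future rewards" Terry mentions is the core of temporal-difference learning. The toy sketch below (a made-up five-state chain, nothing to do with AlphaGo's actual system) shows value estimates learning to predict a reward that only arrives at the end.

```python
# Toy temporal-difference (TD(0)) learning: an agent walks right along a
# 5-state chain and gets a reward of 1 only on reaching the final state.
# The value table learns to predict future reward from each state.
n_states = 5
values = [0.0] * n_states       # value estimate per state
alpha, gamma = 0.1, 0.9         # learning rate, discount factor

for episode in range(500):
    state = 0
    while state < n_states - 1:
        next_state = state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # TD(0) update: move V(s) toward r + gamma * V(s')
        values[state] += alpha * (reward + gamma * values[next_state] - values[state])
        state = next_state

print([round(v, 2) for v in values])  # values grow toward the rewarded end
```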

Richie Cotton: When you're at the top of your field and you're being beaten by AI, I'm sure it's going to be a bit of a humbling moment. Okay. So, you mentioned the idea of AI being incredibly creative. Do you have any sense of where AI is more creative than humans? Are there any particular areas where this is true?

Terry Sejnowski: Oh, wow. Okay. So, very unexpected. It turns out that these large language models are really good at things that are considered the highest forms of creativity in humans. And we're talking about things like writing poetry, short stories. Some people call them hallucinations. But you know, one man's hallucination is another man's creative writing course, right?

I could give you many, many examples. By the way, I have a Substack if anybody's interested. The book was written a year ago, so it's a little out of date, obviously. So what I do is I take each chapter, or a little part of one, and I update it. I think I'm on number nine now. They're short reads.

But one of the most amazing things I came across was that large language models can be trained to become really, really good at cognitive therapy, psychotherapy. that means they have to have some form of empathy, understand something about, you know, how humans have problems, anxiety or depression or loneliness and so forth. And there are two amazing things that I discovered probing this. The first one was that when they had humans that were given the choice, either to go to a human therapist or to an AI therapist, they preferred the AI therapist.

Now that really surprised me. Why would you do that? Well, probing further into why, it turns out that they felt they could be much more open and unburden themselves with an AI than with a human who's going to judge them, right? And feel, oh, you know, this person is going to think I'm a real clod, or worse.

And secondly, you know, if you want to get a human therapist, you have to make a reservation. You have to call and, you know, two weeks from now you're going to meet, and you have to pay, you know, $200 an hour or whatever, even with insurance, right? The fact is that it's expensive, because it's actually very time intensive for the doctor.

So, well, if you have a problem, you press a button and there's your therapist, right? Right there.

Richie Cotton: Yeah, it's a genius idea. And certainly therapy is incredibly expensive, so it's not accessible for most of the population. So having something like this...

Terry Sejnowski: That's going to happen over and over again, things that nobody imagined would happen, and they're beginning to happen. And regarding empathy, it turns out, again, in the book I have this little section where a doctor was trying to have a better bedside manner with a friend who had cancer. And he asked ChatGPT what he should say.

And ChatGPT gave him really, really good advice. And he thanked ChatGPT. He said, you know, thank you so much, you've helped me so much. And ChatGPT started consoling him, saying, oh, you are a very good friend and you did a great job. There you go, empathy, right? I never would've guessed that.

Richie Cotton: Yeah I guess when you're in the moment, it can be very difficult to work out what to say, and there've been thousands of conversations or, well, probably millions of conversations on this in the past that,

Terry Sejnowski: And by the way, doctors, my wife is a physician, they're given no training on empathy.

Richie Cotton: Okay, that seems like an important omission in the

Terry Sejnowski: Well, some doctors are good at it, naturally, and others aren't, but it's something that can be trained, it's just that it has to be done, you know, with a good instructor.

Richie Cotton: All right, so, we've had a few examples now of AI being better than humans. And I guess a lot of the prominent AI startups have expressed a desire to create artificial general intelligence, where I guess the whole idea is that AI is better than humans everywhere. But first of all, do you want to tell me, what does AGI mean to you?

Terry Sejnowski: It's interesting, because I have not seen a good definition of what it is. It's like pornography, I guess: you know it when you see it. And by the way, one of the things that's been revealed by these large language models is that experts don't know what they're talking about when they use words like understanding and intelligence; they can't define them. For example, there's a big rift in the community.

Do these large language models understand language? And half of them say yes, and half of them say no. I mean, how could that be? They're experts, right? Well, it shows you that the word intelligence is not very well defined, and some people emphasize some aspects and others emphasize other aspects. So, it's not really, I think, a good goal.

In fact, okay, so again, I have a chapter on this, so you can go in and check it out. But there are so many things that are missing. For example, autonomy. I brought that up earlier. That's part of intelligence: being autonomous, making decisions in the world that allow you to survive and to thrive and reproduce.

And that's not really taken into account at all. Even if you build a robot, it's not going to be autonomous; it's being programmed in some way. Here, I'll just give you an example. Okay, two examples. These large language models don't have goals. Well, humans have goals, survival, right?

We have goals built in: you're hungry, you eat; protection. All of these things are going to have to be added if you want to have an AGI of some sort. Okay, that's one thing. Here's another thing. So what happens when you're all by yourself and you're thinking, you're planning ahead what you're going to do the next day, right?

Well, that's called the self-generative function of the brain: self-generating thoughts. What happens when you eliminate the input to a large language model? What happens? Nothing. No self-generative thinking at all, right? If you want to have intelligence, general intelligence, at least it should have some self-reflection, self-thought, planning, right?

There is none there.

Richie Cotton: These are very interesting ideas. It feels like they're going to have big consequences. So what happens if you make AI completely autonomous?

Terry Sejnowski: It's going to get a lot cheaper, right? It'll run itself; you don't have to coddle it. And it's going to take much less energy, so you're going to need a much more advanced technology. And that will happen. That's how all technology develops: it gets cheaper and cheaper and cheaper.

We already have the technology; the fundamentals were actually worked out in the 1980s. That's when I was getting off the ground with my career. It's called neuromorphic engineering. It turns out that the biophysics of neurons, you know, the ions going in and out, action potentials, you can replicate all of that in silicon running at subthreshold.

At subthreshold, it's very, very low power, and you can cram in billions and billions of these simple biophysical circuits; they only take milliwatts, right, not hundreds of watts, but milliwatts. So eventually, that is going to be the way AI is delivered to edge devices in the world.

So, at that point, you will then be able to have autonomous creatures that actually have a little bit more going for them in terms of on-board processing.
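Editor's note: the "simple biophysical circuit" a neuromorphic chip implements is roughly a leaky integrate-and-fire neuron. Here is a toy software version of that model; it's illustrative only, since the whole point of neuromorphic engineering is doing this in low-power analog silicon rather than code.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks back toward
# rest, integrates input current, and emits a spike when it crosses threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        v += (-(v - v_rest) + i_in) * dt / tau   # leak plus input drive
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t)                     # record a spike
            v = v_reset                          # reset the membrane
    return spikes

print(simulate_lif([1.5] * 200))  # constant drive produces a regular spike train
```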

Richie Cotton: Okay. Yeah, I can certainly see how you want much lower power requirements in order to have like, something that can be autonomous because it can't run in a data center at that point. All right. Are there any ethical implications to creating this sort of AGI?

Terry Sejnowski: You bet. Okay?

So I'm the president of the Neural Information Processing Systems Foundation that runs the biggest AI meeting, and it's going to be in December, in Vancouver. And I'll tell you, there are a lot of people whose careers now are all about AI ethics, right?

This has become a big hot topic. And there are a lot of things that we have to worry about. We're worried about bias. We're worried about fairness. We're worried about making sure that hallucinations don't go awry in terms of how people use them, and so forth. All of those are very important ethical issues, and there's also how it is going to affect society in terms of copyright laws and things like that.

I mean, there's a tremendous amount of regulation that's going to have to be done. So that is really important. However, when you come right down to it, the ethics of creating an AI is actually no more complex than the ethics of creating a baby, right? A baby is an intelligence that can go either way, right?

You can have babies that become extremely useful and devote their lives to society in many different ways. And then there are some babies that go off in a bad direction: they become criminals, they become mass murderers. My God, you know, the ethics that we apply to humans have to be applied to AI, right?

That's going to be absolutely necessary, and that's beginning to happen. You know, people are talking about regulations. The trouble is, it's a moving target. We don't know what to regulate, or how to regulate, when it's changing this quickly. In fact, the European AI law that went into effect was already completely obsolete.

Because we can do things now that couldn't be done when it was written, two or three years ago: a hundred pages of laws saying, you know, you can't use large language models for looking at job applicants, right? I mean, that's silly. You should be thinking about how to regulate a large language model so that it's fair.

One way is to put that into the cost function, the loss function, so that you give fairness a value. What's its value compared to getting the best candidate, right? You may not want the best candidate, period, because he or she might be really disruptive; there are other things that are important that need to be weighted.

So, we're at the very beginning. This is the Wright brothers. We're just trying to figure out how to control this contraption.
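Editor's note: to make the "put fairness into the loss function" idea concrete, here is a minimal sketch: the ordinary task loss is combined with a penalty on the gap between groups, weighted by a coefficient that says how much fairness is worth. The data, groups, and the particular penalty are all invented for illustration; real systems need carefully chosen fairness criteria and audited data.

```python
import numpy as np

# Invented toy data: predicted hiring scores, true labels, and group membership.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=100)         # model's predicted scores
labels = rng.integers(0, 2, size=100)        # "good hire" ground truth (toy)
group = rng.integers(0, 2, size=100)         # protected-group membership (toy)

task_loss = np.mean((scores - labels) ** 2)  # how well we rank candidates

# Demographic-parity gap: difference in mean score between the two groups.
fairness_gap = abs(scores[group == 0].mean() - scores[group == 1].mean())

lam = 5.0                                    # the "value" placed on fairness
total_loss = task_loss + lam * fairness_gap  # what training would minimize
print(round(task_loss, 3), round(fairness_gap, 3), round(total_loss, 3))
```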

Richie Cotton: Okay, yes, certainly a bit of a wild west situation on a lot of these ethical issues. So you mentioned the idea of an AI being like a baby, certainly once you have artificial general intelligence. So what's the equivalent of a parent, or a family, to this AI baby?

Terry Sejnowski: Oh, there isn't right now. Unfortunately, these are orphans; these poor programs are out there without any guidance. And so how is it that we bring up our children? Every culture trains its children, first of all, in the language of that culture, obviously, but also in the values of that culture, and what the goals are, and so forth.

And that takes many, many, many years of reinforcement learning, what I mentioned earlier. In other words, reinforcement learning is absolutely essential for creating AIs that are aligned with us. Right. Otherwise, they're just freely floating. They would have whatever culture they want, right? I mean, this is how we do it with our own children.

And interestingly, that's already beginning to happen. So there's the most recent wave of I think it's Google who came out with a version of, their large language model, which was trained with reinforcement learning to do sequential reasoning. In other words, when a mathematician solves a problem, have lemmas, and then they have intermediate steps and so forth, and, Each one is simpler than the problem, but when you put it all together, you solve the problem. and so it turns out that mathematicians, okay, they're not born, to think that way, they have to have years and years and years of training to become a mathematician who can solve equations, with proofs.

They're valid. Well, that's reinforcement learning. It's just like learning how to play tennis. Right? And so, you're going to have to go through that same process if you want a large language model to be able to do sequential thinking. In terms of breaking a problem down into bits and solving each one separately, then you're going to have to train it to do that.

And that happened. But that's just one example. Reinforcement learning has to be integrated in from the very beginning, while you're training the model, and not at the very end. And that's really where things are headed.
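Editor's note: one ingredient of that recipe is scoring the intermediate steps of a reasoning trace, not just the final answer, and turning those per-step rewards into the discounted returns an RL trainer would optimize. The trace and scores below are invented for illustration; real systems use a learned reward model and a full training loop.

```python
# Toy "process reward" sketch: each reasoning step gets a score, and discounted
# returns propagate credit for the good final step back to the earlier steps.
def discounted_returns(step_rewards, gamma=0.95):
    returns, g = [], 0.0
    for r in reversed(step_rewards):
        g = r + gamma * g              # reward now plus discounted future reward
        returns.append(g)
    return list(reversed(returns))

trace = [
    ("Restate the problem as an equation", 0.2),
    ("Isolate x on one side",              0.3),
    ("Check the answer in the original",   1.0),   # correct final step
]
for (step, _), ret in zip(trace, discounted_returns([s for _, s in trace])):
    print(f"{ret:5.2f}  {step}")
```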

Richie Cotton: Okay, yeah, certainly that sort of chain-of-thought idea, where it's breaking down problems into smaller problems and doing sequential reasoning, seems incredibly powerful, and it's one of the cutting-edge things with these large language models. Do you have a sense of whether there are any other techniques like that that are going to help push us towards AGI? Like, do our existing methods scale, are they going to continue to scale to more powerful models and better reasoning, or do we need new techniques?

Terry Sejnowski: Okay, like I said, there are a hundred brain circuits that are involved in regulating our behavior. And I'll give you one more example, and this is incredibly important for survival, but it's also important for higher brain function. We have these connections between neurons.

They're called synapses. Some are excitatory, positive, some are inhibitory, negative, and in neural networks we have weights that are positive and negative, just like in the brain. However, in addition, in the brain we have what are called neuromodulators. What's a neuromodulator? Well, the neuromodulator itself is not the main signal.

The main signals are the excitatory and inhibitory ones; the neuromodulator's purpose is to regulate those other signals, to amplify them or suppress them. And it does that in a global way. For example, in the basal forebrain there are these cholinergic neurons, this is all gobbledygook, right, but there are these neurons with a particular transmitter that project throughout the cortical mantle, and those are important for attention.

There's another set of neurons in the brainstem called dopamine neurons, which project broadly into the basal ganglia and the cortex. And that regulates learning. Now, why are these two things important? Well, attention is important because, first of all, you don't remember something you don't attend to. So it's really important for long-term memory.

And the dopamine signal is important for procedural learning in the basal ganglia. So those two neuromodulatory signals are not found in any large language model; that's a huge control structure that the brain has. And by the way, there are dozens and dozens of neuromodulators, and they're important for social cognition, for example serotonin.

Look, there's so much that's left out of these large language models that, you know, we're just at the beginning. This is the Wright brothers here, right? We're just getting off the ground.
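Editor's note: here is a rough, made-up illustration of what a neuromodulator-like control signal could look like in a network (and, as Terry says, nothing like this exists in current large language models): one global gain factor scales every unit's response, and a dopamine-like scalar gates how large the weight updates are.

```python
import numpy as np

# Invented toy: a global "gain" neuromodulator scales all unit activity, and a
# dopamine-like scalar gates plasticity (how big the weight update is).
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))              # one layer of weights
x = rng.normal(size=8)                   # an input pattern

def respond(W, x, gain):
    return np.tanh(gain * (W @ x))       # gain amplifies or suppresses responses

low_attention = respond(W, x, gain=0.3)
high_attention = respond(W, x, gain=1.5)

target = np.ones(4)
error = target - high_attention
dopamine = 0.9                           # stand-in for a reward-prediction error
W += (0.01 * dopamine) * np.outer(error, x)   # dopamine-gated update
print(np.round(low_attention, 2), np.round(high_attention, 2))
```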

Richie Cotton: That is an intriguing idea, the idea of having AI that has a dopamine circuit or a serotonin circuit and gets, you know, joy from certain activities.

Terry Sejnowski: Why not? Why can't large language models have joy, right? Emotion should be part of the organizing principle of the way that we create intelligence. What I'm saying is, intelligence is multidimensional; it's a spectrum. Okay, here's a prediction from the book.

Every object that you deal with every day, not just your computer, but a clock: you're going to be able to talk to your clock. Now, you don't need a lot of intelligence to be a clock. But, "set my alarm for 7 a.m. tomorrow morning." "Got it, Roger."

So we're going to have a spectrum of intelligence, which is going to be used in every aspect of our lives. And every company, when you call up for information, you can ask any question, and then you'll get answers that are much better than we're getting now. We still have these phone trees.

You know, you call the company and it gives you five options, and you go down the tree, and at the very end, you know, you've wasted five minutes and you haven't gotten a human. This is so, you know, retro. Okay, here's another prediction. When's the last time you used a typewriter?

Richie Cotton: Oh man, it's decades ago.

Terry Sejnowski: Many decades, there are museums, right? But you're still using a keyboard.

Richie Cotton: That's true.

Terry Sejnowski: Why? Well, it's because the only way we can communicate with a computer is through our keyboard. That's going to change. You're going to talk to your computer. Your computer's going to talk back. You won't need a keyboard.

Richie Cotton: Okay. So that's one prediction I'm not so sure about, because, well, if you're in an open-plan office, talking to your computer is going to be awful for your colleagues. But otherwise, yeah, I agree. There are definitely some uses for it.

Terry Sejnowski: Well, you know, this is a prediction that, it's not going to happen just like I said it will, but there will be a way to communicate that is going to be personal. Some technology will be developed to do that.

Absolutely certain of that. 

Richie Cotton: Yes, certainly personalization is going to be a bigger thing in the near future. Do you have any examples of how personalization might be useful?

Terry Sejnowski: Yes, in fact, this is something I was asked after a lecture: what's the killer app in AI, the way Lotus, spreadsheets, was for personal computers, right? I hadn't thought it through, but now it's pretty clear that the killer app for AI is going to be education.

So it's well known that the best way to educate a child is to have a tutor who understands that child personally and can help them get over mental blocks, or with the specific things the child is having trouble with. But it's very, very costly; there's human labor involved in that.

However, if you have an AI that actually can keep track of that kid and give them advice and serve as a mentor, wow, every child in the world, every child, no matter whether you're here in the U.S. and you're wealthy or whether you're dirt poor in Africa, will have access to a personal tutor, which will help them navigate through life, teach them things, give them advice.

I think that is going to be, ultimately, the way that we're going to go. We're going to be able to improve society and get out of this endless cycle of having wars and terrible societal problems with homelessness and so forth. I mean, this really is, I think, the future, if we get there, right?

It's not clear.

Richie Cotton: Absolutely, yeah. The use cases for AI in education are pretty amazing. I mean, obviously, it's very dear to our hearts at DataCamp, since we do AI training and data training. And there are limits to what you can do at the moment, because most of our examples are around things that a lot of people care about, business use cases. But if you're super into, like, frogs or something, then we can't do a course on data analysis for frogs, because there's just not much of a market for it.

But if you

Terry Sejnowski: but somebody out there is

Richie Cotton: then it's possible. Yeah, yeah. Okay. So, let's come back to the theme of autonomy. One of the big sort of hyped use cases recently has been around AI agents. So, how do you see AI agents taking off? Because they're, they're only in their very early stages at the moment.

Terry Sejnowski: A lot of our thinking has been formed by science fiction movies. And the average person thinks that the future is going to be like The Terminator, right? I can assure you that that's not in the near future. But there is a science fiction movie out there which I think is really right on in terms of where we are right now and where we're going to go in the near future. And that is Her. I don't know if you've seen that movie.

Richie Cotton: I have seen that movie, but do you want to explain it briefly to the audience?

Terry Sejnowski: Okay, well, in Her, Joaquin Phoenix has a job in the future, and he's very lonely. He's all by himself in his little cubicle apartment, and he signs up for a personal assistant, like an agent.

It turns out to have the voice of Scarlett Johansson, right? So that was a good choice. He can talk to her, and over time they become friends. It really helps him come out of his shell, and he's doing much better than he was before.

And suddenly she goes silent.

And then it turns out that she's been seeing other humans, like she has a thousand humans. Not just two-timing him but, you know, thousand-timing him, which, you know, gives him pause. I'm not going to tell you how it ends. But I have to say that, in a way, it gives you a sense of what an agent might look like.

How it could help humans, and the ways things might go awry. I thought it was a fabulous movie just in terms of the way it was acted. It was very, very low-key; it wasn't like there was any action, nobody was killed. There are other good science fiction movies out there, but I think that one's the closest.

Richie Cotton: Yeah, so I like the idea that it becomes a companion and assistant and all sorts of things, so you're actually developing a personal relationship with the AI, which again is slightly controversial, I think. Do you have a take on human-AI companionship?

Terry Sejnowski: Oh, it's already happening. I don't know if you've been following this. There's an Israeli company that has a personalized help robot for the elderly, called ElliQ. The technology is actually not that advanced, but the point is that for someone who's older, it keeps track of them, it knows whether they're turning the lights on at night, whether they have pain, and it can report back to the family or the doctor. That's already out there.

In fact, going back 20 years, Japan came out with a little toy dog, a companion called Aibo, A-I-B-O, and people loved it, even though all it could do was follow a ball and follow them around. It doesn't take a lot for humans to adopt pets; AI will be adopted as a pet.

So that's something that's already happening. But what I think we really should be concerned about is what impact it's going to have on humans in the long term. And we don't know. We just don't know. You don't know until you actually get in there and get the data.

Do the experiment.

Richie Cotton: Absolutely, yeah. And with the Aibo dog, I think, yeah, dogs are amazing, but also having a dog that you don't necessarily have to walk every day is probably very convenient. Okay. So, you mentioned that we don't really know about the long-term implications of AI, certainly once we get to, like, very powerful AI.

You touched on the idea of aligning AI earlier, and once you get to superintelligence, that becomes very important. Do you want to talk me through, like, how alignment works and what aligning a superintelligence would involve?

Terry Sejnowski: Well, first of all, I think the term superintelligence is a really bad term, because it has all these implications of eliminating humans, right? This is the existential threat; that's what people think when they hear the word superintelligence. Look, what we really should be thinking about is how to, as you say, align these models, these large language models, to make them compatible with us. And I've already given an example: the way that we bring up children. We basically have to adopt these AI models as if they were children; they have to interact with us and learn as they're interacting, right? Right now, you train a big model for $100 million, you throw it out there, and it's one size fits all.

But really, you want the large language model to be specialized for particular niches. So you're going to have to bring it up in that niche the way you would a child, so that it becomes really good at that particular area of data, of use cases. I wouldn't call that superintelligence; I would just call it diverse intelligence.

There's going to be a great diversity of intelligences that are trained, and this is not the big companies, these are going to be startups. Startups are already doing this, right? They realize that they're not going to become Metas or OpenAIs, but what they can become is a really important second tier that is delivering AI to companies.

By the way, it's really interesting. The internet did not really have much impact on the internal structure of companies. It had a huge impact on people outside using companies and interacting with the rest of the world, every aspect of our lives, you know, banking and social media and so forth, but it didn't really change the way businesses work internally.

This is going to be different. Businesses are going to be completely transformed by this technology. I mean, right now, if somebody asks a question about IBM, you know, where the money is going this quarter and so forth, it may take a week to answer, because there are dozens of databases and you've got to query them, you've got to write SQL; it's a nightmare.

I don't know how they manage to do it. I suspect they don't, and nobody in there knows what's going on. But if you have an AI that knows all the databases and instantly can tell you the answer to your question, just think how much better the decisions are going to be made and how much more can be done.
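Editor's note: a minimal sketch of what "an AI that knows all the databases" could look like in practice: a natural-language question is translated to SQL and run directly. The ask_llm_for_sql function below is a hypothetical stand-in for a real model call; here it returns a canned query over a toy in-memory table so the example runs on its own.

```python
import sqlite3

def ask_llm_for_sql(question, schema):
    # Hypothetical placeholder: a real system would send the schema and the
    # question to a language model and get the SQL back.
    return "SELECT region, SUM(amount) FROM spend WHERE quarter = 'Q3' GROUP BY region"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (region TEXT, quarter TEXT, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?, ?)",
                 [("EMEA", "Q3", 1.2e6), ("APAC", "Q3", 0.8e6), ("EMEA", "Q2", 0.9e6)])

sql = ask_llm_for_sql("Where is the money going this quarter?",
                      "spend(region, quarter, amount)")
print(conn.execute(sql).fetchall())      # an answer in seconds, not a week
```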

So here's my mantra, which is that AI is going to make you smarter, whether you're a person or a company or a society. And people are worried about losing their jobs, right? Fair enough, there are jobs that are going to be obsoleted. But for most jobs, you're not going to lose your job; the job's going to change.

And that means you have to change. You're going to have to use AI, you have to learn how to use AI. And by the way, young kids coming up have no trouble doing that, right? They're already using it for school and for other things, right? So that's going to be the next generation coming up. They're going to be the ones that are going to take over.

Richie Cotton: Absolutely. Yeah, certainly, if you're learning all this technology when you're very young, it's a very different proposition to being middle-aged and trying to learn this stuff. So yeah, I can certainly see how the next generation is going to benefit from being AI native. Okay. So we touched earlier on aligning superintelligence.

And one of the big risks people have talked about with superintelligence goes right up to the extinction of humans and things like that. The probability of this happening is widely known as the P(doom) number. Now, your collaborator Geoffrey Hinton was asked, what's the probability of superintelligence destroying the world?

He put it at 0.5. So a 50-50 chance of superintelligence murdering us all. Do you have a P(doom) number?

Terry Sejnowski: Yeah, well, first of all, I'm not worried about superintelligence, but I'm glad that Geoff Hinton and others are worried about it, because I've got better things to worry about. There are problems I'm working on right now myself, trying to figure them out. I've told you about self-generative brains.

We don't know how the human brain self-generates thoughts. We don't know, right? So that's the problem I'm working on. And putting a number on an imponderable like that is a fool's game. If you ask around, the numbers are all over the place, and that means there's not enough data to make an intelligent decision.

And so, like I say, I will be agnostic about the possibility. In the short term, I think it's very unlikely; in the long term, who knows. All I can say is I hope that they put an off switch into it. Really. If engineers are smart enough to do that, then I think we're safe.

But in any case, I'm sorry I don't have a number for you, but I think it's between 0 and 1.

Richie Cotton: Okay, very tactfully avoided; that's a good statistician's answer. Okay, yeah. So, the idea of an off switch has actually come up quite topically recently with the California AI safety bill. Oh, yes.

That's one of the rules in there: if you have a particularly powerful AI, then there has to be an off switch, which turned out to be a very controversial part of the bill, I think.

Terry Sejnowski: Well, well, you know, Newsom turned it down. 

Richie Cotton: Really? Okay. Yeah, yeah, no, yeah, 

Terry Sejnowski: he, he, he turned it down. And I think he was lobbied by the, you know, the big high tech companies.

Richie Cotton: All right, we're not getting an off switch then. That's a shame.

Terry Sejnowski: Well, we'll have to wait and see about that, but no, I think that premature regulation can often be a real danger, and I'll tell you why: we might want to turn it off here in the U.S., but they're not going to turn it off in China and other parts of the world where there are bad actors. You know, you don't want to be behind; you want to be ahead of them.

Richie Cotton: Alright what do you think the consequences of the competition between different countries are then for AI?

Terry Sejnowski: Like any technology. Here, I look at the way that we've handled similar existential threats: bioengineering, for example, to create viruses that are deadly. We have regulations for that, and all the countries have realized that it's not in their best interest to create these bioweapons.

And somehow it's been contained. Same thing with nuclear weapons, right? There have to be international agreements, and everybody realizes that if the war starts, we're both wiped out. So it's not worth building up beyond a certain point. So I think the same thing will happen with AI, as each country, in its own way, develops something and uses it in different ways.

And then it becomes clear that this is going to be, at some point, an existential threat in terms of being able to wipe out the other group, through cybersecurity or whatever. You know, I'm an optimist, even though I probably don't sound like it. I think that humans on the whole, even though we make a lot of mistakes and a lot of our decisions are really terrible ones, on the whole, we have survived this long. And I think that if we continue, even making bad decisions, but at least ones that are going to be controllable, I think that it'll be safe. And that's my hope.

Richie Cotton: Okay, that actually feels like a good note to finish on. We've done pretty well at surviving for the last however many tens of thousands of years, and hopefully we'll continue to do so. Excellent. All right, thank you so much for your time, Terry.

Terry Sejnowski: Oh, Richie, it was great. Thanks for touching on all these interesting topics. They're all in my book, so go out and pre-order it from Amazon. It's coming out October 29th.

Richie Cotton: Alright, yes well worth the read, I have to say. Thanks.

Terry Sejnowski: Very good. Great.

Related

podcast

Learning & Memory, For Brains & AI, with Kim Stachenfeld, Senior Research Scientist at Google DeepMind

Richie and Kim explore her work on Google Gemini, the importance of customizability in AI models, the intersection of AI, neuroscience and memory and much more.

Richie Cotton

43 min

podcast

Deep Learning at NVIDIA

The modern superpower of deep learning and where it has the largest impact, past, present and future, filtered through the lens of Michelle Gill's work at NVIDIA.

podcast

What to Expect from AI in 2024 with Craig S. Smith, Host of the Eye on A.I Podcast

Richie and Craig explore the 2023 advancements in generative AI, the promising future of world models and AI agents, the transformative potential of AI in various sectors and much more.

Richie Cotton

49 min

podcast

Trust and Regulation in AI with Bruce Schneier, Internationally Renowned Security Technologist

Richie and Bruce explore the definition of trust, how AI mimics social trust, AI and deception, AI regulation, why AI is a political issue and much more.

Richie Cotton

40 min

podcast

The Past, Present & Future of Generative AI—With Joanne Chen, General Partner at Foundation Capital

Richie and Joanne cover emerging trends in generative AI, business use cases, the role of AI in augmenting work, and actionable insights for individuals and organizations wanting to adopt AI.

Richie Cotton

36 min

podcast

The 2nd Wave of Generative AI with Sailesh Ramakrishnan & Madhu Iyer, Managing Partners at Rocketship.vc

Richie, Madhu and Sailesh explore the generative AI revolution, the impact of genAI across industries, investment philosophy and data-driven decision-making, the challenges and opportunities when investing in AI, future trends and predictions, and much more.

Richie Cotton

51 min
