What History Tells Us About the Future of AI with Verity Harding, Author of AI Needs You
Verity Harding is a globally recognised leader at the intersection of technology, politics and public policy. She is Founder of Formation Advisory Ltd, a bespoke technology consultancy firm, and Director of the AI & Geopolitics Project at Cambridge University's Bennett Institute for Public Policy. Her debut book ‘AI Needs You’ was published by Princeton University Press in March 2024.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
The main thing I want people to learn is to feel that even if they're not somebody with deep technical expertise when it comes to AI, it doesn't mean that their opinion about AI doesn't matter and isn't valid. Actually, it is incredibly important. And all of society will shape how AI ultimately emerges, in how we use it as much as in how we build it. And so that, I think, encourages us all to think quite carefully about what type of society we want to live in in future.
We're having exactly the kinds of discussions you saw around the biotech explosion in the 70s and 80s, with people questioning what it means to be human, these deep philosophical questions. What I take great heart from is that we can, if we're thoughtful, regulate appropriately and listen to and trust people's concerns. Hopefully AI will also just become a normal, accepted part of life.
Key Takeaways
Embrace thoughtful regulation as a means to foster innovation; setting clear ethical and safety standards can build public trust and support for AI technologies.
Consider the benefits of open source AI models for promoting innovation and scrutiny, while also recognizing the value of proprietary models for maintaining control and ensuring security.
Engage with experts from diverse fields, including ethics, law, and social sciences, to create comprehensive AI policies and frameworks that address various societal impacts.
Transcript
Richie Cotton: Welcome to DataFramed. This is Richie. Conversations about the future of AI tend to be rather divisive, with opinions ranging from artificial superintelligence arriving to save the world to it eradicating humanity. There's a sense that the latter is undesirable, and that something ought to be done to prevent it.
To get from that vague feeling to practical steps for shaping the future of AI, we can draw on historical lessons. As luck would have it, Verity Harding, the founder of the AI consultancy Formation Advisory, has written a book on this very subject. In AI Needs You, she explores 20th-century endeavors like the race to land on the moon, the development of in vitro fertilization, and the development of the internet.
These things were equally divisive at the time, and the stories of how they came to be accepted provide useful insights into the current situation with AI. Today, we're going to have a discussion along similar lines. In addition to her consultancy work, Verity is also Director of the AI & Geopolitics Project at Cambridge University's Bennett Institute for Public Policy, a board member of the UK Cabinet Office Digital Advisory Board, and was previously a visiting fellow at the University of Cambridge. So, let's hear her stories. Hi Verity, thank you for joining me on the show.
Verity Harding: Hi, Richie. Thanks for having me.
Richie Cotton: So just to begin with, why is history important for people who are working in AI?
Verity Harding: Well, I think it's particularly important if you're trying to think carefully about how AI is integrated into society, how it's regulated, how it's governed, how societies will react to it. I think you can learn a huge amount from looking at historical examples of transformative technology and how societies reacted to and worked with those.
I think that brings a lot of insight and helps maybe avoid past mistakes, and maybe emulate some things that people who have come before have done well.
Richie Cotton: Certainly learning from past mistakes seems like a very important thing to do. And your book, of course, is filled with examples of historical occasions where there have been technological breakthroughs. One of the things you talk about in your book is the space race. I feel like this is remembered as this big occasion where lots of different people and teams from across the US sort of came together to get the first man on the moon, and it was all about the goal of advancing science. So is that a rose-tinted view of what happened, or is it accurate?
Verity Harding: It's perhaps slightly rose-tinted, but I think that's part of what's so interesting about the story. It really is an incredible technological achievement, and in that way there's no rose-tinting about it. You know, it's an amazing feat to have set this ambition and then been able to fulfill it, which was by no means certain. But I think it's also, and this is what I write about particularly in my chapter on space in the book.
It's also an incredible feat of diplomatic and legal innovation as well as technical innovation. And what I'm super interested in is the politics behind these stories. I think that technology is so deeply political; we don't often think about it that way, but if you think about the choices that are made, the trade-offs, what gets funded, what doesn't, who works on what, those are all pretty political questions.
And in the case of the space race, Kennedy wasn't actually that interested in space in and of itself; he just wanted to find a platform, a sort of competitive space in which he thought the United States could beat the Soviet Union. And that was for geopolitical goals. He felt that if he could show off the might and the talent and the ability of free science in the United States, that might encourage other non-aligned countries, and people considering the two models, to come around more towards the US's way of thinking.
And also he was thinking about war. I mean, this was the Cold War. It was not very long since the Second World War. And it was a very, very dangerous time for the entire planet, because of these two nuclear powers and this very real threat of nuclear war. So the capabilities put into space technology were often about spy satellites, intercontinental ballistic missiles, and these kinds of things.
So what's interesting then is that we now look back on space as this incredible moment of unity and scientific achievement, but it's based in these very political, maybe sometimes cynical, nationally interested decisions. But I think what that teaches us for AI is that you can have something that you're concerned about from your own self-interest, but you can make decisions that try to encourage people and inspire people and uplift people, rather than use that competition as something to divide people and create antagonism and tension.
People say now that, well, you know, you can't possibly have cooperation, it's too tense and uncertain a time geopolitically. But of course you couldn't get something more tense than the height of the Cold War. And yet against that backdrop, we were able to see the United Nations Outer Space Treaty of 1967, which, as I write in the book, legally determined that when they did finally set foot on the moon a couple of years later, they did so first and foremost as representatives of humankind, and of their nation state second.
That UN treaty determined that space was the province of all mankind, and not just something for whoever got to the moon first. They may have planted a flag on it, but they didn't own the moon, and they certainly didn't plant nuclear weapons on it, pointing down at their adversaries. So these are decisions that politicians can make, builders can make, that society can make.
The space race was actually quite unpopular for a while, and that caused some real tension over the funding for it. So there are just these fascinating aspects, I think, of these stories that are, if not under-researched, certainly underreported, and that people are less aware of. And I think that leads us, perhaps not necessarily to rose-tinted glasses, but to miss some really key parts of the puzzle.
Richie Cotton: It's kind of fascinating, the idea that Kennedy didn't really care about space itself. It was all about the geopolitics and the military aspects of this, and the science was just a sort of side effect.
Verity Harding: Yeah, and when it came to science, he was most interested in desalination. If he could get drinking water from seawater, that was actually the kind of scientific passion he was interested in. But, you know, you couldn't afford to do both. Not necessarily in terms of funding, although that was a big part of it.
As I say, it cost a lot of money to get to the moon, and Congress often weren't happy, and citizens often weren't happy with that. But also, you know, political time and attention is a scarce resource. And there's only so much effort he could put into corralling and using his kind of dynamism and his magnetic leadership powers to encourage people towards a goal.
So in the end, for geopolitical goals, he picked space. But in the book, I quote these amazing recordings we have of him talking in the Oval Office to his head of NASA, where he says, I'm not that interested in space. Why are we spending all this money on space rather than, maybe, cancer research, for example? The reason we're doing it is to beat the Soviets. Which really shows, I think, why this was done.
But that doesn't mean that it wasn't also a really inspiring, incredible technology. I think you can do both if you think carefully about it.
Richie Cotton: And for people who are involved in either creating AI or maybe making policy decisions about AI, how should they try to replicate those positive effects, where you have that inspiring moment while avoiding some of the thornier issues around military uses and things like that?
Verity Harding: Well, I mean, look, AI is going to be used in the military. It already is. So I think the people building AI need to think carefully about just what the purpose is of what they're building and why they're doing it. And it may be that that's what they want to work on. But if they're trying to aim for an inspiring moment that gets people excited, and attempts to uplift and support the most people they can around the planet, then they should be thinking carefully about the purpose of what they're building.
So that really is a decision that should be made at the start, I think. And I write about this in the book in terms of: what's your purpose? Why are you doing that? I mean, my former employer, for example, DeepMind; the founder and CEO there feels very strongly about AI for science, AI to help scientists and to turbocharge science, which might help the planet with some really tricky problems.
I think that's a really noble way of looking at AI. Can we use AI to help with the climate crisis? Can we use it to help do advanced diagnostics and catch diseases sooner? These are incredible things. So it just really comes down to intentionality. You know, do you know why you're building what you're building?
Richie Cotton: Okay. Having a reason for why you're doing something, that sounds like an excellent idea, maybe often overlooked.
Verity Harding: Sometimes it is. Or, you know, sometimes the reason is just to make money, and there's nothing wrong with making money. But if you want to build something inspiring, then starting from what kind of society you want to live in, what society you want to see in the future, what you're trying to build, that will really help.
Richie Cotton: You mentioned that the whole space race took place against the backdrop of the Cold War. I think with AI there have been a few people claiming that AI could cause huge problems, right the way up to things like extinction. So is there a parallel between this sort of fear of AI and the fear of nuclear war back in the 60s?
Verity Harding: I think it's comparable in terms of the level of discussion and hype around it, but I'm not sure it's comparable in reality. Obviously, in reality, during the Cold War there were huge nuclear weapons arsenals, and during the Cuban Missile Crisis we were very close to nuclear war.
And that's part of what chastened and humbled Kennedy and led him, although not many people know this, to go into the UN and actually suggest that maybe there should be a joint moon mission, not just something that the US did on its own. And I write in the book about why that's a really incredible moment, and how it leads to this United Nations treaty that we get later, which determines that today space is the province of all mankind, and we have things like the International Space Station, and nobody owns the moon. We were a lot closer to nuclear war then than I think we are to some of the more extreme suggestions at the moment that AI might cause human extinction.
Now, to be as generous as possible to those arguments, I think some of the people who are advocating that are concerned about bad actors using AI in some way to interfere, e.g. with critical infrastructure and things like that. And so, of course, I think cybersecurity, and making sure that we're resilient and considering those issues, are really, really important.
But I don't think it's comparable in terms of, you know, we have these weapons that can potentially wipe out all of humanity. I don't subscribe to the view that AI is going to get smarter than us and overtake us and somehow kill us all. That's just not something that I'm particularly focused on.
Richie Cotton: Okay, that's good. So Terminator is not happening in the future then.
Verity Harding: I don't think so.
Richie Cotton: Good, good. Okay, so another story from your book is around in vitro fertilization, and this seems like a fairly uncontroversial technology now, but when it was first introduced in the 1970s, I believe, there were a lot of worries around this.
So can you just talk me through what were people worried about back then?
Verity Harding: Yeah, it's a fascinating example that we don't often think of, but it's much more relevant to AI than the atomic bomb analogy. This is a technology which emerged with a bang in 1978, when the first baby was born using IVF techniques in the UK. And at first people were really excited; this was a really cool and exciting new scientific capability and technological marvel.
It was especially exciting for people in the UK because it was a kind of UK scientific achievement. But quite soon afterwards, you see that people were getting concerned, not just because of IVF itself, although that was one of the concerns, but also because of some of the techniques that enabled IVF, such as human embryology research, plus some concerns about the growing ability to edit genetic sequences.
And people started questioning: what does this mean? Is this natural? What does it mean to be human? What does this say about the family and our future? And it's hard to believe now that something so normal, so totally accepted and standard a part of our society, was ever controversial. Which I think really gives us pause for thought when it comes to AI, because at the moment we're having exactly the kinds of discussions you saw around the biotech explosion in the 70s and 80s, with people questioning what it means to be human, these deep philosophical questions.
And what I take great heart from is that, you know, we can, if we're thoughtful, regulate appropriately and listen to and trust people's concerns, and hopefully AI will also just become a normal, accepted part of life. I mean, it already is in many ways, right? Of course, we have AI around us all the time, recommending us something to watch or a song to listen to, or filtering out spam from our inbox, or whatever it may be. But these more advanced systems that people are so scared of and so frightened of right now will, I think, probably just gradually, over time, become something that's much more normal and accepted.
The key lesson from the IVF debate and the human embryology discussion that happened during this time, I think, and this is certainly what I write about and what I say in the book, is that people's concerns were respected, they were listened to, and they were acted upon.
So the government said, look, we can't regulate this technology, it's so new, we don't know yet what it's going to be and what it's going to mean. But what we can do is ask a group of experts to look at it. And so they set up this Warnock Commission, which was a kind of interdisciplinary commission, independent of government but funded by government, to look at all the issues that had been thrown up by this new technology.
And it was led not by a biologist, but by a philosopher. And she had some public policy experience, having led a commission on educational issues before, and she built this very unique group of different disciplines. There were religious scholars on there, legal scholars, social workers, biologists, of course, and they did a huge consultative process up and down the country, meeting people, hearing from expert witnesses. And they looked at this and wrote a report that said, look, people are concerned about these things, particularly what it means to do research on embryos. And so they suggested something called the 14-day rule.
And this was them saying, look, we think people need to feel like there's a limit. They just don't want to see this stuff get out of control. And so we're going to introduce this limit: you can do research on human embryos, but only up to 14 days, and after that, no more. And this was actually, when it was suggested, really controversial in the scientific community, who said, well, there's no scientific reason for this.
I mean, why not 15 days? Why not 16 days? Which, by the way, is exactly the type of conversation I can imagine happening now when anybody suggests any regulation around technology, and I have indeed heard lots of those kinds of very rationalist pushbacks. But what Baroness Mary Warnock, who led the commission, said was: people in a democracy have the right to feel that their voices are heard.
And if we put a limit on this, that will help innovation flourish. And indeed it did in the UK. This was eventually adopted, and the scientists who opposed it soon became supportive when they realized that the alternative might be human embryology research being banned altogether, which was a very real threat. So they agreed to limits to sort of see off a ban, and indeed the UK has a flourishing multibillion-pound life sciences sector now.
So I think what this example shows us, amongst many other things, including how to run a process that involves a wider group of diverse voices and participation in the discussions, is that some limited guardrails can in fact allow innovation to flourish. And not only can allow, but do allow, in a way that, if you don't put any guardrails on and people really worry, they might over-regulate, overreact, or even, just through complete distrust and sort of horror, recoil from using these new technologies at all.
Richie Cotton: That's absolutely fascinating, because I think a lot of people, if they're building something, their gut reaction is going to be: okay, we don't want any regulation at all, it's going to kill innovation. But actually having those kinds of guardrails in place, and that consultative process, meant that it actually increased what was possible.
Verity Harding: Yes, yeah. I think the example I talk about in the book is around live facial recognition. Are we comfortable with things like live facial recognition? Are there areas where we actually think maybe there should be limits, and we should say, maybe not here, actually, we don't want AI in this particular arena, maybe because it's very sensitive or something?
It doesn't mean it has to be forever, but you've got this happening in the European Union with the AI Act: they're saying there are certain areas that are just so risky, they're kind of unacceptable. I think they call it unacceptable risk. Does that help wider society look and think, well, the people who are supposed to be holding this stuff accountable and scrutinizing it and regulating it seem to be on top of this now, so I feel like I can trust it and therefore lean into it more?
And it gives businesses greater clarity about the sort of field on which they can operate.
Richie Cotton: Yeah, this is really interesting stuff. Related to this, the process that the Warnock Commission went through lasted, I think, months or maybe years, going from, hey, the public has a concern about this, to things appearing in legislation.
Verity Harding: Yes, yeah.
Richie Cotton: Again, it sounds like a really long bureaucratic process, but it had a good outcome. Can you talk me through that? Is a long bureaucratic process a good idea?
Verity Harding: It's funny, we now have this obsession with everything happening so quickly, and people say, oh, you know, the politicians can't possibly keep up with AI. But sometimes it's going to be necessary, and indeed a good thing, to have a slow process where we don't overreact too quickly and maybe respond later.
In the case of the Warnock Commission: the first IVF baby is born in 1978. The Warnock Commission is not set up until 1984. It reports in 1985. And because of huge pushback to start with, and political decisions and things getting in the way, like elections and people feeling it was too controversial, the actual legislation that sets up the independent regulatory body we have now in the UK, called the Human Fertilisation and Embryology Authority, or Human Embryology and Fertilisation Authority, the E and the F, I can't remember exactly which way around they are now, wasn't passed into law until 1990. But I don't think any of us looking back think, oh gosh, I can't believe they didn't get that done quicker. You know, sometimes, understandably perhaps, and this is again to get back to your first question of why it's important to look at history, we feel that if everything isn't done right away and right now, it means it's going to be useless. But that's really not the case at all.
The sort of arc of history is long, and technology sometimes takes a long time to fully, kind of, fly right. I mean, just look at the internet; we're still at the early, early days of the internet really, in terms of the whole of history and where I'm sure it will go. So I think there's time, and sometimes taking that time to be consultative and deliberative can actually be a better thing.
Richie Cotton: Yeah, I'd rather politicians took their time rather than rush through these decisions, I think.
Verity Harding: Right, right.
Richie Cotton: All right, so,
Verity Harding: They get it in the neck either way, politicians, I think. They have a hard time if they don't regulate, and they get a hard time if they do. And my background is in politics and in technology, so I kind of speak both those languages, and I have great sympathy for the fact that the two kind of misunderstand each other.
But one thing that annoys me often is technology people, and I've been guilty of this too, saying politicians don't understand the technology. But, you know, technology people need to understand politics too, and often don't understand it very well at all, from my experience.
So sometimes I feel like politicians get criticized because they haven't regulated quickly enough, and then they get criticized because they are considering regulation. And it can be quite a difficult balance if you're trying to get that right: how to ensure that we can grow and innovation can flourish, but also curb societal harms and ensure societal trust in the process.
Richie Cotton: Since there are a lot of technology people listening to this, what do you think technology people should know about politics?
Verity Harding: Well, it's very difficult, you know. It's often a bunch of people also trying their best. I mean, I've been working in technology for the last decade or more now, and I know that a lot of technologists are people trying hard, doing their best to build something that they think is going to really make a difference and improve people's lives.
I'd love for technologists to understand that that's true of a lot of politicians too. There are definitely some bad ones; I'm sure we can all think of some people that maybe aren't in it for the right reasons and don't do a lot of good. But behind the scenes in the UK, at least, which is what I can speak to most eloquently, though of course beyond just the UK too, there are a lot of people trying their hardest to build something, whether it be policy, regulation, or just the type of future that they think will make the world better. And they're hugely well intentioned, and when they don't understand something about technology, you know, they want to.
And this is why I left politics to go into tech: because I felt like the two communities weren't talking to each other, and it's really important that they do. I wish the two communities would talk more, and with more understanding of where each other is coming from.
I remember Obama saying in 2016, when he guest edited an edition of Wired magazine, either in that interview or at the event that he did around it, that he gets so many technologists coming to him saying, why aren't you using this cool new technology to deliver better public services and embrace tech more in government? And his point to them was: well, I'd love to, but you have to remember that my users are sometimes very vulnerable people.
I'm administering benefits to people and introducing legislation and products that really deeply affect people's lives in a way they don't have much recourse to. It's not like they can just choose another service. So I have to be really, really careful. And so I think a bit more patience and understanding from the technology community towards politicians would be a good thing.
Richie Cotton: Okay, patience seems
Verity Harding: That's not to say, by the way, that the politics community doesn't have a lot that it has to understand and learn from technologists as well, but that was not the question.
Richie Cotton: Yeah, I certainly think patience is a good virtue for a lot of people in a lot of situations. So that seems useful. All right, I'd also like to talk about the internet. This started off as a military technology back in the late sixties, and it's come through a lot of evolutions since, so now it's basically used for everything.
Do you want to talk me through the history of this a little bit and what you think the different phases of the internet are?
Verity Harding: Yes. So, of course, as we've discussed, the first example in my book is the space race, and the second one is IVF and human embryology. And then both the third and fourth chapters are about the internet, but I've split it into pre- and post-9/11. Because if you need an example of how political technology is, or how much politics affects how technology develops, the internet is a really fascinating one.
So I knew the story pretty well, as I'm sure most of your listeners do, about the internet emerging from ARPA, which is now DARPA, the Advanced Research Projects Agency that Eisenhower actually set up in response to Sputnik to discover the weapons of the future. And they, amongst many other things, come up with the internet, or certainly come up with the funding that helps produce the early internet. But what I think is less understood and discussed is the process by which that public network infrastructure became privatized. And that happens across the 80s, which coincides, of course, with a time when politics, both in the US and in the UK and elsewhere, was embracing the kind of deregulation and privatization approach that was quite in fashion at the time.
And it was not consulted on widely. It was not a decision that was made by Congress. It was something that sort of happened, and at the time it was extremely controversial for how little people were consulted about the process. To fast forward a little bit, we get to a situation where, because of this privatization, there's quite a lot of controversy around different aspects of internet governance.
One of those being the domain name system, which is run basically by one man at, I think, the University of California, unless I'm misremembering. But he runs the whole of the domain name system, including deciding, for example, who gets the country-level domains, and who he decides to work with to make those kinds of huge formative decisions.
By this point, business is quite heavily involved in the internet, and since the Clinton and Gore government, now in post after the 1992 election, are very keen that science and business unite to project American power outwards, the government steps in to try to bring some order to the chaos, essentially.
At the same time, they don't want to step on the toes of the kind of organic internet community that has built this thing and has built a lot of interesting rules and regulations around it themselves. You know, it has been self-governing to a large degree. And they come to this compromise position: to introduce ICANN, or the Internet Corporation for Assigned Names and Numbers, which we have today.
I think this is such a fascinating body. I think it's fascinating because it's multi-stakeholder, which means it's not just technologists that were involved; it's lots of different groups of people. In some cases, just anybody that's interested can actually turn up to these meetings and have an input.
It's multi-stakeholder, it's voluntary, it's a non-profit corporation, and it oversees some of the most critical infrastructure that our entire society is built upon today. The government's role is incredibly interesting: this kind of light-touch but critical guiding hand in terms of getting to that place.
And then when it's finally set up in 1998, the government of the time is the Clinton-Gore government; of course, Al Gore had been very interested and involved as a politician in the internet since the early days, and I actually tell the story of the internet through his life story in the book, because he has a fascinating journey himself. And in 1998, they say, you know, we'll set this thing up, and within two years there'll be no more role for the US government. By the way, at this point, they have moved the internet into the Department of Commerce rather than the Department of Defense. So you can see how these political decisions are affecting how the internet starts to be shaped and who it starts to be shaped by.
It's now no longer seen as this military tool, if it ever was seen like that, but actually as a tool of economic power. But in the year 2000, Gore loses the election to Bush, and very quickly afterwards we have 9/11, and the George W. Bush administration says, oh no, no, we're not doing this transition out of American control that was promised in the year 2000.
No way are we giving up the kind of role that we have here in internet governance. It takes a further 16 years; it's not until 2016 that this change is finally made. And by then, it has to be done quite quickly and very sensitively to avoid a potential breakup and balkanization of the internet entirely, which was spurred on in large part by some of the actions of that Bush government after 9/11, including the, some may say excessive and certainly controversial, uses of internet surveillance.
So the internet and the politics of the internet are fascinating. And again, with lots of lessons for us with AI today, in terms of the importance of a multi-stakeholder model, the important role of government, and how these political decisions will affect things. And also, I think, in terms of really ensuring that we are leading by example in our use of technology, so as not to give any potential adversaries any weapons to use against us through our own behavior.
Richie Cotton: I do find it amazing that all the domain name registration that ICANN does now was originally a single person just controlling everything.
Verity Harding: Yeah, originally it was just a text file, which is just astonishing.
Richie Cotton: That is absolutely fascinating, and it seems like this has been a common theme with the internet: it was originally all about decentralizing systems, and then you have that centralization, with a single person being the point of failure for the domain name system. Are there any lessons for AI in terms of this sort of back and forth between centralizing and decentralizing power?
Verity Harding: Look, the internet is this open network architecture, and the decentralized nature of that makes it this incredible opportunity for people to build upon, including, of course, Tim Berners-Lee building the World Wide Web on top of the open protocols that Vint Cerf and others came up with.
So I think that's definitely bleeding across into AI at the moment, where there's a big discussion around open source models versus closed, proprietary models. I mean, I know people very well on both sides of that debate, and I think they're often coming at it from a very genuinely held, authentic belief that theirs is the right approach.
I wouldn't want to draw too many lessons from the uniqueness of that open architecture across to AI, but of course you can argue that if you make your foundation model open source and people can build upon it, you might get more exciting and interesting innovation.
And I suppose there is an argument that that openness breeds a level of scrutiny and accountability that ultimately ends up strengthening the technology itself, in a way that certainly leads to more trust, because people can get their hands on it and take a look at it themselves.
So I think there may be some lessons for people considering open versus closed models to look at in terms of that decentralization.
Richie Cotton: You mentioned the idea of surveillance, and I think this is one of the cases where, I mean, I like a good conspiracy theory when they're all silly and over the top, but this is one case where even the conspiracy theorists didn't realize how bad the surveillance going on was, because there was the case of Edward Snowden revealing how much the US government was spying on people back in the early 2000s. Do you want to talk me through how this came about, and are there any lessons here for AI?
Verity Harding: Well, the reason I wrote about that in the book is because it's so relevant to what happens with the internet. The fourth chapter, which is about the internet post-9/11, opens with this meeting that happened in Dubai in 2012, when there was a proposal from a number of countries essentially to disband ICANN and move to a completely different model, which, as I say in the book, would have sort of broken the internet.
And I interview Larry Strickling, who was an American official at the time and who was responsible really for trying to save ICANN and the multi-stakeholder model and keep the internet open. The reason that is linked is because of the kind of shock and horror at quite how deep and broad US, and to be clear, UK, surveillance had gotten by that point. People weren't shocked that they were spying, you know, all nations spy, but the details of this became public, and, as I say, the breadth and depth of it was kind of surprising to people. And that didn't just upset adversaries or people that you might expect the UK and the US to be spying on, like Russia; it infuriated allies, like the then German Chancellor Angela Merkel, who I think said something like, friends shouldn't spy on each other, this kind of thing.
So it became something that was very difficult for the US and the UK in their geopolitical conversations and their diplomatic conversations. It put them on the back foot, and they lost some of that moral high ground. And it meant that there was potentially more support for disbanding ICANN, which still hadn't gone through this transition that was promised in 1998.
People were suspicious: well, why is the US government hanging onto it? Now, if you ask people like Larry, he'll say, well, actually that role of the US was not that significant. It was kind of a unique leadership role, but it wasn't control over anything. But that's, of course, not how it was seen.
You get the benefit of the doubt until people find out that you maybe have been abusing it. And in the end it was the Obama government who were trying to repair relations, and that change in political leadership was helpful in bringing other nations around to keeping ICANN
and leading the US through that transition process. The end of that chapter is the Obama administration, Larry primarily, trying to get this transition through, and people like Donald Trump and Ted Cruz saying, you know, this is anti-American, what you're doing. And you could imagine, if the Obama government hadn't managed to make that happen before the election, that the Trump administration surely wouldn't have let it happen, and what that might have meant for the internet that we have today. So the reason I discussed the surveillance issues in there is just to remind us, as democratic nations that hold ourselves to a set of values, that we need to think about that when it comes to AI. If other nations see us using AI primarily to increase the level of surveillance, as I said, live facial recognition or more intrusive methods, then why wouldn't they do that?
And why wouldn't it lead them to distrust our motives and what we're doing? Whereas if we can show an AI that's the best of us, show that AI can do these really incredibly positive things around health and wellbeing and climate and so on, as I've said, then that might encourage people a bit more towards thinking that maybe the democratic model for AI is more appealing.
And I think that just benefits us in terms of realpolitik, but also in terms of our own societies and the type of society that I think we all want to live in.
Richie Cotton: It does sound like a lot of cooperation is going to be needed across borders, just to make sure these good things happen. So just to wrap up, since the whole point of this is about shaping the future of AI, if people want to get involved, what can they do?
Verity Harding: Well, you know, there's a bunch of different things, depending on your level of interest and time, how involved you want to be, and what you're already doing. I mean, if you're already building AI, then think, read, learn from the past, read the book, and understand how this technology is not built in a vacuum but is built in a context: the context of human history and politics and societal norms and values. What you're building will be shaped by those and is already being shaped by those. That's really empowering, because if you think that these are human decisions that we're making that guide the future of technology, then that gives you a lot of potential power: what decisions are you making?
Who is this technology for? Who are you building for? What are you building for? Why are you doing it?
If you're outside of AI but you're maybe concerned about certain aspects of it, there are lots of different things you can do. You can get involved with your union, or you can write to your democratically elected representative to say, these are my concerns, what are you doing about it, or what are you going to do about it? You can put your hand up inside your company to say that you want to get involved in the AI transition; you know, is the company thinking about how it's consulting? There are lots of different ways.
But primarily, the main thing I want people to take away from this is to feel that even if they're not somebody with deep technical expertise when it comes to AI, it doesn't mean that their opinion about AI doesn't matter and isn't valid. Actually, it is incredibly important. And all of society will shape how AI ultimately emerges, in how we use it as much as in how we build it. And so that, I think, encourages us all to think quite carefully about what type of society we want to live in in future, and how AI is going to contribute to us getting there more quickly, hopefully, or, in other ways, maybe prevent us. And I think we want to guard against the latter and really encourage the former. And that has to be something that everybody's involved in.
Richie Cotton: That's a very positive message: everybody should have some kind of say in how AI is used, and there are lots of ways to get involved. All right, thank you very much for your time, Verity.
Verity Harding: Thanks for having me. Thanks so much.