FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

TRANSCRIPT. AI is already here. We are now able to get machines to do things that look like what humans can do. You can ask them anything and they’ll give you a very intelligent answer. It might be that just a few more changes to the AI systems and they would suddenly surpass us in all aspects. And we’d then see we weren’t quite as complicated and sophisticated and special as we thought we were. We don’t know if it’s going to be amazing or problematic. People call it a wicked problem. That’s an actual technical term: you try to interfere with bits of it and you find there are unexpected consequences. Everybody is a little late to the party when it comes to artificial intelligence. We’ve got a lot of people having meetings and not a lot of things happening on the other side of those meetings. So, what is the singularity? Well, it depends on who you ask. The term singularity comes from physics. It’s a point in time we can’t see beyond. You know, there’s a black hole, but we don’t know what’s happening inside the black hole. The first time I heard about the technological singularity was from Vernor Vinge, the science fiction author. He said, “Look, singularities are these things through which we cannot predict the future. And once AI becomes more intelligent than humans, I don’t know what’s going to happen.” Certainly, if you go out to 2029, these computers will know everything that human beings know. We’re going to expand that at an unprecedented rate in terms of scientific progress. So by 2045 we’ll expand what we know at least a million times, which is quite hard to understand. That’s why we call it the singularity. Is AI going to spin out of control and, you know, take all of our jobs and then murder us in our sleep? Probably not. But, you know, AI doesn’t have to kill us in order to make life really, really difficult. There is no simple solution to the challenges of AI. We’ve already seen many bad examples.
The systems running the National Health Service were infected by a piece of ransomware, and it ended up causing havoc. In the future, there may be similar parts of our infrastructure that are governed by IT, and if we lose control of that, then it could be goodbye. I guess the concern is that if it sees us as a threat in some way or other, then it will prevent us from being a threat. If it sees us as being unimportant and needs to use the resources of the planet or the sun, then it will probably do that, without real concern for humanity. So, it’s basically our job to make sure that we have machinery that we can keep control of. And any society that doesn’t do that is not going to be the society that wins. You need to agree the frameworks. What kinds of things are you allowed to do with these AIs? We have lots of rules in other parts of life, so we need more of the Big Tech companies, with more support from academia and independent analysts, to prioritise these safety issues, including, this is a significant one, an off switch. If we decide that the system is behaving strangely, can we terminate it? It’s not, sort of, an alien invasion of intelligent machines from Mars that’s come to compete with us. I mean, we’ve created them. And it’s not like we don’t have regulation. Take medicine: it’s filled with regulation. You can’t just put something out and expect people to use it. It’s got to go through all kinds of tests and so on. So, I can’t just take a computer and create something and then expect people to use it. It has to go through regulation. We’re constantly having to protect ourselves. That’s true in medicine. It’s true in every different area. I am really hopeful about humanity. And I think that if we apply AI in the right way across our organisations, we can free people up from doing mundane tasks to get them to do more interesting, creative things.
So, the thing that gives me hope is that we can choose to make better decisions going forward. But that also means that the onus is on us. We are the ones that have to make the good decisions, or we’re all going to suffer the consequences. Well, I believe in the concept of another kind of singularity. It’s called the economic singularity. Fairly soon there won’t be any jobs, or there will be very few jobs that people are paid to do, because anything that we might be paid to do will be done better and more effectively by robots, AI and automation of various sorts. More and more of us will be in this situation where we can’t earn a living by our labour, our intelligence, our creativity. And that’s going to require a big transformation of the social contract. A lot of people will find they have skills that are more valuable than they were before. And a lot of people will find that the skills that they’ve spent decades acquiring are no longer very valuable. So, there will be disruption at the family level. And then the question is, do we have sufficient social will to make sure that our societies and our governments are supporting those families and individuals so that they can get through those hurdles? It’s not us versus the computers. It’s us with the computers. We’ve already replaced all human employment several times. A couple of hundred years ago, we had the Luddite movement. If you were spinning cotton, you lost your job because a machine could do it. And ultimately machines came along that could do everything that people could do, and all those jobs went away. But then we created new jobs with the machines. We’re going to continue to do that. It’s going to be disruptive. You’re going to be able to do things that human beings have never done before, at a very high speed. And that’s going to be thrilling, and it’s really going to be part of who we are. I actually think that creativity will become the premium.
The way that we differentiate ourselves is through creativity. I think that we should be focusing our energies on creating what is called a world of abundance, a world where all of the goods that we need to survive and thrive, our food, our healthcare, our education and so on, are essentially free. The vision of people like Ray Kurzweil and others is that the new species that takes over will be us, enhanced. We in some sense will merge with some aspects of AI and we will become transhuman. Smarter. Kinder. More collaborative. Less prone to egotism. Less prone to cognitive stupidity. You know, there are a lot of clever but stupid people in the world. You know, we all do stupid things despite being clever. So, let’s have that fixed in our brains. We’re going to do things we never could do before. It’s going to have a tremendous impact on the way we live. I believe we’ll overcome cancer and other diseases. Take the Moderna vaccine. That was created by a computer in two days. We say, “Oh, that’s artificial intelligence, it’s not real.” I mean, it’s definitely real. When we look at the singularity, humans tend to have a negative perspective. But again, the whole point is that we don’t know what happens beyond it. And I’d like to think that over the next ten or 20 years, AI is going to free more and more people up to allow them to contribute in ways that they haven’t been able to contribute. And I suspect that that will take humanity to another level.