FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

ZANNY MINTON BEDDOES. If you were ever in that room and you thought to yourself, this is getting dangerous and this could actually have consequences that I would not want upon the world, would you then shout “stop” and would you stop?

SAM ALTMAN. There’s no one big magic red button we have that blows up the data centre, which I think some people sort of assume exists. It’s not this binary go/stop decision. It is the many little decisions along the way: allow this, don’t allow this; how to predict what the risks of the future are going to be; how to mitigate those; set this new value here, things like that.

TRANSCRIPT

These are two of the most important people shaping the future of artificial intelligence: Sam Altman, the CEO of OpenAI, the startup behind ChatGPT, and Satya Nadella, the CEO of Microsoft, OpenAI’s biggest investor.

They spoke to The Economist’s editor-in-chief about what the future of AI really looks like.

What’s next for ChatGPT?

ZANNY MINTON BEDDOES. Sam, let’s start with you. What are the most important capabilities that ChatGPT will develop in the next year?

SAM ALTMAN. The world had like a two-week freakout with GPT-4, right? “This changes everything, AGI is coming tomorrow, there are no jobs by the end of the year.” And now people are like, “Why is it so slow?” And I love that. I think that’s a great thing about the human spirit, that we always want more and better.

SATYA NADELLA. We’ve not had this where there has been some general-purpose technology whose diffusion happened instantaneously everywhere. In any place, health care and education is most of the government spend. You now have, right, the ability to give every student and every citizen of the globe better health advice and a better personalised tutor.

SAM ALTMAN. You know, I often think one interesting way to measure this is, “What per cent of tasks can GPT-4 do?” Let’s say it’s 10%. Can GPT-5 do 12% of human tasks? Or 15%, or 20%?

SATYA NADELLA. But the fact that so many people are able to use it for productivity in their workflow, that’s the power. And I know it’s not as satisfying as saying here’s each thing it will do, but it’s that it becomes a companion for knowledge work. It becomes a way to use a computer.

How dangerous is AGI?

ZANNY MINTON BEDDOES. I hear that, and that’s an absolutely appropriate answer to my question, but I guess I’m trying to get at the sense of whether this is incremental or whether it’s radical, even in the next year?

SAM ALTMAN. I believe that some day we will make something that qualifies as an AGI by whatever fuzzy definition you want, the world will have a two-week freakout and then people will go on with their lives.

ZANNY MINTON BEDDOES. Sam Altman just said the world would only have a two-week freakout when we get to AGI. That’s quite a statement to make.

SAM ALTMAN. One thing I say a lot is no one knows what happens next, and I can’t see to the other side of that event horizon with any detail. But it does seem like the deep human motivations will not go anywhere.

ZANNY MINTON BEDDOES. This is when people start getting alarmed.

SAM ALTMAN. That we have no idea? Why?

ZANNY MINTON BEDDOES. Well, we’re going to have an intelligence that is more intelligent than all of us. And we have no idea what happens next.

SAM ALTMAN. No, no, one thing I love to do is go back and read about the contemporaneous accounts of technological revolutions at the time. And the expert predictions are just always totally wrong.

ZANNY MINTON BEDDOES. That’s a very good point.

SAM ALTMAN. And you need to have some flexibility in your opinions and, look, have a tight feedback loop with how it’s going with the world.

AI regulation

ZANNY MINTON BEDDOES. The amount of focus on safety and regulation is sort of very, very high. Tell me whether you think regulators have got it right or whether we’re not doing enough.

SATYA NADELLA. At this point, if I look at what the White House EO is, or what the UK Safety Summit is, what’s happening in Europe, what’s happening in Japan, they are going to have a say. Nation states are absolutely going to have a say on what is the regulation that controls any technology development and, most importantly, what is ready for deployment or not. And so I feel like we will all be subject to those regs.

ZANNY MINTON BEDDOES. If you were ever in that room and you thought to yourself, this is getting dangerous and this could actually have consequences that I would not want upon the world, would you then shout “stop” and would you stop?

SAM ALTMAN. There’s no one big magic red button we have that blows up the data centre, which I think some people sort of assume exists. It’s not this binary go/stop decision. It is the many little decisions along the way: allow this, don’t allow this; how to predict what the risks of the future are going to be; how to mitigate those; set this new value here, things like that.
