FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Chris Anderson: Shane, give us a snapshot of you growing up

and what on Earth led you to get interested in artificial intelligence?

Shane Legg: Well, I got my first home computer on my 10th birthday,

and I —

this was before the internet and everything.

So you couldn’t just go and surf the web and so on.

You had to actually make stuff yourself and program.

And so I started programming,

and I discovered that in this computer there was a world,

I could create a world,

I could create little agents that would run around

and chase each other and do things and so on.

And I could sort of bring this whole universe to life.

And there was sort of that spark of creativity that really captivated me,

and I think that was really the seed

of my interest that later grew

into an interest in artificial intelligence.

CA: Because in your standard education, you had some challenges there.

SL: Yeah, I was dyslexic as a child.

And so they were actually going to hold me back a year

when I was 10 years old,

and they sent me off to get my IQ tested to sort of,

you know, assess how bad the problem was.

And they discovered I had an exceptionally high IQ.

And then they were a little bit confused about what was going on.

And fortunately, at that time,

there was somebody in the town I lived in

who knew how to test for dyslexia.

And it turns out I wasn’t actually of limited intelligence.

I was dyslexic, and that was the issue.

CA: You had reason from an early age to believe

that our standard assumptions about intelligence might be off a bit.

SL: Well, I had reason, from an early age, to sometimes doubt authority.

(Laughter)

You know, if the teacher thinks you’re dumb, maybe it’s not true.

Maybe there are other things going on.

But I think it also created in me an interest in intelligence

when I sort of had that experience as a child.

CA: So you’re credited by many

as coining the term “artificial general intelligence,” AGI.

Tell us about 2001, how that happened.

SL: Yeah, so I was approached by someone called Ben Goertzel

who I’d actually been working with,

and he was going to write a book,

and he was thinking about a book on AI systems

that would be much more general and capable,

rather than focusing on very narrow things.

And he was thinking about a title for the book.

So I suggested to him, “If you’re interested in very general systems,

call it artificial general intelligence.”

And so he went with that.

And then he and various other people started using the term online

and on the internet,

and then it sort of became popularized from there.

We later discovered there was someone called Mark Gubrud,

who published a paper in a security nanotech journal in ’97.

So he is actually the first person to have used the term.

But it turns out he pretty much meant the same thing as us anyway.

CA: It was kind of an idea whose time had come,

to recognize the potential here.

I mean, you made an early prediction that many people thought was bonkers.

What was that?

SL: Well, in about 2001,

a similar time to when I suggested this term artificial general intelligence,

I read a book by Ray Kurzweil, actually, “Age of Spiritual Machines,”

and I concluded that he was fundamentally right,

that computation was likely to grow exponentially for at least a few decades,

and the amount of data in the world would grow exponentially

for a few decades.

And so I figured that if that was going to happen,

then the value of extremely scalable algorithms

that could harness all this data and computation

was going to be very high.

And then I also figured that in the mid 2020s,

it would be possible then,

if we had these highly scalable algorithms,

to train artificial intelligence systems

on far more data than a human would experience in a lifetime.

And so as a result of that

(you can find it on my blog from about 2009,

which I think is the first time I publicly talked about it),

I predicted a 50 percent chance of AGI by 2028.

I still believe that today.
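A rough back-of-envelope sketch of that argument, for readers who want to see the arithmetic: the constants below (a person's daily language exposure, the size of a text training corpus, a compute doubling period) are illustrative assumptions, not figures from the interview.

```python
# Back-of-envelope sketch of the scaling argument above.
# All constants are illustrative assumptions, not figures quoted in the interview.

WORDS_PER_DAY = 30_000        # assumed language exposure per person (heard plus read)
LIFETIME_YEARS = 80
TOKENS_PER_WORD = 1.3         # rough tokenizer overhead assumption

human_lifetime_tokens = WORDS_PER_DAY * 365 * LIFETIME_YEARS * TOKENS_PER_WORD

TRAINING_CORPUS_TOKENS = 10 ** 13   # assumed size of a modern text corpus, in tokens

ratio = TRAINING_CORPUS_TOKENS / human_lifetime_tokens
print(f"Human lifetime exposure : {human_lifetime_tokens:.2e} tokens")
print(f"Assumed training corpus : {TRAINING_CORPUS_TOKENS:.2e} tokens")
print(f"Corpus vs. lifetime     : about {ratio:,.0f}x more data")

# The exponential-growth half of the argument: if available compute doubles
# every DOUBLING_MONTHS, how much more is there after a few decades?
DOUBLING_MONTHS = 18          # Moore's-law-style assumption
YEARS = 25                    # "at least a few decades"
growth = 2 ** (YEARS * 12 / DOUBLING_MONTHS)
print(f"Compute growth over {YEARS} years: roughly {growth:.1e}x")
```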

CA: That’s still your date.

How did you define AGI back then, and has your definition changed?

SL: Yeah, I didn’t have a particularly precise definition at the beginning.

It was really just an idea of systems that would just be far more general.

So rather than just playing Go or chess or something,

they would actually be able to do many, many different things.

The definition I use now is that it’s a system

that can do all the cognitive kinds of tasks

that people can do, possibly more,

but at least it can do the sorts of cognitive tasks

that people can typically do.

CA: So talk about just the founding of DeepMind

and the interplay between you and your cofounders.

SL: Right. So I went to London, to a place called the Gatsby Unit,

which studies theoretical neuroscience and machine learning.

And I was interested in learning the relationships

between what we understand about the brain

and what we know from machine learning.

So that seemed like a really good place.

And I met Demis Hassabis there.

He had the same postdoc supervisor as me,

and we got talking.

And he convinced me

that it was the time to start a company then.

That was in 2009, when we started talking.

And I was a little bit skeptical.

I thought AGI was still a bit too far away,

but he thought the time was right, so we decided to go for it.

And then there was a friend of his, Mustafa Suleyman.

CA: And specifically, one of the goals of the company

was to find a pathway to AGI?

SL: Absolutely.

On our first business plan that we were circulating

when we were looking for investors in 2010,

it had one sentence on the front cover and it said,

“Build the world’s first artificial general intelligence.”

So that was there right from the beginning.

CA: Even though you knew

that building that AGI might actually have

apocalyptic consequences in some scenarios?

SL: Yeah.

So it’s a deeply transformative technology.

I believe it will happen.

I think that, you know,

these algorithms can be understood and they will be understood at the time.

And I think that intelligence is fundamentally

an incredibly valuable thing.

Everything around us at the moment — the building we’re in,

the words I’m using, the concepts we have, the technology around us —

you know, all of these things are being affected by intelligence.

So having intelligence in machines

is an incredibly valuable thing to develop.

And so I believe it is coming.

Now when a very, very powerful technology arises,

there can be a range of different outcomes.

Things could go very, very well,

but there is a possibility things can go badly as well.

And that was something I was also aware of from about 20 years ago.

CA: So talk about, as DeepMind developed,

was there a moment where you really felt,

“My goodness, we’re onto something unbelievably powerful?”

Like, was it AlphaGo, that whole story, or what was the moment for you?

SL: Yeah, there were many moments over the years.

One was when we did the Atari games.

Have you seen those videos

where we had an algorithm that could learn to play multiple games

without being programmed for any specific game?

There were some exciting moments there.

Go, of course, was a really exciting moment.

But I think the thing that’s really captured my imagination,

a lot of people’s imagination,

is the phenomenal scaling of language models in recent years.

I think we can see they’re systems

that really can start to do some meaningful fraction

of the cognitive tasks that people can do.
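An aside on the Atari result mentioned above, for readers curious what "one algorithm that learns many games" means mechanically: below is a minimal sketch of the value-learning idea behind such agents. It is a tabular Q-learning toy on a made-up environment, not the deep Q-network DeepMind actually used; the environment, reward scheme and hyperparameters are assumptions for illustration only.

```python
import random

# Toy tabular Q-learning sketch. The same generic update rule works for any
# environment with states, actions and rewards; nothing in it is specific to
# one game. (DeepMind's Atari agent used a deep network instead of a table,
# but the underlying idea is similar.)

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # assumed learning rate, discount, exploration

random.seed(0)
# A made-up environment: random transitions, reward only for reaching the last state.
transitions = {(s, a): random.randrange(N_STATES)
               for s in range(N_STATES) for a in range(N_ACTIONS)}

def step(state, action):
    next_state = transitions[(state, action)]
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for episode in range(2000):
    state = 0
    for _ in range(50):
        # Epsilon-greedy: mostly take the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted best next value.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

print("Learned value of the best action from the start state:", round(max(Q[0]), 3))
```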

CA: Now, you were working on those models,

but were you, to some extent, blindsided

by OpenAI’s, sort of, sudden unveiling of ChatGPT?

SL: Right.

We were working on them and you know,

the transformer model was invented at Google,

and we had teams who were building big transformer language models and so on.

CA: Google acquired DeepMind at some point in this journey.

SL: Yeah, exactly.

And so what I didn’t expect

was just how good a model could get training purely on text.

I thought you would need more multimodality.

You’d need images, you’d need sound, you’d need video and things like that.

But due to the absolutely vast quantities of text,

it can sort of compensate for these things to an extent.

I still think you see aspects of this.

I think language models tend to be weak in areas

that are not easily expressed in text.

But I don’t think this is a fundamental limitation.

I think we’re going to see these language models expanding into video

and images and sound and all these things,

and these things will be overcome in time.
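To make "training purely on text" concrete, here is a toy sketch of the next-token-prediction idea at the core of those language models: a character-level bigram model trained on one assumed sentence. Real systems are transformers trained on vast corpora; this only shows the shape of the objective.

```python
import random
from collections import defaultdict

# Toy character-level bigram "language model": count which character tends to
# follow which, then sample from those counts. This is only the shape of the
# next-token-prediction idea; real language models are transformers trained on
# vastly more text. The training string is an assumption for illustration.

corpus = "intelligence measures the ability to achieve goals in a wide range of environments. "

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(ch):
    """Sample the next character in proportion to how often it followed ch."""
    followers = counts[ch]
    if not followers:
        return random.choice(corpus)   # fall back for unseen characters
    chars, weights = zip(*followers.items())
    return random.choices(chars, weights=weights)[0]

random.seed(0)
text = "i"
for _ in range(60):
    text += sample_next(text[-1])
print(text)
```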

CA: So talk to us, Shane,

about the things that you, at this moment,

passionately feel that the world needs to be thinking about more cogently.

SL: Right.

So I think that very, very powerful,

very intelligent artificial intelligence is coming.

I think that this is very, very likely.

I don’t think it’s coming today.

I don’t think it’s coming next year or the year after.

It’s probably a little bit further out than that.

CA: 2028?

SL: 2028, that’s a 50 percent chance.

So, you know, if it doesn’t happen in 2028,

I’m not going to be surprised, obviously.

CA: And when you say powerful,

I mean there’s already powerful AI out there.

But you’re saying basically a version

of artificial general intelligence is coming.

SL: Yeah.

CA: So give us a picture of what that could look like.

SL: Well, if you had an artificial general intelligence,

you could do all sorts of amazing things.

Just like human intelligence is able to do many, many amazing things.

So it’s not really about a specific thing,

that’s the whole point of the generality.

But to give you one example,

we developed the system AlphaFold,

which will take a protein and compute, basically, the shape of that protein.

And that enables you to do all sorts of research

into understanding biological processes,

developing medicines and all kinds of things like that.

Now, if you had an AGI system,

instead of requiring what we had at DeepMind,

roughly 30 world-class scientists

working for about three years to develop that,

maybe you could develop that with just a team

of a handful of scientists in one year.

So imagine these, sort of, AlphaFold-level developments

taking place around the world on a regular basis.

This is the sort of thing that AGI could enable.
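To give a taste of what "computing the shape of a protein" yields downstream, here is a hedged sketch that reads a predicted structure in standard PDB format (for example, a model file downloaded from the public AlphaFold database) and computes a simple geometric summary. The file path is a placeholder, not something referenced in the talk.

```python
import math

# Sketch: read alpha-carbon (CA) coordinates from a predicted protein structure
# in PDB format, for example a model downloaded from the AlphaFold database,
# and compute the radius of gyration, a crude measure of how compact the fold is.
# "predicted_structure.pdb" is a placeholder path.

def read_ca_coordinates(pdb_path):
    """Return a list of (x, y, z) tuples for each alpha-carbon ATOM record."""
    coords = []
    with open(pdb_path) as f:
        for line in f:
            # PDB fixed columns: record name 1-6, atom name 13-16, x/y/z in 31-54.
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                x = float(line[30:38])
                y = float(line[38:46])
                z = float(line[46:54])
                coords.append((x, y, z))
    return coords

def radius_of_gyration(coords):
    """Root-mean-square distance of the atoms from their centroid, in angstroms."""
    n = len(coords)
    cx = sum(x for x, _, _ in coords) / n
    cy = sum(y for _, y, _ in coords) / n
    cz = sum(z for _, _, z in coords) / n
    msd = sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
              for x, y, z in coords) / n
    return math.sqrt(msd)

if __name__ == "__main__":
    cas = read_ca_coordinates("predicted_structure.pdb")
    print(f"{len(cas)} residues, radius of gyration {radius_of_gyration(cas):.1f} A")
```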

CA: So within months of AGI being with us, so to speak,

it’s quite possible that some of the scientific challenges

that humans have wrestled with for decades, centuries, if you like,

will start to tumble in rapid succession.

SL: Yeah, I think it’ll open up all sorts of amazing possibilities.

And it could be really a golden age of humanity

where human intelligence,

which is aided and extended with machine intelligence,

enables us to do all sorts of fantastic things

and solve problems that previously were just intractable.

CA: So let’s come back to that.

But I think you're also not just an irredeemable optimist;

you see a potential for it to go very badly in a different direction.

Talk about what that pathway could look like.

SL: Well, yeah, I want to explain.

I don’t believe the people

who are sure that it’s going to go very well,

and I don’t believe the people

who are sure that it’s going to go very, very badly.

Because what we’re talking about is an incredibly profound transition.

It’s like the arrival of human intelligence in the world.

This is another intelligence arriving in the world.

And so it is an incredibly deep transition,

and we do not fully understand all the implications

and consequences of this.

And so we can’t be certain

that it’s going to be this, that or the other thing.

So we have to be open-minded about what may happen.

I have some optimism because I think

that if you want to make a system safe,

you need to understand a lot about that system.

You can’t make an airplane safe

if you don’t know about how airplanes work.

So as we get closer to AGI,

we will understand more and more about these systems,

and we will see more ways to make these systems safe,

make highly ethical AI systems.

But there are many things we don’t understand about the future.

So I have to accept that there is a possibility that things may go badly

because I don’t know what’s going to happen.

I can’t know that about the future in such a big change.

And even if the probability of something going bad is quite small,

we should take this extremely seriously.

CA: Paint a scenario of what going bad could look like.

SL: Well, it’s hard to do

because you’re talking about systems

that potentially have superhuman intelligence, right?

So there are many ways in which things could go bad in the world.

People sometimes point to, I don’t know, engineered pathogens, right?

Maybe a superintelligence could design an engineered pathogen.

It could be much more mundane things.

Maybe with AGI, you know,

it gets used to destabilize democracy in the world,

with, you know, propaganda or all sorts of other things like that.

We don’t know —

CA: That one might already have happened.

SL: It might be happening a bit already.

But, you know, there may be a lot more of this

if we have more powerful systems.

So there are many ways in which societies can be destabilized.

And you can see that in the history books.

CA: I mean, Shane, if you could have asked all humans,

say, 15 years ago, OK, we can open a door here,

and opening this door could lead to the best-ever outcomes for humanity.

But there’s also a meaningful chance,

let’s say it’s more than five percent,

that we could actually destroy our civilization.

I mean, isn’t there a chance that most people would have actually said,

“Don’t you dare open that damn door.

Let’s wait.”

SL: If I had a magic wand and I could slow things down,

I would use that magic wand, but I don’t.

There are dozens of companies,

well, there’s probably 10 companies in the world now

that can develop the most cutting-edge models, including, I think,

some national intelligence services who have secret projects doing this.

And then there’s, I don’t know,

dozens of companies that can develop something that’s a generation behind.

And remember, intelligence is incredibly valuable.

It’s incredibly useful.

We’re doing this

because we can see all kinds of value that can be created in this

for all sorts of reasons.

How do you stop this process?

I don’t see any realistic plan that I’ve heard of,

of stopping this process.

Maybe we can —

I think we should think about regulating things.

I think we should do things like this as we do with every powerful technology.

There’s nothing special about AI here.

People talk about, oh, you know, how dare you talk about regulating this?

No, we regulate powerful technologies all the time in the interests of society.

And I think this is a very important thing that we should be looking at.

CA: It’s kind of the first time we have this superpowerful technology out there

that we literally don’t understand in full how it works.

Is the single most important thing we must do, in your view,

to understand better what on Earth is going on,

so that we at least have a shot at pointing it in the right direction?

SL: There is a lot of energy behind capabilities at the moment

because there’s a lot of value in developing the capabilities.

I think we need to see a lot more energy going into actual science,

understanding how these networks work,

what they’re doing, what they’re not doing,

where they’re likely to fail,

and understanding how we put these things together

so that we’re able to find ways

to make these AGI systems profoundly ethical and safe.

I believe it’s possible.

But we need to mobilize more people’s minds and brains

to find these solutions

so that we end up in a future

where we have incredibly powerful machine intelligence

that’s also profoundly ethical and safe,

and it enhances and extends humanity

into, I think, a new golden period for humanity.

CA: Shane, thank you so much for sharing that vision

and coming to TED.

(Applause)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.