FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“We definitely will be able to create completely autonomous beings with their own goals. And it will be very important, especially as these beings become much smarter than humans, it’s going to be important to have these beings, the goals of these beings be aligned with our goals.”

Ilya Sutskever, one of the leading AI scientists behind ChatGPT, reflects on his founding vision and values. In conversations with the film-maker Tonje Hessen Schei as he was developing the chat language model between 2016 and 2019, he describes his personal philosophy and makes startling predictions for a technology already shaping our world. Reflecting on his ideas today, amid a global debate over safety and regulation, we consider the opportunities as well as the consequences of AI technology. Ilya discusses his ultimate goal of artificial general intelligence (AGI), ‘a computer system that can do any job or task that a human does, but better’, and questions whether the AGI arms race will be good or bad for humanity. These filmed interviews with Ilya Sutskever are part of a feature-length documentary on artificial intelligence, called iHuman.

00:08 AI has the potential to solve problems and create new ones.
00:59 There is a call for a pause in the development of AI.
02:45 Creating AI with aligned goals is crucial.
03:25 Technology and biological evolution have similarities.
04:45 GPT is considered an early form of AGI.
06:13 The first AGIs will have a significant impact on society.
07:55 Programming AGIs correctly is crucial.
09:17 The speed of AI development is accelerating.
10:37 Cooperation between countries is important for AGI development.

0:08
Now AI is a great thing,
0:10
because AI will solve all the problems that we have today.
0:16
It will solve employment,
0:19
it will solve disease,
0:22
it will solve poverty,
0:25
but it will also create new problems.
0:36
The problem of fake news is going to be a million times worse,
0:41
cyber attacks will become much more extreme,
0:46
we will have totally automated AI weapons.
0:52
I think AI has the potential to create infinitely stable dictatorships.
0:58
This morning a warning about the power of artificial intelligence,
1:03
more than 1,300 tech industry leaders, researchers and others
1:07
are now asking for a pause in the development
1:09
of artificial intelligence to consider the risks.
1:45
Playing God,
1:47
scientists have been accused of playing God for a while,
1:54
but there is a real sense in which we are creating something
1:58
very different from anything we’ve created so far.
2:05
Yeah, I mean, we definitely will be able to create
2:10
completely autonomous beings with their own goals.
2:13
And it will be very important,
2:15
especially as these beings become much smarter than humans,
2:20
it’s going to be important to have these beings,
2:25
the goals of these beings be aligned with our goals.
2:45
What inspires me?
2:52
I like thinking about the very fundamentals, the basics.
2:59
What can our systems not do, that humans definitely do?
3:06
Almost approach it philosophically.
3:10
Questions like, what is learning?
3:13
What is experience?
3:16
What is thinking?
3:20
How does the brain work?
3:30
I feel that technology is a force of nature.
3:37
I feel like there is a lot of similarity between technology and biological evolution.
3:46
It is very easy to understand how biological evolution works,
3:50
you have mutations, you have natural selection.
3:53
You keep the good ones, the ones that survive
3:58
and just through this process you are going to have huge complexity in your organisms.
4:06
We cannot understand how the human body works
4:08
just because we understand evolution,
4:11
but we understand the process more or less.
4:14
And I think machine learning is in a similar state right now,
4:17
especially deep learning, we have a very simple rule
4:21
that takes the information from the data
4:23
and puts it into the model and we just keep repeating this process.
4:27
And as a result of this process the complexity from the data
4:30
gets transferred into the complexity of the model.
4:34
So the resulting model is really complex
4:36
and we don’t really know exactly how it works, you need to investigate,
4:40
but the algorithm that did it is very simple.
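The training process Sutskever describes here, a very simple rule applied over and over until the complexity of the data is transferred into the model, can be illustrated with plain gradient descent. This toy sketch is not from the film; all names and values are illustrative.

```python
# A minimal sketch of the "very simple rule": repeatedly nudge model
# parameters toward the data (gradient descent on a one-parameter
# linear model). Names and values are illustrative only.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) with y = 2x
w = 0.0              # the model: prediction = w * x
learning_rate = 0.05

for step in range(200):              # "we just keep repeating this process"
    for x, y in data:
        error = w * x - y            # how wrong the model is on this example
        w -= learning_rate * error * x   # the simple update rule

print(round(w, 3))   # prints 2.0: the pattern in the data now lives in the model
```

The point of the analogy survives even in this toy: the update rule is a one-liner, yet nothing in it tells you what the trained model "knows"; that has to be recovered by inspecting the result, just as with real deep networks.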
4:44
ChatGPT, maybe you’ve heard of it,
4:48
if you haven’t then get ready.
4:51
You describe it as the first spots of rain before a downpour.
4:56
It’s something we just need to be very conscious of,
4:58
because I agree it is a watershed moment.
5:01
Well ChatGPT is being heralded as a gamechanger
5:05
and in many ways it is, its latest triumph outscoring people.
5:10
A recent study by Microsoft Research concludes that GPT-4
5:14
is an early, yet still incomplete artificial general intelligence system.
5:35
Artificial General Intelligence.
5:38
AGI,
5:41
a computer system that can do any job or any task
5:45
that a human does, but better.
5:54
There is some probability that AGI is going to happen pretty soon,
6:00
there’s also some probability it’s going to take much longer.
6:03
But my position is that the probability that AGI could happen soon
6:08
is high enough that we should take it seriously.
6:14
And it’s going to be very important
6:15
to make these very smart capable systems
6:18
be aligned and act in our best interests.
6:27
The very first AGIs
6:29
will basically be very, very large data centres.
6:33
Packed with specialised neural network processors
6:37
working in parallel.
6:40
A compact, hot, power-hungry package,
6:44
consuming, like, 10 million homes’ worth of energy.
6:48
You’re going to see dramatically more intelligent systems
6:53
and I think it’s highly likely that those systems will have
6:55
a completely astronomical impact on society.
6:59
Will humans actually benefit?
7:02
And who will benefit and who will not?
7:42
The beliefs and desires of the first AGIs will be extremely important
7:49
and so it’s important to programme them correctly.
7:53
I think that if this is not done,
7:56
then the nature of evolution, of natural selection,
8:01
will favour those systems that prioritise their own survival above all else.
8:17
It’s not that it’s going to actively hate humans and want to harm them,
8:23
but it is going to be too powerful
8:26
and I think a good analogy would be the way humans treat animals.
8:32
It’s not that we hate animals, I think
8:33
humans love animals and have a lot of affection for them,
8:37
but when the time comes to build a highway between two cities,
8:42
we are not asking the animals for permission
8:45
we just do it because it’s important for us.
8:49
And I think by default that’s the kind of relationship
8:51
that’s going to be between us and AGIs which
8:56
are truly autonomous and operating on their own behalf.
9:18
Many machine learning experts,
9:19
people who are very knowledgeable and very experienced,
9:21
have a lot of scepticism about AGI.
9:25
About when it could happen and about whether it could happen at all.
9:30
Right now this is something that
9:31
not that many people have realised yet.
9:35
That the speed of computers for neural networks, for AI,
9:39
is going to become maybe 100,000 times faster
9:43
in a small number of years.
9:51
If you have arms-race dynamics
9:53
between multiple teams trying to build the AGI first,
9:59
they will have less time to make sure that the AGI they build
10:03
will care deeply for humans.
10:09
Because the way I imagine it is that there is an avalanche,
10:12
like there is an avalanche of AGI development.
10:15
Imagine it, this huge unstoppable force.
10:21
And I think it’s pretty likely the entire surface of the earth will be covered with
10:26
solar panels and data centres.
10:34
Given these kinds of concerns, it will be important that AGI
10:40
is somehow built as a cooperation between multiple countries.
10:48
The future is going to be good for the AI regardless.
10:53
It would be nice if it were good for humans as well.
