17 Aug 2023. Jaan Tallinn is no stranger to disruptive tech: more than two decades ago he co-engineered Kazaa, which allowed for the free download of films and music. He also co-engineered Skype, which disrupted traditional voice and video communication. But when he looks at the way Big Tech and governments are pushing the boundaries of artificial intelligence, he worries about our future. Could we be fast approaching the point when machines don’t need human input anymore? Host Steve Clemons asks Tallinn, who co-founded the Centre for the Study of Existential Risk at Cambridge University, about the risks and opportunities posed by AI.
Steve Clemons: Hi, I’m Steve Clemons, and I have a question: Is the wild quest for advanced artificial intelligence more like a suicide race for humanity? Let’s get to the bottom line.
[Music]
Steve Clemons: Science fiction has been obsessed with this idea for decades: robots and superintelligent computers take over our decision-making powers and then they enslave us, and we only realize what’s going on when it’s way too late, like when Keanu Reeves’ character in The Matrix figured out that he and all of humanity were basically just batteries for a huge AI machine. Well, is fiction becoming reality? Governments and big corporations are locked in a frantic race to come up with more advanced applications for artificial intelligence, to manage everything cheaper and more competently: health care, education, even battlefields. In a way, artificial intelligence is the new arms race. Russian President Vladimir Putin once said that the nation that leads in AI will be the ruler of the world. So is there an existential risk to humans? And is it time to pause and take a step back, to make sure these efforts are regulated and more guardrails are put in place to minimize those risks? My guest today says it’s definitely that time. He is Jaan Tallinn, one of the engineers behind hit programs like Skype and Kazaa, and now the founder of the Centre for the Study of Existential Risk at Cambridge University. He’s also co-founder of the Future of Life Institute, which took the lead in calling for a six-month moratorium on AI research.

Jaan, it’s a real pleasure to have you with us today, and I just want to start here: 23 years ago I read an article, the cover story of Wired magazine, by another technologist I knew named Bill Joy, and its title was “Why the Future Doesn’t Need Us.” I would love to hear where we are in Bill Joy’s predictions, and what you think we need to be wary of as we move into this new era of AI.
Jaan Tallinn: Indeed. Predictions like Bill Joy’s go back even further: for example, Alan Turing in 1951 said that once AI becomes smarter than humans, we will likely lose control to it. And I think the correct position to take here is that as long as we cannot rule out that we will fail to remain firmly in control for a long time, we should take the necessary precautions to make sure that either we remain in control or, if we do lose control, the future will still be good for us.

Steve Clemons: Now, you have written about your concerns in this area not being something that evolves tomorrow but down the road: that as superintelligence really evolves and takes hold, mankind may become less relevant to the equation. Can you explain to our audience, our lay audience, what your concerns are about computer superintelligence?
Jaan Tallinn: It turns out there was just recently a survey result, I think by an organization called YouGov, and normal, ordinary people on the street actually have pretty good intuitions about what could go wrong. In some ways, people in academia, intellectuals, have a habit of downplaying problems that both the experts, like Yoshua Bengio or Geoffrey Hinton, the inventors of deep learning, and people on the street understand. Because if you look at it, the reason why humans, and not chimpanzees, are firmly in control of this planet is that we are more capable than they are. We are more intelligent. We are not stronger, but we know how to do long-term planning, et cetera. And now we as a species are in a race to yield that advantage to machines, which I think is not a good idea.
Steve Clemons: Now, you were involved with the founding of a number of institutes that are fascinating. One is looking at the study of existential risk, and another is the Future of Life Institute, which I find fascinating and which led with the letter, signed by many of the world’s most important technologists today, asking for a moratorium on research. Elon Musk, who’s been, I guess, on the board of your center, was also a cosigner of that letter. I guess my question is: can Pandora be put back in the box? Can a letter like that, can a moratorium, a call for a pause, actually have an impact on the global development of AI today? Or have things just proceeded too far at this point?
Jaan Tallinn: It definitely can. There are multiple reasons to be hopeful. For example, we have done it with other technology before: we did ban human cloning, even though technologically it is very possible. And with things like bioweapons, et cetera, we have been less successful, but still, the success has not been zero. The other thing is that, again, the real experts, the top experts in the field, are concerned. So it isn’t some sort of Luddite voice that advocates for a pause; there’s a very mixed and wide developing consensus about the need to take things slower. And one reason to take things slower is just to make sure that things go well. If we are just at the mercy of market forces, we might not have the ability to steer things sufficiently.
Steve Clemons: Yeah. And you’re a wealthy guy, you’ve invested in a lot of these companies over the years, and you’ve been investing in AI companies. So it’s sort of interesting, on one hand, to see you warning about these dangers, and yet you’re a significant investor; you run with the crowd that’s out there bringing this new technology forward. How do you square that with yourself?
Jaan Tallinn: Yeah, that is a good question. My default way of looking at things is: what is the counterfactual that I’m displacing? In other words, if I didn’t invest, who else would take my place? That’s one approach I’ve taken with companies like Anthropic, for example. The other approach I take is that I try not to be the decisive investor. For example, in DeepMind, where I invested in 2010 or 2011, if I remember correctly, I was just a very small shareholder, so I viewed my investment as a ticket to be present at the company and talk about the concerns that I already had back then.
Steve Clemons: Now let’s talk about this letter and President Biden’s meeting with seven leading companies. I have them listed as Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Many people believe that this meeting in the White House of President Biden with these executives, talking about voluntary measures that they might take to think through the impact and dangers and risks of AI, was precipitated by the Future of Life Institute’s letter. I’d love to hear your thoughts on that, but I think more important is: do you believe that these seven executives, these companies, and President Biden are sincere in what they’re doing? Or is this fake performance, acting like they’re doing something while behind the scenes they’re just chugging along as they were before?
Jaan Tallinn: I don’t think this is a fake performance. I mean, I do have some concerns that for many people this might be a very new topic, so they don’t exactly know what to think at this point; everyone now has to learn about this new situation. When it comes to taking credit for these things, I think the biggest credit really goes, for good or for bad, to OpenAI for releasing ChatGPT, because that caused the whole planet to pay attention to AI in a way that it didn’t before. The Future of Life Institute’s open letter then fell into this fertile ground prepared by OpenAI, and in turn came the extinction statement by the Center for AI Safety: a one-sentence declaration that extinction risk from AI should be treated at the same level as nuclear and bio risks.
Steve Clemons: You know, it’s an interesting conversation. I actually try not to sensationalize when we have these conversations, but sometimes, you know, to overstate for effect is part of learning, and also of thinking around the corner and trying to anticipate the unanticipatable, if you will: things we haven’t thought about as part of the conversation. And your Future of Life Institute has actually been doing this. They’ve developed something, and the best thing I can call them is what they call them: slaughterbots. Can you tell us about slaughterbots and, just thinking in contemporary terms and capacity in drones, what could happen sooner than people think with the slaughterbots?
Jaan Tallinn: Yes. There are two really big problems. One is that putting AI in the military makes it very hard for humanity to control the technology’s trajectory, because at that point you are in a very literal arms race, and when you are in an arms race you don’t have much maneuvering room when it comes to thinking about how you approach this new technology: you just have to go where the capabilities and the strategic advantages are. So that’s one big worry about putting AI in the military. The other problem, as we’ve seen with cyber warfare, is that as things become autonomous, attribution becomes very hard. The natural evolution of fully automated warfare, as the Future of Life Institute’s slaughterbots videos show, is that you will get swarms of miniaturized drones that anyone with money can produce and release without attribution. So we might just be creating a world where it’s no longer safe to be outside, because you might be chased down by swarms of slaughterbots.
Steve Clemons: One of the interesting reactions I had to the Future of Life Institute letter, when you brought all of these major technologists together to call for a moratorium, is that I wondered: how does that consortium compete if China is not involved? Or is China involved? Is there an opportunity for a truly global deal that we’re not thinking enough about? How do you do that if you’ve got one side that’s going to follow guardrails and guidelines and another side that might not?
Jaan Tallinn: So, I’m not an expert on China, but with that caveat I just want to point out that China already has AI regulation in a way that the US does not. So in some ways the US is falling behind in that regulatory area, and the EU is also farther ahead than the US. Also, this is a global problem. Just as we’ve seen with global warming, it’s not correct to assume that the Chinese are just the bad guys in the room. If there’s a global problem, you have to get people together and discuss it with open cards, rather than trying to shift the blame to the other guys.
Steve Clemons: It occurred to me, reading a Goldman Sachs report which says that, extrapolating its estimates globally, generative AI could expose the equivalent of 300 million full-time jobs in Europe and the United States to automation. Along this line, it just made me think: as the world changed when we went from farming, and large families of kids who were farmers helping their parents, to a much smaller farming population, are we going to see a world in which far fewer humans are actually being employed?
Jaan Tallinn: Yeah. If you take the view that AIs are just another technology, you can firmly make the case that you shouldn’t be worried, because technology has in the end always been good for humans, including for employment. However, if you take the analogy of species, then it’s clear that the introduction of a smarter species quite often does not go well for the less advanced species.
Steve Clemons: Well, it’s a fascinating problem, and there’s another dimension here. I met recently with Barry Diller, and Barry Diller has started a consortium to basically sue AI operations that scrape material, intellectual property, the property of others, particularly in the news business, and to threaten these major lawsuits. There’s now this very interesting debate in the media and publishing world about property rights, and whether or not guardrails that can limit how AI learns are a smart thing to do. Have you thought about that at all? And what do you think the chances are that publishers or artists or people who create can somehow get carve-outs where they’re not part of the AI world? I find it perhaps naive, but I’d love to hear from you.
Jaan Tallinn: Well, perhaps as someone who 20 years ago was working on a peer-to-peer file-sharing program, I probably shouldn’t take strong stances on this issue. However, one point that I think about, or suspect, is that the issue with copyright is just going to be very temporary. In some ways it is more a feature of the current generation of AI: as AIs get smarter, they might actually need less and less data, and less and less copyrighted data, to exhibit the same or perhaps even stronger capabilities. So I think in some ways the copyright people are perhaps fighting a good fight, but eventually I think it’s going to be a losing fight.
Steve Clemons: I mean, you did this. As you just mentioned, you were the king of this: you did this with Kazaa, you brought new technology, and Skype. I think that is a very interesting tension out there. If you would put yourself back 20 years, when you were doing this, were there ways that the system could have slowed you down? And I ask this of someone who, at 25 years old, was changing the world and not worried about these boundaries. Now you’re 50 years old and you’re saying, hey, we need guidelines and boundaries. Are you worried about the 25-year-old version of you that’s ignoring those guardrails and concerns today?
Jaan Tallinn: That is a fascinating question. Hmm. I kind of feel that if I were faced with my 20-year-old self, I could talk sense to him, but I think that’s perhaps a deceptive thought. Indeed, one thing that I have inherited from my 20-year-old self, for example, is that I really hate software patents; I think they’re just a tax on programmers, and that is just not a great thing to have. But definitely my views about open source have become more moderate, as I see that open-source AIs could potentially be the source of catastrophic events.
Steve Clemons: So is resistance futile? I guess, on one hand, I look at technology advancing so much, and you’ve been such a big driver of this, and I do believe that we make choices; I’m glad you mentioned human cloning. What I’m interested in is: how do we take the work you’re doing at the Future of Life Institute, the concerns about existential threats, and give it scale, so that it becomes more the norm and less of a boutique topic?
Jaan Tallinn: I think it’s now time to really put forward some early regulations, at the very least to do something to exercise the muscle of technology regulation. The EU has done some of that, I think more than the US has. But specifically, I’m thinking about things like making sure that data centers are certified: the big AI experiments are done in big data centers, so if you want to train an AI at that scale, those data centers would have to be certified. I think that’s one of the steps. A perhaps even easier step, on which there seems to be a lot of consensus, is that AI outputs should be labeled: nobody should be faced with a phone call or a video or a text and fooled into thinking it came from a human. There should be clear indications that this is AI output. And there are things like liability: if, for example, Facebook puts out open-source AI, and it falls into bad hands, and something really catastrophic happens as a result, that responsibility should go back to Facebook.
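To make the labeling idea concrete, here is a minimal sketch, in Python, of what a machine-readable “this is AI output” tag could look like. This is purely illustrative, not a scheme Tallinn or any regulator has specified; the tag fields and names are hypothetical.

```python
import hashlib
import json

def label_ai_output(content: bytes, model_name: str) -> dict:
    """Build a provenance tag that travels alongside AI-generated content.

    The hash binds the tag to exactly these bytes, so a consumer can
    check that the disclosure refers to the content they received.
    """
    return {
        "ai_generated": True,                           # the disclosure itself
        "model": model_name,                            # what produced it
        "sha256": hashlib.sha256(content).hexdigest(),  # binds tag to content
    }

def discloses_ai_origin(content: bytes, tag: dict) -> bool:
    """Return True if the tag discloses AI origin and matches the content."""
    return (
        tag.get("ai_generated") is True
        and tag.get("sha256") == hashlib.sha256(content).hexdigest()
    )

# Hypothetical usage: a chatbot reply shipped together with its tag.
reply = b"Here is a summary of your meeting notes."
tag = label_ai_output(reply, model_name="example-model-v1")  # hypothetical name
print(json.dumps(tag, indent=2))
print(discloses_ai_origin(reply, tag))            # True
print(discloses_ai_origin(b"edited reply", tag))  # False: tag no longer matches
```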
Steve Clemons: We have a survey that was done in 2019. I guess 4,000 people were asked and they got hundreds and hundreds of responses, asking experts who are actually working on these technology issues whether machines would be vastly better than humans at all professions. At that time it said that within 10 years the probability would be ten percent, and within 30 years it would be 60 percent. Where do you fall on this spectrum?
Jaan Tallinn: I’m just very uncertain. I think there is a significant bump in probability in the next few years, just because people have sort of discovered the gold mine and are throwing in more and more compute, more and more people, more and more money. I just watched the Senate testimony by Stuart Russell, and he said that there is currently about 10 billion dollars per month being invested in AI startups, which is more than the entire US funding for the rest of science. So there is this rush, a sort of gold rush, happening in AI, in a very distinct manner when you compare it to the funding crisis in the rest of startup land and technology in general. Perhaps this is going to precipitate certain capability gains that I’m very concerned about. If that doesn’t happen, then all bets are sort of off the table again, and it remains to be seen how much more time we will have.
Steve Clemons: I’m going to tell our audience that you are the real Jaan Tallinn; you’re not a deepfake, and we haven’t conjured you to do all of this. But, you know, maybe someday we would be able to do that, maybe illegally. I’ve seen deepfakes of Tom Cruise on TikTok. At an event that Semafor, where I work, put together, we did a deepfake of Barry Diller, this big media titan; he was not happy at all about it. It was one of these things where, as you look at the convergence of a lot of different dimensions of how things have changed, what do you think: is truth actually in jeopardy?
Jaan Tallinn: In some sense, sure, but in some sense not really. Ultimately, I think we have lived with the ability to produce fake texts for a long time, and we have developed things like digital signatures and website traffic encryption to deal with this. With video content, we have been used to trusting it, so there will be a period during which many people will be fooled by fake videos. But if nothing worse happens because of AI, then I wouldn’t be that worried, because people will just adjust and start demanding authentication of the sources, and indeed start demanding laws that if you fool someone with a generated video, you should go to jail.
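The “authentication of the sources” Tallinn expects people to demand has well-understood building blocks. Here is a rough sketch in which a publisher attaches an authentication tag to a video and a viewer verifies it before trusting it. For brevity this uses a shared-secret HMAC from Python’s standard library; real media-provenance schemes use asymmetric signatures (as with website certificates), so that anyone can verify but only the publisher can sign. All names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a publisher's signing key.
PUBLISHER_KEY = b"example-publisher-key"

def sign_video(video_bytes: bytes) -> str:
    """Publisher computes this tag and releases it alongside the video."""
    return hmac.new(PUBLISHER_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Viewer's software checks the tag before treating the video as authentic."""
    expected = hmac.new(PUBLISHER_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

video = b"...raw video bytes..."
tag = sign_video(video)
print(verify_video(video, tag))                 # True: came from the key holder
print(verify_video(video + b"deepfaked", tag))  # False: altered or unsigned content
```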
Steve Clemons: Let me just ask you, finally: we’ve had a discussion, you and I, about this before, and that is about the fragility of democracy and whether technology is worsening the problem or enhancing democratic options down the road. You know, I’m in a country right now where a former American president just had his fourth indictment; I’m not sure how well we’re exhibiting democracy today. But when you think about this, part of the question is: might AI make democracy better?
Jaan Tallinn: I mean, hey, AI could make everything better, sure. I do think that, just as with deepfakes, there are turbulent waters ahead, and if things don’t turn existential, I think we could potentially develop countermeasures to everything and adjust to the new situation, like we did with previous powerful technologies such as the internet, or smartphones, or cameras everywhere, et cetera. But my own worry is that every new generation of AI will just present bigger and bigger problems.
Steve Clemons: Singularitarian, hacker, investor, physicist Jaan Tallinn, founder of the Centre for the Study of Existential Risk and co-founder of the Future of Life Institute, thank you so much for being with us today.
Jaan Tallinn: Thank you very much.
Steve Clemons: So what’s the bottom line? We all would love AI to help doctors diagnose our ailments better, or to protect us, let’s say, against fraud and identity theft. But those are just the toes in the door. Generative AI will eventually affect everything, and I mean everything. And there are significant human-less dimensions to it, where data and machines actually talk to each other, learn from each other, and evolve without us. Some of us would like to buy a car that could drive itself. That’s great. But are you willing to live in a country that has an autonomously run government? Actually, when we look around, that doesn’t look like a bad idea. But add to that lethal autonomous weapons systems, or robotic killers, or cities that run themselves without any workers, and things start to look a bit more scary. We should be worried about the power of the handful of people who are making the big decisions on artificial intelligence today. And then we should be even more worried when that handful of people are gone and AI is making all those decisions by itself. And that’s the bottom line.
[Music]