hello everybody it’s great to see my gosh so many friends actually and indeed my husband which is alarming he never
turns up to any event I do but um it is great to be here uh to talk
about literally one of the hottest topics of the moment with someone who
has written one of the best books about it um how many of you have used ChatGPT just a show of hands
virtually everybody I think you'd probably then agree that ChatGPT which came out in November last year
and it was only then that most people realized that artificial intelligence
generative AI models in particular were about to change the world and suddenly there was a kind of collective Global oh
my God this capability is extraordinary and it’s been reflected in endless
numbers of editorials handwringing politicians and I think I'm right in saying the main focus has been on the
downsides everyone has their pet view of what the odds are of existential risk are we all going to kill ourselves it’s
all terrible and Mustafa comes into this as a man with considerable credibility he is a
man who has co-founded not just one but two successful AI companies and he's a man who in this book takes a
sober realistic and actually very compelling look at what lies ahead of us and so
that's why you really should read it it's great I've read it twice uh you should read it Mustafa just to give you
some background he doesn't need much introduction I don't think to this group but he was a co-founder of DeepMind back in 2010 uh
he then was a co-founder of Inflection AI with Reid Hoffman and Reid Hoffman his co-founder has with the help of ChatGPT
written an extremely upbeat view of the potential of this technology so I'd love to know the debates between the two of
you um he got a CBE a few years ago for his visionary services and influence in
the UK technology sector um he is also on the board of The Economist so I get to see Mustafa
working up close um he's a friend of The Economist and a great figure in British
technology but I think and the place to start with this book and the book is called the coming wave
and you will know that there has been if you’ve turned on your TV or listened to a podcast recently you will know that never mind the coming wave there is
already a wave of um publicity and people being impressed with this book I
believe you told me 60 appearances of various sorts so consider yourselves lucky you're the 61st on this list
but understandably the work the book has had a tremendous impact because it is very interesting very thoughtful and
it's on the hottest topic at the moment so we want to talk most of the time about the book but I do want to for those of you who don't know Mustafa
get a little bit of background and the first is that Mustafa is actually not a computer geek
you didn’t study computer code right you studied philosophy and theology at Oxford so can you just give us the kind
of potted history about how a man who studied philosophy and theology comes to be the co-founder of two tech companies
what are you doing well I've always found philosophy a
systems thinking tool it enables me to be rigorous and clear about what I think and you know right from the very outset
I think when I was 19 I actually dropped out of my philosophy degree I didn't know that yeah I didn't finish and I was
really motivated by the impact that I could have in the world I left to help start a charity at the time it was a
telephone counseling service called Muslim Youth Helpline and it was secular I was an atheist even
though I had grown up with a Muslim background it was a secular service that
was designed to provide faith- and culturally-sensitive support to young British Muslims this was in 2003
and you know I I found myself at Oxford studying this very theoretical esoteric
you know set of ideas and I wanted to put real things into practice in terms of my ethics and that was why I went to
you know start the helpline and worked on that as a volunteer for three years uh I soon got you know frustrated about
the scale of impact um in our non-profit and I worked briefly for the mayor of London at the
time Ken Livingstone um as a human rights policy officer um and you know that was that was
inspiring but I was also struggling with the scale of impact I I realized that
you know if I didn't capture what really makes us organized and effective as a species the profit incentive then I
was going to miss one of the most important things to happen in my lifetime and um at the time I saw the rise of
Facebook this was sort of around 2007 2008 and it had grown in the space of
two years to 100 million monthly active users and I was totally blown away at
how quickly this was growing out of seemingly nowhere something completely new to me and so I set about on a quest
to find anyone and everyone that would speak to me to teach me about technology I had started a bunch of businesses
before that two different businesses one actually a technology company selling electronic point of sale systems
actually around here in Notting Hill uh in restaurants uh trying to put Wi-Fi infrastructure in there and so on that
was unsuccessful it was ahead of its time and so I was looking for people who I could you know form a
new partnership with and figure out how to take advantage of of technology and that’s where I met my friend and
co-founder of DeepMind Demis Hassabis because he was the brother of my best friend at the time from school and he was just
finishing his PhD in neuroscience at UCL and we got together and you know the rest is history and at that time
you had
this goal of artificial intelligence that was you know capable of replicating human intelligence or even exceeding it so
just think this was 13 years ago the rest of us didn’t even know this stuff was really going on you’re you’re in
your office where is it in Russell Square somewhere did you imagine that by 2023 the world
would have what we have now I mean in a way yes it was difficult for
us to imagine exactly how it would unfold but we made a very big bet on deep learning uh which is one of the
primary tools that is powering this new revolution um before anybody was involved in deep
learning so the current chief scientist and co-founder of OpenAI the creators of ChatGPT was one of our
interns back in 2011 Geoffrey Hinton who subsequently became one of
the heads of AI at Google and is known now as the godfather of AI recently in the press worried about the
consequences he was our first advisor our paid advisor I think his salary was 25,000
pounds a year to advise us so I think three of the six co-founders of OpenAI at some point passed through DeepMind
either to give talks or were actually members of the team so really it was all about timing you know we got
the timing absolutely right we were way ahead of the curve at that moment and somehow we managed to hang on so you you
were there for a while and then let’s fast forward a bit you can read the rest of this in the book you you now have
co-founded and run Inflection AI and you are creating an AI called Pi which you
can interact with if you like tell us what pi does so Pi stands for personal intelligence
and I believe that over the next few years everybody is going to have their
own personal AI there are going to be hundreds of thousands of AIs in the world they'll
represent businesses they'll represent brands every government will have its own AI every non-profit every musician
artist record label everything that is now represented by a website or an app
is soon going to be represented by an interactive conversational intelligent
service that represents the brand values and the ideas of whatever organization
is out there and we believe that at the same time everybody will want their own personal AI one that is on your side in
your corner helping you to be more organized helping you to make sense of the world it really is going to function as almost
like a chief of staff or you know prioritizing planning teaching
supporting you so that sounds great um what does it actually mean though in
practice because so often this conversation about AI it’s at this point then it turns into the apocalyptic we’re going to end up you know wiping
ourselves out because there’ll be some Rogue person you know sitting in a garage somewhere who will you know unleash a virus that will kill us all so
before we get to all of that stuff in let’s say I don’t know five years and you said
within the next three to five years you think AI will reach human level capability across a variety of tasks
perhaps not everything but a variety so paint a picture for us of what life will be like in five years
in 2028 and first of all will it be you and me here or will there be the kind of Mustafa AI bot
okay let me let me just go back 10 years just to to give you a sense for what has
already happened and why the predictions I'll make are I think plausible so
the deep learning revolution enabled us to make sense of raw messy data so we
could use AIs to interpret the content of images classify whether an image
contains dogs or cats what those pixels actually mean we can use it to
understand speech so when you dictate into your phone and it transcribes it and records perfect text we can use it
to do language translation all of these are classification tasks we’re essentially teaching the models to
understand the messy complicated world of raw input data well enough to understand the objects inside that data
that was the classification revolution the first 10 years now we're in the
generative revolution right so these models are now producing new images that you've never seen before they're
producing new text that you've never seen before they can generate pieces of music and that's because it's the flip
side of that coin the first stage is understanding and classifying if you like the second stage having done that
well enough you can then ask the AI to say given that you understand you know what a dog looks like now generate me a
dog with your idea of pink with your idea of yellow spots or whatever and that is an interpolation
it's a prediction of the space between two or three or four concepts and that's
what's produced this generative AI revolution in all of the modalities as we apply more computation to this
process so we're basically stacking much much larger AI models and we're stacking
much much larger data and the accuracy and the quality of these generative AIs gets
much much better so just to give you a sense of the trajectory we’re on with respect to computation
over the last 10 years every single year the amount of compute
that we have used for the cutting-edge AI models has grown by 10x so 10x 10x
10x 10x 10 times in a row now that is unprecedented in technology history
nowhere else have we seen a trajectory anything like that over the next five
years we'll add probably three or four orders of magnitude basically another thousand times the compute that you see
used today to produce GPT-4 or the chat model that you might interact with and
it's really important to understand that that might sound like a technical detail but it's important to grasp because when
people talk about GPT-3 or GPT-3.5 or GPT-4 the distance between those models is in
fact 10 times the compute it's not incremental it's exponential and so the
difference between GPT-4 and GPT-2 is in fact a hundred times worth of compute
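The generational arithmetic here is worth pinning down. A tiny illustrative sketch (the 10x-per-generation step is the rule of thumb described in the conversation, not an official figure; the function name is ours):

```python
# Sketch of the compute scaling described above: each frontier model
# generation is treated as using roughly 10x the training compute of
# the previous one, so n generational steps apart means 10**n more compute.

def compute_multiple(steps: int) -> int:
    """Rough compute ratio between models `steps` generations apart."""
    return 10 ** steps

# GPT-3 -> GPT-4 is one step: ~10x the compute.
assert compute_multiple(1) == 10

# GPT-2 -> GPT-4 is two steps: ~100x, as stated above.
assert compute_multiple(2) == 100

# Ten consecutive yearly 10x jumps over the last decade:
assert compute_multiple(10) == 10_000_000_000  # ten orders of magnitude

# "Three or four orders of magnitude" over the next five years:
print(compute_multiple(3), compute_multiple(4))  # 1000 10000
```

The point of the exponent is that model "versions" a couple of steps apart differ by factors of hundreds, not by incremental percentages.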
the largest compute infrastructures in the world basically to learn all the relationships between all the inputs of
all of this raw data what does that mean what does that enable them to do in the next
phase we’ll go from being able to perfectly generate so speech will be perfect video generation
will be perfect image generation will be perfect language generation will be perfect to now being able to plan across
multiple time horizons so at the moment you can only say to a model give me you know a poem in the style of X give
me a new image that matches these two styles it's a sort of one-shot prediction next you'll be able to say
generate me a new product right in order to do that you would need to have that
ai go off and do research to you know look at the market and see what was potentially going to sell what are
people talking about at the moment it would then need to generate a new image of what that product might look like
compared to other images so that it was different and unique it would then need to go and contact and manufacturer and
say Here’s the blueprint this is what I want you to make it might negotiate with that manufacturer to get the best
possible price and then go and Market it and sell it those are the capabilities that are going to arrive you know
approximately in the next five years it won't be able to do each of those autonomously or independently there will
be no autonomy in that system but certainly those individual tasks
are likely to emerge so that means that presumably the process of innovation
becomes much much more efficient the process of managing things becomes more efficient but what does that
mean and let's stick with the upside for the moment I promise you we'll get to all the downsides of which there are many but what is
that going to enable us to do I mean people talk about AI will help us solve
climate change AI will lead to tremendous you know improvements in healthcare just talk us through what
some of those things might be so we can see the upside intelligence has been the engine of
creation everything that you see around you here is the product of us interacting with some environment to
make a more efficient a cheaper table for example or a new iPad
if you look back at history you know today we're able to produce a kilo of grain with just two
percent of the labor that was required to produce that same kilo of grain 100 years ago so the trajectory of
Technologies and scientific invention in general means that things are getting
cheaper and easier to make and that means huge productivity gains right the
insights the intelligence that goes into all of the improvements in agriculture which give us more with less are the
same tools that we’re now inventing with respect to intelligence so for example to stay on the theme of Agriculture it
should mean that we’re able to produce new crops that are drought resistant that are pest resistant that are in
general more resilient we should be able to to tackle for example climate change and we’ve seen many applications of AI
where we’re optimizing existing Industrial Systems we’re taking the same big cooling infrastructure for example
and we're making it much more efficient again we're doing more with less so in every area from healthcare to
education to transportation we're very likely over the next two to three decades to see massive efficiencies
and invention think of it as the interpolation I described with respect to the images the AI is
guessing the space between the dog the pink color and the yellow spots it's
imagining something it’s never seen before and that’s exactly what we want from AI we want to discover new
knowledge we want it to invent new types of science new solutions to problems and I think that’s really what we’re likely
to get I believe that if we can get that right we're headed towards
an era of radical abundance imagine every great scientist every entrepreneur
you know every person having the best possible aide you know scientific advisor
research assistant chief of staff tutor coach confidant each of those roles that
are today the you know exclusive preserve of the wealthy and the educated and those of us who live in peaceful
civilized societies those roles those capabilities that intelligence is going
to be widely available to everybody in the world just as today no matter
whether you are a you know a millionaire or you earn a regular salary we all get
exactly the same access to the best smartphone and the best laptop that’s an
incredibly meritocratic story which we kind of have to internalize you know the best hardware in the world no matter how
rich you are is available to at least the top 2 billion people and I think that is going to be
the story that we see with respect to intelligence right enough upbeat stuff we've had 20 minutes of
upbeat which is which is uh but you didn't call your
book you know Becoming Nirvana you called it The Coming Wave and I'm told that the original
title was going to be Containment Is Not Possible I'm glad you didn't call it that it wouldn't have sold so well uh
but explain the argument you're making it's not actually that Nirvana is around the
corner in fact it's a much much more subtle argument than that so tell us what the downsides are and what
the book's focus on containment is about yeah I mean I think I'm pretty wide-eyed
and honest about the potential risks and you know if you take the
trajectory that I predicted that more powerful models are going to get smaller
cheaper and easier to use which is the history of the transistor which is the history of every technology of
value basically that we've created in the world if it's useful then it tends to get cheaper and therefore it spreads
far and wide and in general so far that has delivered immense benefits to everybody in the world and it’s
something to be celebrated proliferation so far has been a really really good
thing but the flip side is that if these are really powerful tools
they could ultimately empower a vast array of bad actors to destabilize our
world you know everybody has an agenda has a set of political beliefs religious beliefs cultural ideas and they’re now
going to have an easier time of advocating for it you know so the extreme end of this spectrum you know
there are certain aspects of these models which provide really good coaching on how to manufacture
biological and chemical weapons it’s one of the capabilities that all of us developing large language models over
the last year have observed they’ve been trained on all of the data on the internet and much of that information
contains potentially harmful things that’s a relatively easy thing to control and take out of the model at
least when you’re using a model that is manufactured by one of the big companies they want to abide by the law they don’t
want to cause harm so we basically exclude them from the training data and we prevent those capabilities
the challenge that we have is that everybody wants to get access to these models and so they're widely available
in open source you know you can actually download the code to run albeit smaller versions of
Pi or ChatGPT for no cost and if that trajectory
continues over 10 years you get much much more powerful models that are much smaller and more you know transferable
and you know people then who want to use them to cause harm have an easier time of it I think that’s a really important
distinction that there are you know the leading companies you know Google DeepMind
you know OpenAI who have the biggest models now and there are a relatively small number of these and they are
bigger and more powerful but not far behind are a whole bunch of Open Source
ones and so the question then for your containment is can you prevent the open source ones
which will potentially be available to the you know angry teenager in his garage or her garage can those ones be
controlled or not okay the darker side of my prediction is
that these are fundamentally ideas you know they're intellectual property
it’s knowledge and know-how an algorithm is something that can largely be expressed on three sheets of paper and
actually is readily understandable to most people it’s a little bit abstract but it you can wrap your head around it
the implementation mechanism you know requires access to vast amounts of compute today but if in time you
remove that constraint and you can actually run it on a phone which you ultimately will be able to do in a
decade then that's where the containment challenge you know comes into view and I think that there are also risks on the
centralization question right this is clearly going to confer power on those who are building these
models and running them you know my own company included Google and the other big Tech providers so we don’t eliminate
risk simply by addressing the open source Community we also have to figure out what the relationship is between
these super powerful tech companies that have lots of resources and the nation state itself which is ultimately
responsible for holding us accountable so let’s go through some of the most sort of frequently cited risks or indeed
negative consequences and the one that you hear a lot is as AIs become you
know equivalent to or exceed human intelligence across a wide range of tasks will there be any jobs for any of us you know why would you employ a human
if you could have an AI so history suggests that that’s bunkum you know we’ve never yet run out of jobs and you
know being a good paid-up economist I think it's the lump of labor fallacy but lots and lots and lots of people say
this what's going to happen to the jobs where are you on that well let's just describe the lump of labor fallacy because I think it's important to sit
with that because that is the historical trend so far what it basically means is when we automate things and we make
things more efficient we create more time for people to invent new things and we create more health and wealth and
that in itself creates more demand and then we end up creating new goods and services to satisfy that demand and so
we’ll continually just keep creating new jobs and roles and you can see that in the last couple decades there are many
many roles that couldn't even have been conceived of 30 years ago from app
designer all the way through to the present-day prompt engineer of a large language model so that's one trajectory
that is likely I think the question about what happens with jobs depends on your time horizon
so over the next two decades I think it's highly unlikely that we will see structural disemployment where
people want to contribute their labor to the market and they just can’t compete I think that’s pretty unlikely there’s
certainly no evidence of it in the statistics today beyond that I do think it’s possible
that many people won’t be able to even with an AI produce things that are of
sufficient value that the market wants them and their AI jointly in the system I mean AIs are increasingly more
accurate than humans they are more reliable they can work 24/7 they're you
know more stable and so you know I think that that's definitely a risk and I think that we should lean into that
and be honest with ourselves that that is actually maybe an interesting and
important destination I mean work isn’t the goal of society sometimes I think we’ve just forgotten that actually
society and life and civilization is about well-being and peace and
prosperity it’s about creating more efficient ways to keep us productive and healthy many people you know probably in
this room and including us enjoy our work we love our work and we’re lucky enough and we’re privileged enough to
have the opportunity to do exactly the work that we want I think it’s super important to remember that many many
people don’t have that luxury and many people do jobs that they would never do if they didn’t have to work and so to me
the goal of society is a quest for radical abundance how can we create more with radically less and liberate people
from the obligation to work and that means that we have to figure out the question of redistribution and obviously
that is an incredibly hard one and obviously I address it in the book but that is to focus on what taxation looks like
in this new regime how do we capture the value that is created and make sure that it's actually converted into dollars
rather than just a sort of value add to GDP okay role of government one argument you
hear um
is that AI and the rise of AI actually makes the functioning of democracy ever harder
we’re already seeing lots of concerns about you know deep fakes wrecking the
2024 elections four billion people live in countries that will have elections next year people are worrying about 2024
never mind 2028 or 2034 and um Mustafa and I just had a conversation
with Yuval Harari who is as pessimistic as you are thoughtfully optimistic who basically
said it was the end of democracy um uh I'm not sure that either of us agreed but what is the consequence for
Liberal democracy in the coming decades in this world of AI I think the first thing to say is that the state we’re in
is pretty bleak I mean trust in governments and in politicians and the
political process is as low as it has ever been um you know in fact 35 percent of
people interviewed in a Pew study in the US think that army rule would be a good thing so we're already in a very
fragile and anxious state and I think that you know to sort of empathize
with Yuval for a moment the argument would be that you know these new technologies allow us to produce new
forms of synthetic media that are persuasive and manipulative that are highly personalized and they exacerbate
underlying fears right so I think that is a real risk we have to accept that
it's going to be much easier and cheaper to produce fake news right we have an appetite an insatiable addictive
dopamine hitting appetite for untruth you know it sells quicker it spreads
faster and that’s a foundational question that we have to address I’m not sure that it’s a new risk that AI
imposes it's something that AI and other technologies accelerate you know and
that’s the challenge of AI that is a good lens for understanding the impact that AI has in general it is going to
amplify the very best of us and it’s also going to amplify the very worst of us
and what about the fact that this is developing in a world which is geopolitically
split in a way that it hasn't been at least in the last couple of decades in the post-Cold War world so we have
the tensions between the US and China we have essentially a sort of race for Global dominance between these two
regimes in that kind of a world how can you achieve the sort of governance
structures that you write about in your book that are needed to try and you know perhaps prevent the most extreme
downsides of AI yeah I mean look I've been accused of being an optimist about it I've also been accused
of being a utopian about the interventions that we have to make and I think that unfortunately that’s just a
statement of fact what's required is good functioning governance and oversight I mean the companies are
open and willing to expose themselves to audit and to oversight and I think that
is a unique moment relative to past generations of tech CEOs and inventors
and creators across the board we're being very clear that the precautionary principle is probably
needed and that’s a moment when we have to go a little bit slower be a little bit more careful and maybe leave some of
the benefits on the tree for a moment before we pick that fruit in order to avoid harms I think that’s a pretty
novel you know setup as it is but it requires really good governance it
requires functioning democracies it requires good oversight I think that we do actually have that in Europe I think
that the EU AI act which has been in draft now for three and a half years is super thorough and very robust and
pretty sensible and so in general I’ve been you know a fan of it and kind of endorsing it but
people often say well if we get it right in the UK or if we get it right in Europe and the US what about China I
mean I hear this question over and over again what about China and I think that's a really dangerous frame
first it sort of depicts China as though China has this sort of like maniacal suicidal mission to at
all costs at any cost you know sort of take over the world and you know be the next dominant global power I mean so far
I don’t see any evidence of of that I mean you know I’m not ruling it out I’m not a you know sympathizer but I think
we should just be wide-eyed about the actions they’re actually taking at the moment they have a self-preservation
instinct just as we do and the more that we can appeal to that you know desire to
you know have their citizens benefit from economic interdependence and from
peace and prosperity and well-being we’re both aligned in those incentives I think the second thing is it’s dangerous
to sort of point a finger at you know China because actually we can’t just have a race to
the bottom on values we have to decide what we stand behind right you know I mean I'm a believer that we
shouldn’t have a large-scale state surveillance apparatus enabled by AI we shouldn’t do that just because China are
doing it we shouldn’t get into you know an arms race and take risks just because they’re taking those risks and that’s
difficult for some people to accept because you know they might be hyper pragmatic and you know I think that that
only leads to an inevitable self-fulfilling prophecy that we both end up taking terrible risks which are
unnecessary