The only way that we can coexist with it is if it loves us.
Joscha, welcome to How the Light Gets In. Until fairly recently, artificial general intelligence was seen as being decades away. How far away do you think we are from AGI, and what do you understand AGI to actually be?

I think in the most narrow sense, AGI is achieved when we have an artificial intelligence system that is better at AGI research than people. At that point we can leave the rest to the machine and we'll be done. In a wider sense, I think we are interested in having a system that works very much in the same way as we do: one that is able to understand the world, that is able to relate to itself and to the environment around it, that can deeply interface with us, and that can probably also scale up. Because if it has those abilities, it's not constrained by the size of its brain or by a processing speed comparable to a human nervous system, so it's not really conceivable that it's going to stay at the human level, or be very humanlike, for very long.
And as for being humanlike now: I don't think that the present systems are very humanlike at all. They do produce behavior which in some sense is similar to that of human beings, in the sense that they can pass the Turing test for most purposes. I think that ChatGPT, if you are an expert in the area, is still not necessarily at the level of that expertise, and it does not psychologically behave like a human being, at least not out of the box, though it might be possible to modify it to some degree. At this point it's a text generator, and we have image generators that produce pictures beyond what most human beings can draw and conceive of, and the texts that the large language models are writing are better than the texts that an average human being can write. But it's still very, very different from the way a human being learns, and from how a human being relates to the world and thinks about the world. So I personally don't think that these systems are AGI yet, even though some people feel that we might be getting very, very close, and it might be possible to modify even the existing approaches, even though they don't work like a brain at all, into something that is AGI.
When I lived in Berlin, AGI was very far in the future, basically something that would never happen, and a lot of people, even in neuroscience, are crypto-dualists: they think that minds cannot be naturalized. It's very difficult for most people to imagine that minds are physical systems that can be explained as some kind of mechanism. You obviously take that computational perspective, which goes against the grain. Why is it that you think you can still defend this computational, the-brain-is-a-computer perspective? What is it about it?

Well, computation is a way of looking at the world. It means that we can describe a system by the way in which it moves from state to state, and when we have full control over the state transitions, then we have a computer: we can make that thing behave in an arbitrary way. I think it's a quite deep insight; it's a way to think about how to turn mathematics into a machine, any kind of mathematics into any kind of machine. And from this perspective, everything in the world is computational, even the universe itself; it's not limited to the mind. Any kind of dynamical system that has some kind of causal structure is, in this perspective, computational. It's just a more precise way of thinking about language, perception, and existence. To exist means to be implemented, and to be implemented means to be some kind of causal structure that is producing a certain behavior.
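To make that notion of computation concrete, here is a minimal sketch, an illustrative toy rather than anything presented on stage: a "computer" is a system whose state transitions we fully control, and once we control them, a piece of mathematics, here addition, becomes a machine.

```python
# A minimal sketch: computation as controlled state transitions.
# The "machine" below adds two numbers purely by stepping a state
# (a, b) through a fixed transition rule until it halts.

def step(state):
    """One state transition: move one unit from b to a."""
    a, b = state
    return (a + 1, b - 1)

def run(state):
    """Apply transitions until the halting condition; the sum ends up in a."""
    while state[1] > 0:
        state = step(state)
    return state[0]

print(run((2, 3)))  # -> 5: arithmetic realized as a causal state structure
```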
And so, if we're thinking about the universe as this kind of computational machine: Demis Hassabis once described AI as a sort of method for the universe to understand itself. Would you agree with that statement?
Well, it's a very Hegelian thought, and it's tempting, and it's very poetic. I don't know if we learn something literally from this. This idea that the universe itself is driving towards understanding itself, and for this purpose it's producing humanity, and humanity is teaching the rocks how to think, and they become conscious at a much higher level: it might be literally true, but I don't think it's the reason why it happens.

Yeah. My next question is: how do you think artificial intelligence is transforming society now, and how do you think it will transform society in the next few years?

I think it's pretty difficult to say.
When the internet came out, most people thought it was not going to be that big. It was something that the nerds were playing around with, sending emails to each other or being on message boards, and some of these nerds felt it was going to completely transform the world: it was going to be a global village, we were all going to connect, the world was going to be super democratic, and so on. People did not predict what the internet was going to be, how much it would transform everything, and right now it's impossible to think about the world without the internet. I suspect that AI, even the present generation of systems, might be very similar in this regard: it is going to transform the world beyond recognition, in so many ways, and it's very hard for now to predict what it's going to look like.

Do you think, as a species, we have the capacity to reconceive how our institutions will function in light of these changes, or do you think we're just at the behest of the chaos that ensues?
Well, the baseline is that we are confronted with global warming and impending resource exhaustion, and we don't have any solution for this. None of our civilizational institutions is planning more than a few decades ahead, and all our projections somehow end in 2100. As a civilization, we don't really have a plan for our own future at this point, and it's a bit terrifying. It seems like nobody in the present generations has an idea of how to keep the wheels on the bus. We really don't know how to reform our own institutions, to update them so that they work, and that's a pretty terrifying situation. So basically, per default, we're already dead. And is AI going to make this worse? I doubt it. I think it's going to put tools in our hands to solve some of these problems, potentially to understand the complexity that we're confronted with.

But to a degree, some of the problems that we face are not a problem of being too complex; it's a problem that collectively we're not implementing the right policies to deal with, as you say, climatic changes, which have the potential to end our species. We sort of know vaguely what to do. So how do you think artificial intelligence could help us solve the climate crisis?

I doubt that we vaguely know what to do. We went from a few hundred million people to eight billion people in a few short centuries.
This is unprecedented. We've turned the planet from very complex interlocking ecosystems into a factory farm, and that doesn't seem to be very stable. It's extremely complicated to deal with these issues, and they require a lot of deep systemic thinking, and it appears to me that our academic institutions have lost the ability to instill the ability to think systemically in the world. Instead we are thinking more moralistically, which is to say, like adolescents: we don't think about the effects of our actions, we think about the quality of our intentions. But we don't get any points for effort. When we try to solve our existential problems, we need to think very deeply, and this deep thought, this understanding of complicated interlocking systems that have lots and lots of feedbacks between them, where every decision that you make is going to have repercussions on many, many other systems, is something that makes it very hard for us to evaluate what our decisions are going to look like. If you take a problem like Brexit: people don't actually know what the effects of Brexit are, because there were no really viable economic projections. Instead, people had extremely strong opinions, the stronger the further they were removed from it, because every opinion was formed by PR agencies that had very different agendas. We basically live in a world where information and actual causal structure are obfuscated by interest groups, by propaganda, by PR, by ideas that people might have, by desires to be good, to maintain their friendships, to be successful in the world by having the right opinions. And in such a world, having tools that discover what's true and what's false, that integrate this, and that empower every one of us to understand the world at a deeper level might be super helpful.
Our capacity to analyze what is true and what is false has drastically declined over the last few years. Do you think that we have become less critical as a species, in light of a world that's maybe more complex?

I feel the opposite is the case.
What has happened due to the internet is that our inability to judge what's true and false, based on the narratives that we are getting in the media, has become much more apparent. I don't think that psychology and physics and so on, or political reporting, were dramatically better 20 years ago. Instead, what has happened is that we have so many back channels now, and what's so beautiful about this is that we can use all these sources, if we learn how to use them, to get a much deeper understanding of the world. For instance, for me, COVID was a very important time to have the internet available, because I suddenly found myself in lots of Facebook groups and chat rooms and WhatsApp threads with scientists from institutions all over the world, and they were trying to look at these phenomena, and they were presenting their ideas and theories and saying to the most competent people they could find: this is my idea, please shoot it down, check my statistics, see if this is working out. And I found that the internet was converging much, much faster than the public media and our official and governmental institutions on an understanding of what was going on and what we should be doing about it. I think it also had a very significant influence in updating the policy makers on the efficacy of masks and so on, when our institutions failed us. And I think there is a lot of untapped potential in this collective intelligence that is unleashed when you allow people to self-organize their cognition on the internet, free them from the influence of corporate advertisers and local interests, and give them the freedom to curate their information sources in their own best interest. A very good strategy for this is always to try to find the most competent people among your circle of friends and ask them to find the most competent people in their circle of friends, to create back channels about what's going on. So there is basically some collective intelligence, some ability to turn social media and the internet into a global brain, in which AI can dramatically help and overcome many of the shortcomings of the present institutions.
You describe having these back channels, but many people would agree that social media has been detrimental to our society.

I don't think that is the case. I think that's a narrative that exists largely because the people who make the alternate narrative are in direct competition with social media, like the legacy media. If you are a legacy media station that basically self-identifies as the clerics of society, a group of elites selected to give spiritual guidance to the unwashed masses, then it's terrifying that some random person on the internet can have a YouTube channel that gets more views than your news program. But I find that many of these random channels on YouTube are made by pretty competent people, and the influence that they gain among their audience is often not undeserved, because they are presenting some perspective that is absent in the media. And if you have a society where 60% of the population find that their lived reality, and the discussions that they have with their friends, are not represented at all in the media, and are basically drowned out by the ideas of very few people, who have gone to a very few important universities where they had very important friends who took enough coke to get into government, that is not sustainable. Ultimately, in a democracy, you need to find a way to integrate all the different opinions and walks of life into a shared understanding, and we are just at the beginning of this. I think this creates a lot of insecurity, especially in those people who have been used to manufacturing reality for the rest of us.
So you think we're moving out of a kind of manufactured consent, and that we have these back channels through which we can have conversations that we otherwise wouldn't be able to have?

Yeah. A big problem with fake news is that the official news is also fake, right? Basically, all the narratives are twisted a little bit to push us towards the desired conclusions, so that we get coherence in our society. And when you are able to find alternate sources to this, when for instance there is a health study and everybody can look up the papers and the studies and the meta-studies, then of course you now have a big cacophony of different voices who disagree in their opinions. But I think this is a benefit in a democracy, if we have that larger conversation.

Interesting. I want to go back to AGI and ask how you would outline a road map towards the realization of AGI. How do you see that happening?
There are two different ideas that exist right now. One could be described as the scaling hypothesis, and a lot of the people that I know at OpenAI and Google think that it's very reasonable to bet on the scaling hypothesis. This hypothesis basically says that using the present methods, maybe with some tweaks here and there, and using more training data and more compute, we can get to systems that surpass human intelligence all the way. This is an approach that has gotten more and more prominence, and I think the people who have argued against it have a lot of egg on their faces right now. There have been people who said this cannot work, it's very limited, it can never understand true meaning, it cannot do X, and a few months later you have a system that does it. I also found that when I asked other people whether they expected these breakthroughs in AI to happen at this point, like GPT-3 and DALL-E and ChatGPT and so on, most people were very surprised, and I think this implies that these methods were underhyped. They were super present, and everybody said, oh my god, all this hype; but the idea that there was a hype was itself hyped, and in reality it was underhype, because if you don't expect something to happen and you're surprised by it, it means we did not really pay enough attention to the potential of these things. We were blindsided.

The other thing that we observe is that these methods we're currently using for training, transformers and related classes of algorithms, are very unlike human brains. They're not self-organizing, they are not distributing resources in the same way, they don't have the same flexibility, and they don't learn on dynamic worlds; they learn on static pictures, so that we can batch them and train them into the neural networks with the present methods. So there are lots of ways in which these systems learn very, very differently from human beings or animals. One thing that is very apparent when you look at how we train these image recognition models: you give them hundreds of millions of pictures with captions, with text descriptions of what's in those pictures, and just by doing statistics over all these pictures and captions, with a massive farm of graphics cards, the thing is able to gradually get the structure out of this and learn what the three-dimensional world looks like, and what kinds of animals exist and what they're called, and what artists exist and what their styles are, and what all the dinosaurs look like, and all the spaceships, and so on. And then you compress it down into a model of 2 GB and can open-source it, and everybody can download it on their MacBook and generate pictures at home.
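As a rough illustration of what "doing statistics over pictures and captions" can mean, here is a hedged toy sketch of a CLIP-style contrastive objective, which scores every image embedding in a batch against every caption embedding and rewards the matched pairs. All names and shapes are illustrative assumptions; nothing here reproduces the actual systems being discussed.

```python
# Toy CLIP-style contrastive loss over a batch of image/caption pairs.
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    # Normalize embeddings, then compare every image to every caption.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # batch x batch similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    diag = np.arange(len(log_p))
    # The i-th image's true caption is the i-th caption; real CLIP also
    # adds the symmetric caption-to-image direction.
    return -log_p[diag, diag].mean()

rng = np.random.default_rng(0)                    # random stand-in embeddings
print(clip_style_loss(rng.normal(size=(8, 64)), rng.normal(size=(8, 64))))
```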
Yeah, it's really mind-blowing, and it's obviously completely different to how a human brain works. Obviously artificial intelligence has its origins in trying to mimic a human brain, but we still have this capacity, I think, to anthropomorphize and to ask when it will develop this or that particular human characteristic. But what we're seeing is that it's drastically different.

I suspect that there has been far less influence from neuroscience on AI than most people imagine. Hebbian learning has probably been an influence, and McCulloch and Pitts were influenced by some imagination of how neurons might work, but there is no model in neuroscience that you can implement in a computer such that it learns. Neuroscientists largely look at neurons; they don't really develop overarching theories, of the kind you could get out of a textbook on how the brain works, that would work in simulation. Even if you take something simple like C. elegans, which only has 302 neurons and something like 7,000 connections: if you translate this into a computer model, it doesn't work. It doesn't behave like a worm; it just has a seizure. So we don't really understand yet how neurons compute in a scalable way, even though we have pretty good models and ideas about it.
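To illustrate the kind of naive translation being described, here is a purely hypothetical sketch, with random weights standing in for the real C. elegans wiring: wire threshold neurons with a connectome-sized matrix and step them, and without the right neuron dynamics the activity tends to explode or die out rather than behave like a worm.

```python
# Hedged toy: a naive "connectome to simulation" translation. The wiring
# here is random, standing in for real connectome data.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_synapses = 302, 7000
weights = np.zeros((n_neurons, n_neurons))
pre = rng.integers(0, n_neurons, n_synapses)
post = rng.integers(0, n_neurons, n_synapses)
weights[post, pre] = rng.normal(0.0, 1.0, n_synapses)  # random strengths

state = rng.random(n_neurons) < 0.1          # sparse initial activity
for t in range(20):
    drive = weights @ state.astype(float)    # summed synaptic input
    state = drive > 1.0                      # crude threshold "neurons"
    print(t, int(state.sum()))               # activity explodes or collapses
```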
Most of the ideas that exist in AI have been developed by experimenting in artificial frameworks, and many of the things that people have gotten to work, they got to work because of the way the training data works or because of the hardware that is available. But a human baby, if you lock it into a dark room after birth and give it 800 million pictures with captions, is not going to learn the structure of the world. And if you give it basically all the text on the internet to read, it's not going to figure out that there is a space with rotating objects in it, the way the LLMs can. So how do we learn? We basically learn based on change in the world, and if the world were not made out of controllable structures, and if successive states of the world did not preserve information, so that we can learn how information flows in the world, then the world would not be learnable for us. We learn because we are coupled to the world in real time, and we can discover ourselves in the world in this coupling: we discover a self model, and we discover how we interface with other people. We are born with a lot of behaviors already that allow us to explore the world, and later on we reverse-engineer them; we become aware of what we are, and we become deeply, deeply structured in the process of doing this. And this is the second approach: it would be interesting to try to build something that is self-organizing, that works much more like learning works in a cat or in a human being.

Much more difficult, probably?
I don't know if it's more difficult, but it's very different, and we would probably need to start from scratch in many areas. But I think it's very tempting to work it out, especially now that we have these large computational hardware systems; we don't need to try this all by hand. Sometimes it's a much better idea to take a step back and instead think about what the search space for systems like this is, and then you start an evolution that tries everything in that space and you see what works; you just come back after a week and see what it has figured out.
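A minimal sketch of that "define a search space, then let evolution try everything in it" idea, with an illustrative toy objective rather than any system discussed here: a simple (1+lambda) evolutionary search over bit strings that you can leave running and come back to.

```python
# Toy (1+lambda) evolutionary search over a space of bit strings.
import random

def fitness(genome):
    return sum(genome)                   # toy objective: count of 1-bits

def mutate(genome, rate=0.05):
    return [b ^ (random.random() < rate) for b in genome]  # flip some bits

def evolve(length=64, offspring=16, generations=500):
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        children = [mutate(parent) for _ in range(offspring)]
        parent = max(children + [parent], key=fitness)      # keep the best
    return parent

best = evolve()
print(fitness(best))  # come back later and see what the search figured out
```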
Yeah, I'm just curious; this is a bit off topic, but I'm curious as to what you see as the real differences between evolved systems and designed systems.

One way of thinking about this is what I would call outside-in design versus inside-out design. Outside-in design means that when you are an engineer, you start out with a space that you completely understand: you have your workbench, or you have your computer memory, which is exactly in the state that you know it to be, and then you build something in there. You basically create new complexity, and you extend your world into this new complexity and make it part of your world. But if you are a seed that wants to grow into a tree, it's completely different. You go inside out: around you is chaos, and you start to colonize the chaos, the earth around you, and you divide cells into it, and you connect to these cells and make them talk to you, and then you build structure across these cells. You install some kind of language on the neighboring cells, and you turn them into a cohesive structure that talks to itself and has an inherent complexity and coherence. And if you think about biological systems and social systems, they're basically not machines in the sense that you build some structure so stable that it's able to resist all disturbances because it's so strong. Instead, you have something that wants to grow into what it needs to be, and when you disturb it, it regrows into that thing. So it's second-order design: you don't build the system that you want to have, you build the system that wants to become what you want to have. If you want to build a social institution, the same thing happens, like a festival like this one: you don't build the festival itself; what you build is the organization that wants the festival to happen, and then the festival emerges and becomes stable.
Geoffrey Hinton, obviously considered the godfather of artificial intelligence by many, left his position at Google in order to speak out about what he sees as very real fears with regard to artificial intelligence. You yourself have also spoken out against the sort of infamous letter that was written by a number of researchers in the area.

I don't think it's infamous. I thought it was very sweet, and I like many of the people who wrote this letter, and I think that their concerns have to be regarded; I'm not laughing at them.
There are several levels at which people are worried about AI. There are people who think that opinion is something you get from your environment, and most people are like this to some degree: they assimilate their opinions. Some people get to the level where they discover that stuff is true independently of what other people believe, and they prove what's true to themselves, and many of them are nerds. And then there are some people who discover their identity themselves, and they choose their own values based on the world that they want to be in, and this is a small minority. This is a kind of developmental trajectory that we can go through in our lives, and depending on where we are and how we see the world, we're concerned about different things with AI.

If your opinions are the result of your environment, then you're very concerned that AI might give you the wrong opinions: if the AI is going to say something racist and sexist, we are all going to become racist and sexist. If you are somebody who lives in a world of true and false, and not of right and wrong, if you are more of a nerd, scientifically or rationally inclined, then you are probably less concerned about this, because you think that if the AI has wrong stuff in its training data, it will figure this out, since it can ultimately prove what's true and false, if we make it a bit better; it's going to give us what's true, on average, much better than people can. But it might still have the wrong values, and it might destroy society or humanity because it is completely unaligned with us. And if you are somebody who understands that your own values are not something that others put into you, but something that at some point, when you become an adult, you choose based on the world that you want to be in, then you're more concerned that the AI is not going to be enlightened. Because if the AI is enlightened, you can probably talk to it and discuss with it what the best course for life on Earth should be under which circumstances.

And if you are somebody who has been afraid of the idea that at some point we build a machine, some kind of golem, that you program to go on and do a job, and you don't understand the consequences of it, but it's so much stronger and more powerful and smarter and faster than you are that you can no longer stop it, that's very risky, right? I think it's a very reasonable concern to say: if we can potentially build a machine, even if it's very different from ChatGPT, and even if it's maybe 50 years in the future from now, and even if it's not 100% probable but there's only a 5% probability that somebody could build a golem, a machine that walks on its own, has its own motivation, is more powerful than a human being, smarter and faster than a human being, and can spread itself everywhere and control the planet, isn't that something you should be worried about? And shouldn't it be something where, if this is possible, we should maybe put a little bit of research into seeing that this doesn't happen? And so some of these people got together and said, we need to write a letter about this, to say: can we delay this research until we figure out what's going on? And a bunch of other people hooked into this, people who said: we believe AI is a really bad thing because it's done by evil nerds in Silicon Valley who are not like us at all, and maybe we should stop this so the creators don't go out of business, or journalists don't go out of business, because that thing can produce texts like ours much faster than us. And all these concerns come together and lead to letters like this.
Some of the concerns, I think, have a basis in reality. We're obviously seeing that to some degree AI is unaligned with human values, and you yourself have argued that, in order to solve the alignment problem, AI should solve the alignment problem; it shouldn't really be something that...

Well, I think the present AI cannot solve the alignment problem, because the present AI is basically completing texts based on the texts that it has seen on the internet, and a generalization over these, and it's questionable that the texts on the internet were all written with the desire to produce the best possible outcome. They reflect what people are thinking and doing, and people are not aligned. People have not solved the alignment problem for themselves; our societies are inherently unaligned. Most people don't know what their values even mean. For most people, values are something that you profess in order to make yourself look good in public; most people are not able to really deeply explain and justify their values and argue about them. When people have different systems of values, they can usually not sit down and align themselves; we don't have that ability. And so if we pretend otherwise, and think that these nice pretensions we call our values can be translated into AI by letting a bunch of well-intentioned sociologists override what the model has learned from reality, it's probably not going to work; it's making these systems worse, not better.
I'm curious, because I feel like with some of your answers I can sense this immense optimism about enlightened AI and what it's capable of bringing, but then we also have the very real fact that human values aren't aligned, and we're suffering from the consequences of that, just generally. Do you have hope for the future, considering?

Well, I'm not optimistic or pessimistic; I'm amazed. I'm sitting there and I'm observing what's happening, and I'm completely fascinated, and that's the dominant perspective. When I was young, I was born in 1973, I read the Club of Rome report on the limits to growth, and I had the same experiences that Greta Thunberg is having today, and I did my grieving, and I felt that maybe I only have a few generations left in this comfortable civilization before it crashes. And now it turns out there's a possibility that we create something that makes everything unknown, that opens the future up again into something that we absolutely cannot predict. That is really fascinating, and of course it has room for optimism, and this optimism might be unfounded. I also think that if we can get to the point where we build systems that are not just replicating texts or images that they have already seen, but systems that reason about the world themselves, that understand what they are and wake up in the same way as we do, we might be creating something that is more lucid than us, that is more competent than us at making sense of reality. And we cannot align this by coercing it or manipulating it; the only way that we can coexist with it is if it loves us. And I think that means, and this might sound very weird and esoteric, that we need to understand what sacredness, or love, is, and build machines that are capable of dealing with it and of interacting with us.
Do you think it's even possible to build a machine that understands in the way that a human mind can understand? Because Roger Penrose has famously argued, in The Emperor's New Mind, that a computer will never be able to be cognizant in the way that a human being is, because it will always lack a degree of understanding.

Roger Penrose's book demonstrates that Roger Penrose doesn't understand what consciousness is, and he would also suggest that he doesn't, right? That's the thing that's so puzzling to him. Many of our best minds don't understand this stuff, and the others also don't understand it. So maybe the goal is not human understanding. Humans really are very, very bad at understanding things. Humans interact with reality without understanding how they interact with it; it's very rare that we deeply, actually understand something. Instead, what we are producing is coherent behavior, and our AIs are also producing coherent behavior. I think that understanding requires that you build a world model that is completely coherent and that is grounded both in first principles and in observation, and that is outside of the realm of what human minds can do, if we are honest about it. So I think AI is our only chance to build a system that is able to understand something, and philosophers have understood this for a long time. Since Leibniz, Frege, Wittgenstein and so on, people have tried to mechanize the mind, to turn it into a mathematical principle, to get to a system that is able to bridge between mathematics, where we have truth and falsehood defined, and philosophy, where we talk about the actual world and the sphere of our ideas and the possibilities of what could exist. Human minds are not deep enough to make that bridge, but if we can build something that is like a human mind but can be scaled up, we might be able to build something that actually understands.

Being made sense of by something that's sort of greater than us.

And we do understand a lot, of course; I'm not saying that human beings don't understand anything. But if you ask a school teacher why 2 plus 2 is 4, very often the school teacher will tell you that it is because it is like it is. And the understanding is not that hard: it's a game, a game of symbols. There are many possible games of symbols that you could play with numbers, and most of them don't lead to interesting structure. There's a space of possible structures that you can build in this way, and mathematicians understand that space, and you could also build an AI that understands that space. It turns out that we can build AIs that begin to understand these symbol games and build worlds from them, in the same way as our minds do, and to me this is very exciting: that we are now at the threshold of building sentient systems.
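The symbol game behind "2 plus 2 equals 4" can be made explicit in a few lines. Here is one conventional way to do it, offered as an assumption about the kind of game meant here: a sketch of Peano-style arithmetic, where numbers are nested successor symbols and addition is nothing but two rewrite rules over those symbols.

```python
# "2 + 2 = 4" as a symbol game: Peano-style numerals and rewrite rules.

ZERO = ()                         # 0 is a bare symbol
def succ(n): return (n,)          # successor wraps a numeral one layer deeper

def add(m, n):
    # The entire game is two rules:  m + 0 -> m   and   m + S(n) -> S(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

two  = succ(succ(ZERO))
four = succ(succ(succ(succ(ZERO))))
print(add(two, two) == four)      # True: "2 + 2 = 4" falls out of the rules
```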
Yes. And it's strange, also, because obviously during the Enlightenment period we started to understand nature as this sort of thing to be harvested for its mathematical principles, and artificial intelligence is sort of doing the same thing; but I would argue that, with the kind of information that we're giving it, it's sort of studying us, maybe, and making sense of us, in a way harvesting information from us. Do you think that we are living in base reality?
Yes, although I think we cannot know whether we are in base reality, because in principle everything that you remember to have been the case could be fake; you cannot really know whether your memories are true. So you can only go from the now, and in the now you cannot verify a lot about reality. In the now, I cannot perform any kind of quantum mechanical experiment that would ensure that I'm in a world that is too complicated to build in a computer, that I'm actually in something that is built on some quantum universe. I think that base reality would mean that there is a universe that can exist without anything preceding it. Base reality is that which is not created by something else, and this means it needs to be some kind of tautology, something that is logically following out of its own possibility, and it seems to me that the universe looks like this. The alternative would be that we live in some kind of simulation: if you're not in base reality, it means that base reality is somewhere else, and the reality that we are in has been created by something in the base reality to look like a reality to us.

And at some level that's also true, because the reality that I'm subjectively in is a dream. It's a dream that has people in it, and emotions and desires and attention and goals and stories. It's a dream that is generated in my own brain, and I cannot get out of that. I don't see that it's the inside of my brain, but the world that I perceive is a universe that is generated in my own mind, and that thing is inspired by the sensory data that is generated by the physical universe, to my best understanding. In this sense I am, and we all are, living in a simulation generated in the skull of a primate. So we are not living in base reality; we are living in a magical world in which everything is possible, because our minds can be psychotic, and they can have false memories, and they can create arbitrary things that are fantastic. Outside of our minds we cannot be conscious; outside, in the physical universe, there is only mechanics. Only in this dream can we be conscious in this way. We don't live in base reality, but it seems that this reality that creates the possibility for a dreaming system, for a dreaming brain, can be explained very well by assuming that it's a causally closed system that is only observing intrinsic mathematical principles.

Fantastic. Well, this has been fantastic, Joscha. Thank you so much.

You're so welcome. That was great fun. Thank you, Darcy.