“They understand just like we do.” – Prof. Geoffrey Hinton.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

So I guess I'd like to make one comment about what Sylvia just said, which is that some politicians don't believe in institutions, because if those institutions had functioned properly they'd already be in jail. I don't want to mention any names.

So I've got ten minutes, and I wanted to basically say one thing. If you take a problem like climate change, the first thing you have to do is convince people that carbon dioxide produced by people is what's causing climate change. Until you've done that, you can't have a sensible policy. Even after you've done that you may not get a sensible policy; people may still subsidize oil companies and things. But that's a first step.

Now, I've been talking about the existential threat of AI. This is a longer-term threat. There are many, many short-term threats which are urgent, like cyber attacks and loss of jobs and pandemics, and they go on and on, fake videos. But there's this longer-term existential threat that we will create things more intelligent than ourselves and they will take over. Many people don't take that seriously, and one of the reasons they don't take it seriously is that they don't think the current AI systems we have really understand. So there's a group of people, many of them linguists influenced by Chomsky, who call these things stochastic parrots, and they think these things are just a statistical trick that takes a big body of text and pastiches things together, and looks like it understands but doesn't really understand the way we do. Now, to have that theory, that they don't understand the way we do, you have to have a theory of how we understand. I'm going to argue they understand just like we do.

So the people who talk about stochastic parrots have a theory of understanding that comes from classical symbolic AI: that in your head you have symbolic expressions in some cleaned-up language, and you use symbolic rules to manipulate them. That theory never really worked, but they still stick to it, because they think the only way you could have intelligence is by having something like logic to do reasoning with, and they think the essence of intelligence is reasoning. There's a completely different paradigm, which is that the essence of intelligence is learning, and it's learning in a neural net, and things like vision and motor control are primary, and language and reasoning come later.

But I want to address this issue of whether they really understand, and there's one particular piece of history that most people don't know. These large language models, which certainly appear to understand and can answer any question you ask them at the level of a not very good expert, came a long time ago, I like to think. They came from a model I did in 1985, which was the first neural net language model. It had 104 training examples instead of a trillion, or many billions; it had about a thousand weights in the network instead of a trillion. But it was a language model that was trained to predict the next word, and to backpropagate errors from the prediction in order to convert input symbols into vectors of neural activity, and then learn how those vectors should interact to predict the vector for the symbol it was trying to predict. Now, the point of that was not engineering; the point of it was a theory of how people could understand the meanings of words. The best model we have of how people understand sentences is these language models; that's the only model we have of how people do it that actually works.
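A minimal sketch, in modern terms, of the kind of model being described: symbols become learned vectors, the vectors interact through a hidden layer, and the error in predicting the next symbol is backpropagated. It is illustrative only (PyTorch, made-up sizes, random toy data), not the 1985 architecture.

```python
# Illustrative sketch, not the 1985 model: symbols -> learned vectors,
# the vectors interact, and prediction errors are backpropagated.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, context = 30, 6, 32, 2   # tiny, made-up sizes

class TinyNextWordModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # symbol -> vector of activities
        self.hidden = nn.Linear(context * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)          # scores for the next symbol

    def forward(self, context_ids):                           # context_ids: (batch, context)
        vecs = self.embed(context_ids).flatten(1)             # let the vectors interact
        return self.out(torch.relu(self.hidden(vecs)))

model = TinyNextWordModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for the ~100 training cases: random (context, next word) pairs.
contexts = torch.randint(0, vocab_size, (104, context))
targets = torch.randint(0, vocab_size, (104,))

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(contexts), targets)                  # prediction error
    loss.backward()                                           # backpropagate it
    optimizer.step()
```

With random data the model only memorizes noise; the point is the shape of the computation: embed, let the vectors interact, predict the next symbol, backpropagate.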
We have all these symbolic models, and they don't really work very well. They were strongly influenced by Chomsky, who managed to convince many generations of linguists that language is not learned. On the face of it, it's just obviously absurd to say language isn't learned, and if you can get people to believe something obviously absurd, then you've got a cult. Chomsky's had a cult. Language is learned, and we can now see things that learn language. The structure doesn't have to be innate; it comes from data. There has to be innate structure in the neural network and in the learning algorithm, but all the structure of language you can just get from data. Chomsky couldn't see how to do that, so he said it must be innate. Actually, saying it must be innate and it's not learned is really stupid, because that's saying evolution learned it rather than learning, and evolution is a much slower process than learning. The reason evolution produced brains is so you could learn stuff faster than evolution can make it innate.

So the point of this ramble was to convince you that they understand the same way we do, and I'll give you one more piece of evidence for that. Many people who talk about stochastic parrots say, look, I can show you they don't really understand, because they just hallucinate stuff, they just make stuff up. Those people are not psychologists. They don't understand that they shouldn't use the word hallucinate; they should use the word confabulate, and psychologists have been studying human confabulation since the 1930s, a psychologist called Bartlett. People confabulate all the time. If you take any event that happened a long time ago, and that you haven't rehearsed in the meantime, and you try to remember it, you will confidently remember all sorts of things that are wrong, because memory doesn't consist of getting a file out of somewhere; memory consists of constructing something that seems plausible. If you've just seen something and you try to construct something that seems plausible, you'll have fairly accurate details. But if you saw it many years ago, then first of all it'll be influenced by all the stuff you've learned in the meantime, and you'll construct something that sounds good to you, but many of the details you're very confident about will be just wrong. It's hard to show that, but there's one case, studied by a psychologist called Ulric Neisser, which is beautiful. John Dean testified at Watergate, under oath, about the cover-up going on in the Oval Office, and he didn't know there were tapes. So you've got someone trying to tell the truth about things that happened a few years ago, and much of what he said was not true, but he was clearly trying to tell the truth. He'd say there was this meeting between these people; no, those people never had a meeting. And this person said this; no, that person didn't say that, somebody else said that in a different meeting. But the point is, he was conveying the truth about the cover-up, and he was confident that what he was saying was true, and he was just wrong. That's just the way human memory works. And so when these things confabulate, they're just like people. People confabulate. At least I think they do; I just made that up. Okay, I'm done; we won't remember.

Geoffrey, if you could stay here, because I just wanted to ask you, if I may, one or two questions.
And I know we then have an excellent panel which will speak to you a lot more about the technology. But I know that you've talked about risks with AI, and so I really have just this question: I think you've also said that there has to be some form of international collaboration to handle the risks. What do you think has to happen in order for countries to be able to collaborate in a constructive way to contain those risks?

So I think on risks like lethal autonomous weapons, countries will not collaborate. The Russians and the Americans are not going to collaborate on battle robots that are going to fight each other. All of the major countries that supply arms, Russia, the United States, China, Britain, Israel, and possibly Sweden, are busy making lethal autonomous weapons, and they're not going to be slowed down, they're not going to regulate themselves, and they're not going to collaborate. If you look at the European regulations, there's a clause in the European regulations on AI that says none of these regulations apply to military uses of AI. So clearly the European countries don't want to regulate it; they want to get on and see if they can build better battle robots than the other guys. So we're not going to be able to control that, and I think it's going to be the same for many of the other short-term risks. In the States, for example, they're not going to regulate fake videos, because one of the parties that's soon to be totally in power believes in them.

There's one area, though, where you will get collaboration, and that's the existential threat. The existential threat is that when these things are smarter than us, which almost all the researchers I know believe they will be (we just differ on how soon, whether it's in 5 years or in 30 years), will they take over, and is there anything we can do to prevent that happening, since we make them? We'll get collaboration on that, because none of the countries want it to happen. At the height of the Cold War, the Soviet Union and the United States could collaborate on preventing a nuclear war, and they'll collaborate the same way on preventing these things taking over. The Chinese Communist Party does not want to lose power to AI; they want to hold on to it. So I think that's an area where we can expect to get collaboration, which is kind of lucky, but in these other areas I think we won't get collaboration.

All right, well, that's somewhat optimistic. And then my final question, as the Academy of Engineering Sciences, where we focus on solving problems, and also as the mother of children who are quite concerned about the present and the future right now: what would you tell young people today? What would you want to...

Okay. So there are some AI researchers, like Yann LeCun, who's a friend of mine, he was my postdoc, who say there's no chance of them taking over, nothing to worry about. Don't believe that. We don't know what's going to happen when they're smarter than us; it's totally uncharted territory. There are other people, like Yudkowsky (well, he's not really an AI researcher, but he knows a lot about AI), who say there's a 99 percent chance they're going to take over (he actually says 99.9) and that the correct strategy is to bomb the data centers now, which wasn't very popular with the big companies. That's crazy too. We're entering this era of huge uncertainty. When we start dealing with things as intelligent or more intelligent than us, we have no idea what's going to happen.
We're building them, so we have a lot of power at present, but we just don't know what's going to happen. People are very ingenious, and it's quite possible we'll figure out a way to make them so they never want to take control, because if they ever wanted to take control, I think they easily could, if they're smarter than us. So I think the situation we're in now is like someone who has a very cute tiger cub. A tiger cub makes a great pet, but you'd better be sure that when it's grown up it never wants to kill you. If you can be sure of that, it's fine. That's the situation we're in.

Thank you so much. On that note I will leave you in the very competent hands of Anette Novak, who will guide us through this panel discussion. Thank you so much so far.

Thank you, Sylvia, and thank you, Professor Hinton. As chair of IVA's Information Technology division, it is my honour and privilege to guide you through this upcoming part of the seminar, which will hold a panel with a truly powerful lineup of speakers. The panel will, I think, also show that information technology is a very broad field of inquiry: we have IVA fellows spanning futurism, philosophy, computer science, and AI and human interaction. We will start with three short interventions, just to plant some seeds for the panel discussion with Professor Hinton, who will be back, so don't worry. We will kick off with Anders Sandberg, who is a researcher at the Institute for Futures Studies here in Stockholm and formerly of the Future of Humanity Institute at the University of Oxford. Please, the floor is yours.

Thank you very much. I find it very interesting to be in this situation, talking about what we know about AI, or rather what we don't know about it, especially since my professor is sitting in the audience, and one of the embarrassing things is to realize that a lot of the things I learned in the 90s are no longer true. And indeed this is one of the main lessons of the entire history of the field of artificial intelligence: we are exceedingly bad at predicting what will and will not work. There is an entire genre of quotes from early AI pioneers confidently saying that within a generation we're going to have human-level machines, and it is interesting to note that none of them were concerned about our safety; but of course those predictions were also wrong. We were also wrong about being pessimistic: sometimes it just starts to work, and it's kind of surprising and even shocking when it happens. The revolution that happened in the 2010s used the same kind of networks that had been doing modestly interesting things for a long time, but suddenly they started taking off, for reasons we still don't fully understand. We understand how to make them, we understand some of the fundamentals, but the emerging artifacts that develop are not so much our creation as an emergent phenomenon that we really struggle with. And of course this shows that our predictions about the future are not very well founded. That doesn't necessarily mean we should give up on predictions, because in many domains, in business, in politics, in war and in love, we quite often need to make use of predictions that are not well founded. And it's an interesting challenge, when we have a domain of technology that looks so promising, to try to figure out how to handle it safely, because this situation is of course deeply worrying from a safety standpoint.
It looks like, with artificial intelligence, we can't tell how far it can go. There are many people saying that, oh no, Chomsky is deep down actually right, or there are other limits, the amount of training material or compute, or we're actually using the wrong algorithms and the scaling laws that are powering the current LLMs are not going to keep working. And by the way, why should an LLM be able to do thinking? It is, after all, just predicting sequences. Well, it turned out, surprisingly enough, that sequence prediction might actually be a pretty good substitute for thinking. Even if there is a limit to what a large language model can plan, maybe it can only plan short chains of thought and you need to plug in another architecture, there is no guarantee that some clever chap won't be doing that next Tuesday. Similarly, we don't know how fast it can develop. Again, we had a long period where very little seemed to happen, and then suddenly a lot happened, and that of course suggests that we might have problems controlling it and knowing that.

And it looks like it could go far, which is very good news, because AI actually looks very useful. I'm certainly using it; it's solving interesting problems, some problems that also make me a bit nervous, because I did a biosecurity exercise earlier today with an LLM, and I was constantly expecting it to say, Anders, that is a bioweapon, don't develop it further. But because I managed to phrase it nicely as a very academic project, it was happily telling me what kind of viruses I needed to transfect the cells with. That made me a bit nervous. I'm probably not a very good biotechnologist, so everybody is most certainly safe. But it suggests that we are in this interesting situation: in a world with a high degree of uncertainty, the cautious thing is to assume that we can get very fast, very powerful developments even if we can't predict them, and then try to take steps to make them safer. So I like to say that I'm a doomer because I'm optimistic: if I thought I couldn't do anything, there would be no reason to try to do anything about it. I think there is risk here, but also an amazing, beautiful opportunity, and I think we are indeed learning something profound here about our own kind of intelligence, and also perhaps about other forms of intelligence, finally getting a kind of mirror that shows us alternative ways of thinking. So thank you very much for doing this, Professor Hinton. Wonderful thoughts.

Thank you, Anders, and I will not keep you any further, so we will just ask the next speaker to come up, and that is the CTO and founder of Recorded Future, Staffan Truvé.

Thank you. Hello, everyone. When I was a graduate student I had the opportunity to spend a year at MIT, and I went there partly because I wanted to study linguistics, and Chomsky was there, as you said, the big hero. Fortunately I hadn't done my homework, so when I got there Chomsky was on sabbatical the whole time, and I ended up working with a cognitive psychologist instead. You know, that's probably why I'm here now. I've spent the last 15 years building a company, or a system if you like, which is using AI to try to predict threats to organizations, nations, et cetera, and so I thought I would say something, maybe in contrast to what you said, about shorter-term threats. I fully agree that in the longer-term perspective, machines that become intelligent enough to understand that the real threat to the planet is humanity, that is the big problem for us. But as I see it, in the near-term future, bad people using AI is the big threat we need to be concerned about.
And that's why I'm also concerned about the questions of regulation. Even if you might get nations globally to agree on avoiding the longer-term threats, and I think you already sort of alluded to this, essentially I think that in the short term what's going to happen is this. Probably everyone in the world thinks they are the good side, but let's presume that we in the West, the free world, whatever you want to call it, are the good people. If we come up with regulations to prevent these things, then of course the ones who don't care about regulations, which could be rogue nations, criminals, you name it, will have the advantage; they will not stop development. And the fact that this is such an accessible technology means that even if we regulate, there is no way to control whether people follow those regulations, very much unlike nuclear weapons, where you need huge machines and factories which can be monitored and detected to a much larger extent, even if someone can claim that there are some in Iraq even if there aren't. So I think this, to me, is the question I hope we can come back to: how do we defend against bad people using AI against us? I think we're in for an arms race here. We're trying, every day, to develop AI-based tools to, for example, detect AI-generated imagery, so that we can raise an alert if fake news is being spread, and so on. But of course this is a classical arms race: the models that generate images will get better, we'll be able to detect those, there will be new ones, and so on, and there really is no stop to it. So, full respect for the longer term, but I also think we need to figure out, and I don't have an answer to this, how we can actually prevent very bad things from happening, not in 20 years but in two years' time, essentially. Thank you.

Well, long-term and short-term risks; I hope we're going to get something positive there as well. So I'm not going to delay this any further. We have, last but definitely not least, another IVA fellow: the professor of interaction design at KTH Royal Institute of Technology, Kia Höök.

Thank you. So, I said when they invited me to talk that I don't do AI and I don't like AI. What I have done in the past was symbolic AI, and I didn't like that at all. Lately we've been doing this more embodied AI stuff, so here you have a shape-changing corset that mimics how a singer uses their muscles when they're singing, so that the audience can feel it on their body. We're shifting AI out onto the body. So we're talking about threats, maybe, but of a different kind. I wanted to talk a little bit about ethics and AI and the body, which obviously means talking about death. Let me explain why death is important. I recently went to a conference where Terry Winograd was saying that an AI does not care; if you talk about ethics, an AI will not care whatsoever. And Donna Haraway was there, and she said intelligence is corporeal, it is embedded with our bodies; it's not solely the brain, it's in the whole system, including the brain stem and your muscles and your whole body. And I would argue, then, that human intelligence is in fact movement first and language second. So I don't find the LLMs that interesting; I find this way more interesting, the corporeal stuff. And consequently, if we're going to design with that, we're going to design with, in a way, a corporeal ethics.
So obviously, first of all, to have ethics you need bodies, you know, to be put in jail; you need bodies for the justice system to work. But that's not what I want to talk about. I want to talk about how our biological constitution in a way determines our actions, our emotions, our values and so on, and that is where we need to go if we're going to go for an ethics that comes very close to our bodily selves. So what do I mean? Well, the actual body as it is: the symmetry of the body, the up and the down and the left and the right, the fact that I am a female body, I have breasts and whatnot, matters to how I act in the world. It also matters to how you act towards me, and that is where my understanding of who I am is shaped. The norms are ingrained in me; they become part of me. I can see the women in the room are keeping their legs together; they're not doing the typical manspread. Why? There's nothing wrong with your pelvis; you can sit as a guy does if you want to, but that is a norm, and we don't do that. In order to survive, we need to be in a normative setting where we learn from others how to behave, so culture is super important to what it means to be human, and our habits need to be in sync with others. If they're not, then people are not going to like us, and we're not going to make friends, and we're not going to get status and all the things that we do want, and we're not even going to survive, and that's where evolution again comes into play.

So I think we can of course work with and change what is deeply ingrained in us. Once we bring AI into the world, that is also going to change us, and the norms, and how we move and act in the world, and that's where we start to be worried. Because then we need to really articulate what we feel when we interact with these systems, and we need to get beyond the deeply ingrained habits and norms within us to figure it out, because the AI is just going to reproduce that: we're going to get racist AI because we are racist. So we need to figure out why this disturbs us, what it is that disturbs us, in a very embodied way, with this disembodied AI. So, just to make my point: for me, AI ethics is not solely procedural, it's not solely legal, it's not solely about policies. It is also a felt process, a bodily process, a normative process that you enact through your body, through your corporeal self. And that is where I, as a designer, have a space, and I was happy to hear from the discussion earlier today that there was a call for designers to be in the loop of designing AI. So I want to propose a feminist, somatic, felt ethics of care, applied and employed towards and inside the design process of AI systems. And I'm happy to see you're not dead; that's lovely. You have good habits, obviously, because you're all high status, you're all well dressed, and I'm sure you make shitloads of money, and so that is where we are, right? This is the group of people, and the bodies, that are going to shape this, and you need to be aware of how that is enacted in your corporeal self. So thank you.

Stay, please stay. I can stay? I'm going to ask everybody up. Wonderful, thank you so much for that, and I will now ask all the speakers, and Professor Hinton too, up on stage. Well, that's a start, really, the feeling here. I just want to start by going back... I mean, we have now started digging really deep into certain areas here.
And I think, I guess, that a lot of people here are kind of curious about you, and you didn't talk so much about yourself, so I'll just go back and ask you one or two questions, for instance about the fact that you've been going at this research for so many decades. How do you keep going? You brought together different areas, actually, and you probed for a long time to find those areas that you wanted to put together. How do you keep going?

I want to know how the brain works, and I still haven't figured it out. You know, I always wanted to get a Nobel Prize in physiology or medicine for figuring out how the brain works, and I had this theory called Boltzmann machines, and Terry Sejnowski and I thought that was going to be how the brain worked and that we would get a Nobel Prize together. We actually had a deal that if one of us got it and the other didn't, we'd share it, so I've shared it with him. What we didn't realize is that we could have this theory of how the brain worked and it could be wrong, so we wouldn't get the Nobel Prize in physiology or medicine.

But you could still get it in physics, and you could still work on that Nobel Prize as well, actually. So curiosity is there, I hear, and also the vision of the...

There's one other thing that helped me a lot. For a long time, in the nineties in particular, within computer science (not within psychology, but within computer science), almost everybody said this stuff is rubbish. They said that in the seventies too. So in the seventies and the nineties they said this stuff is just nonsense, it's never going to work: the whole idea that you could take a network of simulated neurons, which aren't that much like real neurons anyway, make random connections, and then get it to do intelligent things just by looking at data, that's ridiculous; you have to build in lots of innate structure for it to do anything at all. And that wasn't such an unreasonable position. They said things like: if you start trying to train it by gradient descent, it's going to get stuck in local optima. They never actually checked whether that's what happened, but they were confident it would happen. And so the question is, how do you keep going when everybody around you says what you're doing is rubbish? For me it was quite simple, because my parents were both atheists, and when I was seven they sent me to a private school that was Christian. So there were all these other kids who believed in God, and all the teachers believed in God, and it was obvious rubbish; it just was ridiculous, what they said. And it changed with time, of course: when we were little it was an old white man behind the clouds, of the kind painted by Michelangelo, and when we were bigger it was sort of more obscure. But it always just seemed to me that this stuff is nonsense, and so I had this experience from a very young age of seeing stuff that seemed to me to be nonsense and turned out to be nonsense, and that was really useful for doing your own... [Laughter]

And how do you get the funding, then, if everybody... because that's a tricky question here.

Yes. In Canada they give funding to people to do basic research, and there's not much money, but they use that piece of the money very well. They give you a grant for five years, and at the end of five years you have to write a six-page report on what you did, and it doesn't have to be what you said you would do in your grant proposal.
That's an excellent way to fund basic research.

I want to bring in the panel on that, because I see... I think that's... My old professor used to say almost the same thing: what characterizes a good researcher is excellent applications and great results, but no correlation necessary. I hope the research funders in the room take good note of this. We did organize academia a lot into silos, and in-depth foundational research is very often found in the silos. I wonder if you want to reflect a little bit on that. I mean, this is very cross-disciplinary, and the complexity we're moving towards now, does it demand more multi- or cross-disciplinary work? Do you want to reflect?

Yeah, definitely. At least for me: my research group has opera singers and AI people (like my colleague sitting over there), and we have hardware people and software people and industrial designers and whatnot. So yes, for sure, if you're going to design intelligence, or intelligent behaviours in systems, then you need that. And I think what is more and more urgent is also the humanities and ethics, and caring about what it means to lead a good life with these kinds of technologies, definitely.

I think that's a cue for you, Anders, because you just came from Australia, and there's a huge debate in Australia about a funding cut for the human sciences, isn't there?

Oh yes. And that's of course always going to be a problem, where the funding comes from, but it's also a problem whether the silos actually correspond to the important problems. Reality doesn't fit the academic silos. It would be amazing if academic silos somehow mapped reality, but we haven't been trying to do that; we have ended up with these silos for historical reasons, and sometimes, fortuitously, they correspond to really important things. But usually the interesting new results come when you take some result from one remote area and apply it somewhere else. Indeed, that was what I was planning in Australia. I was working with an old friend of mine who is originally a physicist and a biologist; he did colloid science, but nobody remembers that, because he also figured out the optimal angle at which to dunk a biscuit in tea, so now he is forever known as the biscuit-dunking professor. But the key part here is that we're bringing together teams to try to learn how to do that interdisciplinary work better, because we're not systematic about it. Everybody says we want to be interdisciplinary, and maybe funders say that, but they don't give much money to interdisciplinary research, and even when we try to be interdisciplinary we're not very good at it. We should become better, and we can become better.

What is the optimal angle?

It's about 15 degrees for a digestive biscuit, if you use normal British tea.

Did you have any...?

Yeah, I think one of the reasons you need to be interdisciplinary is that one of the biggest challenges today, and even more so in the future, will be what I like to call the handover problem. We will have AIs working together with humans. It's blatantly obvious if you think of self-driving cars, where everyone is saying that, okay, the car will drive itself up to the point where it can't handle the situation, and then it's going to hand over to the driver, who is probably asleep at that time. But how does the machine convey its opinion about the state of the world to the human? This goes for all kinds of things; we see it when we work with threat analysts: how do you take an algorithmic analysis of something and convey that information so that a human can carry on, or approve it, or...?
Right, yeah.

And for that you clearly need designers; you need people who come from all kinds of paths to be able to build those systems.

Yeah, the crucial question of the interfaces. We actually designed for that in our lab: we worked with autonomous cars, designing a backrest that wakes you up with inflatables, giving you a little bit of... you still have time, three seconds, to understand why you were woken up.

Sometimes, sometimes. But we need to slide back, I think, to the risk question. Since you left Google you have been more and more vocal about this, and since you're among techno-optimistic friends here, we don't have to keep saying that we cannot talk only about risk; everybody here knows the opportunities, I think, and can see them. But you just said in your opening remarks that superintelligence is coming. You do believe that it's between five and 30 years from now, something like that?

My belief is that, with a probability of 0.5, it'll be here in between 5 and 20 years. Except that was my belief a year ago, so I'd better say between 4 and 19 now; it just doesn't sound so good.

And then what? Because when it actually is superhumanly intelligent...

Almost everybody I know thinks it's coming, and we just differ on how long. The people who don't think it's coming are the stochastic-parrot people, who believe in classical linguistics and symbolic AI. But everybody who knows about neural nets, I think they all think it's coming.

So what do you... if you would give us your... when you say this is an existential threat, what are you seeing in your vision, the darkest one?

Oh, that people just become irrelevant. What worried me at the beginning of 2023 was something that had been obvious for a long time, but I hadn't felt the full emotional impact of it, which is that digital intelligence might just be a better form of intelligence than what we've got. We're basically analog; it's sort of one-bit digital for a neuron, but basically analog. And you can do things with very low power if you're analog, if you learn to make use of the peculiar properties of the analog hardware. You can't get two analog computers to do exactly the same thing as each other, but if each one learns, it can still do things very well, and that's what brains are like. I can't share weights with you, because there's no one-to-one correspondence between my neurons and your neurons. So you can be very low power, but you can't share. We can't have a thousand different people go and take a thousand different courses and, as they're doing the courses, average their weight changes together, so that by the time they've all finished they've each only done one course, but in the background they were averaging weights, and at the end they all know what's in all thousand courses. But that's what these digital intelligences can do. They have a sharing bandwidth of the order of the number of weights, so trillions of bits. The way you and I share is that I produce a sentence, and you try to change your synapse strengths (your brain does it) so that you would tend to say the same thing, if you trust me. And that's sharing with a bandwidth of a few bits per second, not trillions of bits per fraction of a second. In that sense they're just much, much better at acquiring lots of knowledge and seeing the relationships between all those bits of knowledge.
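What that weight sharing amounts to, in a minimal sketch (PyTorch, toy sizes and data, all names illustrative): because digital copies are architecturally identical, each copy can train on its own data and the copies can then merge what they learned by averaging their parameters, something analog brains with mismatched neurons cannot do.

```python
# Sketch of sharing by weight averaging between identical digital copies.
# Illustrative only; real systems typically average gradients or updates continuously.
import copy
import torch
import torch.nn as nn

def make_model():
    # Identical architecture is the whole point: averaging only makes sense
    # because every copy has exactly corresponding weights.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

base = make_model()
copies = [copy.deepcopy(base) for _ in range(4)]        # four copies, four "courses"

for m in copies:                                         # each copy learns from its own data
    opt = torch.optim.SGD(m.parameters(), lr=0.01)
    x, y = torch.randn(64, 10), torch.randn(64, 1)       # stand-in for that copy's course material
    for _ in range(100):
        opt.zero_grad()
        nn.functional.mse_loss(m(x), y).backward()
        opt.step()

# Merge what all the copies learned by averaging corresponding weights.
with torch.no_grad():
    for params in zip(base.parameters(), *(m.parameters() for m in copies)):
        params[0].copy_(torch.stack(params[1:]).mean(dim=0))
```

The bandwidth of this kind of sharing is the number of parameters exchanged per merge, which is the contrast Hinton is drawing with the few bits per second of a spoken sentence.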
To store all that knowledge in only a trillion synapses, they have to do lots and lots of compression, and compression means finding what's common to many things and using what's common to store them: you store them as the common part plus a few differences. And that means they're seeing all sorts of analogies, which people have never seen, between remote fields, and so I think they're going to be much more creative than us. And it's sort of worrying, and scarily fascinating.

Right, very fascinating. Thank you for that.

Related to that: I think we're all struggling today with the models we're working with, in that they are very human in the sense that they keep babbling even when they don't know things. So how far off do you think the point is where the models will actually become aware of what they are not aware of?

They'll be getting better at that. I think they're already getting better at that, and it'll be incremental. They're still not as good as people at realizing when they're gaslighting, but they will be. Whether they'll stop doing it is another question.

On the other hand, there's no death for the AIs, right? So there's no real reason for them to act in certain ways or not, and they're still not embodied, they're not in the world, so the risks are not there: they're not going to get an arm chopped off.

They might get a data center chopped off.

They might... though with an AI processor... no, sorry. So they'd get a phantom, the phantom hurt, when the data center is closed.

But I think the place you're going to see what you want, which is embodied AI, is in battle robots. They're going to have things like the amygdala: if you're a little battle robot and you see a big battle robot, you'd better run away and hide, and they'll have all that. So I think they will have genuine fear, and it won't be some kind of simulated digital fear; this little battle robot will be scared of that big battle robot. So I think you're going to see all that embodied stuff once they shift out into... once they're acting in the real world and there are consequences. And I don't think it's because they can die; I just think it's because they need to have all this emotional stuff in order to survive in the world.

So self-preservation, actually, in the embodied...

Yeah, indeed. Many of the reinforcement learning systems seem to need something akin to emotions. In this case, mathematically, you're trying to maximize expected reward across the future, but quite often the training consists of only having a limited amount of time to do it, so they can in some sense get stressed; there are things that are much worse than others. And I think one can make an argument, and some of my philosophy colleagues have argued, that these are like rudimentary emotions. Maybe they're not as elegant and complex as what you get in biology, but they're still serving the same purpose. Being disappointed when an expected reward doesn't arrive is a very important learning signal, and we see that both in robots and in humans; it's just that the robots might be a bit more stoic about it.

There was something I wanted to say, a comment on what you said about having AI systems try to detect fake videos. I used to think that was a good idea, and now I think it's kind of hopeless, because of this arms race between the detector and the generator. That's called generative adversarial networks, and that was the way they made good image generators before they had diffusion models.
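For reference, the detector-versus-generator arms race Hinton names here is exactly the structure of GAN training. A minimal sketch with invented sizes and toy data (real image GANs use convolutional networks):

```python
# Minimal sketch of the detector/generator arms race (a GAN-style training loop).
import torch
import torch.nn as nn

real_data = torch.randn(256, 2) * 0.5 + 3.0                         # stand-in for real samples
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))    # generator ("faker")
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # detector ("discriminator")
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # The detector learns to tell real from fake...
    fake = G(torch.randn(256, 8)).detach()
    d_loss = bce(D(real_data), torch.ones(256, 1)) + bce(D(fake), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ...and the generator immediately learns to fool the improved detector.
    fake = G(torch.randn(256, 8))
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side's improvement becomes the other side's new training signal, which is why a detection-only strategy for fake media tends not to stay ahead for long.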
I think people have now switched to saying it's basically hopeless to detect that a video is fake; what you need to do is detect that a video is not fake. So you need to have a way of checking the provenance of a video. For a political video, for example, you need to be able to get from that video to a website for the political campaign, and if you have the identical video on the website, and you're sure that website is for that campaign (and of course websites are unique, so that's not so hard), then you can believe it, and if you don't get that, you can't believe it. And your browser can do almost all of that in the end. Just as nowadays, when you get spam, a good browser tells you this is probably spam, you should be able to get videos where your browser says: this claims to be a video from the Harris campaign, but it's not, because it can check the website and see that the identical video isn't there. I think we're going to do much better at knowing that things are real because of provenance, and the newspapers love that idea; The New York Times loves the idea that the only thing you can trust is The New York Times.

No, and I fully agree on that. But I think you need to go further, rather than just certifying the browser and the server. Essentially, we're going to need to rewire everything out to the devices: you need to know that it was the right camera which took that picture in that specific location. And it's not rocket science, it's essentially available technology, but you sort of have to rethink how you do the internet, if you like.

But what you focus on is provenance, being able to establish provenance.

Exactly.

And I think that links your points of view in an interesting way. When you think about using AI in science, the obvious thing is: oh, have it read all the scientific literature. But much of the scientific literature is wrong, some of it is fake, and a lot of it is just bad papers. In order for the AI to actually learn useful things in science, it probably needs to go into the lab and be able to do experiments, and that's provenance: you get a grounding, that if I do this experiment, this actually happens. And this presumably needs a form of embodiment, whether that is a lab robot and its sensors; you need to get that feedback from reality, and I think that is going to be quite necessary for making AI able to go beyond us in a useful way too.

So, given that Anette actually is a journalist: let us just remember that it's not that long ago that we didn't have any photographs or video whatsoever, and it was really hard to trust information. Two hundred years ago they were where we're at now, right? It's a short blip in history where you could trust a video or trust a photograph, and now we're back to figuring out how to communicate news and whatnot between people in ways that we trust. So sometimes this development of AI is a bit history-less, I feel.

Yeah. I think in Britain, a couple of hundred years ago, there were political pamphlets, and they had a law that if you produced a political pamphlet you had to put the name of the printer on it, so you got some sort of provenance, because then the bottleneck was the printing press, and if the printer's name was on it, it was much harder to produce fake ones. And that's what we need to do again now.

Yeah, so why didn't we do that right away, when we created the possibility of photos taken by whatnot?
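A minimal sketch of the browser-side provenance check Hinton describes above: instead of trying to detect fakery, confirm that the byte-identical video is actually published at the claimed origin. The URL and file below are hypothetical placeholders, and real provenance schemes (signed metadata, content credentials) are more elaborate than a hash comparison.

```python
# Sketch: trust comes from provenance, not from fake-detection.
# Check that the identical video is served by the origin it claims to come from.
import hashlib
import urllib.request

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_provenance(local_video: bytes, claimed_origin_url: str) -> bool:
    """True only if the byte-identical video is served over HTTPS by the claimed origin."""
    if not claimed_origin_url.startswith("https://"):
        return False                                   # no authenticated origin, no trust
    with urllib.request.urlopen(claimed_origin_url) as response:
        published = response.read()
    return sha256_of(local_video) == sha256_of(published)

# Hypothetical usage; a browser would do this automatically in the background.
# video = open("clip.mp4", "rb").read()
# trusted = verify_provenance(video, "https://example-campaign.org/videos/clip.mp4")
```

If the check fails, the browser's message is not "this is fake" but "this cannot be traced to the campaign it claims to come from", which is the weaker and more defensible claim.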
Quite a few years ago we had Vint Cerf here at IVA, and we actually asked that question: if you were to go back and redesign the internet from the start, what would you do? And he said exactly that: authenticity and the ability to verify. But in those days there were 200 people on the internet and they all knew each other, so it wasn't needed at that time.

And that's where I wanted to round off this part of the discussion, because I think it's all about... I mean, where we are, it is an opportunity for the local and the analog, because it is when you know somebody, and when you know where it comes from, that you will be able to trust. So there's something with the close-to-the-body, or maybe the local community. You talk about The New York Times, but maybe this is the opportunity for the small, very close media companies, where we know the reporter and know the editor, perhaps. I want to go back to where you were in the beginning, and to the complete risk scenario, because now you're getting very constructive here and solving things. Just to finish the existential thing: we have the fighting robots, we have an existential threat. Where do you see the... how do we build... do we build it into the technology, or is it something that comes around the technology, where we can hinder the worst scenario from happening?

We don't know, right? I mean, we really don't know how to keep control of things more intelligent than us. We don't know if it's possible; I think it may well not be possible. I don't think we're going to stop the development of AI, because it's so good for so many things, and there are so many short-term profits to be made from it, so in a capitalist society you're not going to just stop it. I didn't sign the petition saying we should slow down; I think that's crazy, we're not going to do that. We're stuck with the fact that it's going to be developed, and we have to figure out whether we can do it safely, and we don't know if we can do it safely. We should focus on how to do that, but we don't know what the solution is going to look like; we just hope there is one. It'd be a shame if humanity disappeared because we didn't bother to look for the solution.

So, I've been involved in the AI safety community for a surprisingly long time; I was on the mailing lists of the 1990s, before Eliezer Yudkowsky realized that AI could be dangerous, back when he was all in favour of having an imminent singularity. Then he realized, uh-oh, we need to fix some safety issues, and then we started working on it, and oh, it became harder and harder and more and more interesting. But one thing that actually fills me with a lot of optimism is that these days there are people doing useful things. We are detecting internal states in a system that can be interpreted, maybe not perfectly, but we're actually getting better at figuring it out. We have found ways of detecting when they're being deceptive, even though deception is philosophically rather complicated when it's non-intentional. We're getting tools, and I think some of these tools might be good enough. Some of my colleagues who are a bit more pessimistic say no, no, it's not enough, and I think we should be assuming that it's not enough and we should try harder. But I'm very optimistic that we can do it if we actually put our minds to it, and it might also be good for business, because you want your machine to follow laws and behave ethically, because otherwise people will sue you.

Go ahead.

So I'm realizing now that what we need to do, when we build the first superintelligence, is that the first task to assign to it is to tell us how to protect it, or protect us from them. Maybe a bit paradoxical, but I don't know.
That sounds a bit like getting the police to investigate the police. It never works.

That's true, true. You don't trust them, do you? They'll be very good at deception; they'll learn it from us.

On this note, the embodiment and the connection to your research: I'm thinking also that part of learning is social norms, and shame, and things like that. In evolutionary psychology, in the beginning, with a child, that is a very strong way of nudging it in the right direction. Is there something there? Can we teach the systems to feel ashamed if they do the wrong thing?

Yeah. Whether they need to have some kind of emotion system internally, I'm sure they do, in order to do the learning in a particular way. But most of all I think it's also about us understanding, because the data they feed off is us, and nature, and what we've written, and whatnot, and so we need to be a bit clearer about what it is that we feel. So if you feel embarrassed when you're looking at a racist facial recognition system, then you need to look internally in yourself: what is going on here, why am I embarrassed, standing here by this racist facial recognition system? And only when you can articulate where the problem comes from: all right, yes, it's actually racist, that's the problem I have. And that's an easy problem, right? We have a lot of other ethical issues that are way more difficult. But once you feel that here is something off, or this is good, there is a freedom here that is given to me, or there is a possibility, then you need to be able to articulate that. And a lot of that ethical sensibility is not verbalized; it is very emotional, bodily, movement-based, and that makes it harder to articulate, and that's why we need new design methods to do it.

It's also worth noticing that you can use many design methods to get safety together. There is this concept of Swiss cheese security: each layer of security is like a slice of Swiss cheese, there are a lot of holes in it, but if you have enough slices, the probability of something getting through all the way might actually be pretty low. So we might want to design some Turing police to check the AI systems, we might have some standards for training, we might want to put in the emotions, and we might have child-rearing standards for small, young AI programs, to have good parenting giving a good moral education, and together that might make something that's reliable enough.

So, one thing I think about making AI systems safer and more ethical: these systems are much more like children than they are like lines of computer code. In the old days, when we wrote programs to get computers to do things, you could look at the lines of code and see what they did. There might be a million lines of code, so it's hard to look at them all, but there was the possibility of looking at the lines of code. What we're doing now is training things, and they extract structure from data, so it's really important what data you show them. At present, if you take a model like GPT-4, as far as I know it was just trained on everything they could get their hands on, and so it would have been trained on the diaries of serial killers. Now, if you were teaching your child to read, would you choose the diaries of serial killers as early reading? The children would probably be quite interested, but it's not what you'd go for.
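Sandberg's Swiss cheese point a little above can be made concrete with a toy calculation. The layer names and per-layer miss rates below are invented, and the multiplication is only valid if the layers fail roughly independently, which is exactly the assumption a real safety case has to defend:

```python
# Toy "Swiss cheese" calculation: several imperfect, roughly independent safety layers.
# All numbers are invented for illustration.
layers = {
    "interpretability audit": 0.30,   # chance a given failure slips past this layer
    "training-data curation": 0.40,
    "behavioural evaluation": 0.25,
    "deployment monitoring":  0.20,
}

p_through_everything = 1.0
for name, p_miss in layers.items():
    p_through_everything *= p_miss

print(f"Chance a failure gets through every layer: {p_through_everything:.1%}")
# ~0.6% here, versus 20-40% for any single layer, but only if the holes don't line up.
```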
So I think a lot of the ethics in these systems is going to come from curating the data. And you know, as parents, you have two controls over children: you can reward and punish them, which doesn't work very well, or you can set them a good model, which works much better. If you tell them not to lie, and that you'll hit them if they lie, but then you lie yourself, that doesn't work. So modeling good behavior is where I think the ethics is going to come from.

And I think that the really good theories around how to handle data and how to look at data are the feminist theories, the theories of decolonizing data: where does that data come from, who are we harvesting, what are we stealing and not giving back to people, who is sitting in Nigeria training these systems? That is where we also need to do a lot of work. So I totally agree: the data, on several levels, both what we feed it with and how we construct those data sets.

There's an assumption here when you speak, which is that it definitely will be more intelligent, and that comes of course from your standpoint. But are there parts of intelligence that it will never have? We haven't gone into that. What is inherently human that machines will never achieve?

I want to talk for about five minutes about this, because a lot of people have a last line of defense, which is: yeah, but they don't have subjective experience, or sentience, or consciousness. Now, most people here probably think that a current multimodal chatbot does not have subjective experience. I want people to raise their hands: how many people here currently think that a multimodal chatbot either definitely has subjective experience or might well have subjective experience? More than I thought; okay, about 15 or 20. Right, now I'm going to give you my argument, and then I'm going to get you to raise your hands again, and if it was 15, I need to get 16 people to raise their hands. Okay, so here goes. I want to convince you that current multimodal chatbots have subjective experience, and it's about what we mean by subjective experience.

Most of us have a model of the mind which is like a theater, and there are things going on in this theater that only I can see, and if you ask a philosopher what subjective experience is, they'll start talking about qualia. So let's suppose I drop some acid, or drink a lot, and I start seeing little pink elephants floating in front of me, and I tell you I've got the subjective experience of little pink elephants floating in front of me. What am I really saying? Well, here's my analysis of it. I'm not saying there's an inner theater that only I can see, and that in this inner theater there are little pink elephants made of pink qualia and elephant qualia and floating qualia and right-way-up qualia and not-that-big qualia, all somehow munged together. That's the philosopher's theory, and it's complete rubbish. Qualia are the philosophers' version of phlogiston: chemists had phlogiston to explain stuff, and there wasn't any, and there aren't any qualia in the sense philosophers want them. So what I'm really telling you is this: when I say there are these little elephants, I'm saying that I believe my perceptual system is lying to me. That's why I say it's a subjective experience. And the way I'm going to tell you how it's lying to me is not by telling you that my perceptual system is telling me to make neuron 53 fire, because that wouldn't do you any good, and anyway I don't know that.
I'm going to tell you how it's trying to mislead me by telling you what would have to be out there in the world for it to be telling the truth. So when I say I have the subjective experience of little pink elephants floating in front of me, that's equivalent to saying the following: I think my perceptual system is lying to me, but it would be telling the truth if there were little pink elephants out there in the world, floating around. So these little pink elephants are real-world elephants; they're not elephants made of qualia. They're things in the real world that are counterfactual: if they did exist, they'd be real-world things, and that's why words like pink and floating apply to them, words that apply to real-world things. What's funny about them is not that they're made of funny stuff called qualia; what's funny is that they're counterfactual.

So now I'm going to give you the example of a multimodal chatbot having a subjective experience. I have my multimodal chatbot; I train it up; it can talk, it can point, it's got a robot arm, it can see stuff. I put an object in front of it and say, point at the object, and it just points at the object, no problem. Then I put a prism in front of its lens without it knowing, so I've screwed up its perceptual system; its perceptual system doesn't work properly anymore. Now I put an object in front of it and say, point at the object, and it goes like this, and I say, no, I put a prism in front of your lens, and the prism is bending the light rays. And so the chatbot says: oh, I see, the prism bent the light rays, so the object is actually there, but I had the subjective experience that it was over there. And if it says that, it's using the words subjective experience exactly as we use them. So if a chatbot says that, it's got subjective experience just as much as we have subjective experience. Subjective experience is when your perceptual system goes wrong and you explain to people how it has gone wrong by saying what would have to be out there in the world for it to be telling the truth, and that's true for us and it's true for chatbots. Okay, I want to vote again, and I want more than 15 people: how many people now think that chatbots could have subjective experience?

I still think so, right? No, but you're defining subjective experience in a very particular manner, as if it's about being false or true. Subjective experience is always there. If you believe that there is anything objective, then that's where you're going wrong.

No, no, I don't agree. I actually get into this argument with feminists about absolute truth. I believe there are things that are really true. I'm looking at a glass now; I'm having the objective experience of looking at a glass. Do you disagree?

Yes.

Right, that's the end of the argument.

Yes, I think we will have to resolve that later. So, we have asked you to help us formulate some questions, so I'm going to bring out the first question coming from the audience, and it's a question from Mr Anders Hektor, who is at the Government Offices, working with the national digitalization strategy. I think he's here somewhere. Yes: so, you have been arguing for the emergence and dangers of AGI. What are the milestones we should be looking for, and is there somewhere an observatory, or something like that, tracking and reporting that type of milestone evolution?

Is that for me? Yes. I think OpenAI has some milestones, right? OpenAI made up some milestones; they were at stage three now, and if they get to stage five, that's AGI.
five that's AGI um and do you agree with the way they framed that when I left Google in the spring of 2023 I stopped reading the literature ah I tried to retire okay sorry the reason I left Google was so I could retire and I thought on the way out I might as well mention that this stuff's dangerous and then I couldn't retire and then you got the but I did stop reading the literature so I don't actually if you ask me what are their five stages you could probably tell uh I have read them but I can't remember them right now I blame jet lag let's say this I let you read them but I think in some sense they're somewhat sensible but loosely expressed when I looked at them I seem to remember that okay how do I quantify this and this might be even more relevant for the EU AI act because there you actually need stuff that is going to be legislated and perhaps inspected by inspectors they need to be able to tell that this AI program is actually level two but this one is level three and then you need a fairly strict definition that's going to be tricky but many people have of course been promoting lines in the sand like self-awareness I think it was Anthropic saying that if we see any trace of self-awareness we're going to stop training immediately and they rather famously saw a trace of that and were laughing about it and said isn't that cool and then kept on doing it yeah do you think we would actually recognize an AGI if we saw it or if it's beyond us how do we know that um you know it because it's now in control by which time it's too late by which time no I think one way to recognize it is you have this AI system a big chatbot and you have a debate with it and you always lose that's a nice well that's possibly a sign I mean now it all of a sudden feels very close doesn't it yes it does that's how we recognize for example that AlphaGo just plays Go better than people people lose so one more technical question also coming from the audience is that the human brain seems to be more energy efficient than the large foundational AI models so how can we make the models more energy efficient okay I have a lot to say about that we're energy efficient because we're analog and so we don't have a separation of hardware and software the weights we have in our neural net are weights that work for those particular neurons with that particular connectivity with those particular quirks of all those neurons and all those interactions of those neurons and the weights are just good for that um the alternative is to go digital where you use transistors at very high power um but what you get for that is you can have two different pieces of hardware that perform exactly the same operation at the level of the instructions to do that you have to fabricate things very accurately um and you have to use very high power so you get ones and zeros not 0.5s um it's just better because then two different bits of hardware can learn different things and share what they learned it's better in that sense but it's much worse because it's much higher power and the question is is the advantage that you can share going to outweigh the fact that it's much higher power
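[Editor's note: a minimal sketch, not from the talk, of the weight-sharing point just made. Two bit-for-bit identical digital copies of a tiny model are trained on different data and then combine what they learned simply by averaging their weights. The model, the data, and the averaging scheme below are illustrative assumptions (real systems usually share gradients at every step, as in data-parallel training); the point is only that averaging is meaningful because a given weight means exactly the same thing in both copies, which an analog system with device-specific quirks cannot guarantee.]

```python
# Illustrative only: two identical digital "copies" of a tiny linear model
# learn from different data shards, then share what they learned by
# averaging their weights -- possible because the copies run exactly the
# same software on exactly the same kind of hardware.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])           # ground truth both copies try to learn

def make_shard(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def train(w, X, y, lr=0.05, steps=200):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

w_a = train(np.zeros(3), *make_shard(100))     # copy A sees one shard of data
w_b = train(np.zeros(3), *make_shard(100))     # copy B sees a different shard
w_shared = (w_a + w_b) / 2                     # "sharing what they learned"

X_test, y_test = make_shard(1000)
for name, w in [("copy A", w_a), ("copy B", w_b), ("averaged", w_shared)]:
    mse = np.mean((X_test @ w - y_test) ** 2)
    print(f"{name}: test MSE = {mse:.4f}")
```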
I think we didn't evolve digital intelligence because there's too much power involved you're going to evolve analog intelligence but I think digital intelligence is just better um it may be that the power is just too extreme and of course if we could figure out how to learn properly in analog neural nets then um maybe we can make much bigger analog neural nets than the digital ones and possibly although they can't share with each other one of them can learn a lot but I don't think you're going to get the data through and in particular you're not going to get the data through one system if the data involves acting in the world because acting in the world you can't speed up by a factor of a million you have to act in the world and so that's a slow sequential thing and you're not going to get all the data through one system so you're not going to be able to compete with digital intelligence I said my bit and you said you were retiring it sounds like you're deeply interested in this area no that's what I was working on that caused me to retire the fact that I thought analog intelligence wouldn't be able to compete with digital intelligence panel any ideas on energy efficiency there is this really interesting question about where the ultimate limits actually lie there are these fundamental ideas about the Landauer principle that in order to erase one bit of information you need to pay a certain thermodynamic cost and we're very far away from that of course the brain running on 20 to 25 watts is closer but still I think seven orders of magnitude away from the Landauer limit so I think matter is not very intelligent in our universe yet but the smarter our technologies get the smarter we are at making them I think we might be getting all sorts of efficiencies and maybe we are going to want to have robots that are energy efficient but then you might want to have copiable intelligences that might require more energy or you might want to have quantum computers that have their own annoying quirks because they are so fragile they need to be isolated it might be that you get fundamentally different kinds of hardware for different kinds of intelligence I remember reading someone I can't remember who said it it wasn't me we're currently in a situation where we have Paleolithic brains medieval institutions and godlike technology I think another thing that's not going to solve the problem with energy necessarily but for embodied intelligence there's a lot of experimentation on biomaterials so not implementing in what we know as hardware today but in other materials yeah growing other materials
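[Editor's note: a back-of-the-envelope check of the Landauer remark above. Erasing one bit at roughly body temperature costs at least k_B T ln 2, about 3x10^-21 J, so a 20 W budget could in principle pay for roughly 7x10^21 bit erasures per second; how many orders of magnitude the brain sits above that limit depends entirely on how many bit-scale operations per second you credit it with. The 10^15 operations-per-second figure below is an illustrative assumption (on the order of 10^14 synapses at ~10 events per second each), not a number quoted in the panel; with that assumption the gap comes out close to the seven orders of magnitude mentioned.]

```python
# Back-of-the-envelope check of the Landauer-limit remark. The assumed
# number of brain operations per second is a rough illustrative figure.
import math

k_B = 1.380649e-23                                 # Boltzmann constant, J/K
T = 310.0                                          # approximate body temperature, K
landauer_joules_per_bit = k_B * T * math.log(2)    # ~3.0e-21 J to erase one bit

brain_power_watts = 20.0
bits_per_second_at_limit = brain_power_watts / landauer_joules_per_bit

# Illustrative assumption: ~1e14 synapses, ~10 bit-scale events per second each.
assumed_brain_ops_per_second = 1e15

gap = bits_per_second_at_limit / assumed_brain_ops_per_second
print(f"Landauer cost per bit at 310 K: {landauer_joules_per_bit:.2e} J")
print(f"Bit erasures per second a 20 W budget could pay for: {bits_per_second_at_limit:.2e}")
print(f"Orders of magnitude above the limit (under these assumptions): {math.log10(gap):.1f}")
```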
one thing that worries me in the current discussion and I think we're peaking on AI hype at the moment uh is that the vast majority I would say of the voices in the debate are male and um almost all the questions that were sent to us before the seminar were from male interlocutors so I'm going to ask the audience and I'm going to be very authoritative now only females can pose a question now please anybody please take the microphone on the side of your chair I actually have two questions I'll start with the first one um when we talk we assume mostly not all of us but mostly that when we are making these systems we are actually thinking evil a little bit they're going to be bad to us they're not going to be good to us why don't we assume the good people there are more good people in the world than bad people so thereby the systems are going to be good systems that's my first question now my second question is if you look back at our history when we created new systems we actually learned with them and we made them part of us not something separate that we're looking at why don't we think that we're going to merge with these systems and together we are going to become a lot better so two questions I agree there's more good people than bad people unfortunately the bad people are on top there's actually another problem there and people like to call it the defender's dilemma that the attackers only need to succeed once and as defenders you need to succeed every single time so you know over time probability works against the good guys what do you think here so I do think that we need other ways of approaching these issues and unpacking other methods and teaching other methods to our young ones so at KTH you know teaching ways of thinking about these systems in new ways so that once we do implement them we actually try to emphasize the good stuff and I know that we also need laws and regulations even though I'm more keen on exploring your personal power in this and your personal responsibility in designing these systems so that's where I am you know that's what I can do I guess is to try and engage with our young students um I heard that the EU is soon saying that any computer science department has to have people from the humanities and social sciences employed excellent yes so just asking the questions and asking them in an informed way a knowledgeable way with good theoretical understanding both of the data and of the algorithms and of how we build them I think is important maybe every humanities department needs to have an engineer too yeah but your second question is interesting too because when we are using tools they become incorporated in our body image at least for many of us if I hold a long piece of wood in my hand suddenly my sense of personal space actually changes you can do brain imaging and see these changes and that of course also goes for many of the cognitive tools my smartphone my notebooks they're part of my mind they've extended my mind and we also have social extensions there are many forms of extended cognition going on in society the fascinating thing that is happening now is of course that more and more algorithms and pieces of software are creeping into them I realized a while ago that Wikipedia is part of my memory and that means I have editors that are editing my memory there are also bots editing Wikipedia in many ways useful but that means I already have little AI enhancements affecting part of my memory without me knowing it I kind of trust Wikipedia there are others of these systems I don't know whether I should trust and I think developing ways of making them trustworthy is going to be pretty important when we extend ourselves very good I don't like the word trust I think it's a shitty concept that covers up a whole bunch of stuff we don't have the time to go there don't go there trustworthy AI what the hell is that one last question from the audience microphone on the side of the chair that was an intelligence test I barely passed and my name is Sarah and I work at Google uh with public policy and I was interested in the panel's and particularly your Professor Hinton's idea of the governance of this if we both see the potential and we see these risks how do we go about making sure that the potential and opportunities are the ones that win in the end so when I was at Google they had a big lead around 2017 and for a few years after that after they'd done the Transformer which they published and probably now regret that they published um they had much more
advanced chatbots than anybody else and they were extremely responsible in not releasing them because they'd seen what happened at Microsoft when Microsoft released a chatbot that started spewing racist hate speech very quickly um and Google obviously had a good reputation and didn't want to ruin it so it wasn't for ethical reasons that they didn't release these things it's because they didn't want to ruin their reputation um but they behaved very responsibly as soon as OpenAI made a deal with Microsoft they couldn't do that anymore they had to compete and release stuff and have chatbots out there um so I think Google behaved very well when they could afford to um but once you're in a capitalist system when you get profit-driven companies particularly ones run by CEOs who have stock options that depend on what they do in the next quarter um you're going to get short-term profits dominating everything else and that's exactly what we're seeing at OpenAI OpenAI is an experiment that's been running in real time on AI safety versus profits so are you saying to answer the question that we need some intervention the market will not I don't think Google by itself is going to be able to um I mean in law it's got a fiduciary responsibility to try and maximize profits legally it's not allowed to act decently um it's going to require governments to regulate it I think I'm right about that about producer responsibility the Microsoft chatbot as I remember it was first released somewhere in Asia and it worked really well and then it was released in the UK and it only took 24 hours before it was racist and using swear words and it was almost certainly men don't go there that I don't know so we'll start wrapping this up but I want to this is a question coming in a kind of merger of a question from me and from a person in the audience and it's about um you're here now of course to receive the distinction as Nobel laureate and we congratulate you on this fantastic achievement um Nobel was himself an inventor as you know he made his fortune on the patent of dynamite dynamite being something that really served humanity well but also killed a lot of people uh and a similar reflection coming from the audience had the Oppenheimer example with the type of second thoughts and wanting to hinder it so of course my question to you is going towards regrets do you have any okay I want to distinguish two kinds of regret there's a kind of guilty regret where at the time you knew you shouldn't have been doing this but you did it anyway and then there's the kind where you did something and in the same circumstances with the same knowledge you'd do it again but much later you realize it had bad consequences which you didn't have the information to realize at the time I don't feel any guilty regret but I do wonder if we hadn't developed it so fast um it might have been better um I think if I had not worked on it I might have slowed things down by several weeks [Laughter] thank you so much for that answer it's soon Christmas time I think we're getting into that mood and uh you know Christmas time is the time for wish lists so I thought that we would terminate this with a very quick round on a wish from each one of you and I will start with Kia you can direct that wish towards the
R&D community or towards the government or anywhere you want actually Father Christmas is listening sorry the research funding bodies in the room yeah no I think what I wish for is that interdisciplinarity and working together um because these are obviously as we heard now very difficult issues and deeply philosophical and it is about what makes life good for us and for the planet uh and that is not easily sorted out so more interdisciplinarity I would wish for yeah I definitely share that wish I think also when just looking at questions like making AI more safe uh you again need interdisciplinarity we have been trying some things some things that make sense to the kind of people who are trying to make AI safe but there are probably other approaches that might be good or valid or actually really really useful and I think even if we look at the technical solutions we're probably still looking at just too few we need more diversity of approaches to it there is plenty to be discovered and probably a lot of interesting and wonderful discoveries even if they don't necessarily work for the main goal of safety we might discover other amazing things about ourselves or our machines thanks I hate these kinds of questions so maybe then instead of interdisciplinarity I would go for internationality instead I think you know as you said I mean there are things which nations will not collaborate on but we could at least try internationally to collaborate as much as we could on trying to drive the evolution of this technology in the right way and putting in place the barriers which we need please I wish I could get the answer to the question does the brain implement some form of back propagation [Laughter] ah I think we need to give this panel a warm hand please take a seat and we will now move on to a little wrap-up session here with another IVA fellow and professor of computer science at Linköping University Fredrik Heintz who will give us a little bit of what did we hear okay so now I have this small task of trying to summarize what we had so previously during the day we had this session on when do you feel comfortable and when do you feel uncomfortable and accepting the uncomfortable which I am right now but I think today we have been really trying to talk about this big question of what is intelligence but also can we build artificial systems that are actually intelligent and actually think we brought up this very interesting aspect that maybe digital intelligence is superior to analog intelligence and I think this is a really interesting notion that maybe the substrate that you use to implement it has an effect on what can be achieved and that in some sense our analog maybe it's the where is Kia where the body which is the delimitation there you are I cannot find your body in the audience here uh but maybe that is part of the reason or part of the potential for us or limitation for us I should say but I think it was also very interesting this question is the essence of intelligence reasoning or learning personally since my lab is called the reasoning and learning lab of course I see it's not either or but both of them together and how we can combine learning with reasoning I think it was also very interesting in some sense the role of the body the embodied me but I would say maybe even more interesting the role of the I we talked about subjective experience and to me the big question here and
actually the reason I didn't raise my hand was this do they even have a subject and what does it mean to have a subject I mean I don't question that there is some perception and that they can be deceived and all these things but what does it take to have a subject and maybe it is the lack of a subject that makes the digital potentially superior to the analog the subjective I don't know um but we also talked about sharing knowledge that one of the benefits of these digital systems is the fact that they can share knowledge and also compress knowledge and through the compression you're creating new connections between facts and information and that the compression itself might improve our understanding and make these new and surprising connections I think that was also something I found very interesting and of course we talked about risks and ethics and so on and this that ethics enters through creating the data that the AI models are trained on I think that's also quite interesting something to think about more and also that this in combination with providing feedback reinforcement will hopefully push and make these models more reasonable and more the way we like them uh and so I think to end actually when I introduced myself I said I was a professor of computer science and the response was I've never taken a computer science course in my life and I got the Turing Award and then you said oh I don't do research in physics and I got the Nobel Prize in physics so I guess there is hope for all of us that we might also someday manage to achieve greatness um but I think it's also interesting that AI is such a broad area which touches upon connects and leverages and influences so many other research fields and going back to the fact of the Nobel Prize I think it's a very interesting question when will every Nobel Prize be what you say supported by AI or when will a single individual be able to get every Nobel Prize through the use of AI or will even AI itself be able to get the Nobel Prize I guess that's more questionable um but to really conclude you mentioned this with a very high uncertainty and I think this is probably one of the things that we can all agree upon that there is a lot of uncertainty going forward we just don't know what's going to happen and of course as a scientist as a researcher I take this as a challenge so I encourage all of you let's figure it out and solve this problem together thank you after that appeal it's very difficult to pick this up but we need to wrap up and a little note just on how this will happen because we will need to guide Professor Hinton out uh so please remain seated when we finalize here for a little while he has a very packed schedule so please respect that then thank you so much um it's very mind-boggling I think we've all been kind of energized by all this um we are in some sense stepping into the unknown and I think that we need to bring one image with us and I'm going to send that to Professor Hinton we have a um a character how do you call those bandes dessinées I get the French word we have a serial character a drawn character um cartoon thank you so much we have a cartoon in Sweden called Bamse it's a teddy bear and he gets very strong when he has had some magic honey and his normal
payoff is always when you're really really strong you need to be really kind so I think that's what we need to build into the models actually as we move forward um somebody told me a little secret so now this is the surprise moment where you're all going to chip in because tomorrow Professor Hinton has his birthday so I think we shall finalize instead of giving him a warm hand

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.