FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

>> STEVE: Our next guest is known as the godfather of A.I., and recently he was awarded the Nobel Prize in Physics for his pioneering work in artificial intelligence. He's also gotten under Elon Musk's skin; we will have to ask about that. He's Geoffrey Hinton, professor emeritus of computer science at the University of Toronto, and he joins us now to discuss the promise and perils of advanced AI. So good to have you back in the studio.
>> GEOFFREY: Thank you very much, Steve.
>> STEVE: We're going to show a clip right off the top of, I suspect, one of the better days of your life. Sheldon, if you would.
(clip plays)
>> STEVE: That was King Carl XVI Gustaf of Sweden, and that's you in Stockholm receiving the Nobel Prize. When does the buzz of all of that wear off?
>> GEOFFREY: I'll tell you when it wears off.
>> STEVE: It still hasn't?
>> GEOFFREY: Not completely, no.
>> STEVE: Not completely. How cool a day was that?
>> GEOFFREY: It was amazing, particularly since I don't do physics and I got the Nobel Prize in physics.
>> STEVE: Do you want to explain how that happened?
>> GEOFFREY: I think they wanted to award a Nobel Prize for the developments in AI, because that's where a lot of the excitement in science is now. And so they sort of repurposed the physics one by pretending I did physics.
>> STEVE: Did you point that out to them? "Thanks for the Nobel Prize, but you guys know I don't do physics." Is that what you said? And did they say, "Don't look a gift horse in the mouth," or what? You get a medal, right? Where do you keep the medallion?
>> GEOFFREY: I'm not telling you.
>> STEVE: I'm not going to steal it, Geoffrey.
>> GEOFFREY: No, but somebody else might. It's six ounces of gold; it's worth $15,000.
>> STEVE: So you're not going to tell me whether it's at home or in a safety deposit box or whatever. Fair enough. I'm going to read what you won for. You won for, quote, "foundational discoveries and inventions that enable machine learning with artificial neural networks." And I know you've been asked a million times: what does that mean in English? So let's make it a million and one. What does that mean in English?
>> GEOFFREY: In your brain you have a whole bunch of brain cells called neurons, and they have connections. And when you learn something new, what's happening is you're changing the strengths of those connections. And so to figure out how the brain works, you have to figure out what the rule is for changing the strengths of connections. That's all you need to know. How does the brain decide whether to make a connection stronger or weaker, so that you'll be better at doing something, like understanding what I just said? Your brain has a way of figuring out whether to make the connection strength slightly stronger or slightly weaker, and the question is: what is that way? How does it do it? And what happens if you can mimic that and take a big network of simulated brain cells? We now know what happens. It gets very smart.
>> STEVE: Of all the thousands and thousands of areas of scientific research that you could have done, why that one?
>> GEOFFREY: Because that is clearly the most interesting one.
>> STEVE: To you?
>> GEOFFREY: I think it's the most interesting to everyone, because in order to understand people, you really need to understand how the brain works. And we still don't properly know how the brain works. We have more ideas than we did, but it still seems like a huge issue.
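To make the idea Hinton just described concrete, here is a minimal Python sketch of learning as "changing connection strengths": a single simulated neuron whose weights are nudged slightly stronger or weaker after each example. The toy OR task, the learning rate, and the sigmoid activation are arbitrary choices for illustration, and the update used is the classic textbook delta rule for artificial networks, not a claim about the rule the brain itself uses or a full account of the work the Nobel citation covers.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn logical OR from four examples (inputs -> target output).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

w = rng.normal(scale=0.1, size=2)   # the "connection strengths"
b = 0.0                             # bias: a baseline tendency to fire
lr = 0.5                            # how big each nudge is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    for x, target in zip(X, y):
        out = sigmoid(w @ x + b)        # the neuron's current answer
        err = target - out              # how wrong it was
        grad = err * out * (1.0 - out)  # delta rule: error scaled by the output's slope
        w += lr * grad * x              # strengthen or weaken each connection
        b += lr * grad

print([round(float(sigmoid(w @ x + b)), 2) for x in X])
# After training the outputs are roughly [0.0, 1.0, 1.0, 1.0]: the weights have learned OR.

The same principle, applied to networks with billions of simulated neurons instead of one, is what Hinton means by taking "a big network of simulated brain cells" and watching it get smart.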
>> STEVE: You were obviously well known before you got the Nobel, but then you got the Nobel, and of course that has an explosive effect on one's profile. Since then, you have been warning us about the perils of AI; you quit your job at Google a couple of years ago because of concerns about this. So let's break this down. The short-term risks of not having adequate control of artificial intelligence are what, in your view?
>> GEOFFREY: Okay, so there are really two kinds of risk. There are risks due to bad actors misusing it, and those are the more short-term risks. Those are more immediate; it's already happening. And then there's a completely different kind of risk, which is: when it gets smarter than us, is it going to want to take over? Is it going to want to just brush us aside and take over? How many examples do you know of more intelligent things being controlled by much less intelligent things?
>> STEVE: Not many.
>> GEOFFREY: We know more intelligent people can be controlled by less intelligent people, but that's not a big difference in intelligence.
>> STEVE: I was going to make a Trump joke there, but never mind. We're going to move on.
>> GEOFFREY: So was I, but I avoided it. Just alluded to it.
>> STEVE: Bad actors. Let's start there. Give us an example of the concern you have with bad actors exploiting this.
>> GEOFFREY: Well, somebody getting lots of data about people and using that data to target fake AI videos to persuade those people, for example, not to vote. That would be a bad actor; that would be a problem, yes. And those are the kinds of problems we're already facing. Cyberattacks: between 2023 and 2024, phishing attacks went up by 1,200 percent. There were 12 times more phishing attacks in 2024 than in 2023. And that's because these large language models made them much more effective. It used to be you'd get a phishing attack where the syntax was slightly wrong, a kind of direct translation from the Ukrainian or whatever, and the spelling was slightly wrong, and so you knew it was a phishing attack. Now they're all in perfect English.
>> STEVE: It's getting very sophisticated now. How about examples of the second thing you said: less intelligent people being in control of smarter people, or dumber things being in control of smarter things?
>> GEOFFREY: There's only one example I know of, actually, and that is a baby and a mother. It's very important for the baby to control the mother, and evolution put a huge amount of work into making the baby's cries unbearable to the mother. But that's about it.
>> STEVE: The longer-term risk you're worried about: just how long-term is it?
>> GEOFFREY: The long-term risk is that it's going to get smarter than us. Almost all the leading researchers agree that it will get smarter than us; they just disagree on when. Some people think it's maybe 20 years away. Other people think it's three years away. A few people think it's one year away. So we all agree it's going to get smarter than us, and the question is what happens then. And basically, we have no idea. People have opinions, but we don't have any really good foundation for estimating these probabilities. So I would guess there's a 10 to 20 percent chance it will take over, but I have no idea, really. It's more than 1 percent and it's less than 99 percent.
>> STEVE: When you say "take over," I think you've gone further than that. I think you said there's a 10 to 20 percent chance that we will be rendered extinct.
>> GEOFFREY: If it takes over, that's what will happen.
>> STEVE: Do you want to give us a timeframe on that?
>> GEOFFREY: No, because, as I say, there's no good way to estimate it. But if we don't do something about it now, it may not be that long. Right now we are at a point in history where there's still a chance we could figure out how to develop superintelligent AI and make it safe. We don't know how to do that. We don't even know if it's possible. Hopefully it is possible, and if it is, we ought to try to figure that out. And we ought to spend a lot of effort trying to figure that out.
>> STEVE: Can you play out that scenario for us? How would they render us extinct, given their superiority?
>> GEOFFREY: There are so many different ways they could do that if they wanted to; I don't think there's much point in speculating. I don't think it would be like Terminator. They could, for example, create a virus that just kills us all.
>> STEVE: So clearly we have to get a handle on that. Are we doing it?
>> GEOFFREY: It would be good to get a good handle on that. There is research on safety, and there's research on this existential threat that they might just take over, but not nearly enough. And the big companies are motivated by short-term profits. What we need is for the people to tell the governments that they ought to make these big companies do more research on safety. They ought to spend something like a third of their resources on it.
>> STEVE: How is that going?
>> GEOFFREY: People are becoming more aware. Politicians are becoming more aware. Recently, in the States, there was a step backwards.
>> STEVE: Do you want to refer to what you're talking about there?
>> GEOFFREY: The Biden administration was interested in AI safety and had an executive order, and I think it has gone the way of all Biden's executive orders under Trump.
>> STEVE: So it has been reversed. And I presume it's been reversed because the richest, techiest people in the United States are all supporting this administration right now. Is that fair to say?
>> GEOFFREY: It is sad to say, yes.
>> STEVE: All right. Clearly we'd like to see us get a handle on this. What can we do, since it appears there isn't a consensus to do anything about this at the moment?
>> GEOFFREY: The first thing to do is build consensus that this is a really serious problem, not science fiction, and then we need to persuade them to do more research on safety. It's like climate change. You have to first build consensus that there really is climate change and that it's really going to be terrible if we don't do anything about it, and then you can start getting action. Not enough action, but at least some. With this, we first need the consensus. But one piece of good news is that on the existential threat, that it might wipe people out, all the different countries should be able to collaborate. We should be able to collaborate with the Chinese. Actually, I'm not sure who "we" is anymore. I used to think of "we" as Canada and America, but that's not a "we" anymore.
>> STEVE: It is not. You're right.
>> GEOFFREY: Countries should be able to collaborate, because nobody wants to get wiped out. The Chinese leaders don't want to get wiped out; Trump doesn't want to get wiped out. They can collaborate on the existential threat. So that's a little piece of good news. But the bad news is we don't know what to do about it. We desperately need research now to figure out what to do about it.
>> STEVE: Is there an international institution leading the way to get that collaboration?
>> GEOFFREY: There are a number of organizations trying to be dominant, but not one yet.
>> STEVE: Is it a job for the UN, or who?
>> GEOFFREY: The UN is sort of pathetic, right?
>> STEVE: Who is up to it?
>> GEOFFREY: The big companies have the resources. To do research on AI safety, you need to be dealing with the latest, most advanced models, and only the big companies have the resources to train those.
>> STEVE: Let's talk about the richest man in the world, shall we? Well, I gather you're not on his Christmas card list anymore.
>> GEOFFREY: I agree with him on various things. I agree with him on the existential threat, for example; he takes it seriously. And he has done some good things, like electric cars, and communications for people in Ukraine using Starlink. So he's definitely done some good things. But what he's doing now with DOGE is obscene. What's happening is he's cutting, almost at random, lots of government workers, good, honest people who go to work and do their job. He's accusing them of being corrupt and lazy and useless and just cutting their jobs. And it's going to be terrible. It's going to have terrible consequences for people, and he just doesn't seem to care. The only time I've seen him care was when I criticized him and he said I was cruel.
>> STEVE: Well, let's do this. You went on his home turf, X, formerly Twitter, and you tweeted: "I think Elon Musk should be expelled from the British Royal Society, not because he peddles conspiracy theories and makes Nazi salutes, but because of the huge damage he is doing to scientific institutions in the U.S. Now let's see if he really believes in free speech." And apparently you caught his attention, because he tweeted back at you: "Only craven, insecure fools care about awards and memberships. History is the actual judge, always and forever. Your comments above are carelessly ignorant, cruel and false. That said, what specific actions require correction? I will make mistakes, but endeavor to fix them fast." Okay, what was your reaction to his tweet?
>> GEOFFREY: I thought it best not to get involved in a long series of exchanges with Elon Musk, because I want to be able to get into the U.S. And my friend Yann LeCun answered those questions.
>> STEVE: Okay, and where would we be able to see the answers?
>> GEOFFREY: On Twitter.
>> STEVE: So that's the only interaction you've had directly with him?
>> GEOFFREY: A couple of years ago he asked me to call him, because he wanted to talk about the existential threat. Actually, he wanted to recruit me to be an advisor for X.
>> STEVE: What did you say?
>> GEOFFREY: So we talked about the existential threat for a bit, and then he asked if I would be an adviser for his new xAI company, and I said no. He thought I might agree because he employed one of my best students as one of the technical people there. And then he started just rambling, so I made up a meeting and said, "I'm sorry, Elon, I have another meeting, so I have to go."
>> STEVE: And that was it? If I can sort of break this thing in two: he took some fairly personal shots at you at the beginning, as you did at him, fair enough. Not everybody agrees that what he was doing when he got up on stage and did that thing was a Nazi salute. He would argue he was just throwing his heart out to the crowd. You're not buying that? Not buying that.
>> GEOFFREY: Particularly if you look at his history and his parents' views and so on.
>> STEVE: He does seem to cozy up to some situations here and there. But then the second part of his tweet is rather constructive: he asked what specific actions require correction.
>> GEOFFREY: And I let somebody else answer that.
>> STEVE: Do you want to share one or two of the things that you think he ought to do?
>> GEOFFREY: Well, let's get straight what's going on here. He wants there to be an enormous tax cut for the rich. He wants a $4 trillion tax cut; that's what it's going to cost. And in order to get the money for that without hugely increasing the national debt, they have to cut somewhere.
>> STEVE: Or put tariffs on us.
>> GEOFFREY: The two things they're planning to do are cut government spending and have tariffs, which are really a tax on the poor. Tariffs are a non-progressive tax; they're going to make everything more expensive. And so normal people are going to end up paying $4 trillion more for what they buy, to pay for the tax cuts for the rich. This is disgusting.
>> STEVE: This is government policy in the United States right now, which you find disgusting. You talk about damage to scientific institutions in the United States. Referring to what?
>> GEOFFREY: Well, for example, putting a crazy guy with a worm in his brain in charge of the health system.
>> STEVE: That would be RFK Jr. you're referring to. You don't like anything of what he is doing right now?
>> GEOFFREY: No, I wouldn't say that. These things are never completely black and white. I think his emphasis on people having a healthy diet is important. Maybe some of the things he's dead against, like seed oils, aren't quite right. But the idea that people should have a healthy diet and that it will improve health, that's an important idea, and he sort of pushes that a bit. But most of the rest of what he says is just nonsense.
>> STEVE: You don't share his suspicions about vaccines and pharma and how we get autism and that kind of thing?
>> GEOFFREY: No, I don't. There's been a lot of research on that already, and people took it very seriously because of all these claims. Most of the people who push that just want to sell you meds or sell you something; they're doing it as a sales technique to get your attention. They don't really believe it themselves. He got his own kids vaccinated, as far as I know.
>> STEVE: That says a lot. It reminds me of how Fox News would be broadcasting 24-7 against mandatory vaccination, and yet all the Fox employees had to get vaccinated. There you go. We've talked a lot about the perils of AI. Is there anything you can leave with us that should make us somewhat optimistic that things may actually work out?
>> GEOFFREY: Well, one of the reasons AI will be developed, and you can't stop it now, is that so many good things will come out of it. So, for example, in healthcare, it's going to do amazing things. You're going to get much, much better healthcare. You're going to have a family doctor who's seen 100 million patients, who knows and remembers the results of all the tests that have ever been done on you and on your relatives, and who can give much, much better diagnoses. Already, an AI system working with a doctor makes far fewer errors in diagnosing complex cases than a doctor alone. So that's already happening, and it's going to get much better. It's going to be amazing in education.
We know that a kid with a private tutor will learn about twice as fast, because the tutor can see what the kid misunderstands. Now, AI systems aren't there yet, but sometime in the next 10 years, probably, they'll be really good. And so when a kid is learning something, the AI system will be able to see exactly what it is the kid misunderstands, because the AI system has seen a million other kids, right? It knows exactly what the kid misunderstands, exactly what example to give the kid to make the misunderstanding clear. And so if a private tutor who's a person is, like, two times better, these will be three or four times better. It may not be good news for universities, but it's very good news for people learning stuff.
>> STEVE: Not good news for universities because?
>> GEOFFREY: We may not need them anymore. You will need them for doing graduate research. I think you will still need an apprenticeship to learn how to do research, because we can't say how you do research. We can say, "I would tackle it this way," but we can't really give the rules; there aren't many rules in an apprenticeship.
>> STEVE: All the kids who said it was a great idea to go to university and learn to code, to do computer science, are they in trouble now?
>> GEOFFREY: They may well be, yes. Though in computer science you are learning more than how to code.
>> STEVE: They call you the godfather of AI. Do you like the title?
>> GEOFFREY: I quite do, actually. It wasn't intended kindly. Somebody started calling me that after a meeting in which I was interrupting people.
>> STEVE: And therefore?
>> GEOFFREY: They called me the Godfather. It was a meeting in Windsor, in England, and after the meeting Andrew started referring to me as the Godfather.
>> STEVE: Because you cut people off?
>> GEOFFREY: I was the oldest guy there and was pushing people around.
>> STEVE: All right, got it. Half of your Nobel money, which I gather is, what, $350,000, something like that?
>> GEOFFREY: The whole prize is about a million dollars, so half is half a million.
>> STEVE: Half a million dollars. Of the half a million, you donated $350,000 to Water First. Do I have that right?
>> GEOFFREY: A quarter of a million U.S. is $350,000 Canadian.
>> STEVE: What is Water First?
>> GEOFFREY: Water First is an organization that trains people who live in Indigenous communities in water technologies, so that people who live in the community can make their water safe.
>> STEVE: And why did you pick them?
>> GEOFFREY: I adopted a child in Peru and lived there for two months, and you could not drink the tap water. It was kind of lethal. And so I experienced what it's like not to have safe drinking water. If you have a baby and you don't have safe drinking water, it just occupies all your time, figuring out how you're going to stop the baby from getting sick. It's a crazy extra burden to impose on people. And I think it's kind of obscene that in a rich country like Canada, there are all these Indigenous communities that don't have safe drinking water. In Ontario, 20 percent of the Indigenous communities don't have safe drinking water.
>> STEVE: This will not satisfy you, and I don't mean it to satisfy you, but it is better today than it was a decade ago.
>> GEOFFREY: Maybe.
>> STEVE: It is.
>> GEOFFREY: They should all have safe drinking water.
>> STEVE: Of course they should. Of course they should. What's ahead for you?
>> GEOFFREY: I'm trying to retire, but I'm doing a bad job of it.
>> STEVE: How old are you?
>> GEOFFREY: 77.
>> STEVE: That is way too young to retire.
>> GEOFFREY: I left Google at 75 because I wanted to retire.
>> STEVE: You've got a lot of runway left still. I mean, you look awfully good for 77, I've got to say. I think you've got at least one or two or maybe three chapters left.
>> GEOFFREY: You look awfully good for 77 too.
>> STEVE: A good makeup artist makes all the difference, let me tell you. I am so grateful you could spare some time to come in and take these important questions today. Who knows, maybe you and Elon will get back together again and try to solve these problems that we need solutions to.
>> GEOFFREY: I think that is improbable.
>> STEVE: That is Geoffrey Hinton, professor emeritus of computer science at the University of Toronto, and a Nobel laureate in physics. Thank you for joining us on TVO tonight.