Geoffrey Hinton said he resigned to speak more freely about the technology, saying it’s quickly becoming smarter than humans.

>> You left your job at Google because you want to focus solely on your concerns about AI. You’ve said it could manipulate, or possibly figure out a way to kill, humans. How could it kill humans?

>> If it gets to be much smarter than us, it will be very good at manipulation, because it will have learned that from us. So it would be a more intelligent thing being controlled by a less intelligent thing. It can figure out ways of manipulating people to do what it wants.

>> What do we need to do? Pull the plug, put in far more restrictions and backstops on this? How do we solve this problem?

>> It’s not clear to me that we can solve this problem. I believe we should put a big effort into thinking about ways to solve the problem. I don’t have a solution at present. I just want people to be aware that this is a really serious problem and we need to be thinking about it very hard. I don’t think we can stop the progress. I didn’t sign the petition saying we should stop working on AI, because if people in America stopped, people in China wouldn’t. It’s very hard to verify whether people have been doing it.

>> There have been whistleblowers warning about the dangers of AI. One was forced out of Google for voicing his concerns. Looking back, do you wish you had stood behind these whistleblowers more?

>> That’s actually a woman.

>> Oh, sorry.

>> So they were rather different concerns from mine. I think it’s easier to voice concerns if you leave the company first. The idea is they’ll get more intelligent and take us over.

>> Steve Wozniak, the co-founder of Apple, is also speaking out about the dangers of AI.

>> AI is another more powerful tool, and it will be used by those people, you know, for basically really evil purposes. And I hate to see technology being used that way. It shouldn’t be, and probably some types of regulation are needed.

>> It sounds like you agree.
>> I agree with that, yes.

>> What should that regulation look like?

>> I’m not an expert on how to do regulation. I’m just a scientist who suddenly realized these things are getting smarter than us. I want to blow the whistle and say we should worry seriously about how we stop these things getting control over us. And it’s going to be very hard. And I don’t have the solutions. I wish I did.

>> Does there need to be a meeting of all of the tech groups and governments working on this, Google, China, whatever, to agree on some sort of rules of the road? How do we protect against bad actors or rogue nations harnessing AI?

>> So, for some things it’s very hard, like using AI for manipulating electorates or for soldiers. But for the existential threat of it taking over, we’re all in the same boat. It’s bad for all of us. So we might be able to get China and the U.S. to agree on things like that. It’s like nuclear weapons: if there’s a nuclear war, we all lose, and it’s the same if these things take over. So since we’re all in the same boat, we should be able to get agreement between China and the U.S. on things like that.

>> Do you think tech companies will be the solution, or are they so invested in this financially, and also, let’s be frank, in terms of power, that they’re not going to be part of the solution here?

>> I think the tech companies are the people most likely to be able to see how to keep this stuff under control.