
The standard way of thinking, coming up with a way of thinking about the problem that is different from the way everyone else has thought about things before: AlphaGo certainly was not doing that. So the reason we didn't understand Move 37 is just that the search tree it builds is too big, and it takes too long to go through all the reasoning steps that it took. But the reasoning steps it is doing are perfectly classical; we might call them logical, symbolic reasoning steps.

The other sense in which we don't understand these systems is that they are not doing classical logical, symbolic reasoning steps at all. They are passing real numbers, fractional numbers like 1.6324 or minus 0.812, through a giant circuit, and doing various arithmetic operations on those numbers as they pass through the circuit. Again, we could trace each of those calculations. But in the case of the game tree that AlphaGo builds, we know what each step means: it is a move that could be played by one player or the other in the game. In the calculations that the network is doing, what does 1.6123 times 8.11 mean? It doesn't mean anything. It doesn't have semantics as a reasoning step.

So that is the sense in which we don't understand what the deep networks are doing: not only are they doing quintillions of calculations, but we don't know what any single one of those calculations means, because it doesn't have a meaning. It is just that, in aggregate, if you do the quintillions of steps, you get out a system that behaves well. We can do some reverse engineering, and we are able to perceive little bits and pieces of what is happening inside these giant circuits, but it is very partial and very unreliable. It is a little bit like trying to interpret what is happening inside a human brain.

So the question is: could we build AGI by methods where we do understand the individual steps? Then we could verify the method of reasoning and show that it is correct, and verify the individual pieces of knowledge that are being composed to produce the answers to questions. I would say probably yes, we can build AGI that way, but there is still a lot of work to do. (55:28)
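To make the contrast concrete, here is a minimal sketch (not from the interview, and nothing like AlphaGo's actual code; all names and numbers are invented for illustration). A game-tree step is interpretable on its own, while a single multiply-add inside a network is not:

```python
# A game-tree step: every node is interpretable as a move in the game.
class TreeNode:
    def __init__(self, move, player):
        self.move = move        # e.g. a board coordinate such as "Q16"
        self.player = player    # "black" or "white"
        self.children = []

root = TreeNode(move="Q16", player="black")
root.children.append(TreeNode(move="D4", player="white"))
for child in root.children:
    # Each step has semantics: a legal move by a specific player.
    print(f"{child.player} plays {child.move}")

# A network step: individual multiply-adds with no standalone meaning.
weights = [[1.6123, -0.812], [0.331, 2.05]]   # made-up numbers
inputs = [8.11, -1.3]

for row in weights:
    total = 0.0
    for w, x in zip(row, inputs):
        # What does 1.6123 * 8.11 "mean"? Nothing by itself; only the
        # aggregate of vast numbers of such operations yields behavior.
        print(f"{w} * {x} = {w * x:.4f}")
        total += w * x
    print(f"unit output (pre-activation): {total:.4f}")
```

The point is that we can print every arithmetic operation the second half performs and still learn nothing about why the network behaves as it does, whereas every node in the first half is meaningful in isolation.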

Formal Oracles

I don't think we can fully avoid autonomy. What we might do is separate out the functions where we need autonomy and restrict the capabilities there, and leave the superhuman capabilities for systems that are not agents. Here is a direction I think is quite plausible: what we call formal oracles. An oracle, in computer science generally, is just a system that you pose a problem to, and it gives you an answer, yes or no. A formal oracle is one whose internal operations are steps in a provably sound reasoning system. For example, imagine a logic-based theorem prover: it just does logical inference steps, the kind of thing your math teacher tried to drill into you all those years ago, and that is the only thing it can do. It can choose which logical inference step to do next, and if it is really intelligent, it will be able to prove new mathematical theorems that we never conceived of, or that would be much too difficult for humans to prove. A mathematical theorem sounds like a very narrow thing, but in fact a design for a building that satisfies structural and functional properties is a mathematical theorem: there is a theorem stating that that design will have those structural and functional properties. So such a system can be incredibly useful, but it is still, I believe, completely safe, because it is not an agent. It can only answer yes or no, and it always has to tell the truth, because every one of its operations is an operation of a logically sound reasoning system. I think this is a direction we probably should pursue quite urgently. (1:12:28)
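To make the formal-oracle idea concrete, here is a tiny sketch in Lean 4, a proof assistant chosen here purely as an illustration of a provably sound reasoning system; the speaker does not name any particular prover. Each inference step is checked by a small trusted kernel, so a completed proof cannot assert a falsehood:

```lean
-- Every step below is checked by Lean's kernel: a completed proof
-- is sound by construction, so the "oracle" cannot lie.

-- Pose a yes/no question: does n + 0 = n hold for every natural number?
theorem add_zero_id (n : Nat) : n + 0 = n := by
  rfl  -- holds by definition of addition on Nat

-- A slightly less trivial question, proved by induction; each inference
-- step (base case, inductive step) is individually verified.
theorem zero_add_id (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The relevant design property is the one described above: the system's only output is a proof that either checks or does not, so however intelligently it searches for proofs, it has no channel through which to act in the world or to assert something untrue.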

