“We wouldn’t be able to prove that a system that powerful [ASI] could be contained and would be safe, and therefore, until we can prove unequivocally that it is, we shouldn’t be inventing it. That, I think, is a pretty straightforward common-sense reality.”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“We wouldn’t be able to prove that a system that powerful [10,000X a human] could be contained and would be safe, and therefore, until we can prove unequivocally that it is, we shouldn’t be inventing it. That, I think, is a pretty straightforward common-sense reality. We can still get tons and tons of benefit from building these narrow, practical, applied AI systems. They’ll still talk to us; we’ll have personal assistants; they will automate a bunch of work that we don’t want to do; they will create vast amounts of value. We’ll have to figure out how we redistribute that value so that everybody ultimately ends up with an income. But that does not mean that we have to create a superintelligence. It just means that we will have created a huge amount of value in the world, and the current structure of society, and the politics and governance around that, is going to look very different to what it is today.” (53:57)

“The argument that I’ve often made, and others in the field have too, is that a system [ASI] that powerful is unlikely to follow your instruction to obey you as a ruler any more than it would as an enemy, because at that point it’s not going to care whether you’re China or India or Russia or the UK, or whether you’re a government or just a random academic. There’s going to be a question of how you actually constrain something that powerful, regardless of where you’re from. So I think that’s an initial starting point for thinking about how we all add some serious caution here, if and when we get to that moment in decades to come. Just to be clear, we’re nowhere near that right now, but it’s a question that we have to start thinking about. It’s scary to see the prediction that AI could then self-improve, because it seems like as soon as it gets to that point, that curve could go so fast that we just wake up one day and it surprises everybody.” (56:18)
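To make the “curve could go so fast” intuition concrete, here is a minimal Python sketch contrasting steady exponential progress with a feedback loop in which capability raises its own rate of improvement. All parameters are illustrative assumptions, not figures from the interview:

```python
# Illustrative parameters only; not from the interview.

def fixed_rate(capability: float, steps: int, rate: float = 0.1) -> list[float]:
    """Ordinary progress: capability grows by a constant fraction each step."""
    out = [capability]
    for _ in range(steps):
        capability *= 1 + rate
        out.append(capability)
    return out

def self_improving(capability: float, steps: int, feedback: float = 0.1) -> list[float]:
    """Self-improvement: the growth rate itself scales with current capability."""
    out = [capability]
    for _ in range(steps):
        capability *= 1 + feedback * capability
        out.append(capability)
    return out

if __name__ == "__main__":
    for step, (a, b) in enumerate(zip(fixed_rate(1.0, 15), self_improving(1.0, 15))):
        print(f"step {step:2d}   steady: {a:8.2f}   self-improving: {b:12.2e}")
```

For many steps the two trajectories look similar; then the self-improving one diverges abruptly, which is the “wake up one day and it surprises everybody” dynamic in miniature.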

“AI is mostly going to target white-collar work… What’s actually going to happen is that knowledge workers who work in a big bureaucracy, who spend most of their time doing payroll or administration or supply-chain management or accounting or paralegal work, these kinds of things, and I think we’re already seeing it in the last 12 months or so, are going to be the first to be displaced… AIs use the same tools that we use to do our work: they use browsers, they’ll be able to navigate using a mouse and a keyboard, effectively in the back end using APIs, and they can process images, so they can just read the screen of what is on your desktop or inside your web page. They can now write and send emails, negotiate contracts, design blueprints, produce entire spreadsheets and slide decks, and write the contract. Those skills combined are what most of us do day to day in our regular jobs, in white-collar work, and so that’s what we’re going to have to confront over the next decade or two.” (18:22)
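The tool-use loop described here (read the screen, decide, act through the same interfaces a person would) can be sketched in a few lines. Every function below (capture_screen, model_decide, execute) is a hypothetical placeholder standing in for a real multimodal model and automation stack, not any actual library API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "click", "type", "api_call", "done"
    payload: dict

def capture_screen() -> bytes:
    """Stub: a real agent would grab the pixels of the desktop or web page."""
    return b""

def model_decide(goal: str, observation: bytes) -> Action:
    """Stub: a real multimodal model maps (goal, screenshot) to the next action."""
    return Action(kind="done", payload={})

def execute(action: Action) -> None:
    """Stub: a real agent would drive a mouse/keyboard or call an API."""

def run_agent(goal: str, max_steps: int = 50) -> None:
    """Perceive-decide-act loop: read the screen, pick an action, carry it out."""
    for _ in range(max_steps):
        observation = capture_screen()         # "read the screen"
        action = model_decide(goal, observation)
        if action.kind == "done":
            break
        execute(action)                        # click, type, send an email...

run_agent("draft and send the weekly status email")  # hypothetical goal
```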

“You can actually see the power of these models in practice, and then the second thing is just that the rate of improvement is kind of incredible. What’s driving this rate of improvement is training these large-scale models, and what we’ve seen over the last ten orders of magnitude of computation (10x, ten times in a row, of adding more computers to train these large models) is that with each order of magnitude you get better results: the image quality is better, the speech recognition is better, the language translation is better, the transcription is better, the language generation is better. You can clearly see that this curve has been very predictable, and over the next five to ten years many labs are going to add orders of magnitude, 10x per year, year after year. And so I think it’s quite reasonable to predict that there’s going to be a new set of capabilities beyond just understanding images and video and text: AIs are going to be able to take actions, they’re going to be able to use APIs, they’re going to be able to predict and plan over extended time sequences. And so I think that’s why we’re all predicting that this time is different.” (23:22)
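The “better results with each order of magnitude” pattern is commonly modeled as a power law in training compute. A back-of-the-envelope sketch, where the constants are purely illustrative assumptions rather than fitted values from any paper:

```python
# Power-law sketch: loss ~ a * compute**(-b).
# Constants a and b are illustrative assumptions, not fitted values.

a, b = 10.0, 0.05

for order in range(11):                 # ten orders of magnitude, 10x at a time
    compute = 10.0 ** order
    loss = a * compute ** (-b)
    print(f"compute = 1e{order:02d}  ->  loss ~ {loss:.2f}")
```

Each additional 10x of compute shaves off a roughly constant fraction of the loss, which is why the curve looks so predictable order over order.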

“We have to talk about the potential ways in which things can go wrong so that we can proactively manage them, and so we can actually start putting in place checks and balances and limits, and not just have a bias towards optimism that leads to us missing the boat when it comes to consequences that affect everybody.” (29:10)

“This 10x increase in the amount of compute used to train the cutting-edge AI models per year: so instead of doubling per year, which is the Moore’s law trend, we’re increasing the amount of compute by ten times per year, because in this case we don’t need the compute to be smaller; we can just daisy-chain more computers together. So our server farm at Inflection, for example, is the size of four football pitches. It’s absolutely astronomical; it uses like 50 megawatts of power… That is accelerating much, much faster than Moore’s law and is going to continue for many years to come.” (32:29)
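A quick arithmetic comparison of the two growth rates in this quote, doubling per year versus 10x per year (the starting point and horizon below are arbitrary illustration values):

```python
# Doubling per year (the Moore's-law pace as quoted) vs. 10x per year.
# Starting point and horizon are arbitrary illustration values.

for year in range(6):
    moore = 2 ** year        # ~2x per year
    frontier = 10 ** year    # ~10x per year
    print(f"year {year}: doubling -> {moore:>3}x   10x/year -> {frontier:>8,}x")
```

After five years the gap is already 32x versus 100,000x, which is the sense in which the training-compute trend is “accelerating much, much faster than Moore’s law.”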
