“What I’m concerned about is that all of the safety protocols that people have warned about are being completely blown through. So for many years AI safety experts have said don’t give AI the ability to use the internet. Don’t give AI the ability to write its own code. We’ve gone right through that.” — Peter Berezin
“The safety risks around AI are huge, and we think there is a more than 50/50 chance AI will wipe out all of humanity by the middle of the century… Firms that embrace AI safety as part of their offerings will prosper.” — ChatGPT And The Curse Of The Second Law. BCA Research Report.
Peter Berezin, BCA chief global strategist, joins ‘Power Lunch’ to discuss the future of A.I.
…but they are clear there will be a serious impact, and here to discuss is the strategist behind this note, BCA Research's Peter Berezin, along with our own Steve Kovach. Peter, I don't see a tin foil hat. I mean, you look like a normal guy sitting in a normal office. Just talk to us about this call.

I think the mistake that people are making is thinking linearly when they should be thinking exponentially. We made the same mistake at the start of the pandemic: people didn't take the spread of the virus seriously enough because they were just seeing a few cases, but then those few cases became a few more cases, and before you knew it you had a massive pandemic that was ravaging the globe. I think it's the same with AI. People are thinking linearly when they should be thinking exponentially. We've been on this exponential curve with AI for many years, probably many decades, but it's only now that we're reaching the point where AI can do a lot of the things that people can do. That process will continue, and because we're on that exponential curve, it'll probably happen a lot more quickly than people realize. So the upside is absolutely huge. I argue in the report that AI could have the same benefit to growth as prior technological revolutions, the Agricultural Revolution and the Industrial Revolution, all of which saw growth increase by over 30-fold. So we're talking about global GDP growth potentially of over 100 percent. Now, that's way above consensus.

That is amazing, and let me jump in on the downside too. The transmission mechanism: is this basically saying AI wipes out humanity because the code writes code? Knowing, again, that we're speculating about an exponential future that none of us can really foresee, what is the direct cause and effect that you are concerned about?

What I'm concerned about is that all of the safety protocols that people have warned about are being completely blown through. For many years, AI safety experts have said don't give AI the ability to use the internet, don't give AI the ability to write its own code. What have we done? We've gone right through that. And so the risk is that ChatGPT 6 won't be written by humans; it'll be written by ChatGPT 5.5, and ChatGPT 6.5 will be written by ChatGPT 6, and so on and so forth. It becomes an exponential growth in intelligence, and it's going to be unlike anything that we've ever seen before. Humans are used to being the top species on the planet, but we may not be if machine intelligence arrives, and it could arrive much more quickly than a lot of people are anticipating.

I feel like we're at the beginning of a disaster film where they show the news clips, like The Last of Us, right? That's how it started off. But Peter, I've got a question for you, because especially from Microsoft and Google, the two leaders here, and OpenAI, we keep hearing the word responsible: we're going to do this responsibly. It seems like you're not buying that. What do you think?

Well, those companies are responsible to their shareholders; they're not necessarily responsible to humanity as a whole, and I think they have been rather complacent about some of these risks. So I think those risks are there, and again, it doesn't have to be a Terminator type of scenario. If you have an advanced enough AI and you say, okay, we've got this problem, global warming, come up with and implement a solution that alleviates global warming, and you don't say anything more than that, it's quite possible that the AI will say, okay, well, nuclear war would reduce the temperature of the planet. The problem, and we've seen this over and over again with complicated systems, is that they can be very, very unpredictable. You give them a set of goals, and then they have to adopt certain sub-goals to achieve those end goals, and those sub-goals could be not the sort of thing that we want them to do.
I just filmed a segment on CNBC’s Power Lunch about my latest report on AI. I argued that we are making the same mistake that we made at the start of the pandemic: We are thinking linearly about AI’s potential when we should be thinking exponentially. pic.twitter.com/TtkZuY6Pfr
— Peter Berezin (@PeterBerezinBCA) May 12, 2023