“There might simply not be any humans on the planet at all. This is not an arms race, it’s a suicide race.” — Max Tegmark
“It can be really inconvenient to have to share the planet with much smarter alien minds that don’t care about us. Just ask the Neanderthals how that worked out for them. So my vision is that we humans make sure we put the safety measures in place, so that we keep control over these machines and they help humanity flourish, rather than our losing control over them. Despite very diligent technical work on so-called AI alignment, we have failed — I confess, as an AI researcher myself — to solve this so far. We need a little bit more time. Otherwise there might simply not be any humans on the planet at all within a few decades.”
“The pessimism isn’t because we don’t have ways of solving this. The pessimism is because basically everybody who’s driving the race towards this cliff is in denial about there even being a cliff.”
“The problem is not technical. The problem is really political and sociological, stemming from the fact that most people don’t understand how big the downside is, so nobody even wants to cooperate.”
“This is not a race, ultimately, that anyone is going to win. If we do an out-of-control race, we’re all going to lose. It doesn’t matter to Germany whether the AI that we lose control over, that drives humanity extinct, is American or German or Chinese. It really doesn’t matter. What matters is that nobody does it, and that we can use all this amazing technology to help all humans on the planet get dramatically better off. This is not an arms race, it’s a suicide race.”
“I’m hoping this will make a lot of policy makers realize that this is not science fiction. Intelligence is not something mysterious that can only exist in human brains. It’s something we can also build. And when we can build it, we can also very easily build things which are far beyond us, as far beyond us as we are beyond insects. And obviously we’re building this. We should build AI by humanity, for humanity — not for the purpose of the machines having a great time later on. So making sure we really give ourselves time to control these machines, or at least make sure that they have our values and do what we want — that is more important than any other choice that humanity is making right now.”
— Max Tegmark
435,574 views. 14 Apr 2023.
A leading expert in artificial intelligence warns that the race to develop more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT. But MIT professor Max Tegmark says private companies risk leading the world into dangerous territory without guardrails on their work. His Future of Life Institute issued a letter, signed by tech luminaries like Elon Musk, calling on Silicon Valley to immediately pause work on advanced AI for six months to unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.