More precisely, “safety engineering” is the approach that makes AI safe.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

CNBC. Richard Branson and Oppenheimer’s grandson urge action to stop AI and climate ‘catastrophe’

KEY POINTS
  • Billionaire Virgin Group founder Richard Branson, former U.N. Secretary-General Ban Ki-moon and Charles Oppenheimer, J. Robert Oppenheimer’s grandson, are among names calling for action to address escalating risks surrounding the climate crisis, pandemics, nuclear weapons and AI.
  • They signed an open letter released Thursday by The Elders, a nongovernmental organization set up by former South African President Nelson Mandela and Branson to address global human rights issues.
  • Future of Life Institute founder Max Tegmark, one of the signatories, told CNBC that, while not in and of itself “evil,” the technology remains a “tool” that could lead to some dire consequences, if it is left to advance rapidly in the hands of the wrong people.

Dozens of high-profile figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis.

Virgin Group founder Richard Branson, along with former United Nations Secretary-General Ban Ki-moon, and Charles Oppenheimer — the grandson of American physicist J. Robert Oppenheimer — signed an open letter urging action against the escalating dangers of the climate crisis, pandemics, nuclear weapons and ungoverned AI.

The message asks world leaders to embrace a long-view strategy and a “determination to resolve intractable problems, not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”

“Our world is in grave danger. We face a set of threats that put all humanity at risk. Our leaders are not responding with the wisdom and urgency required,” said the letter, which was published Thursday and, according to a spokesperson, shared with global governments.

“The impact of these threats is already being seen: a rapidly changing climate, a pandemic that killed millions and cost trillions, wars in which the use of nuclear weapons has been openly raised,” the letter continued. “There could be worse to come. Some of these threats jeopardise the very existence of life on earth.”

Signatories called for urgent multilateral action, including through financing the transition away from fossil fuels, signing an equitable pandemic treaty, restarting nuclear arms talks and building global governance needed to make AI a force for good.

The letter was released Thursday by The Elders, a nongovernmental organization that was launched by former South African President Nelson Mandela and Branson to address global human rights issues and advocate for world peace.

The message is also backed by the Future of Life Institute, a nonprofit organization set up by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, which aims to steer transformative technology like AI toward benefiting life and away from large-scale risks.

Tegmark said The Elders and his organization wanted to convey that, while not in and of itself “evil,” the technology remains a “tool” that could lead to some dire consequences, if it is left to advance rapidly in the hands of the wrong people.

“The old strategy for steering toward good uses [when it comes to new technology] has always been learning from mistakes,” Tegmark told CNBC in an interview. “We invented fire, then later we invented the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seatbelt and the traffic lights and speed limits.”

‘Safety engineering’

“But when the power of the technology crosses a threshold, the ‘learning-from-mistakes’ strategy becomes awful,” Tegmark added.

“As a nerd myself, I think of it as safety engineering. When we sent people to the moon, we carefully thought through all the things that could go wrong when putting people on explosive fuel tanks and sending them where no one could help them. And that’s why it ultimately went well.”

He went on to say: “That wasn’t ‘doomerism.’ That was safety engineering. And we need this kind of safety engineering for our future also, with nuclear weapons, with synthetic biology, with ever more powerful AI.”

The letter was issued ahead of the Munich Security Conference, where government officials, military leaders and diplomats will discuss international security amid escalating global armed conflicts, including the Russia-Ukraine and Israel-Hamas wars. Tegmark will be attending the event to advocate the message of the letter.

The Future of Life Institute last year also released an open letter backed by leading figures including Tesla boss Elon Musk and Apple co-founder Steve Wozniak, which called on AI labs like OpenAI to pause work on training AI models that are more powerful than GPT-4 — currently the most advanced AI model from Sam Altman’s OpenAI.

The technologists called for such a pause in AI development to avoid a “loss of control” of civilization, which might result in a mass wipeout of jobs and an outsmarting of humans by computers.
