Sam Altman — who warned AI poses ‘risk of extinction’ to humanity — is also a ‘doomsday prepper’

By Thomas Barrabi

Published June 5, 2023, 5:24 p.m. ET

OpenAI boss Sam Altman is a self-admitted doomsday prepper who once bragged about his stash of guns, gold and other survival goods — long before he and other experts warned AI posed a “risk of extinction” to humanity on par with nuclear weapons and pandemics.

In 2016, a profile of Altman in the New Yorker recounted a conversation in which he told two tech entrepreneurs that one of his hobbies, besides collecting cars and flying planes, was prepping “for survival” in the event of catastrophe – such as a lethal synthetic virus or the onset of a rogue AI “that attacks us.”

“I try not to think about it too much,” Altman reportedly said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

The OpenAI executive downplayed his past remarks during an April appearance on the podcast “Honestly with Bari Weiss,” telling the journalist he was not a doomsday prepper “in the way I would think about.”

“It was like a fun hobby, but there’s nothing else to do. None of this is going to help you if [artificial general intelligence] goes wrong, but it’s like, a fun hobby,” Altman said.

Altman’s doomsday vision of AI gone wrong is commonplace in Silicon Valley, where a growing number of tech billionaires have poured money into post-apocalyptic contingency plans such as remote bunkers in recent years.

Some, such as Peter Thiel and Google co-founder Larry Page, have snapped up land in New Zealand. The same New Yorker profile revealed Altman’s “backup plan” was to fly to New Zealand with Thiel if society crumbled.

Critics of tech’s obsession with a “Terminator”-like future, such as Douglas Rushkoff, author of “Survival of the Richest: Escape Fantasies of the Tech Billionaires,” have taken a cynical view of the recent AI panic.

They argue the tech industry’s warnings of potential Armageddon are a convenient distraction from more pressing issues – and a way to secure a seat at the table to shape regulations that will raise the barrier to entry for potential AI rivals.

It’s also a way for Altman and other frontrunners to tout their progress toward achieving artificial general intelligence, or systems with human-level abilities, when current services like ChatGPT are actually just “trained” to regurgitate information, Rushkoff added.

“The real problem with tech bro existential panic at scale is it tends to make them ignore, or even exacerbate, the problems that are happening right now,” Rushkoff told The Post.

“They’re focused on a kind of Terminator end-game nightmare, and for me, it’s a way of distracting themselves from the very real and present dangers they pose to us,” he added. “It’s not the AI that’s going to set off a nuclear bomb in a fictional future. It’s the stuff they’re doing right now at this very moment.”

Nonetheless, catering to paranoid tech bros has become a lucrative business for some firms – including Rising S, a Texas-based bunker manufacturer that builds shelters and installs them at a customer’s preferred destination.

Rising S sells about 20 to 40 bunkers per year, depending on the size and complexity, according to Gary Lynch, a general manager at the firm for the last 20 years. The company lists prices ranging from $49,000 for its smallest model to more than $9.6 million for a decked-out bunker dubbed “The Aristocrat.”

“I’ve absolutely sold plenty to tech people,” Lynch told The Post, noting a rush of sales to customers in Silicon Valley and Napa during the COVID-19 pandemic. “It seemed like every bunker we sold was going out there for about three years.”

Lynch said tech sector customers generally buy “larger model” bunkers of 2,000 square feet or more – with bells and whistles such as onsite bowling alleys, swimming pools, hot tubs and biometric entryways.

Lynch said none of his buyers have specifically mentioned AI as the reason for their purchase – though he noted it isn’t unusual for customers to withhold their thinking for fear of sounding “kooky.”

John Ramey, a longtime Silicon Valley entrepreneur and founder of the disaster prep blog The Prepared, said he personally disagrees “with the default assumption that AI will be an evil Skynet” — but offered a defense of Altman and others who feel the need to call out the possibility.

“Seeing that advanced computing is inevitable and seeing the predictable problems that derive from it and seeing what it takes to prepare for those problems is why preppers are disproportionately represented in the Valley crowd,” Ramey told The Post.

Altman isn’t the only expert warning of potential doom if AI advances unchecked. Earlier this month, Elon Musk said he sees a risk of AI “going Terminator” in the future – and has previously described his plan to build a sustainable colony on Mars as key to humanity’s long-term survival in the event of a catastrophe.

Ex-Google boss Eric Schmidt warned AI is an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed.”

Altman also hinted at his fear while testifying before a Senate panel earlier this month, declaring that his worst fear is that advanced AI will “cause significant harm to the world.”
