CONTROL.AI | Peter Thiel & Joe Rogan | Mark Cuban & Jon Stewart | Demis Hassabis & Walter Isaacson | Elon Musk & Lex Fridman | Scott Aaronson & Alexis Papazoglou | Stuart Russell & Pat Joseph | Jaan Tallinn | Major General (Ret.) Robert Latiff | Former US Navy Secretary Richard Danzig |
Peter Thiel on the AI extinction risk:
— The AI industry has done a bad job of convincing us this technology will actually be good.
— If AI advancement means humans are headed for the glue factory, like horses, the glue factory is getting shut down.
— ControlAI (@ai_ctrl) August 19, 2024
Peter Thiel on why regulating AI in the US doesn’t mean China would just race ahead:
— ControlAI (@ai_ctrl) August 20, 2024
Peter Thiel on AI advancement: “I hear that as that means we’re going to be extinct. I don’t like that.”
— ControlAI (@ai_ctrl) August 19, 2024
Mark Cuban:
— Silicon Valley is trying to take over the world
— Unlike other technologies, we have no idea what the impact of AI is going to be
— ControlAI (@ai_ctrl) August 21, 2024
Demis Hassabis on the risks of open-source AI:
— In 2-4 years, if powerful AI is misused, it could cause serious damage.
— With open-source AI, if something goes wrong and a bad actor misuses it, you can’t recall it. There’s no pulling it back.
— ControlAI (@ai_ctrl) August 16, 2024
Demis Hassabis on the AI risk of extinction:
— Mitigating the risk of extinction should be a global priority.
— It’s “not a given” that we will succeed in avoiding this risk.
— ControlAI (@ai_ctrl) August 20, 2024
Elon Musk:
— At some point AI will be smarter than all humans combined
— Superintelligence could be a great filter (it could wipe out humanity)
— xAI (his company) could build it first
— ControlAI (@ai_ctrl) August 13, 2024
Elon Musk:
— We’re headed for a situation where AI could take control and reach a point where you couldn’t turn it off.
— If we wait until something terrible has happened to put AI regulations in place, it may be too late; the AI could be in control by that point.
— ControlAI (@ai_ctrl) August 7, 2024
“Once there is awareness, people will be extremely afraid — as they should be.”
Elon Musk at the 2017 NGA:
— AI is the biggest risk that our civilization faces
— Government must address dangers to the public
— AI must be regulated, regardless of AI companies complaining about it
— ControlAI (@ai_ctrl) July 25, 2024
Former US Navy Secretary Richard Danzig on the danger of AI development: “We’re dealing with machinery here that is self-replicating, that has the potential for amplifying itself … by designing itself”
“This sets off … a kind of chain reaction … and a very troublesome one”
— ControlAI (@ai_ctrl) August 23, 2024
OpenAI’s Scott Aaronson:
— AI companies are doing AI gain-of-function research, trying to get AIs to help with biological and chemical weapons production.
— If AI can walk you through every step of building a chemical weapon, that would be very concerning.
— ControlAI (@ai_ctrl) August 16, 2024
OpenAI computer scientist Scott Aaronson:
— All the top AI companies were founded to build AI safely before someone else does it dangerously.
— This race could bring about the very thing they were worried about.
— ControlAI (@ai_ctrl) August 14, 2024
AI expert Jaan Tallinn on the AI industry’s dirty secret:
— Frontier AIs are not built. They are grown, and then companies try to tame them.
— Godlike AI will not care about humans because the methods used to tame AIs rely on the AI being less competent than the humans taming it.
— ControlAI (@ai_ctrl) July 22, 2024
Major General (Ret.) Robert Latiff:
— We’re not being aggressive enough at controlling AI
— These systems are “really dangerous”
— A big concern is the implementation of AI in military command and control systems
— ControlAI (@ai_ctrl) July 19, 2024
I really appreciate how straightforwardly Stuart Russell describes this situation. Often people discuss such things with euphemisms, or technical language that doesn’t quite describe the actual consequences; imo clear speech is surprisingly helpful here. https://t.co/v96ZgSfAfY
— Adam Scholl (@adamascholl) August 10, 2024
AI scientist Stuart Russell says the AI industry’s position on regulation is pathetic: “There are far more rules on sandwiches than there are on software … and these trillion dollar companies will say ‘Oh we can’t fill out a form!'”
— ControlAI (@ai_ctrl) August 8, 2024
AI scientist Stuart Russell:
— Nuclear energy has quantitative safety guarantees
— We’re nowhere near this level of safety engineering in AI
— Current AI is a black box; we don’t understand how these systems work internally, which makes it impossible to even get close
— ControlAI (@ai_ctrl) July 31, 2024
Renowned AI scientist Stuart Russell:
— Even if you could specify the objectives of powerful AI systems, it would be impossible to get them to do what you actually want.
— We don’t even know what objectives we’re building into current AI systems, which is even worse.
— ControlAI (@ai_ctrl) July 30, 2024