Learn more at Control AI, Pause AI, and P(doom)Fixer.
“Once there is awareness, people will be extremely afraid — as they should be.”
Elon Musk at the 2017 NGA:
— AI is the biggest risk that our civilization faces
— Government must address dangers to the public
— AI must be regulated, regardless of AI companies complaining about it
— ControlAI (@ai_ctrl) July 25, 2024
OpenAI CEO Sam Altman, back in 2020:
“it’s so easy to get caught up in the geopolitical tensions and race that we can lose sight of this gigantic humanity-level decision that we have to make in the not too distant future.”
— ControlAI (@ai_ctrl) July 25, 2024
AI expert Jaan Tallinn on the AI industry’s dirty secret:
— Frontier AIs are not built. They are grown, and then companies try to tame them.
— Godlike AI will not care about humans because the methods used to tame AIs rely on the AI being less competent than the humans taming it.
— ControlAI (@ai_ctrl) July 22, 2024
Major General (Ret.) Robert Latiff:
— We’re not being aggressive enough at controlling AI
— These are “really dangerous”
— A big concern is the implementation of AI in military command and control systems
— ControlAI (@ai_ctrl) July 19, 2024
Anthropic CEO Dario Amodei on AI existential risks:
— In a couple years, as AI models improve, people could do very bad things with them.
— It could be difficult to control AI agents, which might pose an existential risk.
— ControlAI (@ai_ctrl) July 17, 2024
Anthropic CEO Dario Amodei: “You can say a million things to a model and it can say a million things back, and you might not know that the million and oneth was something very dangerous.”
If Anthropic misses a dangerous AI threat, who faces the consequences?
— ControlAI (@ai_ctrl) July 23, 2024
Elon Musk says there’s a 10 to 20% probability that AI annihilates us.
— ControlAI (@ai_ctrl) July 16, 2024
OpenAI co-founder Wojciech Zaremba compares self-modifying AI to cancer:
• Once there is an increased number of AIs and they’re modifying their own code, there is a process of natural selection.
• The AI that wants to maximally spread will be the one that exists.
— ControlAI (@ai_ctrl) July 18, 2024
Ex-OpenAI safety researcher William Saunders explains:
— The firing of Leopold Aschenbrenner, who had raised security concerns
— The call of current and former OpenAI employees for a right to warn about advanced AI
And delivers a message to Sam Altman about trust.
— ControlAI (@ai_ctrl) July 15, 2024
Ex-OpenAI safety researcher William Saunders:
— We fundamentally don’t know how AI works inside
— A lot of people in OpenAI think we could be 3 years away from something dangerous
— GPT-5 could be the Titanic
— ControlAI (@ai_ctrl) July 10, 2024
Demis Hassabis calls for international collaboration on AI safety.
— ControlAI (@ai_ctrl) July 10, 2024
AI security researcher Christina Liaghati explains how easily open-source AI models can be maliciously altered to produce bad outputs, without users being able to detect this.
— ControlAI (@ai_ctrl) July 5, 2024
Jeff Bezos on the risks of general-purpose AI:
— We can’t predict the capabilities of large language models
— Even specialized AI could be very bad for humanity
AI companies are racing as fast as possible to build AGI, which will be far more dangerous than specialized systems.
— ControlAI (@ai_ctrl) July 9, 2024
Legal scholar Jonathan Zittrain on the risks of AI agents: “Give it a few goals, let it have a bank account, let it draw from the bank account to spend money on stuff … and then who knows where it ends up?”
— ControlAI (@ai_ctrl) July 3, 2024
Elon Musk predicts 20 billion humanoid robots, adding: “We definitely need to be careful that they don’t go all Terminator on us.”
— ControlAI (@ai_ctrl) July 1, 2024
Anthropic CEO Dario Amodei:
— There is a “good chance” that AGI could be built within the next 1–3 years.
— There is catastrophic risk from AI, and that too could be 1–3 years away.
Dario’s company is aiming to build AGI.
— ControlAI (@ai_ctrl) June 28, 2024
Former British PM Tony Blair on the AI revolution, the importance of political leaders understanding it, and the need to prepare to deal with a crisis.
— ControlAI (@ai_ctrl) June 27, 2024
will.i.am points out the striking lack of even the most minimal regulations for AI.
— ControlAI (@ai_ctrl) July 1, 2024
OpenAI board member and former US Treasury Secretary Larry Summers: “we cannot leave AI only to AI developers. That’s why it’s absolutely essential that public authorities take a strong role here to make sure that this technology is used for good.”
— ControlAI (@ai_ctrl) June 26, 2024
US National Security Adviser Jake Sullivan: “AI promises to be the most powerful, impactful technology, perhaps for the next hundred years, perhaps for the next thousand years.”
It also promises to be the most dangerous. Humanity needs a plan to manage AI risks.
— ControlAI (@ai_ctrl) June 26, 2024
AI godfather Geoffrey Hinton warns of the existential threat of AI taking over.
“Many people were saying it was science fiction. I no longer believe it’s science fiction.”
— ControlAI (@ai_ctrl) June 25, 2024
Will we find a way forward in AI development that is good for humanity?
Geoffrey Hinton: “I think we’ve got a better than even chance of surviving this, but it’s not like there’s only a 1% chance of it taking over, it’s much bigger than that.”
— ControlAI (@ai_ctrl) June 24, 2024
AI godfather Geoffrey Hinton: AI superintelligence could be developed within the next 10 years.
Superintelligence would be more capable than the best humans in virtually every intellectual domain.
Currently, humanity has no plan to ensure that this goes well for us.
— ControlAI (@ai_ctrl) June 24, 2024
‘Godfather of AI’ Geoffrey Hinton highlights the danger of economic-competitive racing dynamics between AI companies causing them to prioritize rapid development over ensuring their models are safe.
Hinton concludes that the only way to mitigate this is with strict regulation.
— ControlAI (@ai_ctrl) June 21, 2024
AI godfather Geoffrey Hinton explains the instrumental convergence threat: powerful AI systems develop dangerous sub-goals such as power acquisition and self-preservation, regardless of what you actually ask them to do.
— ControlAI (@ai_ctrl) June 20, 2024
What happens if we neglect to regulate AI?
Ex-OpenAI board member Helen Toner warns that the default path is that something goes wrong with AI and we end up in a giant crisis, with the only laws we get written in a knee-jerk reaction to that crisis.
— ControlAI (@ai_ctrl) June 18, 2024
John Schulman, a co-founder of OpenAI, warns that AGI (a general-purpose artificial intelligence so powerful that it can supplant almost all human jobs) could be only 2 or 3 years away. This puts us all in an extremely risky situation, but he admits that he doesn’t even have a…
— ControlAI (@ai_ctrl) May 17, 2024
Google DeepMind CEO Demis Hassabis on AI accelerationists:
“They don’t actually fully understand the enormity of what’s coming. Because if they did — I’m very optimistic we can get this right — but only if we do it carefully and take the time needed to do it.”
— ControlAI (@ai_ctrl) June 11, 2024
Pope Francis warns the G7 that Artificial Intelligence must never be allowed to get the upper hand over humanity.
In his speech, the Pope stated: “We need to ensure and safeguard a space for proper human control over the choices made by AI programs”.
— ControlAI (@ai_ctrl) June 17, 2024