"Superintelligent AI could create a virus that just kills us all"
Nobel Laureate Geoffrey Hinton warned:
We’re at a critical point in making AI safe, but we don’t know how — or if it’s even possible.
There is an enormous effort to solve this because AI could end us, possibly… pic.twitter.com/gBlfGMs4yR
— Haider. (@slow_developer) April 3, 2025
Geoffrey Hinton, the godfather of artificial intelligence, explains the dangers of AI! pic.twitter.com/dGNmXCccPK
— Moneir Aslam (@moneir27) March 27, 2025
Geoffrey Hinton (@geoffreyhinton): "I would like to have been concerned about this existential threat sooner. I always thought superintelligence was a long way off and we could worry about it later … And the problem is, it's close now." pic.twitter.com/uEHL7sR9Cs
— ControlAI (@ai_ctrl) April 2, 2025
Geoffrey Hinton (@geoffreyhinton) explains what it means to understand something: pic.twitter.com/CvWS1WogNA
— ControlAI (@ai_ctrl) February 27, 2025
'Godfather of AI' Geoffrey Hinton highlights the danger that economic-competitive racing dynamics between AI companies cause them to prioritize rapid development over ensuring their models are safe.
Hinton concludes that the only way to mitigate this is with strict regulation. pic.twitter.com/QjeG1Di8Td
— ControlAI (@ai_ctrl) June 21, 2024
Nobel laureate Geoffrey Hinton says there is evidence that AIs can be deliberately and intentionally deceptive. pic.twitter.com/y6TAV3cp6g
— Tsarathustra (@tsarnick) January 18, 2025