Listen to scientists. Reduce AI safety risks! @GavinNewsom SIGN #SB1047 Join 77% CA voters, 120+ employees at frontier AI companies, 100 youth leaders, unions, + more. Don’t side with Meta and OpenAI who have shown they can’t be trusted. @TIME https://t.co/hw6fgTeLjX
— SF Bay PSR (@SFBayPSR) September 19, 2024
I read California Governor @GavinNewsom’s comments about SB1047 yesterday: “The governor said he is weighing what risks of AI are demonstrable versus hypothetical.” https://t.co/Iv5NM6kXh4
Here is my perspective on this:
Although experts don’t all agree on the magnitude and…
— Yoshua Bengio (@Yoshua_Bengio) September 18, 2024
Ilge Akkaya of OpenAI says the new o1 model is “oftentimes better than humans”, has “the equivalent of several PhDs” and is saturating all the industry’s evals, making it difficult to assess the model pic.twitter.com/GfETejgppT
— Tsarathustra (@tsarnick) September 20, 2024
OpenAI staff who worked on the new o1 model say that the AI is “spiritual” and “oddly human” in how it can reason, reflect and question itself pic.twitter.com/Qm96h2aqbh
— Tsarathustra (@tsarnick) September 20, 2024
Google CEO Sundar Pichai says AI is a platform shift and as compute is scaled like never before and the cost of generating tokens has fallen by 97% in the past 18 months, we will get “intelligence, just like air, too cheap to meter” pic.twitter.com/P23WSnuMMY
— Tsarathustra (@tsarnick) September 21, 2024
Alphabet CEO Sundar Pichai says Google are scaling up their compute infrastructure and working on 1 gigawatt+ data centers, while exploring options for powering them including small modular nuclear reactors pic.twitter.com/smBUiruuYg
— Tsarathustra (@tsarnick) September 21, 2024
Marc Benioff says Microsoft Copilot is the new Clippy and by contrast, Salesforce is looking to deploy a billion AI agents in the next 12 months pic.twitter.com/LWK1Oc08Gc
— Tsarathustra (@tsarnick) September 22, 2024
Aza Raskin and Hannah Fry say using AI to communicate with animals could disrupt their behavior and navigation, and we may need a Geneva Convention for cross-species communication pic.twitter.com/ljcdCQaOuf
— Tsarathustra (@tsarnick) September 21, 2024
Sorry, but anyone who says “AI is just math so it can’t be dangerous” is just embarrassing themselves
It’s so obviously stupid I just can’t take them seriously anymore https://t.co/E2aBjAqHDd
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) September 21, 2024
“Once one humanoid robot learns a skill, every robot in the fleet will have it acquired”
AIs will blow past us with TWO superpowers:
1) “I Know Kung Fu”
2) Hive Minds
Hive Mind Risk: Do you know why Godfather of AI Hinton quit Google to warn the world about extinction risk?… https://t.co/g9Bk00uxEg pic.twitter.com/H3L4SJDVtT
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) September 20, 2024
I am once again asking for humanity to take seriously the mad science Frankenstein shit going on right now in SF https://t.co/nNm4RUaXFn pic.twitter.com/K7K3mZZ6yf
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) September 21, 2024
In 2019, Naval said: “An AI that can program as well or better than humans is an AI that just took over the world … that’s the end of the human species.”
Guys, this just happened.
AIs are now gold medal level at IOI – the “Olympics of programming”
So, @Naval, do we now face… https://t.co/nXgQ5icIDg
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) September 20, 2024
Why pausing AI is feasible. pic.twitter.com/wxZVzOGSad
— PauseAI ⏸ (@PauseAI) September 21, 2024
Weirdly most ppl I talk to in the Bay area believe “Superintelligence will end humanity”. Please note “will”, not “could” (!)
Anyway, fantastic day @Stanford yesterday with lots of curious people wanting to learn more! pic.twitter.com/oLXsc98By9
— Chris Gerrby ⏸️AI (@ChrisGerrby) September 21, 2024
As of this month, four former OpenAI pages
(https://t.co/K57FgcirIo, https://t.co/qYnVGGJLn8, https://t.co/C6vsrnyAEb, https://t.co/lVlQg4Gem4)
have apparently been removed, redirecting instead to: https://t.co/dj7kKcO312
— AI Safety Corporate Policy Changes (@SafetyChanges) September 15, 2024
Leading computer scientists from around the world, including @Yoshua_Bengio, Andrew Yao, @yaqinzhang and Stuart Russell met last week and released their most urgent and ambitious call to action on AI Safety from this group yet.🧵 pic.twitter.com/qXknvCMBTV
— International Dialogues on AI Safety (@ais_dialogues) September 16, 2024
Earlier this month, I participated in the UN Secretary-General’s @ScienceBoard_UN retreat which led to a statement offering a roadmap for rebuilding trust in science, accelerating the #SDGs, and opening the benefits of scientific progress for all.
I’m now looking forward to… pic.twitter.com/er6hRjAXWE
— Yoshua Bengio (@Yoshua_Bengio) September 21, 2024
The global nature of AI risks makes it necessary to recognize AI safety as an international public good, and work towards coordinated governance of these risks.
Statement made in Venice: https://t.co/L0yS1TmItx
Associated @nytimes article: https://t.co/66pz3V9w7k https://t.co/NbFmFJlox3
— Yoshua Bengio (@Yoshua_Bengio) September 16, 2024