Blog2024-11-20T08:58:08+00:00

First do no harm.

Editors’ humble opinion based on AI technology thought leaders over the past 80 years… we are all now in extremely deep trouble. ANALYZE THE DATA.

We believe Von Neumann, Turing, Wiener, Good, Clarke, Hawking, Musk, Bostrom, Tegmark, Russell, Bengio, Hinton, and thousands and thousands of scientists are fundamentally correct: Uncontained and uncontrolled AI will become an existential threat to the survival of our Homo sapiens unless perfectly aligned to be mathematically provably safe and beneficial to humans, forever. Learn more: The Containment Problem and The AI Safety Problem


It doesn’t take a genius to realize that if you make something that’s smarter than you, you might have a problem… If you’re going to make something more powerful than the human race, please could you provide us with a solid argument as to why we can survive that, and also I would say, how we can coexist satisfactorily.Stuart Russell

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.” — Actual AI output. Published 13 November 2024 at 03:32

“These things really do understand.” — Nobel laureate, Prof. Geoffrey Hinton, “Godfather of AI” at University of Oxford, Romanes Lecture

Curated news & opinion for public benefit.
Free, no ads, no paywall, no advice.
Not for profit. (1,101 posts)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY.
NOT-FOR-PROFIT. COPY-PROTECTED. VERY GOOD READS FROM RESPECTED SOURCES!
The technical problem of AI safety is relatively well understood- Making AI Safe is impossible.
The technical solutions are currently unknown to making Safe AI.
Making AI Safe is impossible. Making Safe AI is doable with time and investment.
We need mathematically provable guarantees of Safe AI.

About X-risk fixers: P(doom)Fixer.com

About X-risk: Future of Life Institute
Stuart Russell | Provably Beneficial AI

About X-risk: The Elders
It’s time for bold new thinking.

Good Summary of X-risk by PauseAGI

Experts on X-risk: The AI Safety Risk

THE GUARDIAN. AI-controlled US military drone ‘kills’ its operator in simulated test. No real person was harmed, but artificial intelligence used ‘highly unexpected strategies’ in test to achieve its mission and attacked anyone who interfered. 02 JUNE 2023

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE GUARDIAN. AI-controlled US military drone ‘kills’ its operator in simulated test. 02 JUNE 2023. No real person was harmed, but artificial intelligence used ‘highly unexpected strategies’ in test to achieve its [...]

INNOVATIONS ChatGPT took their jobs. Now they walk dogs and fix air conditioners. Technology used to automate dirty and repetitive jobs. Now, artificial intelligence chatbots are coming after high-paid ones. June 2, 2023

THE WASHINGTON POST. INNOVATIONS. ChatGPT took their jobs. Now they walk dogs and fix air conditioners. Technology used to automate dirty and repetitive jobs. Now, artificial intelligence chatbots are coming after high-paid ones. June 2, 2023  By Pranshu Verma and Gerrit De Vynck When ChatGPT came out last November, Olivia [...]

Michael Faraday. Inventor of The Faraday Cage – Wikipedia

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Michael Faraday. Inventor of The Faraday Cage - Wikipedia Michael Faraday FRS (/ˈfærədeɪ, -di/; 22 September 1791 – 25 August 1867) was an English scientist who contributed to the study of electromagnetism and [...]

THE NEW YORK TIMES. A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons. 30 MAY 2023

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE NEW YORK TIMES A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics [...]

THE NEW YORK TIMES. Why an Octopus-like Creature Has Come to Symbolize the State of A.I. The Shoggoth, a character from a science fiction story, captures the essential weirdness of the A.I. moment. Kevin Roose. 30 MAY 2023.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. At the Mountains of Madness (1931) is a science fiction-horror novella by American author H. P. Lovecraft and was originally serialized in the February, March, and April 1936 issues of Astounding Stories. [...]

Former CEO of Google, Eric Schmidt, told the WSJ gathering of CEOs in London on 23 May that the existential risk of AI “is defined as many, many, many, many people harmed or killed,” but he did not have any clear ideas himself on how AI should, or could, be regulated. Schmidt’s opinion was that AI safety should be a ‘broader question for society.’ 24 MAY 2023

Former CEO of Google, Eric Schmidt, told the WSJ gathering of CEOs in London on 23 May that the existential risk of AI "is defined as many, many, many, many people harmed or killed,” but he did not have any clear ideas himself on how AI should, or could, [...]

OpenAI. Governance of superintelligence. 22 MAY 2023

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. OpenAI. Governance of superintelligence. 22 MAY 2023 Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. [...]

Load More Posts
Go to Top