Blog

First, do no harm.

“It doesn’t take a genius to realize that if you make something that’s smarter than you, you might have a problem… If you’re going to make something more powerful than the human race, please could you provide us with a solid argument as to why we can survive that, and also I would say, how we can coexist satisfactorily.” — Prof. Stuart Russell

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.” — Actual AI output. Published 13 November 2024 at 03:32

“These things really do understand.” — Prof. Geoffrey Hinton, Nobel laureate and “Godfather of AI”, Romanes Lecture, University of Oxford

Editors’ humble opinion, based on AI technology thought leaders over the past 80 years: we are all now in extremely deep trouble. ANALYZE THE DATA.

We believe Von Neumann, Turing, Wiener, Good, Clarke, Hawking, Musk, Bostrom, Tegmark, Russell, Bengio, Hinton, and thousands and thousands of scientists are fundamentally correct: uncontained and uncontrolled AI will become an existential threat to the survival of Homo sapiens unless it is perfectly aligned to be mathematically provably safe and beneficial to humans, forever. Learn more: The Containment Problem and The AI Safety Problem

Curated news & opinion for public benefit.
Free, no ads, no paywall, no advice.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY, FOR SAFE AI LEARNING.
NOT-FOR-PROFIT. COPY-PROTECTED. VERY GOOD READS FROM RESPECTED SOURCES!
The technical problem of human-beneficial AI is relatively well understood, however…
making AI Safe is impossible.
The technical solutions for making Safe AI are currently unknown.
Containment and control of AI is the requirement, forever.
Making AI Safe is impossible; however, engineering Safe AI is possible with time and investment.
We need mathematically provable guarantees of Safe AI.

About X-risk fixers: P(doom)Fixer.com

Why?

About X-risk: Future of Life Institute
Stuart Russell | Provably Beneficial AI

What?

About X-risk: The Elders
International Association for Safe and Ethical Artificial Intelligence (IASEAI)

Why? (1:00)

Good Summary of X-risk by PauseAGI

What? (1:00)

How? (3:00)

Experts on X-risk: The AI Safety Risk

Scientific Consensus:
Mathematically provable Safe AI is the requirement.

2,023 Posts…

Free knowledge sharing for Safe AI. Not for profit. Linkouts to sources provided. Ads may appear on linkouts (with zero benefit to this blog's publisher).

5.8 million high-end Nvidia GPUs [H100, B100] in the world by 2025 [at least].

5.8 million high-end Nvidia GPUs in the world by 2025 [at least]:

Company | Estimated GPUs (Nvidia H100s or equivalent) | Source | New GPU in Development
Amazon | unpublished | unknown | Trainium
Google | unpublished | unknown | [...]
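To put the headline number in perspective, here is a minimal back-of-the-envelope sketch (an editorial illustration, not from the post; the per-GPU throughput of roughly 1e15 FLOP/s is an assumed, order-of-magnitude figure for an H100-class chip):

    # Back-of-the-envelope: aggregate compute of the estimated world fleet
    # of high-end Nvidia GPUs. The per-GPU figure is an assumption
    # (~1e15 FLOP/s, roughly an H100's dense BF16 throughput).
    fleet_size = 5.8e6      # GPUs worldwide by 2025 [at least], per the post
    flops_per_gpu = 1e15    # assumed ~1 PFLOP/s per H100-class GPU
    total = fleet_size * flops_per_gpu
    print(f"{total:.1e} FLOP/s")  # ~5.8e+21 FLOP/s

Under those assumptions the fleet would supply on the order of 5.8 zettaFLOP/s of dense compute.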

The Shoggoth and p(doom) and The Untestability of AGI Safety.

Learn about The Shoggoth, p(doom), and The Untestability of AGI Safety. The Shoggoth is an AI industry insider meme for LLMs (and the average AI scientist [...]

From 100% p(doom) to 0% p(doom).

I believe Homo sapiens can evolve, with AI, from our current state of 100% p(doom) to a future state of 0% p(doom), with mutualistic symbiosis. We have a chance to avoid extinction [“p(doom)”] [...]
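A minimal sketch of the arithmetic behind those two endpoints (editors' illustration, not from the post; the constant annual risk $p$ is an assumed simplification): if each year carries an independent probability $p$ of an AI-caused existential catastrophe, the cumulative risk over $T$ years is

$p_{\text{doom}}(T) = 1 - (1 - p)^{T}$

For any constant $p > 0$, $p_{\text{doom}}(T) \to 1$ (100%) as $T \to \infty$; only $p = 0$ holds cumulative p(doom) at 0% forever.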

THE GUARDIAN. TechScape: How one man stopped a potentially massive cyber-attack – by accident. 2 Apr 2024. Imagine what a Super Artificial Intelligence (SAI) could do to our digital infrastructure…

“I would guess that SAI can hack its way into anything without relying on backdoors.” — Roman V. Yampolskiy, Ph.D. A GOOD READ: AI: Unexplainable, Unpredictable, Uncontrollable — Roman V. Yampolskiy, Ph.D. [...]
