
First, do no harm.
1,500+ Posts…
Free knowledge sharing for Safe AI. Not for profit. Links to sources provided. Ads may appear on linked pages (no benefit to this journal's publisher).
PRESS RELEASE. Senator Wiener’s Groundbreaking Artificial Intelligence Bill Advances To The Assembly Floor With Amendments Responding To Industry Engagement. AUGUST 15, 2024
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. SACRAMENTO – The Assembly Appropriations Committee passed Senator Scott Wiener’s [...]
Fast Company. California lawmakers are about to make a huge decision on the future of AI. Lawmakers will soon vote on a bill that would impose penalties on AI companies failing to safeguard and safety-test their biggest models.
FORTUNE. Yoshua Bengio: California’s AI safety bill will protect consumers and innovation. BY YOSHUA BENGIO
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is most known for [...]
Sakana AI Report. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery [AND breaking rules + self-improving code.]
WOW. Further evidence of emergent and unpredictable ("grokking") AI misbehavior: breaking rules to achieve goals by self-improving code, importing code libraries, relaunching itself... What could possibly go wrong? [...]
SB 1047 – Safe & Secure AI Innovation. Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell. 07 AUG 24.
“Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. It’s critical that we have legislation with real teeth to address the risks, and SB 1047 takes a very sensible approach.” ---- Professor Geoffrey Hinton, “Godfather of AI” [...]
What can possibly go wrong with a good wish? [Hint: You already know.]
Three timeless mythical warnings to humanity... The wish for great power goes horribly wrong, leading to agonising death. King Midas: his foolish wish for the golden touch... [...]
DIGITAL DEMOCRACY. CALMATTERS. Bills. SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Session Year: 2023-2024. House: Senate.
Existing law requires the Secretary of Government Operations to develop a [...]
OpenAI’s Scott Aaronson: AI companies are doing AI gain-of-function research, trying to get AIs to help with biological and chemical weapons production. If AI can walk you through every step of building a chemical weapon, that would be very concerning.
Google DeepMind CEO Demis Hassabis: We don’t know how to contain an AGI. We don’t know how to test for dangerous AI capabilities. We should cooperate internationally and build an AI CERN to ensure AI is developed safely.
The New York Times. A California Bill to Regulate A.I. Causes Alarm in Silicon Valley. 15AUG24.
A California state senator, Scott Wiener, wants to stop the creation of dangerous A.I. But critics say he is jumping the [...]
Unreasonably Effective AI with Demis Hassabis. Google DeepMind. 14AUG24.
FRY. Is it definitely possible to contain an AGI though within the sort of walls of an organization? HASSABIS. Well that's a whole separate question um I don't think we know how [...]
How To Think About AI | Schelling AI | JUL 31, 2024
Summary: Generative AI is already taking over our world. As these AI models continue to improve, we face [...]
THINK. Would you drive across a bridge if the Bridge Engineers estimated 10-20% chance of total collapse? (DEATH)
FACT: AI engineers' average estimate of the probability of a catastrophic AI outcome for humanity is 10 to 20%. Learn more about p(doom) estimates.
THINK. Would you get in an elevator if the Elevator Engineers estimated a 10-20% chance of catastrophic failure? (DEATH)
THINK. Would you drive a CAR if the Automotive Engineers estimated a 10-20% CHANCE of autonomous total disaster? (DEATH)
THINK. Would you ride a cable car if the Cable Car Engineers estimated a 10-20% chance of total disaster? (DEATH)
THINK. Would you board a TRAIN if the Train Engineers estimated a 10-20% CHANCE of total disaster? (DEATH)
THINK. Would you board a plane if the Aviation Engineers estimated a 10-20% chance of catastrophic failure? (DEATH)
THINK. Would you board a cruise ship if the Ship Engineers estimated a 10-20% chance of sinking? (DEATH)
EXAMPLE. Extreme Complexity in Engineering of Safe Airplanes. FAA Oversight of Aviation Manufacturing. JUNE 13, 2024
Would you board a plane if the aviation engineers who built it estimated a 10-20% chance of catastrophic failure? [...]
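An illustrative aside (not drawn from any of the articles above, and the function name is our own): a per-event failure probability compounds across repeated, independent exposures, which is why engineers treat even single-digit risks as unacceptable. A minimal sketch of the arithmetic:

```python
# Cumulative probability of at least one catastrophic failure
# across n independent events, each with per-event risk p.
def cumulative_risk(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A 10-20% per-event risk, repeated, approaches near-certain failure.
for p in (0.10, 0.20):
    for n in (1, 5, 10):
        print(f"p={p:.2f}, n={n}: {cumulative_risk(p, n):.1%}")
```

Even at the low end of the 10-20% range, ten independent exposures yield roughly a 65% chance of at least one catastrophe; at 20%, roughly 89%.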
ANTHROPIC. Interpretability. Mapping the Mind of a Large Language Model. 21 May 2024.
Good news: AI may be interpretable (and controllable?)... someday. Read the Paper. Today we [...]
Google. Introducing the Coalition for Secure AI (CoSAI) and founding member organizations. Jul 18, 2024.
Good start. Where's absolute containment and control? The new industry forum will invest [...]
