
First, do no harm.
1,500+ Posts…
Free knowledge sharing for Safe AI. Not for profit. Link-outs to sources provided. Ads are likely to appear on link-outs (with zero benefit to this journal's publisher).
CBS. Face The Nation. Opinion. Autonomous weapons. Social media. AI Transformation of our Society.
CBS. Face The Nation. Opinion. Autonomous weapons. Social media. AI Transformation of our Society. Very good information from a respected source. FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. What is the most underreported news story [...]
WIRED. How Not to Be Stupid About AI, With Yann LeCun. It’ll take over the world. It won’t subjugate humans. For Meta’s chief AI scientist, both things are true. 22 DEC.
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. LeCun: "You would be extremely stupid to build a system and not build any guardrails. That would be like building [...]
arXiv. Exploiting Novel GPT-4 APIs. by FAR. More Evidence that so-called “Guardrails” are not fit-for-purpose (they don’t work). 21 DEC.
MORE EVIDENCE that current so-called "Guardrails" are certainly not fit for purpose. A very good read from a respected source. FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Exploiting Novel GPT-4 APIs. Kellin Pelrine, Mohammad Taufeeque, [...]
NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. 19 DEC.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. December 19, 2023 NIST seeks information to support its response to the Executive Order on AI. [...]
OpenAI. Preparedness. The Preparedness team is dedicated to making frontier AI models safe. 18 DEC.
An important development from a leading AI company! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. OpenAI. Preparedness The Preparedness team is dedicated to making frontier AI models safe The study of frontier AI risks has [...]
LESSWRONG. “Humanity vs. AGI” Will Never Look Like “Humanity vs. AGI” to Humanity. 16th Dec 2023.
LESSWRONG. "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity. by Thane Ruthenis 6 min read 16th Dec 2023 "We're not keeping our AIs in airgapped data centers" When discussing AGI Risk, people often talk about it in terms of a war between humanity and an AGI. [...]
OpenAI. Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. [probably won’t work]
An interesting read from a respected source. However... Stuart Russell, a professor at UC Berkeley who works on AI safety, says the idea of using a less powerful AI model to control a more powerful one has been around for a while. He also says it is unclear that [...]
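To make the approach concrete, here is a minimal toy sketch of the weak-to-strong setup, using scikit-learn classifiers as stand-ins rather than the paper's actual language models (the synthetic dataset, the model choices, and the five-feature handicap on the weak supervisor are illustrative assumptions, not details from the paper): a deliberately weak supervisor is trained on ground truth, its noisy labels are used to train a stronger student, and we measure how much of the strong model's ceiling performance is recovered.

# Toy weak-to-strong sketch (illustrative only; not the paper's models or data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for the real supervision problem.
X, y = make_classification(n_samples=6000, n_features=40, n_informative=10, random_state=0)
X_sup, X_rest, y_sup, y_rest = train_test_split(X, y, train_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# 1. Weak supervisor: an under-powered model (it only sees 5 of the 40 features),
#    trained on ground-truth labels.
weak = LogisticRegression(max_iter=500).fit(X_sup[:, :5], y_sup)
weak_labels = weak.predict(X_train[:, :5])  # noisy pseudo-labels

# 2. Strong student trained ONLY on the weak supervisor's labels.
strong_on_weak = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)

# 3. Ceiling: the same strong model trained directly on ground truth.
strong_ceiling = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

acc_weak = accuracy_score(y_test, weak.predict(X_test[:, :5]))
acc_w2s = accuracy_score(y_test, strong_on_weak.predict(X_test))
acc_ceiling = accuracy_score(y_test, strong_ceiling.predict(X_test))

# "Performance gap recovered": 0 = no better than the weak supervisor, 1 = full ceiling.
pgr = (acc_w2s - acc_weak) / max(acc_ceiling - acc_weak, 1e-9)
print(f"weak={acc_weak:.3f}  weak-to-strong={acc_w2s:.3f}  ceiling={acc_ceiling:.3f}  PGR={pgr:.2f}")

In the paper itself, the weak supervisor and strong student are smaller and larger language models rather than toy classifiers, and "performance gap recovered" is the headline metric; the question Russell raises above is whether this kind of partial recovery could ever amount to reliable control of a genuinely more capable system.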
AXIOS. Scoop: U.S. is leading “AI for good” push at UN. 13 DEC.
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. AXIOS. Scoop: U.S. is leading "AI for good" push at UN. Ryan Heath, author of Axios AI+. The United [...]
VATICAN NEWS. In World Peace Day message, Pope warns of risks of AI for peace. 14 DEC.
Good to know that even the Pope believes that safe AI is a requirement. FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. In World Peace Day message, Pope warns of risks of AI for peace. In [...]
THE WHITE HOUSE. Delivering on the Promise of AI to Improve Health Outcomes. 14 DEC.
A very good start! Commitments from 28 providers and payers: Allina Health, Bassett Healthcare Network, Boston Children’s Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General [...]
Meta. Trust and safety. Welcome to Purple Llama. Empowering developers, advancing safety, and building an open ecosystem.
A very good start from a respected source! (But how will AGI be CONTAINED and CONTROLLED?) Learn more: Meta trials Purple Llama project for AI developers to test safety risks in models. Security boosted and inappropriate content blocked in large language models (THE REGISTER). Meta releases open-source [...]
BUSINESS INSIDER. We now have more info on what Sam Altman did that was so bad he got fired from OpenAI. 10 DEC.
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. BUSINESS INSIDER. We now have more info on what Sam Altman did that was so bad he got fired from [...]
THE WASHINGTON POST. Opinion. AI is forcing teachers to confront an existential question. 12 DEC.
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE WASHINGTON POST. Opinion. AI is forcing teachers to confront an existential question. [...]
FEDERATION OF AMERICAN SCIENTISTS. Bio X AI: Policy Recommendations For A New Frontier. 12 DEC.
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. SEE COPYRIGHT DISCLAIMER. NOT-FOR-PROFIT. Bio X AI: Policy Recommendations For A New Frontier. 12.12.23 | 27 min read | Text by Nazish Jeffery & Sarah R. Carter & [...]
AGI (and the ASI Intelligence Explosion) is Closer than you think… a lot Closer! It could already be happening.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. AGI (and the ASI Intelligence Explosion) is Closer than you think... a lot Closer. It could already be happening. Seriously. AI models are now writing code and generating synthetic data (knowledge) [...]
News. European Parliament. Press Releases. Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI. 09 December 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. News. European Parliament. Press Releases. Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI. 09 December 2023. Safeguards agreed on general purpose artificial intelligence. Limitation for the use of biometric [...]
Council of the EU. Press release. 9 December 2023. Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world Council of the EU. Press release. 9 December 2023. Following 3-day ‘marathon’ talks, the [...]
THE NEW YORK TIMES. E.U. Agrees on Landmark Artificial Intelligence Rules. The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence. 08 DEC.
A very good read from very respected sources! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE NEW YORK TIMES. E.U. Agrees on Landmark Artificial Intelligence Rules. The agreement over the A.I. Act solidifies one of [...]
Gemini: Google’s newest and most capable AI model.
Wow! BRAVO to Google. Gemini is a very important and powerful AI product/service from Google DeepMind! (But where is the real safety TECHNOLOGY? Where is the real CONTAINMENT? Where is the real CONTROL?) Ref: GOV.UK, Guidelines for secure AI system development. FOR EDUCATIONAL [...]
The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson | TED
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson | TED So having intelligence in machines is an incredibly valuable thing to develop. [...]
WHAT can go wrong with a powerful WISH? [AGI Goal] EXAMPLE: Disney’s Fantasia (1940). Mickey Mouse in The Sorcerer’s Apprentice. (Goethe & Dukas)
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. WHAT can go wrong with a powerful WISH? Example: Disney's Fantasia (1940). Mickey Mouse in The Sorcerer's Apprentice. (Goethe & Dukas) Based on Goethe's 1797 poem "Der Zauberlehrling" (The Sorcerer's Apprentice). Mickey Mouse, [...]
Canada.ca. Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. From: Innovation, Science and Economic Development Canada.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. More organizations sign on to Canada's voluntary AI code of conduct, including CGI and IBM. News provided by Innovation, Science and Economic Development Canada, 07 Dec 2023, 12:32 ET. Code sets [...]
THE NEW YORK TIMES. Silicon Valley Confronts a Grim New A.I. Metric. Where do you fall on the doom scale — is artificial intelligence a threat to humankind? And if so, how high is the risk?
A very good read from a respected source! FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. The Editor's P(doom) ranges from 1 to 99 percent: 1 percent if all AGI systems are mathematically provably contained and controlled before the intelligence explosion occurs... [...]