
First, do no harm.
1,500+ Posts…
Free knowledge sharing for Safe AI. Not for profit. Link-outs to sources provided. Ads are likely to appear on link-outs (no benefit to this journal's publisher).
THE NEW YORK TIMES. Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm At a U.K. summit, 28 governments, including China and the U.S., signed a declaration agreeing to cooperate on evaluating the risks of artificial intelligence. 01 NOV 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE NEW YORK TIMES. Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm At a U.K. summit, 28 governments, including China and the U.S., signed a declaration agreeing to cooperate on evaluating the [...]
GOV UK. Press release. Nations and AI experts convene for day one of first global AI Safety Summit. Leading AI nations, organisations and experts meet at Bletchley Park today to discuss the global future of AI and work towards a shared understanding of risks. 01 NOV 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. GOV UK. Press release. Nations and AI experts convene for day one of first global AI Safety Summit. Leading AI nations, organisations and experts meet at Bletchley Park today to discuss the [...]
GOV UK. Policy paper. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. GOV UK. Policy paper. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. Published 1 November 2023. Artificial Intelligence (AI) presents enormous global opportunities: it has the potential [...]
AI Summit at Bletchley Park, UK. Elon Musk: ‘AI is one of the biggest threats to humanity’. 01 NOV 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. AI Summit at Bletchley Park, UK. Elon Musk: 'AI is one of the biggest threats to humanity'. 01 NOV 2023. “We have for the first time the situation where we [...]
NVIDIA Omniverse™ Overview. Unify Your 3D Work With Omniverse. NVIDIA Omniverse™ is a computing platform that enables individuals and teams to develop Universal Scene Description (OpenUSD)-based 3D workflows and applications. Creators: Create in 3D Faster Than Ever. Sync your favorite creative apps to Omniverse and USD and work with your 3D [...]
GOV UK. Guidance AI Safety Summit: confirmed attendees (governments and organisations) Updated 31 October 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. GOV UK. Guidance. AI Safety Summit: confirmed attendees (governments and organisations). Updated 31 October 2023. Academia and civil society: Ada Lovelace Institute, Advanced Research and Invention Agency, African Commission on Human and [...]
BBC NEWS. AI: Scientists excited by tool that grades severity of rare cancer. 01 NOV 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Excellent example of the benefits of narrow AI applications. 2 min read. BBC NEWS. AI: Scientists excited by tool that grades severity of rare cancer. 01 NOV 2023. By Fergus Walsh [...]
FLI recommendations for the UK Global AI Safety Summit Bletchley Park, 1-2 November 2023
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. An excellent policy document. Highly recommended! FLI recommendations for the UK Global AI Safety Summit Bletchley Park, 1-2 November 2023 “The time for saying that this is just pure research has long [...]
Existential Risk Observatory. AI Summit Talks featuring Professor Stuart Russell. 31 OCT 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Existential Risk Observatory. AI Summit Talks featuring Professor Stuart Russell. 31 OCT 2023. What if we succeed? Lift the living standards of everyone on Earth to a respectable level. 10x increase in [...]
THE WHITE HOUSE. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. OCTOBER 30, 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE WHITE HOUSE. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. OCTOBER 30, 2023. Today, President Biden is issuing a landmark Executive Order to ensure that [...]
BBC NEWS. US announces ‘strongest global action yet’ on AI safety. 30 OCT 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. BBC NEWS. US announces 'strongest global action yet' on AI safety. 30 OCT 2023. By Shiona McCallum & Zoe Kleinman, Technology team. The White House has announced what it is calling "the [...]
WANTED: Provably Safe AI for The Benefit of People, Forever.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Learn about p(doom) and... How to increase our probability of survival? Demand mathematically provable Safe AI. Take a tea/coffee/water and 5 minutes to understand (download pdf for convenience) [...]
GOV UK. Independent report. Frontier AI Taskforce: second progress report. 30 October 2023
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. GOV UK. Independent report. Frontier AI Taskforce: second progress report. 30 October 2023 Today we are announcing that in the 7 weeks since our first progress report we have: Tripled the capacity [...]
European Commission. POLICY AND LEGISLATION | Publication 30 October 2023. G7 Leaders’ Statement on the Hiroshima AI Process.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. POLICY AND LEGISLATION | Publication 30 October 2023 G7 Leaders’ Statement on the Hiroshima AI Process The G7 leaders have welcomed international guiding principles on artificial intelligence and a voluntary Code of [...]
Existential Risk Observatory. Reducing human extinction risks by informing the public debate.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Existential Risk Observatory. Reducing human extinction risks by informing the public debate. Unaligned AI: Toby Ord estimates a one-in-ten likelihood that unaligned AI will cause human extinction or permanent and [...]
BBC NEWS. Sunday with Laura Kuenssberg. Michelle Donelan MP, secretary of state for science and technology. And Alex Karp, CEO Palantir.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. BBC NEWS. Sunday with Laura Kuenssberg. Michelle Donelan MP, secretary of state for science and technology. AI could make it easier to build chemical [...]
GOV UK. Press release. Leading frontier AI companies publish safety policies. Top frontier AI firms have outlined their safety policies to boost transparency and encourage the sharing of best practice within the AI community. 27 OCT 2023.
GOV UK. Press release. Leading frontier AI companies publish safety policies. Top frontier AI firms have outlined their safety policies to boost transparency and encourage the sharing of best practice within the AI community. From: Department for Science, Innovation and Technology and The Rt Hon Michelle Donelan MP. 27 OCT 2023. [...]
UN. Press Release. UN Secretary-General launches AI Advisory Body on risks, opportunities, and international governance of artificial intelligence. 27 OCT 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Press Release. UN Secretary-General launches AI Advisory Body on risks, opportunities, and international governance of artificial intelligence. 27 OCT 2023. The United Nations Secretary-General António Guterres has announced the creation of a [...]
Control AI. MAGIC Video (1:49) “Why Are We Letting Them Do This?”
MIT TECHNOLOGY REVIEW. Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist. An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work. 26 OCT 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. “It’s important to talk about where it’s all headed. At some point we really will have AGI. Maybe OpenAI will build it. Maybe some other company will build it... It’s going to [...]
BBC NEWS. Rishi Sunak says AI has threats and risks – but outlines its potential. 26 OCT 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Rishi Sunak says AI has threats and risks - but outlines its potential. By James Gregory & Zoe Kleinman, technology editor, BBC News. Artificial intelligence could help [...]
MIT Technology Review. Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI “It’s going to be monumental, earth-shattering. There will be a before and an after.” October 26, 2023.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. Exclusive: Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI “It’s going to be monumental, earth-shattering. There will be a before and an after.” By Will [...]
THE GUARDIAN. Humanity at risk from AI ‘race to the bottom’, says tech expert. MIT professor behind influential letter says unchecked development is allowing a few AI firms to jeopardise society’s future.
FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER. THE GUARDIAN. Humanity at risk from AI ‘race to the bottom’, says Max Tegmark. MIT professor behind influential letter says unchecked development is allowing a few AI firms to jeopardise society’s future. [...]