VOX. California’s governor has the chance to make AI history. Gavin Newsom could decide the future of AI safety. [Sign the Bill!]

September 1, 2024 | Categories: AGI, AI and Business, AI and Jobs, AI and Killer Robots, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

A very good read from a respected source! [...]

TIME. Exclusive: Renowned Experts Pen Support for California’s Landmark AI Safety Bill. 07 August 2024.

August 7, 2024 | Categories: AGI, AI and Business, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

“I worry that technology companies will not solve these [...]

TWITTER. Ex-OpenAI safety researcher William Saunders: “We fundamentally don’t know how AI works inside.”

July 23, 2024 | Categories: AGI, AI and Business, AI by OpenAI, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

The scientific facts: LLMs are a black box. AI [...]

Meta. Open Source AI Is the Path Forward. By Mark Zuckerberg, Founder and CEO. 23 JULY 2024. [Editor Comment: Open Source Safe AI? YES. Open Source Unsafe AI? NO.]

July 23, 2024 | Categories: AGI, AI and Business, AI by Meta, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Open Source, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

"Open source should be significantly safer since the systems [...]

The Future of AI: Too Much to Handle? With Roman Yampolskiy and 3 Dutch MPs. Existential Risk Observatory.

July 20, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Open Source, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

Jaan Tallinn: As of July 2024, my top priorities for reducing existential risks from AI are these.

July 1, 2024 | Categories: AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Opinions, AI Regulation, AI Safety Organisations, AI Science, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

CNBC. Current and former OpenAI employees warn of AI’s ‘serious risk’ and lack of oversight. 04 JUNE.

June 5, 2024 | Categories: AI and Business, AI by OpenAI, AI Dangers, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

OPEN LETTER. A Right to Warn about Advanced Artificial [...]

SCIENCE. Managing extreme AI risks amid rapid progress. Preparation requires technical research and development, as well as adaptive, proactive governance. 20 MAY.

May 31, 2024 | Categories: AGI, AI and Business, AI and Fake News, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

A very important read from extremely respected and knowledgeable [...]

SEMINAL REPORT. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. 10 MAY 2024.

May 19, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

PRESS RELEASE. U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team. The institute is housed at the National Institute of Standards and Technology (NIST). April 16, 2024.

April 22, 2024 | Categories: AGI, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Legal Matters, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Good news for AI safety from U.S. Department of [...]

How can AI take control and kill all humans on Earth? Take 15 minutes to learn the answer (no technical knowledge required).

April 20, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

FT. We must slow down the race to God-like AI. I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me. Ian Hogarth. APRIL 12, 2023.

April 12, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"God-like AI could be a force beyond our control [...]

BERKELEY NEWS. How to keep AI from killing us all. In a new paper, UC Berkeley researchers argue that companies should not be allowed to create advanced AI systems until they can prove they are safe.

April 11, 2024 | Categories: AGI, AI and Business, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

As companies build ever more powerful AI systems — [...]

FLI. Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter. 11,251 Signatures. Published October 28, 2015.

March 2, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Biosecurity Risk, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FLI. Research Priorities for Robust and Beneficial Artificial Intelligence: An [...]
