How can AI take control and kill all humans on earth? Take 15 minutes to learn the answer. (No technical knowledge required.)

April 20, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FT. We must slow down the race to God-like AI. I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me. Ian Hogarth. APRIL 12 2023.

April 12, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"God-like AI could be a force beyond our control [...]

BERKELEY NEWS. How to keep AI from killing us all. In a new paper, UC Berkeley researchers argue that companies should not be allowed to create advanced AI systems until they can prove they are safe.

April 11, 2024 | Categories: AGI, AI and Business, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

As companies build ever more powerful AI systems — [...]

FLI. Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter. 11,251 Signatures. Published October 28, 2015.

March 2, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Biosecurity Risk, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

PauseAI. The Existential Risk Of Superintelligent AI.

February 29, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Killer Robots, AI and Social Media, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

OpenAI. Weak-to-Strong Generalisation: Eliciting Strong Capabilities With Weak Supervision. [probably won’t work]

December 15, 2023 | Categories: AGI, AI by OpenAI, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

An interesting read from a respected source. However... Stuart [...]

THE NEW YORK TIMES. How Nations Are Losing a Global Race to Tackle A.I.’s Harms. Alarmed by the power of artificial intelligence, Europe, the United States and others are trying to respond — but the technology is evolving more rapidly than their policies. 06 DEC.

December 6, 2023 | Categories: AGI, AI and Business, AI and Killer Robots, AI Benefits, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

A very good read from a respected source! [...]

Guardrails? Unfortunately NO. Mathematically provable containment, forever? YES. (sole option)

November 29, 2023 | Categories: AGI, AI and Business, AI and Health, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Science, AI Thought Leaders, Blog Posts, P(doom)

MOLLY ROSE FOUNDATION REPORT. Preventable yet pervasive. The prevalence and characteristics of harmful content, including suicide and self-harm material, on Instagram, TikTok and Pinterest.

November 29, 2023 | Categories: AI and Social Media, AI by Meta, AI Industry, AI Industry Products, AI Legal Matters, AI Science, AI Thought Leaders, Blog Posts

FT. OpenAI and the rift at the heart of Silicon Valley. The tech industry is divided over how best to develop AI, and whether it’s possible to balance safety with the pursuit of profits. 24 NOV.

November 25, 2023 | Categories: AI and Business, AI and Jobs, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI by OpenAI, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Opinions, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

NATURE. The world’s week on AI safety: powerful computing efforts launched to boost research. UK and US governments establish efforts to democratize access to supercomputers that will aid studies on AI systems. 03 NOV 2023.

November 3, 2023 | Categories: AI Existential Risk, AI in Politics & Government, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FLI recommendations for the UK Global AI Safety Summit, Bletchley Park, 1-2 November 2023.

November 1, 2023 | Categories: AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

Existential Risk Observatory. Reducing human extinction risks by informing the public debate.

October 30, 2023 | Categories: AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

What is P(doom)? P(doom) is the probability, as estimated by AI scientists, that AI will wipe out all of humanity. This is what Bing, ChatGPT and leading AI researchers say about P(doom).

October 16, 2023 | Categories: AI by OpenAI, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Industry Products, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

DEADLY MATTER OF FACT: CONTAINMENT of AI/AGI is A COMMON PROBLEM. ALL of HUMANITY's Problem. OUR Problem. THEIR Problem. HIS Problem. HER Problem. MY Problem. YOUR Problem. NOW.

October 15, 2023 | Categories: AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Regulation, AI Thought Leaders, Blog Posts

Jaan Tallinn: As of July 2023, my top priorities for reducing existential risks from AI are these.

October 14, 2023 | Categories: AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Opinions, AI Regulation, AI Safety Organisations, AI Science, AI Thought Leaders, Blog Posts, P(doom)
