PauseAI. The Existential Risk Of Superintelligent AI.

Posted February 29, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Killer Robots, AI and Social Media, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

OpenAI. Weak-to-Strong Generalisation: Eliciting Strong Capabilities With Weak Supervision. [probably won’t work]

Posted December 15, 2023 | Categories: AGI, AI by OpenAI, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

An interesting read from a respected source. However... Stuart [...]

THE NEW YORK TIMES. How Nations Are Losing a Global Race to Tackle A.I.’s Harms. Alarmed by the power of artificial intelligence, Europe, the United States and others are trying to respond — but the technology is evolving more rapidly than their policies. 06 DEC.

Posted December 6, 2023 | Categories: AGI, AI and Business, AI and Killer Robots, AI Benefits, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

A very good read from a respected source! [...]

Guardrails? Unfortunately NO. Mathematically provable containment, forever? YES. (sole option)

Posted November 29, 2023 | Categories: AGI, AI and Business, AI and Health, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Science, AI Thought Leaders, Blog Posts, P(doom)

MOLLY ROSE FOUNDATION REPORT. Preventable yet pervasive. The prevalence and characteristics of harmful content, including suicide and self-harm material, on Instagram, TikTok and Pinterest.

Posted November 29, 2023 | Categories: AI and Social Media, AI by Meta, AI Industry, AI Industry Products, AI Legal Matters, AI Science, AI Thought Leaders, Blog Posts

FT. OpenAI and the rift at the heart of Silicon Valley. The tech industry is divided over how best to develop AI, and whether it’s possible to balance safety with the pursuit of profits. 24 NOV.

Posted November 25, 2023 | Categories: AI and Business, AI and Jobs, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI by OpenAI, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Opinions, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

NATURE. The world’s week on AI safety: powerful computing efforts launched to boost research. UK and US governments establish efforts to democratize access to supercomputers that will aid studies on AI systems. 03 NOV 2023.

Posted November 3, 2023 | Categories: AI Existential Risk, AI in Politics & Government, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FLI recommendations for the UK Global AI Safety Summit, Bletchley Park, 1-2 November 2023

Posted November 1, 2023 | Categories: AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

Existential Risk Observatory. Reducing human extinction risks by informing the public debate.

Posted October 30, 2023 | Categories: AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

What is P(doom)? P(doom) is the probability, expressed as a percentage, that AI scientists assign to AI wiping out all of humanity. This is what Bing, ChatGPT and leading AI researchers say about P(doom).

Posted October 16, 2023 | Categories: AI by OpenAI, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Industry Products, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

DEADLY MATTER OF FACT: CONTAINMENT of AI/AGI is A COMMON PROBLEM. ALL of HUMANITY’s Problem. OUR Problem. THEIR Problem. HIS Problem. HER Problem. MY Problem. YOUR Problem. NOW.

Posted October 15, 2023 | Categories: AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Regulation, AI Thought Leaders, Blog Posts

Dario Amodei, CEO, Anthropic | Biosecurity Risk of AI, as the Senate Judiciary Committee holds a hearing on AI oversight and regulation — 25 JULY 23

Posted August 11, 2023 | Categories: AI Biosecurity Risk, AI by Anthropic, AI Existential Risk, AI in Politics & Government, AI Industry, AI Opinions, AI Regulation, AI Scientists, AI Thought Leaders, Blog Posts

THE WASHINGTON POST. AI leaders warn Congress that AI could be used to create bioweapons. Two AI researchers and the CEO of AI startup Anthropic testified about the long-term risks of AI and said Congress needs to institute rules to control it. 25 JULY 2023.

Posted July 25, 2023 | Categories: AI Biosecurity Risk, AI Existential Risk, AI in Politics & Government, AI Regulation, AI Scientists, AI Thought Leaders, Blog Posts
