SCIENCE. Regulating Advanced Artificial Agents | Bengio, Russell et al.

April 18, 2024 | Categories: AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

A very important report, by thought leaders in AI [...]

Existential Risk Observatory. AI Summit Talks featuring Professor Stuart Russell. 31 OCT 2023.

October 31, 2023 | Categories: AI and Business, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

BBC HARDtalk. Professor of Computer Science at University of California, Berkeley – Stuart Russell. 14 OCT 2023.

October 14, 2023 | Categories: AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Opinions, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

Written Testimony of Stuart Russell, Professor of Computer Science, The University of California, Berkeley, Before the U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, & the Law

August 2, 2023 | Categories: AI Dangers, AI Existential Risk, AI in Politics & Government, AI Opinions, AI Regulation, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

BERKELEY NEWS. How to keep AI from killing us all. In a new paper, UC Berkeley researchers argue that companies should not be allowed to create advanced AI systems until they can prove they are safe.

April 11, 2024 | Categories: AGI, AI and Business, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

As companies build ever more powerful AI systems — [...]

FLI. Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter. 11,251 Signatures. Published October 28, 2015.

March 2, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Biosecurity Risk, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FLI. Research Priorities for Robust and Beneficial Artificial Intelligence: An [...]

PauseAI. The Existential Risk Of Superintelligent AI.

February 29, 2024 | Categories: AGI, AI and Business, AI and Health, AI and Killer Robots, AI and Social Media, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

OpenAI. Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. [probably won’t work]

December 15, 2023 | Categories: AGI, AI by OpenAI, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

An interesting read from a respected source. However... Stuart [...]
