
Stuart Russell, “AI: What If We Succeed?” at the Neubauer Collegium for Culture and Society. April 25, 2024.


FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]


SCIENCE. Regulating Advanced Artificial Agents | Bengio, Russell et al.


A very important report, by thought leaders in AI [...]


Managing AI Risks in an Era of Rapid Progress. Russell, Bengio et al. 12 NOV23.




Existential Risk Observatory. AI Summit Talks featuring Professor Stuart Russell. 31 OCT 2023.




BBC HARDtalk. Professor of Computer Science at University of California, Berkeley – Stuart Russell. 14 OCT 2023.




Written Testimony of Stuart Russell, Professor of Computer Science, The University of California, Berkeley, Before the U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, & the Law. August 2, 2023.




Safe AI by Design. AssistanceZero. Assistance Game. Relativistic Adversarial Reasoning Optimization (RARO). December 20, 2025.


In The Assistance Game the human and assistant share [...]


Quantifying Expert Consensus on Existential Risk: A Biographical and Statistical Analysis of the Top 50 Scientists and Leaders in Artificial Intelligence Safety and Alignment. November 29, 2025.


P(Doom) at Wikipedia [...]
