Stuart Russell, “AI: What If We Succeed?” at the Neubauer Collegium for Culture and Society. April 25, 2024.

April 25, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Provably Safe, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

SCIENCE. Regulating Advanced Artificial Agents | Bengio, Russell et al.

April 18, 2024 | AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in EU, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

A very important report by thought leaders in AI [...]

Existential Risk Observatory. AI Summit Talks featuring Professor Stuart Russell. 31 OCT 2023.

October 31, 2023 | AI and Business, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

BBC HARDtalk. Professor of Computer Science at the University of California, Berkeley – Stuart Russell. 14 OCT 2023.

October 14, 2023 | AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Opinions, AI Regulation, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

Written Testimony of Stuart Russell, Professor of Computer Science, The University of California, Berkeley, Before the U.S. Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, & the Law

August 2, 2023 | AI Dangers, AI Existential Risk, AI in Politics & Government, AI Opinions, AI Regulation, AI Scientists, AI Thought Leaders, Blog Posts

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

CNBC. Current and former OpenAI employees warn of AI’s ‘serious risk’ and lack of oversight. 04 JUNE 2024.

June 5, 2024 | AI and Business, AI by OpenAI, AI Dangers, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

OPEN LETTER. A Right to Warn about Advanced Artificial [...]

SCIENCE. Managing extreme AI risks amid rapid progress. Preparation requires technical research and development, as well as adaptive, proactive governance. 20 MAY 2024.

May 31, 2024 | AGI, AI and Business, AI and Fake News, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

A very important read from extremely respected and knowledgeable [...]

SEMINAL REPORT. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. 10 MAY 2024.

May 19, 2024 | AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

PRESS RELEASE. U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team. The institute is housed at the National Institute of Standards and Technology (NIST). April 16, 2024

April 22, 2024 | AGI, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Legal Matters, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Good news for AI safety from U.S. Department of [...]
