
Stuart Russell – Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9) | The Trajectory with Dan Faggella

By | October 31, 2025 | AGI, AI and Business, AI Biosecurity Risk, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, Nature, P(doom)

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

AI’s Chernobyl Moment: What Stuart Russell Was Just Told | For Humanity #72

By | October 25, 2025 | AGI, AI and Fake News, AI and Jobs, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Open Source, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, Nature, P(doom)

Looking Ahead. Why We Need Safe and Ethical AI | Stuart Russell | IASEAI

By | October 6, 2025 | AGI, AI and Business, AI Biosecurity Risk, AI Emergent Ability, AI Existential Risk, AI in Movies, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Looking Ahead. Why We Need Safe and Ethical AI [...]

Hinton. Bengio. Tegmark. Russell. | IASEAI 2025 International Association for Safe & Ethical AI

By | August 22, 2025 | AGI, AI and Jobs, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"When I think about what it would be like to [...]

“You can’t fetch the coffee if you’re dead.” — Prof. Stuart Russell, “Human Compatible”

By | May 23, 2025 | AGI, AI and Killer Robots, AI Biosecurity Risk, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

Stuart Russell on AI and Jobs: By the end of this decade AI may exceed human capabilities in every dimension and perform work for free, so there may be more employment, it just won’t be employment of humans.

By | October 22, 2024 | AGI, AI and Business, AI and Jobs, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

Stuart Russell: saying AI is like the calculator is a [...]

“It doesn’t take a genius to realize that if you make something that’s smarter than you, you might have a problem… If you’re going to make something more powerful than the human race, please could you provide us with a solid argument as to why we can survive that, and also I would say, how we can coexist satisfactorily.” — Professor Stuart Russell

By | October 6, 2024 | AGI, AI and Business, AI and Fake News, AI and Health, AI and Jobs, AI and Killer Robots, AI and Social Media, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Provably Safe, AI Regulation, AI Robot, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Stuart Russell | Provably Beneficial AI (2017) | Future Of Life Institute

By | October 5, 2024 | AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Provably Safe, AI Robot, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"It doesn't take a genius to realize that if [...]

Stuart Russell at NORA – Norwegian AI Research Consortium. AI: What if we succeed?

By | October 5, 2024 | AGI, AI and Business, AI and Fake News, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Stuart Russell: How do we prevent unsafe AI?

By | September 16, 2024 | AGI, AI and Business, AI and Killer Robots, AI and Social Media, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

IMPORTANT LETTER to California Governor Newsom. Bengio, Hinton, Russell, et al.

By | September 10, 2024 | AGI, AI and Business, AI and Health, AI and Jobs, AI and Social Media, AI Benefits, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Open Source, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts

CONTROL.AI | Peter Thiel & Joe Rogan | Mark Cuban & Jon Stewart | Demis Hassabis & Walter Isaacson | Elon Musk & Lex Fridman | Scott Aaronson & Alexis Papazoglou | Stuart Russell & Pat Joseph | Jaan Tallinn | Major General (Ret.) Robert Latiff | Former US Navy Secretary Richard Danzig

By | August 19, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

SB 1047 – Safe & Secure AI Innovation. Letter to CA state leadership from Professors Bengio, Hinton, Lessig, & Russell. 07 AUG 24.

By | August 15, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Opinions, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

“Powerful AI systems bring incredible promise, but the risks [...]

California Live! presents Will AI Be Humanity’s Last Act? with Stuart Russell

By | July 21, 2024 | AGI, AI and Business, AI and Killer Robots, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

Stuart Russell, “AI: What If We Succeed?” At The Neubauer Collegium for Culture and Society. April 25, 2024.

By | April 25, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI Industry, AI Industry Products, AI Provably Safe, AI Science, AI Scientists, AI Thought Leaders, Blog Posts
