Will A.I. Kill Us All? Our Guest [Eliezer Yudkowsky] Says It Might. | Interview Hard Fork

September 15, 2025 | AGI, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, Nature, P(doom)

"The thing I would say to sort of refute [...]

The troubling decline in conscientiousness. A critical life skill is fading out — and especially fast among young adults. FT.

August 19, 2025 | AI and Health, AI and Social Media, AI Emergent Ability, AI Industry, AI Industry Products, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, Nature

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. [...]

“Superintelligent AI could create a virus that just kills us all” – Nobel Laureate, Geoffrey Hinton

April 3, 2025 | AGI, AI and Killer Robots, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

THE GUARDIAN. California won’t require big tech firms to test safety of AI after Newsom kills bill. Governor vetoes bill that would require generative AI safety testing after tech industry says it’d drive companies away.

September 30, 2024 | AGI, AI and Business, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

THE NEW YORK TIMES. OPINION. NICHOLAS KRISTOF. A.I. May Save Us or May Construct Viruses to Kill Us. July 27, 2024

September 3, 2024 | AGI, AI and Business, AI and Fake News, AI and Health, AI and Jobs, AI and Killer Robots, AI Benefits, AI Biosecurity Risk, AI Dangers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Opinions, AI Regulation, AI Robot, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"A.I. may also make it easier to manipulate people, [...]

THE NATION. Big Tech Is Very Afraid of a Very Modest AI Safety Bill. Despite claiming to support AI safety, powerful tech interests are trying to kill SB1047.

August 30, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

FT. Silicon Valley in uproar over Californian AI safety bill. Tech companies launch fightback against proposed law to introduce ‘kill switch’ on powerful artificial intelligence models.

June 10, 2024 | AGI, AI and Business, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Legal Matters, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

"A very important read from an extremely respected source [...]"

BBC NEWS. Killer robots: Tech experts warn against AI arms race. Published 29 July 2015.

May 4, 2024 | AGI, AI and Killer Robots, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

How can AI take control and kill all humans on Earth? Take 15 minutes to learn the answer. (No technical knowledge required.)

April 20, 2024 | AGI, AI and Business, AI and Killer Robots, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

BERKELEY NEWS. How to keep AI from killing us all. In a new paper, UC Berkeley researchers argue that companies should not be allowed to create advanced AI systems until they can prove they are safe.

April 11, 2024 | AGI, AI and Business, AI Biosecurity Risk, AI Dangers, AI Data Centers, AI Emergent Ability, AI Existential Risk, AI in Politics & Government, AI Industry, AI Industry Products, AI Provably Safe, AI Regulation, AI Safety Organisations, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)

As companies build ever more powerful AI systems — [...]

NATURE NEUROSCIENCE. Natural language instructions induce compositional generalization in networks of neurons. [Scientists create AI models that can talk to each other and pass on skills with limited human input]. March 18, 2024.

March 21, 2024 | AGI, AI Dangers, AI Emergent Ability, AI Existential Risk, AI Science, AI Scientists, AI Thought Leaders, Blog Posts, P(doom)
