
A very good resource from respected experts!

AI ALIGNMENT FORUM

An important resource affiliated with LessWrong.

Welcome & FAQ!

The AI Alignment Forum was launched in 2018. Since then, several hundred researchers have contributed approximately two thousand posts and nine thousand comments. Nearing the third birthday of the Forum, we are publishing this updated and clarified FAQ.

This recent post is a good example of the perspective and analysis the Forum publishes:

  • [Intro to brain-like-AGI safety] 1. What’s the problem & Why work on it now? by Steve Byrnes. “I’ll argue that ‘radically nonhuman motivations’ is not just possible for a brain-like AGI, but is my baseline expectation for a brain-like AGI. I’ll argue that this is generally a bad thing, and that we should consider prioritizing certain lines of R&D in a proactive effort to avoid that…”
