Apparently, attempting to use a “good” AGI to defend against a “bad” AGI has a very low theoretical probability of mitigating catastrophe.
LessWrong: What does it take to defend the world against out-of-control AGIs?
by Steven Byrnes
25th Oct 2022
Possible solution: a [contained, mutualistic, symbiotic] “super-powerful sovereign AGI motivated to ensure a good future of life without requiring continued human supervision”