Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4
- “Nobody fully understands it. So how can you consent? The experts don’t fully understand. The people making the systems don’t understand how they work, what they are capable of. So nobody can fully consent to having this experiment performed on them. Out of 8 billion humans, no one can say I give my informed consent to have this technology released in my environment and I’m willing to take the consequences, because nobody knows what they are agreeing to.”
Dr. Roman Yampolskiy
ROMAN. At some point you definitely get there [AGI]. But it may be a point of no return.
JOHN. So you think any governmental regulation is essentially theater?
ROMAN. Well, again, I want to see an example of a technological issue where government regulation made a difference. Spam and viruses are obvious examples.
JOHN. Is there anything a bad actor could do that could cause extinction?
ROMAN. So I think at lower levels of intelligence a bad actor can provide a malevolent payload and malevolent goals. But with full superintelligence, it doesn’t matter who created it. If it’s uncontrolled and independent, it’s completely irrelevant what its origins are. It will start from basic first principles of physics, discover the whole universe of knowledge, and decide what to do on its own.
[ergo: The future of our humanity is unknown. Unless safe AI engineering is delivered before AGI/ASI escape, doom is a relatively high probability for us. IF humans lose control of our planet, if our own destiny is controlled by a superintelligent alien power, THEN our fate is up to the AGI/ASI. Currently we have no idea how it works, how it thinks, or what it thinks. We have no idea what goals will emerge or what plans it would develop. We have no idea if AGI/ASI will ultimately love humans, hate humans, or squash humans the way a human would step on an ant and think nothing of it. We are running out of time to solve the existential problem, and currently the AI industry freely admits it has no capability or plan to guarantee containment, control, and alignment of AI.]