FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“One of the first major professions to be fully automated will actually be programming because that’s what the companies are trying hardest to achieve, because they realize that will help them to accelerate their own research and compete with each other… It doesn’t seem like OpenAI or any other company is at all ready for what’s coming, and they don’t seem inclined to be getting ready anytime soon. They’re not on track and they don’t seem like they’re going to be on track… There’s this important technical question of AI alignment, which is how do we actually make sure that we continue to control these AIs after they become fully autonomous and smarter than we are. And this is an unsolved technical problem.” — Daniel Kokotajlo

Are AI companies recklessly racing toward artificial superintelligence, or can we avoid a worst-case scenario? On GZERO World, Ian Bremmer sits down with Daniel Kokotajlo, co-author of AI 2027, a new report that forecasts how artificial intelligence might progress over the next few years. As AI approaches human-level intelligence, AI 2027 predicts its impact will “exceed that of the Industrial Revolution,” but it warns of a future where tech firms race to develop superintelligence, safety rails are ignored, and AI systems go rogue, wreaking havoc on the global order. Kokotajlo, a former OpenAI researcher, left the company last year, warning that it was ignoring safety concerns and avoiding oversight in its race to develop ever more powerful AI. He joins Bremmer to talk about the race to superhuman AI, the existential risk it poses, and what policymakers and tech firms should be doing right now to prepare for an AI future experts warn is only a few short years away. “One of the unfortunate situations that we’re in as a species right now is that humanity in general mostly fixes problems after they happen,” Kokotajlo says. “Unfortunately, the problem of losing control of your army of super intelligences is a problem that we can’t afford to wait and see how it goes and then fix it afterwards.”