CA: I mean, you’ve had some departures from your safety team. How many people have departed, and why have they left?
SA: We have. I don’t know the exact number, but there are clearly different views about AI safety systems. I would really point to our track record.
Editor’s Humble Comment:
1. Safe AI Forever is NOT about the past (track record).
2. Safe AI Forever is about the future we want (zero X-risk).
3. AI safety is impossible, untestable, and unpredictable (they all say so).
4. The Safe AI requirement is for mathematically provable safety guarantees (no exceptions).
5. Rethink Safe AI engineering. (Get it right!)
The AI revolution is here to stay, says Sam Altman, the CEO of OpenAI. In a probing, live conversation with head of TED Chris Anderson, Altman discusses the astonishing growth of AI and shows how models like ChatGPT could soon become extensions of ourselves. He also addresses questions of safety, power and moral authority, reflecting on the world he envisions — where AI will almost certainly outpace human intelligence. (Recorded live at TED2025 on April 11, 2025)