FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

This is an interview with Max Tegmark, MIT professor, founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris. See the full article from this episode: https://danfaggella.com/tegmark1 Listen to the full podcast episode on Apple Podcasts: https://podcasts.apple.com/us/podcast… This episode referred to the following other essays and resources:
— Max's AGI framework / "Keep the Future Human" – https://keepthefuturehuman.ai/
— AI Safety Connect – https://aisafetyconnect.com

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI, and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

Today my Trajectory episode with Prof Max @tegmark is live. It was recorded live during the AI Action Summit in Paris. Two big takeaways, among many:
1. Military leadership seeing AGI as a threat may increase (not decrease) an arms race.
2. Wishing for posthuman futures TOO soon is ridiculous.

Let's lay out a few of these points here:

1. The tipping point for coordination / military leadership.

Max talks about a tipping point that arrives when the militaries of the US and China both see AGI itself as a risk to them. He has faith that international coordination is possible, and that a combination of (a) raising awareness (like with the frameworks he shares in this episode) and (b) massive, scary growth in AGI capabilities could very well lead to an attractor state of coordination over conflict. He believes it's important to make "the control problem" well known ahead of time, so that if/when an AGI disaster happens, it won't be seen as an attack from the enemy (which would accelerate an arms race), but as a shared danger for both nations.

He's also no blind optimist. He's very clear that the odds of international coordination may not be good, but even if the odds are slim, trying is vastly better than having a million Unworthy Successor AGIs hurled into the world at once.

2. Posthuman futures – but only when we get them right.

Max's Life 3.0 is a pretty damn inspiring long-term look at AGI futures (albeit through an anthropocentric lens). He says in this interview that there might be a long-term future (he says a million years out, which seems kind of hyperbolic and wholly unrealistic, but alas) where humans may not even want to control AGIs, and posthuman life should flourish.
But he rightly points out that: (a) we have no idea what we're even building now, so hurling posthuman life into the world is ridiculous, and (b) we should focus on understanding the tech and obtaining global coordination and human benefits as a near-term step. While my timelines for when a posthuman transition will happen are vastly more near-term than Max's, the argument AGAINST "rushing to posthumanism" is one I wholly agree with.