FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

At Axios House SXSW 2025, Axios publisher Nicholas Johnston speaks to Future of Life Institute co-founder and executive director Anthony Aguirre in a sponsored View From the Top conversation in Austin, Texas.

Nicholas Johnston: Thank you all for being here today at Axios House at South by Southwest. We do these all over the world, but it is great to be back here deep in the heart of Texas, so thanks for joining us. And of course, nothing that we do at Axios would be possible without the support of our partners, so a huge, huge thanks to the Future of Life Institute for making this conversation about AI possible. I'm very excited to welcome, for a View From the Top conversation, the institute's co-founder and executive director, Anthony Aguirre. Anthony, welcome to Axios House. How are you, sir? Let's dive right into it. You folks have released some research on the development of AGI calling for its halt, and you've got some swag here, which I love: "Keep the Future Human." When we talk about AI in the newsroom, I like to say that, as a human, I am very pro-human, so I'm on the side of the humans, and this is a great hat. But let's dig in a little bit to the reasoning behind that. Tell us your viewpoint: what AGI is, what the threats are, and why, in your mind, we have to close the gates.

Anthony Aguirre: Being pro-human was taken for granted up until recently. I wrote "Keep the Future Human" because, as a scientist and an educator, I feel like there is something truly immense that is happening, and people are talking around it, around the borders, but it is not a central focus of conversation. AI is a central focus of conversation, but AI is not the only thing that's happening. AGI, artificial general intelligence, is the thing that some of the largest companies on Earth have now declared that they are after, that they are developing, so we need to talk about what that is. The AI that we've had so far is largely a tool. We've had narrow AI like AlphaFold, which folds proteins, AlphaGo, which plays Go, and image generation tools. These are powerful systems, and they're better than humans at some of those things, like folding proteins, but they only do one thing; they're not going to go off the rails. More recently, we've had general-purpose AI systems, ChatGPT and Claude and all of these things. These can do a lot of things: they can write poetry, they can write code, they can generate and analyze images, they can do physics problems (that's my thing). They can do many of the things that humans can do, and in fact they're more general in some senses. There's no human that speaks a hundred languages and knows all of the subjects, so they're more general in some senses, but still they're relatively passive. Where we are turning now is toward autonomous systems, systems that can make their own plans, that can follow complex sets of actions toward goals, that can have their own goals and compose their own goals. And it's that combination of the intelligence, the power of being able to do the things that humans can do, the generality, and the autonomy that is really the way to think about AGI. I think it's better called autonomous general intelligence than artificial general intelligence. Now, what does that mean? The autonomy and the generality and the intelligence together are what make humans unique; that's what gives us the ability to be, in a sense, the stewards of the Earth. If you have a machine that has those three things, that is what makes us replaceable. And this is the crucial thing: this is why the companies are after artificial general intelligence, and we should think about what it really means if we build machines that are as capable as we are intellectually, and as general, and can follow their own goals.

Nicholas Johnston: Right, but there are very few examples in history of humans being confronted with a new technology and saying, well, no, we're not going to pursue it. So talk a little bit about why this is different, and what steps we can take to have a different outcome.

Anthony Aguirre: All of the things that we've built in history as technologies, almost all of them, have been tools. They've been things that we've designed to extend our capability, to empower us to do the things that we want to do. So I think the real difference, and I think this is fundamental to understand, is that AGI, and after it superintelligence, is not a tool. It is a competitor. It is something that is more like a different species. And so when you think about some of the technologies we haven't pursued, what do we not have? We didn't build super soldiers using eugenics or genetic engineering. We didn't build bioweapons that took off and proliferated around the world. The things that we have not built, that we have chosen not to play God with, are the things that take on a life of their own and are competitors for us. I think that's the situation here. We have a choice whether we want to build powerful AI tools that can, say, solve cancer. Everybody agrees with that: let's cure cancer. But what does that actually take? It does not, I think, take building a magical AGI superintelligent genie and saying, "Please solve cancer for us." It takes developing the powerful AI tools, the data sets, and the integration with the scientific method to actually do the science of curing cancer. So we have a choice between those two things. We have turned away from technologies before that would have really undermined what it means to be human, that would have created a competitor to humans as a species. That, I think, is what we could do here too.

Nicholas Johnston: It's interesting that you frame it as a choice, but there's a broad sense that we've already decided. If you look at business leaders, if you look at politicians here, the race is on, a global competition between the United States and other countries around the world. What is the message, then, that you say to people like Sam Altman, who run these companies, or folks in the administration, or Senator Rounds, who was here earlier, to make that different choice?

Anthony Aguirre: I mean, I think there's a sense that we're on this giant ship that is headed in some direction. Unfortunately, with AGI and superintelligence, an iceberg is where it's headed. And when you've got a giant ship headed toward an iceberg, pulling the brakes is neither going to happen nor be effective enough, but you can steer, and I think that's what we have to do here. We can choose to direct ourselves toward the powerful AI tools that are controllable, or we can choose to double down, as the companies are currently doing, on AGI and superintelligence, which I think will not be controllable. So it's really important to fully contemplate what is trying to be built here. Dario Amodei, the head of Anthropic, evoked the idea of a country of a million geniuses in a data center. So let's think a little bit about what that means. These are a million geniuses that operate at many times human speed, that have prodigious memory, that never tire, never sleep, that can copy and reproduce themselves and improve themselves. This country of a million geniuses in a data center: how is that kept under control? How do we control a million geniuses operating thousands of times faster than us, with beyond-human capabilities? Does anyone have the answer for that? We don't. We don't control that. So this is what we're talking about. And it's not just that AGI is going to replace most humans in their jobs, though that would happen, and it's not just that it will totally undermine the civilization that we have, though that would happen. It's that AGI will lead to superintelligence almost inevitably. The thing that Dario described is not one AGI but a community of them, and that will happen almost immediately. That community improving itself is superintelligence, and that's just the floor of what that would look like.

Nicholas Johnston: So before we run out of time, let's spin this forward and maybe give people in the room a little bit of homework. You've painted this picture of an infinite number of super-geniuses who know everything. You've got my attention (my words, not yours). What do we do? What can be the next step here to arrest this?

Anthony Aguirre: The reason I wrote "Keep the Future Human" was not just to sound the alarm that this is happening, but to ask what, concretely, we can do. And I think the answer here is that we actually can take very significant and concrete steps. There are policy solutions: they include liability, and they include governance of the computation that goes into these giant AI systems. But predominantly, what they take is realization and political will. I think the general public, and the people who are currently in power and are going to lose that power to AGI and superintelligence, need to understand that this is coming. And this is not coming in 20 years or in 50 years. Sam Altman says he knows how to do it; Dario Amodei says 2026 or 2027. There are tech companies that promise things that arrive maybe years or decades later; that has not been the case with AI so far. Things have come faster than we thought they were coming. So we do not have a lot of time to significantly change course here. The good news is that there are concrete things we can do; we just have to decide how to do them. And so a lot of that now is informing people of these kinds of risks and getting them active in the short time we have.

Nicholas Johnston: And 2026 is next year.

Anthony Aguirre: Yep. Yep.

Nicholas Johnston: All right, a call to action, and a really important conversation. Anthony, thank you so much for being here, and thanks to the Future of Life Institute for making this conversation possible.