FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“AI clearly poses an imminent threat, a security threat, imminent in our lifetimes, to humanity.” — Paul Tudor Jones

Paul Tudor Jones, Tudor Investment Corporation founder and CIO and Robin Hood Foundation founder and board member, joins ‘Squawk Box’ to discuss the future of AI, the potential dangers posed by the technology, latest market trends, President Trump’s tariff policy, state of the economy, what to expect from the Robin Hood Foundation’s annual benefit next week, and more.

Welcome back to Squawk Box. This morning we are joined by a very special guest right here at the Nasdaq to discuss the economy, markets and so much more. Legendary investor Paul Tudor Jones is here, founder and chief investment officer of Tudor Investment Corporation. And he is, of course, the founder and a member of the board of the Robin Hood Foundation, which is going to be hosting its annual benefit next week. Always a big night in New York, a lot of money hopefully to be raised. >> We're going to raise $75 million. >> Well, we're going to talk about that in just a minute. But let's talk markets first. I was going to ask you about the stock market itself, but then you just said something to me which makes me a little bit nervous, which is that you're focused less on that right this moment than you are on artificial intelligence. What do you mean? >> Well, let me just say, I was minding my business, and I went to this tech conference about two weeks ago out west, and I just want to share with you what I learned there. Chatham House rules, so we can talk about the content. It was a small one, 40 notables, but real notables, household names that you would recognize: the leaders in finance, politics, science, tech. And they had a variety of panels. I was on one on capitalism, and there was a tech panel that had four of the leading modelers of the AI models that we're all using today. So it would be persons one through five of each of those four models. And the quick three takeaways from that are: one, wow, AI can be such a force for good, and we're going to see it immediately in both health and education, very quickly. It's going to be fantastic. That's the good news. Two, the neutral news: these models are increasing in their efficiency and performance between 25% on the very low end and 500% on the high end every three or four quarters. So it's not even curvilinear, it's a vertical lift.
That's how powerful artificial intelligence is becoming. And then thirdly, and the one that disturbed me the most, is that AI clearly poses an imminent threat, a security threat, imminent in our lifetimes, to humanity. And that was the one that really, really got me. >> When you say imminent threat, what do you mean? >> So I'll get to it. They had a panel of, again, four of the leading tech experts, and about halfway through, someone asked them on AI security: well, what are you doing on AI security? And they said the competitive dynamic is so intense among the companies, and then geopolitically between Russia and China, that there's no agency, no ability to stop and say, maybe we should think about what we're actually creating and building here. And so then the final question is, well, what are you doing about it? He said, well, I'm buying 100 acres in the Midwest, I'm getting cattle and chickens, and I'm laying in provisions. And for real, for real, for real. That was obviously a little disconcerting. And then he went on to say, I think it's going to take an accident where 50 to 100 million people die to make the world take the threat of this really seriously. Well, that was a freaky deke to me. And no one pushed back on him on that panel. And then afterwards we had a breakout session, which was really interesting. All 40 people got up in a room like this, and they had a series of propositions, and you had to either agree or disagree with each proposition. And one of the propositions was: there's a 10% chance in the next 20 years that AI will kill 50% of humanity. Agree or disagree? I'd say the vast majority of the room moved to the disagree side. I had just heard Joe Rogan and Elon Musk two months ago, where Elon Musk said there's only a 20% chance that AI can annihilate humanity. Now I know why he wants to go to Mars, right?
>> And so about six or seven of us went to the agree side, and I'd gone there because of what I'd heard Elon Musk say, who's maybe the most brilliant engineer of our time. All four of the leading developers of the AI models were on the agree side. And then the two sides got to debate, and one of the modelers says to the disagree side: if you don't think there's a 10% chance, as fast as these models are growing and how quickly they're commoditizing knowledge, how easily they're making it accessible, that someone can biohack, because that's where the real weakness is, a weapon that could take out half of humanity... I don't know, 10% seems reasonable to me. >> Okay. So thank you for bringing us this great, great news over breakfast. >> No, no, no. Can I just say, I'm not a tech expert. I'm not. But I've spent my whole life managing risk. That's why I'm here today, because I'm as good as there is on macro risk management. And we just have to realize, to their credit, all these folks in AI are telling us: we're creating something that's really dangerous. It's going to be really great too, but we're helpless to do anything about it. That's to their credit.