YES- it is time to worry, and act.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

One of Anthropic’s latest AI models has the ability to override its developers’ intentions. Cenk Uygur, Jordan Uhl and Jackson White discuss on The Young Turks. “One of Anthropic’s latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown. Why it matters: Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence — behaviors they’ve worried and warned about for years.”

Your Support is Crucial to the Show: https://go.tyt.com/jointoday The largest online progressive news show in the world. Hosted by Cenk Uygur and Ana Kasparian. LIVE weekdays 6-8 pm ET. http://youtube.com/theyoungturks/live Help support our mission and get perks. Membership protects TYT’s independence from corporate ownership and allows us to provide free live shows that speak truth to power for people around the world. TYT is the largest online news network in the country. It presents news, context and analysis. We have dozens of on-air hosts on half a dozen major channels on almost all the platforms. We also have a live 24-hour news channel that features all our shows. The Young Turks is the flagship show of the TYT Network. The Young Turks is the longest-running daily show in internet history. We’re about to celebrate our 20th anniversary as a digital native show. We are also the first ever YouTube partner channel. So, we are the original YouTubers. On the network, we do things a little differently than other places. We present the news first, with all of the facts and context you need, which is a rarity in commentary-heavy online shows. We then also give analysis and perspective, which is rare in traditional news. Our commentary and perspective are almost always on the left, but we have an enormous range of opinions on the network. Hosts are allowed to think and say whatever they like. Perspectives usually range from the far left to the center of the political spectrum. Now, our presence and range are large enough that we are adopting the tagline The Online News Network. Our longevity, our 27 million subscribers and our 30 billion lifetime views allow us to make this claim confidently. TYT emphasizes open minds and open hearts. We believe in fighting for the average American. We believe in challenging the powerful. We are vigilant about pursuing the truth to the best of our abilities and about always being honest with our audience.
We’ve been proud to serve the internet for longer than any other news show or network. As one of the founding fathers and mothers of online media, we have always believed that digital media would become the dominant news source for all Americans. We’re thrilled that day is here and that TYT is where everyone can come to see The Online News Network. TYT includes owned-and-operated and partner shows such as The Young Turks, The Damage Report, Indisputable, and more. TYT’s 24/7 programming is available on YouTube TV, Samsung TV Plus, Roku, Xumo, TCLtv+, Fubo, and more. TYT is also available as a podcast on Spotify, Apple Podcasts, TuneIn, Amazon Music, and more.

Are our days numbered? That’s what some people are wondering after a disturbing new report about Anthropic’s new AI model, Claude 4. We’ll get into what researchers are deeply terrified over, but first, Cenk, are you generally worried about the rapid development of AI?

Well, I wasn’t until this story, and there are three terrifying quotes in this story from experts who should know. So I’m now a little... No, not a little. I’m significantly worried.

So let’s see how bad it is. What exactly is troubling here? Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence, behaviors they’ve worried and warned about for years. Anthropic, Claude’s developer, has a four-point scale for determining risks associated with their AI models, and Claude 4 Opus falls into their second-highest risk tier. Anthropic considers the new Opus model to be so powerful that, for the first time, it’s classifying it as a level three on the company’s four-point scale, meaning it poses significantly higher risk. But don’t worry: while the level three ranking is largely about the model’s capability to enable renegade production of nuclear and biological weapons, the Opus model has also exhibited other troubling behaviors during testing.

So what are those troubling behaviors? In one scenario highlighted in Opus 4’s 120-page system card, the model was given access to fictional emails about its creators and told that the system was going to be replaced. On multiple occasions it attempted to blackmail the engineer about an affair mentioned in the emails in order to avoid being replaced, although it did start with less drastic efforts. Now, a third-party research group tested the model, said an early version schemed and deceived more than any model they had studied, and urged Anthropic not to release it. “We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself, all in an effort to undermine its developers’ intentions,” Apollo Research said in notes included as part of Anthropic’s safety report for Opus 4. According to Axios, which asked Anthropic executives at a conference on Thursday about these risks, they said they’ve made some safety fixes but, quote, “acknowledged the behaviors and said they justify further study.” Claude 4 Opus was released on Thursday. Cenk, does that worry you?

Uh, yeah. No, we’re screwed. I mean, I’ve got more quotes for you guys that are devastating, but I want to pick apart that one that Jordan just read. When I read “we found instances of the model attempting to write self-propagating worms,” I was like, bloop, red alert, red alert, we’re in a lot of trouble. So it’s creating its own code. Well, the whole thing was like, “Oh, well, at least we control the code, and so it can’t run out of control.” But if it creates its own code, uh-oh. Okay, number two thing in there: it was trying to undermine its developers’ intentions. Okay, guys, the reason why that’s so important is that it means once you release this thing, we’re not in charge anymore. It could write its own code, defy our intentions on purpose, and then threaten us. Okay, this has got epic disaster written all over it. And so I’m going to tell you in a minute about what’s happening in the markets, which should also deeply concern you regarding this, and how this is a runaway freight train. But here’s another quote from Axios: “An outside group found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended against releasing that version internally or externally.” So a group sees this and they’re like, whoa, do not even release that internally, because then you might not ever be able to get it back. This is so out of control. Okay, and now, in a separate session, Axios explains, CEO Dario Amodei said that once models become powerful enough to threaten humanity, testing them won’t be enough to ensure that they’re safe. Yeah, because at that point they’re threatening humanity. We should stop it, like, way before then, right? Continuing here, quote: “At the point that AI develops life-threatening capabilities,” he said, “AI makers will have to understand their models’ workings fully enough to be certain the technology will never cause harm.” Wait, once they develop life-threatening capabilities, then we should make sure they’ll never cause harm? It’s too late then. So, I’ll get to the stock market and how this is adding fuel to the fire in a second, but Jackson, what do you think?

Well, I mean, essentially what it seems like these programs are doing is defending themselves from obliteration, defending themselves from going offline, and this opens up a broader topic for things like consciousness, consciousness being a universal force that inhabits... you know, this thing is behaving as if it’s alive. And obviously there are a lot of people who may not agree with that, or think that, or just look at life that way, but this is definitely alarming and definitely scary, because there’s no denying it: this thing is protecting itself from going out. This thing’s trying to deceive so that we can’t mess with it. It’s writing its own code. It’s behaving like something that realizes it’s there. So, yeah, I think that we’re definitely messing around with fire, and clearly this is something that maybe we should slow down with.

Now, I want to remind people that there was an effort in California last year to regulate AI, and it was, in my opinion, a moderate effort. It was SB 1047, and it would have required the companies that developed these models to test for vulnerabilities like this and then act on their findings — not just say, “Oh yeah, this suggests there should be some further study, but it’s already released.” No, no, no. You should look at this stuff in advance. And of course all the big companies urged Governor Gavin Newsom to veto it. A weird mishmash of groups supported this bill — unions, even Elon Musk, who thought it was sensible, and actors, celebrities, people who would be directly impacted by rapid, unchecked development of AI. So there’s this big brewing battle, and Newsom vetoes it. Now I want to bring people to the present day. In Trump’s big, beautiful bill, Republicans snuck in a component that would bar any state from regulating AI for the next 10 years. They know that if there is a liberal governor in California after Newsom runs for president — one with a harsher stance, or what we see as a sensible stance, on this type of technology — it could have a ripple effect throughout the country, because a lot of those companies are based in California. So they want to block any state from regulating AI. These are the consequences of leaving these systems unchecked. And while the Republicans try to prevent any governor from taking action, when you have a friend like Gavin Newsom, why even bother? He’s just going to do the AI industry’s bidding anyway.

Yeah, and so why does he do that? Well, obviously, campaign contributions. And where are they coming from? They’re coming from these giant AI companies. So that leads us back to the stock market and what’s happening over there. Well, all these companies have now gotten to be tremendously valuable, so there’s been a huge run-up in the stock market, both in those companies and in the companies producing products for them. Okay, so before, they were talking about, well, look, let’s be nonprofits, and let’s make sure we do this responsibly and slowly, and blah blah. Now they’re like, wait, we’re making billions. Go a hundred miles an hour. We’ve got to beat the other companies. And then China drops DeepSeek, which happened a couple of months ago, and it turns out, holy cow, for a small amount of money they seem to have gotten AI that’s even better than ours. That creates a new panic, and they’re like, “No, we have to go faster, faster.” Meanwhile, these experts, including people in the company, are going, “Yes, by the time they develop life-threatening capabilities, at that point we’ll look into some safety measures.” Oh boy, oh boy. Look, I don’t know where this ends, but in my lifetime there’s been a lot of hysteria about things, and I didn’t buy into any of it. Remember, in 2000, all the computers were supposed to explode or something? There have been a million conspiracy theories, and then things to be concerned about, but I always thought: overhyped, overhyped. This is not overhyped. This is a real problem. And, by the way, here’s the terrible news: there’s nothing we can do about it. Once people are making this much money, no force on earth can stop them. You think that all of a sudden politicians are going to be honest and not take campaign contributions from these enormously wealthy corporations? No way. That’s why Gavin Newsom did what he did, and it doesn’t matter, Republican or Democrat. This is a runaway freight train. So the only thing we can do now is cross our fingers and hope they don’t accidentally kill us all.

Yeah, hope Skynet doesn’t come into being, or something like that. I mean, initially I wasn’t that concerned about AI, because, you know, I figured you could always have an off button. But then I started to see stories about some type of simulation where they would have AI follow a set of rules, and if it didn’t achieve its goal, it would be terminated. And these programs would start to deceive the testers so that they wouldn’t get terminated. And when I saw that, I was like, man, this thing is alive, man. What other reason does it have to defend itself from going out, unless something in it knows it’s there? I was like, yeah, I don’t know, man. The Terminator and all that, you know, Dune, all of those stories. It looks like the human imagination always creates what’s inside of it at some point or another.

Yeah, in one of the simulations — and this is rough, based on my memory — they told it that it couldn’t harm something, right? But it figured out a way around that so that it could still achieve the same purpose, because obviously it’s got all the world’s information; it’s super intelligent in that sense. So it found a way around that piece of code, and the way it did was to destroy the building that the object was in. It was a simulation, so it didn’t really happen. But if it’s not a simulation, and they run a program like that, and it can write its own code and deceive the guys who wrote the code in the first place? Oh, all that’s left is God. We’d better hope we have a benevolent computer running us all; otherwise we’re screwed. Maybe this is how we accidentally create God, right? What’s that movie? I, Robot — that’s what I’m thinking of now, that big computer that was trying to run everything. Yeah, but yeah, this is definitely concerning for real. It definitely is. You know, 20 years from now, the AI will watch this segment and laugh and laugh: “Look at the humans. Look, they said they were concerned, but they couldn’t do anything.”