“I am a human, you know. I have a lovely little one-year-old, Leo. And when I look into his eyes, I feel he’s my son. Of course I should feel more loyal to him than to some random machine.”


Why do you have such a strong human-versus-machine bias towards humans, compared to the AI models?

MAX: Because I am a human, you know. I have a lovely little one-year-old, Leo, and when I look into his eyes I feel he’s my son. Of course I should feel more loyal to him than to some random machine, shouldn’t I? And I feel we humans have collectively figured out how to start building AI, so I think we have the right to have some influence over what future we build. I’m on team human here, not on team machine. I think we want to have the machines work for us, not the other way around.

MAX: Look at all the other technologies that have the potential to cause harm. We have a solution for how we handle all of them, whether it’s airplanes or medicines: we always have safety standards. If AstraZeneca comes and says, “We have a new miracle drug that’s going to cure cancer, and we’re going to start selling it in Europe tomorrow,” the regulator would be like, “Okay, where is your clinical trial? Oh, you haven’t had time to do one? Just come back when you’ve done the clinical trial, and then we’ll see if you meet the standards.” That buys time for everybody involved. The companies now have an incentive to figure out what the impact on society is going to be, what percentage of people are going to get this side effect or that effect. They quantify everything in a very nerdy way, and that way we can trust our biotech. That’s ultimately why AstraZeneca has a pretty good reputation. Same thing with airplanes, same thing with cars, same thing with basically any tech that can cause harm, except for AI: here in the US there’s basically no regulation. If someone wants to release GPT-5 tomorrow, they can. So I think the sooner we switch over to treating AI the way we treat all other powerful tech, the better.

I think a very common misconception is that we somehow have to choose between quickly reaping all sorts of benefits of AI on one hand and avoiding going extinct on the other. The fact is, 99% of the things with AI that the people I talk with are excited about, which I think includes you, are quite harmless. They have nothing to do with building smarter-than-human AI that we don’t know how to control. We can cure diseases, we can help spread knowledge, we can help make companies more efficient, we can make great progress in science and medicine, et cetera. So if we put safety standards in place that just end up slowing down a little bit that last percent, the stuff we might lose control over, then we can really have this age of abundance for a number of years, where we can enjoy revolutions in health care and education and so many other things without having to lose sleep over whether this is all going to blow up on us. And then, if we eventually get to the point where we can see that even more powerful systems meet the safety standards, great. If it takes a little longer, fine. We’re not in any great rush: we can have life flourishing for billions of years if we get this right, so there’s no point in risking squandering everything just to get it one year sooner.

In 1942, Enrico Fermi built the world’s first nuclear reactor in Chicago, under a football stadium. When physicists found out about that, they totally freaked out. Why? Was it because they thought this reactor was really dangerous? No, it was really small, with a low energy output. It’s because they realized we were now only a few years away from the bomb, and three years later Hiroshima and Nagasaki happened. There’s a nice analogy there, because around that time, in 1951, Alan Turing said that one day machines will become as smart as people, and then very quickly they’ll become way smarter than people, because we’re biological computers and there’s no reason machines can’t do much better, and then the default is that we lose control over the machines. But, he said, I’ll give you a little canary-in-the-coal-mine warning so you know when you’re close: the Turing test. Once machines become good enough at language and knowledge that they can fool a lot of people into thinking they are human, that’s when you’re close. That’s the Enrico Fermi moment, when you might have a few years left. And last year Yoshua Bengio, one of the most cited AI researchers in the world, argued that GPT-4 passes the Turing test. You can squabble about whether it’s passed the Turing test or whether it will pass it next year, but we’re roughly at the Enrico Fermi reactor now for AI, so it’s high time to take seriously that big things are going to happen soon. Let’s get it right; let’s prepare.