Level 4: AI Innovators. AI that can aid humans in invention and/or generate discoveries autonomously.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Sam Altman, the CEO of OpenAI, sits down to talk with Y Combinator president and CEO Garry Tan. It's a fascinating discussion and I recommend everybody watch it, but in this video let me show you the few things that really jumped out at me. I've got to give credit to Garry Tan here: he is probably one of the best interviewers to sit down with Sam Altman so far, because he knows what he's talking about, he isn't pushing his own agenda, and he's trying to get Sam's opinions on things instead of pushing whatever beliefs he has. They also both obviously know quite a bit about scaling tech startups, so there's certainly a lot in common there.

A big theme of this discussion is the question of what happens when machines start thinking. They note early on Sam Altman's prediction that superintelligence might only be thousands of days away. Thousands. That's pretty wild, and it's a bold claim, and certainly many people are still skeptical about it. But if you look at OpenAI's journey, it's been a fascinating ride. Starting with Sam Altman and Elon Musk, who was one of the original founders of the company, they transformed from a tiny startup, almost a joke to some people in the industry, into the driving force behind so much of what's happening in AI right now.

There's one key theme you'll notice throughout the interview: OpenAI's unwavering belief and conviction in their approach, even when pretty much everyone else was doubting them. This, by the way, reminds me of Geoffrey Hinton, sometimes referred to as the godfather of AI, who even earlier held the same kind of unwavering conviction when everybody was doubting him. In his case, he believed for decades that neural nets were one of the keys to unlocking artificial intelligence, while most people in the industry, the researchers working on AI, thought it was a dead end. He stuck with his belief, and that is one of the reasons he's called the godfather of AI today.

When OpenAI was just starting out, some of the other AI experts and people in the industry would refer to them as "irresponsible young kids," according to Sam Altman, as if they were messing around with something they didn't really understand. That was because, from the beginning, they specified that their goal was to build AGI, artificial general intelligence: a machine with, so to speak, the same sort of intelligence and abilities as an average human. We don't have a really nailed-down definition of AGI, but OpenAI, for example, defines it as roughly the point where artificial intelligence can do most of the economically valuable work that humans can do.

When they started talking about building AGI, the AI landscape was very different. ImageNet had just come out and was impressive for image recognition, but that era of AI was mostly about classifying things: could it tell a cat from a dog, for example? OpenAI was looking way beyond that. They were betting big on deep learning and on scaling up compute power; that was their path to something truly groundbreaking. A lot of people thought they were crazy for pouring that much money and those resources into just one approach; it seemed reckless. The conventional wisdom was to diversify, to hedge your bets, but OpenAI went all in, and you can argue that laser focus is what got them to where they are today.
It's like a classic startup story: take a huge risk, a huge bet, and maybe you fail spectacularly, but if you win, you win big. Speaking of winning big, the interview highlights how crucial access to all that computing power was for OpenAI, something most startups could only dream of, especially back in those days. This was another criticism leveled against OpenAI: people saw it as wasteful, throwing all those resources at what seemed like a very long shot. But Altman, looking back, talks about how most people still underestimate the value of that extreme conviction. It's a powerful lesson, really, not just for AI but for anything: focusing your energy instead of trying to do everything at once can make all the difference. It makes you think, even outside of AI, how often do we hold back from really going all in on an idea, even if we believe in it deep down? It's human nature to play it safe, but sometimes you need that unwavering faith in your vision, especially when you're trying to do something truly innovative.

Interestingly, and somewhat unrelated to this, Peter Thiel, one of the people who became a billionaire in the technological revolution as part of the PayPal Mafia, pointed out that a lot of the best tech founders happen to be on the spectrum. They might have Asperger's, and they tend to stick to their convictions regardless of what other people think. One of their strengths is the ability to get punched in the face, get back up, and keep going, to pursue their weird thing that everyone else is telling them to forget about. So when they win, they win big.

The point here is that OpenAI had the conviction and they had the computing power, but here's where it gets really interesting: OpenAI did not see large language models like ChatGPT as the key to AGI, not at the time. They were focused on robotics, game-playing AI, all sorts of things like that. There's this amazing anecdote about Alec Radford, one of the key researchers at OpenAI. By the way, this person seems like kind of a big deal; he gets referred to as an incredible AI researcher, although I don't know if I've seen many interviews with him or even pictures of him. He was experimenting with Amazon reviews, just playing around, testing things out, and he stumbled on this "unsupervised sentiment neuron." That sounds pretty technical, so what is it? Basically, it was a neuron in their neural network, which was processing these Amazon reviews, that turned out to understand the sentiment of a review: was this a positive review of the product or a negative one? And it was able to do so without being explicitly told how; it figured it out on its own. It was a tiny detail, almost an accident, but that tiny detail sparked the whole GPT series; it was a foundational discovery. And as you know, GPTs have completely changed the game for AI.

By the way, Dr. Jim Fan from Nvidia has talked about this idea in a post of his own: these models aren't explicitly taught how to do certain things; they learn implicitly, they figure things out on their own. I tend to think of it like a kid. You tell them certain things, "don't do this, don't do that"; that's teaching them explicitly. But as you're probably aware, kids also learn implicitly: they observe what you're doing, how you're behaving, how other people are behaving, and they make their own judgments about the world around them. They're learning implicitly, and that's what Alec Radford discovered with this unsupervised sentiment neuron: it had learned, implicitly, to tell the good review from the bad, the positive from the negative.
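To make that idea concrete, here's a minimal sketch of the kind of probe that surfaces a finding like the sentiment neuron: run an unsupervised model over text, then check whether any single hidden unit happens to separate positive from negative examples on its own. Everything below is synthetic and illustrative, not OpenAI's actual setup; the signal in unit 42 is planted purely so the probe has something to find.

```python
# Toy probe for a "sentiment neuron": the model was never trained on
# sentiment labels, so we check whether any single hidden unit happens
# to track sentiment anyway. Hidden states here are synthetic stand-ins,
# not real activations from a trained model.
import numpy as np

rng = np.random.default_rng(0)
n_reviews, n_units = 1000, 64

# Pretend these are per-review hidden states from an unsupervised
# next-character model run over Amazon reviews.
hidden = rng.normal(size=(n_reviews, n_units))

# Ground-truth sentiment, used ONLY for evaluation, never for training.
sentiment = rng.integers(0, 2, size=n_reviews)

# Plant a signal in unit 42 to mimic what Radford stumbled on: one unit
# whose activation tracks sentiment even though nobody asked for it.
hidden[:, 42] += 3.0 * (sentiment * 2 - 1)

# Probe every unit individually: how often does "activation > 0"
# agree with the true sentiment label?
accuracies = ((hidden > 0).astype(int) == sentiment[:, None]).mean(axis=0)
best = int(np.argmax(accuracies))
print(f"best single unit: {best}, accuracy: {accuracies[best]:.2f}")
```

In the real result, a probe along these lines turned up a single unit in a model trained only to predict the next character of review text, which is exactly why it was such a surprise.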
It's kind of mind-blowing that this one little thing, a neuron figuring out feelings in Amazon reviews, could lead to something as massive as the GPT architecture. It goes to show you never know where the next spark of brilliance will come from.

And that little spark led to some seriously disruptive stuff. When GPT-3 was released, everyone thought it was impressive, but it didn't really have the clear business uses that made GPT-4 blow up and scale rapidly, at least on the enterprise side of things. GPT-3 was more like, "look what it can do, cool," whereas GPT-4 was more like, "look what you can do with it, look what you can build with it." Huge difference. They also mention Jake Heller, who used GPT-4 to build a legal tech company, a tech company for the legal field, that was later acquired for $650 million. It wasn't just about cool demos; it was about real-world impact. And this is where things started to move really fast. That's where we see the shift from level-two systems, in OpenAI's framing of the levels of AI, systems that can reason and solve problems, to level three: systems that can function as agents, handling complex tasks. GPT-4 and all the things being built with it are really showcasing that leap.

In the interview, Altman describes the five levels of AGI, a framework put forth by OpenAI to give us an idea of where we are in AI development. Level one is basic AI; we've had that for years. Level two is where things get interesting: AI that can reason, solve problems, and understand concepts. Then we have level three, the agent level, probably best highlighted by the o1 model, the latest release, which is able to think through its tasks before acting and seems to have more of the agentic capabilities needed to complete long-term tasks. Level three is about AI that can not only think but act independently: make decisions, complete complex tasks. Imagine a personal assistant that can not only schedule your appointments but could also negotiate a better deal on your behalf, or even manage your investments based on your personal goals. That's the kind of agency we're talking about. Next we have level four, which is what Altman calls the innovator: AI that can not only perform tasks but drive innovation, coming up with new ideas, making scientific discoveries, basically pushing the boundaries of human ingenuity. And level five is the big one: company-scale intelligence, essentially AGI that can operate at the level of a large, complex organization. We're talking about AI that could potentially run a whole company, manage resources, and make strategic decisions. That's the idea, though it's still very much in the realm of speculation.

What's interesting is that Altman seems to think we'll reach level five sooner than most people expect, and there's actually some evidence to back this up. It comes from a surprising place: the OpenAI hackathon hosted by Y Combinator. At this hackathon, a startup actually used GPT-4 to design an airfoil, like the wing of an aircraft, a functional airfoil that could actually lift and maneuver.
So we're not talking about writing marketing copy or summarizing legal documents here; this is AI designing a complex piece of technology, something that would normally take years of specialized engineering expertise. It really does suggest we might be closer to those innovator, level-four systems than we realize. It's this kind of rapid progress, the leap from writing code to actually designing physical objects, that has people like Altman, and likely Garry Tan, feeling pretty optimistic about the timeline for reaching those higher levels of AGI.

Hearing about this OpenAI hackathon definitely got me thinking about those innovator, level-four systems. Seeing GPT-4 used to design a working airfoil is a huge leap forward, and it makes you wonder what's coming next. What happens when AI can not only perform tasks but actually start driving innovation in all sorts of fields? We're definitely seeing more and more of that, for example from Google DeepMind with their designed proteins, as well as AI models that are beginning to design computer chips, the very chips that AI runs on.

They talk about some areas where this innovator-level AI could really have a big impact. Scientific research, for one: we can use AI to analyze massive data sets, identify patterns that human researchers might miss, and come up with new hypotheses. Imagine, for example, an AI that could sift through mountains of data on climate change and pinpoint the most critical factors, or suggest solutions we haven't even thought of. That could be a game changer, especially for those really complex problems that have stumped scientists for decades. And it's not just about analyzing data; AI could also play a role in designing experiments, running simulations, even controlling lab equipment. It becomes a partner in the scientific process. I'm picturing a future where AI isn't just a tool for scientists but a collaborator, helping to accelerate the pace of discovery and innovation. It highlights how important it is to be adaptable, to embrace these new technologies and find ways to work with them, not against them. The future belongs to those who can ride this wave of AI, learn new skills, and find new ways to integrate AI into their work, their businesses, and their lives in a positive and productive way. This interview points to the fact that we might be on the verge of a major societal shift. We're entering an era where AI could help us tackle challenges we always thought were too big or too complex to solve, unleash new levels of creativity, and unlock human potential in ways we can barely imagine.

Now, of course, alongside all this excitement and optimism there are fields like AI safety and ethics that try to mitigate some of the risks from AI. As we develop these increasingly powerful systems, we need to make sure they align with our values and that we have safeguards in place to prevent unintended consequences. A number of people working in this field have left OpenAI. I recently watched an interview with Miles Brundage, an OpenAI researcher working specifically on safety who recently left, and he's not necessarily saying that what OpenAI is doing is bad or unsafe, but he is saying that, as a whole, we really need to figure out how to put brakes on AI development. He says you wouldn't drive a car without brakes; you wouldn't say, well, let's just accelerate, and if we see something bad in front of us, then we'll figure out how to stop the car.
He's saying we need to install the brakes now, so that when we run into dangerous territory we can rapidly decelerate, and that we don't really have that in place today. And of course we've had some people who are more critical of OpenAI specifically, pointing the finger at them as being more profit-driven, more driven by market incentives, than focused first and foremost on AI safety. Anthropic has been seen as kind of a sister company to OpenAI that has been focusing on safety a lot more. Recently, though, they joined forces with Palantir, a security and military AI company that works closely with the US government, among other nations, and a lot of people aren't sure how to feel about that.

Besides that airfoil, the OpenAI hackathon had some other really interesting aha moments that probably made people think, whoa, the future is closer than we thought. Again, with the airfoil, the GPT system was able to iteratively improve on it; it basically kept engineering and refining it, starting from scratch and turning it into something highly efficient, which certainly seems like something you could describe as an innovator-level AI system, maybe a proto level-four system. But another team at that hackathon used GPT-4 to develop a new encryption algorithm. By the way, I'm not sure whether they specified GPT-4; it might have been the o1 model. I'm not 100% sure, and I apologize if I missed that detail, but they were using OpenAI's models, and one team used them to develop a new encryption algorithm, so we're talking about cybersecurity. Apparently this algorithm was so complex, so different from anything we've seen before, that even experienced cryptographers were stumped; they could not crack it. The model just came up with this thing seemingly out of nowhere, and it's kind of unsettling, to be honest. If AI can create encryption that's beyond our ability to break, what does that mean for cybersecurity in the future?

If you recall the initial Q* leak, one of the rumors swirling around was that this thing could potentially break any known encryption. That doesn't seem likely looking back at it; it might have been a fake rumor that was spread. But there is this idea that sufficiently advanced AI could potentially break the entire encryption system by figuring out some shortcut we don't know about, some math we haven't gotten to yet. A lot of the encryption we have right now relies on the fact that every time you add a letter or a digit, every time you add a character, the difficulty of guessing that password or key grows exponentially. For example, if you have a password that's seven characters, a brute-force attack might crack it in milliseconds. At eight characters, that goes up to around five hours, a massive jump. Nine characters goes up to around five days. Notice how adding just one character increases the time, the complexity, exponentially: ten characters takes months, eleven characters would take a decade, and adding just one more, twelve characters, would take two centuries. So even as computers get better, faster, and smarter, adding one character massively increases the complexity.
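Here's the back-of-the-envelope version of that math. The two inputs below are illustrative assumptions, a 94-symbol printable-ASCII alphabet and a hypothetical attacker making ten billion guesses per second; the specific crack times people quote, like the ones above, depend entirely on those two numbers, but the exponential shape does not.

```python
# Back-of-the-envelope brute-force math: each extra character multiplies
# the search space by the alphabet size, so crack time grows exponentially.
# Both constants below are illustrative assumptions, not measurements.
SYMBOLS = 94                 # printable ASCII: letters, digits, punctuation
GUESSES_PER_SECOND = 1e10    # hypothetical well-funded attacker
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in range(7, 13):
    combos = SYMBOLS ** length
    years = combos / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length} chars: {combos:.1e} combinations, ~{years:,.4f} years")
```

Each added character multiplies the work by 94, which is why the jump from one row to the next dwarfs any realistic improvement in the attacker's hardware.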
But there's this idea of P versus NP, which basically asks: what if there's some way of solving the problem that doesn't require this exponential increase in effort? A simplified way of thinking about it: suppose I ask you to tell me whether a number is odd or even. If I start with a one-digit number, like two, you'd tell me it's even. But what if I gave you a number in the trillions? Well, if it ends in two, you'd still say it's even. In other words, increasing the number of digits doesn't make the problem harder; the difficulty stays the same, because we've figured out a little shortcut, a little trick. We don't have to look at the whole number or do any math on the whole number; we just look at the last digit. If it's even, the number is even; if it's odd, the number is odd. Now, that's very simplistic, and I'm really simplifying the P versus NP idea here, but what if a computer figures out that sort of trick for breaking our encryption, our cybersecurity? That would immediately make everything visible: the entire financial system, your bank account, every nation's national security, anything online or connected to the internet in any way. The world is just not ready for that. We have built a lot of our world economy and way of life on the idea that adding an extra digit makes things much harder to solve.
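As a tiny illustration of what such a shortcut means in code: a naive parity check would do arithmetic on the whole number, while the shortcut reads one character, so its cost stays constant no matter how many digits you add.

```python
# Parity shortcut: no need to do arithmetic on the whole number;
# the last decimal digit alone decides even vs. odd, in constant time.
def is_even(decimal_digits: str) -> bool:
    return decimal_digits[-1] in "02468"  # O(1) regardless of length

print(is_even("2"))                       # True
print(is_even("918273645000000000002"))   # True: the same one-digit check
```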
Another great example from the hackathon highlights the positive side of AI: a team used the GPT architecture to develop a system that can analyze satellite imagery and predict the spread of wildfires. If you live in a place like California, you know that wildfires can be a devastating problem, and if we could actually predict how they're going to move, that could save lives. The system was eerily accurate. It used machine learning to identify patterns in the data that human analysts might easily miss, taking into account wind speed, terrain, and vegetation to make really precise predictions. So this isn't even AI just helping us do something; it's actually better than us at certain things, which is kind of humbling to think about. And there are tons of other areas where this predictive power could be useful: predicting natural disasters, managing traffic, maybe even forecasting disease outbreaks. It's opening up a whole new era of problem solving and scientific discovery.

At the same time, there's an article on The Information talking about how the next generation of OpenAI models might be slowing down, how maybe we've reached a sort of peak and from here we'll make incremental gains rather than exponential leaps forward. Some people are suggesting that Sam Altman is trying to hype things up and that the reality is very different. Time will tell. However, it's important to note that the scientists, researchers, and developers, some still at OpenAI and some who have left, so regardless of their affiliation, are largely aligned with what Sam Altman is saying. The people leaving OpenAI aren't saying that AI progress is over and we can just chill; I haven't yet heard one person say that. Most of them are saying this thing is coming fast, it's accelerating, and we need to start thinking more deeply about how we're going to handle it. So if this is hype, that means these AI safety researchers are also hyping it up, and they don't really seem to have an incentive to do that. Again, I'm not 100% sure where this is going to go; time will tell. But certainly the people running these things, Microsoft and Google and so on, are investing more and more money and time, and not just in computing hardware: Microsoft, for example, is restarting an old nuclear power plant at Three Mile Island to generate more electricity for training and running these models. One way of looking at it is that we haven't even begun to scale this thing up yet. Let me know what you think in the comments. If you made it this far, I appreciate you watching; consider subscribing, hit the thumbs up, and I'll see you in the next one.