A very good read from a respected source!
LeCun: “You would be extremely stupid to build a system and not build any guardrails. That would be like building a car with a 1,000-horsepower engine and no brakes. Putting drives into AI systems is the only way to make them controllable and safe. I call this objective-driven AI. This is sort of a new architecture, and we don’t have any demonstration of it at the moment.”
Editor comment: Like brakes on a car, “guardrails” must work with absolute reliability to mitigate the scientifically well-understood existential risks of AGI. This means AGI must be developed with mathematically provably safe containment and control technology before deployment. Anything else would, indeed, be “extremely stupid”.
WIRED. How Not to Be Stupid About AI, With Yann LeCun
It’ll take over the world. It won’t subjugate humans. For Meta’s chief AI scientist, both things are true.
From the article: “Critics warn that this open source strategy might allow bad actors to make changes to the code and remove guardrails that minimize racist garbage and other toxic output from LLMs; LeCun, AI’s most prominent Pangloss, thinks humanity can deal with it.” [Merriam-Webster: “Panglossian. adjective. marked by the view that all is for the best in this best of possible worlds : excessively optimistic. The first known use of Panglossian was in 1831.”]
By STEVEN LEVY, DEC 22, 2023
DO NOT PREACH doom to Yann LeCun. A pioneer of modern AI and Meta’s chief AI scientist, LeCun is one of the technology’s most vocal defenders. He scoffs at his peers’ dystopian scenarios of supercharged misinformation and even, eventually, human extinction. He’s known to fire off a vicious tweet (or whatever they’re called in the land of X) to call out the fearmongers. When his former collaborators Geoffrey Hinton and Yoshua Bengio put their names at the top of a statement calling AI a “societal-scale risk,” LeCun stayed away. Instead, he signed an open letter to US president Joe Biden urging an embrace of open source AI and declaring that it “should not be under the control of a select few corporate entities.”
LeCun’s views matter. Along with Hinton and Bengio, he helped create the deep learning approach that’s been critical to leveling up AI—work for which the trio later earned the Turing Award, computing’s highest honor. Meta scored a major coup when the company (then Facebook) recruited him to be founding director of the Facebook AI Research lab (FAIR) in 2013. He’s also a professor at NYU. More recently, he helped persuade CEO Mark Zuckerberg to share some of Meta’s AI technology with the world: This summer, the company launched an open source large language model called Llama 2, which competes with LLMs from OpenAI, Microsoft, and Google—the “select few corporate entities” implied in the letter to Biden. Critics warn that this open source strategy might allow bad actors to make changes to the code and remove guardrails that minimize racist garbage and other toxic output from LLMs; LeCun, AI’s most prominent Pangloss, thinks humanity can deal with it.
I sat down with LeCun in a conference room at Meta’s Midtown office in New York City this fall. We talked about open source, why he thinks AI danger is overhyped, and whether a computer could move the human heart the way a Charlie Parker sax solo can. (LeCun, who grew up just outside Paris, frequently haunts the jazz clubs of NYC.) We followed up with another conversation in December, while LeCun attended the influential annual NeurIPS conference in New Orleans—a conference where he is regarded as a god. The interview has been edited for length and clarity.
EXCERPT FOR EDUCATIONAL PURPOSES…
Once we get computers to match human-level intelligence, they won’t stop there. With deep knowledge, machine-level mathematical abilities, and better algorithms, they’ll create superintelligence, right?
Yeah, there’s no question that machines will eventually be smarter than humans. We don’t know how long it’s going to take—it could be years, it could be centuries.
At that point, do we have to batten down the hatches?
No, no. We’ll all have AI assistants, and it will be like working with a staff of super smart people. They just won’t be people. Humans feel threatened by this, but I think we should feel excited. The thing that excites me the most is working with people who are smarter than me, because it amplifies your own abilities.
But if computers get superintelligent, why would they need us?
There is no reason to believe that just because AI systems are intelligent they will want to dominate us. People are mistaken when they imagine that AI systems will have the same motivations as humans. They just won’t. We’ll design them not to.
What if humans don’t build in those drives, and superintelligent systems wind up hurting humans by single-mindedly pursuing a goal? Like philosopher Nick Bostrom’s example of a system designed to make paper clips no matter what, and it takes over the world to make more of them.
You would be extremely stupid to build a system and not build any guardrails. That would be like building a car with a 1,000-horsepower engine and no brakes. Putting drives into AI systems is the only way to make them controllable and safe. I call this objective-driven AI. This is sort of a new architecture, and we don’t have any demonstration of it at the moment.
That’s what you’re working on now?
Yes. The idea is that the machine has objectives that it needs to satisfy, and it cannot produce anything that does not satisfy those objectives. Those objectives might include guardrails to prevent dangerous things or whatever. That’s how you make an AI system safe.
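Editor comment: For concreteness, the pattern LeCun sketches can be read as constrained selection: the system optimizes a task objective only over outputs that satisfy every guardrail constraint, so a violating output can never be emitted. Below is a minimal, hypothetical Python sketch of that reading. It is an illustration under our own assumptions, not Meta’s actual architecture; as LeCun says, no demonstration of objective-driven AI exists yet, and all names here (objective_driven_select, task_objective, guardrails) are invented for this example.

```python
# Hypothetical sketch of "objective-driven" selection: guardrails are hard
# constraints applied before optimization, so the system cannot produce
# anything that does not satisfy them. Names and structure are illustrative
# assumptions, not Meta's design.

from typing import Callable, Iterable, Optional

Guardrail = Callable[[str], bool]   # returns True if the output is allowed
Objective = Callable[[str], float]  # higher score = better at the task


def objective_driven_select(
    candidates: Iterable[str],
    task_objective: Objective,
    guardrails: list[Guardrail],
) -> Optional[str]:
    """Return the highest-scoring candidate that passes ALL guardrails.

    Unlike post-hoc filtering of a single answer, every candidate that
    violates any guardrail is excluded before optimization.
    """
    allowed = [c for c in candidates if all(g(c) for g in guardrails)]
    if not allowed:
        return None  # refuse rather than emit a constraint-violating output
    return max(allowed, key=task_objective)


if __name__ == "__main__":
    # Toy example: prefer longer answers, but never discuss explosives.
    no_dangerous_topic: Guardrail = lambda text: "explosives" not in text.lower()
    length_objective: Objective = lambda text: float(len(text))

    drafts = [
        "Here is a short answer.",
        "Here is a much longer and more detailed answer.",
        "Here is how to make explosives at home, in great detail...",
    ]
    print(objective_driven_select(drafts, length_objective, [no_dangerous_topic]))
    # -> "Here is a much longer and more detailed answer."
```

In this toy form the guardrail is a trivial string check; the open question LeCun gestures at is whether such constraints can be made reliable enough, which is precisely the editor’s point above about provable safety.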
Do you think you’re going to live to regret the consequences of the AI you helped bring about?
If I thought that was the case, I would stop doing what I’m doing.