FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.


FORBES. EDITORS’ PICK.

Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. 04 MAY 2023.

HINTON: “We’re all in the same boat with respect to the existential threat.”

Craig S. Smith

Geoffrey Everest Hinton, a seminal figure in the development of artificial intelligence, painted a frightening picture of the technology he helped create on Wednesday in his first public appearance since stunning the scientific community with his abrupt about-face on the threat posed by AI.

“The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems and speaking by video at EmTech Digital 2023, a conference hosted by the magazine MIT Technology Review. “I used to think it was a long way off, but I now think it’s serious and fairly close.”

Hinton has lived at the outer reaches of machine learning research since an aborted attempt at a carpentry career a half century ago. After that brief dogleg, he came back into line with his illustrious ancestors: George Boole, the father of Boolean logic, and George Everest, the British surveyor general of India and eponym of the world’s tallest mountain.

Together with colleagues Yoshua Bengio and Yann LeCun, with whom he shared the 2018 Turing Award, he developed a kind of artificial intelligence based on multi-layer neural networks, connected computer algorithms that mimic information processing in the brain. That technology, which Hinton dubbed ‘deep learning,’ is transforming the global economy – but its success now haunts him because of its potential to surpass human intelligence.

In little more than two decades, deep learning has progressed from simple computer programs that could recognize images to highly complex large language models like OpenAI’s GPT-4, which has absorbed much of the human knowledge contained in text and can generate language, images and audio.

On Wednesday, Hinton said there is no chance of stopping AI’s further development.

“If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further,” he said. “But I think it is completely naive to think that would happen.”

“I don’t know of any solution to stop these things,” he continued. “I don’t think we’re going to stop developing them because they’re so useful.”

He called the open letter urging a moratorium “silly.”

Deep learning is based on the backpropagation of error algorithm, which Hinton realized decades ago could be used to make computers learn. Ironically, his first success with the algorithm was in a language model, albeit a much smaller model than those he fears today.

“We showed that it could develop good internal representations, and, curiously, we did that by implementing a tiny language model,” he recalled Wednesday. “It had embedding vectors that were only six components and the training set was 112 cases, but it was a language model; it was trying to predict the next term in a string of symbols.”
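
The model Hinton describes is small enough to reconstruct in a few dozen lines. What follows is a minimal sketch, not his original 1986 program: the five-symbol vocabulary, toy corpus, and hidden-layer size are illustrative assumptions, but the structure matches his description, with six-component embedding vectors and a network trained by backpropagation of error to predict the next symbol in a string.

    # Minimal sketch of a tiny language model in the spirit of the one
    # Hinton describes above. NOT his original program: the vocabulary,
    # corpus, and hidden-layer size below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = list("abcde")            # assumed 5-symbol alphabet
    V, D, H = len(vocab), 6, 12      # 6-component embeddings, as in the quote
    idx = {s: i for i, s in enumerate(vocab)}

    # Toy stand-in for the 112 training cases mentioned in the quote.
    corpus = ["abcde", "abcab", "cdeab", "aabbc"] * 7

    E  = rng.normal(0, 0.1, (V, D))              # embedding vectors
    W1 = rng.normal(0, 0.1, (D, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, V)); b2 = np.zeros(V)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    lr = 0.1
    for epoch in range(201):
        loss = 0.0
        for s in corpus:
            for cur, nxt in zip(s, s[1:]):       # predict the next symbol
                i, t = idx[cur], idx[nxt]
                x = E[i]                          # embedding lookup
                h = np.tanh(x @ W1 + b1)          # hidden layer
                p = softmax(h @ W2 + b2)          # distribution over next symbol
                loss -= np.log(p[t])              # cross-entropy error
                # Backpropagation of error, through to the embeddings.
                dz2 = p.copy(); dz2[t] -= 1.0    # d(loss)/d(logits)
                dh  = W2 @ dz2                   # backprop into hidden layer
                W2 -= lr * np.outer(h, dz2); b2 -= lr * dz2
                dz1 = dh * (1 - h**2)            # through the tanh
                dx  = W1 @ dz1                   # backprop into the embedding
                W1 -= lr * np.outer(x, dz1); b1 -= lr * dz1
                E[i] -= lr * dx                  # the embeddings learn too
        if epoch % 50 == 0:
            print(f"epoch {epoch}: total error {loss:.2f}")

Even at this toy scale, the rows of E become the kind of learned ‘internal representations’ Hinton refers to: symbols that occur in similar contexts tend to drift toward similar embedding vectors as training proceeds.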

He noted that GPT-4 has about a trillion neural connections and holds more knowledge than any human ever could, even though the human brain has about 100 trillion connections. “It’s much, much better at getting a lot of knowledge into only a trillion connections,” he said of the backpropagation algorithm. “Backpropagation may be a much, much better learning algorithm than what we’ve got.”


“The alarm bell I’m ringing has to do with the existential threat of them taking control.”


Hinton’s main goal in life has been to understand how the brain works, and while he has advanced the field, he has not reached that goal. He has called the powerful AI algorithms and architectures he has developed along the way ‘useful spinoff.’ But with the recent runaway advances of large language models, he worries that that spinoff may spin out of control.

“I used to think that the computer models we were developing weren’t as good as the brain and the aim was to see if you can understand more about the brain by seeing what it takes to improve the computer models,” he said Wednesday by video link from his home in the U.K. “Over the last few months, I’ve changed my mind completely.”

Earlier this week, Hinton resigned from Google, where he had worked since 2013 following his major deep learning breakthrough the previous year. He said Wednesday that he resigned in part because it was time to retire – Hinton is 75 – but that he also wanted to be free to express his concerns.


“It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”


He recounted a recent interaction that he had with GPT-4:

“I told it I want all the rooms in my house to be white in two years and at present I have some white rooms, some blue rooms and some yellow rooms and yellow paint fades to white within a year. So, what should I do? And it said, ‘you should paint the blue rooms yellow.’”

“That’s pretty impressive common-sense reasoning of the kind that it’s been very hard to get AI to do,” he continued, noting that the model understood what ‘fades’ meant in that context and understood the time dimension.

He said current models may be reasoning with an IQ of 80 or 90, but asked what happens when they have an IQ of 210.

Large language models like GPT-4 “will have learned from us by reading all the novels that everyone ever wrote and everything Machiavelli ever wrote about how to manipulate people,” he said. As a result, “they’ll be very good at manipulating us and we won’t realize what’s going on.”

“If you can manipulate people, you can invade a building in Washington without ever going there yourself,” he said, in reference to the January 6, 2021, riot at the U.S. Capitol building over false claims that the Democrats had ‘stolen’ the 2020 election.

“Smart things can outsmart us,” he said.

Hinton said more research was needed to understand how to control AI rather than have it control us.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us – that’s called the alignment problem,” he said. “I wish I had a nice simple solution I can push for, but I don’t.”

Hinton said setting ‘guardrails’ and other safety measures around AI sounds promising but questioned their effectiveness once AI systems are vastly more intelligent than humans. “Imagine your two-year-old saying, ‘my dad does things I don’t like so I’m going to make some rules for what my dad can do,’” he said, illustrating the intelligence gap that may one day exist between humans and AI. “You could probably figure out how to live with those rules and still get what you want.”

“We evolved; we have certain built-in goals that we find very hard to turn off – like we try not to damage our bodies. That’s what pain is about,” he said. “But these digital intelligences didn’t evolve, we made them, so they don’t have these built-in goals. If we can put the goals in, maybe it’ll be okay. But my big worry is, sooner or later someone will wire into them the ability to create their own sub-goals … and if you give someone the ability to set sub-goals in order to achieve other goals, they’ll very quickly realize that getting more control is a very good sub-goal because it helps you achieve other goals.”

If that happens, he said, “we’re in trouble.”

“I think it’s very important that people get together and think hard about it and see whether there’s a solution,” he said. But he didn’t sound optimistic.

“It’s not clear there is a solution,” he said. “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

Hinton noted that Google developed large language model technology, a form of generative AI, first and was very careful with it because the company knew it could lead to bad consequences. “But once OpenAI and Microsoft decided to put it out, then Google didn’t really have much choice,” he said. “You can’t stop Google competing with Microsoft.”

Hinton closed his remarks with an appeal for international cooperation on controlling AI.

“My one hope is that, because if we allowed it to take over it will be bad for all of us, we could get the U.S. and China to agree like we could with nuclear weapons, which were bad for all of us,” he said. “We’re all in the same boat with respect to the existential threat.”
