BBC NEWS. AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google. 02 MAY
FOR EDUCATIONAL PURPOSES
A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.
Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.
And in a BBC interview on Monday, he said: “I can now just speak freely about what I think the dangers might be.
“And some of them are quite scary.”
Dr Hinton’s pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.
But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.
“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning.
“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that. Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”.
When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.
“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”
The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.
He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.
“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
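To make the shared-weights point concrete, here is a minimal, hypothetical Python sketch (not from the article, and not Hinton's own code): many agent objects hold a reference to a single weight array, so a learning step taken by one copy is instantly visible to every other copy, which is the property Hinton contrasts with biological brains.

```python
# Conceptual illustration only (an assumption for teaching purposes, not a real system):
# all "digital agents" reference ONE shared weight array, so an update made while
# any one copy is learning is immediately visible to every other copy.
import numpy as np

shared_weights = np.zeros(4)            # one set of weights, one model of the world

class DigitalAgent:
    def __init__(self, weights):
        self.weights = weights          # a reference to the shared array, not a private copy

    def learn(self, gradient, lr=0.1):
        # In-place update: the change lands in the single shared array.
        self.weights += lr * np.asarray(gradient)

    def predict(self, x):
        return float(self.weights @ x)

agents = [DigitalAgent(shared_weights) for _ in range(10_000)]

agents[0].learn(gradient=[1.0, 0.0, 0.0, 0.0])    # only agent 0 "learns" something

x = np.array([1.0, 0.0, 0.0, 0.0])
print(agents[0].predict(x))        # 0.1
print(agents[9_999].predict(x))    # 0.1 -- every other copy "knows" it too
```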
Dr Hinton also said there were several other reasons to quit his job.
“One is, I’m 75. So it’s time to retire. Another was, I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.”
He stressed that he did not want to criticise Google and that the tech giant had been “very responsible”.
LEARN MORE
- Dr. Geoffrey Hinton leaves Google to freely share his concern that AI could cause the world and humanity serious harm.
- “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
- “The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
- “If I have 1,000 digital agents who are all exact clones with identical weights, whenever one agent learns how to do something, all of them immediately know it because they share weights,” Hinton told CNBC. “Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent. That is why GPT-4 knows hugely more than any one person.”
- “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” said Hinton, who had been employed by Google for more than a decade. “It is hard to see how you can prevent the bad actors from using it for bad things.”
- CNBC. ‘Godfather of A.I.’ leaves Google after a decade to warn society of technology he’s touted. MAY 1, 2023
- CNN. AI pioneer quits Google to warn about the technology’s ‘dangers’
- NYT. ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead. For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
- THE VERGE. ‘Godfather of AI’ quits Google with regrets and fears about his life’s work. Geoffrey Hinton, who won the ‘Nobel Prize of computing’ for his trailblazing work on neural networks, is now free to speak about the risks of AI.
- DAILY MAIL. ‘If I didn’t build it, somebody else would’ve’: The Godfather of A.I. resigns from Google and says he regrets pioneering ‘scary’ tech — likening himself to Oppenheimer creating the first atomic bomb
- FORTUNE. ‘The Godfather of A.I.’ just quit Google and says he regrets his life’s work because it can be hard to stop ‘bad actors from using it for bad things’
- BUSINESS INSIDER. The ‘Godfather of AI’ quit Google — and says he now regrets his role creating technology that poses a threat to humanity
- Geoffrey Hinton quit his job at Google and told The New York Times he regrets his role in pioneering AI.
- Hinton said he’s worried the technology will disseminate false information and eliminate jobs.
- Hinton said he quit so he could warn about the risks of AI without worrying about the impact on Google.
- After recently leaving behind his decade-long career at Google, Geoffrey Hinton, nicknamed “the Godfather of AI,” told The New York Times he has regrets around the foundational role he played in developing the technology.
- “I console myself with the normal excuse: If I hadn’t done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, who worked at Google for more than a decade, told the Times.
- Hinton’s departure from the company arrives at a time when the race to develop generative AI-powered products, like Google’s Bard chatbot and OpenAI’s ChatGPT, is heating up. Hinton, whose developments in the AI field decades ago helped pave the way for the creation of these chatbots, told the Times he’s now concerned the tech could harm humanity.
- He also voiced concern around the AI race currently underway among tech giants and questioned whether it was too far along to pump the brakes on.
- On Monday, following the publication of his interview with The New York Times, Hinton tweeted that he left Google so he could “talk about the dangers of AI without considering how this impacts Google,” adding that “Google has acted very responsibly.”
- “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google,” Jeff Dean, chief scientist at Google, said in a statement to Insider.
- Hinton did not immediately respond to Insider’s request for additional comment ahead of publication.
- Some Googlers are reportedly worried about the company’s AI chatbot, Bard
- While Hinton did not appear to single out Google in his overall critique of the AI landscape, other employees from Google have reportedly expressed concern about the company’s AI chatbot.
- After Google employees were tasked with testing the Bard chatbot, some employees said they thought the technology could be dangerous, as reported by Bloomberg. Employees who spoke with Bloomberg said they thought Google wasn’t prioritizing AI ethics and was trying to develop the tech quickly to catch up to OpenAI’s ChatGPT. Two employees tried to stop the company from releasing Bard, per previous reporting from the Times.
- “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly,” Dean said in a statement, and referred Insider to the company’s AI Principles and two blog posts from the company detailing how it is developing AI.
- In his interview with the Times, Hinton said he’s worried generative AI products will lead to the dissemination of fake information, photos, and videos across the internet — and the public will not be able to identify what is true or false.
- Hinton also spoke about how AI technologies could eventually eliminate human labor, including paralegals, translators, and assistants. This is a concern that OpenAI CEO Sam Altman and other critics of the technology have echoed.
- In March, Goldman Sachs released a report that estimated 300 million full-time jobs could be “impacted” by AI systems like ChatGPT, namely legal and administrative workers, although the level of that impact could vary. Concern is also growing among software engineers who worry their jobs will be replaced by AI.
- CBS. Full interview: “Godfather of artificial intelligence” talks impact and potential of AI