A very good read from a respected source!

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

CNBC | DAVOS WEF: We don’t want to see an AI ‘Hiroshima,’ Salesforce CEO warns


PUBLISHED THU, JAN 18 2024, 9:44 AM EST

UPDATED THU, JAN 18 2024, 12:21 PM EST

Ryan Browne @RYAN_BROWNE_

Ruxandra Iordache @RMIORDACHE

Marc Benioff, co-founder, chairman and CEO of Salesforce, speaking with CNBC’s Sara Eisen at the World Economic Forum Annual Meeting in Davos, Switzerland, on Jan. 17, 2024.

KEY POINTS
  • Salesforce CEO Marc Benioff warned onstage Thursday at the World Economic Forum in Davos, Switzerland, that the technology industry is working to make sure artificial intelligence is developed safely enough to ensure the world avoids a “Hiroshima moment.”
  • Concerns have mounted over the trustworthiness, uses and potential information bias of AI, with critics worldwide raising questions over the software coming to replace human workers.
  • Several technology leaders have made bold comments about artificial intelligence at Davos this week.

The tech industry is setting down safety protocols and establishing trust principles around the fast-developing AI software that has taken the world by storm, in a bid to avoid a “Hiroshima moment,” Salesforce CEO Marc Benioff told a World Economic Forum panel in Davos, Switzerland.

“This is a huge moment for AI. AI took a huge leap forward in the last year or two years,” he noted Thursday, acknowledging that, amid the rapid pace of its progress, the technology “could go really wrong.”

“We don’t want something to go really wrong. That’s why we’re going to, like, that safety summit. That’s why we’re talking about trust,” Benioff said, referencing a U.K. event last year.

“We don’t want to have a Hiroshima moment. We’ve seen technology go really wrong, and we saw a Hiroshima. We don’t want to see an AI Hiroshima. We want to make sure that we’ve got our head around this now.”

Concerns have mounted over the trustworthiness, uses and potential information bias of AI, with critics worldwide raising questions over the software coming to replace human workers. Earlier this week, the International Monetary Fund released a report that warned that nearly 40% of jobs across the globe could be impacted by the rise of artificial intelligence.

Raising alarm bells over the software’s potential for intellectual property abuses, The New York Times in December launched a lawsuit against Microsoft and ChatGPT creator OpenAI, accusing the companies of copyright infringement for training their large language models on the newspaper’s content.

Salesforce has skin in the game, having launched its own generative AI software, Einstein GPT, and joined a global race among software developers to incorporate generative AI capabilities into their existing products.

The company, whose largest unit tackles customer support, reported fiscal third-quarter earnings in November that exceeded analyst expectations, with revenue up 11% year on year.

Generative AI is a form of artificial intelligence that produces novel content, designs and ideas in response to user prompts. It is trained on huge amounts of data sourced from the open web. In OpenAI’s case, ChatGPT is trained on information leading up to 2021.

Many companies have been experimenting with the technology, using it for a range of tasks spanning art, marketing, copywriting and more. AI has also created concerns around cyber vulnerabilities, not least because it empowers criminals to create and deploy malicious software.

Last year, at the AI Safety Summit held at Bletchley Park, England, world leaders signed a landmark agreement committing to form frameworks and standards around how to develop AI safely.

Several leading figures in technology have made remarks about the development of AI this week, including Sam Altman of OpenAI and Pat Gelsinger of U.S. chipmaking giant Intel.

Speaking on a panel with Bloomberg at Davos on Tuesday, Altman, CEO of OpenAI, said that he thinks artificial general intelligence (AGI) — a form of AI on par with, or more advanced than, humans — is likely to come soon, but that it won’t be as scary as many economists fear.

“This is much more of a tool than I expected,” Altman said. “It’ll get better, but it’s not yet replacing jobs. It is this incredible tool for productivity. This is a tool that magnifies what humans do, lets people do their jobs better and lets the AI do parts of jobs.”
