FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Tesla CEO Elon Musk says there was “overwhelming consensus” for regulation of artificial intelligence after tech heavyweights gathered in Washington to discuss AI.

Tech bosses attending the meeting included Meta’s Mark Zuckerberg and Google boss Sundar Pichai.

Microsoft’s former CEO Bill Gates and Microsoft’s current CEO Satya Nadella were also in attendance.

The Wednesday meeting with US lawmakers was held behind closed doors.

The forum was convened by Senate Majority Leader Chuck Schumer and included the tech leaders as well as civil rights advocates.

The power of artificial intelligence – for both good and bad – has been the subject of keen interest from politicians around the world.

In May, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee, describing the potential pitfalls of the new technology.

ChatGPT and other similar programmes can create incredibly human-like answers to questions – but can also be wildly inaccurate.

“I think if this technology goes wrong, it can go quite wrong…we want to be vocal about that,” Mr Altman said. “We want to work with the government to prevent that from happening,” he said.

There are fears that the technology could lead to mass layoffs, turbocharge fraud and make misinformation more convincing.

AI companies have also been criticised for training their models on data scraped from the internet without permission or payment to creators.

In April, Mr Musk told the BBC: “I think there should be a regulatory body established for overseeing AI to make sure that it does not present a danger to the public.”

In Wednesday’s meeting, he said he wanted a “referee” for artificial intelligence.

“I think we’ll probably see something happen. I don’t know on what timeframe or exactly how it will manifest itself,” he told reporters afterwards.

Mr Zuckerberg said that Congress “should engage with AI to support innovation and safeguards”.

He added it was “better that the standard is set by American companies that can work with our government to shape these models on important issues”.

Republican Senator Mike Rounds said it would take time for Congress to act.

“Are we ready to go out and write legislation? Absolutely not,” Mr Rounds said. “We’re not there.”

Democrat Senator Cory Booker said all participants agreed “the government has a regulatory role” but crafting legislation would be a challenge.

The nation’s biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the U.S. Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.

Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hands, even though they had diverse views,” he said.

Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the United States can stay ahead of China and other countries.

“The key point was really that it’s important for us to have a referee,” said Elon Musk, CEO of Tesla and X, during a break in the daylong forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”

Schumer will not necessarily take the tech executives’ advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.

Congress should do what it can to maximize AI’s benefits and minimize the negatives, Schumer said, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails.”

Other executives attending the meeting were Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting “might go down in history as being very important for the future of civilization.”

First, though, lawmakers have to agree on whether to regulate, and how.

Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass any legislation surrounding social media, such as for stricter privacy standards.

Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and he listed some of the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.

Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.

Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.

“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.

The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.

During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. “open source” AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.

In terms of a potential new agency for regulation, “that is one of the biggest questions we have to answer and that we will continue to discuss,” Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.

Outside the meeting, Google CEO Pichai declined to discuss specifics but generally endorsed the idea of Washington involvement.

“I think it’s important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” he said.

Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.

Sen. Josh Hawley, R-Mo., said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.

“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.

While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer’s event risked emphasizing the concerns of big firms over everyone else.

Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and said it was “hard to envision a room like that in any way meaningfully representing the interests of the broader public.” She did not attend.

In the United States, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.

There is also division: some members of Congress worry more about overregulating the industry, while others are more concerned about the potential risks. Those differences often fall along party lines.

“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” said Republican Sen. Todd Young of Indiana, one of the senators who organized the forum with Schumer. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”

Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Schumer said they discussed “the need to do something fairly immediate” before next year’s presidential election.

Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.

Some of those invited to Capitol Hill, such as Musk, have voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a University of California, Berkeley researcher who has studied algorithmic bias, said she tried to emphasize real-world harms already occurring.

“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Raji said. What remains to be seen, she said, is which voices senators will listen to and what priorities they elevate as they work to pass new laws.

Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.

A group of European corporations has called on EU leaders to rethink the rules, arguing that they could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.


O’Brien reported from Providence, Rhode Island. Associated Press journalists Ali Swenson in New York, Kelvin Chan in London and Nathan Ellgren in Washington contributed to this report.

VOX: AI rules that US policymakers are considering, explained

ChatGPT, Midjourney, and other tools are forcing Biden and Congress to take AI seriously.

You can break most of the ideas circulating into one of four rough categories:

  • Rules: New regulations and laws for individuals and companies training AI models, building or selling chips used for AI training, and/or using AI models in their business
  • Institutions: New government agencies or international organizations that can implement and enforce these new regulations and laws
  • Money: Additional funding for research, either to expand AI capabilities or to ensure safety
  • People: Expanded high-skilled immigration and increased education funding to build out a workforce that can build and control AI
