FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

THE NEW YORK TIMES. A.I. Pioneers Call for Protections Against ‘Catastrophic Risks’

Scientists from the United States, China and other nations called for an international authority to oversee artificial intelligence.

Reporting from Venice

Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.

The release of ChatGPT and a string of similar services that can create text and images on command has shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it.

In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.”

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University.

“If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield said.

On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI.

Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors.

The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China’s top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing.

The group also included scientists from several of China’s leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.

Their latest gathering in Venice took place at a building owned by the billionaire philanthropist Nicolas Berggruen. The president of the Berggruen Institute think tank, Dawn Nakagawa, participated in the meeting and signed the statement released on Monday.

The meetings are a rare venue for engagement between Chinese and Western scientists at a time when the United States and China are locked in a tense competition for technological primacy.

In recent months, Chinese companies have unveiled technology that rivals the leading American A.I. systems.

Government officials in both China and the United States have made artificial intelligence a priority in the past year. In July, a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework.

Last October, President Biden signed an executive order that required companies to report to the federal government about the risks that their A.I. systems could pose, like their ability to create weapons of mass destruction or potential to be used by terrorists.

President Biden and China’s leader, Xi Jinping, agreed when they met last year that officials from both countries should hold talks on A.I. safety. The first took place in Geneva in May.

In a broader government initiative, representatives from 28 countries signed a declaration in Britain last November, agreeing to cooperate on evaluating the risks of artificial intelligence. They met again in Seoul in May. But these gatherings have stopped short of setting specific policy goals.

Distrust between the United States and China adds to the difficulty of achieving alignment.

“Both countries are hugely suspicious of each other’s intentions,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who was not part of the dialogue. “They’re worried that if they pump the brakes because of safety concerns, that will allow the other to zoom ahead,” Mr. Sheehan said. “That suspicion is just going to be baked in.”

The scientists who met in Venice this month said their conversations were important because scientific exchange is shrinking amid the competition between the two geopolitical superpowers.

In an interview, Dr. Bengio, one of the founding members of the group, cited talks between American and Soviet scientists at the height of the Cold War that helped bring about coordination to avert nuclear catastrophe. In both cases, the scientists involved felt an obligation to help close the Pandora’s box opened by their research.

Technology is changing so quickly that it is difficult for individual companies and governments to decide how to approach it, and collaboration is crucial, said Fu Hongyu, the director of A.I. governance at Alibaba’s research institute, AliResearch, who did not participate in the dialogue.

“It’s not like regulating a mature technology,” Mr. Fu said. “Nobody knows what the future of A.I. looks like.”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.