FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

In July of last year Henry Kissinger travelled to Beijing for the final time before his death. Among the messages he delivered to China’s ruler, Xi Jinping, was a warning about the catastrophic risks of artificial intelligence (AI). Since then American tech bosses and ex-government officials have quietly met with their Chinese counterparts in a series of informal meetings dubbed the Kissinger Dialogues. The conversations have focused in part on how to protect the world from the dangers of AI. On August 27th American and Chinese officials are expected to take up the subject (along with many others) when America’s national security adviser, Jake Sullivan, travels to Beijing.

Many in the tech world think that AI will come to match or surpass the cognitive abilities of humans. Some developers predict that artificial general intelligence (AGI) models will one day be able to learn on their own, which could make them uncontrollable. Those who believe that, left unchecked, AI poses an existential risk to humanity are called “doomers”. They tend to advocate stricter regulations. On the other side are “accelerationists”, who stress AI’s potential to benefit humanity.

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

Until recently China’s regulators have focused on the risk of rogue chatbots saying politically incorrect things about the Communist Party, rather than that of cutting-edge models slipping out of human control. In 2023 the government required developers to register their large language models. Algorithms are regularly marked on how well they comply with socialist values and whether they might “subvert state power”. The rules are also meant to prevent discrimination and leaks of customer data. But, in general, AI-safety regulations are light. Some of China’s more onerous restrictions were rescinded last year.

China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. State labs are doing increasingly advanced work in this domain. Private firms have been less active, but more of them have at least begun paying lip service to the risks of AI.

The debate over how to approach the technology has led to a turf war between China’s regulators. The industry ministry has called attention to safety concerns, telling researchers to test models for threats to humans. But most of China’s securocrats see falling behind America as a bigger risk. The science ministry and state economic planners also favour faster development. A national AI law slated for this year quietly fell off the government’s work agenda in recent months because of these disagreements. The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive.

Safety gurus say that what matters is how these instructions are implemented. China will probably create an AI-safety institute to observe cutting-edge research, as America and Britain have done, says Matt Sheehan of the Carnegie Endowment for International Peace, a think-tank in Washington. Which department would oversee such an institute is an open question. For now Chinese officials are emphasising the need to share the responsibility of regulating AI and to improve co-ordination.

If China does move ahead with efforts to restrict the most advanced AI research and development it will have gone further than any other big country. Mr Xi says he wants to “strengthen the governance of artificial-intelligence rules within the framework of the United Nations”. To do that China will have to work more closely with others. But America and its friends are still considering the issue. The debate between doomers and accelerationists, in China and elsewhere, is far from over.