“Is Xi Jinping an AI Doomer? China’s Elite Is Split over Artificial Intelligence”, 2024-08-25:
In July of last year Henry Kissinger travelled to Beijing for the final time before his death. Among the messages he delivered to China’s ruler, Xi Jinping, was a warning about the catastrophic risks of artificial intelligence (AI). Since then American tech bosses and ex-government officials have quietly met with their Chinese counterparts in a series of informal meetings dubbed the ‘Kissinger Dialogues’. The conversations have focused in part on how to protect the world from the dangers of AI. On August 27th American and Chinese officials are expected to take up the subject (along with many others) when America’s national security adviser, Jake Sullivan, travels to Beijing.
…Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.
…China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.
But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing Award for advances in computer science. In July Yao said AI posed a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.
The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit.
…The debate over how to approach the technology has led to a turf war between China’s regulators…The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.
The decision will ultimately come down to what Xi thinks. In June he sent a letter to Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.
More clues to Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive.
…If China does move ahead with efforts to restrict the most advanced AI research and development it will have gone further than any other big country. Xi says he wants to “strengthen the governance of artificial-intelligence rules within the framework of the United Nations”.