U.S. and China to discuss risks of artificial intelligence

The U.S. and Chinese governments are set to meet in Geneva to discuss the risks associated with artificial intelligence (AI), as the rapid development of new technologies continues to transform sectors across the globe. This will be the first meeting on AI between the two governments, with the aim of identifying areas of concern and sharing each side's approach to managing AI risks.

The talks will primarily focus on risk and safety, particularly in relation to advanced systems. The goal is to exchange views on the technical risks of AI and establish open lines of communication to address each nation's concerns. Although officials declined to specify which risks will be raised, both the U.S. and China have previously voiced concerns about AI.

In November, President Biden announced plans to discuss AI risks and safety with Chinese President Xi Jinping. Rumors circulated before the meeting that Mr. Biden might announce a deal banning the use of AI in various weaponry, including nuclear warheads. However, no such agreement has emerged thus far.

The abundance of AI tools and their potential threats have prompted American officials to monitor emerging technologies closely. U.S. national security officials are concerned about AI-powered deepfakes: manipulated images, audio, and text designed to deceive audiences with false information. The fear is that such deepfakes could sow fear, uncertainty, and doubt, potentially influencing elections. Generative AI tools like ChatGPT make it possible to create manipulated content without sophisticated knowledge of the underlying algorithms. However, these tools sometimes produce inexplicably false information, a phenomenon technologists refer to as "hallucinations."

The U.S. intelligence community is paying close attention to these "hallucinations" as it integrates generative AI tools into its work. The Office of the Director of National Intelligence and the CIA have emphasized the importance of human review to catch errors. As the Department of Defense fields thousands of autonomous systems, it is establishing policies governing when those systems may substitute for human decision-making.

The question arises as to whether the push for advanced AI capabilities by adversaries will impact the U.S.’s willingness to delegate decision-making to machines. While American officials assert that decisions involving nuclear weapons will always be made by humans, China and Russia have not made similar commitments. The U.S. has called for China and Russia to make public statements reaffirming their reliance on human decision-making regarding nuclear employment.

China has its own concerns about generative AI and has launched regulatory efforts to restrict the flow of information. Chinese regulators are primarily focused on combating internet trolls and quashing rumors the government finds objectionable.

At the upcoming meeting, the U.S. and China will seek to identify common concerns while discussing their respective domestic approaches to addressing AI risks. Technical collaboration and research cooperation, however, are not on the agenda. The U.S. plans to explain its approach to AI safety and the role of international governance in addressing AI-related challenges.

As the U.S. and China come together to address the risks associated with AI, this meeting represents an important step towards establishing common ground and fostering meaningful dialogue between the two nations. By openly discussing their concerns and approaches, they can work towards mitigating the potential hazards of AI and ensuring its responsible and ethical development.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.