China's AI Revolution: Balancing Progress and Safety

In a historic meeting in Geneva on May 14, Chinese and U.S. envoys sat down for the first official bilateral dialogue on artificial intelligence (AI). The dialogue grew out of the Woodside summit between Chinese President Xi Jinping and U.S. President Joe Biden in November 2023. Although the details of the closed-door talks have not been disclosed, initial statements suggest that China expressed frustration over the Biden administration’s export controls on advanced chips and semiconductor manufacturing equipment, which could hinder its progress in AI development. The U.S. side, for its part, raised concerns about China’s potential misuse of AI and the need for safety measures. The talks highlight the delicate balance China must strike between advancing its AI capabilities and addressing legitimate safety concerns from the international community.

China has traditionally followed a top-down model of industry development, with the central government overseeing emerging sectors to ensure responsible growth. However, policymakers are now realizing that over-regulation of AI could impede innovation. Finding the right balance between nurturing AI capabilities and ensuring responsible development has become a delicate exercise for Chinese authorities. China has taken notable steps domestically to address AI risks, implementing strict regulations against deepfakes and harmful recommendation algorithms as early as 2018. Last October, the Cybersecurity Association of China (CSAC) established an expert committee on AI safety and security governance, and local governments in tech hubs such as Shanghai, Guangdong, and Beijing have called for benchmarks and assessments to evaluate the safety of AI systems, further signs of China’s commitment to addressing AI risks.

China has also played an active role internationally in addressing AI risks. It co-signed the “Bletchley Declaration” with the United States, the European Union, and 25 other countries at last year’s AI Safety Summit in the United Kingdom to strengthen cooperation on frontier AI risks. China even unveiled its own “Global AI Governance Initiative” with the goal of making AI technologies more secure, reliable, controllable, and equitable. Chinese experts have contributed to consensus papers and dialogues with global counterparts, advocating for safety research and governance policies to mitigate AI risks.

Despite these efforts, concerns remain about China’s commitment to addressing the true risks of its growing AI sector. U.S. lawmakers have expressed alarm about the potential use of AI-generated deepfakes to spread political disinformation overseas, and Washington still has limited visibility into China’s AI landscape and its military applications of the technology. The United States has publicly pledged to maintain human control over nuclear weapons, while China has remained silent on the issue. It is clear that Washington expects more from Beijing in addressing the risks associated with AI.

The expectation for safety measures creates a conundrum for China, as such measures can be seen as obstacles to rapid AI progress – a goal that China is fervently pursuing to catch up with top U.S. labs. The tension between safety and progress was exemplified by the open letter signed by tech luminaries like Elon Musk and Steve Wozniak, which called for a six-month pause on the training of AI systems more powerful than GPT-4 to allow time for safety evaluations. Finding a balance where safety and progress are not mutually exclusive is crucial.

The recent talks in Geneva suggest a potential pathway where safety and progress can go hand-in-hand. For years, China has criticized U.S. export controls that limit its access to the advanced chips crucial for AI breakthroughs. By demonstrating a credible commitment to mitigating AI risks, China can address the concerns that motivate those controls. This hardware-centric approach to AI safety, in which access to advanced computing hardware is tied to credible commitments on risk mitigation, has gained prominence in recent years. If successful, it could allow China to import the advanced computing technology needed to grow its AI sector while assuaging international concerns.

However, U.S. regulations are about more than just safety, as demonstrated by the steep tariff increases on Chinese imports that President Biden announced on the same day as the AI talks in Geneva. Nevertheless, if China demonstrates a commitment to AI safety, it could still benefit by gaining access to advanced computing hardware. By allocating significant resources to safety initiatives such as “red-teaming” evaluations and alignment research, China could help allay U.S. national security concerns. At the same time, demonstrating responsible AI development would improve China’s international reputation, a key objective of its public diplomacy.

The coming months will witness an unprecedented level of international dialogue on AI safety. From global summits to governmental dialogues, the world’s powers are convening to chart a course for responsible AI development. Throughout this process, China’s actions will face intense scrutiny as the world gauges its willingness to collaborate and to promote responsible AI practices. The decisions China makes during this pivotal period will determine the fate of its AI dream.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.