The Biden administration is taking steps to protect U.S. artificial intelligence (AI) technology from China by considering export restrictions on advanced AI models, according to sources. These models, which serve as the core software of AI systems like ChatGPT, keep their underlying software and training data proprietary (closed source), making them a potential target for export controls. The move aims to counter China’s ambitions in AI and follows earlier measures blocking the export of advanced AI chips to China. Keeping pace with the fast-moving AI industry, however, poses challenges for regulators. The Commerce Department declined to comment on the matter.
Currently, U.S. AI giants like Microsoft-backed OpenAI, Google DeepMind, and Anthropic can sell their closed source AI models to anyone in the world without government oversight. This raises concerns among government and private sector researchers that U.S. adversaries could use these models for malicious purposes, including cyber attacks or the creation of biological weapons. To establish export controls on AI models, the U.S. may use the threshold outlined in an AI executive order issued in October 2023, which is based on the computing power required to train a model. If implemented, this threshold would determine which AI models are subject to export restrictions. However, as of now, no AI models are thought to have reached the threshold, although Google’s Gemini Ultra is seen as being close, according to EpochAI.
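For a rough sense of how a compute-based threshold works in practice: the October 2023 executive order frames its trigger in terms of total operations used to train a model (widely cited as on the order of 10^26 operations for general-purpose models), and a common back-of-the-envelope estimate for a dense transformer’s training compute is roughly 6 × parameters × training tokens. The sketch below is illustrative only; the model configurations are made-up assumptions, not figures from the order, Google, or Epoch AI.

```python
# Back-of-the-envelope check of estimated training compute against a
# compute-based threshold. The 6 * params * tokens rule of thumb and the
# example model sizes are illustrative assumptions, not official figures.

THRESHOLD_OPS = 1e26  # threshold widely cited for the October 2023 executive order

def estimated_training_ops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer:
    roughly 6 operations per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model configurations (parameter and token counts are invented).
models = {
    "hypothetical_70B_model": (70e9, 15e12),   # 70B params, 15T tokens -> ~6.3e24
    "hypothetical_1T_model":  (1e12, 20e12),   # 1T params, 20T tokens  -> ~1.2e26
}

for name, (params, tokens) in models.items():
    ops = estimated_training_ops(params, tokens)
    status = "ABOVE" if ops >= THRESHOLD_OPS else "below"
    print(f"{name}: ~{ops:.2e} ops -> {status} the 1e26 threshold")
```

Under these assumptions, only the hypothetical trillion-parameter run crosses the line, which illustrates why, as noted above, current models are thought to sit just under the threshold rather than over it.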
The idea of implementing export controls on AI models is still in the early stages, and the Commerce Department has not finalized any rule proposals. Nevertheless, the consideration of such measures underscores the U.S. government’s commitment to addressing China’s AI ambitions. Peter Harrell, a former National Security Council official, noted that AI models could become a choke point in countering China’s AI capabilities, though whether they can practically be turned into an export-controllable one remains to be seen.
The concerns surrounding AI models stem from their potential use by foreign actors for malicious purposes. Researchers have highlighted the risks of advanced AI models in the creation of biological weapons, while the Department of Homeland Security has warned that cyber actors could use AI to mount more evasive cyber attacks. The potential explosion in the use and exploitation of AI poses a significant challenge for intelligence agencies and regulators alike.
The U.S. has already taken steps to address these concerns, restricting the flow of American AI chips and proposing a rule that would require U.S. cloud companies to report when foreign customers use their services to train AI models. The focus, however, has not yet shifted to controlling AI models directly. Establishing a threshold based on computing power is seen as a stopgap until better methods for measuring model capabilities and risks are developed. Controlling AI model exports will be challenging in any case: many models are open source and would fall outside the scope of export controls, and defining the right criteria for which models should be controlled will itself prove difficult.
The Biden administration’s consideration of export restrictions on advanced AI models reflects the growing importance of AI technology in the U.S.-China competition. While the details of any potential regulations are yet to be finalized, this move demonstrates a concerted effort to safeguard U.S. AI innovation and national security. The outcome of these considerations will play a crucial role in shaping the future of AI technology in the global landscape.