U.S. State Department hosts global meeting on ethical use of AI in military applications

On March 17, 2024, the U.S. State Department will host a global meeting of signatories to discuss the ethical use of artificial intelligence (AI) in military applications. The meeting is the first of its kind and reflects growing concern over the responsible development and deployment of AI technologies. Mark Montgomery, senior director of the Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies, praises the State Department's efforts but cautions that the countries whose military applications of AI should worry us most are notably absent from the gathering.

Last year, the U.S. secured 53 signatories to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy; however, key players such as China, Russia, Saudi Arabia, Brazil, Israel, and India were not among them. This week's conference will bring together 42 signatory nations, with more than 100 participants from diplomatic and military backgrounds discussing the various military applications of AI that have emerged in recent years.

A senior State Department official expressed the hope that this conference will be the first in a series of meetings, to be held for as long as necessary. The aim is to keep states focused on the issue of responsible AI and to build practical capacity. Between meetings, signatories are encouraged to engage in discussions and war games to explore new ideas and test new AI technologies. The State Department hopes these activities will contribute to implementing the declaration's goals.

The primary concern regarding AI in the context of warfare and international security is its potential to cause harm. Bonnie Jenkins, the undersecretary of state for arms control and international security affairs, highlights the importance of championing the safe, secure, and trustworthy development and use of AI. While acknowledging AI's positive potential in areas such as medicine, agriculture, and addressing global challenges like food insecurity and climate change, Jenkins warns of the risks and stresses the need for appropriate guardrails to mitigate them.

The Biden administration has made it a priority to address these challenges and ensure responsible AI development. Jenkins emphasizes that while we cannot predict how AI will evolve or what it will be capable of in the future, steps can be taken now to put the necessary policies in place and build the technical capacity to enable responsible use. The focus is on creating a framework that can adapt to technological advances while ensuring the ethical and responsible use of AI.

As we look to the future, it is essential that global leaders come together to address the ethical implications of AI, particularly in military applications. The U.S.-led meeting represents an important step in fostering international cooperation and understanding. By sharing information and best practices, signatory nations can work towards building practical capacity and establishing guidelines for the responsible use of AI. It is a necessary conversation that will shape the development and deployment of AI technologies going forward.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.