AI Systems in Israel's Bombing Campaign: Ethical and Legal Implications

Reports have recently surfaced regarding the use of advanced artificial intelligence (AI) systems in Israel’s bombing campaign in Gaza, raising pressing ethical and legal questions. The two AI systems at the center of this debate are Lavender and Gospel.

Lavender, developed by Israel’s elite intelligence division, Unit 8200, functions as an AI-powered database designed to identify potential targets associated with Hamas and Palestinian Islamic Jihad (PIJ). Using machine-learning algorithms, Lavender processes enormous amounts of data to pinpoint individuals deemed “junior” militants within these armed groups.
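
Lavender’s internals have not been made public, but the reporting describes a system that scores and ranks individuals. Purely as illustration, here is a minimal sketch of what such a scoring-and-ranking step could look like; every feature name, weight, and threshold below is invented for this example and implies nothing about Lavender’s actual design:

```python
# Purely illustrative toy model of scoring individuals by estimated
# likelihood of group affiliation. All features, weights, and records
# below are invented; none are drawn from reporting on Lavender itself.
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    features: dict  # hypothetical signal strengths in [0, 1]

# Hypothetical learned weights for each signal (an assumption).
WEIGHTS = {
    "comms_overlap": 0.5,   # communication links to known members
    "location_match": 0.3,  # presence near flagged locations
    "social_ties": 0.2,     # shared group memberships
}

def score(record: Record) -> float:
    """Weighted sum of feature signals; higher means 'more suspect'."""
    return sum(WEIGHTS[k] * record.features.get(k, 0.0) for k in WEIGHTS)

people = [
    Record("A", {"comms_overlap": 0.9, "location_match": 0.7, "social_ties": 0.8}),
    Record("B", {"comms_overlap": 0.1, "location_match": 0.2, "social_ties": 0.4}),
]

THRESHOLD = 0.6  # arbitrary cut-off for flagging
for p in sorted(people, key=score, reverse=True):
    flag = "flagged" if score(p) >= THRESHOLD else "not flagged"
    print(p.name, round(score(p), 2), flag)
```

The point of the sketch is structural: once a statistical score replaces a human judgment, everything downstream hinges on how the weights and the cut-off were chosen.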

According to reports, Lavender initially identified approximately 37,000 Palestinian men as linked to Hamas or PIJ, marking a significant departure from the labor-intensive, human-driven vetting traditionally carried out by Israel’s intelligence agencies, Mossad and Shin Bet. Soldiers now make split-second decisions based on Lavender’s output, reportedly spending as little as 20 seconds on a target, often only long enough to confirm that the target is male, before authorizing a strike.

It is worth noting that human soldiers frequently act on Lavender’s output without question, even though the program has a reported error margin of up to 10 percent: roughly one in ten of its nominations may be wrong. As a result, the program can flag individuals with minimal or no affiliation with Hamas, raising concerns about the accuracy and reliability of the AI system.
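
To make those reported figures concrete, a quick back-of-the-envelope calculation combines the roughly 37,000 identifications with the reported 10 percent error margin:

```python
# Back-of-the-envelope arithmetic using the figures reported above.
flagged = 37_000    # individuals reportedly identified by the system
error_rate = 0.10   # reported error margin of up to 10 percent

potential_misidentifications = flagged * error_rate
print(f"Up to {potential_misidentifications:,.0f} people could be misidentified")
# -> Up to 3,700 people could be misidentified
```

On the reported numbers alone, thousands of people could in principle be wrongly flagged, which is what gives the 20-second review window its weight.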

Gospel, on the other hand, is an AI system that generates automatic recommendations for targeting structures and buildings. The IDF released a statement describing Gospel as a system that “allows the use of automatic tools to produce targets at a fast pace and works by improving accurate and high-quality intelligence material according to the requirement.” This system relies on artificial intelligence to rapidly and automatically extract updated intelligence, producing a recommendation for the human researcher.

While the specific data sources used by Gospel remain undisclosed, experts speculate that AI-driven targeting systems typically analyze a range of data sets, including drone imagery, intercepted communications, surveillance data, and behavioral patterns of individuals and groups.
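
Because Gospel’s inputs are undisclosed, any concrete picture is speculative. As a purely hypothetical sketch of the kind of multi-source fusion experts describe, with all source names and confidence values invented:

```python
# Highly simplified sketch of multi-source fusion for a target
# recommendation. Source names and scores are invented; nothing here
# reflects Gospel's actual (undisclosed) inputs or logic.
sources = {
    "drone_imagery": 0.7,        # hypothetical confidence from imagery analysis
    "intercepted_comms": 0.6,    # hypothetical confidence from signals intel
    "surveillance_data": 0.5,    # hypothetical confidence from surveillance
    "behavioral_patterns": 0.4,  # hypothetical confidence from pattern analysis
}

# A naive fusion rule: average the per-source confidences.
combined = sum(sources.values()) / len(sources)
decision = "refer to human analyst" if combined >= 0.5 else "no recommendation"
print(f"combined confidence: {combined:.2f} -> {decision}")
```

Even in this toy form, the design choice is visible: a single fused number ends up standing in for several very different kinds of evidence.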

The deployment of Lavender and Gospel in Israel’s bombing campaign represents a significant advancement in the intersection of AI and modern warfare. These technologies have the potential to enhance target identification and operational efficiency. However, their use also raises important ethical and legal concerns.

The ethical dilemmas revolve around the potential for errors and misidentification. Given Lavender’s error margin and the reports of individuals being targeted despite minimal or no affiliation with Hamas, questions arise about the reliability and accuracy of AI systems in targeting decisions. Additionally, the rapid pace at which Gospel generates target recommendations may leave little room for thorough scrutiny, potentially resulting in strikes on civilian structures and buildings.

From a legal standpoint, the use of AI systems in warfare raises concerns about compliance with international humanitarian law, particularly in terms of proportionality and distinction. Proportionality dictates that the expected harm caused by an attack must not exceed the anticipated military advantage gained. Distinction requires that parties to a conflict distinguish between combatants and civilians and only target military objectives. The use of AI systems in targeting decisions can impact the ability to uphold these principles, potentially leading to legal violations.
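
Purely to make the logical form of the proportionality test explicit, it can be rendered schematically; this is a didactic sketch, and neither quantity genuinely reduces to a number in practice:

```python
# Didactic sketch of the proportionality test's logical form only.
# In real assessments neither quantity reduces to a single number;
# the values below are placeholders to show the comparison.
def proportionality_holds(expected_civilian_harm: float,
                          anticipated_military_advantage: float) -> bool:
    """True if the expected harm does not exceed the anticipated advantage."""
    return expected_civilian_harm <= anticipated_military_advantage

print(proportionality_holds(expected_civilian_harm=0.8,
                            anticipated_military_advantage=0.3))  # -> False
```

The sketch only shows the shape of the comparison; the legal difficulty is precisely that its inputs resist quantification, which is why delegating such judgments to automated systems is contested.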

As the utilization of AI systems in warfare becomes increasingly prevalent, it is essential to carefully consider the ethical and legal implications. Balancing the advantages of enhanced operational efficiency and target identification against the risks of error and violations of international humanitarian law is crucial. It is clear that further discussion and regulation are needed to address these concerns.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.