Israel Faces Accusations of AI-Powered Targeted Killings, Denied Involvement

In a recent controversy, Israel has vehemently denied involvement in an AI program allegedly used for targeted killings resulting in civilian casualties. The allegations surfaced in news reports earlier this week, citing anonymous intelligence sources involved in the Hamas-Israel conflict. The Israel Defense Forces (IDF) released a statement refuting the claims, saying it does not use AI to designate targets for military strikes. The IDF clarified that its information systems are merely tools used by analysts in the target identification process.

According to reports from +972 Magazine and the Guardian, Israel allegedly allowed an AI system called “Lavender” to shape human analysts’ judgment, leading to the authorization of strikes that killed over 100 civilians in pursuit of targets deemed senior Hamas officials. The reports suggest that the machine’s decisions were often accepted without thorough examination of the raw intelligence data by human personnel. Instead, analysts reportedly served as a “rubber stamp” for the AI’s choices, spending as little as 20 seconds per target before authorizing bombings.

As further evidence, writer Yuval Abraham of +972 Magazine claimed to have spoken with six unnamed Israeli intelligence officers directly involved in the use of AI for assassinations. The Guardian corroborated this, stating that it had received accounts from the same officers. These sources alleged that airstrikes targeting low-ranking militants had killed 15 to 20 civilians in each attack. The strikes reportedly used unguided munitions, destroying entire homes and killing all their occupants.

The IDF took to social media to dispute these assertions, with Lt. Col. Peter Lerner, an IDF spokesman, criticizing what he called “poor media ethics.” He stated that no Hamas operatives were deliberately targeted with the expectation of causing civilian casualties.

While full details on the use of AI in the Hamas-Israel conflict may not be available at this time, the IDF has acknowledged some of its AI capabilities on its website. In a post from November 2023, the IDF described an AI system called “Gospel,” which speeds up the production of targets by enhancing intelligence material. The IDF states that the goal is a complete match between the machine’s recommendation and a human analyst’s identification.

It is worth noting that Yuval Abraham, the writer of the +972 Magazine report, personally experienced the impact of the bombings when Israel targeted the home of a close friend, underscoring the emotional toll the conflict has taken on those involved.

While Israel denies running an AI program for targeted killings, other militaries, such as U.S. Central Command, have acknowledged using AI tools for target identification. AI can help narrow down potential targets, offering a more efficient approach to military operations. As the controversy continues, it remains to be seen what further information will emerge about the use of AI in the Hamas-Israel conflict.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.