
NYT: “Israel’s” AI Experiments in Gaza War Raise Ethical Concerns


By Staff, Agencies

Before the “Israeli” war on Gaza, "Israel" built a machine-learning algorithm — code-named “Lavender” — that could quickly sort data to hunt for low-level officials. It was trained on a database of confirmed Hamas members and meant to predict who else might be part of the group. Though the system’s predictions were imperfect, "Israel" used it at the start of the war in Gaza to help choose attack targets.
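The article gives no technical detail on Lavender beyond the fact that it was trained on a database of confirmed members to predict who else might belong to the group. Purely as an illustration of that general approach, the sketch below trains a generic supervised classifier on synthetic labeled data; the features, model choice and threshold are assumptions made for the example, not reported details of the system.

```python
# Illustrative sketch only: a generic membership-prediction classifier.
# Nothing here reflects the actual Lavender system; the features, model
# and data are invented for explanation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per person, columns standing in
# for behavioral signals (e.g., communication-graph statistics).
X = rng.normal(size=(5000, 12))
# Labels: 1 = confirmed member (the training database the article
# mentions), 0 = presumed non-member. Synthetic and noisy on purpose.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.5, size=5000) > 1.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The model outputs scores, not certainties; thresholding them trades
# precision against recall, which is why such predictions are imperfect.
probs = model.predict_proba(X_test)[:, 1]
preds = (probs >= 0.5).astype(int)
print("precision:", precision_score(y_test, preds))
print("recall:   ", recall_score(y_test, preds))
```

The point of the sketch is the failure mode: any score threshold misclassifies some people, which is consistent with the article's note that the system's predictions were imperfect.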

Few goals loomed larger than finding and eliminating Hamas’s senior leadership. Near the top of the list was Ibrahim Biari, the Hamas commander who "Israeli" officials believed played a central role in planning the Oct. 7 operation.

"Israeli" intelligence quickly intercepted Biari’s calls with other Hamas members but could not pinpoint his location. So, they turned to the AI-backed audio tool, which analyzed different sounds, such as sonic bombs and airstrikes.

After deducing an approximate location for where Biari was placing his calls, "Israeli" officials were warned that the area, which included several apartment complexes, was densely populated, two intelligence officers said. An airstrike would need to target several buildings to ensure Biari was assassinated, they said. The operation was greenlit.

Since then, “Israeli” intelligence has also used the audio tool alongside maps and photos of Gaza’s underground tunnel maze to locate captives. Over time, the tool was refined to more precisely find individuals, two “Israeli” officers said.

The audio tool was just one example of how "Israel" has used the war in Gaza to rapidly test and deploy AI-backed military technologies to a degree that had not been seen before, according to interviews with nine American and "Israeli" war officials, who spoke on the condition of anonymity because the work is confidential.

In the past 18 months, "Israel" has also combined AI with facial recognition software to match partly obscured or injured faces to real identities and turned to AI to compile potential airstrike targets. It also created an Arabic-language AI model to power a chatbot that could scan and analyze text messages, social media posts and other Arabic-language data, two people with knowledge of the programs said.
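The report does not name the facial recognition software. As a generic sketch of how such systems handle partly obscured faces, the example below matches a degraded face embedding against a gallery by cosine similarity; the embeddings, gallery size and acceptance threshold are invented assumptions.

```python
# Generic embedding-based identity matching; all data here is synthetic.
import numpy as np

rng = np.random.default_rng(2)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical gallery: one 128-d embedding per known identity, as a
# face-embedding network would produce offline.
gallery = normalize(rng.normal(size=(1000, 128)))
names = [f"id_{i}" for i in range(1000)]

# A probe from a partly obscured or injured face: the true identity's
# vector perturbed by noise standing in for occlusion.
probe = normalize(gallery[417] + 0.6 * rng.normal(size=128))

# Cosine similarity reduces to a dot product on unit vectors.
sims = gallery @ probe
best = int(np.argmax(sims))

# Occlusion lowers similarity, so low-scoring matches should go to a
# human reviewer; that failure mode is behind mistaken identifications.
THRESHOLD = 0.5  # illustrative value only
verdict = "accept" if sims[best] >= THRESHOLD else "send to review"
print(names[best], round(float(sims[best]), 3), verdict)
```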

Many of these efforts were a partnership between enlisted forces in Unit 8200 and reserve forces who work at tech companies such as Google, Microsoft and Meta, three people with knowledge of the technologies said. Unit 8200 set up what became known as “The Studio,” an innovation hub and place to match experts with AI projects.

Yet even as "Israel" raced to develop the AI arsenal, deployment of the technologies sometimes led to mistaken identifications and arrests, as well as civilian deaths, according to "Israeli" and American officials.

The technologies “also raise serious ethical questions,” Lorber said, warning that AI needs checks and balances and that humans should make the final decisions.

A spokeswoman for the “Israeli” army said she could not comment on specific technologies because of their “confidential nature,” adding that "Israel" was investigating the strike on Biari and was “unable to provide any further information until the investigation is complete.”

Meta and Microsoft declined to comment. Google said it has “employees who do reserve duty in various countries around the world. The work those employees do as reservists is not connected to Google.”

“Israel” previously used conflicts in Gaza and Lebanon to experiment with and advance tech tools for its military, such as drones, phone hacking tools and the Iron Dome air defense system, which can help intercept short-range rockets.

After the Oct. 7, 2023, operation, AI technologies were quickly cleared for deployment, four "Israeli" officials said. That led to the cooperation between Unit 8200 and reserve forces in “The Studio” to swiftly develop new AI capabilities, they said.

Developers previously struggled to create such an Arabic-language model because of a dearth of Arabic-language data to train the technology. When such data was available, it was mostly in standard written Arabic, which is more formal than the dozens of dialects used in spoken Arabic.

"Israel" did not have that problem, the three officers said. The entity had decades of intercepted text messages, transcribed phone calls and posts scraped from social media in spoken Arabic dialects. So "Israeli" officers created the large language model in the first few months of the war and built a chatbot to run queries in Arabic. They merged the tool with multimedia databases, allowing analysts to run complex searches across images and videos, four "Israeli" officials said.

After the martyrdom of His Eminence Sayyed Hassan Nasrallah in September 2024, the chatbot analyzed the responses across the Arabic-speaking world, three “Israeli” officers said. The technology differentiated among dialects in Lebanon to gauge public reaction, helping "Israel" assess whether there was public pressure for a counterstrike.
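The article does not describe the pipeline behind that assessment. Assuming upstream models have already tagged each post with a dialect and a reaction label, the final step could be a simple per-dialect aggregation, sketched below with invented labels and data.

```python
# Toy aggregation of per-post (dialect, reaction) tags; every label and
# record here is invented for illustration.
from collections import Counter, defaultdict

# Hypothetical upstream output: one (dialect, reaction) pair per post.
posts = [
    ("lebanese", "anger"), ("lebanese", "grief"), ("lebanese", "anger"),
    ("gulf", "grief"), ("egyptian", "neutral"), ("lebanese", "calls_for_response"),
]

by_dialect = defaultdict(Counter)
for dialect, reaction in posts:
    by_dialect[dialect][reaction] += 1

# Share of each reaction category within every dialect region.
for dialect, counts in by_dialect.items():
    total = sum(counts.values())
    shares = {r: round(n / total, 2) for r, n in counts.items()}
    print(dialect, shares)
```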

At times, the chatbot could not identify some modern slang terms and words that were transliterated from English to Arabic, two officers said. That required "Israeli" intelligence officers with expertise in different dialects to review and correct its work, one of the officers said.

The chatbot also sometimes provided wrong answers — for instance, returning photos of pipes instead of guns — two "Israeli" intelligence officers said. Even so, the AI tool significantly accelerated research and analysis, they said.

After the Oct. 7 operation, "Israel" also began equipping cameras at temporary checkpoints set up between the northern and southern Gaza Strip with the ability to scan and send high-resolution images of Palestinians to an AI-backed facial recognition program.
