
Microsoft says it provided AI to Israeli military for war but denies use to harm people in Gaza

WASHINGTON: Microsoft acknowledged that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza.

The unsigned blog post on Microsoft’s corporate website appears to be the company’s first public acknowledgement of its deep involvement in the war, which started after Hamas killed about 1,200 people in Israel and has led to the deaths of tens of thousands in Gaza.

It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant’s close partnership with the Israeli Ministry of Defense, with the military’s use of commercial AI products rising nearly 200-fold after the deadly Oct. 7, 2023, Hamas attack.

The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel’s in-house AI-enabled targeting systems and vice versa.

The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people.

Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake “additional fact-finding.” The statement did not identify the outside firm or provide a copy of its report.

The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Microsoft declined to answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes.

The company’s statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided “special access to our technologies beyond the terms of our commercial agreements” and “limited emergency support” to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7.

“We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others,” Microsoft said. “We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza.”

The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians.

In its statement, the company also conceded that it “does not have visibility into how customers use our software on their own servers or other devices.” The company added that it could not know how its products might be used through other commercial cloud providers.

In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms.

Microsoft said the Israeli military, like any other customer, was bound to follow the company’s Acceptable Use Policy and AI Code of Conduct, which bar the use of its products to inflict harm in any manner prohibited by law. In its statement, the company said it had found “no evidence” the Israeli military had violated those terms.
