I’m having a lot of trouble reconciling this statement. Maybe you can help me out here…
Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza.
So, Microsoft is aware that the Israeli military (the IDF) is actively using its cloud computing and AI services in the midst of one of the most brutal, high-casualty conflicts of our time. Yet it simultaneously claims that those tools weren’t used to target or harm people in Gaza?
…how does that add up?
To be clear, I’m not suggesting Microsoft was directly responsible for pulling any triggers. But let’s be honest: if you’re selling high-performance tools to a military during wartime, tools explicitly designed to optimize surveillance, decision-making, and targeting, you don’t get to shrug and say “Well, we found no evidence of misuse.”
Here’s the thing: accountability doesn’t begin after harm is done - it begins when you enter into the relationship knowing full well how your tools may be used.
The Hammer Analogy Falls Apart
Some may ask whether it’s really on Microsoft to police how its infrastructure is used. After all, is a hammer manufacturer responsible if someone buys a hammer and uses it to commit a crime?
That same analogy, used by tech companies for decades, collapses under the weight of modern tech.
If a hammer company knowingly sells to someone who openly says they plan to commit violence, then YES - the company holds some responsibility. And unlike traditional hammer-like tools, Microsoft’s products aren’t static. Azure cloud services and AI models can be updated, monitored, and revoked in real time. We’re not talking about hammers; we’re talking about a remote-controlled weapons system that Microsoft can, in theory, shut off - as the rough sketch below illustrates.
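To make that contrast concrete, here is a minimal, purely illustrative sketch of what a provider-side “kill switch” could look like. Every name in it (ServiceGateway, ReviewStatus, and so on) is hypothetical - this is not Azure’s actual API or Microsoft’s actual process - but it shows the basic point: a cloud provider can check a customer’s review status on every request and cut off access the moment that status changes, with no new deployment and no physical recall.

```python
# Hypothetical sketch - not a real Azure API. It only illustrates that a
# cloud provider can gate every request on a live policy check and revoke
# access instantly, something no hammer manufacturer can do.

from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    CLEARED = "cleared"            # passed human-rights due diligence
    UNDER_REVIEW = "under_review"  # access paused pending assessment
    REVOKED = "revoked"            # access pulled


@dataclass
class Customer:
    customer_id: str
    review_status: ReviewStatus


class ServiceGateway:
    """Entry point every API call passes through (hypothetical)."""

    def __init__(self, registry: dict[str, Customer]):
        # Live view of each customer's review status.
        self.registry = registry

    def handle_request(self, customer_id: str, workload: str) -> str:
        customer = self.registry.get(customer_id)
        if customer is None or customer.review_status is not ReviewStatus.CLEARED:
            # Revocation takes effect on the very next request.
            return f"DENIED: {workload}"
        return f"OK: running {workload}"


# Usage: the provider flips one field and access stops immediately.
registry = {"defense-client": Customer("defense-client", ReviewStatus.CLEARED)}
gateway = ServiceGateway(registry)

print(gateway.handle_request("defense-client", "image-analysis"))  # OK
registry["defense-client"].review_status = ReviewStatus.REVOKED
print(gateway.handle_request("defense-client", "image-analysis"))  # DENIED
```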
So… why hasn’t it?
Why hasn’t Microsoft taken a more active role in reviewing, restricting, or even ceasing these services during this ongoing war?
Shareholders Are Asking the Same Thing
Microsoft’s own shareholders are beginning to speak up. Earlier this month, a group of 60+ shareholders, representing over $80M in Microsoft shares, filed a resolution calling on the company to conduct a human rights due diligence assessment of its contracts with the Israeli military. [source]
This isn’t just about ethics. It’s about long-term business risk, brand trust, and complicity in international law violations.
Microsoft, notably, has previously committed to “responsible AI” principles. One of its core tenets? That AI should be “used to serve humanity and designed to respect human rights.”
But how do those words match up with its actions? How do Microsoft’s principles square with its willingness to execute contracts with military parties responsible for widespread civilian harm?
Tech Companies Can’t Stay Neutral Anymore
Microsoft is not alone. Amazon, Google, Palantir - many of the biggest tech firms are now deeply embedded in military and security infrastructures around the world. But the scale and visibility of the Israel-Gaza war have forced a broader reckoning.
The era of tech neutrality is over. You can’t develop the backbone for military-grade AI systems and then claim you have no responsibility for how those systems are used.
It’s not just about revoking contracts or cutting off access. It’s about designing governance frameworks that prevent these harms in the first place. It’s about embedding human rights reviews into procurement decisions. It’s about choosing not to sell the hammer - or to take it back - when you know how it’s going to be used.
So I’ll ask again:
Why hasn’t Microsoft done more?