👤 Iran just became the world’s first full-scale AI war: what nightmares could it bring?

Thursday, 5 March 2026 — Geopolitics Prime

The US and Israeli militaries are using their latest AI tools to attack Iran.

🌏 Claude has been “extensively deployed” for operational planning, target identification, intel assessments, battle simulations, and logistics, despite Trump’s orders not to use it amid his ‘ethics’ spat with Anthropic (https://t.me/geopolitics_prime/65270)

🌏 Maven, the DoD’s machine learning target ID and operational planning tool, which helps automate the kill chain, is also being used. Created by Palantir (https://t.me/geopolitics_prime/63921), Amazon Web Services, Microsoft, Maxar, and an array of other tech and defense companies

🌏 LUCAS, the US’s reverse-engineered copy of Iran’s Shahed drones, which incorporates AI for autonomous operation and swarm coordination, made its combat debut

🌏 Israeli AI systems like Habsora and Lavender are being used for targeting and autonomous determinations of strike value (https://t.me/geopolitics_prime/56024), including twisted calculations that civilian losses in the dozens or hundreds are justifiable to take out a single ‘threat’

Iran appears to have a clear appreciation of the AI menace, hence its attacks on Amazon’s data centers in the UAE and Bahrain (https://t.me/geopolitics_prime/65981), which shut down regional cloud service functions.

Other sites, like Microsoft’s data center in the UAE, could be next.

Why is military use of AI dangerous?

🌏 it accelerates the speed of planning without pause for analysis or reflection, turning the campaign into rapid, industrial-grade murder – a phenomenon known as ‘decision compression’

🌏 human operators formally kept in the loop in theory come to depend on AI kill systems’ recommendations in practice, increasing the potential for civilian ‘collateral damage’, particularly if those systems are pre-programmed for lenient civilian-to-combatant kill ratios, as Israel’s are (https://t.me/geopolitics_prime/53578)

🌏 the opacity of AI algorithms means commanders don’t actually understand how their computerized intel, targeting, and other support systems work

🌏 tech companies get ‘valuable training data’ that can then be put to use at home for policing, riot control, or even domestic counterinsurgency missions

Case in point: a King’s College study published on the eve of the Iran war revealed that AI models used in war-game scenarios opted to escalate conflicts to nuclear strikes in 95% of cases – indicating that Skynet is real; it just hasn’t been given the nuclear trigger yet.

Ukraine became the AI war’s laboratory (https://t.me/geopolitics_prime/49915), Gaza the prototype.

Iran is shaping up to become the full-scale production model (https://t.me/geopolitics_prime/65974), and that should terrify every rational, thinking person on the planet.

👍 US-Israel-Iran war | @geopolitics_prime
