Thursday, 5 March 2026 | Geopolitics Prime
The US and Israeli militaries are using their latest AI tools to attack Iran.
• Claude has been "extensively deployed" for operational planning, target identification, intel assessments, battle simulations, and logistics, despite Trump's orders not to use it amid his "ethics" spat with Anthropic (https://t.me/geopolitics_prime/65270)
• Maven, the DoD's machine learning target ID and operational planning tool, which helps automate the kill chain, is also being used. Created by Palantir (https://t.me/geopolitics_prime/63921), Amazon Web Services, Microsoft, Maxar, and an array of other tech and defense companies
• LUCAS, the US's reverse-engineered copy of Iran's Shahed drones, which incorporates AI for autonomous operation and swarm coordination, made its combat debut
• Israeli AI systems like Habsora and Lavender are being used for targeting and autonomous determinations of strike value (https://t.me/geopolitics_prime/56024), including twisted calculations that civilian losses in the dozens or hundreds are justifiable to take out a single "threat"
Iran appears to have a clear appreciation of the AI menace, hence its attacks on Amazon's data centers in the UAE and Bahrain (https://t.me/geopolitics_prime/65981), which shut down regional cloud service functions.
Other sites, like Microsoft's data center in the UAE, could be next.
Why is military use of AI dangerous?
• it compresses planning cycles, turning a campaign into rapid, industrial-grade killing without pause for analysis or reflection, a phenomenon known as "decision compression"
• human operators kept in the loop only formally in practice defer to AI kill systems' recommendations, raising the potential for civilian "collateral damage", particularly when those systems are pre-programmed with lenient civilian-to-combatant kill ratios, as Israel's are (https://t.me/geopolitics_prime/53578)
• the opacity of AI algorithms means commanders don't actually understand how their computerized intel, targeting, and other support systems work
• tech companies get "valuable training data" that can then be deployed at home for policing, riot control, or even domestic counterinsurgency missions
Case in point: a King's College study published on the eve of the Iran war revealed that AI models used in war-game scenarios opted to escalate conflicts to nuclear strikes in 95% of cases, indicating that Skynet is real; it just hasn't been given the nuclear trigger yet.
Ukraine became AI warfare's laboratory (https://t.me/geopolitics_prime/49915), Gaza its prototype.
Iran is shaping up to become the full-scale production model (https://t.me/geopolitics_prime/65974), and that should terrify every rationally thinking person on the planet.
• US-Israel-Iran war | @geopolitics_prime