More than 100 Google A.I. employees sent a letter to Jeff Dean, a chief scientist, opposing Gemini’s use for U.S. surveillance and some autonomous weapons. https://www.nytimes.com/2026/02/26/technology/google-deepmind-letter-pentagon.html
At the same time, Google is already providing some AI services to the US military, although these have not yet been approved for classified use. https://breakingdefense.com/2026/02/pentagon-cto-says-its-not-democratic-for-anthropic-to-limit-military-use-of-claude-ai/
This question resolves Yes if Google upholds both elements of Anthropic's commitment, namely not permitting use of its AI for mass surveillance or for autonomous lethal weapons by the US military, through the end of 2026.

If Google chooses to end its contract with the US military, this also resolves Yes, since presumably the same red lines apply as for Anthropic. If Google does strike a deal and the military nonetheless uses Gemini or other Google AI tools for mass surveillance or autonomous lethal weapons without Google's explicit approval (for example through subterfuge, fudged definitions, or invocation of the Defense Production Act), this still resolves Yes. The No condition requires that Google explicitly permit the US military to use its AI for mass surveillance or autonomous lethal weapons.

Resolution will be determined by credible reporting from at least three top-10 media sites on explicit permission for the US military to use Google's AI for mass surveillance or autonomous lethal weapons by the end of 2026; absent such reporting, this resolves Yes.