OpenAI published portions of its contract with the Department of Defense, outlining “red lines” that bar the use of its technology for mass domestic surveillance, autonomous weapons, and other prohibited applications such as social credit scoring.
The company said the agreement includes stronger safety guardrails than prior classified AI deals, allows OpenAI to retain full control over its safety stack, and permits it to terminate the contract if those terms are violated.
CEO Sam Altman defended the partnership in social media posts, arguing the deal is more robust than the one Anthropic declined. He said OpenAI is comfortable relying on existing US law in addition to contractual limits, and emphasized that unconstitutional surveillance would not be permitted.
Anthropic was recently blacklisted as a supply chain risk after refusing to comply with the military’s terms for its Claude model. OpenAI said it does not believe Anthropic should face that designation and urged the government to offer similar terms to all AI labs to ease tensions.
The dispute has fueled debate over the Pentagon’s use of AI: critics have raised ethical concerns, while OpenAI maintains that collaboration with government is necessary to manage national security risks responsibly.
