OpenAI updates Department of War deal after backlash

OpenAI has officially amended its contract with the U.S. Department of War following a period of intense public scrutiny and a significant drop in its civilian user base. CEO Sam Altman characterized the initial rollout of the military partnership as "opportunistic and sloppy," leading the San Francisco-based artificial intelligence firm to introduce new language intended to restrict the use of its technology for domestic surveillance. Despite these revisions, privacy advocates and industry analysts suggest the updated agreement leaves substantial legal loopholes that could still permit controversial applications of AI in defense and intelligence operations.

The controversy began late last week when OpenAI announced it had secured a major contract with the Department of War (DOW), a move that occurred shortly after the federal government severed ties with competitor Anthropic. The shift in vendors followed an executive order from the White House directing federal agencies to cease the use of Anthropic’s models. According to industry reports, Anthropic had refused to comply with DOW demands to remove safety protocols that prohibited the use of its AI for mass surveillance and the development of fully autonomous lethal weapons.

Internal criticism and an admission of a rushed rollout

In an internal memo later shared on social media, Sam Altman acknowledged that the timing and communication of the DOW deal had been handled poorly. He noted that the company had attempted to "de-escalate" tensions surrounding the integration of AI into military infrastructure but admitted that the optics of the deal appeared rushed. "The issues are super complex, and demand clear communication," Altman stated, adding that the company was now working to clarify its boundaries regarding national security applications.

The backlash from the general public was nearly instantaneous, with social media platforms seeing a surge in organized movements to boycott OpenAI products. Data indicates that uninstalls of the ChatGPT mobile application surged by approximately 295 percent in the days following the initial announcement. This mass exodus of users has directly benefited competitors; Anthropic’s AI assistant, Claude, recently overtook ChatGPT as the most downloaded free application in the U.S. Apple App Store, signaling a potential shift in market dominance driven by ethical concerns.

Technical revisions and the "intentionality" loophole

To address the growing criticism, OpenAI and the Department of War updated the language of their agreement to include specific prohibitions against domestic spying. The new sections of the contract state that OpenAI’s systems shall not be "intentionally used for domestic surveillance of U.S. persons and nationals." The amendment further clarifies that this limitation is intended to prohibit the "deliberate tracking, surveillance, or monitoring" of American citizens, including through the use of commercially acquired personal data.

However, legal experts and privacy researchers have raised alarms over the specific terminology used in the revision. By qualifying the prohibition with words like "intentionally" and "deliberate," the contract may allow for "incidental" collection of data on U.S. citizens. In the context of intelligence gathering, incidental collection occurs when the communications or data of Americans are swept up during the lawful targeting of foreign entities. Critics argue that this provides a "legal shield" for the government to maintain large-scale data dragnets while claiming they are not "intentionally" targeting domestic populations.

Security vs. Ethics: The debate over autonomous weaponry

One of the most significant omissions in the updated agreement is a clear stance on the development and deployment of fully autonomous weapons systems. While the revised contract addresses domestic surveillance, it remains largely silent on the use of AI to control lethal machinery or make targeting decisions on the battlefield. The Department of War has previously expressed a desire to utilize AI tools for "any lawful use," a broad mandate that potentially includes the integration of Large Language Models (LLMs) into kinetic combat operations.

This lack of clarity has drawn criticism from the scientific community and AI safety advocates. Unlike the previous arrangement with Anthropic, which specifically barred the use of AI for autonomous violence, OpenAI’s current framework appears to defer to the legality of such actions rather than establishing its own ethical red lines. This position has sparked a debate within the tech industry about the responsibility of AI developers to prevent the militarization of their software, regardless of government directives.

Political pressure and the shift in federal AI procurement

The rapid transition from Anthropic to OpenAI highlights a shifting landscape in federal AI procurement under the current administration. President Donald Trump’s recent orders have emphasized the need for "unrestricted" technological advantages in the defense sector, leading to friction with companies that prioritize safety "guardrails" over military utility. Anthropic CEO Dario Amodei confirmed that his company’s refusal to abandon its core safety principles led to its exclusion from the DOW’s latest initiatives.

OpenAI’s willingness to step into the void left by Anthropic has been viewed by some as a strategic move to secure a dominant position in the lucrative federal market. However, this strategy carries significant reputational risks. The company, which originally launched as a non-profit dedicated to ensuring AI benefits "all of humanity," now finds itself at the center of a military-industrial complex debate. The tension between its commercial ambitions and its founding mission has led to reported internal friction among staff members concerned about the long-term implications of weaponized AI.

Public reaction and the rise of ethical alternatives

The fallout from the OpenAI-DOW deal has resonated beyond the halls of government and the boardrooms of Silicon Valley. On platforms like Reddit, thousands of users have shared screenshots of their canceled ChatGPT Plus subscriptions, citing the company’s military ties as the primary reason for their departure. The hashtag #GoodbyeChatGPT trended briefly as users migrated to alternative platforms that they perceive as having more robust ethical stances.

This consumer-led movement has highlighted a growing awareness of data privacy and the ethical use of artificial intelligence. Analysts suggest that for many users, the prospect of their data being utilized—directly or indirectly—to train models used by the Department of War is a "bridge too far." The surge in Claude downloads suggests that users are increasingly making software choices based on the perceived values of the parent company, rather than just the capabilities of the technology itself.

Surveillance fears and the shadow of the NSA

Adding to the public’s apprehension is the historical context of mass surveillance in the United States. Sam Altman’s memo mentioned that intelligence agencies under the DOW umbrella, such as the National Security Agency (NSA), would not use OpenAI technology without further contract amendments. However, the history of the NSA—specifically the 2013 revelations by whistleblower Edward Snowden—has left many skeptical of such assurances.

The Snowden leaks revealed that the NSA had previously engaged in widespread, unauthorized surveillance of American citizens’ digital communications. Given this history, the "intentionality" clauses in the new OpenAI contract are being viewed through a lens of deep distrust. Privacy advocates argue that without independent oversight and transparent reporting, there is no way for the public to verify that AI systems are not being used to resurrect or expand previous surveillance programs.

The doctrine of "Democratic Processes" in AI governance

In defense of the company’s actions, Sam Altman has argued that private tech firms should not be the ultimate arbiters of how society uses powerful new technologies. Instead, he has advocated for a model of "deference to democratic processes," suggesting that the government should make the key decisions regarding the deployment of AI in national security. Altman stated that OpenAI wants a "seat at the table" to share expertise but ultimately believes the elected government must set the rules.

This philosophical stance has been met with mixed reviews. Supporters argue it is a pragmatic approach that respects the authority of the state. Critics, however, view it as an "abdication of responsibility," arguing that companies creating potentially world-altering technology have a moral obligation to ensure it is not used for harm, regardless of whether a government deems that harm "lawful." Altman's claim that he would "rather go to jail" than follow an unconstitutional order has done little to quiet critics, who point out that the definition of "unconstitutional" is often decided in courts years after the damage has been done.

Future implications for the AI industry and defense contracts

As OpenAI revises its Department of War deal in response to the backlash, the entire AI industry is watching closely. The outcome of this controversy will likely set a precedent for how other major players, such as Google, Meta, and Microsoft, navigate the intersection of commercial AI and military applications. If OpenAI successfully weathers the storm and retains its federal contracts, it may signal a new era where "any lawful use" becomes the standard operating procedure for AI firms.

Conversely, if the public boycott continues to impact OpenAI’s bottom line, it could force a more significant retreat from military involvement. For now, the company remains committed to its partnership with the Department of War, even as it continues to tweak the language of its agreements to satisfy a wary public. The balance between national security interests and the privacy rights of the individual remains one of the most contentious issues in the modern digital age, with AI now serving as the new frontline of that conflict.
