Anthropic-Pentagon Dispute Reveals Limits of AI Self-Regulation, Expert Says

AI policy expert Hamid Ekbia examines why the Anthropic-Pentagon dispute was inevitable and what it reveals about the limits of industry self-regulation.
Christopher Munoz March 13, 2026

Can an AI company take government money and still set limits on how its technology is used? That question is at the center of an ongoing dispute between the Pentagon and Anthropic, and professor Hamid Ekbia says it exposes fundamental tensions in how the AI industry operates.

Ekbia, founding director of the Academic Alliance for AI Policy, says the Pentagon's demand that Anthropic either change its approach or forgo its lucrative contract is a vivid example of current federal policy. "With the bulk of public AI funding in the U.S. still coming from defense, companies either have to budge or shut themselves out from this unique source of money," Ekbia says.

While Anthropic has adjusted some safety policies, it has so far declined to allow its technology to be used for domestic surveillance or autonomous drones, a distinction Ekbia says matters.

“That is cause for celebration for any observer concerned about such applications,” he says. “But the question going forward is whether this will continue to be the case.”

Political and Economic Forces

Ekbia says the pressure on Anthropic reflects a broader shift in the federal government’s approach to AI regulation.

“The anti-regulatory policies of the Trump administration don’t leave much room for safety-oriented approaches to AI,” he says, adding that those policies push companies and oversight bodies toward “aggressive and often reckless behaviors in the name of innovation.”

Market competition makes the pressure worse. “The AI ecosystem is defined by furious competition among a few big players in a race to grab the lion’s share of the spoils in a rapidly growing industry,” Ekbia says. “The ‘moral economy’ of the AI industry is one of the jungle, where only the most reckless, ruthless, and aggressive behaviors are expected to be rewarded.”

Employees as a Wild Card

One factor that could shape the outcome is pressure from within Anthropic itself. Ekbia says employee resistance has played a meaningful role so far, with workers vocal during negotiations and leadership appearing to take that seriously.

But he cautions that employee influence is not guaranteed to last. "How critical will employees be in the future of the company given the current wave of white-collar under-employment, and how assertive will they be in expressing their resistance?" he says.

He outlines several other variables that will determine how the situation unfolds: whether competing AI companies are willing to fill the gap for the Pentagon, how hard the Trump administration continues to push for broad access to AI technology, and how well Anthropic can sustain itself financially without defense funding.

“The speed of change in these areas makes it hard to make solid predictions,” Ekbia says.

The Limits of Self-Regulation

Ekbia says the dispute ultimately tests a premise that Anthropic has staked its reputation on: that a company can be both commercially successful and a responsible steward of powerful technology.

“In the absence of federal policy, Anthropic aspired to play that role in the industry,” he says. “What is happening shows the limited efficacy of that aspiration. Society cannot rely on the industry to self-police itself, despite even the best intentions.”

He connects that failure to a broader culture in Silicon Valley, where prominent figures publicly embrace "effective altruism," the idea that profit and doing good can coexist.

“The case of Anthropic shows how much of an illusion this is,” Ekbia says. “As the old saying goes, you cannot have your cake and eat it too.”
