The Anthropic–OpenAI feud and their Pentagon dispute expose a deeper problem with AI safety | Fortune

Admin
Last updated: March 5, 2026 7:56 pm

Contents
  • FORTUNE ON AI
  • AI IN THE NEWS
  • EYE ON AI NUMBERS
  • AI CALENDAR

Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Trump has an AI data center problem ahead of the midterms…Don't trust AI to file your taxes…Anthropic's AI tool Claude is central to the US campaign in Iran, amid a bitter feud.

The debate around AI safety often focuses on the technology itself: how powerful models might become, or what risks they might pose. But the battle this week involving Anthropic, OpenAI, and the Pentagon points to a deeper problem: how much power over the future of AI is concentrated in the hands of a small number of corporate leaders and government officials deciding how these systems are built, deployed, and used.

For years, critics of the industry have warned about the risk of "industrial capture," a future in which the development of powerful AI systems is concentrated among a handful of companies working closely with governments, leaving the safety of those systems dependent on the incentives and rivalries of the people running them. In 2023, for example, researcher Yoshua Bengio said the possibility of the AI sector being controlled by a few companies was the "number two problem" behind the existential risks posed by the technology.

So it's not particularly reassuring to read yesterday about the disdain Anthropic CEO Dario Amodei expressed toward OpenAI CEO Sam Altman in a leaked memo Amodei wrote to employees on Friday. Amodei's angry missive, which was apparently sent over Anthropic's Slack to all its employees, came after OpenAI announced a deal to provide AI to the Pentagon and Secretary of War Pete Hegseth said he was declaring Anthropic a "supply chain risk" for failing to come to a similar deal.

Amodei called OpenAI's messaging "mendacious," "safety theater," and "an example of who they really are," while describing a number of Altman's comments as "straight up lies" and "gaslighting."

Altman has taken his own public shots at Anthropic. He recently called one of the company's Super Bowl campaigns "clearly dishonest" and accused it of "doublespeak." And the rivalry has become visible in more symbolic ways as well: at a recent summit, Altman and Amodei went viral for refusing to hold hands for a group photo with Prime Minister Narendra Modi.

With the US government taking little action to regulate AI, and international efforts on AI safety largely stalled, the world has effectively been relying on self-regulation by the industry. Both OpenAI and Anthropic have publicly supported that paradigm and signed voluntary safety commitments. They have also collaborated at times to run independent safety evaluations of each other's models before those models were released.

But when the leaders of the two most influential AI labs so clearly can't seem to get along, and the competition between them is so fierce, it raises an uncomfortable question: how much cooperation on safety can we realistically expect?

The pressure of competition has already affected both companies when it comes to AI safety. Anthropic recently revised its Responsible Scaling Policy to say it would not unilaterally hold back from developing a new model simply because it did not yet know how to make that model safe. And OpenAI has made its own adjustments, removing explicit bans on military and warfare uses from its policies in 2024, and shifting its focus from safety research to product development to the point that former superalignment lead Jan Leike (who left for Anthropic in mid-2024) wrote on X that at OpenAI "safety culture and processes have taken a backseat to shiny products."

The current safety approach assumes that companies and governments will ultimately act with restraint. But the future of AI safety may ultimately depend on how a small number of powerful players navigate the pressures of competition, geopolitics, and the occasional Silicon Valley soap opera.

FORTUNE ON AI

Why Leopold Aschenbrenner's AI hedge fund is betting big on power companies and bitcoin miners to fuel the 'superintelligence' race – by Sharon Goldman

OpenAI sees Codex users spike to 1.6 million, positions coding tool as gateway to AI agents for business – by Jeremy Kahn

Korean startup wrtn is on track to pass $100M in annual recurring revenue, riding a loneliness epidemic-fueled boom in AI entertainment – by Nicolas Gordon

AI IN THE NEWS

Trump has an AI data center problem ahead of the midterms. CNBC and others reported that President Trump is facing a growing political dilemma as the U.S. races to build energy-hungry AI data centers ahead of the 2026 midterms. The infrastructure needed to power the AI boom is driving concerns about rising electricity prices and strain on the grid, prompting backlash from voters and local communities. In response, major tech companies, including OpenAI, Microsoft, Google, Amazon, Meta, and Oracle, have pledged to cover the energy and infrastructure costs associated with their AI data centers so that consumers don't see higher utility bills. The voluntary agreement, promoted by the White House as a way to ease voter concerns, reflects a broader tension: policymakers want the economic and geopolitical advantages of rapid AI development, but the enormous electricity demands of the technology are creating political and environmental pressures that are becoming harder to ignore.

Don't trust AI to file your taxes. In results that should surprise no one, a test by The New York Times found that AI is no match for the US tax code, highlighting an important limitation of today's AI chatbots: they still struggle with tasks that require precise, multi-step reasoning. To assess the technology's ability to file a federal income tax return, the paper tested four AI chatbots (Google's Gemini, OpenAI's ChatGPT, Anthropic's Claude, and xAI's Grok) to see how well they fared with eight fictional tax situations. They struggled, hard, miscalculating the refund or amount owed to the Internal Revenue Service by an average of more than $2,000. Even when provided with all the necessary materials, including all the forms they needed to fill out, the chatbots whiffed on some calculations. The problem reflects a fundamental limitation of large language models: they are designed to predict likely words rather than precisely follow complex, interconnected information, making them strong at writing and summarization but weaker at procedural tasks like tax filing. Experts say the systems may improve with additional reasoning tools and verification layers, but for now they work best as assistants rather than replacements, another reminder that even as AI reshapes industries from coding to medicine, some seemingly simpler tasks remain surprisingly difficult.

Anthropic's AI tool Claude is central to the US campaign in Iran, amid a bitter feud. A new report from The Washington Post highlights how quickly AI has moved from experimentation to the battlefield. According to the paper, the US military used an AI-enabled targeting system called Maven Smart System (built by Palantir and incorporating Anthropic's Claude model) to help identify and prioritize targets during recent U.S. operations in Iran, accelerating what once took weeks of military planning into near-real-time decision making. Yet the deployment comes amid a bitter dispute between Anthropic and the Pentagon over limits on how its technology can be used in warfare, including concerns about autonomous weapons and mass surveillance. The episode underscores both the growing strategic importance of frontier AI systems and the tension between government demand for rapid deployment and companies' attempts to set safety boundaries.

EYE ON AI NUMBERS

$25 billion

That's how much annualized revenue OpenAI was generating as of the end of last month, according to reporting by The Information, a 17% jump from the $21.4 billion annualized run rate it had at the end of the year, according to two people familiar with the figures.

OpenAI still brings in more revenue than its closest rival, Anthropic, though the gap is quickly narrowing. Anthropic's annualized revenue recently topped $19 billion, nearly triple what it was at the end of last year and up 36% in just the past two weeks.

OpenAI calculates annualized revenue by multiplying the previous four weeks of revenue by 12. One source said that if the company instead extrapolated from revenue spikes in the most recent week alone, its annualized run rate would be closer to $30 billion.
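The two annualization conventions described above can be sketched in a few lines. The function names and sample figures below are illustrative assumptions chosen only to mirror the magnitudes in the story; they are not the actual reported numbers.

```python
def annualized_run_rate_monthly(trailing_four_weeks: float) -> float:
    """Treat the trailing four weeks as one month and multiply by 12."""
    return trailing_four_weeks * 12


def annualized_run_rate_weekly(latest_week: float) -> float:
    """Extrapolate a full year (52 weeks) from the most recent week alone."""
    return latest_week * 52


# Hypothetical figures in billions of dollars (not reported data):
monthly_basis = annualized_run_rate_monthly(2.08)  # roughly $25B
weekly_basis = annualized_run_rate_weekly(0.58)    # roughly $30B
```

The gap between the two results shows why a week of spiking revenue inflates the weekly extrapolation well above the four-week convention.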

Anthropic's rapid growth has been fueled in part by strong demand for its coding-focused AI models, which have helped the company quickly narrow the revenue gap with OpenAI. As recently as 2025, OpenAI was generating roughly three times as much revenue as Anthropic.

AI CALENDAR

March 2-5: Mobile World Congress, Barcelona, Spain.

March 12-18: South by Southwest, Austin, Texas.

March 16-19: Nvidia GTC, San Jose, Calif.

April 6-9: HumanX, San Francisco. 
