OpenAI is negotiating with the U.S. government, Sam Altman tells staff | Fortune

Admin
Last updated: February 28, 2026 12:11 am

Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup's AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.

The meeting came at the end of a week in which a battle between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent termination of Anthropic's contracts with the Pentagon and with the federal government generally.

Altman said the government is willing to let OpenAI build its own "safety stack" (the layered system of technical, policy, and human controls that sits between a powerful AI model and real-world use) and that if the model refuses to do a task, the government would not force OpenAI to make it do that task.

OpenAI would retain control over how technical safeguards are implemented and which models are deployed where, and would limit deployment to cloud environments rather than "edge systems." (In a military context, edge systems are a category that could include aircraft and drones.) In what would be a major concession, Altman told employees that the government said it is willing to include OpenAI's named "red lines" in the contract, among them not using AI to power autonomous weapons, no domestic mass surveillance, and no critical decision-making.

OpenAI and the Department of War did not immediately respond to requests for comment.

Sasha Baker, head of national security policy at OpenAI, and Katrina Mulligan, who leads national security for OpenAI for Government, also spoke at the OpenAI all-hands, according to the source. One of those officials said the relationship between Anthropic and the government had broken down because Anthropic CEO and cofounder Dario Amodei had offended Department of War leadership, including by publishing blog posts that "the department got upset about."

Anthropic, a company founded by people who left OpenAI over safety concerns, had been the only large commercial AI maker whose models were approved for use at the Pentagon, in a deployment carried out through a partnership with Palantir. But Anthropic's management and the Pentagon had been locked for several days in a dispute over limitations that Anthropic wanted to place on the use of its technology. Those limitations are essentially the same ones that Altman said the Pentagon would abide by if it used OpenAI's technology.

Anthropic had refused Pentagon demands that it remove safeguards on its Claude model that prohibit uses such as domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for "all lawful purposes." The Pentagon, including Secretary of War Pete Hegseth, had warned Anthropic that it could lose a contract worth up to $200 million if it did not comply. Altman has previously said OpenAI shares Anthropic's "red lines" on limiting certain military uses of AI, underscoring that even as OpenAI negotiates with the U.S. government, it faces the same core tension now playing out publicly between Anthropic and the Pentagon.

The OpenAI all-hands came just after President Trump announced that the federal government will stop working with Anthropic, in a dramatic escalation of the government's conflict with the company over its AI models.

"I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it and will not do business with them again!" Trump said in a post on Truth Social. The Department of War and other agencies using Anthropic's Claude models will have a six-month phase-out period, he said.

At the OpenAI all-hands, employees were told that the most challenging aspect of the deal for leadership was concern about foreign surveillance, and that there was a major worry about AI-driven surveillance threatening democracy, according to the source. Still, company leaders also appeared to acknowledge the reality that governments will spy on adversaries internationally, accepting claims that national-security officials "can't do their jobs" without international surveillance capabilities. References were made to threat intelligence reports showing that China was already using AI models to target dissidents abroad.

© 2025 Asolica News Network. All Rights Reserved.