OpenAI disputes watchdog allegation it violated California’s new AI regulation with GPT-5.3-Codex launch | Fortune

By Admin
Last updated: February 11, 2026 1:08 am

OpenAI may have violated California's new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.

A violation could expose the company to millions of dollars in fines, and the case could become a precedent-setting first test of the new law's provisions.

An OpenAI spokesperson disputed the watchdog's position, telling Fortune the company was "confident in our compliance with frontier safety laws, including SB 53."

The controversy centers on GPT-5.3-Codex, OpenAI's newest coding model, which was released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier model versions from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns.

CEO Sam Altman said the model was the first to hit the "high" risk category for cybersecurity under the company's Preparedness Framework, an internal risk classification system OpenAI uses for model releases. This means OpenAI is essentially classifying the model as capable enough at coding to potentially facilitate significant cyber harm, especially if automated or used at scale.

The AI watchdog group the Midas Project claims that, with the launch of the new high-risk model, OpenAI failed to stick to its own safety commitments, which are now legally binding under California law.

California's SB 53, which went into effect in January, requires major AI companies to publish and follow their own safety frameworks, detailing how they will prevent catastrophic risks (defined as incidents causing more than 50 deaths or $1 billion in property damage) from their models. It also prohibits these companies from making misleading statements about compliance.

OpenAI's safety framework requires specific safeguards for models with high cybersecurity risk that are designed to prevent the AI from going rogue and doing things like acting deceptively, sabotaging safety research, or hiding its true capabilities. However, the Midas Project said that despite triggering the "high risk" cybersecurity threshold, OpenAI did not appear to have implemented the specific misalignment safeguards before deployment.

OpenAI says the Midas Project's interpretation of the wording in its Preparedness Framework is wrong, although it also said the wording in the framework is "ambiguous" and that it sought to clarify the framework's intent with a statement in the safety report the company released alongside GPT-5.3-Codex. In that safety report, OpenAI said additional safeguards are only needed when high cyber risk occurs "in conjunction with" long-range autonomy, meaning the ability to operate independently over extended periods. Since the company believes GPT-5.3-Codex lacks this autonomy, it says the safeguards were not required.

"GPT-5.3-Codex completed our full testing and governance process, as detailed in the publicly released system card, and did not demonstrate long-range autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group," the spokesperson said. The company has also said, however, that it lacks a definitive way to assess a model's long-range autonomy, and so it relies on tests it believes can act as proxies for this metric while it works to develop better evaluation methods.

However, some safety researchers have disputed OpenAI's interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, said in a post on X: "Rather than admit they didn't follow their plan or update it before the release, it looks like OpenAI is saying that the criteria was ambiguous. From reading the relevant docs … it doesn't look ambiguous to me."

The Midas Project also claims that OpenAI cannot definitively prove the model lacks the autonomy that would require the extra measures, as the company's previous, less advanced model already topped global benchmarks for autonomous task completion. The group argues that even if the rules were unclear, OpenAI should have clarified them before releasing the model.

Tyler Johnston, founder of the Midas Project, called the potential violation "especially embarrassing given how low the floor SB 53 sets is: basically just adopt a voluntary safety plan of your choice and communicate honestly about it, changing it as needed, but not violating or lying about it."

If an investigation is opened and the allegations prove accurate, SB 53 allows for substantial penalties, potentially running into millions of dollars depending on the severity and duration of the noncompliance. A representative for the California Attorney General's Office told Fortune the department was "committed to enforcing the laws of our state, including those enacted to increase transparency and safety in the emerging AI space." However, they said the department was unable to comment on, or even to confirm or deny, potential or ongoing investigations.

Updated, Feb. 10: This story has been updated to move OpenAI's statement that it believes it is in compliance with the California AI law higher in the story. The headline has also been changed to make clear that OpenAI is disputing the allegations from the watchdog group. In addition, the story has been updated to clarify that OpenAI's statement in the GPT-5.3-Codex safety report was meant to clarify what the company says was ambiguous language in its Preparedness Framework.

© 2025 Asolica News Network. All Rights Reserved.