OpenAI believes it has finally pulled ahead in one of the most closely watched races in artificial intelligence: AI-powered coding. Its latest model, GPT-5.3-Codex, represents a solid advance over rival systems, showing markedly higher performance on coding benchmarks and reported results than earlier generations of both OpenAI's and Anthropic's models, suggesting a long-sought edge in a category that could reshape how software is built.
But the company is rolling out the model with unusually tight controls and delaying full developer access as it confronts a harder reality: the same capabilities that make GPT-5.3-Codex so effective at writing, testing, and reasoning about code also raise serious cybersecurity concerns. In the race to build the most powerful coding model, OpenAI has run headlong into the risks of releasing it.
GPT-5.3-Codex is available to paid ChatGPT users, who can use the model for everyday software development tasks such as writing, debugging, and testing code through OpenAI's Codex tools and the ChatGPT interface. For now, however, the company is not opening unrestricted access for high-risk cybersecurity uses, and it is not immediately enabling full API access that would allow the model to be automated at scale. These more sensitive applications are being gated behind additional safeguards, including a new trusted-access program for vetted security professionals, reflecting OpenAI's view that the model has crossed a new cybersecurity risk threshold.
The company's blog post accompanying the model's launch on Thursday said that while it does not have "definitive evidence" the new model can fully automate cyberattacks, "we're taking a precautionary approach and deploying our most comprehensive cybersecurity safety stack to date. Our mitigations include safety training, automated monitoring, trusted access for advanced capabilities, and enforcement pipelines including threat intelligence."
OpenAI CEO Sam Altman posted on X about the concerns, saying that GPT-5.3-Codex is "our first model that hits 'high' for cybersecurity on our preparedness framework," an internal risk-classification system OpenAI uses for model releases. In other words, this is the first model OpenAI believes is good enough at coding and reasoning that it could meaningfully enable real-world cyber harm, especially if automated or used at scale.
This story was originally featured on Fortune.com
