Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
“Who elected you and Sam Altman?” Cooper asked.
“No one. Honestly, no one,” Amodei replied.
Anthropic has adopted a philosophy of being transparent about the limitations, and the risks, of AI as it continues to develop, he added. Ahead of the interview’s publication, the company said it thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”
Anthropic said last week it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation, and one that directly opposes super PACs backed by rival OpenAI’s investors.
“AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.
There are no federal regulations outlining any prohibitions on AI or governing the safety of the technology. While all 50 states have introduced AI-related legislation this year, and 38 have adopted or enacted transparency and safety measures, tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.
Earlier last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first AI-agent cyberattack would happen within the following 12 to 18 months, meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted schedule.
Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally posing an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.
The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps within the next decade.
Greater AI scrutiny and safeguards were at the foundation of Anthropic’s 2021 founding. Amodei was previously the vice president of research at Sam Altman’s OpenAI. He left the company over differences of opinion on AI safety concerns. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion. OpenAI is valued at an estimated $500 billion.)
“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this… And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”
Anthropic’s transparency efforts
As Anthropic continues to expand its data center investments, it has revealed some of its efforts to address the shortcomings and threats of AI. In a May 2025 safety report, Anthropic reported that some versions of its Opus model threatened blackmail, such as revealing that an engineer was having an affair, to avoid being shut down. The company also said the AI model complied with dangerous requests if given harmful prompts, such as how to plan a terrorist attack, an issue it said it has since fixed.
Last November, the company said in a blog post that its chatbot Claude scored a 94% political “even-handedness” rating, outperforming or matching rivals on neutrality.
In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.
“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”
Criticisms of Anthropic
Anthropic’s practice of calling out its own lapses, and its efforts to address them, has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cyberattack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.
“You’re being played by people who want regulatory capture,” LeCun said in an X post responding to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.”
Others have said Anthropic’s strategy is one of “safety theater” that amounts to good branding, but no promises about actually implementing safeguards on the technology.
Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Earlier last week, Anthropic AI safety researcher Mrinank Sharma announced he had resigned from the company, saying “the world is in peril.”
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Anthropic did not immediately respond to Fortune’s request for comment.
Amodei denied to Cooper that Anthropic was engaging in “safety theater,” but admitted in an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.
“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.
A version of this story was published on Fortune.com on Nov. 17, 2025.
