RunSybil, an AI cybersecurity startup that uses AI agents to automatically hack company software to find security weaknesses, has secured $40 million in venture capital funding.
The round was led by Khosla Ventures, with participation from S32, the Anthology Fund from Anthropic and Menlo Ventures, Conviction and Elad Gil, along with angel investors including Nikesh Arora, Amit Agarwal, Jeff Dean, and other founders and leaders from companies including OpenAI, Palo Alto Networks, Stripe and Google.
The company did not disclose the valuation it achieved in the new funding round.
The company's AI agent, Sybil, conducts continuous, autonomous penetration tests against live applications, finding, exploiting and documenting real security vulnerabilities without humans in the loop. That's different from other security tools currently making headlines, such as Claude Code Security, which analyzes an application's source code for known vulnerabilities before it's deployed.
RunSybil instead tests software that's already running, probing live systems the way a hacker would: exploring systems, chaining vulnerabilities together and testing authentication boundaries to find paths to sensitive data.
Automating ‘ethical hacking’
Companies have long relied on a mix of penetration tests, in which external security experts, or “ethical hackers,” try to break into their systems; bug bounty programs that reward independent hackers for reporting flaws; and internal “red teams” that simulate real cyberattacks. RunSybil says its AI system can automate much of that work, continuously probing applications for vulnerabilities as new code is deployed.
RunSybil argues this kind of automation is becoming necessary as AI reshapes how companies operate. Procurement, legal, finance, engineering and operations are all being rebuilt with AI, including the growing use of AI agents. Yet security testing is still typically treated as a discrete, scheduled event managed by a separate team on its own timeline. That mismatch can be especially challenging for highly regulated industries such as finance, insurance and health care, which face strict legal and audit requirements around cybersecurity.
RunSybil was co-founded in 2023 by Ari Herbert-Voss, who joined OpenAI as its first security research hire in 2019, and Vlad Ionescu, who previously led offensive security red teams at Meta. Together, they say, they represent a rare intersection: people who understand both how to build frontier AI systems and how to hack into complex software.
“We check every box that needs to be checked—for auditors, regulators and compliance teams,” Herbert-Voss said. But the real work, he said, is transforming where, when and how customers discover and fix security issues: “Not as a project, but as a permanent capability embedded in how they build.”
‘On the edge’ of the AI security frontier
Vinod Khosla, who made an early bet on OpenAI in 2019 and often invests in companies he considers to be on the technological frontier, told Fortune that “what it takes to add security and penetration testing to the AI world is definitely frontier—RunSybil is on the edge.” There is currently little competition in this part of the offensive security market, he said, though security incumbents such as Palo Alto Networks could eventually move into the space.
For now, “nobody’s really knowledgeable about it except individuals like [Herbert-Voss],” he said, adding that he has long been concerned about AI’s cyber capabilities falling into the hands of adversaries such as China. “We invest in founders who tackle large, unsolved problems with technically ambitious solutions,” he added. “[Herbert-Voss and Ionescu] are building exactly the kind of platform security teams will need as software complexity and AI-driven development accelerate.”
Herbert-Voss has long been steeped in both hacking and AI. Growing up in a largely Mormon community in Utah, he said he was drawn to the online hacker scene in middle school and high school but pivoted away after friends “started getting arrested.” While pursuing a Ph.D. at Harvard University, studying machine learning and ways to make algorithms more efficient, he first heard about OpenAI.
He dropped out of Harvard, he said, after becoming convinced that the rapid scaling of AI models (training larger systems with more data and computing power) would unlock powerful new capabilities.
Evolving cyber capabilities with LLMs
“Once OpenAI dropped GPT-2, I said wow, this changes everything about the economics of what it would take to run a cyber campaign,” he explained. He sent a few hacker demos to OpenAI CEO Sam Altman and Jack Clark, then head of policy at OpenAI, who went on to co-found Anthropic. Both expressed concerns about the potential misuse of LLMs and asked Herbert-Voss to come on board to do security research.
But by 2022, Herbert-Voss said, he also began to see how quickly offensive cyber capabilities could evolve once powerful language models became widely available, including to malicious actors. Those same advances, he said, could dramatically amplify cyber threats. That led to Herbert-Voss’s decision to leave OpenAI and start RunSybil as a research project.
RunSybil currently works with startups including Cursor, Turbopuffer, Notion, Baseten, and Thinking Machines Lab, as well as what the company says are major financial institutions and Fortune 500 companies. (The company declined to name any of those Fortune 500 or financial customers.) Herbert-Voss said that customers have already reported finding significant vulnerabilities that had gone undetected using traditional methods.
