Of all the unlikely tales to emerge from the current AI frenzy, few are more striking than that of Leopold Aschenbrenner.
The 23-year-old’s career didn’t exactly begin auspiciously: He spent time at the philanthropy arm of Sam Bankman-Fried’s now-bankrupt FTX cryptocurrency exchange before a controversial year at OpenAI, where he was ultimately fired. Then, just two months after being booted out of the most influential company in AI, he penned an AI manifesto that went viral (President Trump’s daughter Ivanka even praised it on social media) and used it as a launching pad for a hedge fund that now manages more than $1.5 billion. That’s modest by hedge-fund standards but remarkable for someone barely out of school. Just four years after graduating from Columbia, Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.
It’s an astonishing ascent, one that has many asking not just how this German-born early-career AI researcher pulled it off, but whether the hype surrounding him matches the reality. To some, Aschenbrenner is a rare genius who saw the moment more clearly than anyone else: the coming of humanlike artificial general intelligence, China’s accelerating AI race, and the vast fortunes awaiting those who move first. To others, including several former OpenAI colleagues, he’s a lucky novice with no finance track record, repackaging hype into a hedge fund pitch.
His meteoric rise captures how Silicon Valley converts zeitgeist into capital, and how that, in turn, can be parlayed into influence. While critics question whether launching a hedge fund was simply a way to turn dubious techno-prophecy into profit, friends like Anthropic researcher Sholto Douglas frame it differently, as a “theory of change.” Aschenbrenner is using the hedge fund to earn a credible voice in the financial ecosystem, Douglas explained: “He is saying, ‘I have an extremely high conviction [that this is] how the world is going to evolve, and I am literally putting my money where my mouth is.’”
But that also prompts the question: Why are so many willing to trust this newcomer?
The answer is complicated. In conversations with more than a dozen friends, former colleagues, and acquaintances of Aschenbrenner, as well as investors and Silicon Valley insiders, one theme keeps surfacing: Aschenbrenner has been able to capture ideas that had been gathering momentum across Silicon Valley’s labs and use them as ingredients for a coherent and convincing narrative, one that reads like a blue plate special to investors with a healthy appetite for risk.
Aschenbrenner declined to comment for this story. A number of sources were granted anonymity owing to concerns over the potential consequences of speaking about people who wield considerable power and influence in AI circles.
Many spoke of Aschenbrenner with a mix of admiration and wariness: “intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well packaged and well timed. Yet while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.
There is no doubt, however, that Aschenbrenner’s rise reflects a singular convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China.
Sketching the future
Within certain corners of the AI world, Leopold Aschenbrenner was already familiar as someone who had written blog posts, essays, and research papers that circulated among AI safety circles, even before joining OpenAI. But to most people, he appeared seemingly overnight in June 2024. That’s when he self-published online a 165-page monograph called Situational Awareness: The Decade Ahead. The long essay borrowed for its title a phrase already familiar in AI circles, where “situational awareness” usually refers to models becoming aware of their own circumstances, a safety risk. But Aschenbrenner used it to mean something else entirely: the need for governments and investors to recognize how quickly AGI could arrive, and what was at stake if the U.S. fell behind.
In a sense, Aschenbrenner intended his manifesto to be the AI era’s equivalent of George Kennan’s “Long Telegram,” in which the American diplomat and Russia expert sought to awaken elite opinion in the U.S. to what he saw as the looming Soviet threat to Europe. In the introduction, Aschenbrenner sketched a future he claimed was visible only to a few hundred prescient people, “most of them in San Francisco and the AI labs.” Not surprisingly, he included himself among those with “situational awareness,” while the rest of the world had “not the faintest glimmer of what is about to hit them.” To most, AI looked like hype or, at best, another internet-scale shift. What he insisted he could see more clearly was that LLMs were improving at an exponential rate, scaling rapidly toward AGI, and then beyond to “superintelligence,” with geopolitical consequences and, for those who moved early, the chance to capture the biggest economic windfall of the century.
To drive the point home, he invoked the example of COVID in early 2020, arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and those who act early stand to capture historic gains. And once again, he cast himself among the prescient few.
But the core of Situational Awareness’s argument wasn’t the COVID parallel. It was the claim that the math itself, the scaling curves suggesting AI capabilities increased exponentially with the amount of data and compute thrown at the same basic algorithms, showed where things were headed.
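To make the shape of that argument concrete: the scaling-law research the essay draws on typically fits a model’s loss (its prediction error) to a power law in parameter count N and training-data size D. A minimal sketch of the canonical form, taken from the published literature rather than from Situational Awareness itself:

L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here E is an irreducible floor, and the fitted exponents \alpha and \beta (both roughly 0.3 in DeepMind’s widely cited 2022 “Chinchilla” study) govern how quickly loss falls as models and datasets grow. The curves are smooth and have so far extrapolated well; Aschenbrenner’s wager is simply that they keep holding.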
Douglas, now a tech lead on scaling reinforcement learning at Anthropic, is both a friend and former roommate of Aschenbrenner’s who had conversations with him about the monograph. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.
An essay goes viral
Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a site founded by AI theorist and “doomer” extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas.
But Situational Awareness hit differently. Scott Aaronson, a computer science professor at UT Austin who spent two years at OpenAI overlapping with Aschenbrenner, remembered his initial reaction: “Oh man, another one.” But after reading it, he told Fortune: “I had the sense that this is actually the document some general or national security person is going to read and say: ‘This requires action.’” In a blog post, he called the essay “one of the most extraordinary documents I’ve ever read,” saying Aschenbrenner “makes a case that, even after ChatGPT and all that followed it, the world still hasn’t come close to ‘pricing in’ what’s about to hit it.”
A longtime AI governance expert described the essays as “a big achievement,” but emphasized that the ideas weren’t new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a wider audience at a fever-pitch moment in the AI conversation.
Among AI safety researchers, who worry primarily about the ways in which AI could pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “Some people who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.
Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”
That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the kind of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA analysis.
As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”
A sweeping narrative paired with an investment vehicle
At the same time as the essays were released, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI, with its bets placed in publicly traded companies rather than private startups.
The fund was seeded by Silicon Valley heavyweights like investor and current Meta AI product lead Nat Friedman (Aschenbrenner reportedly connected with him after Friedman read one of his blog posts in 2023), as well as Friedman’s investing partner Daniel Gross, and Patrick and John Collison, Stripe’s cofounders. Patrick Collison reportedly met Aschenbrenner at a 2021 dinner set up by a connection “to discuss their shared interests.” Aschenbrenner also brought on Carl Shulman, a 45-year-old AI forecaster and governance researcher with deep ties in the AI safety field and a past stint at Peter Thiel’s Clarium Capital, to be the new hedge fund’s director of research.
In a four-hour podcast with Dwarkesh Patel tied to the launch, Aschenbrenner touted the explosive growth he expects once AGI arrives, saying, “The decade after is also going to be wild,” in which “capital will really matter.” If done right, he said, “there’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x.”
Together, the manifesto and the fund reinforced each other: Here was a book-length investment thesis paired with a prognosticator with so much conviction that he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said Friedman is known for “zeitgeist hacking,” backing people who can capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.
Situational Awareness’s strategy is simple: It bets on global stocks likely to benefit from AI (semiconductors, infrastructure, and power companies), offset by shorts on industries that could lag behind. Public filings reveal part of the portfolio: A June SEC filing showed stakes in U.S. companies including Intel, Broadcom, Vistra, and former Bitcoin miner Core Scientific (which CoreWeave announced it would acquire in July), all seen as beneficiaries of the AI build-out. So far, it has paid off: The fund quickly swelled to over $1.5 billion in assets and delivered 47% gains, after fees, in the first half of this year.
According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions, and endowments. In addition, the spokesperson said, Aschenbrenner “has almost all of his net worth invested in the fund.”
To be sure, any picture of a U.S. hedge fund’s holdings is incomplete. The publicly available 13F filings cover only long positions in U.S.-listed stocks (shorts, derivatives, and international investments aren’t disclosed), adding an inevitable layer of mystery around what the fund is really betting on. Still, some observers have questioned whether Aschenbrenner’s early results reflect skill or fortunate timing. For example, his fund disclosed roughly $459 million in Intel call options in its first-quarter filing, positions that later looked prescient when Intel’s shares climbed over the summer following a federal investment and a subsequent $5 billion stake from Nvidia.
But at least some experienced financial industry professionals have come to view him differently. Veteran hedge fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and Shulman weren’t outsiders scanning opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who saw the subprime collapse before it hit, people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant.”
He pointed to Situational Awareness’s response to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s growing AI capabilities despite limited funding and export controls. While most investors panicked, he said, Aschenbrenner and Shulman had already been tracking it and saw the selloff as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility, though Duncan acknowledged, “He could yet be proven wrong.”
Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the more obvious choice.
“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I’m a bit surprised by how quickly they’ve come up the learning curve … They’re far more sophisticated on AI investing than anybody else I speak to in the public markets.”
A Columbia ‘whiz kid’ who went on to FTX and OpenAI
Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance expert, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate.
“I heard about him as, ‘Oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz kid sort of thing.”
That wunderkind reputation only deepened. At 17, Aschenbrenner won a grant from economist Tyler Cowen’s Emergent Ventures, and Cowen called him an “economics prodigy.” While still at Columbia, Aschenbrenner also interned at the Global Priorities Institute, coauthoring a paper with economist Philip Trammell, and contributed essays to Works in Progress, a Stripe-funded publication that gave him another foothold in the tech-intellectual world.
He was already embedded in the Effective Altruism community, a controversial philosophy-driven movement influential in AI safety circles, and cofounded Columbia’s EA chapter. That network eventually led him to a job at the FTX Future Fund, a charity founded by cryptocurrency exchange founder Sam Bankman-Fried. Bankman-Fried was another EA adherent who donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.
The FTX Future Fund was designed to support EA-aligned philanthropic priorities, although it was later found to have used money from Bankman-Fried’s FTX cryptocurrency exchange that was essentially looted from account holders. (There is no evidence that anyone who worked at the FTX Future Fund knew the money was stolen or did anything illegal.)
At the FTX Future Fund, Aschenbrenner worked with a small team that included William MacAskill, a cofounder of Effective Altruism, and Avital Balwit, now chief of staff to Anthropic CEO Dario Amodei and, according to a Situational Awareness LP spokesperson, currently engaged to Aschenbrenner. Balwit wrote in a June 2024 essay that “these next five years might be the last few years that I work,” because AGI could “end employment as I know it,” a striking mirror image of Aschenbrenner’s conviction that the same technology will make his investors rich.
But when Bankman-Fried’s FTX empire collapsed in November 2022, the Future Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told Dwarkesh Patel. “That was incredibly tough.”
Just months after FTX collapsed, however, Aschenbrenner reemerged, this time at OpenAI. He joined the company’s newly launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Current methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate outputs, something that might not be possible if systems surpassed human comprehension.
Aaronson, the UT computer science professor, joined OpenAI before Aschenbrenner and said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’”
Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI staffer, who said they first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting that previewed themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Several researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then-Scale AI CEO Alexandr Wang how many GPUs OpenAI had, “just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people were surprised, they explained, at how casually Aschenbrenner shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.
“This account is entirely false,” a representative of Aschenbrenner told Fortune. “Leopold never discussed private information with Alex. Leopold often discusses AI scaling trends such as in Situational Awareness, based on public information and industry trends.”
In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “leak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback, something he said was “totally normal” at OpenAI at the time. He argued that an earlier memo in which he said OpenAI’s security was “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors” was the real reason for his dismissal.
Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, OpenAI’s “superalignment” team, where Aschenbrenner had worked and which was led by OpenAI cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike, dissolved after both leaders departed the company.
Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.
Returns vs. rhetoric
Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund run by a kid unless I felt there was really strong governance in place.”
Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the U.S.-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another.
One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there is a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized (Effective Altruists often view making money in order to give it away as virtuous), the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.
The deeper worry, said Aaronson, is that Aschenbrenner’s message, that the U.S. must accelerate the pace of AI development at all costs in order to beat China, has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks, and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China.
If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric: the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.
