AI “seems much worse for the math people than the word people,” Peter Thiel remarked in 2024. He evidently wasn’t anticipating that just two years later his Palantir cofounder, CEO Alex Karp, would use decidedly colorful language to describe people he thought were being foolish.
“If Silicon Valley believes we are going to take away everyone’s white-collar job … and you’re gonna screw the military—if you don’t think that’s gonna lead to nationalization of our technology, you’re retarded,” Karp said while speaking at the a16z American Dynamism Summit. “You might be particularly retarded, because you have a 160 IQ.”
Karp was commenting on a question that has taken the AI world by storm: In what capacity should AI companies collaborate with the government? A closer look explains why a dustup between the Pentagon and two entirely separate companies (Anthropic and OpenAI) has prompted Karp’s displeasure.
Katherine Boyle, general partner at a16z, moderated the breakout session, which was titled “AI in Defense of the West.”
It was there that Karp noted: “If Silicon Valley believes we are going to take away everyone’s white-collar job—meaning primarily Democratic-shaped people that you might grow up with, highly educated people who went to elite schools or went to schools that are almost elite for one party—and you’re going to screw the military. If you don’t think that’s going to lead to nationalization of our technology, you’re retarded.”
Whoa. So what’s bothering Mr. Karp?
Why this hits home for Palantir
While Karp could have chosen less offensive language to make his point, he was touching a raw nerve, one that is acutely personal for Palantir. “You cannot have technologies that simultaneously take away everyone’s job,” he said, and then be perceived as screwing the military. That tension isn’t abstract for Palantir. It could very well be a live operational crisis.
Companies including Anthropic, OpenAI, Google, and xAI have all signed contracts with the Department of Defense, each with restrictions on whether their technologies can be used in settings that might violate their terms of service. The DOD has been negotiating with AI companies to remove those restrictions and instead allow use of their tech for “all lawful purposes.” Karp has little patience for companies that treat that ask as a moral red line:
“There’s a difference between U.S. military and surveillance,” he said at the summit. “Despite what everyone thinks, Palantir is the anti-surveillance company,” he added, pushing back on claims that the company named after an all-seeing surveillance device from Lord of the Rings is fundamentally about surveillance. Every technical expert knows this to be the case, but the proverbial “person online” simply has the wrong idea, Karp argued, “so I end up in every conversation that I don’t want to be in.”
Anthropic CEO Dario Amodei famously said he couldn’t “in good conscience” support the “all lawful purposes” clause. Then, after hitting Anthropic with the threat of being deemed a military supply-chain risk, the government penned a deal with OpenAI to use its tools in classified missions. (Anthropic is reportedly in talks with the Pentagon yet again, with the Pentagon confirming that Anthropic’s Claude Opus was key to its preparations for the historic strike by the U.S. and Israeli militaries on Iran.)
For Palantir, that sequence of events isn’t an abstraction; it’s a direct operational threat. Palantir’s flagship AI Platform (AIP) relies on plugging best-in-class frontier models into its defense and intelligence workflows. Claude Opus is among the most capable of those models, prized for its reasoning depth and reliability in high-stakes environments. If Anthropic is blacklisted as a military supply-chain risk, or if its terms of service effectively bar it from the classified settings where Palantir operates, Palantir would lose access to one of its most powerful AI engines. It would be forced to retool its platform around other models mid-contract, a costly and reputationally damaging disruption for a company whose entire brand promise is mission-critical reliability.
“Again, there’s a lot of subtlety here behind the curtain,” Karp acknowledged. “I’ve been heavily involved in that subtlety—what can be deployed, where it can be deployed.”
The bigger economic picture
The stakes, Karp argued, go well beyond any single Pentagon contract or any single company’s policy decision. “The danger for our industry,” he warned, “is that you get a famous horseshoe effect where there’s only one thing people agree on—and that’s that this is not paying the bills, and people in our industry should be nationalized.”
That populist convergence, where left and right alike turn on tech, becomes inevitable, in Karp’s telling, if AI companies strip white-collar workers of their livelihoods while simultaneously refusing to serve the military. Again, he was pointed about who those workers are: “Primarily Democratic-shaped people that you might grow up with—highly educated people who went to elite schools, or went to schools that are almost elite, for one party.”
Those fears are already materializing at an economic scale that lends urgency to Karp’s argument. Experts warn of an imminent AI doomsday scenario in which white-collar workers’ days are numbered, a destabilizing force that could leave most workers jobless. These aren’t merely panic-inducing ideas; they carry real-world consequences, like a viral essay from Citrini Research that triggered mass market upheaval.
In Karp’s view, the government won’t allow AI companies to amass the power they already hold and still operate in a self-regulatory, nongovernmental oversight capacity, let alone dictate terms of use back to the government itself. “This is where that path is going,” he said simply. The only way for companies like Palantir to retain their position, their contracts, and their access to the frontier AI models that power their platforms is to play by the government’s rules when called upon. For Palantir, losing that seat at the table doesn’t just mean bad optics. It means losing the technological inputs that make its core product work.
It would be a dramatic reversal for a company that, just a month ago, delivered what Karp called “one of the truly iconic performances in the history of corporate performance or technology” in Palantir’s latest quarterly earnings.
