Hello and welcome to Eye on AI, with Sharon Goldman filling in for Jeremy Kahn. In this edition: What OpenAI’s OpenClaw hire really means…The Pentagon threatens to punish Anthropic…Why an AI video of Tom Cruise battling Brad Pitt spooked Hollywood…The anxiety driving AI’s brutal work culture.
That kind of autonomous behavior is exactly what made OpenClaw thrilling to developers, bringing them closer to their dream of a real J.A.R.V.I.S., the always-on assistant from the Iron Man movies. But it quickly set off alarms among security experts. Just last week, I described OpenClaw as the “bad boy” of AI agents, because an assistant that is persistent, autonomous, and deeply connected across systems is also far harder to secure.
Some say the OpenAI hire is the ‘best outcome’
That tension helps explain why some see OpenAI’s intervention as a necessary step. “I think it’s probably the best outcome for everyone,” said Gavriel Cohen, a software engineer who built NanoClaw, which he calls a “secure alternative” to OpenClaw. “Peter has great product sense, but the project got way too big, way too fast, without enough attention to architecture and security. OpenClaw is fundamentally insecure and flawed. They can’t just patch their way out of it.”
Others see the move as equally strategic for OpenAI. “It’s a great move on their part,” said William Falcon, CEO of developer-focused AI cloud company Lightning AI, who noted that Anthropic’s Claude products, including Claude Code, have dominated the developer segment. OpenAI, he explained, wants “to win all developers, that’s where the majority of spending in AI is.” OpenClaw, which is in many ways an open-source alternative to Claude Code and became a favorite of developers overnight, gives OpenAI a “get out of jail free card,” he said.
Altman, for his part, has framed the hire as a bet on what comes next. He said Steinberger brings “a lot of amazing ideas” about how AI agents might interact with one another, adding that “the future is going to be extremely multi-agent” and that such capabilities will “quickly become core to our product offerings.” OpenAI has said it plans to keep OpenClaw running as an independent, open-source project through a foundation rather than folding it into its own products, a pledge Steinberger has said was central to his decision to choose OpenAI over rivals like Anthropic and Meta. (In an interview with Lex Fridman, Steinberger said Mark Zuckerberg even reached out to him personally on WhatsApp.)
Next phase is winning developer trust for AI agents
Beyond the weekend buzz, OpenAI’s OpenClaw hire offers a window into how the AI agent race is evolving. As models become more interchangeable, the competition is shifting toward the less visible infrastructure that determines whether agents can run reliably, securely, and at scale. By bringing in the creator of a viral, if controversial, autonomous agent while pledging to keep the project open source, OpenAI is signaling that the next phase of AI won’t be defined solely by smarter models, but by winning the trust of developers tasked with turning experimental agents into dependable systems.
That could lead to a wave of new products, said Yohei Nakajima, a partner at Untapped Capital whose 2023 open-source experiment called BabyAGI helped demonstrate how LLMs could autonomously generate and execute tasks, which helped kick off the modern AI agent movement. Both BabyAGI and OpenClaw, he said, inspired developers to see what more they could build with the latest technologies. “Shortly after BabyAGI, we saw the first wave of agentic companies launch: gpt-engineer (became Lovable), Crew AI, Manus, Genspark,” he said. “I hope we’ll see similar new inspired products after this recent wave.”
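For readers curious what “autonomously generate and execute tasks” looks like in practice, here is a minimal sketch of a BabyAGI-style loop: the agent pops the top task off a queue, has the model execute it, then asks the model to propose follow-up tasks based on the result. This is an illustration of the pattern only, not Nakajima’s actual code, and the `llm()` helper is a hypothetical stand-in for any chat-completion API call.

```python
# Minimal sketch of a BabyAGI-style task loop (illustrative; not Nakajima's code).
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("wire this up to a model provider")

def run(objective: str, first_task: str, max_steps: int = 5) -> list[tuple[str, str]]:
    tasks = deque([first_task])
    results = []
    for _ in range(max_steps):
        if not tasks:
            break  # nothing left to do
        task = tasks.popleft()
        # Execute the current task in light of the overall objective.
        result = llm(f"Objective: {objective}\nTask: {task}\nComplete the task.")
        results.append((task, result))
        # Ask the model to propose follow-up tasks based on the result.
        follow_ups = llm(
            f"Objective: {objective}\nLast result: {result}\n"
            "List any new tasks needed, one per line (blank if done)."
        )
        tasks.extend(line.strip() for line in follow_ups.splitlines() if line.strip())
    return results

# Example (requires a real llm() implementation):
# run("research AI agent frameworks", "list popular open-source AI agents")
```

The core idea, and what made the demo so striking in 2023, is that the model both does the work and decides what to do next, with the human supplying only the objective.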
FORTUNE ON AI
AI investments surge in India as tech leaders convene for Delhi summit – by Beatrice Nolan
Big tech approaches ‘red flag’ moment: AI capex is so great hyperscalers could go cash-flow negative, Evercore warns – by Jim Edwards
Anthropic CEO Dario Amodei explains his spending caution, warning that if AI progress forecasts are off by just a year, ‘then you go bankrupt’ – by Jason Ma
AI IN THE NEWS
Pentagon threatens to punish Anthropic. The Pentagon is threatening to designate Anthropic a “supply chain risk,” a rare and punitive move that would effectively force any company doing business with the U.S. military to cut ties with the AI startup, according to Axios. Defense officials say they are frustrated with Anthropic’s refusal to fully relax safeguards on how its Claude model can be used, particularly limits intended to prevent mass surveillance of Americans or the development of fully autonomous weapons, arguing the military must be able to use AI for “all lawful purposes.” The standoff is especially fraught because Claude is currently the only AI model approved for use in the Pentagon’s classified systems and is deeply embedded in military workflows, meaning an abrupt break would be costly and disruptive. The dispute underscores a growing tension between AI labs that want to impose ethical boundaries and a U.S. military establishment increasingly willing to play hardball as it seeks broader control over powerful AI tools.
The anxiety driving AI’s brutal work culture. If you’ve ever worried about your own work-life balance, I think you’ll feel better after reading this piece. According to the Guardian, in San Francisco’s booming AI economy, the tech sector’s long-standing perks and flexible culture are being replaced by relentless “grind” expectations as startups push staff into long hours, little time off, and extreme productivity pressures in the name of keeping up with rapid advances and intense competition. Workers describe 12-hour days, six-day weeks, and environments where skipping weekends or a social life feels like the price of staying relevant, even as anxiety about job security and AI’s impact on future roles grows. The shift reflects a broader transformation in how AI labor is valued, one that is reshaping workplace norms and could foreshadow similar pressures in other sectors as automation and innovation accelerate. I’ll definitely have to check out how this looks on the ground the next time I head to the Bay.
EYE ON AI RESEARCH
DEF CON, the world’s largest and longest-running hacker conference, released its latest Hackers’ Almanack, an annual report distilling the research presented at the most recent edition in August 2025. The report focused on how researchers showed that AI systems are no longer just helping humans hack faster; they can sometimes outperform them. In several cybersecurity competitions, teams using AI agents beat human-only teams, and in one case an AI was allowed to run on its own, successfully breaking into a target system without further human input. Researchers also demonstrated AI tools that can find software flaws at scale, imitate human voices, and manipulate machine-learning systems, highlighting how quickly offensive uses of AI are advancing.
The problem, the researchers argue, is that most policymakers have little visibility into these capabilities, raising the risk of poorly informed AI rules. Their proposal: allow AI systems to openly compete in public hacking contests, record the results in a shared, open database, and use that real-world evidence to help governments develop smarter, more realistic AI security policies.
AI CALENDAR
Feb. 16-20: India AI Impact Summit 2026, Delhi.
Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.
BRAIN FOOD
The trust dilemma when AI enters the exam room. I was fascinated by this new article in Scientific American, which points out that as AI seeps deeper into clinical care, nurses are finding themselves on the front lines of a new trust dilemma: should they follow algorithm-generated orders when real-world judgment says otherwise? For example, a sepsis alert pushed an ER team to push fluids on a patient with compromised kidneys, until a nurse refused and a doctor overrode the AI. Across U.S. hospitals, the article found, predictive models are now embedded in everything from risk scoring and documentation to logistics and even autonomous prescription renewals, but front-line staff increasingly complain that these tools misfire, lack transparency, and sometimes undermine clinical judgment. That friction has sparked protests and strikes, with advocates insisting that nurses must be at the table for AI decisions, because it is ultimately humans who bear the outcomes.
