Every Fortune 500 CEO investing in AI right now faces the same brutal math. They're spending $590–$1,400 per worker annually on AI tools while 95% of their corporate AI initiatives fail to reach production.
Meanwhile, employees using personal AI tools succeed at a 40% rate.
The disconnect isn't technological; it's operational. Companies are grappling with a crisis in AI measurement.
Three questions I invite every leadership team to answer when they ask about ROI from AI pilots:
- How much are you spending on AI tools companywide?
- What business problems are you solving with AI?
- Who gets fired if your AI strategy fails to deliver results?
That last question usually creates uncomfortable silence.
As the CEO of Lanai, an edge-based AI detection platform, I've deployed our AI Observability Agent across Fortune 500 companies for CISOs and CIOs who want to track and understand what AI is doing at their companies.
What we've found is that many are surprised and unaware of everything from employee productivity to serious risks. At one major insurance company, for instance, the leadership team was confident they had "locked everything down" with an approved vendor list and security reviews. Instead, in just four days, we found 27 unauthorized AI tools running across their organization.
The more revealing discovery: One "unauthorized" tool was actually a Salesforce Einstein workflow. It was allowing the sales team to exceed its goals, but it also violated state insurance regulations. The team was creating lookalike models with customer ZIP codes, driving productivity and risk simultaneously.
This is the paradox for companies seeking to tap AI's full potential: You can't measure what you can't see. And you can't guide a strategy (or operate without risk) if you don't know what your employees are doing.
‘Governance theater’
The way we're measuring AI is holding companies back.
Right now, most enterprises measure AI adoption the same way they measure software deployment. They track licenses purchased, trainings completed, and applications accessed.
That's the wrong way to think about it. AI is workflow augmentation. The performance impact lives in the interaction patterns between humans and AI, not solely in tool selection.
The way we currently do it can create systematic failure. Companies establish approved vendor lists that become obsolete before employees finish compliance training. Traditional network monitoring misses embedded AI in approved applications such as Microsoft Copilot, Adobe Firefly, Slack AI, and the aforementioned Salesforce Einstein. Security teams write policies they cannot enforce, because 78% of enterprises use AI while only 27% govern it.
This creates what I call the "governance theater" problem: AI initiatives that look successful on executive dashboards often deliver zero business value. Meanwhile, the AI usage that's driving real productivity gains remains entirely invisible to leadership (and creates risk).
Shadow AI as systematic innovation
Risk doesn't equal rebellion. Employees are trying to solve problems.
Analyzing millions of AI interactions through our edge-based detection models proved what most operating leaders instinctively know but cannot demonstrate. What looks like rule-breaking is often employees simply doing their work in ways that traditional measurement systems cannot detect.
Employees use unauthorized AI tools because they're eager to succeed, and because sanctioned enterprise tools reach production only 5% of the time while consumer tools like ChatGPT reach production 40% of the time. The "shadow" economy is more efficient than the official one. In some cases, employees may not even know they're going rogue.
A technology company preparing for an IPO showed "ChatGPT – Approved" on its security dashboards, but missed an analyst using personal ChatGPT Plus to analyze confidential revenue projections under deadline pressure. Our prompt-level visibility revealed SEC violation risks that network monitoring completely missed.
A healthcare system recognized doctors using Epic's clinical decision support, but missed emergency physicians entering patient symptoms into embedded AI to accelerate diagnoses. While improving patient throughput, this violated HIPAA by using AI models not covered under business associate agreements.
The measurement transformation
Companies crossing the "GenAI divide" identified by MIT, whose Project NANDA documented the remarkable struggles with AI adoption, aren't the ones with the biggest AI budgets; they're the ones that can see, secure, and scale what actually works. Instead of asking, "Are employees following our AI policy?" they ask, "Which AI workflows drive results, and how do we make them compliant?"
Traditional metrics focus on deployment: tools purchased, users trained, policies created. Effective measurement focuses on workflow outcomes: Which interactions drive productivity? Which create real risk? Which patterns should we standardize organization-wide?
The insurance company that discovered 27 unauthorized tools figured this out.
Instead of shutting down the ZIP code workflows driving sales performance, it built compliant data paths that preserved the productivity gains. Sales performance stayed high, regulatory risk disappeared, and the company scaled the secured workflow companywide, turning a compliance violation into a competitive advantage worth millions.
The bottom line
Companies spending hundreds of millions on AI transformation while remaining blind to 89% of actual usage face compounding strategic disadvantages. They fund failed pilots while their best innovations happen invisibly, unmeasured and ungoverned.
Leading organizations now treat AI like the biggest workforce decision they'll make. They require clear business cases, ROI projections, and success metrics for every AI investment. They establish named ownership, with performance metrics that include AI outcomes tied to executive compensation.
The $8.1 billion enterprise AI market won't deliver productivity gains through traditional software rollouts. It requires workflow-level visibility that distinguishes innovation from violation.
Companies that establish workflow-based performance measurement will capture the productivity gains their employees already generate. Those sticking with application-based metrics will keep funding failed pilots while rivals exploit their blind spots.
The question isn't whether to measure shadow AI; it's whether your measurement systems are sophisticated enough to turn invisible workforce productivity into sustainable competitive advantage. For most enterprises, the answer reveals an urgent strategic gap.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
