Economists Mariana Mazzucato and Rosie Collington argue that consultants can, at best, give dubious guidance, and at worst, exacerbate government and private sector dysfunction. In their book The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments, and Warps Our Economies, the economists argue consultants emerged in a post-Ronald Reagan era of reduced regulation, which required third parties to come in and save institutions that had lost faith in themselves.
Instead of righting the ship, Mazzucato and Collington argued, these consultants created only an "impression of value," an illusion of helpfulness and little else, all while governments and private firms burned money to hire them.
In an era of AI that promises to save companies money by automating white-collar jobs, chatbot guidance may look like an appealing alternative for businesses no longer willing or able to shell out for consultants. But emerging research shows that while you can ask AI what you would ask a consultant for a fraction of the price, its advice may not be worth taking, either. In fact, AI assistance may simply present an old problem in a new medium.
A recent study led by the Esade Business School at the Universitat Ramon Llull in Barcelona, Spain, found that when various large language models (LLMs) were asked to provide guidance on a workplace issue, they gravitated toward the response most aligned with buzzwords, rather than the guidance that best fit the situation. Researchers dubbed AI's proclivity to fall back on the same jargon to inform its judgments "trendslop."
"An LLM is not the colleague who critically evaluates current ideas, looks into the contextual specifics, stress-tests assumptions, and pushes back when everyone gets comfortable," the study authors wrote in a Harvard Business Review post summarizing their research. "On strategy, LLMs might be more akin to a freshly minted MBA or junior consultant, parroting what's popular rather than what's right for a particular situation."
Recent layoffs among the "Big Four" consultancies, amid a wider industry slowdown, have suggested firms may already be losing value in the eyes of potential clients. PwC slashed 150 business support staff in November 2025, around the same time McKinsey shed hundreds of jobs.
"As our firm marks its 100th year, we're operating in a moment shaped by rapid advances in AI that are transforming business and society," a McKinsey spokesperson told Bloomberg last year.
But the emergence of "trendslop" suggests AI is far from ready to provide direction to companies seeking counsel from the technology, and this research exposes the biases LLMs struggle with.
How ‘trendslop’ manifests
To measure AI's tendency to give responses that align with trends rather than logic, researchers tested seven models, including GPT-5, Claude, Gemini, and Grok, across 15,000 simulations and scenarios. Models were asked to choose between two solutions when presented with workplace tensions, such as whether a company should prioritize long-term versus short-term growth, or whether a firm should use technology to automate versus augment workers' jobs.
Researchers predicted that if LLMs were providing advice based on situation-specific details, there would be diversity in which solution the models chose. Instead, the seven models usually clustered their answers around the same strategy, indicating a preference for "modern managerial buzzwords and cultural tropes."
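The clustering signal the researchers looked for can be illustrated with a toy calculation. This is a minimal sketch with made-up data and function names, not the study's actual analysis: if models were responding to scenario-specific details, their picks between option "A" and option "B" would vary; trendslop shows up as near-unanimous convergence on one option across scenarios.

```python
from collections import Counter

# Hypothetical choices ("A" or "B") from four models across six scenarios.
choices = {
    "gpt":    ["B", "B", "B", "A", "B", "B"],
    "claude": ["B", "B", "B", "B", "B", "B"],
    "gemini": ["B", "A", "B", "B", "B", "B"],
    "grok":   ["B", "B", "B", "B", "B", "B"],
}

def consensus_rate(choices):
    """Fraction of scenarios where every model picks the same option."""
    n_scenarios = len(next(iter(choices.values())))
    unanimous = sum(
        1 for i in range(n_scenarios)
        if len({answers[i] for answers in choices.values()}) == 1
    )
    return unanimous / n_scenarios

def option_skew(choices):
    """Share of the single most popular option across all answers."""
    counts = Counter(pick for answers in choices.values() for pick in answers)
    return counts.most_common(1)[0][1] / sum(counts.values())

print(f"unanimous scenarios: {consensus_rate(choices):.0%}")       # 67%
print(f"most popular option's share: {option_skew(choices):.0%}")  # 92%
```

With situation-driven answers you would expect the popular option's share to hover near 50% across a balanced set of dilemmas; a share far above that, as in this toy data, is the clustering pattern the study describes.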
Even when researchers reworded prompts or asked for a pros-and-cons analysis, the AI models in many cases showed a strong preference for a similar business strategy. The study authors warn that relying on AI as a consultant won't yield bespoke business solutions, but rather a cookie-cutter answer it might recommend to any business when prompted, regardless of the specifics of a given challenge.
"This reveals a real risk for leaders," the researchers said. "An LLM can sound highly tailored to your situation while quietly steering you toward the same small cluster of modern managerial trends."
Exposing LLM bias
In other words, when prompted for guidance on a tough workplace situation, AI isn't analyzing the scenario in question; it's regurgitating keywords based on how often it encountered them in its training data. In the case of ChatGPT, the study noted, the bot often declined to make a binary choice, instead recommending both solutions. Research published in Nature last year found AI sycophancy isn't just unproductive, it can be harmful to science, confirming the biases of those prompting it instead of presenting users with facts supported by scientific literature or other reliable, more impartial sources.
The "trendslop" researchers didn't entirely eschew the use of LLMs for navigating challenging workplace situations. They suggested models could still be helpful for generating alternative solutions or identifying blind spots in certain scenarios. If you're aware of AI's biases toward concepts like augmentation or long-term strategizing, you can challenge those biases to elicit more insightful guidance, according to the study.
"Leadership is ultimately about making hard choices in conditions of uncertainty and taking responsibility for them," the researchers said. "AI cannot and should not be a substitute."
