I’m not a tech naysayer. Far from it. But we’re doing it again.
A new era of technology is taking off. AI is reshaping economies, industries, and governance. And just like last time, we’re moving fast, breaking things, and building the plane while flying it (to use some common tech phrases). These mantras have driven innovation, but we’re now living with the unintended consequences.
For over a decade, I worked in the engine room of the social media revolution, starting in the U.S. government, then at Twitter and Meta. I led teams engaging with governments worldwide as they grappled with platforms they didn’t understand. At first, it was intoxicating. Technology moved faster than institutions could keep up. Then came the problems: misinformation, algorithmic bias, polarisation, political manipulation. By the time we tried to regulate it, it was too late. These platforms were too big, too embedded, too essential.
The lesson? If you wait until a technology is ubiquitous to think about safety, governance, and trust, then you’ve already lost control. And yet we’re on the verge of repeating the same mistakes with AI.
The new infrastructure of intelligence
For years, AI was seen as a tech issue. Not anymore. It’s becoming the substrate for everything from energy to defence. The underlying models are getting better, deployment costs are dropping, and the stakes are rising.
The same mantras are back: build fast, launch early, scale aggressively, win the race. Only now we’re not disrupting media; instead, we’re reinventing society’s core infrastructure.
AI isn’t just a product. It’s a public utility. It shapes how resources are allocated, how decisions are made, and how institutions function. The consequences of getting it wrong are exponentially greater than with social media.
Some risks look eerily familiar. Models trained on opaque data with no external oversight. Algorithms optimised for performance over safety. Closed systems making decisions we don’t fully understand. A global governance void while capital flows faster than regulation.
And once again, the dominant narrative is: “We’ll figure it out as we go.”
We need a new playbook
The social media era playbook of move fast, ask forgiveness, resist oversight won’t work for AI. We’ve seen what happens when platforms scale faster than the institutions meant to govern them.
This time, the stakes are higher. AI systems aren’t just mediating communication. They’re starting to influence reality, from how energy is distributed to how infrastructure is allocated during crises.
Energy as a case study
Energy is the best example of an industry where infrastructure is destiny. It’s complex, regulated, mission-critical, and global. It’s the sector that will either enable or limit the next phase of AI.
AI racks in data centres consume 10-50 times more power than traditional systems. Training a large model requires the same energy as 120 homes use annually. AI workloads are expected to drive a 2-3x increase in global data centre electricity demand by 2030.
Already, AI is being embedded in systems optimising grids, forecasting outages, and integrating renewables. But without the right oversight, we could face scenarios where AI systems prioritise commercial customers over residential areas during peak demand. Or crises where AI makes thousands of rapid decisions during emergencies that leave entire regions without power, and no one can explain why or override the system. This isn’t about picking sides. It’s about designing systems that work together, safely and transparently.
Don’t repeat the past
We’re still early. We have time to shape the systems that will govern this technology. But that window is closing. So, we must act differently.
We must understand that incentive structures shape outcomes in invisible ways. If models prioritise efficiency without safeguards, we risk building systems that reinforce bias or push reliability to the edge until something breaks.
We must govern from the beginning, not the end. Regulation should not be a retroactive fix but a design principle.
We must treat infrastructure as infrastructure. Energy, compute, and data centres must be built with long-term governance in mind, not short-term optimisation.
We cannot rush critical systems without robust testing, red teaming, and auditing. Once embedded at scale, it’s nearly impossible to reverse harmful design decisions.
We must align public, private, and global actors, which can be achieved through truly cross-sector events like ADIPEC, a global energy platform that brings together governments, energy companies, and technology innovators to debate and discuss the future of energy and AI.
No company or nation can solve this alone. We need shared standards and interoperable systems that can evolve over time. The social media revolution showed what happens when innovation outpaces institutions. With AI, we get to choose a different path. Yes, we’ll move fast. But let’s not break the systems we depend on. Because this time, we’re not just building networks. We’re building the next foundation of the modern world.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
