Deepfake fraud drained $1.1 billion from U.S. corporate accounts in 2025, tripling from $360 million the year before. By the middle of last year, documented incidents had already quadrupled the 2024 total. And most corporate communications and brand teams remain dangerously unprepared.
Executives now face synthetic threats from two directions: their likenesses cloned to authorize fraudulent transfers or inflict reputational harm, and AI-generated voices impersonating government officials, board members, and business partners used to manipulate them.
In 2019, an unnamed British energy executive received a phone call from someone they believed was their chief executive. The accent and subtle consonant shifts were right; even the cadence was familiar. Only after wiring $243,000 did they learn the voice on the other end of the phone was synthetic. Last year, scammers cloned Italy's defense minister and called the country's business elite. At least one sent nearly €1 million before learning of the scam.
But those brands were fortunate. Imagine the impact if a synthetic video of your CEO making inappropriate remarks, announcing a false merger, or criticizing a regulator spread rapidly on social media before your team could respond. Deepfakes are no longer a cybersecurity curiosity. They now represent a security threat, a financial risk, and a serious reputational hazard.
The communications gap is wider than the security gap
Most coverage of deepfake threats centers on detection algorithms and verification protocols. Cybersecurity vendors offer solutions, and IT departments update policies. Yet few address a critical question for CMOs and CCOs: What happens to your brand if your CEO's likeness is used for fraud, disinformation, or character attacks?
I've spent 20 years advising executives through reputational crises, including regulatory investigations and hostile media campaigns. Established playbooks exist for those situations. But there is no established protocol for incidents such as a synthetic likeness of a CEO authorizing a fraudulent acquisition or a fabricated video of a founder going viral.
Executive visibility now cuts both ways
Every social media post, keynote address, podcast appearance, and earnings call involving your CEO provides potential training data for attackers. The visibility that builds executive brands and humanizes leadership also supplies the voice samples and facial mapping needed for synthetic media.
Not every attack succeeds. Last year, scammers targeted the CEO of a global advertising company. They created a fake WhatsApp account using his photo, staged a Microsoft Teams call with an AI-cloned voice trained on YouTube footage, and asked a senior executive to fund a new business venture. The employee refused and the firm lost nothing, but the sophistication of the attempt showed how far the technology has advanced.
The number of deepfakes grew from 500,000 in 2023 to more than eight million in 2025. Voice-cloning fraud rose 680 percent in a single year. Losses from AI-enabled fraud are projected to reach $40 billion by 2027. Yet only 32 percent of corporate executives believe their organizations are prepared to handle a deepfake incident.
Three questions every communications team should answer now
First, do you have a disclosure protocol for synthetic media attacks? If an AI-generated replica of your CEO is used for fraud or disinformation, who communicates, when, and through which channels?
Second, have you run a deepfake tabletop exercise? Crisis simulations should now include scenarios in which an executive's likeness is used for internal fraud, external disinformation, or both.
Third, have you coordinated response sequencing with legal, cybersecurity, and investor relations? A deepfake crisis is a fraud event, a potential disclosure obligation, and a brand emergency all at once. Siloed responses will fail.
Act before the attack
The companies that weather this era will be the ones building crisis protocols now, before their executives' faces show up in videos they never recorded, saying things they never said, authorizing transactions they never approved. Your CEO's likeness is a brand asset. It is also an attack vector.
Communications and brand teams that treat deepfakes as someone else's problem (a cybersecurity concern, an IT issue, a fraud matter for finance) will find themselves drafting apologies instead of strategies.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
