
AI-Assisted Social Engineering Is Producing Real Claims. Here Is What Your Policy Actually Covers


By Ryan Windt | Head of Growth Marketing | Updated April 2026


Business email compromise was already the highest-loss cybercrime category in the United States before artificial intelligence entered the picture. The FBI’s 2025 IC3 report put BEC losses at $2.77 billion across more than 21,000 complaints. That number was built largely on old-fashioned social engineering: spoofed email addresses, impersonation over the phone, carefully crafted pretexts.

Now add AI to that attack model, and the numbers get significantly worse.

Voice cloning tools can replicate an executive’s voice from a few seconds of audio. Real-time video deepfakes convincingly replicate faces, body language, and speech patterns in live calls. AI-generated phishing emails match the writing style, vocabulary, and tone of the person being impersonated with accuracy that defeats most employee training.

The result is a category of fraud that is outpacing the coverage language most businesses currently carry. Losses are real. Claims are being filed. And a meaningful number of them are landing in coverage gray areas that policyholders did not know existed.

This post explains exactly how AI-assisted social engineering attacks work, how they are being adjudicated under current cyber and crime policies, and what you need to review in your coverage before an incident happens.


What AI-Assisted Social Engineering Actually Looks Like in 2026

The threat has evolved through several distinct phases that matter for understanding how it is covered.

Phase one was email. A spoofed address, a plausible pretext, urgency. The defense was simple in theory: check the sender address, look for grammar errors, call back on a known number. Coverage followed the attack model. Social engineering coverage in cyber and crime policies was written for text-based fraud, and the definitions reflected that.

Phase two was voice. Vishing attacks used human impersonators to follow up on fraudulent email requests. The tell was that human impersonators were often imprecise: wrong tone, wrong cadence, wrong details.

Phase three is now. Attackers clone an executive’s voice from audio scraped from a podcast, an earnings call, or a LinkedIn video. The voice is indistinguishable to the human ear. A finance employee receives a voicemail from what sounds exactly like the CFO asking for an urgent wire to close a deal. Or they join a video call where multiple synthetic colleagues are present, faces moving naturally, voices matching known speech patterns, asking them to authorize a vendor payment.

This is not hypothetical. A multinational firm had a finance employee wire roughly $25 million after a deepfake video call that appeared to include the company’s CFO and several other colleagues. That incident became a reference case in underwriting discussions across the industry and directly influenced how carriers began rewriting social engineering coverage language.

The attack sequence, at scale, now typically looks like this: a targeted phishing email establishes pretext and urgency, a follow-up voice clone or video call is used to “verify” the request and bypass the employee’s skepticism, and funds move before the deception is detected. The call is the social proof that defeats the verification step.


The Coverage Problem: Where These Losses Land

When a deepfake-assisted wire transfer fraud results in a $500,000 loss, where does that claim go?

The answer is: it depends on your policy language, and many businesses discover the answer for the first time at the worst possible moment.

Cyber insurance social engineering sublimits. Most cyber policies include social engineering or funds transfer fraud coverage, but often as a sublimited coverage rather than at the full policy limit. A $1 million cyber policy might carry a $250,000 sublimit for social engineering losses. If your deepfake-assisted wire fraud loss is $600,000, your cyber policy pays at most $250,000 and you absorb the remaining $350,000.
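To make the arithmetic concrete, here is a minimal sketch of how a sublimited recovery plays out, assuming a simple retention-then-sublimit structure. The function name, the zero-retention default, and the structure itself are illustrative assumptions; actual policy forms vary by carrier.

```python
# Hypothetical sublimit arithmetic; not drawn from any actual policy form.
def sublimit_recovery(loss: float, sublimit: float, retention: float = 0.0) -> dict:
    """Estimate what a sublimited coverage pays on a single loss.

    Assumes the policy pays the portion of the loss above the retention,
    capped at the sublimit. Real forms vary significantly by carrier.
    """
    payable = max(loss - retention, 0.0)
    recovered = min(payable, sublimit)
    return {"recovered": recovered, "retained": loss - recovered}

# The scenario above: a $600,000 loss against a $250,000 sublimit.
print(sublimit_recovery(600_000, 250_000))
# {'recovered': 250000.0, 'retained': 350000.0}
```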

The crime insurance question. When a loss results from an employee voluntarily authorizing a transfer, even under deception, some carriers classify it as a crime loss rather than a cyber loss. The distinction turns on how the fraud was executed. If the attacker never accessed your systems, there was no network intrusion, and the only compromised element was human judgment, the loss may fall outside many cyber policy definitions and into commercial crime territory. Many businesses carry neither adequate crime coverage nor a social engineering sublimit large enough to absorb these losses.

AI-specific exclusions emerging. Some carriers, particularly in late 2024 and into 2025, began adding explicit language addressing AI-generated content in social engineering coverage. The language varies significantly by carrier and form. Some policies now affirmatively cover deepfake-assisted fraud. Others have added exclusions or tightened the definition of what qualifies as a covered social engineering event. If your policy was written two or three years ago and has not been reviewed, you may not know which side of that line you are on.

Conditions precedent. Many social engineering coverage grants include conditions that must be satisfied before a claim is covered: dual authorization requirements for wire transfers above a threshold, callback verification to a known number, documented payment change procedures. If an employee followed every reasonable step but was still deceived by a convincing deepfake, coverage may apply. If they bypassed internal controls because the call sounded convincing, coverage may not.


Why This Is a Different Risk Than Traditional BEC

The coverage frameworks for social engineering were largely designed around human impersonation, where skilled attackers could be plausible but not perfect. Employees who followed verification procedures could catch fraud. Coverage conditions that required reasonable verification steps made sense because reasonable steps could actually work.

AI-assisted attacks break that model in two specific ways.

First, the verification step no longer provides assurance. An employee who calls back on a known number can be sent to a voicemail cloned from the real executive. A video call that appears to include three senior colleagues provides false confidence because the technology has advanced past what human perception can reliably detect. The employee did not fail to verify. The verification was itself fraudulent.

Second, the attack surface has expanded to platforms that older coverage language did not anticipate. Traditional social engineering coverage was written for email and telephone fraud. Attacks now occur over Teams, Slack, Zoom, and other collaboration platforms. Some policy definitions of “fraudulent instruction” are narrow enough that a deepfake Zoom call may not clearly qualify.


What Underwriters Are Looking For Now

When businesses come to us with social engineering exposure, these are the controls that underwriters are increasingly focused on.

Dual authorization for wire transfers. A single employee authorizing a transfer above a defined threshold, regardless of who appeared to request it, is a coverage condition gap at most carriers. Requiring two independent approvals is becoming a hard requirement in some markets and a pricing factor in most.
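As a rough illustration of what that control looks like in practice, here is a minimal sketch of a dual-authorization gate. The threshold, function name, and exception choice are hypothetical, not drawn from any carrier's requirements.

```python
# Illustrative dual-authorization gate. The threshold is an assumed
# internal policy value, not a carrier-mandated figure.
DUAL_AUTH_THRESHOLD = 25_000  # USD

def release_wire(amount: float, approvers: set[str]) -> bool:
    """Release a wire only if transfers above the threshold carry two
    distinct approvals. One approver is never enough above the line,
    regardless of who appeared to request the payment."""
    if amount >= DUAL_AUTH_THRESHOLD and len(approvers) < 2:
        raise PermissionError("dual authorization required above threshold")
    return True
```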

Out-of-band verification protocols. The critical control is a procedure that requires confirming any wire transfer or payment change through a method completely separate from the channel where the request arrived. If the request came via email, the verification call must go to a number already on file, not a number provided in the email or on a follow-up call. If the request appeared to come via a video call, the verification step must use a different authenticated channel.
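Here is a minimal sketch of that rule, assuming a directory of contacts already on file. The names are hypothetical; the point is only that the verification channel never comes from the request itself.

```python
# Sketch of out-of-band verification. KNOWN_CONTACTS stands in for
# whatever system of record holds pre-existing, verified phone numbers.
KNOWN_CONTACTS = {"cfo@example.com": "+1-555-0100"}

def verification_number(requester: str, number_in_request: str | None) -> str:
    """Return the callback number already on file, deliberately ignoring
    any number supplied in the request. Fail closed if none exists."""
    on_file = KNOWN_CONTACTS.get(requester)
    if on_file is None:
        raise LookupError("no verified contact on file; escalate, do not pay")
    return on_file  # never number_in_request
```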

Vendor payment change controls. A significant portion of deepfake-assisted losses involve fraudulent vendor payment changes rather than direct wire requests. An attacker impersonates a vendor’s representative and requests an update to banking details. Controls should require that any change to vendor payment information be verified through a previously established contact method for that vendor, with documentation.
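One way that control might be structured, sketched under the assumption of a simple hold-until-verified workflow; the class, fields, and log format are all illustrative.

```python
# Hypothetical vendor bank-change workflow: the change is held until it
# is confirmed through a contact method established before the request
# arrived, and the confirmation is documented for any later claim.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BankChangeRequest:
    vendor_id: str
    new_account: str
    confirmed_via: str | None = None  # e.g. "phone number on file since 2023"
    audit_log: list[str] = field(default_factory=list)

    def confirm(self, pre_existing_channel: str) -> None:
        """Record verification through a channel that predates the request."""
        self.confirmed_via = pre_existing_channel
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} verified via {pre_existing_channel}"
        )

    def apply(self) -> None:
        """Refuse to touch the vendor master record until verified."""
        if not self.confirmed_via:
            raise RuntimeError("bank detail change not verified out of band")
        # ...update the vendor master record here...
```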

Employee training that reflects the current threat. Most security awareness training programs still focus on spotting suspicious email addresses and identifying phishing links. That training is not adequate preparation for synthetic voice and video fraud. Underwriters are beginning to ask whether training programs have been updated to address AI-generated impersonation.


Questions to Ask About Your Current Policy

If you have not reviewed your social engineering and funds transfer fraud coverage recently, these are the questions worth bringing to your underwriter or broker.

How is “fraudulent instruction” defined in your policy? Does the definition extend beyond email to voice, video, and collaboration platforms?

What is your social engineering sublimit relative to your maximum realistic loss exposure? If your business could lose $2 million on a single fraudulent wire, a $250,000 sublimit is not adequate coverage. It is a co-insurance arrangement you did not knowingly enter.

Does your policy include conditions precedent for social engineering coverage? If so, are your internal controls documented and consistently followed in a way that satisfies those conditions?

Has your carrier added any AI-specific language in the last two policy cycles? Endorsements, exclusions, and narrowed definitions have been moving through the market. You need to know what is in your current form.

Does your policy address the cyber-crime gap? If a deepfake-assisted fraud does not involve a network intrusion, will your cyber policy respond, or does the loss fall to a crime policy you may not carry?


The Bottom Line

AI-assisted social engineering is producing real claims right now. The technology has outpaced the coverage language in a meaningful portion of the policies currently in force, and the businesses that discover that gap tend to do so at the worst possible time.

This is not an argument to avoid cyber insurance. It is an argument to understand what you have, review it against the current threat environment, and make deliberate decisions about sublimits, conditions, and policy definitions rather than assuming your current coverage addresses a category of fraud that largely did not exist when your policy was written.

At SeedPod Cyber, we work directly with carriers and can help you evaluate where your current coverage stands on AI-assisted fraud and what adjustments make sense for your business.



