It’s rare for a cyber policy to include exclusions for malicious activity, industry sources tell CU.

Generally, there are no exclusions under a cyber policy for criminal behaviour, unless it involves top-level insider collusion. Even then, there are coverage grants.

That means, if a cyber thief uses AI to launch a social engineering attack – which tries to trick users of a corporate computer network into clicking malicious links or transferring funds – a cyber policy would apply, says Neal Jardine, global director of cyber risk intelligence and head of claims at BOXX Insurance.

“AI is a tool the cybercriminal uses to create a large language model to gather all of the information on the internet that’s available about a company,” he says. “They will then use that information to create a believable email that the cybercriminal will send to the right person or people in the company that will have them click, access, and then divulge confidential information.

“The cybercriminal will use AI as a tool to launch a very accurate fake request that will be a social engineering attack against the business or individual.”


Who’s the criminal?

Some defence lawyers working in the cyber insurance space are asking what happens if the tool itself becomes the criminal “agent.”

Jardine notes AI is always a tool used by the criminal. It does not become the criminal “agent” (as identified in an insurance policy) simply by carrying out criminal activity.

For example, Jardine says, “AI could be used to go through lines of code and find errors in those lines of code. And then the cybercriminal finds an unpatched system or a weakness in code and exploits that. Again, the tool is being used by the cybercriminal for unauthorized access to company data.

“The cyber policy is there to indemnify when possible and includes the use of AI, be it by a cybercriminal or the insured. The limitations within a cyber policy are not AI-specific and instead relate to unlawful or illegal use by the insured.”

To understand why policy exclusions for criminal activity apply in some instances and not in others, think of the difference between third-party and first-party cyber insurance coverage.

In third-party coverage, cyber policies protect insureds from losses caused by criminal behaviour directed against them by an outside party. First-party cyber coverage, on the other hand, does not protect the insured against damage caused by its own unlawful or illegal use of AI.


AI’s potential

Industry sources acknowledge AI use could lead to widespread cyber exposure because it makes cyberattacks against clients both easier to launch and more effective. But Lindsey Nelson, head of cyber development at CFC, says this is balanced by AI’s enhanced capabilities in detecting cybercrime.

“The industry often speaks about AI in terms of the fear factor and horror stories,” Nelson tells CU. “AI presents huge cyber security challenges, and it’s a huge risk and just generally [is seen] in a negative light.

“But I do think AI brings a lot of opportunity as well, especially on the insurance side. AI is doing a lot of good [in detecting fraud, for example]. Technologies have been in this cat-and-mouse game — in terms of being exploited for the good or for the bad — for years now. AI is almost an elevated version of that. It’s getting a lot of attention right now.”


This article is excerpted from one that appeared in the February-March print edition of Canadian Underwriter.

David Gambrill