
The speed at which companies are adopting artificial intelligence (AI), regardless of industry, remains one of the main risks associated with the technology, says Sam Chapman, CFC’s Canada technology team leader.

A recent survey of 500 companies globally conducted by Centiment for CFC found 79% of businesses already use AI in some capacity, with most planning to use the technology even more in coming years. At the same time, only 32% of businesses are confident that their current insurance policies address AI risks adequately.

“I think that’s kind of to be expected when we’re seeing the adoption of this kind of technology so rapidly…for relatively little amounts,” Chapman says. “I think people are feeling like they have to utilize it to remain relevant in their sector versus their competitors.

“It’s that rate of adoption which generally causes that concern; that lack of understanding that goes behind it that potentially causes us concern, especially over the last 12 to 24 months, [where] everyone dipped their toe into the AI sector.”

Like professionals in other sectors, those in the Canadian P&C insurance industry are applying AI to their operations. For example, Canadian Underwriter heard during the recent RIMS Canada Conference in Calgary that enterprise risk managers face being outpaced by competitors if they don’t move quickly to adopt AI.

The problem is that traditional risk management frameworks, with their lengthy procedures and strict oversight, can clash with the speed at which AI is evolving.

“If you try and apply standard risk management frameworks and procedures, you’re going to stifle innovation,” Paige Cheasley, national technology practice leader at Gallagher, tells CU at the conference. So, risk managers have overcome this by creating minimum frameworks, requirements and safeguards “to make it so that they can quickly try, fail, or try different AI initiatives…and then implement them.”

AI insurance

From an insurance industry perspective, the good news is there are insurance policies that cover AI-perpetrated cybercrime, such as scams involving AI-generated deepfake videos, CU has heard. There are also AI tools that identify deepfakes by spotting things like pixelation errors, Chapman adds.
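
Chapman doesn’t detail how those tools work internally, and commercial detectors rely on trained models, but a toy sketch can show the pixel-level intuition: deepfake pipelines can leave unusual high-frequency artifacts, so frames whose frequency-energy statistics deviate sharply from the rest of a video may merit a second look. The function names, the z-score threshold and the centre-band size below are illustrative assumptions, not any vendor’s method:

    # Toy illustration only: flags video frames whose high-frequency
    # spectral energy is a statistical outlier within the video.
    # Real deepfake detectors use trained models; the threshold and
    # band size here are assumptions made for the sketch.
    import numpy as np
    import cv2  # pip install opencv-python

    def high_freq_energy(gray: np.ndarray) -> float:
        """Share of spectral energy outside the low-frequency centre band."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        low = spectrum[cy - h // 8:cy + h // 8, cx - w // 8:cx + w // 8].sum()
        return float(1.0 - low / spectrum.sum())

    def flag_suspect_frames(video_path: str, z_thresh: float = 3.0) -> list[int]:
        """Return indices of frames that deviate sharply from the video's norm."""
        cap = cv2.VideoCapture(video_path)
        energies = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            energies.append(high_freq_energy(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
        cap.release()
        e = np.array(energies)
        z = (e - e.mean()) / (e.std() + 1e-9)  # per-frame z-scores
        return [i for i, score in enumerate(z) if abs(score) > z_thresh]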

So how can your clients’ employees recognize AI-generated scams or deepfakes?

Although deepfakes are getting much better, there is “still an element of lag” to them, Chapman says, adding that he doesn’t believe this is going to be an issue for much longer. Other potential red flags include an unusual tone or the manner in which somebody is speaking.

Besides speed of adoption, another important consideration for the industry is privacy. For example, when insurance professionals rely on AI-generated recommendations without fully understanding how those outputs are produced, they risk breaching their duty to provide informed and transparent advice, says Jaime Cardy, a senior associate at Dentons Canada LLP.

“There are risks that the use of AI tools may contravene insurers’ privacy obligations and/or undermine clients’ privacy rights by processing clients’ personal information in a manner that is not aligned with customer consents or otherwise required or permitted by law,” she says.

Cardy also recommends using disclaimers when customers are interacting directly with AI.

Any disclaimer should be clear, concise and transparent, she says. The specific language will depend on the situation, but in general, Cardy says a disclaimer should (see the sketch after this list):

  • Identify that the customer is interacting with AI and address any limitations (e.g., the outputs may not always be accurate or complete),
  • Inform the customer how their information or inputs may be used and stored (e.g., direct them to the relevant privacy notice), and
  • Offer an escalation option/a human alternative for sensitive matters or for customers who are not comfortable interfacing with AI.
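
As an illustration only, here is one hypothetical way those three elements might be wired into a customer-facing chat flow. The wording, the keyword list, the placeholder URL and the function names are assumptions made for the sketch, not language Cardy recommends:

    # Hypothetical sketch of the three disclaimer elements in a chat flow.
    # All names, message wording and the URL are illustrative assumptions.
    AI_DISCLAIMER = (
        "You are chatting with an automated AI assistant. "          # identify the AI
        "Responses may not always be accurate or complete. "         # state limitations
        "Your messages may be stored and used as described in our "
        "privacy notice: https://example.com/privacy. "              # point to the privacy notice
        "Type 'agent' at any time to reach a human representative."  # escalation option
    )

    ESCALATION_KEYWORDS = {"agent", "human", "representative"}

    def generate_ai_reply(message: str) -> str:
        # Stub standing in for the actual model call.
        return f"(AI response to: {message})"

    def handle_message(message: str, session_started: bool) -> str:
        """Show the disclaimer on first contact; route escalation requests to a person."""
        if not session_started:
            return AI_DISCLAIMER
        if message.strip().lower() in ESCALATION_KEYWORDS:
            return "Connecting you with a licensed representative now."
        return generate_ai_reply(message)

Presenting the notice as the first message of the session, rather than burying it in linked terms, keeps it clear and transparent in the sense Cardy describes, and the keyword-based hand-off is one way to provide the human alternative from the third bullet.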

“Where opt-out or escalation options are not legally required, they may still be advisable from a customer trust and service perspective,” Cardy says. “Jurisdiction-specific statutory notice requirements should also be considered.”

Jason Contant

Jason has been an award-winning journalist with Canadian Underwriter for more than a decade, including the past three years as associate editor and, before that, as digital editor for seven years.