Newsletter

Quarterly Review | Winter 2024

By Dafna Izenberg

Artificial Intelligence: Emerging technologies and ethical questions for the insurance industry 

The current boom in artificial intelligence, and in generative AI in particular, promises better outcomes for both customers and carriers of insurance. “It has the potential to be the biggest evolutionary leap for us as an industry,” says Jonathan Spinner, assistant vice-president of claims transformation at Aviva. AI can be used in rating and underwriting to evaluate risk on an increasingly granular basis, which can lead to greater accuracy and fairness in policy pricing. Its predictive prowess can help adjusters and clients reach claims decisions more quickly. Its analytic abilities can be used to enhance recruitment, retention, and customer service. At the same time, applying AI across the industry brings myriad potential ethical pitfalls. For every new and exciting AI tool being developed and adopted by Canadian insurance companies, there are important questions to consider about explainability, fairness, availability, and privacy.

In section one of this review, we look at specific examples of AI technologies that might benefit the insurance industry in Canada. In section two, we turn to the matter of ethics and some key concerns to keep in mind as new applications become integrated into nearly every part of the business. According to the 2021 IIC report on AI and big data, these technologies “are expected to reshape the insurance industry in Canada over the next 10 years.”

Section 1: AI applications

Productivity

Sachin Rustagi, head of digital at Markel International (Canada), points to what he calls the “low-hanging fruit” of AI opportunities: internal productivity. There are AI programs that can, in the blink of an eye, scan our calendars, inboxes, Slack histories, meeting recordings, and presentations, then tell us how efficiently we handled our email over a given period, generate summarized meeting minutes, and surface many other key data points. These programs are not specific to the insurance industry—one that’s in wide use across sectors is Microsoft Copilot—but Rustagi believes they can be extremely advantageous to insurance professionals. “You can ask them questions that are central to your responsibility and your workflows,” he says. At the end of the week, for example, the programs can flag which small and medium enterprise quote referrals a person hasn’t responded to, offering insight into exactly how many risks fell outside of appetite, and why.

Prediction

The advent of machine learning—a field of AI in which algorithms learn patterns from massive datasets and use those patterns to make predictions about new data—has opened up the possibility of using AI to predict risk in insurance. As an example, Jonathan Spinner points to a predictive model that allows claims adjusters to make “faster, better informed, and more confident decisions” at the first notice of loss in the case of an automobile collision. Analytics produced through machine learning can offer immediate insight into the repairability of a car. “Being able to click a button and have all of the factors of the claim evaluated against the predictive model means that an adjuster doesn’t have to wait three days for an appraiser to go see the vehicle or for a repair shop to tear it down and look for damage under the hood,” says Spinner. “They can actually have a much better, more meaningful conversation with the customer about the course that claim is going to take.” If the car is toast, the customer can start looking for a replacement sooner. And if it can be repaired, AI can again expedite the process by locating the closest mechanic with the most fitting expertise for the particular type of work required. “We’re now using a machine learning AI predictive model in real time to identify the 10 shops that will do the best job, the best quality repair in the fastest timeframe possible and have all the necessary certifications for that year, make and model of vehicle,” says Spinner.
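To make the idea concrete, here is a minimal sketch of what a first-notice-of-loss repairability model might look like. Everything in it, the features, the synthetic data, and the label rule, is invented for illustration; it is not a description of Aviva’s system.

```python
# A minimal, hypothetical sketch of an FNOL repairability model; the features,
# synthetic data, and label rule below are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic historical claims: vehicle_age (yrs), impact_speed (km/h),
# airbag_deployed (0/1), prior_claims (count)
X = rng.random((5000, 4)) * np.array([20, 120, 1, 5])
X[:, 2] = (X[:, 2] > 0.5).astype(float)  # make airbag deployment binary

# Toy ground truth: older vehicles hit at higher speed are written off more often
y = (X[:, 0] * 2 + X[:, 1] / 10 + X[:, 2] * 15 < 25).astype(int)  # 1 = repairable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score a fresh claim the moment it is reported
new_claim = np.array([[8.0, 45.0, 0.0, 1.0]])
p_repairable = model.predict_proba(new_claim)[0, 1]
print(f"estimated probability the vehicle is repairable: {p_repairable:.0%}")
```

The same pattern extends to shop selection: score every certified shop on predicted repair quality and turnaround for the vehicle in question, then surface the top 10.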

Spinner also points to the possibility of AI being used to help assess injuries resulting from automobile accidents. AI can digest and summarize all manner and length of medical documents much faster than humans can; a predictive AI model might be able to spot the likelihood of lingering soft-tissue damage or emotional trauma based on the language used by health professionals in their assessments; it might also be able to suggest remedies. “Our goal as an insurance carrier is to get people back to work and back to their lives and to help them feel better,” says Spinner, and this would be helped by “knowing more about what they’re going through and having AI suggest courses of treatment that are proven effective for certain types of injuries.”
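As a toy version of that idea, the sketch below trains a simple bag-of-words classifier on invented assessment notes; the phrases, labels, and model choice are all assumptions, stand-ins for the far richer language models such a system would actually need.

```python
# Hypothetical sketch: flagging likely lingering injury from the language of
# medical assessments. The notes and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports persistent neck stiffness and radiating pain",
    "full range of motion restored, no complaints at follow-up",
    "ongoing sleep disturbance and anxiety since the collision",
    "minor bruising, fully resolved within two weeks",
    "recurring headaches and difficulty concentrating at work",
    "x-ray clear, patient discharged with no restrictions",
]
lingering = [1, 0, 1, 0, 1, 0]  # 1 = signs of lasting physical or emotional harm

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, lingering)

new_note = "patient describes flashbacks and persistent lower-back pain"
print(f"probability of lingering injury: {model.predict_proba([new_note])[0, 1]:.2f}")
```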

In the commercial field, predictive AI might be put to work when new small businesses apply for insurance. Rustagi explains that, in the absence of revenue records, newer small and medium enterprises often provide their business plans to insurance companies as a tool for assessing risk. “Theoretically, an AI tool could analyze that business plan,” says Rustagi, “identify the sophistication and the likelihood of success and then ultimately decide if this new business should qualify as a good risk based on real-time macro and micro third-party data.” The IIC report on AI similarly anticipated “the prospect that insurance companies will begin introducing coverage of risks that were previously seen to be uninsurable. The availability of more information and stronger analytical tools will likely determine, over the next 10 years, that new coverage is viable.”

Human resources

Like many sectors, the insurance industry is currently facing personnel shortages. Companies are asking, “Where are we finding our next generation of workers? What kind of backgrounds will they have? And how do we get them up to the level of capability that allows them to serve our customers effectively?” says Spinner. He sees AI as being able to significantly reduce the time it takes to train new staff. As an example, imagine a recently recruited claims adjuster having an AI assistant listen in on a call with a customer and provide invaluable tips. “It can do interesting things like detect stress. It can suggest pausing more to let the customer speak or adjusting your energy level to better match the customer’s state of mind,” says Spinner. In some cases, AI assistants might even correspond directly with customers via text. Web-chatting technology is becoming “so much smarter,” says Spinner, and is now “able to read the words and understand not just the intent, but also the context, the emotion.”

The hope is that less appealing assignments can be relegated to AI while those requiring more skill are left to trained and experienced employees. “Work in the industry will become more satisfying as the automation of tedious tasks allows brokers, agents, adjusters, and others to focus on more rewarding activities that serve consumers,” predicts the IIC’s 2021 report on AI and big data. “Change is inevitable as insurers adopt a digital mindset, and it will be positive for most.”

Fraud prevention

Claims fraud is one of the insurance industry’s biggest headaches, but AI opens up new possibilities for preventing it. “The data we already have may point to the fact that a claimant who is a driver on a policy for a vehicle was also a witness on three other claims for people who live within three blocks of each other,” says Spinner. But investigating and establishing these connections is extremely time- and resource-consuming. AI can crawl through all manner of data—IP addresses, phone numbers and bank account numbers—“and make connections that are not evident. And tie second-, third-, and fourth-order connections together and paint a picture with it,” says Spinner. AI is learning to recognize speech patterns in fraudulent claims as well as coached testimony. “Even stress in the voice, the pacing and tone and trust level as certain questions are asked and answered,” says Spinner. “AI can actually predict the likelihood that something is not legit, because a variety of factors taken together add up to a risk that’s above a certain threshold.”
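The link analysis Spinner describes is naturally pictured as a graph problem. In the sketch below, with invented identifiers, claims connect only through shared phone numbers, IP addresses, and bank accounts, and the second-, third-, and fourth-order connections fall out of a simple neighbourhood expansion:

```python
# Illustrative sketch of multi-hop link analysis over claims data; the graph
# and identifiers are invented, and real fraud systems are far more elaborate.
import networkx as nx

G = nx.Graph()
# Each edge ties a claim to an identifier it shares with other claims
G.add_edges_from([
    ("claim_A", "phone_555-0101"), ("claim_B", "phone_555-0101"),
    ("claim_B", "ip_203.0.113.7"), ("claim_C", "ip_203.0.113.7"),
    ("claim_C", "acct_9912"),      ("claim_D", "acct_9912"),
])

# Expand outward from claim_A; each claim-to-claim hop crosses one identifier
for radius in (2, 4, 6):
    reachable = nx.ego_graph(G, "claim_A", radius=radius).nodes
    linked = sorted(n for n in reachable if n.startswith("claim_") and n != "claim_A")
    print(f"claims within {radius // 2} hop(s) of claim_A: {linked}")
```

Connections like these are individually innocent; it is their accumulation, alongside signals such as voice stress and speech patterns, that pushes a claim over a risk threshold.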

The IIC’s recent report on AI cites an article by expert Daniel Faggella, who wrote that the application of analytics to detect fraud “is among the fastest areas of tech adoption in the insurance industry.” In addition to saving insurance companies massive amounts of money, more accurate fraud detection can spare customers untold grief, reducing “false positives and the resulting terrible claims experience for consumers wrongfully accused of fraud.”

Spinner finds the many possible applications of AI to the work of insurance companies both exciting and mind-boggling. “The ability to read and consume copious amounts of content, the documents associated with claims—whether they’re collision, house, injury—and then tie this to historical data on what has influenced claim outcomes, what has led us to win court cases, what has led to faster return-to-work for injured parties,” he says. “It’s just limitless in terms of how far we could go in helping the whole industry.”

Section 2: Applying AI ethically 

Artificial intelligence has the potential to make insurance more available and affordable with coverage that is “tailored to individual customer needs matched with accurate pricing,” says the IIC report on AI and big data. But judicious application of AI tools is critical. As risk models incorporate more and more variables and offer companies opportunities for greater efficiency, it’s important to explore how these innovations might disadvantage customers, some of them disproportionately. Among the biggest areas of concern when it comes to integrating AI ethically into the business of insurance are explainability, fairness, availability, and privacy. Each of these is discussed in the IIC report.

Explainability 

The challenge of explainability is somewhat endemic to insurance—many customers don’t fully understand how rates are determined, for example, and tend to see insurance as a “necessary expense rather than a critical form of protection.” The issue is sharpest in the divide between correlation and causality, and it has long played out in the question of auto insurance premiums being determined by age and gender: while a correlation between these factors and higher risk has been established, it is unclear whether a person’s age and gender actually cause higher risk.

When AI is put to the task of assessing risk, the distinction between correlation and causality can become even blurrier. “As you start pointing the machine at all of the rich data that insurance is notorious for hoarding, you end up casting some light on to less intuitive correlations,” says Spinner. “It becomes very difficult from an explainability perspective to actually take the complexity of what the machine has driven and put it into the natural language that humans can approve and understand.” The IIC’s report on AI argues that “Black box models that predict outcomes without an explanation are incomplete and not ready for use.”

Spinner says he’s “not 100 percent sure” how the industry can overcome the challenge of explainability as AI becomes a go-to resource for ever-more complex predictive models. But he wonders if AI itself might be part of the solution. Anyone who has experimented with ChatGPT knows it can be directed to generate text that would make sense to a young child. Perhaps a similar type of technology could be employed to explain how a predictive model was built—which variables were used and why each variable was considered. On the matter of separating correlation from causality, Spinner suggests this is a job for data scientists, who are increasingly being brought in by insurance companies to develop and test AI-driven models.
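One building block for that kind of explanation, sketched below with invented feature names and synthetic data, is to measure how much each variable actually drives a model’s predictions and only then translate the ranking into plain language; this is a generic technique, not a description of any insurer’s tooling.

```python
# Rough sketch: rank the variables a risk model relies on, then state the
# ranking plainly. Feature names are invented; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["driver_age", "annual_km", "prior_claims", "vehicle_age"]
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and see how much predictive accuracy drops
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked:
    print(f"{name}: accuracy falls by {drop:.3f} when this variable is scrambled")
```

A language model could then be asked to turn that ranking, together with the model’s documentation, into the kind of child-simple explanation Spinner imagines.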

Fairness

Jim MacKenzie, a risk management consultant for the government of Saskatchewan and an instructor of business ethics at the University of Regina, describes the insurance business as an exercise in “splitting.” Companies use risk models to try to separate the groups of people who are less likely to make claims from those who are more likely to. “And now we’ve got this new tool that’s even better at finding those buckets,” says MacKenzie. “There’s a lot of pressure to use this kind of information before your competitors find it. Information moves fast and AI is going to make it move quicker.”

But if applied too quickly, new insight on potential risk might put some clients at an unfair disadvantage. As a hypothetical, MacKenzie points to knitting as a potential variable in someone’s risk profile. AI might find that being a knitter lowers a person’s likelihood of making a claim. This may be actuarially sound, but is it really fair to charge non-knitters higher premiums on the basis of a brand-new, AI-produced model (that may not be all that explainable)? This is a question that bears consideration, tempting as it may be to rush out and corner the market of insurance-buying knitters.

MacKenzie points to another AI application that raises questions about fairness: models that predict how quickly a customer is likely to settle a claim. It is to insurance companies’ advantage to move claims through quickly and get them off the books, but is it to the client’s advantage? “An early settling claimant tends to settle for less money,” notes MacKenzie. “There may be some satisfaction gained from having the claim settled and done, and some people might be willing to make that trade. But this is AI predicting, not so much asking the question. Are we respecting the autonomy of individuals to decide for themselves if AI is saying, ‘Well, a person of this ethnicity and in this economic category in this part of Canada is more likely to want to get paid quickly’?”

This raises the thorny topic of bias, which AI has repeatedly been shown to exhibit, often to the detriment of women and people of colour. The IIC report cites an experience at Amazon, where an AI recruiting tool favoured male candidates after being trained on résumés submitted mainly by men. This is but one of many reported examples of AI bias, and machine learning fairness toolkits have been used in other industries in an effort to ameliorate its effects. Still, says the IIC report, the ability of these so-called fairness systems “is difficult to judge in a broader context” and “it is critical that the industry be vigilant in the search for unanticipated bias in these complex new analytical tools.”
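As a flavour of what those toolkits compute, here is a minimal sketch of one common screening test, the “four-fifths” disparate impact ratio, applied to hypothetical model outputs:

```python
# Minimal fairness check on invented model outputs: compare favourable-decision
# rates across two groups. Real toolkits run many such tests, not just one.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 1])      # 1 = favourable decision
group  = np.array(["a", "a", "a", "b", "b", "b", "b", "a", "b", "a"])

rate_a = y_pred[group == "a"].mean()   # selection rate for group a
rate_b = y_pred[group == "b"].mean()   # selection rate for group b
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # a common screening threshold, not a legal standard
    print("possible disparate impact: examine the model and its training data")
```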

Availability

Climate change has prompted some insurance companies to leave markets that have become very high risk, in Florida and California for example, stranding customers without the insurance they need to drive or to maintain a loan on their homes. Wider use of AI in the industry could exacerbate this problem. As noted in the IIC report, the “hyper-personalized risk assessments” AI tools are so good at generating “could leave some individuals ‘uninsurable’ by revealing previously unseen indicators of risk.” But the report also notes that AI can be used to expand the reach of insurance. “The availability of more information and stronger analytical tools will likely determine, over the next 10 years, that new coverage is viable,” it says, pointing to the emergence of residential flood insurance as an example of the kind of solution that is possible.

Privacy

Concern for the security of consumer information pervades many sectors, including banking, health care, and government. The insurance industry gathers vast amounts of data from and about customers and is responsible for protecting that information while also offering transparency about its use. Does AI have access to this data? How is it being incorporated into rating and risk assessments of the customers who provided it? And what measures are insurance companies taking to keep the data as safe as possible from hackers? These are the kinds of questions insurers must anticipate having answers for.

In fact, Sven Roehl, EVP Innovation at msg global and founder of Cookhouse Labs, suggests insurance companies should know these answers at the hypothetical stage, while still designing and testing AI models. How a company intends to prevent privacy breaches, and how it would address one were it to happen, are among the issues that should be laid out in a clear AI strategy worked out in advance. “If you don’t have a strategy,” says Roehl, “you’re running into the trap of, ‘Oh, we have this issue right now so let’s just use this solution.’ You’re going to build all these island solutions, and it’s really going to be tricky and complicated to have an overview.”

Overarching AI strategies can address such basic questions as what kinds of models a company is going to use, whether ready-made ones such as ChatGPT or ones developed internally. Roehl points out that when companies design their own models, they have some measure of control over the nature and quality of the data fed into the machine learning, which can help prevent bias—AI is very much “what it eats” when it comes to data—and also over its traceability. Roehl describes the possibility of creating “AI ledgers,” which would allow companies to track how decisions are arrived at, which data were used, and by which systems. This can go a long way toward addressing the explainability conundrum and allow for greater accountability in the event of unfairness or bias if, for example, an AI model leads a company to overcharge a particular social group. An overarching strategy would build in ways of catching these kinds of errors, and of correcting them.
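One possible shape for such a ledger entry is sketched below; the field names and values are assumptions for illustration, not an established standard or msg global’s design.

```python
# Hypothetical "AI ledger" record: each automated decision is logged with the
# model that made it, the exact inputs, and where those inputs came from.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LedgerEntry:
    decision_id: str
    model_name: str
    model_version: str
    input_features: dict    # exactly what the model saw
    output: str             # the decision or score it produced
    data_sources: list      # which systems supplied the inputs
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = LedgerEntry(
    decision_id="Q-20240114-0042",
    model_name="sme_quote_risk",
    model_version="2.3.1",
    input_features={"industry": "bakery", "years_operating": 1},
    output="refer_to_underwriter",
    data_sources=["crm", "third_party_firmographics"],
)
print(json.dumps(asdict(entry), indent=2))  # append to an immutable audit log
```

If a model is later found to have overcharged a particular group, records like these make it possible to reconstruct which decisions it touched, and with which data, and to correct them.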

Of course, these strategies will need to be adapted as generative AI evolves, which is exactly what it is built to do. “We are right now at the very beginning,” says Roehl. “We’re playing Donkey Kong in front of our parents’ TV. That’s where we are with generative AI. Years from now, I’m pretty sure we will have achieved artificial general intelligence, which just means the systems are going to be smarter than any of the smartest humans we have.” Should this worry us? Are we at risk of becoming redundant? “I think we are in a generation where we will see a lot of the good things coming from AI,” says Roehl. He also thinks we have an opportunity right now, when we are so engaged in interacting with AI, to explore how we can create “good AI” and do our best to protect our future selves, “to really remain ethical in the future,” says Roehl, “when we are not the ones developing the systems anymore.”