Are brokers legally responsible if AI spits out bad information to clients?
Brokers can expect to be held legally responsible for any incorrect information that their brokerage’s artificial intelligence spits out to consumers, a Dentons lawyer cautions.
Dentons senior associate Jaime Cardy made this prediction during a Tuesday webinar, Provincial Insurance Regulatory Update. She was discussing new regulatory guidance on AI use issued in late May by the Registered Insurance Brokers of Ontario (RIBO).
“This [issue of responsibility for incorrect information] was flagged in the RIBO study, and it pertains to [the question of] who should be responsible when incorrect information that has been generated by an AI tool has been provided to a customer, and the customer relies on that information to make decisions,” Cardy said. “We don’t have RIBO’s answer to this question yet, but if I were to wager a guess based on existing case law — albeit not from the insurance context, but I’m sure the same principles would apply — I would say that the liability would rest with brokers here.
“Including a human in the loop to verify AI-generated outputs is not only a best practice for responsible [AI] use, but it’s also a significant risk mitigation measure.”
Specifically, RIBO’s guidance on the topic says brokerages should ensure that anything generated or altered by an AI tool is overseen by a licensed member before being presented to a client.
“For example, if AI is being used for underwriting, firms should be auditing and monitoring generated outputs to ensure that the model is performing as intended and is not subject to systemic biases,” RIBO’s guidance states.
RIBO’s guidance also states that brokerages must be transparent to the public about their use of customer-facing generative AI tools such as chatbots or online quote engines.
“The use of generative AI by brokerages should be closely monitored by one or more licensed brokers to always keep a ‘human in the loop,’” RIBO’s guidance states. “This allows licensed members to respond to coverage questions and take corrective action(s) in the event of any incorrect or misleading results.”
Cardy suggests brokers take one step beyond the guidance. “Basically, customers need to be informed if they’re engaging with AI instead of a human,” she says. “So, a practical tip in this regard is requiring that disclaimers be employed when customers are interacting directly with AI.
“And to go a little bit above and beyond what RIBO has recommended, I’d say also give customers the ability to opt out of those AI-based interactions by electing to deal with a human representative instead.”
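Neither RIBO's guidance nor the webinar prescribes how a brokerage should implement these disclosures, but the two tips map naturally onto a customer-facing chat flow. The sketch below is a minimal, hypothetical TypeScript illustration only; every identifier in it (including generateAiReply and routeToHumanBroker) is invented for the example and does not come from RIBO or Dentons. It shows an AI disclaimer presented up front and an opt-out that hands the conversation to a human representative.

```typescript
// Hypothetical sketch: a chat handler that discloses AI involvement and
// lets the customer opt out to a human representative at any time.
// All names here are invented for illustration; nothing below is drawn
// from RIBO's guidance or any specific vendor API.

const AI_DISCLAIMER =
  "You are chatting with an automated assistant, not a licensed broker. " +
  "Reply HUMAN at any time to speak with a representative.";

interface ChatReply {
  text: string;
  handledBy: "ai" | "human";
}

// Stand-in for whatever third-party generative AI tool the brokerage uses.
async function generateAiReply(message: string): Promise<string> {
  return `Automated answer to: "${message}"`;
}

// Stand-in for the brokerage's escalation path to a licensed broker.
async function routeToHumanBroker(message: string): Promise<string> {
  return "A licensed broker will take over this conversation shortly.";
}

export async function handleCustomerMessage(
  message: string,
  firstTurn: boolean
): Promise<ChatReply> {
  // Opt-out: the customer elects to deal with a human instead of AI.
  if (/\bhuman\b/i.test(message)) {
    return { text: await routeToHumanBroker(message), handledBy: "human" };
  }

  const aiText = await generateAiReply(message);

  // Disclose AI involvement at the start of the interaction.
  const text = firstTurn ? `${AI_DISCLAIMER}\n\n${aiText}` : aiText;
  return { text, handledBy: "ai" };
}
```

A real deployment would presumably also log AI-generated outputs for review by a licensed member, in keeping with the "human in the loop" expectation described above.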
Most brokerages are not building their own in-house AI capabilities but rather relying on third-party technologies, Dentons partner Marisa Coggin noted in the webinar.
“Because of that, there’s inherently going to be an element of third-party risk that needs to be addressed and mitigated on an ongoing basis,” Coggin says. “AI tools can certainly help brokers meet their duties and expectations of customers. However, brokers do need to ensure that they’re reviewing any AI-generated documents, information or any marketing messages.
“And this isn’t an AI-specific comment, but the focus really needs to be on providing the customer with the highest level of service and ensuring all information provided to the customer or to the public, for that matter, is clear, complete and accurate.”
Keeping a client’s information private is also of paramount importance when dealing with third parties on AI projects, Cardy adds. “In terms of a practical tip for your governance policy, just extend any privacy-specific policies and procedures that you have to the use of AI as well.”
