How to incorporate AI threats into your cyber risk training
Employee training on AI-associated risks is crucial.
Formal training should be quarterly and cover the basics like data security, privacy and how to spot social engineering cyber threats like phishing, says Leanne Taylor, senior manager of technology, cyber and professional lines at Sovereign Insurance.
The key is to make training practical and refresh it regularly.
“People typically learn best when they can see how AI or cybersecurity issues impact their actual roles,” she says. “Instead of just using theory, companies should build training around real scenarios, so things like real phishing attempts, data privacy challenges or how AI decisions affect your particular clients.”
Training should be continuous, adds Sam Chapman, CFC’s Canada technology team leader. Plus, it should include scenario-based phishing simulations.
Companies can also deploy verification processes, such as two-person sign-off for financial transactions and for the sharing of sensitive information with third parties, Taylor says. And they should have AI auditing processes in place.
Employees need to know reporting channels for raising AI-related concerns or flagging potential misuse, says Jaime Cardy, senior associate at Dentons Canada LLP. “…Emphasize the importance of maintaining a ‘human-in-the-loop’ to critically assess AI outputs and clarify how employees can meaningfully review AI outputs.”
Cardy says training should include:
- Implementing clear policies that define appropriate versus inappropriate use of AI in the workplace
- Restricting employees’ use of AI in the workplace to employer-vetted and approved AI tools
- Requiring employees to complete training before using any AI tools in the course of their duties, ensuring they understand both the capabilities and limitations of these tools
A strong workplace culture and open mindset will also help make employees feel comfortable challenging things such as potential phishing scams, Chapman says.
“We need to put our hand[s] up and recognize that something doesn’t look right,” he says. “You’re not taking up someone’s time.
“I don’t think many senior executives would be aggrieved if you were to come [to] them just to confirm that what they’ve sent you is actually true…and it mitigates a much bigger issue.”
Cyber incident response planning is a key control in reducing an organization’s likelihood of experiencing a breach-related claim, according to a late-August report from the Cyber Risk Intelligence Center of Marsh McLennan. The report finds organizations that regularly engage in tabletop exercises and scenario-based breach response drills are 13% less likely to experience a material cyber event than those that don’t.
AI-generated scams often have tell-tale warning signs, such as urgency, demands to cut corners or bypass usual processes, and unusual tones or manners of speaking. To counter potential scams, sources recommend calling people on a known number rather than one provided in the ‘urgent’ email, or visiting them in the office. “If it doesn’t feel right, it typically isn’t,” Chapman says.
“AI is an amazing tool that has revolutionized the way we do business,” Taylor adds. “The goal here isn’t to create fear, it’s to build healthy skepticism…
“Having the right cyber policy in place to cover your business should an employee miss a red flag is equally as important. As businesses are forced to be agile and adapt to the changes brought forth by AI, so too must their cyber insurance policies.”