Every other day, Jaymin Kim wakes up to headlines about advances in generative AI, then starts thinking about their risk implications.
In her role as Senior Vice President, Emerging Technologies, Global Cyber Insurance Center at Marsh, she needs to “proactively stay abreast” of the changing technological landscape, assess whether there are new risks and be able to address client organisations’ needs.
“Over the past few years, 120 per cent of my time has been devoted to generative AI because that is where the onslaught of client questions has come from – from a risk mitigation and a risk transfer perspective,” says the Toronto- and Quebec City-based Canadian-Kiwi.
One thing is clear: the adoption of ChatGPT and other generative AI technology has accelerated across all sectors, public and private, and organisations large and small.
One of the big questions is how it will be regulated across the world. Regulations and legislation are already in place in the US at both federal and state levels. In October 2023, the Biden Administration issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Kim says AI-specific legislation and regulation sit on top of existing law, such as consumer protection and data privacy regimes, including the General Data Protection Regulation in the EU and the California Consumer Privacy Act in the US.
“The EU tends to be more rules-based when it comes to regulations, whereas the US tends to be more principles-based. If you look at the EU AI Act, they have listed what they consider to be unacceptable, high, limited, and minimal-risk systems. There are corresponding requirements that organisations must adhere to.
“I think the question for New Zealand is: will it follow more of a rules-based framework like the EU’s, or a principles-based one? This will come down to a bit of politics and culture.”
Kim says she is increasingly seeing AI-specific regulations emerge in certain industries and specific states in the US – typically highly regulated industries, such as healthcare and financial services.
“I have seen various regulations explicitly calling out board oversight and senior management oversight, and the need for a centralised framework and guidance to be put in place so there is top-down risk management and strategic guidance provided to all employees.”
Kim says her personal take is that this is happening because generative AI marks one of those rare historical cases where advanced technology was put in the hands of everyday individuals before organisations had a chance to figure out how to use it.
“When ChatGPT came out, I had senior executives tell me their company policy was that employees were not allowed to use generative AI. That is not an enforceable policy because anyone with access to the internet and some kind of device can and will use it. The question is not whether you will allow its use; rather, it is how you will provide appropriate guidelines and processes to ensure appropriate use.”
She points to the example of workers knowing they should not use their personal Gmail accounts for work. “In a similar vein, we need to develop that kind of cultural norm so employees know they should not enter proprietary company data into the public version of ChatGPT. That is where board oversight and senior management direction are critical.”
Coming from the insurance world, she says she sometimes hears the misconception that AI exposures can be covered primarily by Cyber and Technology Errors and Omissions insurance.
“This technology is being used for everyday business activities, which means a whole lot could go wrong and trigger pretty much all commercial lines of insurance, including directors’ and officers’ liability,” she says.
Kim gives a hypothetical: a public corporation has deployed a customer-facing chatbot. The chatbot hallucinates misinformation about product safety and, as a result, some customers suffer bodily injury or financial loss. This may trigger a class action lawsuit and cause the stock price to plummet. Enhanced regulatory scrutiny follows, and the question will arise: ‘what kind of governance was in place?’
“Hopefully, the board’s meeting minutes show that AI oversight has been an agenda item, and that appropriate governance frameworks, policies and guidelines, developed in partnership with senior management, have been discussed over time.
“That is the message I’ve been trying to convey across various audiences. The risk of this technology should not just lie with your CISO or CTO. It is also the CHRO when it comes to training, communications and cultural change, plus the CFO and business leaders, who are deploying and using the technology and providing that vital feedback loop in terms of what’s working well and not working well.”
She says boards and business leaders need to ask themselves, ‘what is the business outcome we seek to achieve, and is generative AI the optimal technology to get us to that outcome?’ They also need to get ahead of the downside risks because an incident involving generative AI is a matter of when, not if.
The ‘when’ has already happened for lawyers in the US and Canada who submitted court filings citing fictitious cases fabricated by a chatbot. Some New Zealand lawyers were also tricked by a chatbot in 2023.
“Most organisations I’ve spoken to are still trying to figure out how to implement governance policies. A key challenge is that there are silos across different functions, and there typically isn’t a natural way for the CISO, the CHRO, legal and compliance, and the CFO and business leads to come together to talk about AI unless a formal meeting is established for that purpose. A lot of organisations are still asking questions such as, ‘what are the risks and what roles should the board play?’
“We need to see more movement towards the phase where boards are working in partnership with senior management to create dedicated subcommittees and/or delegate tasks, and to move on from writing up a policy to effectively implementing it. And how do we institute feedback loops to ensure our governance structure is working well?”
Kim also suggests public-private partnerships could be beneficial in New Zealand because interests will diverge and no single organisation or industry is going to be able to think through every possibility. “That’s part of what I do – thinking through what could possibly go wrong.”