Although the hype around AI in the workplace has reached fever pitch, organisations need to get serious about how they are going to implement the right AI tools. Security and privacy must be at the forefront of these decisions. This is where the board plays a role, ensuring risks are balanced with opportunities.
AI systems such as Microsoft Copilot and Google Gemini feed off data. This could be information you give them directly, or even insights picked up from the transcripts of recorded meetings. Some AI tools have the power to scan through your emails, your calendar and your company’s files.
It is imperative that businesses put the appropriate guardrails in place to ensure AI does not have unmediated access to sensitive or confidential data.
So, what can directors and boards do to achieve success while ensuring off-limits data is not compromised along the way? Here are five considerations to ensure your business reaps the rewards:
It is easy to get caught up in the hype over the latest AI technology on the market, so step back and ask what the business use case for AI actually is.
This is where the board can help – asking questions about objectives and whether AI is the right tool to achieve them. For example, if the goal is to increase revenue or customer satisfaction, is that something AI can genuinely help with, and what are the associated risks and rewards?
It is worth noting most ‘secure’ AI systems charge for the amount of data you put through them, which brings into focus the business value of using such systems, in addition to the potential privacy and security issues.
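To make that cost dynamic concrete, the short sketch below works through a back-of-the-envelope estimate. Every figure – the rate, token counts and volumes – is invented purely for illustration; real usage-based pricing varies widely between providers.

# Hypothetical usage-based pricing: all figures below are invented for
# illustration only; check your provider's actual rates.
rate_per_1k_tokens = 0.01        # dollars per 1,000 tokens (hypothetical)
tokens_per_document = 25_000     # roughly a 50-page document
documents_per_month = 2_000

monthly_cost = rate_per_1k_tokens * (tokens_per_document / 1_000) * documents_per_month
print(f"Estimated monthly spend: ${monthly_cost:,.2f}")   # -> $500.00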
The key is to set boundaries upfront: decide clearly what data you want your AI to have access to before you implement it. For example, limiting AI to the data held in your business’s CRM system (with personal customer information excluded) will prevent it from reaching data that falls outside that boundary, such as your financial management system.
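As a minimal sketch of what such a boundary can look like at the integration layer, the Python snippet below applies a default-deny allow-list before any document reaches an AI tool. The source names and the send_to_ai_tool function are hypothetical, not part of any particular product.

# Hypothetical guardrail: only data sources explicitly approved for AI
# use may be forwarded; everything else is rejected by default.
APPROVED_SOURCES = {
    "crm",           # customer records, personal details excluded upstream
    "product_docs",  # public-facing product documentation
}

def send_to_ai_tool(source: str, document: str) -> str:
    """Forward a document to the AI tool only if its source is approved."""
    if source not in APPROVED_SOURCES:
        raise PermissionError(f"Source '{source}' is not approved for AI access")
    # Placeholder for the actual call to your AI provider's API.
    return f"[AI response for document from '{source}']"

# The financial management system is not on the allow-list, so this raises:
# send_to_ai_tool("finance", "Q3 management accounts")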
It is also important to choose the AI tool that best fits your business model. Large language models work well as generative tools and are useful for content generation, customer support and product design. Other AI tools are built for narrower tasks, such as sales forecasting, customer analytics or scientific research – AlphaFold3, for example, is a specialised model for predicting protein structures, and a tool like that will suit certain use cases far better than a general-purpose model would.
It is about choosing the AI tool to suit the business challenge or opportunity, instead of the other way around.
AI success relies on good data governance – the basic guiding principles that keep data from falling into the wrong hands. Boards are responsible for ensuring secure AI use across the businesses they govern, and data governance policies are their primary mechanism for doing so.
A good data governance policy should provide guidelines for responsible data handling and, if it is robust enough, will suit AI and human use alike. Some businesses, however, opt to build on this foundation by adopting additional, more specific AI policies. Kordia has developed a handy AI policy checklist to help boards and executives develop additional AI policies.
If you don’t have a data governance policy, you will need to put one in place before implementing any AI tools in your business.
Many tools, including the likes of Office 365, have built-in AI elements that cannot always be switched off completely. This means AI may be present in the business inadvertently, which brings questions of security and best practice even further to the fore.
Before any data is input into AI tools it should be sanitised – that is, checked that it is up to date, relevant to the business use case underlying the AI tool, and free of any personal or sensitive details. This ensures the tool works effectively, while still protecting confidential business information.
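As a minimal sketch of what automated sanitisation can look like, the snippet below redacts two common kinds of personal detail – email addresses and phone numbers – with regular expressions before text is sent anywhere. The patterns are illustrative only; real sanitisation needs to cover far more (names, addresses, account numbers) and would typically lean on a dedicated PII-detection tool.

import re

# Illustrative patterns only: real PII detection needs broader coverage.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{7,}\d")

def sanitise(text: str) -> str:
    """Redact email addresses and phone numbers before text reaches an AI tool."""
    text = EMAIL_PATTERN.sub("[EMAIL REDACTED]", text)
    text = PHONE_PATTERN.sub("[PHONE REDACTED]", text)
    return text

print(sanitise("Contact Jane on +64 21 123 4567 or jane@example.com today."))
# -> Contact Jane on [PHONE REDACTED] or [EMAIL REDACTED] today.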
Another important part of this process is reviewing the terms and conditions of any AI model carefully.
Terms and conditions frequently state that the AI provider is entitled to ownership of the data input and the generative outputs, as well as the model itself – including custom AI models designed to suit your unique business. This is a potential recipe for disaster: sensitive information and business IP could be compromised, with serious legal implications.
In addition to reading the Ts & Cs closely, directors should negotiate with AI providers to clarify ownership rights up front.
Regularly reviewing company-wide AI practice is an essential part of secure AI use. In practice, this means six-monthly or yearly reviews of the outputs AI is producing within the business, the types of data being input, and the return on investment from AI operations.
Looking at AI operations in a measured and objective way, on a regular basis, helps to track whether key business goals are being met, as well as whether privacy guardrails are functioning effectively.
It also helps businesses keep on top of the rapidly evolving nature of AI tools, so that changes are made when needed and these tools keep tackling current challenges and opportunities while delivering a measurable return on investment.
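As one hedged illustration of the quantitative side of such a review, the sketch below aggregates a hypothetical usage log into a handful of metrics a board could scan at each review. The log fields and figures are invented; in practice the data would come from your AI platform's audit trail or billing export.

# Hypothetical usage log: fields and figures are invented for illustration.
usage_log = [
    {"month": "2025-01", "requests": 420, "flagged_sensitive": 3, "cost": 310.0},
    {"month": "2025-02", "requests": 510, "flagged_sensitive": 1, "cost": 365.0},
]

total_requests = sum(row["requests"] for row in usage_log)
total_flagged = sum(row["flagged_sensitive"] for row in usage_log)
total_cost = sum(row["cost"] for row in usage_log)

print(f"Requests processed:       {total_requests}")        # throughput
print(f"Sensitive-data incidents: {total_flagged}")         # guardrail health
print(f"Cost per request:         {total_cost / total_requests:.3f}")  # ROI input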