From buzzword to boardroom: making AI work for your organisation

Boardroom article
By Campbell Featherstone, Commercial, Technology and Privacy Partner, Dentons
18 Dec 2024
3 min read

The prospect of implementing AI tools within an organisation can be daunting. Many AI tools, especially those incorporating generative AI, are largely untested and unreliable. And plenty of horror stories exist about the misapplication of AI (not just those seen on the silver screen). 
 
From lawyers relying on AI-fabricated cases in the US courts to chatbots going ‘rogue’ and delivering hallucinated advice, the global media has been awash with news about failed uses of AI tools. 
 
However, directors should not let this deter them from starting their AI journey. By approaching implementation with a strategic mindset, directors can harness the benefits of AI while mitigating potential risks to a sensible degree. 
 
Those who fail to seize the moment to harness the use of AI in 2025 may find the moment is lost in time – like tears in the rain. 
 
It goes without saying that organisations are responsible for the outcomes of the AI tools they implement. Despite Air Canada’s best efforts this year to argue its chatbot was “responsible for its own actions”, the airline was still found liable for misinformation about its fare policies that the chatbot had communicated to its customers. 
 
Using AI tools, especially in customer interfaces, is far from a risk-free proposition – but don’t let that put you off. Humans make mistakes, too. So long as you have appropriate processes in place for ensuring quality control, the fact that AI could get it wrong should not be enough to stop you from deploying it. 

The longest journey starts with the first step

Consider, as a first step, implementing AI tools in low-risk, non-customer facing environments, where the risks can be controlled and appropriate guardrails implemented. 
 
Start small. Many organisations are looking for the deus ex machina to arrive centre stage and solve all their problems in one fell swoop. However, AI implementation does not have to start with grand, sweeping changes. 
 
If you know your customer and you know yourself, you need not fear the result. Identify your organisation’s most straightforward use cases for AI solutions, such as where the organisation is undertaking repetitive tasks. 
 
While the problems that these AI solutions solve might not be ground-breaking, they can deliver tangible benefits while allowing staff to become familiar with using AI as an everyday tool. This can help build a platform to implement more wide-reaching, transformative AI processes. 
 
Spend the time also to get to know your customers and stakeholders, especially those who will interact with you through AI tools. While most people expect organisations to adopt AI tools in service delivery, they expect those tools to supplement human interaction, not replace it. 
 
Striking an appropriate balance between using AI tools and human interaction can enhance customer satisfaction and maintain reputation, while freeing up human employees to focus on higher value tasks, such as relationship building. 
 
Think carefully about reputation management. You will need to control the narrative. Consider what AI means for your ESG commitments and what it might mean for your workforce. Do your employees need to be reskilled, and how can you support them in the transition to using AI tools? 
 
If you are going to use your customers to train your AI, think about transparency and control. Apart from the legal requirement that customers are made aware of how their personal information will be used, customers also want choice and to be in control. Think carefully before opting customers in by default to having their information used to train large language models – it is highly unlikely you have the social licence to do so. 

When there’s a will, there’s a way

Chances are employees within your organisation are already using AI tools daily. It is human nature to search for ways to make life easier. Rather than having those employees play with publicly available AI tools in a high-risk environment, think carefully about establishing a safe space for them to use AI, making tools available that have built-in guardrails. 
 
Use AI tools yourself. The best way to understand the limitations of AI tools, and how they ‘think’, is to have a good play around with their capabilities. There are plenty of free tools available which, provided you don’t input sensitive information, can be used to practise your prompt engineering and generate content. 

The more familiar you are with these tools, the more insight you will have into how your organisation, and the people within it, may benefit from the adoption of AI. 

Campbell Featherstone advises clients on all aspects of the introduction of new tech to the market, on the procurement of tech, and on all manner of privacy and data issues. He is also a member of the firm’s AI steering group, which is looking at the deployment of AI and other legal tech.