Harried by the AI hype? Choose your moment

Boardroom article
By Noel Prentice, Editor, IoD
18 Dec 2024
3 min to read

If you are not going to be first, make sure you are not the last. That is the advice from technologist Breccan McLeod-Lundy CMInstD as another “serious AI hype cycle” sweeps the world. 

McLeod-Lundy, the founder and CEO of software development company Ackama, says boards and businesses need to weigh the benefits of jumping in now or waiting for someone to build the desired AI model. 

“AI has developed to the point where everyone should be exploring it, but you do not necessarily need to start now because it is still advancing so quickly. If the benefits you would get are marginal, you could easily wait six months, and the benefits you get by doing an implementation then will be even greater because it is advancing incredibly quickly – the tooling gets better and better.

“The implementation tooling cost is heading in a good direction now, and it is worth waiting to let the dust settle a little bit. If you are not going to base your business model on being first and taking advantage, you are not going to be disadvantaged in any measure.”

Breccan McLeod-Lundy

McLeod-Lundy says it is an interesting challenge for businesses because if someone else jumps in and starts getting benefits, they might have an advantage for a while. And if you are big enough that you can keep making that investment every six months as the technology changes, then jumping in earlier is good. 

“But if you are going to do an AI implementation and sit on it for three or four years, I’d still wait another six to 12 months and then do my big capital-intensive project because, otherwise, you’re going to be sitting there thinking, ‘Oh, I’m still stuck with the first version of the iPhone when everyone else has an iPhone 4’.”

The Wellington-based 35-year-old says boards should, at least, be experimenting with AI and working out what looks good and where the benefits are the greatest. 
 
“For a lot of organisations, AI preparedness is looking at your data management strategy and making sure you have finished digitising all your forms. AI will confidently take your data and believe it, so if you give it bad data it will start throwing out some bad outcomes.” 
 
Training new models is incredibly expensive, and fully deploying an AI-driven bot without someone paying careful attention can have all sorts of consequences, from the humorous to the potentially dangerous. In 2023, Pak’nSave attracted plenty of unwanted attention when its new AI meal-bot generated meal plans including recipes for toxic gas and poisonous meals.
 
Generative AI is very much in the human-in-the-loop space, says McLeod-Lundy, where a human verifies each output before it is used. The next level is human-on-the-loop, where outputs are sent automatically but a human keeps a close eye on them.
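The distinction between the two oversight patterns can be sketched in a few lines of code. This is a hypothetical illustration only – every function name here is invented, and it is not drawn from any real system or library:

```python
# Illustrative sketch (hypothetical names) of the two oversight patterns:
# human-in-the-loop gates each output on explicit approval, while
# human-on-the-loop sends outputs automatically under human monitoring.

def human_in_the_loop(generate, human_approves):
    """A person must sign off on each output before it is used."""
    draft = generate()
    if human_approves(draft):
        return draft      # only approved drafts go out
    return None           # rejected drafts are discarded

def human_on_the_loop(generate, notify_reviewer):
    """Outputs go out automatically; a person monitors and can intervene."""
    output = generate()
    notify_reviewer(output)   # the reviewer watches the stream...
    return output             # ...but the output has already been sent
```

The practical difference is where the human sits relative to the release of the output: in the first pattern nothing reaches the user without sign-off; in the second, the human can only catch problems after the fact.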
 
“The reality is we are not at a place where we can trust it without human intervention,” he says. “The best uses for AI are still informing people who understand the tool and understand what they are dealing with.” 
 
He advocates that at least a couple of board members take the time to properly experiment and “not just read some opinion pieces because there is a pretty serious hype cycle happening at the moment”.
 
“It is a technology that is easy to interact with: you can hop into Claude or ChatGPT, upload documents, have a play, and quickly find the edges of what it does well and what it does badly. That is constantly evolving.”

“There is certainly a danger of organisations making an early judgement that this is not useful because it is still a little bit broken, a little bit overconfident, and saying ‘we won’t look at it again for three years’ – and that would definitely be a bad outcome.”

McLeod-Lundy, who is a director of Ackama Group and its subsidiary boards, says it all comes down to trusting that AI is not going to run off with your data and use it as part of its training.

For boards, there is a debate around the risk of not having a strategy. Free versions of tools, which are what people will play with if they do not have a strategy, will take your data because that is usually the trade-off for using the free version. Paid versions will make some guarantees, but whether you can trust that they will not use your data is another question.
 
McLeod-Lundy says the bigger questions are around issues such as digital identity and data sovereignty because there are “basically no large language models running domestically in New Zealand where you can trust your data isn’t going offshore, at least for processing”. 
 
He says Aotearoa is behind the rest of the world in AI research “and I don’t think there’s any reason to believe we’re going to catch up”, with the big players pouring hundreds of billions of dollars into buying huge data centres and hiring the best people with deep research expertise. 
 
But when it comes to the actual application of that AI and finding good spots in business or technology or anything else to apply it, New Zealand is not far behind, he says. 
 
 
“The really interesting stuff with AI in the long run is data management because any sort of automated tooling that we might develop over the next few years is reliant on having good, centralised data that you can feed into that tooling in a large volume.” 
 
The biggest impact he sees is the technology replacing workers at the junior knowledge level, and the entry pathway from education to industry becoming much harder – “as we are seeing in tech”. That will be the long-term challenge, he says.