The AI guardrails schism – where should NZ stand?
World leaders are no longer in step when it comes to regulating the impacts of AI.
While Chinese startup DeepSeek may be the new flavour – or foe – of the month, it shows there are plenty of horses to back in the AI race, says generative AI strategy consultant Tim Sharp.
ChatGPT may be the favourite, but DeepSeek rocked big tech and Wall Street in late January – AI chip giant Nvidia lost nearly US$600 billion in market value – with the launch of its sophisticated, cost-efficient AI model.
The startup’s perceived threat has snowballed with Australia banning the model from all government devices over an “unacceptable level of security risk”.
And this week Parliamentary Service advised all New Zealand MPs that DeepSeek and three other Chinese apps – WeChat, Red Note and CapCut – will be blocked from any device on the Parliamentary network.
Sharp, who will be a key speaker at the IoD’s Governing AI Forum in March, says it is one of the first times a particular tool, model or program has captured people’s imagination since ChatGPT launched in late 2022.
“There are a lot of questions about whether it was trained for as little as has been reported (US$5.6 million) – I think probably not,” says the Kiwi. “There’s a lot we don’t know but it certainly shows other countries can compete with the US in terms of capability.
“It is notable that models like this are open, even if not truly open source. I don’t think it’s going to dethrone ChatGPT, for example. It doesn’t have the distribution or the infrastructure; it’s more of an experiment. But it shows it doesn’t have to be a three- or four-horse race when it comes to developing large language models.”
Sharp says it is an exciting development and it should not concern the director community – perhaps with the caveat that you want to be looking at the terms and conditions of whatever AI-based tool you are using to make sure you know where your data is going. That is true of any tool, whether that be ChatGPT, Claude or Perplexity.
“We need to take the FOMO out of all of this and not be acting or setting strategy from a position of ‘we are being left behind’. It is still early days and things move very quickly. One of my big learnings is that every time I make an assumption about AI or generative AI, I usually have to toss it out the window.”
Sydney-based Sharp, who works with customers and partners worldwide, says he does not believe DeepSeek necessitates a change of strategy because it is more of a high-level disruption among the big tech companies in the US.
“We have seen the likes of OpenAI and Microsoft scrambling to respond. We’re talking about a small set of companies that have the resources to train large language models and, suddenly, the thinking is that maybe a bigger universe of companies can train their own models.”
Sharp stresses the need for boards and directors to understand “what GenAI is and what it’s not”, and warns it is an emerging experimental technology that is being released before it is ready.
“If a board looks at GenAI, or more traditional machine learning, and says, ‘I’m worried about things like privacy, or security, or intellectual property, or accuracy, or whatever it may be’, then that’s great, provided it comes from a position of understanding the tech.
“As a business, you are impacted by AI, whether you’re adopting it or not. There are big threats through external actors, bad actors, deepfakes or impersonation scams. It takes eight seconds for a voice to be cloned, a little bit longer for our likeness,” he says, highlighting the finance worker in Hong Kong who was tricked into transferring millions in a deepfake scam.
Boards also need to be aware of shadow AI, the use of GenAI within an organisation and outside its IT governance.
“It’s common for me to come into a business and find, anonymously, that more than 50 per cent of team members are using something like ChatGPT daily, weekly, even if it’s not technically allowed.
“More often than not, there’s no policy in place, and either of those conditions creates an environment where you don’t know how widely it is being used, but you know it is. So, as a board, you need to be right on top of that.
“It’s an easy win in equipping your people to work faster and better – the question really is what are you going to do with all that time saved? And I would caution against trying to restructure your business overnight, for example, and instead reinvest productivity savings into innovation.”
AI is not a silver bullet for productivity, but the gains are there and relatively easy to capture, he says. “For example, I work with a regional Australian technology company and we believe three or four off-the-shelf AI tools save them 300 hours a week, in a crack squad of 30 people. The key thing is to benchmark that over time.”
As a specialist in marketing, Sharp says disruption is taking root faster in that part of business and improving all the time. “All of a sudden, you can shoot a professional quality TV commercial without picking up a camera.
“One of the biggest challenges is evaluating performance in the real world. Most of the time we know what good looks like and we are using tools such as ChatGPT to get better, or to get to ‘good’ faster. We have the benefit of knowing what good looks like because we have been there, done that, for years.
“That’s when you have those jaw-dropping moments, when you think, ‘Cool, I’ve saved a whole lot of time here and got a good output’, then you’re crafting it and iterating it along the way.
“You can do things you couldn’t do before. You can generate high-quality imagery, video, audio and voice without picking up a camera or a microphone from a couple of text prompts. You can conduct market research much faster and, perhaps, more effectively than before.”
Accompanying that is an uneasy sense you may be automating yourself out of a role in the future, he says.
“And on top of that, you have the governance layer. This is an entirely new paradigm we have not really encountered before. That includes intellectual property. For example, who owns the copyright to a generated image?
“Once again, the technology is far ahead of the regulation and so the onus, rightly or wrongly, is falling onto the users to make sure we’re doing it in a responsible and ethical way, which is easier said than done.”
Tim Sharp is the founder and principal at Sydney-based generative AI strategy consultancy GEN8, providing strategy, training and governance to complex teams worldwide. In parallel, he develops and teaches ‘AI for Marketers’ at online business school Section. He will be speaking at the IoD’s Governing AI Forum 2025 in Auckland on 19 March.