The AI guardrails schism – where should NZ stand?

World leaders are no longer in step when it comes to regulating the impacts of AI.

Article · By Peter Griffin · 20 Feb 2025 · 4 min read

US Vice-President JD Vance’s warning to the European Union against heavily regulating artificial intelligence, coupled with the US and UK refusal to sign the “inclusive and sustainable” AI declaration at the Paris AI Action Summit, signals a significant shift in the global approach to AI governance.

According to attendees, the third global summit on artificial intelligence struck a different tone this week.

While the 2023 Bletchley Park summit, hosted by UK Prime Minister Rishi Sunak, was all about countering the existential threat AI poses to humanity, and its 2024 follow-up meeting in Seoul continued the theme, safety was stripped from the title of the Paris event.

Nevertheless, the letter co-signed by this informal group of nations, New Zealand among them, actually broadened the responsible AI theme to look at the sustainability of AI, given its intense energy requirements and the implications of job displacement as AI use grows.

The UK wasn’t willing to sign up for that, worried about the implications of sustainability goals limiting AI’s development and puzzled as to how global governance could be applied to AI. JD Vance made the US position crystal clear in a speech delivered in front of the summit’s hosts, French President Emmanuel Macron and India’s PM, Narendra Modi.

“To restrict [AI’s] development now would not only unfairly benefit incumbents in the space, it would mean paralysing one of the most promising technologies we have seen in generations,” he told them, taking aim in particular at the European Union’s “onerous international rules” on AI and efforts to regulate Big Tech.

Déjà vu in Paris

The Paris AI Summit has shades of a previous effort to band together on digital safety led by Macron and our own Dame Jacinda Ardern.

While the Christchurch Call on online safety, launched in the wake of the Christchurch mosque massacre that was live-streamed by the perpetrator on Facebook, has largely fizzled out, the rolling AI summits have more momentum.

For instance, this week’s event saw the creation of Current AI, a partnership of countries and industry giants like Google and Salesforce that will receive US$400 million (NZ$705.9m) in investment to lead public interest AI projects, such as the development of open-source tools.

It is aiming to attract $2.5 billion in capital over five years.

The 58 countries that signed the most recent AI declaration accept the need to get on the same page regarding AI safety and sustainability, and many of them are already regulating AI to some degree.

But President Trump is all about “pro-growth AI policies”. Long before he returned to the White House to put his elaborate signature to a slew of executive orders, Trump signalled his desire to revert to a hands-off approach to AI regulation, emphasising innovation and economic growth over stringent safety measures.

One of his first acts on taking office was to revoke the 2023 executive order issued by President Biden aimed at ensuring the safe and responsible development of AI.

One provision required the likes of OpenAI and Google to share details of their cutting-edge AI development with government regulators early.

That was a sound idea.

The Trump factor

But Trump’s executive order ditched that requirement, instead focusing on using AI to boost economic development.

His Stargate announcement the day after his inauguration lauded an industry consortium featuring SoftBank, OpenAI and Oracle, which has pledged to invest up to US$500b in AI data centres and other infrastructure.

This stance contrasts sharply with the EU's more cautious approach, exemplified by regulations like the Digital Services Act and the AI Act, which, as of early this month, grants the European Union the power to ban AI systems it deems to pose “unacceptable risk” to citizens.

Even as he talked up responsible AI in Paris, Macron wasn’t oblivious to the changing geopolitical landscape.

“We will simplify,” Macron said of regulations seen to be holding back AI development. “It's very clear we have to resynchronise with the rest of the world.”

The UK and the US are nevertheless diverging from the EU and China on AI’s future. So where does that leave little old New Zealand?

We joined 58 other countries in endorsing the AI Action Summit’s statement.

But so did China, which is quite happy to apply its state censorship mechanisms to Chinese-developed AI chatbots like DeepSeek and sees achieving AI supremacy over the US as a national security goal.

Light-touch regulation a strength

The AI framework for New Zealand’s public sector was released two weeks ago, and its plan on a page included much of the same generic and rather nebulous language that was bandied about in Paris this week.

In reality, we are pursuing a “light-touch, proportionate, and risk-based” approach to regulating AI. In practice, that means doing very little at a government level either to police AI’s use or to stimulate its uptake.

As BusinessDesk previously reported, we’ve plunged down the Oxford Insights global rankings of government AI readiness.

Minister for Digitising Government Judith Collins told public service leaders in a speech this week that “the use of data and artificial intelligence is the big opportunity of our time”.

“New Zealanders already interact with AI-powered services daily. They expect government agencies will be analysing data to gain insights into customer behaviour, preferences and needs,” she added.

“I’d like to see the public service embrace the potential of AI.”

So would I.

With less regulatory burden, New Zealand could become an attractive destination for AI companies and researchers seeking a more permissive environment.

As our Government rolls out the welcome mat to international investors, that’s an angle that may appeal to US and UK investors, in particular, who are looking to AI to make their governments more efficient and user-friendly.

The approach to AI safety clearly remains a delicate balancing act.

We must remain committed to ethical AI development. But with the global consensus on stricter AI governance weakening, we also need to get real and identify opportunities to accelerate the use of a technology we’ve so far failed to exploit to its full potential.

The article first appeared on www.BusinessDesk.co.nz on 13 February; republished by permission.

Peter Griffin is a Wellington-based science and technology journalist with 20 years’ experience in the New Zealand media covering science, technology, media and business.