AI regulation in Australia: the implications for New Zealand directors
Australia is leading discussions on regulating AI. A recently released proposals paper outlines measures that could soon affect businesses operating in high-risk sectors. For New Zealand directors with interests across the Tasman, these developments are worth paying attention to.
The Australian Government’s proposals include a mix of voluntary guidelines and potential mandatory rules, meaning companies and organisations offering AI products or services could face new obligations. As Australia explores its regulatory approach, the question for New Zealand is: should we be following their lead?
Australia’s AI guardrails: what’s happening?
The Voluntary AI Safety Standard, introduced in September 2024, is the first step in guiding organisations toward responsible AI use. It includes 10 voluntary guardrails designed to mitigate risks while allowing innovation.
These guardrails emphasise transparency, risk management and human oversight. For example, transparency requires organisations to disclose how AI decisions are made. In practical terms, this could mean detailed documentation of AI algorithms.
For high-risk sectors, Australia is proposing mandatory guardrails, requiring:
- Testing and validation: AI systems must undergo rigorous testing to ensure they perform as intended, particularly in sensitive sectors.
- Human oversight: Organisations must implement mechanisms allowing human intervention in AI decision-making processes.
- Transparency: Organisations will be required to disclose how AI systems make decisions, ensuring outcomes can be scrutinised and understood by those affected.
- Conformity assessments: AI systems in high-risk sectors will be subject to ongoing compliance checks, either through internal or third-party assessments.
Australia plans to finalise the mandatory AI regime by mid-2025, with legislation potentially taking effect by 2026. For directors of organisations operating in Australia, this means planning compliance measures now, particularly if AI is central to the business model. Selling AI products or services into Australia may be enough to trigger compliance obligations.
How does Australia compare globally?
Australia’s regulatory approach aligns with international trends but differs in its execution. The European Union (EU), through its Artificial Intelligence Act, also adopts a risk-based approach, categorising AI systems by risk level, from unacceptable to minimal.
The EU imposes stringent rules on high-risk AI, including requirements for third-party conformity assessments, and bans certain AI uses, such as social scoring by governments (classifying people for the allocation of public assistance benefits). Non-compliance can result in significant penalties of up to seven per cent of a company’s global annual turnover.
In contrast, Australia’s regime is more flexible and phased. It starts with voluntary guardrails, giving industries time to adapt before mandatory regulation takes effect. That regulation will focus on high-risk AI systems, such as those used in healthcare, law enforcement or critical infrastructure.
While Australia has not yet proposed outright bans on specific AI use cases, such as real-time biometric identification, these are areas that may see future restrictions.
Should New Zealand follow suit?
New Zealand is in the initial stages of AI regulation development, with no comprehensive framework introduced yet. The Minister of Science, Innovation and Technology, Hon Judith Collins KC, has emphasised the adaptability of New Zealand’s existing laws and the development of an AI roadmap to ensure safe innovation.
Ongoing discussions, including a recent Cabinet paper, highlight the need for a “light-touch, proportionate and risk-based approach” and outline the key challenges in designing appropriate regulation.
While New Zealand is beginning to shape its AI regulatory framework, several commentators have identified gaps that may need addressing. For instance, reliance on existing laws such as the Privacy Act 2020, which was not designed with AI in mind, has raised concerns about whether these frameworks are robust enough to manage AI’s unique challenges, especially in sectors such as healthcare.
Additionally, there is growing awareness around Māori data sovereignty, a distinct issue for New Zealand that requires more coordinated and culturally appropriate governance solutions.
What should directors be thinking about?
When operating overseas, directors need to stay informed of local regulatory environments and how AI regulations are evolving in key markets, such as the EU and Australia. Organisations exporting AI solutions or offering them to international customers may find their AI systems subject to audits or transparency requirements, depending on how these jurisdictions define the scope of their regulations.
Directors must ensure their organisations are planning accordingly and assess whether these evolving regulatory regimes will affect their business arrangements.
In New Zealand, directors need to ensure their organisation’s use of AI complies with existing legal frameworks, such as the Privacy Act 2020 and the Copyright Act 1994. For a summary of New Zealand legislation that applies to AI, see Understanding AI: A glossary for boards and directors.