The new frontier in corporate governance

Type: Boardroom article
Author: Simon Berglund, APAC Senior Vice President & General Manager, Diligent
Date: 18 Dec 2024
Read time: 3 min

Artificial intelligence is set to shake up corporate governance over the next few years, driving major shifts in how organisations are governed. The Fifth Industrial Revolution is here, and AI is at its heart.

This presents huge opportunities, but it also carries significant risks. We must be deliberate about how we use AI, managing those risks carefully so that it is deployed responsibly, safely and in a way that serves organisations over the long term.

The rules are constantly changing, as are the risks, technologies, and shareholder and stakeholder expectations. Directors must stay on top of it all, because AI is being used by their people, their organisation and their suppliers, and against them by bad actors.

This means understanding AI, managing data and AI responsibly, having strong risk controls in place, and ensuring an organisation’s systems are secure. It is all part of performing roles with care, diligence and skill. 

With a robust AI governance framework, organisations can harness AI as a tool that boosts human expertise and capabilities. 

AI can help organisations get ahead by improving due diligence, making competitive analysis more efficient, and spotting risks and opportunities faster, leading to better strategic decisions, better outcomes for clients and, ultimately, bigger profits for shareholders. 

What are the biggest risks in AI use for directors?

From a corporate governance standpoint, the biggest risks boil down to whether directors are meeting their legal duty of care, as outlined in Section 137 of the Companies Act 1993.

For example, directors could be personally liable if they drop the ball, either by not overseeing AI properly or not having strong risk management in place. Recent court decisions lean towards holding directors responsible, even if they were not directly involved in incidents or violations.

It is also important for directors to make sure their organisations follow all the current laws and guidelines. These include the Australian Government’s Voluntary AI Safety Standard and the mandatory guardrails set out in its proposal paper for public consultation, as well as New Zealand’s Harmful Digital Communications Act 2015, Fair Trading Act 1986, Human Rights Act 1993, Privacy Act 2020, and the Privacy Commissioner’s guidance on Artificial Intelligence and the Information Privacy Principles. Understanding and following these rules is key to meeting legal obligations as directors.

AI is the new frontier in corporate governance, so Diligent is investing accordingly, delivering and further developing AI-powered effectiveness tools to help directors handle their changing responsibilities with greater confidence, skill and efficiency.

For example, the AI-powered One Platform can summarise and compare board materials to spot inconsistencies, identify current and future risks, and update essential action items. These enhancements help directors make informed decisions faster, helping improve value creation and the organisation’s bottom line.

The AI Assistant in Compliance, on the other hand, summarises and categorises new and old regulations, making it easier to stay compliant with changing regulations, use resources more effectively, and improve overall governance.

How can directors ensure safe and ethical AI use?

Ethics is typically part of the AI conversation, and to serve customers better, AI features must be developed with robust safety and security measures that work in a global context.

The New Zealand Government has recently signed the Bletchley and Seoul Declarations. It is also working with the World Economic Forum (WEF) to co-design AI governance frameworks through a multi-stakeholder, evidence-based approach. These international efforts show New Zealand’s commitment to working with other countries to develop safe, responsible and ethical AI practices.

Organisations must work in step with these global developments, and commit to building AI capabilities that are trustworthy, accurate, innovative, secure, and prioritise privacy. This approach aligns with internationally recognised standards for AI governance, such as the OECD AI Principles.

It is important to note that, while Diligent’s AI innovations are built on industry-standard security protocols, organisations must still take ownership of how they use AI in their business. Ultimately, using AI effectively in governance and having good governance over AI are both key to success in today’s digital world. 

With AI becoming such a big part of how businesses operate, regulations must keep up and address the intersections of AI with other areas, such as privacy, sustainability, corporate responsibility, cyber security and geopolitics.

Since data is the foundation of AI and the whole digital economy, AI laws and regulations must also address the structural inequalities arising from the exploitation of commodified data, which can put consumer welfare at risk. 

AI can also spread bias at scale, so laws and regulations must place safeguards to mitigate systemic inequality arising from discriminatory practices.

But, as with any new technology or nascent regulatory framework, we can only see the effectiveness of these regulations once AI is widely used, and legal precedents are set. Over time, this greater understanding will help shape future AI laws, ensuring they support economic productivity, market competition, consumer wellbeing, and sustainable growth.