Here’s how to talk about AI with your board.
At the recent AI Forum in Auckland, I found myself chairing a fictional board meeting for a company that doesn’t exist. The stakes, however, felt very real.
Each table of directors was handed the same case study – Delicious Foods NZ Ltd, a fictional FMCG company under pressure to adopt AI. We were given a management report and a single instruction: choose one of three ways to begin the conversation.
The options included focusing on potential AI investments (costs, risks and long-term value), considering the organisational impacts (ethics and stakeholders), or starting with AI from a board governance perspective (risk and oversight). The simplicity of the choice masked the complexity that followed.
Some tables dived straight into the technology – how AI could help with quality control or marketing. Others paused to debate the values at stake: fairness, job loss, transparency. A few focused on oversight, asking what policies, skills and guardrails would be needed, whether a separate committee was required and what kind of reporting should be in place.
What struck me, chairing one of these fictional boards, was how hard it is to hold a productive conversation when the terrain is unfamiliar, the language is evolving and the urgency feels both immediate and abstract.
And that’s exactly what many real boards are now facing as they begin their first conversations on AI.
The reality is that AI isn’t coming – it’s already here, with most organisations using it in some form, whether they know it or not. Shadow AI, plug-and-play co-pilot tools, vendor-led automation and staff experimenting with free tools all raise immediate governance questions, presenting both opportunities to capture and risks to manage. But the challenge isn’t just understanding the technology – it’s knowing how to talk about it.
So how should a chair structure that first board conversation?
It depends on the organisation. But a few principles apply across the board.
Start with purpose. Is the goal of the conversation to surface risk, explore opportunity or build foundational understanding? If AI tools are already in use – even informally – the board may need to begin with privacy, data governance or ethical boundaries. As Privacy Commissioner Michael Webster reminded us at the Forum, AI becomes a privacy issue the moment it starts fabricating personal information. Boards must ask whether the tools in use are trustworthy – and whether the underlying data is fit for purpose.
Boards often include directors with deep digital expertise, and others who feel several steps behind. The chair’s role is to keep the conversation grounded in governance – not technical depth. All directors should be able to ask: What is the risk? What is the upside? What does this mean for our stakeholders, our people, our purpose?
As the IoD’s Four Pillars of Best Practice Governance reminds us, chairs shape not just the agenda but the culture around the board table. Curiosity, transparency and permission to learn are essential – especially on issues like AI, where the board is learning alongside management.
A formal agenda item might work for some boards. Others may benefit from a dedicated workshop – off-cycle, exploratory and less constrained by usual procedures. What matters most is that the conversation is intentional, inclusive and well structured. The case study used at the Forum – soon to be available as an IoD tool – proved an effective way to simulate board decision-making in a complex, high-stakes context, without the risk of getting it wrong.
Starting small was a recurring theme throughout the day: use AI in one part of the organisation, experiment in a low-risk context, let people get used to it, then review, adapt and build from there. Director Nagaja Sanatkumar CMInstD noted that boards can overestimate how much needs to be perfect before getting started. Her message was clear: clean up what you can, then go into experimentation mode – with intent and transparency.
One way to do this is by setting up a sandbox – a contained space where teams can test tools, understand how data behaves, and explore risks and benefits without exposing the business to uncontrolled outcomes. Structured experimentation helps boards and staff gain confidence while keeping risk in check.
NZIER’s recent report on how AI could transform the New Zealand economy (part 1 and part 2) reinforced this point, noting that early productivity gains often come from modest, targeted applications – particularly in supporting less-experienced staff with repetitive tasks. Getting some experience before taking on something big helps ground the conversation in real capability.
Ethical use, transparency and social licence are foundational issues. Whether it’s AI-generated pricing, predictive hiring tools or dynamic customer segmentation, directors need to ask: Is this fair? Is it explainable? Are we still aligned with our values?
AI isn’t a one-off decision. It’s a capability, a risk and a strategic lever that will evolve over time. That means regular conversations about progress, unintended impacts and organisational readiness. Governance frameworks will need to be tested and revised as the technology, regulation and market conditions change.
Throughout the day, another consistent theme emerged: AI is changing more than just processes – it’s shifting how work gets done, how decisions are made and what leadership looks like. Matt Ensor encouraged boards to consider how AI might change the rhythm of their organisations: shorter planning cycles, faster feedback loops and new expectations around responsiveness. These aren’t just operational shifts. They affect culture, authority and how people experience change. Boards have a critical role not just in overseeing these transitions, but in shaping the conditions under which they happen – by setting tone, supporting adaptability and ensuring leadership is equipped for what’s next.
That’s what I came away with. The technology is moving fast, but the core governance challenge remains relational. It’s about structuring, guiding and holding the right conversations – especially the first ones. And that’s something chairs are uniquely placed to do well.
The Institute of Directors offers a suite of practical resources to support directors in strengthening their governance of artificial intelligence. These include A Director’s Guide to AI Board Governance, a targeted course on AI Governance for Boards, and access to recordings from the AI Forum sessions. Together, these tools are designed to help boards build confidence, ask the right questions and lead responsibly in an AI-enabled future.