We will have to learn to live with machines that can think
The impact of artificial intelligence on productivity could be epoch-making.
It’s February 2026 and the first board meeting of the year for Alpha Corporation. With the December meeting feeling like ancient history, the directors prepare by listening to a podcast, generated by generative AI (GAI), that summarises the last meeting.
This isn’t just a dry recap of decisions. The podcast picks up on strategic themes, decision-making trajectories, and even the humour and quirks of the directors. The synthetic hosts’ playful banter about the directors’ good-natured ribbing elicits knowing smiles.
Each director approaches the meeting with their own areas of expertise. After reading the board papers, they use prompts to delve deeper into their specific interests and priorities.
Hemi, a director with a long pedigree in marketing, is focused on trends in cost-per-click and cost-per-conversion from digital campaigns. He prompts GAI to compare the latest results with the previous three years and spots a concerning increase in drop-offs in the conversion pipeline. He quickly emails the chief marketing officer for further details.
Chair Christine has her sights set on whether Alpha’s long-term strategy remains relevant. With the ongoing cost-of-living crisis and its impact on consumer behaviour, she uses a prompt to compare the alignment between recent tactical decisions and the company’s strategic planks.
While the results are mostly reassuring, a flagged cultural issue in one division stands out. She alerts the board and requests extra agenda time to address the matter. A good idea, agrees the CEO.
Meanwhile, the CEO is using GAI to prepare for the meeting by analysing competitor activity, recent media coverage, and the public trading results of a potential acquisition target. These insights are synthesised into concise talking points, ensuring he has the latest information for the board discussion.
When the meeting day arrives, Alpha’s board moves efficiently through formalities to focus on three strategic issues: the potential acquisition of the company in an adjacent segment, cultural challenges within the finance division following the CFO’s departure five months earlier, and the company’s transformational project to use GAI for freeing staff from repetitive tasks, enabling more creative and customer-focused work.
Across the street, Beta Corporation is also holding its first board meeting of the year, but the scenario couldn’t be more different.
Two of the five directors haven’t actually read the board papers. Instead, they have used the free version of ChatGPT to summarise the papers and generate a couple of ‘provocative’ questions. Rather than submitting these questions in advance to support a constructive discussion, they plan to spring them on the day, believing that surprise keeps the executive team on edge.
Adding to the chaos, the minutes of the previous meeting, generated by a large language model, were circulated weeks ago but remain unchecked for accuracy. They include errors such as the distribution team allegedly driving products across the Cook Strait Bridge and the shareholding minister undergoing a sudden (and fairly unlikely) gender change.
Beta’s CEO, new to the role and still finding his feet, leans heavily on AI for meeting preparation. He asks Gemini to analyse directors’ CVs, predict the questions they are likely to ask, and even prepare scripted answers which he learns by rote. The result is anything but insightful.
The Beta board meeting turns into a synthetic arms race, with directors and executives jockeying to outsmart one another using AI-generated insights.
In trying to appear smarter than one another, they all end up looking like muppets. Trust erodes as the meeting descends into a showcase of pseudo-intellectual one-upmanship. Worse still, the careless use of AI by the two languid directors has exposed sensitive board papers to potential leaks, leaving the company vulnerable.
The contrasting outcomes at Alpha and Beta illustrate two starkly different approaches to AI in the boardroom. The New Zealand Institute of Directors’ guide on AI adoption emphasises trust as the foundation for success.
Alpha Corporation has embraced this, using GAI to enhance collaboration and decision-making, and combining targeted and transparent use with good communications.
Beta, by contrast, demonstrates what happens when AI is used carelessly or as a substitute for genuine engagement and accountability.
The key difference between the two boards lies in the role of humans – effectively the floppy input device.
At Alpha, directors use AI as a tool to support their governance responsibilities, ensuring decisions align with the company’s long-term success. At Beta, AI is misused, leading to dysfunction and undermining the board’s common purpose.
In New Zealand, directors have a legal duty to act in the best interests of their companies. This responsibility cannot be delegated to AI or outsourced to others. While generative AI can provide valuable insights and streamline processes, ultimate accountability rests with the board and the human beings who constitute it.
So, as you look ahead to the challenges and opportunities of the coming year, ask yourself: in a year’s time will your board operate like Alpha, embracing AI to build trust and enhance governance?
Or will you risk becoming a Beta, where careless use of technology erodes trust and fudges progress? The choice is yours.
Time to get cracking.
Mike ‘MOD’ O’Donnell is a professional director and chair, and a periodic tutor for the IoD.
This article is reprinted with the kind permission of Stuff and thepost.co.nz