Challenging your assumptions
Assumptions are a core part of normal life that help us to act and react at pace – but they can also get in our way and stymie innovation.
Last week Ōtautahi/Christchurch really turned it on for the New Zealand Institute of Directors annual conference.
The days broke teeth-numbingly cold, then the sun raked its way across the Canterbury Plains, delivering perfectly clear blue-sky days.
Perfect days for the 700-odd directors meeting to brush up their leadership and governance skills.
Rod Carr delivered an arresting address on climate opportunities, the need for positive action by directors and what a meaningful transition looks like.
Dr Reuben Abraham made attendees wake up to what the next decade will bring with a deep dive on India, the world’s fastest-growing economy, and what’s around the corner for business practices and governance thinking.
And while it wasn’t an official theme of the conference, generative artificial intelligence (GAI) and large language models were never far from the discussion – whether in the official papers or in conversations between directors over the two days.
There were three official sessions, but much of the networking time was spent discussing the challenges and opportunities, both for business itself and for the business of governance.
Coincidentally, at the same time on the other side of the planet, two of the global leaders in GAI were releasing the latest and greatest versions of their tools.
OpenAI launched the flagship version of ChatGPT. GPT-4o has more speed and more power thanks to a single model handling multiple modalities (previously three models had to communicate with each other via text/voice/image before responding to your commands).
The tightly scripted launch video featured OpenAI’s CTO Mira Murati and a couple of her team showing the full extent of the new model. In very simple terms, it’s a heck of a lot faster, but it’s also interruptible and responsive across multiple modes.
In the demonstration it solved hand-scribbled algebra problems in real time and read a person’s face to interpret emotions. It also showed an instant, real-time ability to translate languages. All this in a voice that sounded like a flirty Scarlett Johansson.
A day later Google launched its latest tool in the AI arms race. Google Gemini Live takes the existing Google Assistant (which you use if you do voice search or own a Google Home) and supercharges it, giving Gemini users a fully conversational experience that allows the assistant to take actions on your behalf.
Google also provided a glimpse of its next-generation “do everything” agent, called Astra, which will be able to do everything from interpreting your health through to helping lodge your taxes – potentially with a video interface.
What this all means is that GAI – which has already made huge inroads into everyday life and business – is going to become ingrained more deeply and more broadly.
Apart from lightweight uses like sub-editing board papers, auto-generating minutes from online board meetings and the like, there are increasingly substantive uses of GAI for directors to consider.
These include compliance, intellectual property, privacy and staff well-being. On the one hand, all these areas could see deeper insight into results and more efficient information collection and reporting.
On the other hand, they bring with them real risks; risks that currently aren’t being consistently tracked at a board level.
Currently GAI is penetrating our lives at a rate much greater than our ability to reflect on it, much less assess its net risks. But like any other risk, it needs consideration at a board level.
Boards regularly use dashboards to stay across risk, cyber security and compliance. I think it’s time to do the same for GAI.
As a starter for ten, I think a GAI dashboard for directors would need to cover four dimensions – key metrics, privacy, security and accuracy.
Key metrics here could cover the total number of AI systems in use, the departments using them and a summary of monthly outputs. In simple terms, you need to know the scale of AI before you can measure it.
The next area is privacy: understanding which AI is ring-fenced, so that any data put into the model stays within the confines of the company, versus which data is put “into the wild”, shared with companies and services on the far side of your firewall. The key thing here is restricting how personal and sensitive data are processed.
Security metrics for GAI would likely cover vulnerabilities, mitigations, alerts and incidents. The two big risks here are data leakage and adversarial attacks.
Lastly, the accuracy piece is some sort of analysis of the error rates in GAI-generated content, plus any impactful errors. Recent examples include Air Canada being successfully taken to court after its AI chatbot provided incorrect fare information, and UK judges throwing out case law that was made up by a large language model.
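To make that concrete, here is a minimal sketch of what such a dashboard could look like underneath, written in Python. Every field name and figure is an illustrative assumption rather than a standard – the point is simply that each of the four dimensions boils down to a handful of numbers a board can track month on month.

# A sketch of the four-dimension GAI dashboard described above, expressed
# as plain Python dataclasses. Field names and figures are illustrative
# assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class KeyMetrics:
    ai_systems_in_use: int           # total number of AI systems across the company
    departments_using_ai: list[str]  # which departments are using them
    monthly_outputs: int             # summary count of GAI outputs per month

@dataclass
class Privacy:
    ring_fenced_systems: int         # models where data stays inside the company
    external_systems: int            # services beyond the firewall ("into the wild")
    sensitive_data_permitted: bool   # may personal or sensitive data be processed?

@dataclass
class Security:
    open_vulnerabilities: int
    mitigations_in_place: int
    alerts_this_period: int
    incidents_this_period: int       # e.g. data leakage or adversarial attacks

@dataclass
class Accuracy:
    error_rate_pct: float            # sampled error rate in GAI-generated content
    impactful_errors: list[str]      # errors that reached customers or regulators

@dataclass
class GAIDashboard:
    key_metrics: KeyMetrics
    privacy: Privacy
    security: Security
    accuracy: Accuracy

# Example: the kind of monthly snapshot a board might review.
board_report = GAIDashboard(
    key_metrics=KeyMetrics(12, ["Marketing", "Legal", "Customer Service"], 4300),
    privacy=Privacy(ring_fenced_systems=8, external_systems=4,
                    sensitive_data_permitted=False),
    security=Security(open_vulnerabilities=2, mitigations_in_place=5,
                      alerts_this_period=14, incidents_this_period=1),
    accuracy=Accuracy(error_rate_pct=1.8,
                      impactful_errors=["chatbot quoted the wrong refund policy"]),
)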
One of the nuggets I got from this year’s IoD conference was a throwaway comment by Matt Ensor, chair of New Zealand’s AI Forum.
He noted that one of the most useful prompts for any GAI tool is the simple phrase “try harder”, which pushes it to deliver higher-quality results.
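For the technically minded, here is roughly what that tip looks like in practice – a minimal sketch assuming the official OpenAI Python SDK, an API key in your environment, and a made-up board-paper prompt as a placeholder.

# A sketch of the "try harder" follow-up prompt, assuming the OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First pass: an ordinary request.
history = [{"role": "user",
            "content": "Summarise this board paper in five bullet points: ..."}]
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Second pass: the follow-up that asks the model for a higher-quality result.
history.append({"role": "user", "content": "Try harder."})
second = client.chat.completions.create(model="gpt-4o", messages=history)
print(second.choices[0].message.content)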
I reckon directors in New Zealand need to try harder when it comes to extending good governance to GAI.
This article was first published by Stuff.co.nz
Mike "MOD" O'Donnell MInstD is a professional director, facilitator and writer with a particular interest in digital disruption and consumer centricity. He is chair of Garage Project, deputy chair of New Zealand Trade and Enterprise and also deputy chair of global music company Serato. He is also an executive director of Radio New Zealand, PaySauce, Sandfield Software, HiTech NZ and www.realestate.co.nz. MOD is an independent weekly business columnist for Stuff and is host of the TVNZ series Start Up.