Boards need to wake up to the new world and learn the tricks, says generative AI leader Tim Warren.
The world has seen the magic show. Now we will get to see the view from the side, where the woman climbs into the box and does not get cut in half. We will see how the magician works, says generative AI leader Tim Warren.
The co-founder of AI software company Ambit is talking about artificial intelligence that is “bigger than the wheel” and will become “bigger than the internet”.
For the first time in human history we have created something that can be smarter than us, Warren says, and people are going to be handing over decisions to another intelligence – artificial intelligence.
It’s already working at billions of operations a second and getting faster. Moore’s law observes that the number of transistors on a chip – and with it raw computing power – doubles roughly every two years, while the cost per unit of computation falls. Its projected IQ will be at a level we can’t even comprehend – and that reality, Warren says, could be only months away.
“AI is going to change everything you do. A lot of directors like to think they are in the position of setting the overall structure and guidance. Yet this is moving faster than any previous technology and it’s bigger than any. I reckon the board’s role is to look at that longer term, and if the AI ‘revolution’ isn’t fundamentally disruptive to the way you operate, I don’t know what is.”
AI is moving faster than governance possibly can, he says, unless governance changes and adapts to the times. And if it doesn’t, he warns, governance will simply be lost.
“If you’re not worried about it, you don’t understand it. It’s fundamental. If you didn’t think the internet was going to change your business, you were wrong and the risk is you’re going to be wrong about a whole lot of things. I think a lot of directors probably end up in the situation thinking ‘I’m right about most things’, but probably you’re right about 51 per cent of the time.”
Warren says he would even put money on around 40 per cent of directors at companies in the knowledge space being replaceable by AI today. “But the point is it doesn’t have to replace you. You’ve got this incredibly powerful tool you can use – it’s your choice if you want to ride that wave.
“We’re in this situation where there is something that is potentially smarter and more capable than any individual, and probably the entire collection of people that are on any board. And if you are going to look away and think it’s not going to affect me, you’re going to be out of a job.”
Or more importantly, he says, your company is going to be out of business. “And with these things, you can’t get your timing exactly right. You can’t say, ‘Hey, let’s just time a run for learning and understanding it’. You’re either going to be early or you’re going to be late. The risk of being late is professional destruction. Do you want to take that risk?”
Warren likens it to climate change. If you don’t take it seriously, the downside is disaster because there will be nowhere to live. The same with AI. “We don’t know exactly when it will become so powerful and insurmountable, but it certainly will. The greatest minds are pretty aligned on this,” he says.
While the world has become excited – and fearful – about the emergence of this human-like capability, Warren is now looking at what he calls the edge cases, or the boundary developments.
“We’ve got something that at its core is very powerful, yet it takes eager and learned people to fashion it into something that actually is a true business solution. A business person does not want to go and learn about AI to get the benefits. They just need the tools to get a good outcome, a solution.”
He predicts 2024 will see “improvements under the hood”. “At level one we have these raw AI models and they’re very powerful, but they’re hard to use. At level two, you have what you call infrastructure or operations. It coordinates and orchestrates the other levels and gives you the capability to build level three, which are applications. An application is something that gives you a business outcome.”
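To make the three levels concrete, here is a purely illustrative sketch – not Ambit’s product, and with a made-up call_raw_model stub standing in for any real level-one model API – of how a thin level-two orchestration layer wraps a raw model so a level-three application can deliver a business outcome without the user ever touching the model directly.

```python
# Purely illustrative: the three levels Warren describes, with a hypothetical
# call_raw_model() stub standing in for any real level-one model API.

def call_raw_model(prompt: str) -> str:
    """Level one: a raw, powerful but hard-to-use model (stubbed here)."""
    return f"[model output for: {prompt[:60]}...]"

def orchestrate(task: str, context: dict) -> str:
    """Level two: infrastructure/operations that coordinates the raw model,
    shaping inputs and handling the plumbing so applications don't have to."""
    prompt = f"Task: {task}\nContext: {context}\nAnswer concisely."
    return call_raw_model(prompt)

def summarise_board_paper(paper_text: str) -> str:
    """Level three: an application that delivers a business outcome."""
    return orchestrate("Summarise this board paper for directors",
                       {"paper": paper_text})

if __name__ == "__main__":
    print(summarise_board_paper("Q3 results and an update on the AI risk register..."))
```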
Warren says language models such as ChatGPT are fast outgrowing their teenage years, when they could be wrong but were certain they were right. If you asked ChatGPT something it didn’t know, he says, it would just make it up, like a teenager, and then argue with you.
“In the last year, it’s gone from being a 15-year-old argumentative teen to a 25-year-old academic, but not just one academic – every academic. What happens in another six months?” he ponders.
So, when does it stop doing silly things to the extent you can trust it like an adult?
“AI learns from us and it’ll stop doing that soon. The standard of the way we act as humans is what we’re feeding into it. AI is this cultural mirror and, if we look into the mirror as boards, that is the standard we’ll get to support governance. I don’t know if it’s a high watermark or a low watermark. It depends on who is in control.”
He predicts changes in models that will go beyond governance – “from what we call monolithic massive chunks of data to modular or composable. What that means is instances of AI talking to other instances of AI without a human in the middle.”
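As a rough illustration of that composable idea, the sketch below – again hypothetical, reusing a stubbed call_raw_model rather than any real API – chains two AI “instances” so that one drafts and the other reviews, with no human step between them.

```python
# Purely illustrative "composable" AI: one instance's output feeds another
# instance directly, with no human in the middle. call_raw_model() is a
# hypothetical stub, not a real API.

def call_raw_model(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"

def drafting_agent(question: str) -> str:
    # First AI instance: produces a draft answer.
    return call_raw_model(f"Draft an answer to: {question}")

def reviewing_agent(draft: str) -> str:
    # Second AI instance: critiques and tightens the first instance's draft.
    return call_raw_model(f"Review and tighten this draft: {draft}")

def answer_without_human(question: str) -> str:
    return reviewing_agent(drafting_agent(question))

if __name__ == "__main__":
    print(answer_without_human("What should be on our AI risk register?"))
```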
Boards are waking up to the new world though, he says, citing one board which, in six months, has gone from having nothing about AI in its board papers to AI featuring in every item. And an AI tool even took the minutes.
So where to start? With your risk register, he says. Then expand it and embed it until AI becomes a factor in everything, subsumed into everything you do – just like CSR and ESG.
“A core role of a board and governance is to ask questions and never be comfortable, never be confident. Then ask more questions. They need to be regularly learning and attending events to see how it will disrupt their business strategy.”
He struggles to see how regulators will stay on top of it because “they tend to fundamentally work the same way they always have. AI is something that can actually think. People might argue that point, but it can demonstrate intelligence. You’ve potentially got something here that can outthink the people that are trying to regulate it.”
“Governance related to data security and privacy will become even more critical because I can guarantee you there’s going to be a massive data loss event at some stage and then it’ll be taken seriously. That’s probably when boards and regulators will start doing something real about it. I would like it if it was earlier.”