When tech goes wrong

Article
By Kim Gordon MInstD, Managing Partner, Ara Digital
2 Oct 2024 | 3 min read

Cyber security has long had a place around the board table and, more recently, AI has joined the ‘IT’ or ‘digital’ topics directors need to be across. However, despite many high-profile failures of large technology programmes (and projects) in both the public and private sectors, this area is rarely considered in governance education or at conferences.
 
This is also surprising given the increasing impact of technology-related change on organisations, whether it is the replacement of end-of-life systems, the move of everything to the cloud, the continuing adoption of ‘as-a-service’ solutions or, not least, the growing role of digital in all businesses.
 
The impact of technology programme failure goes far beyond a budget or timeline blowout. Blowouts are often the headline when programmes go off track, but the real impact is the delay in, or outright failure to realise, the strategic opportunity or outcomes being targeted through the programme.
 
Loss of market share, loss of margin, increasing cost to serve and – very relevant for New Zealand – an inability to improve productivity are among the consequences. And that is before you even get to the ability to innovate through the application of technology.
 
So, what should directors be considering and raising in relation to technology programmes? The focus of boards is often outward – the software vendor, the service providers, the consultants. And, in relation to those third parties, considerations such as fixed price, ‘one throat to choke’ and ‘skin in the game’ tend to dominate, essentially transferring risk. 
 
Less consideration is given to internal capability – the organisation’s own part in the programme. In fact, the role of the customer (the organisation paying for the technology and services) is a material driver of programme success, if not the number one determinant of it.

This is the case irrespective of the commercial model (for example, fixed price versus time and materials) and irrespective of the size and scale of the programme. It remains the case when a third party is appointed as the lead, or prime, for the programme.

“An impartial, objective, warts-and-all understanding is needed. And therein lies the challenge.”

As with any risk, programme delivery risk should be held and managed by the party with the greatest level of control over that risk. Terms such as fixed price or risk-reward (skin in the game) arrangements are of little value unless they are underpinned by a high degree of clarity in relation to programme scope, complexity, roles and responsibilities. 
 
This clarity must be owned and driven by the customer – it is the customer who owns the investment and the benefits, and therefore has the greatest incentive to achieve the programme objectives. No amount of commercial or contractual positioning will change this.
 
Given the criticality of the customer’s role, the board needs to understand the capability and maturity of the organisation to deliver the programme. An impartial, objective, warts-and-all understanding is needed. And therein lies the challenge. 
 
As with any maturity assessment, there are frameworks and methods of measuring programme delivery capability within an organisation. And, as with any aspect of organisational maturity, continuing measurement and delivery are required.
 
An understanding of that maturity is required when a programme business case comes to the board for approval but, more importantly given the role of digital in any business, maturity is a key determinant of success in leveraging technology as an enabler of the organisation’s strategic objectives.
 
The board’s view of maturity should not rely solely on the executive’s view. Again, no different to any other maturity measure within the organisation, the board needs to satisfy itself, through various means, that capability is being seen through an objective lens. 

“As with any maturity assessment, there are frameworks and methods of measuring programme delivery capability within an organisation.”

The board’s understanding of, and continuing support for, the development of digital capability is a topic in itself. Here are some points all boards should consider and raise when presented with a programme of technology-enabled change, whether driven by the organisation’s strategic objectives or by its customers:

    • What is our role in this programme? 
    • What does this role mean in terms of the capability required to undertake that role? 
    • How has that capability been assessed and understood? 
    • What level of objectivity and independence underpins that assessment and understanding? 
    • Have we done this before? If so, how does the outcome of that past performance support our assessment and understanding? 
    • Does our capability meet the level required to deliver our strategic objectives? 
    • How does our view of capability compare with the view of our technology vendors and service providers? 
    • How does our performance compare to others in our sector or, importantly, beyond our sector to those with similar challenges and opportunities (for example, digital disruption from new entrants)?
    • How do we know if the executive view of capability is robust? 

These questions need to be carefully considered and evidenced. Boards should assess their approach to this by comparison to any other consideration of organisational capability or maturity. 

Technical jargon, acronyms and, dare I say it, hype need to be pulled apart and understood in plain English and in the context of the business strategy, not the technology.


Kim Gordon MInstD specialises in technology consulting and is a managing partner in Ara Digital and a director at ACC New Zealand and the New Zealand Lottery Commission.