AI: too much information?
2024 IOD LEADERSHIP CONFERENCE OUTTAKE
At the 2024 IoD Leadership Conference in Christchurch, one of the many engaging sessions, ‘Privacy in the Age of AI’, brought to light the importance of managing AI responsibly, with a strong focus on protecting privacy.
AI often involves processing large volumes of personal data, which introduces significant privacy risks. As AI tools take on more tasks, privacy protections become harder to maintain: these tools enable new ways to gather and combine personal information, and they make it more challenging to see, understand and explain how that information is used.
During the session, Liz MacPherson, the Deputy Privacy Commissioner, captured this sentiment perfectly, noting: “It’s not that AI creates new privacy risks. It basically puts them on steroids. If AI is being used inappropriately in your organisation, it will expose all the vulnerabilities you currently have.”
Based on the discussions from the session, we have developed the following guidance to support directors in managing privacy concerns related to AI, helping their organisations stay compliant and trustworthy.
It’s essential to remember that personal information is, simply, any information about an identifiable individual. This includes details such as a name, address, contact information or photographs. It can also include technical metadata such as map coordinates, internet protocol (IP) addresses or device identifiers related to a person. Even inaccurate or fabricated information, such as fake social media profiles and deepfake images, constitutes personal information.
As part of your director and board toolkit for governing privacy in the age of AI, boards should:
1. Make privacy a core component of the organisation’s AI strategy from the outset. Boards play a key role in overseeing the creation and implementation of a privacy-focused AI governance plan that aligns with the Privacy Act 2020 and supports the organisation’s broader strategic goals. Regular audits and checks should confirm that AI initiatives adhere to privacy standards and do not introduce unintended risks.
2. Set clear policies for responsible AI use. Boards can collaborate with management to develop and enforce clear policies governing AI use, data management (including data retention and handling classification) and ethical practices. Strong safeguards can do more than protect: they can also encourage responsible use of generative AI and strategically foster innovation. This includes ensuring human review before acting on AI outputs, to reduce the risks of inaccuracy and bias. Policies should be updated regularly to reflect technological advances and legal changes, and mandatory training on responsible AI use should be implemented across the organisation, with an emphasis on transparency to maintain stakeholder trust.
3. Maintain a register of AI systems. Boards may find it valuable to mandate a comprehensive register of all AI systems in use, detailing data sources, privacy risks and how data flows through these systems, including any third-party services involved. It’s also important to bring shadow AI into the light, ensuring all AI tools, even those adopted outside formal oversight, are included. Regular reporting against this register enhances transparency and allows the board to monitor and mitigate privacy risks effectively.
4. Engage with Māori and other stakeholders. Boards should engage with Māori and other stakeholders to ensure AI systems are culturally appropriate and adhere to the organisation’s values. This may involve cultural competency training for the board and ongoing consultation with diverse groups to meet societal expectations. Be aware of differing views about AI, and consider potential long-term impacts. Systems developed overseas, such as facial recognition technology, may not work accurately or without bias for particular groups.
5. Require privacy impact assessments. It is crucial for boards to ensure privacy impact assessments (PIAs) are conducted for AI projects, especially those involving personal data, to demonstrate that a disciplined approach has been taken. A PIA should explain the data sources used to train the tool and their relevance and reliability for the organisation’s needs, rigorously evaluate privacy risks and propose safeguards. The results of PIAs should be integrated into the organisation’s risk management framework, with a focus on transparency to those affected by AI systems.
6. Comply with the Information Privacy Principles. When working with personal information, the 13 Information Privacy Principles (IPPs) in the Privacy Act 2020, administered by the Office of the Privacy Commissioner, set out legal requirements for how personal information is collected, used and shared.
7. Monitor AI use. Strong monitoring systems to oversee AI use are essential, with a focus on data protection, access control and the accuracy of outputs. Boards should ensure that AI systems undergo regular reviews to maintain fairness and reliability. Clear metrics for assessing privacy and trust can be defined, and management should provide frequent updates on AI-related privacy issues.
8. Stay informed. Boards need to be well-informed about these risks to oversee AI effectively. Regular briefings and expert education sessions are essential to keep the board up to date on AI developments and their privacy implications.
To supplement the key steps outlined above, boards and directors can ask:
Further resources:
IoD, A Director’s Guide to AI Board Governance: Understanding AI governance at the board level.
IoD, Cyber Risk: A Practical Guide: Understand how cybersecurity relates to AI and privacy.
Do you want to hear more and get insights like these? Keep an eye out for the 2025 IoD Leadership Conference – details coming soon.