Mind over machine: future-proofing the world
Curious about what the future of AI looks like? The day is coming when machines can talk to each other without human intervention, says Dr Mahsa Mohaghegh (McCauley) MInstD.
What this means is that we need to set ethical boundaries today, says Mohaghegh, a computer engineer specialising in artificial intelligence and natural language processing.
Fast-forward to a not-too-distant future and we will likely be in the age of artificial super intelligence, where machine intelligence surpasses average human intelligence. “It’s imperative we look to the future and get things right today,” she says.
That intelligence will be light years ahead of today’s artificial narrow intelligence, where a machine is designed to do one task, and tomorrow’s artificial general intelligence, where a machine can learn in one domain and apply that knowledge in a different domain – similar to human intelligence.
But it is artificial super intelligence that we should be preparing for, says Mohaghegh.
“Artificial super intelligence might seem like a long way off, but getting the fundamentals and ethics right now is necessary because humans will have less of a role to play in training these machines in the future.
“At that time, they will have the capability to learn from each other and apply this learning. If we don’t set a good foundation for this, there is strong potential for things to go horribly wrong. It’s important we see more work happening on what it means to have ethical AI, responsible AI.”
She says AI is not a ‘black box’ yet. “There are a lot of things being done to ensure transparency in AI development, but I don’t believe we can regulate AI as a technology itself, because that’s simply not practical. Regulation needs to come in at the application level – to prevent it being used for unethical purposes.”
Mohaghegh has a ‘super’ understanding of these technologies. Her expertise centres on both the technical and societal dimensions of AI, cyber security and the Internet of Things (IoT).
She is an associate professor at the School of Engineering, Computer & Mathematical Sciences at the Auckland University of Technology; the founder/director of She Sharp, a non-profit connecting women in technology; and a board member of the AI Forum Executive Council, NZTech, EdTech, and the AI Researchers Association.
Google and Microsoft are leading the way with their own AI principles, while the European Union has its AI Act. Mohaghegh says everyone, especially developers and users, has a role to play in education, training and strategies to ensure AI is ethically and fundamentally sound.
It is a work in progress in New Zealand, says Mohaghegh, who has been a member of AI Forum for the past five years. The NGO brings together New Zealand’s AI community, including innovators, end users, investors, regulators, researchers, educators, entrepreneurs and the interested public.
“A lot of work has been happening to bring everyone to the table and come up with some strategy for New Zealand. The Minister for Science, Innovation and Technology, and Digitising Government [Hon Judith Collins] is embracing it and taking serious action on this front.”
In October, the Government launched its new virtual assistant, GovGPT, and in the same month joined the UK’s Bletchley Declaration on Artificial Intelligence. Five months earlier, it signed the Seoul Ministerial Statement for Advancing AI Safety.
Market research leaders in New Zealand said earlier this year that “around two-thirds of New Zealanders are nervous about AI, making us one of the most concerned countries globally”. The research also found New Zealanders have a lower understanding of which products and services are using AI.
“This is concerning,” says Mohaghegh. “We will be far behind in terms of adopting AI. It’s a lot to do with fear because this is a little different from previous technological advancements. With every advancement we see a period of resistance and disbelief. AI is different because of the speed of change.
“Previously, it was all about the connection, to the machine and each other. This is not about the connection. We are dealing with a machine or system that creates new things which haven’t existed before. And there is a level of fear in not being in control and not knowing what is coming next.”
Mohaghegh says the speed of progress is so “ridiculous” that she sometimes walks into her class at AUT and has to change her slides, because what she was talking about last week is no longer valid.
As a member of several boards, Mohaghegh says she tries to wear her educator’s hat, and she has seen the light turn on for decision-makers since the introduction of ChatGPT in November 2022.
So many people are talking about AI because ChatGPT hit 100 million users in two months, she says, far outstripping other machine-learning models over the same period. “That was the game-changer.”
Some boards are more proactive in understanding how to use AI to improve productivity, for example, and using tools to identify risk and return on investment, but she says the real value will come when AI can inform the board.
But she warns any outputs must be taken with a grain of salt. “Look at it as a decision-support resource, rather than a decision-making machine. That is important. When we change the mindset, the board will have a better understanding of what they have, and what they want to get from it. Then test and validate what is coming out of these systems.”
As machines edge closer to sentience, Mohaghegh is super excited by the future, saying, “What a time to be alive.”