The rise of AI

Article
By Tao Lin
29 Aug 2017
11 min read

Earlier this year, a Japanese insurance company made headlines for doing something that company executives and directors around the world have been anticipating - and fearing - for years.

Fukoku Mutual Life Insurance made 34 of its staff redundant and replaced them with the artificial intelligence (AI) system IBM Watson. Japanese newspaper The Mainichi reported the company would use Watson to determine payout amounts and check customer cases against their insurance contracts. Other Japanese insurance companies have announced they are exploring or already using AI for similar purposes, and The Japan Times reported in April that the country’s Ministry of Economy, Trade and Industry was planning to trial AI to help government workers write draft answers to questions put to Cabinet ministers.

Evidently, the future of AI is already here, and the technology is changing the world at a dramatic pace. Like other innovations before it, though, AI brings challenges and questions around job automation, ethical issues and what business leaders and company directors can do now to better future-proof their organisations.

Robot nation

One nation famed for its technological prowess is Japan. The Sony Walkman, the pocket calculator and the PlayStation were all Japanese inventions that had a huge impact on the world. But the Japanese economy has been characterised by deflation for the past two decades and, according to the World Economic Forum, the country’s ageing population means it will lose 600,000 people a year by 2020, a significant portion of them of working age. It is in this context that AI is being developed in Japan.

Makoto Shiono is an ethics committee member of The Japanese Society for Artificial Intelligence and says because Japan is suffering from an increasing labour shortage, it has to find a way to use AI to maintain economic growth.

Shiono says while the Japanese government and universities are currently trying to catch up to the likes of the United States in AI development, he believes the country will come to lead in part due to its expertise in robotics.

“[The] AI world is currently shifting to ‘real’ or ‘physical’ world, like robots. And Japanese companies have long been the number one in industrial robots,” he says.

Japan has a long history with robots, with the first in Japan thought to have been built in 1928. Called Gakutensoku, which means “learning from the laws of nature”, the original creation used rubber tubes and air pressure to open and close its eyes, smile, move its appendages, puff out its cheeks and write Chinese characters. Fast forward to the present day and there are robots that greet customers in stores and provide information at help desks. The robotics arm of Japanese telecommunications company SoftBank released its first humanoid robot, “Nao”, in 2006 and “Pepper”, a robot designed to read human emotions, in 2014.

One of Japan’s biggest banks, Bank of Tokyo-Mitsubishi, started using Nao a few years ago as a robot bank teller, and Pepper is used in SoftBank shops around Japan to attract and greet customers and keep them entertained in-store. Nestle Japan has also been using Pepper in its stores since 2014 to attract customers, encourage conversation and recommend the best coffee machines based on their preferences.

In 2015 Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International released “Erica”, a humanoid that uses AI to listen and respond to human interaction. Also in 2015, a hotel opened near Nagasaki that was named by Guinness World Records as the first robot-staffed hotel in the world. Guests arriving at Henn na Hotel are greeted by a multilingual robot, and a robotic arm helps them store their luggage.

Last year, Japanese company Vinclu started taking preorders for Gatebox, a holographic virtual assistant for the home. Using projection and sensory technology, users can communicate with a virtual character, Azuma Hikari, who lives inside a projection tube. For 298,000 Japanese yen (NZ$3800), users can get Azuma, who is meant to be 20 years old and likes donuts, to do things like wake them up, welcome them home from work, tell them the weather and turn on the television. This all works through wireless internet, Bluetooth and infrared.

According to a statement provided by the company, the aim of Gatebox is to provide a character who is “always with you” in your daily life through technology. But it is not all futuristic robots and talking virtual assistants for Japan, one expert says.

William Sato is an entrepreneur and a member of a number of IT taskforces for the Japanese government. He says that because Japan is the first country in history to be both rapidly shrinking and ageing, Japanese society will be reliant on AI and ancillary technologies such as automated cars and robotics in the future. But he says the country is quite far behind in AI and machine learning development, due to a lack of software programming skills being taught in schools and a lack of the big data required to build AI. The government has only just come onboard with the measures required to further AI in the country, Sato says. Could the world see the next big thing in AI come out of Japan, then? Sato thinks this is beside the point.

“It’s more advantageous for Japan to address its unique issues and focus on the value-added applications that take advantage of AI, instead of trying to create the next greatest AI engine, which it probably can’t at this point,” he says.

What about New Zealand?

Stuart Christie, chairman of the recently established Artificial Intelligence Forum of New Zealand, says AI development in New Zealand is still in its infancy and mostly focused on cost-reduction solutions. However, there are some innovations in improving customer service and the quality of user experience, as well as good work leveraged off the back of the film industry, he says.

For example, Soul Machines is a Kiwi company that has developed an avatar that acts as a customer service representative, reading a person’s natural expressions and addressing a customer’s concerns. Another company, Booktrack, has a large digital library of ambient sounds and music that is synchronised and overlaid onto digital books. AI allows Booktrack to reduce the cost of production and to serve parts of the market that were previously too expensive to serve.

Christie says the real opportunities lie where big data is.

“That will be in all aspects of our economy, not only in the financial services industry and knowledge economy, but also in traditional businesses, like farming and horticulture and manufacturing,” Christie says.

University of Auckland computer science lecturer Paul Ralph says New Zealand has a great opportunity to become the “Silicon Islands” that replace the Silicon Valley of today. However, he says New Zealand needs to invest more money in the right things, because countries like China, Singapore and South Korea are aggressively pursuing those Silicon Island goals.

“The 21st century will not look kindly on a dairy-centric economy. Investing 4% of GDP in research will vastly improve the average New Zealander’s quality of life,” Ralph says.

AI confusion

While AI is very much present already - think of Apple’s Siri, facial recognition technology, medical imaging and diagnosis - it is also still very much emerging, meaning even the definition of AI can be confusing. Christie says AI is the capability of a machine to imitate intelligent human behaviour, specifically computer systems that can perform tasks requiring human intelligence.

A subset of that is machine learning, where computers are given the ability to learn without being explicitly programmed. An example of this is Google Translate, where Google feeds all the available translated text into a machine and uses pattern recognition to translate a fresh piece of language based on what is already in the database. The computer learns to optimise the programme by repeating this pattern recognition with new data sets as they arise.
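To make the idea concrete, the short Python sketch below shows what “learning from examples rather than rules” can look like at its very simplest. It is not how Google Translate actually works (modern systems rely on neural networks trained on enormous datasets); the sentence pairs, the word-association scoring and the output below are invented purely for illustration.

# A toy sketch of pattern recognition over a "database" of translated examples.
# All data and scoring here are invented for illustration; real translation
# systems are vastly more sophisticated.
from collections import defaultdict

# A tiny parallel corpus: sentence pairs that have already been translated.
parallel_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

# Count how often each word appears, and how often source and target
# words appear together in the same sentence pair.
src_count = defaultdict(int)   # sentences containing each source word
tgt_count = defaultdict(int)   # sentences containing each target word
pair_count = defaultdict(int)  # how often a source/target word pair co-occurs
for source, target in parallel_corpus:
    src_words, tgt_words = set(source.split()), set(target.split())
    for s in src_words:
        src_count[s] += 1
    for t in tgt_words:
        tgt_count[t] += 1
    for s in src_words:
        for t in tgt_words:
            pair_count[(s, t)] += 1

def best_target(s):
    """Return the target word most strongly associated with s, scored by
    how often the two appear together relative to how often they appear apart."""
    best, best_score = s, 0.0  # fall back to the original word if it was never seen
    for t in tgt_count:
        co = pair_count.get((s, t), 0)
        if co:
            score = co / (src_count[s] + tgt_count[t] - co)
            if score > best_score:
                best, best_score = t, score
    return best

def translate(sentence):
    # Word-by-word translation using only associations learned from the examples.
    return " ".join(best_target(s) for s in sentence.split())

# "the dog eats" never appears in the corpus, yet the learned associations
# produce "le chien mange" - no translation rule was ever written by hand.
print(translate("the dog eats"))

Even this toy version captures the shift Christie describes: nobody writes a rule saying that “chien” means “dog”. The association is inferred from the examples, and it changes as new example pairs are added.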

There are a number of other subsets, including cognitive computing, which involves a computer system using data mining, pattern recognition and natural language processing to mimic the way the human brain works. Sensory computing is another subset and is more commonly seen in technology like autonomous cars and walking robots, where the computer senses the environment around it in real time.

Auckland University of Technology artificial intelligence research centre director Albert Yeap says for him, AI is about figuring out how the mind works and then working out how to reproduce it on a computer.

“Throughout our lives, we perceive and accumulate and possess and develop a lot of information in our heads. But then we die and it’s all gone.

“But if we can understand how the human mind works, we just programme it and we live forever.”

Yeap, who has been working on AI for 40 years, believes his vision of reproducing the human mind on a computer will come true one day.

“Think back about 100 years ago. Do you think people thought a plane could fly in the sky?

“Everything is possible. If it’s not possible, then it’s only because of our imagination.”

No right answer

With the increasing sophistication of AI has come some ethical issues. One of the most discussed issues right now is the moral dilemma of a driverless car. Who lives and who dies if such a car were to head into a serious accident?

In an effort to work through some of these complex issues, the University of Otago announced a three-year project earlier this year, supported by the Law Foundation, to look into the possible implications of AI for law and public policy. University of Otago associate professor of philosophy James Maclaurin is part of this project and says the machine in the driverless car scenario must have an ethical dimension in order to make such a decision.

But it is not as simple as placing the onus of making the right decision - whatever that may be - entirely on the creators of the technology, which seems the most obvious option. A famous business ethics case from the 1970s involves car manufacturer Ford and its Pinto model, which failed national safety standards on rear-end impact. Testing showed the car was a serious fire hazard should it be rear-ended, even in low-speed situations. Rather than redesign the gas tank to make the car safer, Ford kept the original design for the next six years because the company calculated that doing so would cost less than redesigning the gas tanks. As a result of this decision, the Pinto was responsible for a number of fire-related deaths, including those of three teenagers whose Pinto gas tank exploded after a van crashed into the back of the car.

Maclaurin says that, based on this example, it becomes problematic to place all liability on the creators of AI technology, because there might be instances where a cost-benefit calculation is made and less effort is put into training the technology and ensuring it works well. Training a machine to make ethical and moral decisions is also tricky because it is not a straightforward equation.

“People roughly use theories… but we also use cognitive architecture, including our emotions, as part of making moral decisions. We don’t sit down and do calculations,” Maclaurin says.

Researchers at the Massachusetts Institute of Technology released a report last year showing that survey respondents generally preferred autonomous vehicles to minimise casualties, meaning they would prefer the car to harm the fewest people possible, even if that meant hurting its passengers. But they also found that people were less likely to use a car programmed this way. The researchers noted that there appeared to be no easy way, for now, to design an algorithm that could reconcile moral values and self-interest in the way evidenced by the research results.

Chapman Tripp partner Bruce McClintock says where factors like safety, human judgement and transparency are required, ethical issues can be profound. He says there are clear ethical challenges where AI is used as a recruitment screening tool, an aid to judges in sentencing, an aggregator of personal information or as an employee or citizen surveillance tool.

About five years ago the Chicago Police Department started using a computer-generated list that marked certain residents, who had not done anything wrong, as likely to be involved in a future crime. Police visited these people to tell them that officers would be keeping an eye on them, effectively singling them out for crimes they had not committed.

In 2013, American man Eric Loomis was sentenced to six years’ imprisonment after he was found driving a car that had been used in a shooting. His sentence was determined in part by COMPAS, AI software meant to predict a person’s risk of reoffending. The judge used the software to determine that Loomis posed a high risk to the community.

“Companies who implement or use such AI tools for screening may need to ask: what sort of cognitive biases are built into the tool and how can we reassure people the tool is accurate?” McClintock says.

Ralph says insurance companies are starting to use AI and genetic data to predict diseases and mortality, which effectively discriminates against people who are prone to certain diseases.

“Ever seen the movie Gattaca? This is how it starts,” he says.

In Gattaca, the protagonist lives in a world that is ruled by eugenics, where babies are conceived using genetic engineering and genetic discrimination dictates people’s futures. Facebook uses AI to decide which stories are shared widely, but Ralph says the tech and media giant fails to identify and stop propaganda, which undermines democracy.

“I don’t think people appreciate how incredibly dangerous that is. Russia allegedly meddling in the American elections is just the beginning.”

The fourth industrial revolution

As someone who works in a university, Maclaurin deals with young people all the time and he says one of the topics that always gets their attention is the future of work.

“You can hear a pin drop when you talk about what jobs are going to be around in 10 years’ time,” he says.

The effect of AI on jobs has been a constant source of speculation, fear and debate in recent years, although there have not yet been widespread lay-offs in the fashion of Japan’s Fukoku Mutual Life Insurance. A widely cited 2013 study, “The Future of Employment: How susceptible are jobs to computerisation?”, showed jobs in transportation, logistics and office/administration support were at high risk of automation. The study found most management, business and finance jobs and roles that required social intelligence were largely low risk, as were jobs in education, healthcare, engineering, science, the arts and media.

A 2016 study by Chartered Accountants Australia and New Zealand found 46 per cent of New Zealand jobs were at risk of automation over the next 20 years, with about 12 per cent of professional roles at risk.

A report from earlier this year by the McKinsey Global Institute found that almost all occupations have the potential to be partially automated, but that full automation would take years to eventuate. Researchers estimated that half of today’s work could be automated by 2055, although this could happen 20 years earlier or later depending on a number of factors, including the development and cost of the technology, its performance benefits, and social and regulatory acceptance. The institute also found that people will need to keep working alongside machines in order to sustain growth, and it expects both business processes and workers to change. This idea is not entirely foreign considering the magnitude of change the developed world went through when it shifted away from agriculture and towards industry.

Sometimes called the “Fourth Industrial Revolution”, the digital revolution will, like the previous industrial revolution, have both good and bad effects.

Maclaurin says that, just as in the past, we might like what technology gives us once we are far enough into the future. In the short term, though, it could mean a lot of social dislocation.

“There certainly will be a big change in the marketplace and one that’s going to require a lot of negotiation between countries and businesses and governments about how we cope,” Maclaurin says.

Towards the future

As for what company directors can do now to better future-proof their organisations, there is no one answer. McClintock says diversity of thought and perspectives on boards will increasingly be a competitive advantage for companies.

With the cost of usable AI plummeting, directors should be encouraging experimentation and ensuring that they and their senior leadership teams are learning about new technologies, he says.

The World Economic Forum’s The Future of Jobs report indicated that an immediate focus for organisations should be on creating and supporting diverse workforces. It states that businesses will need to put talent development and future workforce strategy first and foremost in their quest for growth. Organisations cannot be “passive consumers of ready-made human capital”.

Support for innovation in businesses is important and Christie says the three important elements that need addressing in this respect are investment, company culture and governance support. Instead of putting innovation on the backburner, it needs to be a priority.

“Have that big goal,” he says.

Ralph says companies need to invest in research and development (R&D) and overcome the short-sightedness that sometimes plagues corporations.

“R&D is unpopular in the short term because it takes time to produce results,” he says.

Partner with universities and get the smartest people you can find working on the company’s long-term problems, he says. For smaller companies, Ralph says it costs nothing to do joint research with universities: call departments, ask if anyone is interested in specific AI topics and talk to them. There are also joint grants available from the Ministry of Business, Innovation and Employment and elsewhere.

“One thing I love about New Zealand is the pervasive attitude that we're all in this together. That collaborative mentality gives us an edge over many other countries.”

Tao Lin is a Kiwi journalist based in Japan.

Published in Boardroom Aug Sep 2017 issue