New Zealand Law Society - Artificial Intelligence will be highly disruptive, says white paper

Work needs to be done to ensure that New Zealand can cope with the disruption that evolving Artificial Intelligence (AI) will bring, says a white paper by the Institute of Directors and Chapman Tripp.

The white paper, Determining our future: Artificial Intelligence, Opportunities and challenges for New Zealand: A call to action, calls for the New Zealand government to establish a high-level working group on AI. The paper says the working group should bring together expertise in science, business, law, ethics, society and government, and be tasked with:

  • considering the potential impacts of AI on New Zealand
  • identifying major areas of opportunity and concern, and
  • making recommendations about how New Zealand should prepare for AI-driven change

The paper says AI is likely to drive “highly disruptive change” to our economy, society and institutions.

“AI will raise major social, ethical, and policy issues in almost every sector. It is critical – for New Zealand’s sake – that we actively consider, lift awareness of, and prepare for the changes AI will bring. This work needs to start now.”

What is AI?

The term artificial intelligence covers technologies that seek to perform functions normally carried out by humans, such as speech recognition, decision-making, learning and problem solving. Already machines can play chess, recognise your face, map your journey and even drive your car.

The Good

The white paper says use of AI technologies could lead to greater productivity, enhanced social good and the creation of new fields of work.

The paper says AI may bring significant benefits to poorer communities by using predictive modelling to make better use of limited resources and personalising services and support.

“We have a duty to seek a deeper understanding of New Zealand’s potential as an AI-assisted economy and society, to ensure AI is a positive part of New Zealand’s future.”

The Bad

The risks of AI include greater inequality and unemployment from disrupted industries and professions. The white paper says that while low-skilled and repetitive jobs are most at risk of being displaced by technology, what determines vulnerability to automation “is not so much whether the work concerned is manual or skilled, but whether or not it is routine”. It says AI-related industrial applications will replace humans in a number of readily disrupted fields, including call centres, customer service, legal document review and other work involving routine tasks.

The Ugly

If you’ve seen any of The Terminator movies, you’ll no doubt have an inkling of how things could go wrong if the machines outsmart us and become the masters. Some experts don’t think these gloomy Hollywood imaginings are too far off the mark. In a 2014 interview with the BBC, Professor Stephen Hawking said that thinking machines could threaten our existence. “The development of full artificial intelligence could spell the end of the human race,” Professor Hawking said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Stanford’s One Hundred Year Study on Artificial Intelligence notes that “we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity”.

Legal and policy issues in NZ

Putting aside the threat to our very existence, there are a few legal and policy issues that also need to be addressed. The white paper says questions need to be asked about whether decisions made by AI systems should be attributed to their creators, whether AI systems should be recognised in law as legal persons, and whether New Zealand’s regulatory and legislative processes are adaptive enough to respond to and encourage innovations in AI.

“AI presents substantial legal and regulatory challenges. These challenges include problems with controlling and foreseeing the actions of autonomous systems. How will we assign legal and moral responsibility for harm caused by autonomous technologies that operate with little or no human intervention?” the white paper asks, citing accidents involving driverless cars as an example.

The paper also points to the ‘flash crash’ of 2010 as an example of how AI systems may act unpredictably. In that case, algorithmic trading systems triggered a stock market crash and rebound that wiped out and then restored roughly US$1 trillion in market value in just 36 minutes.

“For the development of artificial intelligence, the law may require new ways of attributing agency and causation. These issues will require special consideration in New Zealand due to our unique system of accident compensation. For instance, New Zealand must ensure that manufacturers and developers of AI technologies are not unduly subsidised by the application of ACC’s no fault principle.”

Workplace safety regimes and the liability attached to them will also be affected by the development of AI systems. “As AI systems become increasingly autonomous, employment and health and safety legislation will need to be clear about the responsibilities and liabilities of directors and organisations.”

Privacy and ethics

Existing AI technologies have already raised privacy concerns. “In Russia, there was an outcry over the use of the app FindFace, which allows users to photograph strangers and determine their identity from profile pictures on social networks,” the white paper says.

While there are obvious benefits of AI technology in law enforcement and intelligence, the paper says the use of AI by state agencies raises privacy concerns that have not been the subject of much public debate in New Zealand. It says the development of AI tools for law enforcement could raise issues of bias. “For instance, government agencies and AI developers should ensure that AI systems used in risk-based security screening processes at airports and ports do not engage in biased profiling.”

Taking ownership

More and more data is being collected via internet-connected sensors and monitoring devices in fitness devices, clothing, city infrastructure and even people themselves. “At the same time, more data is being gathered and shared for medical purposes, law enforcement, education and welfare. Big data techniques are increasingly using AI to create population-scale data and produce individualised analytics and recommendations.” The white paper says it is not always clear who owns the data, how it can be used and who can profit from it.

Some related links

Can artificial intelligence ever give legal advice?

Contracting with artificial intelligence

Can robots be lawyers? Yeah ... Nah

Robots could replace lawyers, claims Massey researcher

Lawyer Listing for Bots