New Zealand Law Society - Regulation of government AI algorithms needed, says report

Regulatory measures are needed to guard against the dangers of government algorithm use, a report from the University of Otago's Artificial Intelligence and Law in New Zealand Project says.

The report, Government use of artificial intelligence in New Zealand, was funded by the New Zealand Law Foundation.

It reports on phase 1 of a two-phase project looking at the use of AI. Phase 1 focuses on regulatory issues, while phase 2 will focus on the implications for employment of the increasingly widespread use of AI. The authors are Colin Gavaghan, Alistair Knott, James Maclaurin, John Zerilli and Joy Liddicoat.

The report focuses on "predictive algorithms", an important class of algorithms that includes machine learning algorithms. It says the use of predictive algorithms within the New Zealand government sector is not a new phenomenon, with algorithms such as RoC*RoI in the criminal justice system having been in use for two decades.

"The increasing use of these tools, and their increasing power and complexity, presents a range of concerns and opportunities," it says. The primary concerns around use of predictive algorithms in the public sector relate to accuracy, human control, transparency, bias and privacy.

Accuracy: The report says there should be independent and public oversight of the predictive models being used in government. This is of central importance, but such information is not yet readily or systematically available.

Human control: If the need for human involvement is not approached carefully, it could serve as a "regulatory placebo". In some situations, the addition of a human factor to an automated system may have a detrimental effect on that system’s accuracy. The authors say that if a general right to human involvement is deemed desirable, it should be accompanied by a "right to know" that automated decision-making is taking place. Statutory authorities that use algorithmic tools as decision aids must be wary of improperly delegating to the tool, or otherwise fettering their discretion.

Transparency and a right to reasons/explanations: A right to reasons for decisions by official agencies already exists under section 23 of the Official Information Act 1982. Where there is a right to an explanation, predictive tools used by the government must support meaningful explanations. The algorithms used by government departments should also be publicly inspectable.

Bias, fairness and discrimination: As "fairness" in a predictive system can be defined in several ways, it may be impossible to satisfy all definitions simultaneously. Government agencies should consider the type(s) of fairness appropriate to the contexts in which they use specific algorithms.
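The tension between fairness definitions can be illustrated with a toy sketch (the data here is invented for illustration and is not drawn from the report): when two groups have different base rates of the predicted outcome, even a perfectly accurate predictor that equalises error rates across groups will necessarily produce different positive-prediction rates, violating "demographic parity".

```python
# Toy data: two groups with different base rates of the outcome
# (illustrative values only, not taken from the report).
group_a = [1, 1, 0, 0]   # base rate 0.5
group_b = [1, 0, 0, 0]   # base rate 0.25

# A perfectly accurate predictor simply reproduces the true labels.
pred_a = list(group_a)
pred_b = list(group_b)

def positive_rate(preds):
    """Fraction of cases predicted positive (demographic parity compares this)."""
    return sum(preds) / len(preds)

def false_positive_rate(labels, preds):
    """Fraction of true negatives wrongly predicted positive (error-rate parity compares this)."""
    negatives = [p for y, p in zip(labels, preds) if y == 0]
    return sum(negatives) / len(negatives)

# Error-rate parity holds: both groups have a zero false-positive rate.
assert false_positive_rate(group_a, pred_a) == false_positive_rate(group_b, pred_b) == 0.0

# Demographic parity fails: positive-prediction rates track the differing base rates.
print(positive_rate(pred_a), positive_rate(pred_b))  # 0.5 0.25
```

Which definition matters depends on context, which is why the report asks agencies to choose the type(s) of fairness appropriate to each use.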

Privacy: The authors believe more specific requirements are needed to identify the purpose of collection of personal information. Introduction of a right to reasonable inferences should also be considered.

The report authors propose the creation of an independent regulatory agency. They say there are several possible models for this.

"Our preference is for a relatively 'hard-edged' regulatory agency, with the authority to demand information and answers, and to deny permission for certain proposals. However, even a light-touch regulatory agency could serve an important function. The recent Algorithm Assessment Report acknowledged use of algorithms across New Zealand government to be somewhat piecemeal.

"If a regulatory agency is to be given any sort of hardedged powers, consideration will need to be given to its capacity to monitor and enforce compliance with these. If the agency is to be charged with scrutinising algorithms, it must be borne in mind that these are versatile tools, capable of being repurposed for a variety of uses. Scrutiny should apply to new uses/potential harms and not only new algorithms."

The report also recommends that predictive algorithms used by government, whether developed commercially or in-house, must feature in a public register, be publicly inspectable, and be supplemented with explanation systems that allow lay people to understand how they reach their decisions.