New Zealand Law Society


Contracting with Artificial Intelligence

22 July 2015

By Joshua Woo

In general, contracts can only be entered into by an entity recognised by law as having legal personality. An entity without legal personality has no right to sue or be sued.

In the past, this simple paradigm was tested again and again to expand the rights of women, minors, minorities and the mentally incapacitated. Further, legal personality has been conferred by legislation on artificial entities such as corporations, partnerships, charities and, more recently, rivers.

The following article examines whether the emergence of advanced artificial intelligence (AI) requires us to consider the question of legal personhood of AIs, and to examine what the development of AI means for legal practitioners.

What is Artificial Intelligence?

There are various definitions of AI. The first image that comes to mind may be Steven Spielberg's 2001 movie of the same name. And that is not an inaccurate starting point for the purposes of discussing the legal personhood of an AI.

For our purposes, an AI is an advanced intelligent agent in the form of a machine or software that has the ability to evaluate its environment and take action on its own without human intervention.

The definition matters because the level of human control will determine whether an AI can be said to be contracting in its own person or not. Smart, automated and advanced applications that nevertheless require human intervention to carry out their functions cannot be said to be AIs. In fact, many of the legal considerations raised by such applications are already accounted for under existing legal principles.

On the other hand, an AI that is fully autonomous and has self-determination may not be controlled at all by humans. The legal considerations of the autonomous AI are unaccounted for, and require some serious thinking.

As the science around the subject advances, lawyers and legislators should consider when an advanced intelligent agent is no longer in need of human intervention and acts with self-determination.

The table below summarises key legal issues concerning smart, advanced applications and AIs, compared with humans as legal persons.

| Key issues | Humans as legal persons | Smart, advanced applications | Artificial intelligence |
|---|---|---|---|
| Legal personality | Has legal personality | No legal personality | Possible to have legal personality through common law or legislation |
| Capacity to contract | Free to contract, subject to laws relating to minors and intoxicated persons, for example | Medium through which principals with legal personality can contract | Possible to contract in its own right, or as a medium |
| Liability | Personally liable | Principal with legal personality is liable | Possible to be personally liable, or to have liability imposed on third parties with an interest in the AI |

Smart, automated and advanced applications

Smart, automated and advanced applications that do not satisfy the definition of an AI can be considered to be agents of human principals (or nonhuman principals that have legal personality, such as a company). This is because the principal would have control over the actions of the application.

Contractually, the advanced applications are a medium through which the principal enters into binding agreements with another party. Vending machines or ticketing machines in carparks have no legal personality, but bind the users of the machines to the principals who have rights over the machines. The rights and obligations, including contractual liability, are those of the principal. The Law Commission looked at this issue in its reports on electronic commerce (NZLC R50, R58 and R68), which led to the enactment of the Electronic Transactions Act 2002.

However, as we inch closer to an AI, lawyers are likely to see advanced applications that will, without the intervention of a human principal, enter into a "negotiated" contract (while still remaining under the control of the human principal). Such applications could use big data to determine the preferences, requirements and ability of their counterparty to maximise the utility of an individualised contract. In fact, such applications are already in use in the airline industry, where the airfare charged may be based on the point-of-sale location of the customer. See the Huffington Post article titled "Use a 'Fake' Location to Get Cheaper Plane Tickets".
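The point-of-sale mechanism described above can be sketched in a few lines of code. This is a minimal, illustrative toy only: the region names, figures and function names are all hypothetical, not drawn from any real airline system.

```python
# Toy point-of-sale pricing of the kind described in the article.
# All figures and region codes are hypothetical, for illustration only.
BASE_FARE = 300.0
REGION_MULTIPLIER = {"NZ": 1.00, "US": 1.15, "IN": 0.85}

def quote_fare(point_of_sale: str) -> float:
    """Quote an airfare that varies with the customer's point-of-sale
    location, falling back to the base fare for unknown regions."""
    return round(BASE_FARE * REGION_MULTIPLIER.get(point_of_sale, 1.0), 2)

print(quote_fare("US"))  # 345.0 - a higher quote for this region
print(quote_fare("IN"))  # 255.0 - a lower quote for this region
```

The application offers each counterparty a different price based on data about that counterparty, yet a human principal still sets (and can change) the rules, so the contract is formed through the application as a medium.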

Lawyers need to start thinking about how advanced applications can enter into contracts with a greater degree of flexibility. To avoid the possibility (through litigation) that a contract entered into by an advanced application is void for lack of human control, lawyers may consider using standard contractual frameworks that have been pre-approved by parties with legal personhood.

A framework that leaves the precise details of the agreement blank is not uncommon, provided that the final terms fall within pre-defined boundaries. In these scenarios, the rights and obligations of the contracting smart applications are those of the principals who set the initial framework.
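The pre-approved framework idea can be sketched as follows. This is a minimal sketch under stated assumptions: the `Framework` class, its fields and the boundary values are all hypothetical illustrations, not a real contracting system.

```python
from dataclasses import dataclass

# Hypothetical boundaries pre-approved by the human principals.
@dataclass(frozen=True)
class Framework:
    min_price: float
    max_price: float
    max_quantity: int

def within_framework(fw: Framework, price: float, quantity: int) -> bool:
    """Return True only if the terms an automated application proposes
    sit inside the boundaries the principals pre-approved."""
    return (fw.min_price <= price <= fw.max_price
            and 0 < quantity <= fw.max_quantity)

fw = Framework(min_price=100.0, max_price=150.0, max_quantity=10)
print(within_framework(fw, price=120.0, quantity=5))  # True: terms bind the principal
print(within_framework(fw, price=180.0, quantity=5))  # False: outside the framework
```

The design point is that the application "negotiates" the blanks, but only terms inside the principal-approved envelope ever become binding, preserving the chain of human control that current agency principles assume.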

Legal implications of an AI

In the not too distant future, we will see AIs that can evaluate their environment and take action on their own account. The AIs would have reached a stage where human control is not necessary (or possible). Hence, the agency principles for less advanced applications may not be suitable. How then should we consider the legal implications of the AIs? To answer this question, we need to determine whether an AI has legal personality (or should be given one).

An AI is likely to obtain legal personhood in one of two ways: under the common law (such as through habeas corpus proceedings) or through the legislation of statutory personhood.

The common law writ of habeas corpus is available to "persons" recognised by the courts, to free them from unlawful detention. This route is likely to involve a very long litigious process. It will centre on key attributes such as "self-determination" and "autonomy" that mark AIs out as "persons", not "property".

Contemporary parallels are found in litigation advocating the recognition of nonhuman personhood for animals. The Nonhuman Rights Project, the group spearheading the debate, argues that some animals have self-determinative and autonomous traits that prove those animals are more like "persons" than "things", and so must be released from their unauthorised detention. It is possible that similar litigation advocating the recognition of legal personhood for artificial agents will become common as AIs become more human-like.


The real medium through which AIs are likely to obtain a legal personhood, however, is legislation.

Legislators may consider the issue of AIs' personhood proactively, or in reaction to what the courts decide. Either way, legislation will be the best manifestation of what society believes an AI's personhood should be, and the best means of creating its associated rights, powers, duties and liabilities.

This form of statutory personhood is common. In New Zealand, the Companies Act 1993, the Partnership Act 1908 and the Incorporated Societies Act 1908, as well as the recently legislated deed of settlement between the Crown and Whanganui Iwi, are examples of statutory creations of legal personhood.

Given the international nature of AIs, a statute may be shaped by multilateral conventions on the subject. However, to the extent that there is scope for variation, a statute may factor in some of the following matters:

Definition of an AI

Parliamentarians should provide a legally ascertainable definition of an AI. Registration may be required for an AI to qualify as a legal person.

Ownership of AIs

A person recognised under the common law writ of habeas corpus cannot be owned. It is likely that a statute on AIs will be used to clarify the position that an AI, in law, can be owned. Further, the exact nature of the ownership (through shares, for example) should be considered.

Capacity, powers and validity of actions

Rights of an AI to enter into legal relationships with other legal persons must be set out. The validity of actions of an AI should also be considered.

Liability of AIs and third parties

Legislators will need to consider how AIs will be held to account for their actions, both civil and criminal. Liability of third parties (such as the legal persons "owning" and "benefiting" from the actions of the AI) should also be clearly considered.

The statute will present an opportunity to foster innovation, as well as a risk of limiting future development in this area. For example, imposing liability on innovators, who perhaps have the closest connection to the AIs they create, may disincentivise innovation.

The liability regime must strike a balance between the public good and the harm in the use of AIs, informed by economic analysis of the regime (for example, by applying the least-cost avoider rule).

Case for lawyers

It has been an industry-wide trait that lawyers (and lawmakers) react to pertinent issues after the fact rather than consider them proactively.

While brief, this article attempts to provide an opportunity for practitioners to consider the impacts of AIs on the legal world, especially in relation to contractual law implications.

On the threshold question of whether we should be concerned at all: the answer is yes. After all, when Bill Gates, Stephen Hawking and Elon Musk, arguably three of the most influential leaders in technology and innovation, reach a consensus that AI poses "existential threats" to mankind, all of us, including lawyers, should be concerned.

Joshua Woo is a solicitor at Webb Henderson's Auckland office. He advises clients in the technology, media and telecommunications sectors, often forming legal views on new digital products that challenge the relevance of existing legal regimes. Joshua also tutors contract law at Auckland University's Law Faculty.

This article was also published in LawTalk 860, 13 March 2015, page 34.
