Since my previous article on machine learning and the future of law (LawTalk 873), artificial intelligence (AI) has made another major breakthrough. In March this year, AlphaGo, the Go-playing program built by Google DeepMind, won a historic match against world champion Lee Sedol in the game of Go, a board game invented in China more than 2,000 years ago.
What makes this win remarkable? After all, IBM’s computer Deep Blue beat chess Grandmaster Garry Kasparov 20 years ago.
The key difference is this. Deep Blue won the match by brute force – it searched vast numbers of possible moves and chose the best ones. That approach cannot work in Go, probably the most complex game in the world: a Go game can unfold in more ways than there are atoms in the universe, so exhaustive search is simply impossible. It has long been thought that mastering Go requires human intuition, a quality computers lack. Yet AlphaGo, against all the odds, beat Lee Sedol 4:1.
Will this breakthrough bring us a step closer to an AI lawyer who can provide legal advice? It’s possible, but we have to be very cautious. As it happens, law involves two of the most formidable challenges in the field of artificial intelligence: natural language processing and logical reasoning.
Natural language processing
Law is expressed in natural language. As all lawyers know only too well, natural language is full of uncertainties and subtle imprecisions. In fact, after decades of effort, artificial intelligence systems are still far from being able to understand and converse properly in natural language, something which comes naturally to the average human.
Part of the reason is that words are context-dependent. In other words, the meaning of a word is defined not just by the word itself, but also by the context in which it is used. In the last couple of years encouraging progress has been made. In particular, scientists have found a way to associate the meaning of a word with its context. After processing documents containing billions of words, the computer assigns each word a unique series of numbers (called a “vector”).
Once words are turned into vectors, computers can start performing powerful mathematical operations to find the hidden relations among words. For example, if you ask the computer what is to “Italy” as “Paris” is to “France”, the computer will add the vectors for Italy and Paris together, subtract the vector for France, and tell you the result is “Rome”. Recently, Google has taken the idea of word vectors to the next level – the “thought vector”, which represents the meaning of a whole sentence. It seems that computers are now starting to understand natural language in a way more closely resembling our own.
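The analogy arithmetic described above can be sketched in a few lines of code. The tiny two-dimensional vectors below are hand-made stand-ins for illustration only – real systems learn vectors with hundreds of dimensions from billions of words of text:

```python
# A toy sketch of word-vector analogy arithmetic:
# "Paris" is to "France" as ? is to "Italy".
# The vectors here are invented for illustration, not learned.
import math

vectors = {
    "France": (1.0, 1.0),
    "Paris":  (0.0, 1.0),
    "Italy":  (1.0, 0.0),
    "Rome":   (0.0, 0.0),
    "Berlin": (0.2, 0.5),
}

def analogy(a, b, c):
    """Return the word whose vector is nearest to b - a + c."""
    target = tuple(vb - va + vc
                   for va, vb, vc in zip(vectors[a], vectors[b], vectors[c]))
    candidates = (w for w in vectors if w not in (a, b, c))
    return min(candidates, key=lambda w: math.dist(vectors[w], target))

print(analogy("France", "Paris", "Italy"))  # -> Rome
```

Here Paris − France + Italy lands exactly on Rome’s vector; in a real learned vector space the answer is merely the nearest neighbour, not an exact match.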
If computers can read and understand natural language, it might be tempting to think that we could easily build an AI lawyer by feeding it all the statutes and judgments.
However, legal advice is based not merely on statutes and case law. It also requires common sense, derived from our general knowledge of the world – and computers lack common sense. For example, one of the things IBM’s supercomputer Watson can do is invent new cooking recipes. While some of its recipes are reportedly delicious, one requires “green peas to be cut into ¾ pieces, then placed on a barbecue”.
One way to teach computers common sense is to design hard-coded rules. That was the approach taken by computer science professor Doug Lenat 35 years ago. In the decades since, he and his team have spoon-fed a whopping 15 million logical rules into a computer called the “common sense engine”, including the rule that parents are older than their children.
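A hard-coded common-sense rule of this kind can be sketched as follows. The facts, names and rule below are invented for illustration and are not taken from the real system:

```python
# A minimal sketch of one hard-coded common-sense rule:
# "parents are older than their children".
# All facts and names here are hypothetical examples.

facts = [("parent_of", "Ann", "Bob")]   # Ann is Bob's parent
ages = {"Ann": 52, "Bob": 25}

def common_sense_violations():
    """Return (parent, child) pairs that break the rule."""
    return [(parent, child)
            for rel, parent, child in facts
            if rel == "parent_of" and ages[parent] <= ages[child]]

print(common_sense_violations())  # -> [] : the facts are consistent
```

The difficulty, of course, is scale: every such rule must be written by hand, and millions of them are needed before the machine’s “common sense” covers even a fraction of what a child knows.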
Another approach, believed by many scientists to be better, is simply to let the computer learn common sense by reading beyond legal documents, into the arts, history, science, religion, politics and so on. But the question is this: is “common sense” obtained purely at second hand the same as common sense obtained from first-hand experience? Take tennis as an example. A computer can read everything about tennis and watch all the videos, but I suspect its understanding of tennis is not the same as that of a person who has actually picked up a racket.
This is particularly true of certain abstract yet important concepts in law, such as fairness, justice and reasonableness. Even assuming a computer can somehow obtain an understanding of “morals”, is its understanding similar to, or fundamentally different from, the way we human beings see them through our real experiences in life? Indeed, I would have thought it very difficult, if not impossible, for a computer to truly replicate humanity’s lofty yet worthwhile morals, shaped predominantly by our experiences and emotions.
The search for an AI lawyer is certainly worth pursuing. In addition to saving costs for lawyers and providing access to legal services for those who cannot afford them, the search may also reveal some fundamental characteristics of the law itself. However, as the short excerpt below shows (from a conversation conducted during an experiment by a group of Google scientists last year), the road to a workable AI lawyer will be a long one.
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is.
Human: What is immoral?
Machine: The fact that you have a child.
Benjamin Liu is a commercial law lecturer at Auckland University. His research interests include securities law, financial derivatives and information technology and law. He is qualified in New Zealand and England and Wales (although not currently practising).