In Philip K. Dick’s book “Do Androids Dream of Electric Sheep?” – made into the brilliant movie “Blade Runner” directed by Ridley Scott – the genetically engineered replicants, indistinguishable from human beings, were banned from Earth and set to work on off-world colonies. There was a fear of the threat that these “manufactured” beings could pose to humans.
Isaac Asimov’s extraordinarily successful “Robot” series of short stories and books had a similar premise – that intelligent robots would pose a threat to humans. In “Androids” the replicants were regulated by being shipped off-world; if they returned to Earth they were hunted down and “retired”. Asimov’s regulatory solution was a little more nuanced. Robots, upon the creation of their positronic brains, were programmed with the Three Laws of Robotics. These were as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These three laws were the foundation of all the tensions that arose in Asimov’s stories. How were the Three Laws to be applied? What happens when there is a conflict? Which rule prevails?
The stories are classified as science fiction. I prefer to treat them as examples of statutory interpretation. But underpinning the Three Laws, and the reason for them, was what Asimov called “The Frankenstein Complex” – a term he coined for the fear of mechanical men or created beings that resemble human beings. And his answer to that fear, and the means by which it could be mitigated, was the Three Laws.
A similar call recently went out about how we should deal with Artificial Intelligence. A report entitled “Determining Our Future: Artificial Intelligence”, written collaboratively by people from the Institute of Directors and the law firm Chapman Tripp, whilst pointing out the not insubstantial benefits that artificial intelligence or “smart systems” may provide, has a significant undertone of concern.
The report calls for the Government to establish a high-level working group on AI which should consider
“the potential impacts of AI on New Zealand, identifying major areas of opportunity and concern, and making recommendations about how New Zealand should prepare for AI-driven change.”
The writers consider that AI is an extraordinary challenge for our future and that the establishment of a high-level working group is a critical first step to help New Zealand rise to that challenge. It seems to be the New Zealand way to look to the Government to solve everything, problematical or otherwise, which says interesting things about self-reliance.
The report is an interesting one. It acknowledges the first real problem, which is how we define AI. What exactly does it encompass? Is it the mimicking of human cognitive functions like learning or problem solving? Or is it making machines intelligent – intelligence being the quality that enables an entity to function appropriately and with foresight in its environment?
Even though there seems to be an inability to settle upon a definition, a more fruitful part of the examination is the way in which “smart” computing systems are used in a range of industries, together with the observation that there has been a significant increase in investment in such “smart” systems by a number of players.
The disruptive impact of AI is then considered. This is not new. One of the realities of the Digital Paradigm is continuing disruptive change. There is little time to catch breath between getting used to one new thing and having to confront and deal with the next new thing. Disruption has been taking place since before the Digital Paradigm, indeed as far back as the First Industrial Revolution.
There is a recognition that we need to prepare for the disruptive effects of any new technology, but what the report fails to consider is the way in which disruptive technologies may ultimately be transformative. There is some speculation that after an initial period of disruption to established skills and industries, AI may lead to greater employment as new work becomes available in areas that have not been automated.
The sense of gloom begins to increase as the report moves to consider legal and policy issues. Although the use of AI in the legal or court process – I prefer to use the term expert legal systems – is not discussed, issues such as whether AI systems should be recognised as persons are mentioned. In this time of Assisted Birth Technologies and the creation of life by other than purely natural means, it is not an easy question to answer. “Created by a human” doesn’t cut it, because that is the way the race has propagated itself for millennia. “Artificially created by a human” may encompass artificial insemination and confine people who are otherwise human to some limbo status as a result. But really, what are we talking about? We are talking about MACHINE intelligence that is driven by algorithms. I don’t think we are talking about organic systems – at least not yet.
But it is the last question in that section that gives me pause. Are New Zealand’s regulatory and legislative processes adaptive enough to respond to and encourage innovations in AI? What exactly is meant by that? Should we have regulatory systems in place to control AI, or to develop it further? That has to be read within the context of the introductory paragraph:
“AI presents substantial legal and regulatory challenges. These challenges include problems with controlling and foreseeing the actions of autonomous systems.”
Then the report raises the “Frankenstein Complex.” The introductory paragraph reads as follows:
“Leaders in many fields have voiced concerns over safety and the risk of losing control of AI systems. Initially the subject of science fiction (think Skynet in the Terminator movies), these concerns are now tangible in certain types of safety-critical AI applications – such as vehicles and weapons platforms – where it may be necessary to retain some form of human control.”
The report goes on to state:
Similar concerns exist in relation to potential threats posed by self-improving AI systems. Elon Musk, in a 2014 interview at MIT, famously called AI “our greatest existential threat”.
Professor Stephen Hawking, in a 2014 interview with BBC said that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by AI”.
Stanford’s One-Hundred Year Study of AI notes that
“we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity”.
Google’s DeepMind lab has developed an AI ‘off-switch’, while others are developing a principles-based framework to address security.
Then the question is asked:
“What controls and limitations should be placed on AI technology?”
I think the answer would have to be: as few as possible, consistent with human safety, so as to allow for innovation and the continued development of AI. It must be disturbing to see such eminent persons as Hawking and Musk expressing concerns about the future of AI. But the answer to the machine lies in the machine, as Google has demonstrated – turn it off if need be.
The report closes with the following observation:
The potential economic and social opportunities from AI technologies are immense. The public and private sectors must move promptly and together to ensure we are prepared to reap the benefits, and address the risks of AI.
And regulation is the answer? I think not.
Artificial Intelligence as a Tool for Lawyers
My particular interest in AI has been in its application to the law, so let’s have a brief look at that issue. Viewed dispassionately, the proposals are not “Orwellian”, nor do they suggest the elevation of “Terminator J” to the Bench. A look at this area may also serve to put a different perspective on AI and the future.
In a recent article, Lex Machina’s Chief Data Scientist observed that data analytics refined information to match specific situations.
“Picture this: You’re building an antitrust case in Central California and want to get an idea of potential outcomes based on everything from judges, to districts, to decisions and length of litigation. In days of law past, coming up with an answer might involve walking down the hall and asking a partner or two about their experiences in such matters, then begin writing a budget around a presumed time frame.”
Howard says that analytics change the stakes. “Not only are you getting a more precise answer,” he attests, “but you’re getting an answer that is based on more relevant data.”
Putting the matter very simplistically, legal information, whether in the form of statutes or case law, is data which has meaning when properly analysed or interpreted. Apart from the difficulties in locating such data, the analytical process is done by lawyers or other trained professionals.
The “Law as Data” approach uses data analysis and analytics to match fact situations with existing legal rules.
Already a form of data analysis or AI variant is available in the form of databases such as LexisNexis, Westlaw, NZLII, AustLII or BAILII. Lexis and Westlaw have applied natural language processing (NLP) techniques to legal research for 10-plus years. The core NLP algorithms were all published in academic journals long ago and are readily available. The hard (very hard) work is practical implementation. Legal research innovators like Fastcase and Ravel Law have done that hard work, and added visualisations to improve the utility of results.
Using LexisNexis or Westlaw, the usual process involves the construction of a search which, depending upon the parameters used, will return a limited or extensive dataset. It is at that point that human analysis takes over.
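By way of illustration only, the sketch below shows that “search returns a dataset” step in miniature. The corpus, the case names and the search function are invented for the purpose; services such as LexisNexis and Westlaw obviously use far richer indexing and NLP ranking than simple term matching.

```python
# A toy "legal research" corpus. Titles and texts are invented for illustration.
CASES = [
    {"title": "Case A", "text": "restraint of trade in the software market"},
    {"title": "Case B", "text": "negligent misstatement in a property valuation"},
    {"title": "Case C", "text": "market power and restraint of trade in retail fuel"},
]

def search(corpus, terms):
    """Return only those cases whose text contains every supplied term."""
    terms = [t.lower() for t in terms]
    return [c for c in corpus if all(t in c["text"].lower() for t in terms)]

# A broad query returns an extensive dataset; adding parameters narrows it.
print([c["title"] for c in search(CASES, ["trade"])])          # Case A and Case C
print([c["title"] for c in search(CASES, ["trade", "fuel"])])  # Case C only
```

The point is simply that the breadth of the parameters determines the breadth of the returned dataset; everything after that point is human analysis.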
What if the entire corpus of legal information were reduced to a machine-readable dataset? This would be a form of Big Data with a vengeance, but it is a necessary starting point. The issue then is to:
- (a) Reduce the dataset to information that is relevant and manageable; and
- (b) Deploy tools that would measure the returned results against the facts of a particular case to predict a likely outcome.
Part (a) is relatively straightforward. There are a number of methodologies and software tools deployed in the e-Discovery space that perform this function. Technology-assisted review (TAR, or predictive coding) uses natural language processing and machine learning techniques against the gigantic data sets of e-discovery. TAR has been proven to be faster, better, cheaper and much more consistent than human-powered review (HPR). It is assisted review in two senses. First, the technology needs to be assisted; it needs to be trained by senior lawyers very knowledgeable about the case. Second, the lawyers are assisted by the technology, and by the careful statistical thinking that must be done to use it wisely. Thus lawyers are not replaced, though they will be fewer in number. TAR is the success story of machine learning in the law. It would be even bigger but for the slow pace of adoption by both lawyers and their clients.
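For what it is worth, the machine-learning pattern behind TAR can be sketched in a few lines. This is a deliberately simplified illustration using a general-purpose library (scikit-learn); the documents, labels and scores are invented, and a real TAR platform wraps this core idea in sampling protocols and statistical validation.

```python
# Sketch of the TAR pattern: reviewers label a seed set, a text classifier
# learns from it, and the remaining documents are ranked by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "email discussing the disputed supply agreement and pricing",
    "minutes of the board meeting approving the supply agreement",
    "canteen newsletter announcing the staff barbecue",
    "IT notice about scheduled server maintenance",
]
seed_labels = [1, 1, 0, 0]  # 1 = relevant, 0 = not relevant (reviewer decisions)

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(seed_docs), seed_labels)

# Score the unreviewed documents and review the highest-scoring ones first.
unreviewed = [
    "draft amendment to the supply agreement pricing schedule",
    "reminder to complete the annual fire drill",
]
scores = model.predict_proba(vectoriser.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

The seed decisions by senior lawyers do the “assisting” in the first sense; the ranked output that guides the human reviewers does the assisting in the second.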
Part (b) would require the development of the necessary algorithms to undertake the comparative and predictive analysis, together with a form of probability analysis, to generate an outcome that would be useful and informative. There are already variants at work in the field of what is known as Outcome Prediction, utilising cognitive technologies.
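Again purely as a sketch, and assuming invented features and historical outcomes, part (b) might look something like this: a model trained on the outcomes of past cases that returns a probability for a new fact pattern. Real systems such as Lex Machina or LexPredict engineer far richer features from dockets, judges, parties and procedural histories.

```python
# Sketch of outcome prediction: learn from past case outcomes, then produce
# a probability for a new matter. Features and data are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [written contract present, prior dealings between the parties,
#            claim value above a threshold]; label: 1 = claimant succeeded.
past_cases = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
]
outcomes = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(past_cases, outcomes)

# Probability of success for a new matter: written contract, no prior
# dealings, high-value claim.
new_matter = [[1, 0, 1]]
print(f"Predicted probability of success: {model.predict_proba(new_matter)[0][1]:.2f}")
```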
There are a number of examples of legal analytics tools. Lex Machina, having developed a set of intellectual property (IP) case data, uses data mining and predictive analytics techniques to forecast the outcomes of IP litigation. Recently it has extended the range of data it is mining to include court dockets, enabling new forms of insight and prediction, and it has now moved into multi-district antitrust litigation.
LexPredict developed systems to predict the outcome of Supreme Court cases, at accuracy levels which challenge experienced Supreme Court practitioners.
Premonition uses data mining, analytics and other AI techniques “to expose, for the first time ever, which lawyers win the most before which Judge.”
These proposals, of course, immediately raise the issue of whether or not we are approaching the situation where we have decision by machine.
As I envisage the deployment of AI systems, the analytical process would be seen as part of the triaging or Early Case Assessment process in the Online Court Model, rather than as part of the decision-making process. The advantages of the process lie in the manner in which the information is reduced to a relevant dataset, automatically and faster than could be achieved by human means. Within the context of the Online Court process it could be seen as facilitative rather than determinative. If the case reached the decision-making stage it would, of course, be open to a Judge to consider utilising the “Law as Data” approach, with the Judge retaining the ultimate sign-off. The Judge would find the relevant facts. The machine would process the facts against the existing database that is the law and present the Judge with a number of possible options with supporting material. In that way the decision would still be a human one, albeit machine assisted.
Conclusion
As we embark down this road let us ensure that we do not over-regulate out of fear. Let us ensure that innovation in this exciting field is not stifled and that it continues to develop. The self-aware, self-correcting, self-protecting Skynet scenario is not a realistic one and, in my view, needs to be put to one side as an obstruction and recognised for what it is – a manifestation of the Frankenstein Complex. And perhaps, before we consider whether or not we travel the path suggested in the report, we should make sure that the Frankenstein Complex is put well behind us.