Digital Discourse

The case of Senior v Police [2013] NZFLR 356 has some interesting things to say about the nature of discourse on Internet social media platforms – Facebook in particular.

The case was about a breach of a protection order. Senior was subject to a domestic protection order. The protected person was his former partner. One of the terms of the protection order required Senior “not to engage or threaten to engage in other behaviour including intimidation or harassment, which amounts to psychological abuse of any person”.

Senior placed a post on his Facebook page. It was abusive of his former partner. She was not one of his Facebook friends so would not have automatically received notification of the post and its content. However, the niece of the protected person was one of Senior’s Facebook friends and drew the attention of the protected person to the post. She complained to the Police.

Senior was convicted and took the case on appeal. The issue was whether or not the appellant had, in legal terms, the appropriate mens rea, or intention, or reckless state of mind when lodging this abuse on Facebook.

The Judge, Justice Fogarty, held that very strong personal abuse directed at a former partner, placed on Facebook, read by a large number of friends, some of whom would inevitably have contact in the natural social network with the person being abused, was at the very least highly reckless.

In coming to this conclusion the Judge took an interesting path and took judicial notice of the fact that persons who use Facebook are very aware that the contents of a Facebook page are often communicated to persons beyond the “friends” who use Facebook. When information is put on a Facebook page, to which hundreds of people have access, the persons putting the information on the page know that that information will likely extend way beyond the defined class of “friends”. … It is somewhat improbable to say, which was not said here, “Oh, I never thought it was possible that the person I was abusing could possibly have known about this”.

Judicial notice is a tool used by Judges to recognise certain well known facts without expert evidence. Often judicial notice is deployed in cases of well-known and well-recognised technologies. Judicial notice, for example, has been taken of the accuracy of a GPS device without the necessity of proof of accuracy or how it works.

The observations by Fogarty J constitute a technologically suspect assertion that ignores the nuances of privacy settings and the way in which a user may configure his or her page. For example, depending upon the privacy settings a user may not communicate content to anyone, but only receive messages.

But if we accept the very bald finding of Fogarty J it sends an interesting message about the nature of discourse using social media platforms which rely on the ability to share information among a coterie of, for want of a better word, associates.

Discourse has always involved coteries, be they another individual or a group of individuals. Oral discourse and communication by writing provide examples. In the past communication by letter between individuals carried a high expectation of privacy. On occasion, however, letters might be circulated to others or to a group of others. But again the expectation was that publication would be limited to the coterie.

Edmund Plowden, whose case reports were published in 1571, clearly had circulated manuscript copies of his reports within a coterie of lawyers in the Inns of Court. It was a fear that they might find their way into the hands of a printer and be published without Plowden’s permission or supervision that prompted him to undertake the printing of his reports himself. The aversion to print was also expressed by John Donne, whose controversial tract on suicide, Biathanatos, was circulated within a coterie; but Donne forbade its printing or its burning – at least during his lifetime.

The printing press enabled the wide dissemination of ideas and communications to a wider coterie – the public. Indeed the word “publish” means to make public. The discourse that followed, in the form of tracts, arguments and counter-arguments, was a characteristic of the political and religious landscapes after the sixteenth century. This outpouring of discourse naturally attracted the attention of the authorities who in various ways attempted to moderate or suppress it. One way was to attempt to regulate the technology in the form of the Licensing Acts from 1662 to 1696, the expiry of which led to the debate about the right to copy, culminating in the Statute of Anne in 1710.

The importance of printed discourse was recognised by the fledgling United States Republic in the First Amendment to the Constitution prohibiting government interference with the freedom of speech or the press. The freedom of expression is incorporated into section 14 of the New Zealand Bill of Rights Act 1990 as the right to receive and impart information – a section that clearly recognises the importance of discourse and the exchange of ideas.

Following the printing press other forms of communications technologies such as radio and television have provided the ability to enhance the nature of discourse, although the degeneration of 21st century mainstream television content may well challenge that proposition.

What the internet and digital technologies enable is a form of publication or dissemination that has two elements.

One element is the appearance that information is transmitted instantaneously to both an active (on-line recipient) and a passive (potentially on-line but awaiting) audience. Consider the example of an e-mail. The speed of transmission of emails seems to be instantaneous (in fact it is not) but that enhances our expectations of a prompt response and concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender.

Instant messaging is so called because it is instant. A complex piece of information may be made available via a link on Twitter to a group of followers and may then be retweeted to an exponentially larger audience.
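
The word “exponentially” deserves a moment’s unpacking. On a crude illustrative model (a simplification of my own, not a claim about how any particular platform actually behaves), if each person who receives an item passes it on to f further people, and that sharing repeats for n rounds, the potential audience is of the order of:

    f^n  – so with f = 20 and n = 4, an item could in principle reach 20^4 = 160,000 people after only four rounds of sharing.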

The second element deals with what may be called the democratization of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and communication media that have gone before.

In the past information dissemination was an expensive business. Publishing, broadcast, record and CD production and the like are capital-intensive businesses. It used to cost (and still does cost) a large amount of money and require a significant infrastructure to be involved in information gathering and dissemination. There were a few exceptions, such as very small-scale publishing using duplicators, carbon paper and photocopiers, but generally such dissemination was very small in scale.

Another aspect of early information communication technologies is that they involved a monolithic centralized communication to a distributed audience. The model essentially was one of “one to many” communication or information flow.

The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognized that the Internet allows everyone to become a publisher. No longer is information dissemination centralized and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users.

News organizations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists. Once the information has been communicated it can “go viral”, a term used to describe the phenomenon of exponential dissemination as Internet users share information via e-mail, social networking sites or other Internet information sharing protocols. This characteristic has been recognised by politicians and the recent use of Twitter by President Trump demonstrates the difficulty of articulating complex issues of policy – indeed some might say that any form of coherent articulation beyond 140 characters was beyond the capabilities of Mr Trump.

Internet-based publication or dissemination amplifies the quality of Information Persistence or “the document that does not die”, in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve or eliminate it.

It is possibly the potential for exponential dissemination to which Fogarty J refers in Senior and upon which he bases his sweeping assertion that people who post to Facebook know and therefore intend (or are reckless) that it will go further.

This assumption – and it can only be that – underlies The New Zealand Herald editorial of 7 January which laments the apparent lack of understanding on the part of a digital native who posted a comment on a social media platform and who then took the post down when she found “people were going a bit overboard with threats and racist comments”. Interestingly the editor asks the rhetorical question: “How many years of living on this web will it take before we treat it with more caution?”

The question is no longer relevant. That particular horse has well and truly bolted, and despite the fact that the Herald, like so many other news outlets, now has an online presence, the editor has failed to grasp the fact that the medium itself – rather than its content – is the driver of changed attitudes to debate and indeed communication. As Marshall McLuhan said, “we shape our tools and thereafter our tools shape us”.

For digital natives the verbal or face-to-face communication methods of the pre-digital paradigm are passé. Social media platforms enable communication with a wider coterie of digital acquaintances who may, in Facebook for example, be designated by the word “friends”.

In the same way that verbal discussions can develop a certain amount of heat, and often the way to resolve such confrontations is to disengage, so it is with social media platforms. The digital native in question simply disengaged.

What the Herald editor has done has been to apply the pre-digital concept of coterie communication to a paradigm that dictates a re-evaluation of the nature of discourse and of the expectations of information and its communication. Digital natives are aware that their posted information MAY be shared and it MAY be that information will go viral. Whether they actually intend that outcome is another matter and to ascribe that intent merely by posting is perhaps a bridge too far. Digital technologies have redefined our expectations and our use of and our relationship with information.

In the case of the digital native in the Herald article the digital paradigm enabled distribution of her content to a much wider coterie than she initially expected. The level and quality of the debate increased exponentially.  In these days of unreason, many of the attacks were ad hominem (or perhaps ad feminam would be more accurate). It is a sad reflection of discourse that the messenger rather than the message becomes the target.

The digital native, as I have observed, disengaged and walked away from the extended coterie – a sensible thing to do. She had made her point and exercised her right to freedom of expression. Others exercised theirs. Had she suffered serious emotional distress as a result of the various posts which she received, she may have had a remedy under the Harmful Digital Communications Act 2015. But that is something of a nuclear option, and she chose rather to disengage.

Fine Tuning the Internet?

New Zealand Herald Tech Blogger Juha Saarinen has written an interesting piece in the technology column of today’s Herald. He blames the Internet for 2016 and is gloomy about the future.

He focusses upon cybercrime, ransomware, malware and the hostile nature of the environment, conveniently forgetting that the kinetic world is a hostile place. Social media comes in for a hit, providing a platform for extremists as well as posing a threat to privacy. Hatefulness is poison – no doubt – but I am always reminded when I hear calls to “shut them down” of the title of a book by Anthony Lewis – “Freedom for the Thought That We Hate”.

Freedom of speech is nothing if it is merely the freedom to say things with which we agree, and the echo chamber seems to be a phenomenon of the ghastly post-truth world. One’s commitment to freedom of speech is tested when one is confronted with something truly disagreeable but which, nevertheless, the speaker or writer is free to express. I have always subscribed to Thomas Jefferson’s marketplace of ideas theory. The good ideas will receive traction. The bad ones will fall away. Idealistic? Yes, but rather better than muzzling.

Juha closes by suggesting that the Internet is sliding towards bad things and needs fine tuning to fit people better, arguing that next year wouldn’t be too soon to start on that process.

This sounds like a call for some sort of Internet regulation. Juha properly recognises that the Internet in fact is just the communications backbone. In that respect it is content neutral. It is merely a means of transporting data. The interest lies in what is “bolted on” to the backbone.

Permissionless innovation has always been a positive characteristic of the development of Internet platforms. Perhaps it is this aspect that needs regulation. Perhaps Tim Berners-Lee should have had to go through a bureaucratic process before letting the Web protocol loose on the Internet. Similarly with Google – perhaps a group of code and consequence vetters should ensure that the platform is fit for purpose and “safe” to use – oh, and by the way, if you want to make any changes to the code you will need to have our approval.

Permissionless innovation and the lack of red tape accompanying bolting a platform on to the backbone has been one of the strengths of the Internet.

A few years ago I used a phrase – unadvisedly in the particular context – which I will repeat here. We have met the enemy and he is us. The Internet is not the problem. We are. And if that were not enough, factor in a level of disinhibition that seems to accompany on-line behaviour like trolling and it becomes clear that the problem is people – or rather some people.

We have laws in place already that deal with online behaviour. The controversial Harmful Digital Communications Act 2015 – an example of Internet exceptionalism – regulates behaviour online. The computer crimes sections of the Crimes Act 1961 – getting a bit creaky now after 13 years – deal with online fraud, hacking and systems compromises. Spam is covered by the Unsolicited Electronic Messages Act 2007. Unauthorised file sharing is dealt with under the Copyright Act 1994. Child porn falls within the Films, Videos, and Publications Classification Act 1993. These examples visit consequences upon users who breach the legislative provisions. The laws are there. I seriously doubt that the Internet itself needs further regulation, if indeed you can do that to such a diverse and distributed network.

In the same way that people take steps to protect their property by putting security systems in place to stop burglars, fraudsters or other villains with vile intent, facilities are available to ensure that work or home systems are as secure as they can be.

But there is one thing you can do if the Internet gets to be too much, and that is pull the plug. That is rather harder to do in the kinetic space. The real world, which is REALLY scary, is a lot harder to switch off.

Artificial Intelligence and Law(s)

In Philip K. Dick’s book “Do Androids Dream of Electric Sheep?” – made into the brilliant movie “Blade Runner” directed by Ridley Scott – the genetically engineered replicants, indistinguishable from human beings, were banned from Earth and set to do work on off-world colonies. There was a fear of the threat that these “manufactured” beings could pose to humans.

Isaac Asimov’s extraordinarily successful “Robot” series of short stories and books had a similar premise –  that intelligent robots would pose a threat to humans. In “Androids” the way that the replicants were regulated was that they were shipped off-world and if they returned to Earth they were hunted down and “retired”. Asimov’s regulatory solution was a little more nuanced. Robots, upon the creation of their positronic brains, were programmed with the Three Laws of Robotics. These were as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws were the foundation of all the tensions that arose in Asimov’s stories. How were the Three Laws to be applied? What happens when there is a conflict? Which rule prevails?

The stories are classified as science fiction. I prefer to treat them as examples of statutory interpretation. But underpinning the Three Laws and the reason for them was what Asimov called “The Frankenstein Complex” – a term he coined for fear of mechanical men or created beings that resemble human beings. And his answer to that fear and how it could be mitigated was the Three Laws.

A similar call recently went out about how we should deal with Artificial Intelligence. A report entitled “Determining Our Future: Artificial Intelligence”, written collaboratively by people from the Institute of Directors and the law firm Chapman Tripp, whilst pointing out the not insubstantial benefits that artificial intelligence or “smart systems” may provide, has a significant undertone of concern.

The report calls for the Government to establish a high-level working group on AI which should consider:

“the potential impacts of AI on New Zealand, identifying major areas of opportunity and concern, and making recommendations about how New Zealand should prepare for AI-driven change.”

The writers consider that AI is an extraordinary challenge for our future and that the establishment of a high-level working group is a critical first step to help New Zealand rise to that challenge. It seems to be the New Zealand way to look to the Government to solve everything, problematical or otherwise, which says interesting things about self-reliance.

The report is an interesting one. It acknowledges the first real problem, which is how we define AI. What exactly does it encompass? Is it the mimicking of human cognitive functions such as learning or problem solving? Or is it making machines intelligent – intelligence being the quality that enables an entity to function appropriately and with foresight in its environment?

Even although there seems to be an inability to settle upon a definition, a more fruitful part of the examination is the way in which “smart” computing systems are used in a range of industries, and there is the observation that there has been a significant increase in investment in such “smart” systems by a number of players.

The disruptive impact of AI is then considered. This is not new. One of the realities of the Digital Paradigm is continuing disruptive change. There is little time to catch breath between getting used to one new thing and having to confront and deal with a new new thing. Disruption has been taking place from before the Digital Paradigm and indeed back to the First Industrial Revolution.

There is a recognition that we need to prepare for the disruptive effects of any new technology, but what the report fails to consider is the way in which disruptive technologies may ultimately be transformative. There is some speculation that after an initial period of disruption to established skills and industries, AI may lead to greater employment as new work becomes available in areas that have not been automated.

The sense of gloom begins to increase as the report moves to consider legal and policy issues. Although the use of AI in the legal or court process – I prefer to use the term expert legal systems – is not discussed, issues such as whether AI systems should be recognised as persons are mentioned. In this time of Assisted Birth Technologies and other than purely natural creation of life, it is not an easy question to answer. “Created by a human” doesn’t cut it because that is the way that the race has propagated itself for millennia. “Artificially created by a human” may encompass artificial insemination and confine people who are otherwise humans to some limbo status as a result. But really, what are we talking about? We are talking about MACHINE intelligence that is driven by algorithms. I don’t think we are talking about organic systems – at least not yet.

But it is the last question in that section that gives me cause for pause. Are New Zealand’s regulatory and legislative processes adaptive enough to respond to and encourage innovations in AI? What exactly is meant by that? Should we have regulatory systems in place to control AI or to develop it further? That has to be read within the context of the introductory paragraph:

“AI presents substantial legal and regulatory challenges. These challenges include problems with controlling and foreseeing the actions of autonomous systems.”

Then the report raises the “Frankenstein Complex.” The introductory paragraph reads as follows:

“Leaders in many fields have voiced concerns over safety and the risk of losing control of AI systems. Initially the subject of science fiction (think Skynet in the Terminator movies), these concerns are now tangible in certain types of safety-critical AI applications – such as vehicles and weapons platforms – where it may be necessary to retain some form of human control.”

The report goes on to state:

Similar concerns exist in relation to potential threats posed by self-improving AI systems. Elon Musk, in a 2014 interview at MIT, famously called AI “our greatest existential threat”.
Professor Stephen Hawking, in a 2014 interview with BBC said that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by AI”.

Stanford’s One-Hundred Year Study of AI notes that

“we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity”.

Google’s DeepMind lab has developed an AI ‘off-switch’, while others are developing a principles-based framework to address security.

Then the question is asked

“What controls and limitations should be placed on AI technology?”

I think the answer would have to be as few as possible, consistent with human safety, that allow for innovation and the continued development of AI. It must be disturbing to see such eminent persons as Hawking and Musk expressing concerns about the future of AI. The answer to the machine lies in the machine, as Google has demonstrated – turn it off if need be.

The report closes with the following observation.

The potential economic and social opportunities from AI technologies are immense. The public and private sectors must move promptly and together to ensure we are prepared to reap the benefits, and address the risks of AI.

And regulation is the answer? I think not.

Artificial Intelligence as a Tool for Lawyers

My particular interest in AI has been in its application to the law, so let’s have a brief look at that issue. Viewed dispassionately, the proposals are not “Orwellian”, nor do they suggest the elevation of “Terminator J” to the Bench. Such a look may also serve to put a different perspective on AI and the future.

In a recent article, Lex Machina’s Chief Data Scientist observed that data analytics refined information to match specific situations.

“Picture this: You’re building an antitrust case in Central California and want to get an idea of potential outcomes based on everything from judges, to districts, to decisions and length of litigation. In days of law past, coming up with an answer might involve walking down the hall and asking a partner or two about their experiences in such matters, then begin writing a budget around a presumed time frame. “

Howard says that analytics change the stakes. “Not only are you getting a more precise answer,” he attests, “but you’re getting an answer that is based on more relevant data.”

Putting the matter very simplistically, legal information, either in the form of statutes or case law, is data which has meaning when properly analysed or interpreted. Apart from the difficulties in locating such data, the analytical process is done by lawyers or other trained professionals.

The “Law as Data” approach uses data analysis and analytics which match fact situations with existing legal rules.

Already a form of data analysis or AI variant is available in the form of databases such as LexisNexis, Westlaw, NZLII, AustLII or BAILII. Lexis and Westlaw have applied natural language processing (NLP) techniques to legal research for 10-plus years. The core NLP algorithms were all published in academic journals long ago and are readily available. The hard (very hard) work is practical implementation. Legal research innovators like Fastcase and RavelLaw have done that hard work, and added visualizations to improve the utility of results.

Using LexisNexis or Westlaw, the usual process involves the construction of a search which, depending upon the parameters used, will return a limited or extensive dataset. It is at that point that human analysis takes over.

What if the entire corpus of legal information were reduced to a machine-readable dataset? This would be a form of Big Data with a vengeance, but it is a necessary starting point. The issue then is to:

  (a) Reduce the dataset to information that is relevant and manageable
  (b) Deploy tools that would measure the returned results against the facts of a particular case to predict a likely outcome.

Part (a) is relatively straightforward. There are a number of methodologies and software tools deployed in the e-Discovery space that perform this function. Technology-assisted review (TAR, or predictive coding) uses natural language and machine learning techniques against the gigantic data sets of e-discovery. TAR has been proven to be faster, better, cheaper and much more consistent than human-powered review (HPR). It is assisted review, in two senses. First, the technology needs to be assisted; it needs to be trained by senior lawyers very knowledgeable about the case. Second, the lawyers are assisted by the technology, and the careful statistical thinking that must be done to use it wisely. Thus, lawyers are not replaced, though they will be fewer in number. TAR is the success story of machine learning in the law. It would be even bigger but for the slow pace of adoption by both lawyers and their clients.
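
By way of illustration only, the following is a minimal sketch of the kind of supervised text classification that sits behind TAR, written in Python using the scikit-learn library. The documents, labels and wording below are invented for the example; commercial TAR products add sampling, validation protocols and iterative rounds of training, none of which is shown here.

    # Minimal TAR-style sketch: senior lawyers label a small seed set of documents,
    # a text classifier learns from those labels, and the unreviewed corpus is then
    # ranked so that the likely-relevant material surfaces first for human review.
    # Illustrative only - the documents and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    seed_docs = [
        "email discussing price fixing agreement with a competitor",
        "minutes recording market allocation between suppliers",
        "office social club newsletter",
        "invoice for a stationery order",
    ]
    seed_labels = [1, 1, 0, 0]  # 1 = relevant to the case, 0 = not relevant

    unreviewed = [
        "note of telephone call about price fixing of tenders",
        "staff car parking roster",
    ]

    vectoriser = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectoriser.fit_transform(seed_docs), seed_labels)

    # Rank the unreviewed documents by predicted probability of relevance
    scores = model.predict_proba(vectoriser.transform(unreviewed))[:, 1]
    for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {doc}")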

Part (b) would require the development of the necessary algorithms that could undertake the comparative and predictive analysis, together with a form of probability analysis to generate an outcome that would be useful and informative. There are already variants at work now in the field of what is known as Outcome Prediction utilising cognitive technologies.
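
Purely as a sketch of what part (b) might involve (the features, figures and model choice here are my own invention, not a description of Lex Machina, LexPredict or any other product), outcome prediction amounts to learning a mapping from structured case features to an outcome probability:

    # Illustrative outcome-prediction sketch: fit a model on features of past cases
    # and estimate an outcome probability for a new fact situation.
    # The courts, judges, durations and outcomes are invented toy data.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction import DictVectorizer

    past_cases = [
        {"court": "CD Cal", "case_type": "antitrust", "judge": "A", "duration_months": 18},
        {"court": "CD Cal", "case_type": "antitrust", "judge": "B", "duration_months": 30},
        {"court": "ND Cal", "case_type": "patent", "judge": "C", "duration_months": 12},
        {"court": "ND Cal", "case_type": "patent", "judge": "A", "duration_months": 24},
    ]
    outcomes = [1, 0, 1, 0]  # 1 = claimant succeeded, 0 = claimant failed

    vectoriser = DictVectorizer(sparse=False)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(vectoriser.fit_transform(past_cases), outcomes)

    # Probability estimate for a new, hypothetical antitrust filing
    new_case = {"court": "CD Cal", "case_type": "antitrust", "judge": "A", "duration_months": 20}
    probability = model.predict_proba(vectoriser.transform([new_case]))[0][1]
    print(f"Estimated probability of success: {probability:.0%}")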

There are a number of examples of legal analytics tools. Lex Machina, having developed a set of intellectual property (IP) case data, uses data mining and predictive analytics techniques to forecast outcomes of IP litigation. Recently, it has extended the range of data it is mining to include court dockets, enabling new forms of insight and prediction. Now they have moved into multi-District anti-trust litigation.

LexPredict developed systems to predict the outcome of Supreme Court cases, at accuracy levels which challenge experienced Supreme Court practitioners.

Premonition uses data mining, analytics and other AI techniques “to expose, for the first time ever, which lawyers win the most before which Judge.”

These proposals, of course, immediately raise the issue of whether or not we are approaching the situation where we have decision by machine.

As I envisage the deployment of AI systems, the analytical process would be seen as a part of the triaging or Early Case Assessment process in the Online Court Model, rather than as part of the decision making process. The advantages of the process lie in the manner in which the information is reduced to a relevant dataset, automatically and faster than could be achieved by human means. Within the context of the Online Court process it could be seen as facilitative rather than determinative. If the case reached the decision making process it would, of course, be open to a Judge to consider utilising the “Law as Data” approach, retaining the ultimate sign-off. The Judge would find the relevant facts. The machine would process the facts against the existing database that is the law and present the Judge with a number of possible options with supporting material. In that way the decision would still be a human one, albeit machine assisted.
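
To make the facilitative role concrete, a “Law as Data” matching step might look something like the following hedged sketch, which simply ranks a toy database of past decisions by textual similarity to the judge’s found facts and presents the candidates, leaving the decision entirely to the human. The case names and fact summaries are invented.

    # Sketch of a facilitative "Law as Data" step: rank past decisions by similarity
    # to the judge's found facts and present the closest candidates with their scores.
    # The decision remains with the judge; case names and facts are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_decisions = {
        "Case A v B": "tenant withheld rent after the landlord failed to repair a leaking roof",
        "Case C v D": "purchaser cancelled the contract after misrepresentation of vehicle mileage",
        "Case E v F": "neighbours disputed the sharing of fencing costs on a common boundary",
    }

    found_facts = "the landlord did not repair a leaking roof and the tenant stopped paying rent"

    vectoriser = TfidfVectorizer()
    decision_matrix = vectoriser.fit_transform(list(past_decisions.values()))
    similarity = cosine_similarity(vectoriser.transform([found_facts]), decision_matrix).ravel()

    # Present ranked options to the judge with supporting similarity scores
    for name, score in sorted(zip(past_decisions, similarity), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {name}")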

Conclusion

As we embark down this road let us ensure that we do not over-regulate out of fear. Let us ensure that innovation in this exciting field is not stifled and that it continues to develop. The self-aware, self-correcting, self-protecting Skynet scenario is not a realistic one and, in my view, needs to be put to one side as an obstruction and recognised for what it is – a manifestation of the Frankenstein complex. And perhaps, before we consider whether or not we travel the path suggested in the report we should make sure that the Frankenstein complex is put well behind us.

 

Technological Competence for Lawyers

The rise of technology and its pervasive effect on all our lives – whether we like it or not – has implications for everyone involved in the practice of law. Conveyancing transactions are done on-line. Some company documents can only be filed on line. The use of computer systems, on-line legal research, networked communications and the Internet all feature to some extent in legal offices.

Yet how technologically aware are lawyers?

This is a matter that has been addressed as a matter of competence to practice in the United States. In 2012 the American Bar Association made several changes to its Model Rules and commentary.

The starting point is basic competence. Rule 1.1 states:

“A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

Comment 8 to the Rule states what is required to achieve that level of competence.

“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”

On this basis, lawyers cannot plead ignorance or inability regarding the use of technology and its associated risks.

So what is the technology that needs to be understood? First is the equipment that forms part of day-to-day legal practice such as computers, tablets, smart phones, scanners, printers or copiers. This category also includes the use of email, and the electronic storage of documents and other information.

Then there is an understanding of the software and programs that are used that may streamline or simplify legal practice. This may include programs for storing, managing and reviewing electronically stored information as well as law practice and management software including matters such as client information, contacts, time entry, billing, document management, docketing and calendaring.

Lawyers also need to be aware of the technology used by their clients and how that has an impact upon business as well as technology that may impose liability on clients, such as, for example, GPS technology, electronic logging, or automated driving technology.

For litigators there has to be knowledge of and familiarity with courtroom technology.

All of this may seem pretty intimidating but in today’s technological age, beset as we are with continuing disruptive change, it is necessary.

If practitioners are concerned at the ABA proposals, the Florida Bar has gone one step further. Application was made to the Supreme Court of Florida in September 2016 to amend the Bar Rules to require all lawyers to maintain technological competence by undertaking 3 CLE hours of approved technological education courses. Florida lawyers have to complete 33 hours every 3 years. The standard comes into effect on 1 January 2017.

Interestingly enough there has been little resistance from Florida practitioners. The benefits seem to have made themselves clear.

The question that comes to mind is whether or not there should be a technological competence requirement along the lines proposed by the ABA for New Zealand law practitioners or whether the New Zealand Law Society should adopt some form of advisory about technological competence and upskilling for practitioners.

Technology for Lawyers: A competence requirement?

There has been a bit of publicity of late about the difficulties that some law firms are having in adapting to the continuing disruptive change that characterises the Digital Paradigm. The introduction and use of information technologies provides one example.

A lawyer’s stock in trade is not time, as has been suggested in the past. It is, in fact, information. The law office is an information hub with information flows coming in and out in the form of instructions, advice, the provision of information necessary to complete transactions, the preparation of materials to inform the Court of the nature of a dispute and the like. Information technologies must be a part of everyday legal practice. Technology enables law firms to do better in the provision of their services, and can assist in providing clients with cheaper, high-quality and nimble services.

Smaller law firms face a real challenge. There is not the division of labour in a smaller firm, tuned to fulfilling client needs, to step back and take a deep perspective view of the future. Big law firms are able to attend to this aspect of practice management and often have internal teams working on little else. The smaller firms and the sole practitioners need to focus on things like websites, digital marketing, social media, e-discovery and cloud-based tools for practice and case management—hopefully giving them a competitive advantage in the job market and spurring the use of new technology in everyday legal practice. Cloud technology, for example, “is removing many obstacles by reducing hardware and in-house IT investments, quelling cybersecurity concerns, easing the complexity of scaling and updating software, and providing better overall access to general computing power. Additionally, as vendors continue to place more of a premium on providing online training modules for their products, the last great barrier to remain will simply be motivation” according to Mike Susong of Legaltech News.

At a recent forum titled “The Future of Legal Services in the United States: The ABA Issues a Clarion Call for Change”, discussions centred on the problems citizens faced getting access to legal services. William Hubbard, immediate past-president of the American Bar Association, suggested lawyers think more creatively about how they deliver legal services, meaning that lawyers should “embrace technology and the benefits technology can bring to provide new avenues to provide legal services to those in need.”

There is a significant recognition, especially by the ABA, of the importance of technological knowledge and understanding as part of professional competence requirements. Four years ago, in 2012, the commentary to Rule 1.1 of the ABA Model Rules of Professional Conduct was amended to provide that lawyers have a duty to keep abreast of the benefits and risks associated with technology.

In September 2016 the Florida Supreme Court took the matter one step further, issuing an opinion adopting the Florida Bar Association’s proposal for mandatory technology Continuing Legal Education (CLE).

“In addition to adding the three-hour requirement, the court also amended a comment to its rule on lawyer competence to say that lawyers could retain non-lawyer advisers with “established technological competence in the relevant field.”

The court added that competent representation may also involve cybersecurity and safeguarding confidential information. “In order to maintain the requisite knowledge and skill, a lawyer should engage in continuing study and education, including an understanding of the risks and benefits associated with the use of technology,” the court held.”

It will be interesting to see if this move catches on. There can be no doubt that CLE programmes contain technology oriented modules, but the Florida move now makes it mandatory. This must be viewed as a necessary step as we move further and further into the Digital Paradigm and more and more aspects of technology permeate the legal landscape.

I have argued in the past that lawyers who argue technology related cases need to understand the technology and how it works. There is no point arguing a case about publication of material on social media without knowing how the platform operates, what its parameters and settings are, what the settings were at the relevant time and, importantly, how those settings can be located and examined. There is not really a “one size fits all” approach that can be adopted to social media and it would be unwise to make generalised assumptions about the qualities and operation of a platform.

There is little point in attending a case management conference about e-discovery unless the lawyers are aware of the various technologies that are available and, as importantly, how they work so that a reasonable and proportionate discovery proposal can be reached.

The nature of digital information, in and of itself, is paradigmatically different from that which is recorded on paper. Lawyers must understand this and recognise that although content is king most of the time, what lies beneath the content can be as informative, if not more so, than the content on the face of the document.

There are many more examples but the message is clear. Lawyers cannot be resistant to the Digital winds of change that are blowing. Bend, adapt, adopt must be the message for lawyers in the Digital Paradigm.

Memory Illusions and Cybernannies

Over the last week I read a couple of very interesting books. One was Dr Julia Shaw’s The Memory Illusion. Dr Shaw describes herself as a “memory hacker” and has a YouTube presence where she explains a number of the issues that arise in her book.

The other book was The Cyber Effect by Dr Mary Aiken who reminds us on a number of occasions in every chapter that she is a trained cyberpsychologist and cyberbehavioural specialist and who was a consultant for CSI-Cyber which, having watched a few episodes, I abandoned. Regrettably I don’t see that qualification as a recommendation, but that is a subjective view and I put it to one side.

Both books were fascinating. Julia Shaw’s book in my view should be required reading for lawyers and judges. We place a considerable amount of emphasis upon memory, assisted by the way in which a witness presents him or herself – what we call demeanour. Demeanour has been well and truly discredited by Robert Fisher QC in an article entitled “The Demeanour Fallacy” [2014] NZ Law Review 575. The issue has also been covered by Chris Gallavin in a piece entitled “Demeanour Evidence as the backbone of the adversarial process”, Lawtalk Issue 834, 14 March 2014 http://www.lawsociety.org.nz/lawtalk/issue-837/demeanour-evidence-as-the-backbone-of-the-adversarial-process

A careful reading of The Memory Illusion is rewarding although worrisome. The chapter on false memories, evidence and the way in which investigators may conclude that “where there is smoke there is fire” along with suggestive interviewing techniques is quite disturbing and horrifying at times.

But the book is more than that, although the chapter on false memories, particularly the discussions about memory retrieval techniques, was very interesting. The book examines the nature of memory and how memories develop and shift over time, often in a deceptive way. The book also emphasises how the power of suggestion can influence memory. What does this mean – that everyone is a liar to some degree? Of course not. A liar is a person who tells a falsehood knowing it to be false. Slippery memory, as Sir Edward Coke described it, means that what we are saying we believe to be true even although, objectively, it is not.

A skilful cross-examiner knows how to work on memory and highlight its fallibility. If the lawyer can get the witness in a criminal trial to acknowledge that he or she cannot be sure, the battle is pretty well won. But even the most skilful cross-examiner will benefit from a reading of The Memory Illusion. It will add a number of additional arrows to the forensic armoury. For me the book emphasises the risks of determining criminal liability on memory or recalled facts alone. A healthy amount of scepticism and a reluctance to take an account simply and uncritically at face value is a lesson I draw from the book.

The Cyber Effect is about how technology is changing human behaviour. Although Dr Aiken starts out by stating the advantages of the Internet and new communications technologies, I fear that within a few pages the problems start with the suggestion that cyberspace is an actual place. Although Dr Aiken answers unequivocally in the affirmative, it clearly is not. I am not sure that it would be helpful to try and define cyberspace – it is many things to many people. The term was coined by William Gibson in his astonishingly insightful Neuromancer and in subsequent books Gibson imagines the network (I use the term generically) as a place. But it isn’t. The Internet is no more and no less than a transport system to which a number of platforms and applications have been bolted. Its purpose – communication. But it is communication plus interactivity and it is that upon which Aiken relies to support her argument. If that gives rise to a “place” then may I congratulate her imagination. The printing press – a form of mechanised writing that revolutionised intellectual activity in Early-modern Europe – didn’t create a new “place”. It enabled alternative means of communication. The Printing Press was the first Information Technology. And it was roundly criticised as well.

Although the book purports to explain how new technologies influence human behaviour it doesn’t really offer a convincing argument. I have often quoted the phrase attributed to McLuhan – we shape our tools and thereafter our tools shape us – and I was hoping for a rational expansion of that theory. It was not to be. Instead it was a collection of horror stories about how people and technology have had problems. And so we get stories of kids with technology, the problems of cyberbullying, the issues of on-line relationships, the misnamed Deep Web when she really means the Dark Web – all the familiar tales attributing all sorts of bizarre behaviours to technology – which is correct – and suggesting that this could become the norm.

What Dr Aiken fails to see is that by the time we recognise the problems with the technology it is too late. I assume that Dr Aiken is a Digital Immigrant, and she certainly espouses the cause that our established values are slipping away in the face of an unrelenting onslaught of cyber-bad stuff. But as I say, the changes have already taken place. By the end of the book she makes her position clear (although she misquotes the comments Robert Bolt attributed to Thomas More in A Man for All Seasons, which the historical More would never have said). She is pro-social order in cyberspace, even if that means governance or regulation, and she makes no apology for that.

Dr Aiken is free to hold her position and to advocate it and she argues her case well in her book. But it is all a bit unrelenting, all a bit tiresome these tales of Internet woe. It is clear that if Dr Aiken had her way the very qualities that distinguish the Digital Paradigm from what has gone before, including continuous disruptive and transformative change and permissionless innovation, will be hobbled and restricted in a Nanny Net.

For another review of The Cyber Effect see here

Rozenberg QC on the Online Court – A Review

Joshua Rozenberg QC is an English journalist and commentator on matters legal. I have read his articles and commentaries now for some time. He is thoughtful and balanced, unafraid to call it as he sees it. He practised as a barrister before moving into journalism and was appointed honorary Queen’s Counsel for his work as the “pre-eminent legal analyst of modern times.”

So it was that I saw a reference to his monograph entitled “The Online Court – will it work?” on his Facebook page. Rozenberg conceded that it was too long for any of his normal outlets to publish but this piece was available for download from Amazon. He hastened to point out that although much of his work is available at no charge the essay was not commissioned, sponsored nor supported by advertising so a small charge of £1.99 was levied at the UK Amazon store and $US2.49 at Amazon.com. A reasonable fee under the circumstances. Just one problem. The essay was available only to UK customers.

I have written before about the bizarre practice of geoblocking in an on-line borderless world. My earlier encounters with this loathsome practice have been in attempts to purchase software and video content. The physical product isn’t a problem. A proxy forwarding address in the US or UK solves most difficulties. However, additional issues arise when one is dealing solely with digital content. Without an English address, obtaining the content seems nigh impossible. What I cannot understand is why Amazon would want to restrict distribution in this way. After all, place doesn’t matter in the delivery of online content. No greater delivery or packaging costs are incurred. No explanation is given for restricting distribution.

However, that said, Rozenberg’s essay makes fascinating reading. He opens his discussion with the background to the current reforms, starting with early attempts which were not very successful because they were not judge-led – indeed an essential requirement in any proposed reform of the Courts process. After all, next to Court staff, Judges are the principal users of the Court system. Furthermore, when I talk about “Judge-led” I don’t mean that judges should be kept informed about what the IT people are doing, but that the judges actively lead the process. This was enabled in England by the formation of the Judicial Office, which was set up in 2006 under the leadership of the Lord Chief Justice. The development of a single courts service further assisted. Rozenberg sets out the way in which the current judicial leadership role came to be in a helpful overview.

He then passes on to cover the reform programme of Her Majesty’s Courts and Tribunals Service (HMCTS) and the three strands of work suggested by Lord Justice Briggs:

  • The use of modern IT
  • Less reliance on Court buildings
  • The allocation of some work done by Judges to case lawyers

The allocation of funding in 2014 has remained in place, an achievement Rozenberg attributes to the influence of the Lord Chief Justice, Lord Thomas of Cwmgiedd.

Rozenberg then goes on to summarise the various projects, numbering 21 in total, some of which, like the eJudiciary service, are already up and running. For those of us looking at the English IT reforms from the outside, this is an invaluable snapshot of where things are and where it is hoped they may go. Most of the publicity that one sees about the reforms focuses upon the Online Court proposals but Rozenberg makes it clear that this is only a part of the story. I was impressed with the scope of the proposals. I was familiar with the eJudiciary service, having had it demonstrated to me by His Honour Judge John Tanzer in 2015. I was also familiar with the Rolls project but other elements were new.

Rozenberg then passes on to deal with the online court, which is probably the most revolutionary proposal. He covers the initial proposals by Professor Richard Susskind and Lord Justice Briggs. The Online Court involves the innovative use of technology. Two paths were available. One was to use technology to imitate the existing system. This would merely be a digital replication of a system that would be recognisable to William Garrow or Charles Dickens. Digital technologies allow for disruptive change. Disruption in and of itself cannot be seen as an end. But transformation by means of disruption, especially if that transformation improves, in this case, just outcomes, is to be applauded.

The Susskind and Briggs proposals change the emphasis of the Court process. In the past, the process has been geared towards getting the case before the Court. That can be somewhat complex and that complexity will invariably involve the participation of lawyers, assisting the litigants through the procedural shoals to a hearing.

The online process is geared towards introducing the possibility of resolution from the very beginning. At all stages of the process resolution is the objective, rather than waiting for the judge to resolve the matter. Thus the various stages of the process offer opportunities for resolution, rather than being milestones that have to be passed on the way to a hearing.

The issue that has given cause for concern is that lawyers are not seen as essential to the process. Rozenberg covers this real area of concern by pointing out that lawyers will have a different role in the process, rather than being excluded from it altogether. The use of an App will assist litigants, although there is nothing to prevent a litigant seeking legal assistance or advice. But one of the objectives of the new process is to improve access to justice and if that can be achieved it will be a significant accomplishment and a validation of the use of IT.

Rozenberg examines the feasibility of the system under the ambiguous heading “Will IT work”. There are two questions posed here. Will I(nformation) T(echnology) work, which puts the focus upon the way in which the IT projects are put together? Or will IT (the big strategic plan) work? It is the first question that Rozenberg attempts to answer although, because the projects are IT dependent, the answer to one will answer the other.

Rozenberg ends on a cautious note, stating, correctly in my view, that digitising the courts is the biggest challenge to the judicial system in 150 years and it is a reform that must not fail, if the restoration and maintenance of access to justice for those who need it most is to take place.

The essay or publication is an excellent example of the enabling power of technology. A close examination of highly significant and innovative approaches to the justice system by England’s leading legal commentator adds to informed debate. Rozenberg is to be congratulated for taking the initiative to put the information on line. It is a pity that Amazon’s policies limit its accessibility.

But for me the essay was extremely valuable in that it provides meaningful context to the on-line court – an innovation in which I have been very interested since I met and spoke with Professor Susskind about it in May of last year. That broader view, and the scope of the IT projects that are in train for the English system, give added weight to Rozenberg’s conclusion. It is clearly written, as one would expect, well worth the £1.99 from Amazon, and a valuable assessment of the state of English Courts IT at the crossroads.