The fault, dear Brutus, is not in social media/ But in ourselves


The title of this post is a paraphrase of a couple of lines from “Julius Caesar”, Act 1 Scene ii, lines 140 – 141 – apologies to Will Shakespeare of Stratford.

This post is a companion piece to an earlier one that I wrote about misinformation, to which reference is made below. Lest there be any doubt, I am not advocating for misinformation or disinformation. I dislike both. I am concerned with objective fact and reasoned opinion in an effort to ascertain truth, and have been all my life.

It is easy – perhaps a soft option – to lay the spread of misinformation at the feet of social media. After all, people post to social media and in a sense the information remains passive until someone else reads it. And therein lies the problem. In my last post I advocated a position based on the employment of common sense and critical faculties – qualities that we all possess.

In this piece I discuss the importance of understanding the medium as a prelude to considering the “responsibility” of social media for the dissemination of misinformation. Exponential dissemination, as I argue, is an essential characteristic of digital communications systems and impacts upon our information expectations.

In an earlier post I observed that the target of the concerns about misinformation is “the Internet” – a generalized target that encompasses a world wide communications network. A more recent comment on disinformation attributes its spread to social media.

In a sense, both critiques are correct, but they both focus on the content layer rather than upon the medium itself. And it is when we understand the nature of the medium that we realise that in many respects it enables many behaviours, some of which are execrable. But the problem is that the cat is out of the bag, the djinni is out of the lamp – whichever metaphor you prefer.

What we are facing are paradigmatically different behaviours in the communications space from anything that has gone before. And because the paradigm is a different one from that to which we are accustomed, we yearn to push back, to return to things “the way they were”. And in saying this we hearken back to an earlier communications paradigm that was, as is the present paradigm, defined and underpinned by the media of communication.

When Marshall McLuhan cryptically said “The Medium is the Message” he was saying that in understanding the impact of the message we must first understand the impact of the medium or media of communication. And although we tend to focus upon what we see and hear – the content layer – the real game changer lies much deeper than that – within the medium itself. It is the medium that enables behaviours and, in many respects and as a result of continued use, impacts upon the values that validate those behaviours.

Every medium of communication possesses certain properties or affordances that are not immediately obvious. My starting point is the analytical framework developed by the historian Elizabeth Eisenstein in her seminal work The Printing Press as an Agent of Change.[1] In that work Eisenstein identified a number of qualities present in print technology that differentiated the communication of information in print from that communicated in manuscript. These qualities were not the obvious ones of machine based creation of content but focussed upon the way in which printed material was going to and did impact upon the intellectual activities of educated elites in Early-modern Europe. These qualities were beneath the content layer; not immediately apparent but vital in considering the way in which readers dealt with and related to information and ultimately had an impact upon their expectations of information and how, in turn, they themselves used print to communicate.

Using McLuhan’s suggestion and developing the way in which Eisenstein identified her underlying qualities of print technology, I have identified a number of different qualities[2], some of which overlap and some of which are complementary.

However, rather than merely identify these qualities I have developed a form of taxonomy or classes of qualities which are occupied by specific exemplars.[3]

For example, I have identified what I call Environmental Qualities. They arise from the context within which digital technologies develop and are descriptive of the nature of change within that context, and some of the underlying factors which drive that change. Because digital technologies primarily involve the development of software tools which operate on relatively standard computing equipment, the capital investment in hardware and manufacturing infrastructure is not present in the development of digital tools, although it certainly is in the development of the hardware that those tools require.

Thus the development of digital software can take place in any one of a number of informal locations where the only requirements are a power supply, a computer and a programmer or programmers. This lack of infrastructural requirements enables the development of software tools which can be deployed via the non-regulated environment of the Internet giving rise to the qualities of permissionless innovation and continuing disruptive change which are discussed in detail.

A second set of qualities I have identified as technical qualities. These are so classified because they underlie some of the technical aspects of the new digital technologies. Some of these qualities are present in a different form in the print paradigm. Eisenstein identified dissemination of content as a quality of print that was not present within the scribal paradigm. I have identified exponential dissemination as an example of a technical quality – the way in which the technology enables not only the spread of content as was enabled by print, but dissemination at a significantly accelerated rate with a greater reach than was enabled by physical dissemination.
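To make the idea of exponential dissemination concrete, consider a toy model – my own illustration, not drawn from Eisenstein or from any platform’s actual mechanics – in which every person who receives an item of content passes it on to the same number of new people. Reach then compounds with every sharing “hop”, which is why digital dissemination differs in degree, and arguably in kind, from the physical distribution of printed copies.

```python
# A minimal sketch of compounding reach. The figures (5 shares per recipient,
# 6 hops) are hypothetical, chosen only to illustrate the shape of the curve.
def reach_after_hops(shares_per_person: int, hops: int) -> int:
    """Total people reached if every recipient shares to the same number of new people."""
    total = 1  # the original poster
    for hop in range(1, hops + 1):
        total += shares_per_person ** hop  # new recipients added at this hop
    return total

if __name__ == "__main__":
    print(reach_after_hops(shares_per_person=5, hops=6))  # 19531 people after six hops
```

A print run, by contrast, reaches only as many readers as there are physical copies and hands to pass them along.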

Another of the qualities that I identify as a technical one is that of information persistence, summed up in the phrase “the document that does not die.” Once information has been released on to Internet platforms the author or original disseminator loses control of that content. As digital information travels through a multitude of servers, copies are made en route, meaning that the information is potentially retrievable even though it may have been removed from its original source.
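The point about copies being made en route can be shown with a deliberately simplified sketch. The “servers” below are hypothetical placeholders rather than a description of any real network protocol, but the principle is the same: once copies are cached along the way, deleting the original does not delete the document.

```python
# A toy relay chain illustrating "the document that does not die": each hop
# keeps a cached copy as the message passes through, so removing the original
# leaves retrievable copies behind.
servers = {"origin": [], "relay_1": [], "relay_2": [], "destination": []}

def publish(message: str) -> None:
    for cache in servers.values():   # the message traverses each hop in turn...
        cache.append(message)        # ...and a copy is retained at each hop

publish("controversial post")
servers["origin"].clear()            # the author deletes the original

still_holding = [name for name, cache in servers.items() if "controversial post" in cache]
print(still_holding)                 # ['relay_1', 'relay_2', 'destination']
```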

Other examples of “technical qualities” include the way in which linear progress through information is challenged by navigation via hypertext links in what I call the delinearisation of information; the dynamic nature of information and its malleability in digital format; the way in which seemingly limitless capacity allows for storage of a greater amount of information than was previously considered possible; the apparent non-coherence of digital information and the need for the intermediation of hardware and software to render it intelligible; and the problem of obsolescence of information, caused by loss arising not from deterioration of the medium but from the unwillingness of software companies to support earlier iterations of the software that created a now inaccessible version of the content. All are aspects of technical qualities that underpin the content of digital information.

The third category of qualities comprises what I call user associated qualities – qualities that arise in the behaviour of users in response to digital information technologies. Among these user associated qualities are the searchability of digital information and its associated availability and retrievability, arising from the development of ever more sophisticated search algorithms and platforms, and the ability of users to participate in the creation and use of content as a result of the interactive nature of digital technologies, in particular social media.

In some respects these qualities overlap – they do not stand alone. Indeed the searchability of information presents its own special difficulties. Locating information on the network was a problem even before the Internet went commercial. There were search tools such as Gopher in the early days, but the advent of sophisticated algorithm-driven search tools such as Google has changed the landscape entirely.

Algorithms also select and promote posts and information on social media and associated platforms, and frequently select information that is “high engagement”. The algorithms that curate content do so to drive increased engagement. Thus we have a merging of searchability and user participation. The problem is that this imperative of increased engagement seems to attract users who are confused and often gullible and who seek information that confirms their worst fears. For them, social media becomes an echo chamber. But although it is the content that they seek, the availability of the content arises from the inherent qualities of the medium.
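For readers who prefer the mechanism to the metaphor, the sketch below is a deliberately crude model of engagement-driven curation. It is my own illustration – no platform publishes its ranking code, and the weights here are invented – but it demonstrates the essential point: a ranker built only on likes, shares and comments surfaces whatever is most engaging, and nothing in it tests whether the content is true.

```python
# A toy engagement-driven curator: posts are ranked purely by the interactions
# they have already attracted. Accuracy never enters the calculation.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments are treated as stronger
    # signals of engagement than likes.
    return post.likes + 3 * post.shares + 5 * post.comments

def curate(feed: list[Post], limit: int = 10) -> list[Post]:
    """Return the most 'engaging' posts first; truth plays no part in the ordering."""
    return sorted(feed, key=engagement_score, reverse=True)[:limit]
```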

Thus, all these qualities, cumulatively, have an impact upon our “relationship” with and expectations of information, and in turn influence behaviour. One form of behaviour is what may be called the online disinhibition effect. This inevitably leads to a consideration of the contentious issue of the effect that new technologies have upon the way that we think. It is suggested that the issue is not so much one of the neuroplasticity argument advanced by Susan Greenfield[4] or the “dumbing down” of attention spans suggested by Nicholas Carr[5] but a slightly more nuanced view of the way that the medium and the various delivery systems redefine the use of information which informs the decisions that we make.[6]

Paradigmatically different ways of information communication and acquisition are going to change the way in which we use and respond to information. And we must recognise that change has happened, that some of our preconceived notions about information and its reliability must change, and that we must adapt our approaches. It is no good trying to hold on to past standards regarding information. They have morphed as a result of the new communications paradigm. It will be interesting to see how the proposed Content Regulatory System Review develops. The target is content, described by McLuhan as akin to  “the juicy piece of meat carried by the burglar to distract the watchdog of the mind” whereas the true target of the review should be the medium and the way that it is changing attitudes to content.

This rather lengthy discussion of the underlying nature of communications systems in the Digital Paradigm is really an introduction to a comment on a piece by Dr Jarrod Gilbert which appeared in the NZ Herald on 23 August.

The article deals with some of the more bizarre manifestations of behaviour and information that seem to beset us. Dr Gilbert acknowledges that this sort of thing is not new but that what is new is the ability for such views to spread quickly and widely – like a contagion, as he put it, phraseology that would seem to be apt in these plague-ridden times – but he then lays the responsibility for this at the feet of social media. Social media, he says, provides the oxygen. He then proceeds to look for the spark which, if I read him correctly, he attributes to disinformation.

We have to be careful with this word because it can get confused with its close cousin “misinformation”. Just to recap: I discussed misinformation in an earlier post, where the Infodemic Report defined it as “false information that people didn’t create with the intention to hurt others”. Disinformation, in the same report, has an element of malevolence to it – it is defined as false information created with the intention of harming a person, group, or organisation, or even a country.

The definition in the Oxford English Dictionary is less threatening than that appearing in the report, but the dissemination of deliberately false information is common to both. The OED defines disinformation as:

“The dissemination of deliberately false information, esp. when supplied by a government or its agent to a foreign power or to the media, with the intention of influencing the policies or opinions of those who receive it; false information so supplied.”

Dr Gilbert then goes on to consider how bizarre ideas disseminated on social media spread so easily. One aspect is the authoritarian explainer personality whose commentary has an aspect of credibility even though there may be no basis for it. Another is the personality drawn to paranormal thinking or conspiracy theories. Once one conspiracy is believed it becomes easy to believe others.

Having considered the human element and the gullibility of audiences, Dr Gilbert turns his attention to social media, and there is no doubt that the use of algorithms, as I have discussed above, enhances engagement, which is an essential aspect of the business model of many social media platforms. The association of disinformation and social media is well known and deserves to be highlighted although, as I later suggest in this post, there is a sinister aspect to this within the context of an “authorized truth.” Another feature of social media is that it is not generally viewed as a trusted source of information. In a recent survey two thirds of those questioned expressed low trust in social media. So those about whom Dr Gilbert complains are in a minority and probably prefer the echo chamber that social media affords.

But are social media platforms the problem? I suggest that to say so is to look for the low-hanging fruit. The problem is far more nuanced and complex than that. If we look at the underlying properties of the medium we find that it is user participation and exponential dissemination which enable the spread of ideas, rather than “social media” as such. These inherent qualities of digital communications systems would exist even without social media. It is just that social media have managed to “piggy-back” on these characteristics in developing business models.

As the title of this post suggests, with appropriate paraphrasing, the real fault is not with social media but with ourselves. The problems of misinformation and disinformation are not technical issues but are human issues – behavioural issues. It may well be, as I suggest, that behaviours have been modified by the properties of digital communications systems. But in many respects those systems are passive purveyors rather than active influencers. People are influencers, utilizing the enhanced communications opportunities provided by digital systems.


[1] Elizabeth Eisenstein The Printing Press as an Agent of Change (Cambridge University Press, Cambridge, 1979) 2 vols. Reference will be made to the one-volume 1980 edition; Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (Cambridge University Press (Canto), Cambridge, 1993).

[2] Eisenstein identified six for print.

[3] I have discussed the qualities or affordances of digital technologies in more detail in my book Collisions in the Digital Paradigm: Law and Rulemaking in the Internet Age (Hart Publishing, Oxford, 2017). The qualities that I identify (and which are summarized above) are:

Environmental Qualities:

                Continuing disruptive change

                Permissionless Innovation

Technical Qualities

                Delinearisation of information

                Information persistence or Endurance

                Dynamic Information

                Volume and capacity

                Exponential dissemination

                The non-coherence of digital information

                Format obsolescence

User Associated Qualities

                Availability, Searchability and Retrievability of Information

                Participation and interactivity

[4] Susan Greenfield “Modern Technology is Changing the Way our Brains Work, Says Neuroscientist” Mail Online, Science and Technology 15 May 2010 http://www.dailymail.co.uk/sciencetech/article-565207/Modern-technology-changing-way-brains-work-says-neuroscientist.html (last accessed 25 July 2016)

[5] Nicholas Carr The Shallows: How the Internet is changing the way we think, read and remember (Atlantic Books, London 2010); Nicholas Carr “Is Google Making Us Stupid: What the Internet is doing to our brains” Atlantic July/August 2008 On line edition http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ (last accessed 25 July 2016)

[6] For a counter argument to that advanced by Greenfield and Carr see Aleks Krotoski Untangling the Web: What the Internet is doing to you (Faber, London, 2013) especially at pp.35 – 36. For a deeper discussion see Chapter 2 under the heading “The Internet and How we Think.”

[7] New Zealand Bill of Rights Act 1990 section 14.

Do Social Network Providers Require (Further?) Regulation – A Commentary

This is a review and commentary of the Sir Henry Brooke Student Essay Prize winning essay for 2019. The title of the essay topic was “Do Social Network Providers Require (Further?) Regulation?”

Sir Henry Brooke was a Court of Appeal judge in England. He became a tireless campaigner during retirement on issues including access to justice. His post-judicial renown owed much to his enthusiastic adoption of digital technology, although he had spearheaded early initiatives for technology in the courts and was the first Chair of the British and Irish Legal Information Institute (BAILII) – a website that provides access to English and Irish case and statute law. Upon his retirement many came to know of him through his blog and tweets. He drafted significant sections of the Bach Commission’s final report on access to justice, and also acted as patron to a number of justice organisations including the Public Law Project, Harrow Law Centre and Prisoners Abroad.

The SCL (Society for Computers and Law) Sir Henry Brooke Student Essay Prize honours his legacy. For 2019 the designated essay question was “Do social network providers require (further?) regulation?”, to be answered in 2,000 – 2,500 words. The winner was Robert Lewis from the University of Law. His essay considers some of the regulatory responses to social media. His starting point is the events of 15 March 2019 in Christchurch.

The first point that he makes is that

“(h)orrors such as Christchurch should be treated cautiously: they often lead to thoughtless or reflexive responses on the part of the public and politicians alike.”

One of his concerns is the possibility of regulation by outrage, given the apparent lack of accountability of social networking platforms.

He then goes on to examine some examples of legislative and legal responses following 15 March and demonstrates the problem with reflexive responses. He starts with the classification of the livestream footage and the manifesto posted by the alleged shooter. He refers to a warning by the Department of Internal Affairs that those in possession of the material should delete it.

He then examines some of the deeper ramifications of the decision. Classification instantly rendered any New Zealander with the video still in his computer’s memory cache, or in any of his social media streams, knowingly or not, potentially guilty of a criminal offence under s.131 of the Films, Videos, and Publications Classification Act 1993. He comments:

“Viewing extracts of the footage shown on such websites was now illegal in New Zealand, as was the failure to have adequately wiped your hard drive having viewed the footage prior to its classification. A significant proportion of the country’s population was, in effect, presented with a choice: collective self-censorship or criminality.”

Whilst he concedes that the decision may have been an example of civic responsibility, in his opinion it did not make good law. Mr. Lewis points out that the legislation was enacted in 1993 just as the Internet was going commercial. His view is that the law targets film producers, publishers and commercial distributors, pointing out that

“these corporate entities have largely been supplanted by the social network providers who enjoy broad exemptions from the law, which has instead been inverted to criminalise “end users”, namely the public which the law once served to protect.”

He also makes observations about the maximum penalties, which are minimal when set against the revenue generated by social media platforms.

He then turns his attention to the arrest of a 22-year-old man charged with sharing the objectionable video online. He comments

“that faced with mass public illegality, and a global corporation with minimal liability, New Zealand authorities may have sought to make an example of a single individual. Again, this cannot be good law.”

Mr Lewis uses this as a springboard for a discussion of the “safe harbor” provisions of the Communications Decency Act (US) and EU Directive 2000/31/EC, which created the “safe harbour” protections for network providers in respect of content published or distributed on their platforms.

Mr Lewis gives a telling example of some of the difficulties that arise from the actions of social media platforms in releasing state secrets, and from the use of that released information as evidence in unrelated cases. He observes:

“The regulatory void occupied by social network providers neatly mirrors another black hole in Britain’s legal system: that of anti-terrorism and state security. The social network providers can be understood as part of the state security apparatus, enjoying similar privileges, and shrouded in the same secrecy. The scale of their complicity in data interception and collection is unknown, as is the scale and level of the online surveillance this apparatus currently performs. The courts have declared its methods unlawful on more than one occasion and may well do so again.”

A theme that becomes clear from his subsequent discussion is that the current situation with apparently unregulated social media networks is evidence of a collision between law designed for a pre-digital environment and expectations about how the law should apply in the digital paradigm. For example, he observes that

“The newspapers bear legal responsibility for their content. British television broadcasters are even under a duty of impartiality and accuracy. In contrast, social network providers are under no such obligations. The recent US Presidential election illustrates how invidious this is.”

He also takes a tilt at those who describe the Internet as “the Wild West”.

“This is an unfortunate phrase. The “wild west” was lawless: the lands of the American west, prior to their legal annexation by the United States, were without legal systems, and any pre-annexation approximation of one was illegal in and of itself. In contrast, the social network providers reside in highly developed, and highly regulated, economies where they are exempted from certain legal responsibilities. These providers have achieved enormous concentrations of capital and political influence for precisely this reason.”

He concludes with the observation that unlawful behaviour arises from a failure to apply the law as it exists and ends with a challenge:

“ In England, this application – of a millennium-old common law tradition to a modern internet phenomenon such as the social networks – is the true task of the technology lawyer. The alternative is the status quo, a situation where the online publishing industry has convinced lawmakers “that its capacity to distribute harmful material is so vast that it cannot be held responsible for the consequences of its own business model.””

The problem that I have with this essay is that it identifies a number of difficulties but, apart from suggesting that the solution lies in the hands of technology lawyers, offers no coherent solution. It cites examples of outdated laws, of the difficulty of retroactive solutions and of the mixed blessings and problems accompanying social media platforms. The question really is whether the benefits that these new communications platforms provide outweigh their disadvantages. There are a number of factors which should be considered.

First, we must recognize that in essence social media platforms enhance and enable communication and the free exchange of ideas – albeit that they may be banal, maudlin or trivial – which is a value of the democratic tradition.

Secondly, we must recognize and should not resent the fact that social media platforms are able to monetise the mere presence of users of the service. This seems to be done in a number of what may appear to be arcane ways, but they reflect the basic concept of what Robert A. Heinlein called TANSTAAFL – there ain’t no such thing as a free lunch. Users should not expect a service provided by others to be absolutely free.

Thirdly, we must put aside doctrinaire criticisms of social media platforms as overwhelming big businesses that have global reach. Doing business on the Internet per se involves being in a business with global reach. The Internet extends beyond our traditional Westphalian concepts of borders, sovereignty and jurisdiction.

Fourthly, we must recognize that the Digital Paradigm by its very nature has within it various aspects – I have referred to them elsewhere as properties – that challenge and contradict many of our earlier pre-digital expectations of information and services. In this respect many of our rules which have a basis in underlying qualities of earlier paradigms and the values attaching to them are not fit for purpose. But does this mean that we adapt those rules to the new paradigm and import the values (possibly no longer relevant) underpinning them or should we start all over with a blank slate?

Fifthly, we must recognize that two of the realities in digital communications have been permissionless innovation – a concept that allows a developer to bolt an application on to the backbone – and associated with that innovation, continuous disruptive change.

These are two of the properties I have mentioned above. What we must understand is that if we start to interfere with, say, permissionless innovation and tie the Internet up with red tape, we may be, if not destroying, then seriously inhibiting the further development of this communications medium. This solution would, of course, be attractive to totalitarian regimes that do not share democratic values such as freedom of expression.

Sixthly, we have to accept that disruptive change in communications methods, behaviours and values is a reality. Although it may be comfortable to yearn for a nostalgic but non-existent pre-digital Golden Age, by the time such yearning becomes expressed it is already too late. If we drive focused upon the rear-view mirror we are not going to recognize the changes on the road ahead. Thus, the reality of modern communications is that ideas to which we may not have been exposed by monolithic mainstream media are now being made available. Extreme views, which in another paradigm may have been expressed within a small coterie, are now accessible to all who wish to read or see them. This may be an uncomfortable outcome for many but it does not mean that these views have only just begun to be expressed. They have been around for some time. It is just that the property of exponential dissemination means that these views are now available. And because of the nature of the Internet, many of these views may not in any event be available to all or even searchable, located, as many of them are, away from the gaze of search engines on the Dark Web.

Seventhly, it is only once we understand not only the superficial content layer but also the deeper implications of the digital paradigm – McLuhan expressed it as “the medium is the message” – that we can begin to develop any regulatory strategies that we need to develop.

Eighthly, in developing regulatory strategies we must ask ourselves whether they are NECESSARY. What evil are the policies meant to address? As I have suggested above, the fact that a few social media and digital platforms are multi-national organisations with revenue streams that are greater than the GDP of a small country is not a sufficient basis for regulation per se – unless the regulating authority wishes to maintain its particular power base. But then, who is to say that Westphalian sovereignty has not had its day? Furthermore, it is my clear view that any regulatory activity must be the minimum that is required to address the particular evil. And care must be taken to avoid the “unintended consequences” to which Mr Lewis has referred and some of which I have mentioned above.

Finally, we are faced with an almost insoluble problem when it comes to regulation in the Digital Paradigm. It is this. The legislative and regulatory process is slow, although the changes to New Zealand’s firearms legislation post 15 March could be said to have been done with unusual haste. The effect has been that the actions of one person have resulted in relieving a large percentage of the population of their lawfully acquired property. Normally the pace of legislative or regulatory change is slow, deliberative and time consuming.

On the other hand, change in the digital paradigm is extremely fast. For example, when I started my PhD thesis in 2004 I contemplated doing something about digital technologies. As it happens I didn’t and looked at the printing press instead. But by the time my PhD was conferred, social media had happened. And now legislators are looking at social media as if it were new, but by Internet standards it is a mature player. The next big thing is already happening, and by the time we have finally worked out what we are going to do about social media, artificial intelligence will be demanding attention. And by the time legislators get their heads around THAT technology in all its multiple permutations, something else – perhaps quantum computing – will be with us.

I am not saying therefore that regulating social media should be put in the “too hard” basket but that what regulation there is going to be must be focused, targeted, necessary, limited to a particular evil and done with a full understanding of the implications of the proposed regulatory structures.

Misunderstanding the Internet

 

I heard an interesting interview on the radio on Saturday last. Kim Hill was interviewing Jonathan Taplin. Taplin has written a book entitled Move Fast and Break Things about the Internet and what is currently wrong with it.

First, a confession. I haven’t read Move Fast and Break Things. What I know about Mr Taplin’s views is what I heard him say on the radio and a report of the interview on the RadioNZ website, so what I have to say is based on the broadcast rather than a reading of his book. But it does sound to me that Mr Taplin occupies a space along with a number of others disenchanted by the Digital Paradigm, including Andrew Keen who wrote The Internet is Not the Answer, Nicholas Carr who wrote The Shallows and Mary Aiken who wrote The Cyber Effect. A common theme among these writers seems to be that for one reason or another the Internet has lost its way, failed to fulfil its promise or been hi-jacked. This last view is that expressed by Mr Taplin.

I don’t have a problem if that is what he thinks. But I do have a problem with some of his assertions of fact which simply do not stand up to scrutiny. Mr Taplin seems to engage in sweeping generalisations to support his position and then argues from that point. In other cases he misinterprets facts in a way that cannot be supported. But his main problem is that he fails to understand the nature of paradigmatic change and that in such an environment things are not going to remain the same, and old models, ways of doing things, concepts and values are either going to be swept away or are gradually going to be eroded and replaced with something else.

Let us look at some of the early assertions that he made in the broadcast. He claims that the Internet originated as the “hipster” project of a group of people who wanted to decentralise control. “Stewart Brand (author of The Whole Earth Catalog, a book which anticipated the internet) was Ken Kesey’s partner in the acid tests, Steve Jobs acknowledges taking LSD. It was a bunch of hippies” – or so Mr Taplin asserts.

Anyone who has studied the history of the Internet will agree that decentralisation was one of the early goals of the development of the network that later became the Internet, originally undertaken by DARPA – the Defense Advanced Research Projects Agency, an agency of the US Department of Defense. DARPA supported the evolution of the ARPANET (the first wide-area packet switching network), Packet Radio Network, Packet Satellite Network and ultimately the Internet, as well as research in the artificial intelligence fields of speech recognition and signal processing. Hardly a bunch of hippies. And were Brand, Kesey and Jobs involved in this early development? No, they were not. Jobs’ involvement with the Internet came much later. In 1985 he suggested that the most compelling reason for most people to buy a computer would be to link it to a nationwide communications network. But it wasn’t until 1996 that he predicted the ubiquity of the Web. In 1996 Google was still a research project at Stanford and Amazon had only just begun selling books.

What Mr Taplin conveniently ignores is the enormous contribution made by computer engineers and developers to the development of the Internet – people like Vint Cerf and Bob Kahn, Ray Tomlinson who developed email – although that is contested by Shiva Ayyadurai – Jon Postel, Ted Nelson, Tim Berners-Lee and Robert Cailliau.

Rather, he focussed upon the high-profile and very successful entrepreneurs like Peter Thiel, Larry Page and Jeff Bezos. He suggested that they “all are libertarians. They were schooled on Ayn Rand’s work, in which the businessman hero architect is always impeded by the mob, by democracy, by government, by regulation, and he has to be free.”

My reading of Rand would suggest that there are aspects of libertarianism that are inconsistent with her objectivist views. In fact Ayn Rand has become a whipping girl for those who would condemn the forge-ahead entrepreneurial spirit untroubled by regulatory systems or collectivist thinking. True, Rand has had an influence on the right and upon libertarianism, although some of her views were atypical of right-wing conservative thought. For example she was pro-choice and an atheist. But Mr Taplin throws Ayn Rand into the mix for pejorative rather than evidential value.

Another interesting comment that Mr Taplin made had to do with data. Here is what the report from RadioNZ said:

“The core business of Facebook is creating a giant database of information on 2 billion individual people, says Taplin.

“What is the raw material to manufacture a product? You – your desires. You’re willing to leave everything hanging out there and they’re willing to scrape it and sell it to advertisers. It’s called rent. They’re renting [Facebook’s] database.”

That is a degenerate form of capitalism if it’s capitalism at all, he says.

“It doesn’t create anything, you’re renting. That’s the end of capitalism and the beginning of feudalism.”

And that indeed was how it came across on the broadcast. The problem is that Mr Taplin fails to understand the nature of the Digital Paradigm and how it disrupts current business models. He suggests that the user is the raw material – based upon data that has been left behind. I disagree. The data is the raw material of the new digital product, and indeed it does create something – a more thoroughly refined and granular understanding of the nature of markets. Raw materials are necessary for any product. It is just that the raw material now is data in digital format.

What distinguishes digital data from iron ore (another raw material) is that iron ore is sold by the mining company to the refinery or smelter. Iron ore is like any other traditional form of property. You own it by, among other things, exclusive possession. You sell it and by doing so part with exclusive possession. That vests it in someone else.

Now with digital material you can part with possession of a copy but retain the original. The Digital Paradigm turns the traditional property model on its head. Two people can possess the same item of property. And it is here that the “rent” argument advanced by Mr Taplin falls apart. The rent argument only works if there is one instantiation of the property. The “owner” leases the property – be it land or a car – to the tenant or lessee. The owner parts with possession for a period of time. At a later stage the owner retakes possession – when the tenancy or lease comes to an end. But the owner, during the term of the rental, does not have possession of the property.

Remember what I said about digital property – two people can possess the same item. That concept is part of the disruptive effect that the Digital Paradigm has on property concepts. Now to say that data is “rented” is using a concept that does not hold up in the Digital Paradigm. To equate renting data with a form of feudalism – which was based upon an exchange of an interest in land for the rendering of a duty – is historically and legally incorrect. And to say that using data does not create anything ignores the fact that data is the raw material – not the individual – and the data goes to creating a profile for any one of a number of purposes of which market research may be one.
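The distinction can be shown in a few lines of code. This is a toy illustration of my own, assuming nothing about how any platform actually stores or trades data: transferring a physical chattel removes it from the original holder, whereas handing over digital data hands over a copy and the original stays exactly where it was.

```python
import copy

# Physical transfer: exclusive possession passes to the buyer.
warehouse = ["iron ore shipment"]
buyer_holdings = [warehouse.pop()]       # the seller no longer has the ore
assert warehouse == []

# Digital "transfer": the recipient receives a copy; the original persists.
platform_data = {"profile": {"age": 42, "interests": ["sailing"]}}
advertiser_copy = copy.deepcopy(platform_data["profile"])
assert platform_data["profile"] == advertiser_copy   # both parties now hold the data
```

That, rather than any notion of “rent”, is why two parties can simultaneously possess the same item of digital property.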

So Mr Taplin’s analogy – like so many attempts to draw analogies between the digital and pre-digital world – fails.

But there is a bigger picture in that paradigm shifts bring paradigmatic change. The Internet and all those myriad platforms that are bolted on to the backbone have revolutionised communication and have opened up a market for digital products. But the content that the Internet enables is only a part of the story.

To understand the nature of the paradigm we need to look below the content layer and comprehend the medium. For, as McLuhan said, the medium is the message. I am sure Mr. Taplin understands this. But what I think he has difficulty in accepting is that the old ways of doing things are going to be swept away. There will be a period of co-existence of the digital and the pre-digital but that won’t last long. The paradigmatically different properties of exponential dissemination, dynamic information, information persistence, permissionless innovation and continuing disruptive change are all factors built in to the technology and cannot be changed. At the risk of sounding deterministic these and other underlying technological qualities are what will drive the inevitability of change.

The music market with which Mr Taplin was familiar has changed dramatically, and part of the problems suffered by the industry and those associated with it involved an unwillingness to adapt. iTunes got the idea and now people buy by the song rather than by the album. Adaptation by content providers means that Netflix thrives – despite geoblocking – on-demand has replaced appointment viewing and content providers have finally “got it” that consumer demand is for content now – not next week. Hence “Game of Thrones” and “Walking Dead” are advertised in New Zealand as screening on the same day as in the US. The reason for this – the Digital Paradigm provided alternatives – piracy and BitTorrent.

The reality is that many old business models will have to adapt to survive. Those that do not will fall by the wayside. The new paradigm will usher in new industries and new opportunities. But in the Digital Paradigm, business will be done on a global scale rather than from a local storefront. And the result of that scale is that many new digital businesses will do very well, such as Google, Facebook and Amazon. Mr Taplin laments the advantages that these companies have, and that their power is unaffected by who is in government. But should successful businesses be a matter of concern? For sure, conspiracy theories will abound; the spectre of rampant capitalism will be conjured up. But isn’t this just envy speaking?

I really think we should be embracing the opportunities that the new technologies bring and look for ways in which we can enhance our lives in the Digital Paradigm rather than moaning about it. Because it is not going away.

Internet Governance Theory – Collisions in the Digital Paradigm III

 

The various theories on internet regulation can be placed within a taxonomy structure. In the centre is the Internet itself. On one side are the formal theories based on traditional “real world” governance models. These are grounded in traditional concepts of law and territorial authority. Some of these models could well become part of an “uber-model” described as the “polycentric model” – a theory designed to address specific issues in cyberspace. Towards the middle are less formal but nevertheless structured models, largely technical or “code-based” in nature, which exercise a form of control over Internet operation.

 

On the other side are informal theories that emphasise non-traditional or radical models. These models tend to be technically based, private and global in character.

[Image: Internet Governance Models]

 

What I would like to do is briefly outline aspects of each of the models. This will be a very “once over lightly” approach and further detail may be found in Chapter 3 of my text internet.law.nz. This piece also contains some new material on Internet Governance together with some reflections on how traditional sovereign/territorial governance models just won’t work within the context of the Digital Paradigm and the communications medium that is the Internet.

The Formal Theories

The Digital Realists

The “Digital Realist” school has been made famous by Judge Easterbrook’s comment that “there [is] no more a law of cyberspace than there [is] a ‘Law of the Horse.’” Easterbrook summed the theory up in this way:

“When asked to talk about “Property in Cyberspace,” my immediate reaction was, “Isn’t this just the law of the horse?” I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile. And if I did know something about computer networks, all I could do in discussing “Property in Cyberspace” would be to isolate the subject from the rest of the law of intellectual property, making the assessment weaker.

This leads directly to my principal conclusion: Develop a sound law of intellectual property, then apply it to computer networks.”

Easterbrook’s comment is a succinct summary of the general position of the digital realism school: that the internet presents no serious difficulties, so the “rule of law” can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a “cyber-specific” code of law.

Another advocate for the digital realist position is Jack Goldsmith. In “Against Cyberanarchy” he argues strongly against those whom he calls “regulation sceptics” who suggest that the state cannot regulate cyberspace transactions. He challenges their opinions and conclusions, arguing that regulation of cyberspace is feasible and legitimate from the perspective of jurisdiction and choice of law — in other words he argues from a traditionalist, conflict of laws standpoint. However, Goldsmith and other digital realists recognise that new technologies will lead to changes in government regulation; but they believe that such regulation will take place within the context of traditional governmental activity.

Goldsmith draws no distinction between actions in the “real” world and actions in “cyberspace” — they both have territorial consequences. If internet users in one jurisdiction upload pornography, facilitate gambling, or take part in other activities that are illegal in another jurisdiction and have effects there then, Goldsmith argues, “The territorial effects rationale for regulating these harms is the same as the rationale for regulating similar harms in the non-internet cases”. The medium that transmitted the harmful effect, he concludes, is irrelevant.

The digital realist school is the most formal of all approaches because it argues that governance of the internet can be satisfactorily achieved by the application of existing “real space” governance structures, principally the law, to cyberspace. This model emphasises the role of law as a key governance device. Additional emphasis is placed on law being national rather than international in scope and deriving from public (legislation, regulation and so on) rather than private (contract, tort and so on) sources. Digital realist theorists admit that the internet will bring change to the law but argue that before the law is cast aside as a governance model it should be given a chance to respond to these changes. They argue that few can predict how legal governance might proceed. Given the law’s long history as society’s foremost governance model and the cost of developing new governance structures, a cautious, formal “wait and see” attitude is championed by digital realists.

The Transnational Model – Governance by International Law

The transnational school, although clearly still a formal governance system, demonstrates a perceptible shift away from the pure formality of digital realism. The two key proponents of the school, Burk and Perritt, suggest that governance of the internet can be best achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. They argue that international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively applied to the global entity of the internet.

The transnationalists suggest that the operation of the internet is likely to promote international legal harmonisation for two reasons.

First, the impact of regulatory arbitrage and the increased importance of the internet for business, especially the intellectual property industry, will lead to a transfer of sovereignty from individual states to international and supranational organisations. These organisations will be charged with ensuring broad harmonisation of information technology law regimes to protect the interests of developed states, lower trans-border costs to reflect the global internet environment, increase opportunities for transnational enforcement and resist the threat of regulatory arbitrage and pirate regimes in less developed states.

Secondly, the internet will help to promote international legal harmonisation through greater availability of legal knowledge and expertise to legal personnel around the world.

The transnational school represents a shift towards a less formal model than digital realism because it is a move away from national to international sources of authority. However, it still clearly belongs to the formalised end of the governance taxonomy on three grounds:

1.    its reliance on law as its principal governance methodology;

2.    the continuing public rather than private character of the authority on which governance rests; and

3.    the fact that although governance is by international law, in the final analysis, this amounts to delegated authority from national sovereign states.

 

National and UN Initiatives – Governance by Governments

This discussion will be a little lengthier because there is some history that serves to illustrate how governments may approach Internet governance.

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and text messages – brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that in order to understand the role of social media during the Arab uprisings one must consider the context of high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events, and during the uprising in Egypt then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

The G8 Meeting in Deauville May 2011

In May 2011 at the G8 meeting in France, President Sarkozy issued a provocative call for stronger Internet regulation. M. Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution. This revolution did not have a flag and M. Sarkozy acknowledged that the Internet belonged to everyone, citing the “Arab Spring” as a positive example. However, he warned the executives of Google, Facebook, Amazon and eBay who were present: “The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

M. Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures who were suspected of having had extramarital affairs, but he did not go as far as M. Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.

However, the Deauville Communique did not go as far as M. Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, the security of networks and a crackdown on trafficking in children for their sexual exploitation. It did not advocate state control of the Internet, but it did stake out a role for governments. The communique stated:

“We discussed new issues such as the Internet which are essential to our societies, economies and growth. For citizens, the Internet is a unique information and education tool, and thus helps to promote freedom, democracy and human rights. The Internet facilitates new forms of business and promotes efficiency, competitiveness, and economic growth. Governments, the private sector, users, and other stakeholders all have a role to play in creating an environment in which the Internet can flourish in a balanced manner. In Deauville in 2011, for the first time at Leaders’ level, we agreed, in the presence of some leaders of the Internet economy, on a number of key principles, including freedom, respect for privacy and intellectual property, multi-stakeholder governance, cyber-security, and protection from crime, that underpin a strong and flourishing Internet. The “e-G8” event held in Paris on 24 and 25 May was a useful contribution to these debates….

The Internet and its future development, fostered by private sector initiatives and investments, require a favourable, transparent, stable and predictable environment, based on the framework and principles referred to above. In this respect, action from all governments is needed through national policies, but also through the promotion of international cooperation……

As we support the multi-stakeholder model of Internet governance, we call upon all stakeholders to contribute to enhanced cooperation within and between all international fora dealing with the governance of the Internet. In this regard, flexibility and transparency have to be maintained in order to adapt to the fast pace of technological and business developments and uses. Governments have a key role to play in this model.

We welcome the meeting of the e-G8 Forum which took place in Paris on 24 and 25 May, on the eve of our Summit and reaffirm our commitment to the kinds of multi-stakeholder efforts that have been essential to the evolution of the Internet economy to date. The innovative format of the e-G8 Forum allowed participation of a number of stakeholders of the Internet in a discussion on fundamental goals and issues for citizens, business, and governments. Its free and fruitful debate is a contribution for all relevant fora on current and future challenges.

We look forward to the forthcoming opportunities to strengthen international cooperation in all these areas, including the Internet Governance Forum scheduled next September in Nairobi and other relevant UN events, the OECD High Level Meeting on “The Internet Economy: Generating Innovation and Growth” scheduled next June in Paris, the London International Cyber Conference scheduled next November, and the Avignon Conference on Copyright scheduled next November, as positive steps in taking this important issue forward.”

The ITU Meeting in Dubai December 2012

The meeting of the International Telecommunications Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol which was one of the important technologies that made the Internet possible, sounded a warning when he said

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union – the ITU – a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai. ….

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, last September, the Shanghai Cooperation Organization – which counts China, Russia, Tajikistan, and Uzbekistan among its members – submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations – a multilateral treaty last revised in 1988 – in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.”

The ITU, originally the International Telegraph Union, is responsible within the United Nations system for issues concerning information and communication technologies. Founded in 1865, it has historically been concerned with technical communications matters such as the standardisation of communications protocols (one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to ICT. It took its present name in 1934 and became a specialised agency of the United Nations in 1947.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs), last revised in 1988, were no longer in keeping with modern developments. The objective of the 2012 meeting was therefore to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012 the Final Acts of the Conference were published. The controversy centred on a proposal, contained in a leaked document prepared by a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates, to redefine the Internet as a system of government-controlled, state-supervised networks. Although the proposal was withdrawn, its governance model had defined the Internet as an:

“international conglomeration of interconnected telecommunication networks,” and that “Internet governance shall be effected through the development and application by governments,” with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance.”

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the internet.” However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate or in some way govern and control the Internet. Although at first glance this may seem to be directed at the content layer, amounting to a rather superficial attempt at censorship of content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet, and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet, or to embark upon an incremental approach to regulation, have resulted in expressions of concern from another Internet pioneer, Sir Tim Berners-Lee, who, in addition to claiming that governments are suppressing online freedom, has issued a call for a Digital Magna Carta.

I have already written on the issue of a Digital Magna Carta or Bill of Rights here.

Clearly the efforts described indicate that some form of Internet governance by national governments, acting alone or collectively, is on the agenda. Already the United Nations has become involved in the development of Internet governance policy with the establishment of the Internet Governance Forum.

The Internet Governance Forum

The Internet Governance Forum describes itself as bringing

“people together from various stakeholder groups as equals, in discussions on public policy issues relating to the Internet. While there is no negotiated outcome, the IGF informs and inspires those with policy-making power in both the public and private sectors.  At their annual meeting delegates discuss, exchange information and share good practices with each other. The IGF facilitates a common understanding of how to maximize Internet opportunities and address risks and challenges that arise.

The IGF is also a space that gives developing countries the same opportunity as wealthier nations to engage in the debate on Internet governance and to facilitate their participation in existing institutions and arrangements. Ultimately, the involvement of all stakeholders, from developed as well as developing countries, is necessary for the future development of the Internet.”

The Internet Governance Forum is an open forum which has no members. It was established by the World Summit on the Information Society in 2006. Since then, it has become the leading global multi-stakeholder forum on public policy issues related to Internet governance.

Its UN mandate gives it convening power and the authority to serve as a neutral space for all actors on an equal footing. As a space for dialogue it can identify issues to be addressed by the international community and shape decisions that will be taken in other forums. The IGF can thereby be useful in shaping the international agenda and in preparing the ground for negotiations and decision-making in other institutions. The IGF has no power of redistribution, and yet it has the power of recognition – the power to identify key issues.

A small Secretariat was set up in Geneva to support the IGF, and the UN Secretary-General appointed a group of advisers, representing all stakeholder groups, to assist him in convening the IGF. The United Nations General Assembly agreed in December 2010 to extend the IGF’s mandate for another five years. The IGF is financed through voluntary contributions.

Zittrain describes the IGF as one of the “diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where ‘stakeholders’ gather to express their views about Internet governance, which is now more fashionably known as ‘the creation of multi-stakeholder regimes’.”

Less Formal Yet Structured

The Engineering and Technical Standards Community

The internet governance models under discussion have in common the involvement of law or legal structures in some shape or form or, in the case of the cyber anarchists, an absence thereof.

Essentially internet governance falls within two major strands:

1.    The narrow strand involving the regulation of technical infrastructure and what makes the internet work.

2.    The broad strand dealing with the regulation of content, transactions and communication systems that use the internet.

The narrow strand, the regulation of internet architecture, recognises that the operation of the internet, and the superintendence of that operation, involve governance structures that lack the institutionalisation that lies behind governance by law.

The history of the development of the internet, although it had its origin with the United States Government, has involved little if any direct government involvement or oversight. The Defense Advanced Research Projects Agency (DARPA) was a funding agency providing money for development. It was not a governing agency, nor was it a regulator. Other agencies such as the Federal Networking Council and the National Science Foundation are not regulators; they are organisations that allow user agencies to communicate with one another. Although the United States Department of Commerce became involved with the internet once its commercial implications became clear, it too has maintained very much of a hands-off approach, and its involvement has primarily been with ICANN, with which the Department has maintained a steady stream of Memoranda of Understanding over the years.

Technical control and superintendence of the internet rests with the network engineers and computer scientists who work out problems and provide solutions for its operation. There is no organisational charter. The structures within which decisions are made are informal, involving a network of interrelated organisations with names which at least give the appearance of legitimacy and authority. These organisations include the Internet Society (ISOC), an independent international non-profit organisation founded in 1992 to provide leadership in internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. The Internet Engineering Task Force (IETF) is a separate legal entity whose mission is to make the internet work better by producing high quality, relevant technical documents that influence the way people design, use and manage the internet.

The Internet Architecture Board (IAB) is an advisory body to ISOC and also a committee of the IETF, with an oversight role. Also housed within ISOC is the IETF Administrative Support Activity (IASA), which is responsible for the fiscal and administrative support of the IETF standards process. The IASA has a committee, the IETF Administrative Oversight Committee (IAOC), which carries out the responsibilities of the IASA in supporting the Internet Engineering Steering Group (IESG) and its working groups, the Internet Architecture Board (IAB), the Internet Research Task Force (IRTF) and its Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has the day-to-day operational responsibility of providing fiscal and administrative support through other activities, contractors and volunteers.

The central hub of these various organisations is the IETF. This organisation has no coercive power, but is responsible for establishing internet standards, some of which, such as TCP/IP, are core standards and are non-optional. The compulsory nature of these standards comes not from any regulatory power, but from the network externalities created by the critical mass of internet users. Standards become economically mandatory, and there is an overall acceptance of the IETF standards which maintain the core functionality of the internet.

A characteristic of the IETF, and indeed of all the technical organisations involved in internet functionality, is an open process that theoretically allows any person to participate. The other characteristic of internet network organisations is the rough consensus by which decisions are made. Proposals are circulated in the form of a Request for Comments to members of the internet engineering and scientific communities, and from this collaborative and consensus-based approach a new standard is agreed.

Given that the operation of the internet involves a technical process, and that the maintenance of that process depends on the activities of scientific and engineering specialists, it is fair to conclude that a considerable amount of responsibility rests with the organisations that set and maintain standards. Many of these organisations have developed considerable power without any formal governmental or regulatory oversight – an issue that may well need to be addressed. Another issue is whether these organisations have a legitimate basis to do what they are doing with such an essential infrastructure as the internet. The objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Its ability to work outside government bureaucracy enables greater efficiency.

However, the internet’s continued operation depends on a number of interrelated organisations which, while operating in an open and transparent manner in a technical collaborative consensus-based model, have little understanding of the public interest ramifications of their decisions. This aspect of internet governance is often overlooked. The technical operation and maintenance of the internet is superintended by organisations that have little or no interactivity with any of the formalised power structures that underlie the various “governance by law” models of internet governance. The “technical model” of internet governance is an anomaly arising not necessarily from the technology, but from its operation.

ICANN

Of those involved in the technical sphere of Internet governance, ICANN is perhaps the best known. Its governance of the “root” or addressing systems makes it a vital player in the Internet governance taxonomy and for that reason requires some detailed consideration.

ICANN, the Internet Corporation for Assigned Names and Numbers, was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a “chokehold” over the internet because it is the only aspect of the entire decentralised, global system of the internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities ICANN is able to choose who is able to access cyberspace and what they will see when they are there. ICANN’s control effectively amounts, in the words of David Post, to “network life or death”. Further, if ICANN chooses to impose conditions on access to the internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there.
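To make concrete what ICANN’s superintendence of the DNS involves at the user’s end, the short sketch below (a minimal illustration only, written in Python using the standard library, with a purely illustrative domain name) shows the everyday operation that depends on the name and address hierarchy ICANN administers: translating a human-readable name into the numeric addresses that the network actually routes.

    # A minimal sketch of the user-end operation that depends on the DNS
    # hierarchy ICANN administers: resolving a name to numeric addresses.
    # Python standard library only; the domain name is illustrative.
    import socket

    def resolve(name):
        """Return the set of IP addresses the DNS currently maps to `name`."""
        results = socket.getaddrinfo(name, None)
        # Each entry ends with a sockaddr tuple whose first element is the address.
        return sorted({entry[4][0] for entry in results})

    if __name__ == "__main__":
        for address in resolve("example.com"):
            print(address)

An authority able to edit or delete an entry at the root of that hierarchy could, as suggested above, determine whether such a lookup succeeds at all.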

The obvious implication for governance theorists is that the ICANN model is not a theory but a practical reality. ICANN is the first indigenous cyberspace governance institution to wield substantive power and demonstrate a real capacity for effective enforcement. Ironically, while other internet governance models have demonstrated a sense of purpose but an acute lack of power, ICANN has suffered from excess power and an acute lack of purpose. ICANN arrived at its present position almost, but not quite, by default and has been struggling to find a meaningful raison d’être since. In addition it is pulled by opposing forces, all anxious to ensure that their vision of the new frontier prevails.

ICANN’s “democratic” model of governance has been attacked as unaccountable, anti-democratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN has been largely unresponsive to these criticisms and it has only been after concerted publicity campaigns by opponents that the board has publicly agreed to change aspects of the process.

A number of key points emerge from ICANN as a governance model:

1.    ICANN demonstrates the internet’s enormous capacity for marshalling global opposition to governance structures that are not favourable to the interests of the broader internet community.

2.    Following on from point one, high profile, centralised institutions such as ICANN make extremely good targets for criticism.

3.    Despite enormous power and support from similarly powerful backers, public opinion continues to prove a highly effective tool, at least in the short run, for stalling the development of unfavourable governance schemes.

4.    ICANN reveals the growing involvement of commercial and governmental interests in the governance of the internet, and their reluctance to be associated with direct governance attempts.

5.    ICANN demonstrates an inability to project its influence beyond its core functions to matters of general policy or governance of the internet.

ICANN lies within the less formal area of the governance taxonomy in that, although it operates with a degree of autonomy, it retains a formal character. Its power is internationally based (and although still derived from the United States government, there is a desire by the US to “de-couple” its involvement with ICANN). It has greater private than public sources of authority, in that its power derives from relationships with registries, ISPs and internet users rather than sovereign states. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes.

The Polycentric Model of Internet Governance

The Polycentric Model embraces, for certain purposes, all of the preceding models. It does not envelop them, but rather employs them for specific governance purposes.

This theory has been developed by Professor Scott Shackelford who, in his article “Toward Cyberpeace: Managing Cyberattacks Through Polycentric Governance”, locates Internet governance within the specific context of cybersecurity and the maintenance of cyberpeace. He contends that the international community must come together to craft a common vision for cybersecurity while the situation remains malleable. Given the difficulties of accomplishing this in the near term, bottom-up governance and dynamic, multilevel regulation should be undertaken consistent with polycentric analysis.

While he sees a role for governments and commercial enterprises he proposes a mixed model. Neither governments nor the private sector should be put in exclusive control of managing cyberspace since this could sacrifice both liberty and innovation on the mantle of security, potentially leading to neither.

The basic notion of polycentric governance is that a group facing a collective action problem should be able to address it in whatever way they see fit, which could include using existing or crafting new governance structures; in other words, the governance regime should facilitate the problem-solving process.

The model demonstrates the benefits of self-organization, networking regulations at multiple levels, and the extent to which national and private control can co-exist with communal management.  A polycentric approach recognizes that diverse organizations and governments working at multiple levels can create policies that increase levels of cooperation and compliance, enhancing flexibility across issues and adaptability over time.

Such an approach, a form of “bottom-up” governance, contrasts with what may be seen as an increasingly state-centric approach to Internet governance and cybersecurity which has become apparent in fora such as the G8 Conference in Deauville in 2011 and the ITU Conference in Dubai in 2012. The approach also recognises that cyberspace has its own qualities or affordances, among them its decentralised nature along with the continuing dynamic change flowing from permissionless innovation. To put it bluntly, it is difficult to foresee the effects of regulatory efforts, which are generally sluggish in development and enactment, with the result that the particular mischief the regulation tried to address has often changed by the time the rules are in force, so that the regulatory system is no longer relevant. Polycentric regulation provides a multi-faceted response to cybersecurity issues in keeping with the complexity of crises that might arise in cyberspace.

So how should the polycentric model work? First, allies should work together to develop a common code of cyber conduct that includes baseline norms, with negotiations continuing on a harmonised global legal framework. Second, governments and critical national infrastructure (CNI) operators should establish proactive, comprehensive cybersecurity policies that meet baseline standards and require hardware and software developers to promote resiliency in their products without going too far and risking balkanisation. Third, the recommendations of technical organisations such as the IETF should be made binding and enforceable when taken up as industry best practices. Fourth, governments and NGOs should continue to participate in UN efforts to promote global cybersecurity, but also form more limited forums to enable faster progress on core issues of common interest. And fifth, training campaigns should be undertaken to share information and educate stakeholders at all levels about the nature and extent of the cyber threat.

Code is Law

Located centrally within the taxonomy, and closely related to the Engineering and Technical Standards category of governance models, is the “code is law” model developed by Harvard Professor Lawrence Lessig and, to a lesser extent, Joel Reidenberg. The school in many ways encompasses the future of the internet governance debate. It demonstrates a balance of opposing formal and informal forces and represents a paradigm shift in the way internet governance is conceived, because the school largely ignores the formal dialectic around which the governance debate is centred and has instead developed a new concept of “governance and the internet”. While Lessig’s work has been favourably received even by his detractors, it is still too early to see whether it is indeed a correct description of the future of internet governance, or merely a dead end. Certainly, it is one of the most discussed concepts of cyberspace jurisprudence.

Lessig asserts that human behaviour is regulated by four “modalities of constraint”: law, social norms, markets and architecture. Each of these modalities influences behaviour in different ways:

1.    law operates via sanction;

2.    markets operate via supply and demand and price;

3.    social norms operate via human interaction; and

4.    architecture operates via the environment.

Governance of behaviour can be achieved by any one or any combination of these four modalities. Law is unique among the modalities in that it can directly influence the others.

Lessig argues that in cyberspace, architecture is the dominant and most effective modality to regulate behaviour. The architecture of cyberspace is “code” — the hardware and software — that creates the environment of the internet. Code is written by code writers; therefore it is code writers, especially those from the dominant software and hardware houses such as Microsoft and AOL, who are best placed to govern the internet. In cyberspace, code is law in the imperative sense of the word. Code determines what users can and cannot do in cyberspace.
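The point can be made concrete with a trivial, hypothetical sketch in Python (the limits below are invented for illustration). Nothing in it appeals to a statute, a social norm or a price; the constraint is simply built into the environment, so that conduct outside it is not punished after the fact but rendered impossible in the first place.

    # A hypothetical illustration of "code is law": the platform's architecture,
    # not a legal rule, determines what a user can and cannot do.
    MAX_POST_LENGTH = 280          # an architectural limit, not a legal one
    MINIMUM_ACCOUNT_AGE_DAYS = 7   # likewise enforced purely by the software

    def submit_post(text, account_age_days):
        """Accept a post only on the terms the code itself lays down."""
        if account_age_days < MINIMUM_ACCOUNT_AGE_DAYS:
            # There is no sanction and no appeal; the act simply cannot occur.
            raise PermissionError("account too new to post")
        # Rather than punishing an over-long post after the event (as law would),
        # the architecture makes it unpostable by truncating it.
        return text[:MAX_POST_LENGTH]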

“Code is law” does not mean a lack of regulation or governmental involvement, although any regulation must be carefully applied. Neil Weinstock Netanel argues that “contrary to the libertarian impulse of first generation cyberspace scholarship, preserving a foundation for individual liberty, both online and off, requires resolute, albeit carefully tailored, government intervention”. On this view, internet architecture and code effectively regulate individual activities and choices in the same way law does, and market actors need to use these regulatory technologies in order to gain a competitive advantage. Thus, it is the role of government to set the limits on private control to facilitate this.

The crux of Lessig’s theory is that law can directly influence code. Governments can regulate code writers and ensure the development of certain forms of code. Effectively, law and those who control it, can determine the nature of the cyberspace environment and thus, indirectly what can be done there. This has already been done. Code is being used to rewrite Copyright Law. Technological Protection Measures (TPMs) allow content owners to regulate the access and/or use to which a consumer may put digital content. Opportunities to exercise fair uses or permitted uses can be limited beyond normal user expectations and beyond what the law previously allowed for analogue content. The provision of content in digital format, the use of TPMs and the added support that legislation gives to protect TPMs effectively allows content owners to determine what limitations they will place upon users’ utilisation of their material. It is possible that the future of copyright lies not in legislation (as it has in the past) but in contract.
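A hypothetical sketch of how a technological protection measure operates may assist (the licence terms below are invented; the point is only that the content owner’s code, rather than copyright law, fixes what the user may do with the work):

    # A hypothetical sketch of a technological protection measure (TPM).
    # Playback is permitted only on the terms coded by the content owner,
    # whatever uses copyright law might otherwise allow.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Licence:
        expires: date
        max_devices: int
        region: str

    def can_play(licence, today, device_count, user_region):
        """The owner's coded conditions, not the law, decide whether the work plays."""
        return (
            today <= licence.expires
            and device_count <= licence.max_devices
            and user_region == licence.region
        )

    # A use the law might well permit is still blocked if the code says no.
    licence = Licence(expires=date(2099, 1, 1), max_devices=2, region="NZ")
    print(can_play(licence, date.today(), device_count=1, user_region="AU"))  # False: wrong region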

 

Informal Models and Aspects of Digital Liberalism

Digital liberalism is not so much a model of internet governance as it is a school of theorists who approach the issue of governance from roughly the same point on the political compass: (neo)-liberalism. Of the models discussed, digital liberalism is the broadest. It encompasses a series of heterogeneous theories that range from the cyber-independence writings of John Perry Barlow at one extreme, to the more reasoned private legal ordering arguments of Froomkin, Post and Johnson at the other. The theorists are united by a common “hands off” approach to the internet and a tendency to respond to governance issues from a moral, rather than a political or legal perspective.

Regulatory Arbitrage – “Governance by whomever users wish to be governed by”

The regulatory arbitrage school represents a shift away from the formal schools, and towards digital liberalism. “Regulatory arbitrage” is a term coined by the school’s principal theorist, Michael Froomkin, to describe a situation in which internet users “migrate” to jurisdictions with regulatory regimes that give them the most favourable treatment. Users are able to engage in regulatory arbitrage by capitalising on the unique geographically neutral nature of the internet. For example, someone seeking pirated software might frequent websites geographically based in a jurisdiction that has a weak intellectual property regime. On the other side of the supply chain, the supplier of gambling services might, despite residing in the United States, deliberately host his or her website out of a jurisdiction that allows gambling and has no reciprocal enforcement arrangements with the United States.

Froomkin suggests that attempts to regulate the internet face immediate difficulties because of the very nature of the entity that is to be controlled. He draws upon the analogy of the mythological Hydra, but whereas the beast was a monster, the internet may be predominantly benign. Froomkin identifies the internet’s resistance to control as being caused by the following two technologies:

1.    The internet is a packet-switching network. This makes it difficult for anyone, including governments, to block or monitor information originating from large numbers of users.

2.    Powerful, military-grade cryptography is available to internet users and can, if used properly, make messages unreadable to anyone but the intended recipient.

As a result of the above, internet users have access to powerful tools which can be used to enable anonymous communication. This is unless, of course, their governments have strict access controls or an extensive monitoring programme, or can persuade their citizens not to use these tools through liability rules or the criminal law.
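Froomkin’s second point can be illustrated with a minimal sketch in Python. It assumes the third-party “cryptography” package is installed; the key and message are, of course, purely illustrative. Only the holder of the key can recover the plaintext; anyone else sees ciphertext that is, for practical purposes, unreadable.

    # A minimal illustration of strong symmetric encryption: without the key,
    # the message is unreadable. Assumes the "cryptography" package is installed.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()                      # shared secretly with the recipient
    ciphertext = Fernet(key).encrypt(b"meet at the agora at noon")

    # An eavesdropper (or a government) holding the wrong key gets nothing useful.
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("without the key: unreadable")

    # The intended recipient, holding the key, recovers the message.
    print(Fernet(key).decrypt(ciphertext).decode())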

Froomkin’s theory is principally informal in character. Private users, rather than public institutions, are responsible for choosing the governance regime they adhere to. The mechanism that allows this choice is technical and works in opposition to legally based models. Finally, the model is effectively global, as users choose from a world of possibilities in deciding which particular regime(s) to submit to, rather than being bound to a single national regime. The model is thus undeniably informal.

Unlike digital liberalists who advocate a separate internet jurisdiction encompassing a multitude of autonomous self-regulating regimes within that jurisdiction, Froomkin argues that the principal governance unit of the internet will remain the nation-state. He argues that users will be free to choose from the regimes of states rather than be bound to a single state, but does not yet advocate the electronic federalism model of digital liberalism.

Digital Libertarianism – Johnson and Post

Digital liberalism is the oldest of the internet governance models and represents the original response to the question: “How will the internet be governed?” Digital liberalism developed in the early 1990s as the internet began to show the first inklings of its future potential. The development of a Graphical User Interface together with web browsers such as Mosaic made the web accessible to the general public for the first time. Escalating global connectivity and a lack of understanding or reaction by world governments contributed to a sense of euphoria and digital freedom that was reflected in the development of digital liberalism.

In its early years digital liberalism evolved around the core belief that “the internet cannot be controlled” and that consequently “governance” was a dead issue. By the mid-1990s advances in technology and the first government attempts to control the internet saw this descriptive claim gradually give way to a competing normative claim that “the internet can be controlled but it should not be”. These claims are represented as the sub-schools of digital liberalism — cyberanarchism and digital libertarianism.

In “And How Shall the Net be Governed?” David Johnson and David Post posed the following questions:

Now that lots of people use (and plan to use) the internet, many — governments, businesses, techies, users and system operators (the “sysops” who control ID issuance and the servers that hold files) — are asking how we will be able to:

(1)   establish and enforce baseline rules of conduct that facilitate reliable communications and trustworthy commerce; and

(2)   define, punish and prevent wrongful actions that trash the electronic commons or impose harm on others.

In other words, how will cyberspace be governed, and by what right?

Post and Johnson point out that one of the advantages of the internet is its chaotic and ungoverned nature. As to the question of whether the net must be governed at all, they use the example of the three-Judge Federal Court in Philadelphia that “threw out the Communications Decency Act on First Amendment grounds [and] seemed thrilled by the ‘chaotic’ and seemingly ungovernable character of the net”. Post and Johnson argue that because of its decentralised architecture and lack of a centralised rule-making authority the net has been able to prosper. They assert that the freedom the internet allows and encourages has meant that sysops have been free to impose their own rules on users. However, the ability of the user to choose which sites to visit, and which to avoid, has meant that the tyranny of system operators has been avoided and the adverse effect of any misconduct by individual users has been limited.

 Johnson and Post propose the following four competing models for net governance:

1.    Existing territorial sovereigns seek to extend their jurisdiction and amend their own laws as necessary to attempt to govern all actions on the net that have substantial impacts upon their own citizenry.

2.    Sovereigns enter into multilateral international agreements to establish new and uniform rules specifically applicable to conduct on the net.

3.    A new international organisation can attempt to establish new rules — a new means of enforcing those rules and of holding those who make the rules accountable to appropriate constituencies.

4.    De facto rules may emerge as the result of the interaction of individual decisions by domain name and IP registries (dealing with conditions imposed on possession of an on-line address), by system operators (local rules to be applied, filters to be installed, who can sign on, with which other systems connection will occur) and users (which personal filters will be installed, which systems will be patronised and the like).

The first three models are centralised or semi-centralised systems and the fourth is essentially a self-regulatory and evolving system. In their analysis, Johnson and Post consider all four and conclude that territorial laws applicable to online activities where there is no relevant geographical determinant are unlikely to work, and international treaties to regulate, say, ecommerce are unlikely to be drawn up.

Johnson and Post proposed a variation of the third option — a new international organisation that is similar to a federalist system, termed “net federalism”.

In net federalism, individual network systems rather than territorial sovereignty are the units of governance. Johnson and Post observe that the law of the net has emerged, and can continue to emerge, from the voluntary adherence of large numbers of network administrators to basic rules of law (and dispute resolution systems to adjudicate the inevitable inter-network disputes), with individual users voting with their electronic feet to join the particular systems they find most congenial. Within this model multiple network confederations could emerge. Each may have individual “constitutional” principles — some permitting and some prohibiting, say, anonymous communications, others imposing strict rules regarding redistribution of information and still others allowing freer movement — enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.

Digital liberalism is clearly an informal governance model and for this reason has its attractions for those who enjoyed the free-wheeling approach to the internet in the early 1990s. It advocates almost pure private governance, with public institutions playing a role only insofar as they validate the existence and independence of cyber-based governance processes and institutions. Governance is principally to be achieved by technical solutions rather than legal process and occurs at a global rather than national level. Digital liberalism is very much the antithesis of the digital realist school and has been one of the two driving forces that have characterised the internet governance debate in the last decade.

Cyberanarchism – John Perry Barlow

In 1990, United States federal law enforcement agencies were involved in a number of actions against a perceived “computer security threat” said to be posed by a Texas role-playing game developer named Steve Jackson. Following this, John Perry Barlow and Mitch Kapor formed the Electronic Frontier Foundation. Its mission statement says that it was “established to help civilize the electronic frontier; to make it truly useful and beneficial not just to a technical elite, but to everyone; and to do this in a way which is in keeping with our society’s highest traditions of the free and open flow of information and communication”.

One of Barlow’s significant contributions to thinking on internet regulation was the article “Declaration of the Independence of Cyberspace” which, although idealistic in expression and content, eloquently expresses a point of view held by many regarding efforts to regulate cyberspace. The declaration followed the passage of the Communications Decency Act. In “The Economy of Ideas: Selling Wine without Bottles on the Global Net”, Barlow challenges assumptions about intellectual property in the digital online environment. He suggests that the nature of the internet environment means that different legal norms must apply. While the theory has its attractions, especially for the young and the idealistic, the fact of the matter is that “virtual” actions are grounded in the real world, are capable of being subject to regulation and, subject to jurisdiction, are capable of being subject to sanction. Indeed, we only need to look at the Digital Millennium Copyright Act (US) and the Copyright Amendment (Digital Agenda) Act 2000 (Australia) to gain a glimpse of how, when confronted with reality, Barlow’s theory dissolves.

Regulatory Assumptions

In understanding how regulators approach the control of internet content, one must first understand some of the assumptions that appear to underlie any system of data network regulation.

First and foremost, sovereign states have the right to regulate activity that takes place within their own borders. This right to regulate is moderated by certain international obligations. Of course there are certain difficulties in identifying the exact location of certain actions, but the internet only functions at the direction of the persons who use it. These people live, work, and use the internet while physically located within the territory of a sovereign state and so it is unquestionable that states have the authority to regulate their activities.

A second assumption is that a data network infrastructure is critical to the continued development of national economies. Data networks are a regular business tool like the telephone. The key to the success of data networking infrastructure is its speed, widespread availability, and low cost. If this last point is in doubt, one need only consider that the basic technology of data networking has existed for more than 20 years. The current popularity of data networking, and of the internet generally, can be explained primarily by the radical lowering of costs related to the use of such technology. A slow or expensive internet is no internet at all.

The third assumption is that international trade requires some form of international communication. As more communication takes place in the context of data networking, then continued success in international trade will require sufficient international data network connections.

The fourth assumption is that there is a global market for information. While it is still possible to internalise the entire process of information gathering and synthesis within a single country, this is an extremely costly process. If such expensive systems represent the only source of information available, domestic businesses will be placed at a competitive disadvantage in the global marketplace.

The final assumption is that unpredictability in the application of the law or in the manner in which governments choose to enforce the law will discourage both domestic and international business activity. In fashioning regulations for the internet, it is important that the regulations are made clear and that enforcement policies are communicated in advance so that persons have adequate time to react to changes in the law.

Concluding Thoughts

Governance and the Properties of the Digital Paradigm

Regulating or governing cyberspace faces challenges that lie within the properties or affordances of the Digital Paradigm. To begin with, the territorial sovereignty concepts which have been the basis for most regulatory or governance activity rely on physical and defined geographical realities. By its nature, a communications system like the Internet challenges that model. Although the Digital Realists assert that effectively nothing has changed, and that is true to a limited extent, the governance functions that can be exercised are only applicable to that part of cyberspace that sits within a particular geographical space. Because the Internet is a distributed system it is impossible for any one sovereign state to impose its will upon the entire network. It is for this reason that some nations are setting up their own networks, independent of the Internet. Although the perception is that the Internet is controlled by the US, the reality is that with nationally based “splinternets” sovereigns have a greater ability to assert control over the network, both in terms of the content layer and the various technical layers beneath that make up the medium. The distributed network presents the first challenge to national or territorially based regulatory models.

Of course aspects of sovereign power may be ceded by treaty or by membership of international bodies such as the United Nations. But does, say, the UN have the capacity to impose a worldwide governance system over the Internet? True, it created the IGF, but that organisation has no power and is a multi-stakeholder policy think tank. Any attempt at a global governance model requires international consensus and, as the ITU meeting in Dubai in December 2012 demonstrated, that is not forthcoming at present.

Two other affordances of the Digital Paradigm challenge the establishment of traditional regulatory or governance systems. Those affordances are continuing disruptive change and permissionless innovation. The very nature of the legislative process is measured. Often it involves cobbling together a consensus. All of this takes time, and by the time there is a crystallised proposition the mischief that the regulation is trying to address either no longer exists or has changed or taken another form. The now limited usefulness (and therefore effectiveness) of the provisions of ss 122A – 122P of the New Zealand Copyright Act 1994 demonstrates this proposition. Furthermore, the nature of the legislative process, involving reference to Select Committees and the prioritisation of other legislation within the time available in a Parliamentary session, means that a “swift response” to a problem is very rarely possible.

Permissionless innovation adds to the problem because, as long as it continues, and there is no sign that the inventiveness of the human mind is likely to slow down, developers and software writers will continue to change the digital landscape. This means that the target of a regulatory system may be continually moving, and certainty of law, a necessity in any society that operates under the Rule of Law, may be compromised. Again, the file sharing provisions of the New Zealand Copyright Act provide an example. The definition of file sharing is restricted to a limited number of software applications, most obviously BitTorrent. Workarounds such as virtual private networks and magnet links, along with anonymisation proxies, fall outside the definition. In addition, the definition addresses sharing and does not include a person who downloads but does not share by uploading infringing content.

Associated with disruptive change and permissionless innovation are some other challenges to traditional governance thinking. Participation and interactivity, along with exponential dissemination, emphasise the essentially bottom-up, participatory nature of the Internet ecosystem. Indeed this is reflected in the quality of permissionless innovation, where any coder may launch an app without any regulatory sign-off. The Internet is perhaps the greatest manifestation of democracy that there has been. It is the Agora of Athens on a global scale, a cacophony of comment, much of it trivial, but everyone has the opportunity to speak and potentially to be heard. Spiro Agnew’s “silent majority” need be silent no longer. The events of the Arab Spring showed the way in which the Internet can be used to motivate populaces in the face of oppressive regimes. It seems unlikely that an “undemocratic” regulatory regime could be put in place absent the “consent of the governed”; despite the usual level of apathy in political matters, and given the Internet’s participatory nature, netizens would be unlikely to tolerate such interference.

Perhaps the answer to the issue of Internet governance is already apparent – a combination of Lessig’s “code is law” and the technical standards organisations that actually make the Internet work, such as ISOC, the IETF and ICANN. Much criticism has been levelled at ICANN’s lack of accountability, but in many respects similar issues arise with the IETF and the IAB, dominated as they are by groups of engineers. But in the final analysis, perhaps this is the governance model that is the most suitable. The objective of engineers is to make systems work at the most efficient level. Surely this is the sole objective of any regulatory regime. Furthermore, governance by technicians, if it can be called that, contains safeguards against political, national or regional capture. By all means, local governments may regulate content. But that is not the primary objective of Internet governance. Internet governance addresses the way in which the network operates. And surely that is an engineering issue rather than a political one.

 The Last Word

Perhaps the last word on the general topic of internet regulation should be left to Tsutomu Shimomura, a computational physicist and computer security expert who was responsible for tracking down the hacker Kevin Mitnick, a pursuit he recounted in the excellent book Takedown:

The network of computers known as the internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. That community was based on a shared sense of trust. Today, the electronic walls going up everywhere on the Net are the clearest proof of the loss of that trust and community. It’s a great loss for all of us.

Collisions in the Digital Paradigm II – Recorded Law

 Author’s Note: This post develops further some of the themes that have appeared in earlier posts on this blog. The title reflects a particular post from March of last year which considered the nature of copyright in the digital paradigm. In particular this post develops and expands the analytical model further.

“When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We see the world through a rear-view mirror. We march backwards into the future.”[1]

Introduction

Marshall McLuhan articulated two aphorisms that aptly encapsulate certain realities about the impact of the media of information communication. “The medium is the message”[2] – perhaps his most famous and yet opaque statement – emphasises the importance of understanding the way in which information is communicated. According to McLuhan, we focus upon the message or the content that a medium delivers whilst ignoring the delivery system and its impact. In most cases our expectation of content delivery is shaped by earlier media. We tend to look at new delivery systems through a rear-view mirror and often seek analogies, metaphors or concepts of functional equivalence to explain the new medium that do not truly reflect how it operates and the underlying impact that it might have.

“We become what we behold. We shape our tools and thereafter our tools shape us”[3] is the second aphorism that summarises the impact that new media may have. Having developed the delivery system, we find that our behaviours and activities change. Over time it may be that certain newly developed behaviours become acceptable and thus underlying values that validate those behaviours change. In the case of information delivery tools, our relationships with, expectations and use of information may change.

The point of McLuhan’s first aphorism is that content alone does not cause these modifications. My suggestion is that it is the medium of delivery that governs new information expectations, uses and relationships. How does this happen? One has to properly understand the tool – or in the case of information communication, the medium – to understand the way in which it impacts upon informational behaviours, use and expectations.

In the first part of this paper I shall consider the underlying qualities or properties of new information media. The starting point is to consider the approach of Elizabeth Eisenstein in her study “The Printing Press as an Agent of Change”.[4]  I shall then consider the development of a similar approach to digital communications systems and particularly the Information Technologies of computers and the Internet. It will become clear that this is a complex and at times contradictory process for within the Digital Paradigm there are a number of tensions that result in nuanced conclusions rather than absolutes.

The second part of this paper moves to consider the impact of the Digital Paradigm upon the information matrix that is the Law. I argue that the authoritative basis of the Law lies in the way that the law is communicated, and that many of our assumptions about the certainty of law and its foundations, particularly the doctrine of precedent, have been built upon print technology. I suggest that the Digital Paradigm and an understanding of the new media for communicating legal information present some fundamental challenges to our assumptions about law and may well revolutionise established legal institutions and doctrines. In the course of this discussion I challenge the often advanced and convenient escape route that suggests that what the Digital Paradigm offers is merely content in a different delivery system which may be “functionally equivalent” to that which has gone before. I argue that this escape route is now closed off in light of the fundamentally different manner by which content is delivered in the Digital Space.

Elizabeth Eisenstein and the Qualities of Print

There are some very sound reasons why print has become an essential part of the authoritativeness of recorded legal information. These reasons may be located in the qualities that are associated with print itself – qualities that go below the initial nature of the content, and that differentiated print from the scribal system of recording information. The identification of these qualities has been at the heart of Elizabeth Eisenstein’s examination of the printing press and its impact upon the intellectual activities of literate elites in Early Modern Europe.[5]

Eisenstein’s theory holds that the capacity of printing to preserve knowledge and to allow the accumulation of information fundamentally changed the mentality of early modern readers, with repercussions that transformed Western society.[6] Ancient and Medieval scribes had faced difficulties in preserving the knowledge that they already possessed which, despite their best efforts, inevitably grew more corrupted and fragmented over time.[7] The advent of printed material meant that it was no longer necessary for scholars to seek rare, scattered manuscripts to copy. The focus shifted to the text and the development of new ideas or the development of additional information.

Six qualities of print were identified by Elizabeth Eisenstein that were the enablers underpinning the distribution of content which enhanced the developing Renaissance, that spread Luther’s 95 Theses around Germany in the space of two weeks from the day they were nailed to the Church door at Wittenberg, and that allowed for the wide communication of scientific information which enabled experiment, comment, development and what we now know as the Scientific Revolution. Within 300 years of the introduction of the printing press by Gutenberg, the oral-memorial, custom-based, ever-changing law had to be recorded in a book for it to exist.

It would be fair to remark that Eisenstein’s approach was and still is contentious.[8] But what is important is her identification of the paradigmatic differences between the scribal and print cultures based upon the properties or qualities of the new technology. These qualities were responsible for the shift in the way that intellectuals and scholars approached information.

The six features or qualities of print that significantly differentiated the new technology from scribal texts identified by Eisenstein are as follows:

a) dissemination

b) standardisation

c) reorganization

d) data collection

e) fixity and preservation

f) amplification and reinforcement.[9]

Some of these features had an impact, to a greater or lesser degree, upon communication structures within the law.

Dissemination of information was increased by printed texts not solely by volume but by way of availability, dispersal to different locations and cost. For example, dissemination allowed a greater spread of legal material to diverse locations, bringing legal information to a wider audience. The impact upon the accessibility of knowledge was enhanced by the greater availability of texts and, in time, by the development of clearer and more accessible typefaces.[10]

Standardisation of texts, although not as it is understood by modern scholars, was enabled by print. Every text from a print run had an identical or standardised content.[11] Every copy had identical pagination and layout, along with identical information about the publisher and the date of publication. Standardised content allowed for a standardised discourse.[12] In the scribal process errors could be perpetuated by copying, and frequently in the course of that process additional ones occurred. However, the omission of one word by a compositor was a “standardised” error that did not occur in the scribal culture but that had a different impact[13] and could be “cured” by the insertion of an “errata” note before the book was sold.[14] Yet standardisation itself was not an absolute, and the printing of “errata” was not the complete answer to the problem of error. Interaction on the part of the reader was required to insert the “errata” at the correct place in the text.[15]

In certain cases print could not only perpetuate error but it could be used actively to mislead or disseminate falsehood. The doubtful provenance of The Compleate Copyholder attributed to Sir Edward Coke is an example.[16] Standardisation, as a quality of print identified by Eisenstein, must be viewed in light of these qualifications.

Print allowed greater flexibility in the organization and reorganization of material and its presentation. Material was able to be better ordered using print than in manuscript codices. Innovations such as tables, catalogues, indices and cross-referencing material within the text were characteristics of print. Such ordering of material was seized enthusiastically upon by jurists and law printers.[17]

Print provided an ability to access improved or updated editions with greater ease than in the scribal milieu by the collection, exchange and circulation of data among users, along with the error trapping to which reference has been made. This is not to say that print contained fewer errors than manuscripts.[18]  Print accelerated the error making process that was present in the scribal culture. At the same time dissemination made the errors more obvious as they were observed by more readers. Print created networks of correspondents and solicited criticism of each edition.[19] The ability to set up a system of error-trapping, albeit informal, along with corrections in subsequent editions was a significant advantage attributed to print by the philosopher, David Hume, who commented that “The Power which Printing gives us of continually improving and correcting our Works in successive editions appears to me the chief advantage of that art.”[20]

Fixity and preservation are connected with standardisation. Fixity sets a text in place and time. Preservation, especially as a result of large volumes, allows the subsequent availability of that information to a wide audience. Any written record does this, but the volume of material available and the ability to disseminate enhanced the existing properties of the written record. For the lawyer, the property of fixity had a significant impact.

Fixity and the preservative power of print enabled legal edicts to become more available and more irrevocable. In the scribal period Magna Carta was published (proclaimed) bi-annually in every shire. However, by 1237 there was confusion as to which version of the “Charter” was involved. In 1533, by looking at the “Tabula” of Rastell’s Grete Abregement of the Statutys[21] a reader could see how often it had been confirmed in successive Royal statutes. It could no longer be said that the signing of a proclamation or decree was following “immemorial custom”. The printed version fixed “custom” in place and time. In the same way, a printed document could be referred to in the future as providing evidence of an example which a subsequent ruler or judge could adopt and follow. As precedents increased in permanence, the more difficult it was to vary an established “custom”. Thus fixity or preservation may describe a quality inherent in print as well as a further intellectual element that print imposed by its presence.

Although Eisenstein’s work was directed more towards the changing intellectual environment and activity that followed the advent of printing and printed materials, it should not be assumed that printing impacted only upon intellectual elites. Sixteenth and seventeenth century individuals were not as ignorant of their letters as may be thought. There are two aspects of literacy that must be considered. One is the ability to write; the other being the ability to read. Reading was taught before writing and it is likely that more people could read a broadside ballad than could sign their names. Writing was taught to those who remained in school from the ages of seven or eight, whereas reading was taught to those who attended up until the age of six and then were removed from school to join the labour force.[22] Proclamation of laws in print was therefore within the reach of a reasonable proportion of the population.[23]

Although the features that I have discussed did not impact upon legal doctrine immediately, recent research suggests that a preference for printed “authorities” was not slow in developing.[24]

Eisenstein’s identification of qualities builds upon McLuhan’s aphorisms. The identification of the qualities of the medium (print) goes below content, examines factors inherent within the medium and identifies fundamental differences between the new medium (print) and the old (scribal production of texts). But importantly the identification of the qualities goes further than merely differentiating a new medium of communication from an old one. It identifies factors inherent in the medium that change attitudes towards information, the way it can be used and the user’s expectations of information. Fixity and standardisation allowed for the development and acceptance of text in print as authoritative, which would be a significant factor for the development of law and legal doctrines – an example of McLuhan’s second aphorism.

Thus, Eisenstein’s theory recognises that media work on two levels.  The first is that a medium is a technology that enables communication and possesses certain qualities; the tools that we have to access media content are the associated delivery technologies.

The second level is that a medium has an associated set of protocols or social and cultural practices – including the values associated with information – that have grown up around the technology. Thus the delivery system is the technology, containing certain qualities, that gives rise to the second level, which generates and dictates behaviour.[25]

Eisenstein’s argument is that when we go beneath the content that the system delivers and look at the qualities or the properties of a new information technology, we are considering what shapes and forms the basis for the changes in behaviour and in social and cultural practices. The qualities of a paradigmatically different information technology fundamentally change the way that we approach and deal with information. In many cases the change will be slow and imperceptible.

Adaptation is usually a gradual process. Sometimes subconsciously the changes in the way in which we approach information change our intellectual habits. Textual analysis had been an intellectual activity since information was recorded in textual form. I contend that the development of principles of statutory interpretation, a specialised form of textual analysis, followed Thomas Cromwell’s dissemination and promulgation of the Reformation statutes, complete with preambles, in print.[26]

There can be no doubt that print ushered in a paradigmatically different means of communication based upon its qualities. My suggestion is that the developing reliance of lawyers upon printed sources is in fact informed by the underlying qualities of print rather than the content itself. Indeed, it could be suggested that the information that is the law is dependent upon the qualities of print for both its reliability and authoritativeness.

The advent of the Digital Paradigm and the recording of legal information in digital format challenge some of the essential qualities of print – the qualities that lawyers, judges and law academics have confidently and unquestioningly relied upon for the authority of the law.

In an article that traced the use of “non-legal” information in judgments the following observation was made:

“Now, however, the world of information readily available to lawyers and judges is vastly larger. Even apart from the on-line catalogs that make full university collections far more available than ever before to a person physically standing in the law library, there has been a dramatic change in what is available to the typical LEXIS or Westlaw subscriber in a law firm, court, or government agency; and Internet access multiplies the phenomenon even further. There is now a dramatically accelerating increase in the availability of nonlegal sources accessible through on-line information methods….

One of the most important features of law’s traditional differentiation has been its informational autonomy. In many respects legal decision making is highly information dependent and was traditionally dependent on a comparatively small universe of legal information, a universe whose boundaries were effectively established, widely understood, and efficiently patrolled.”[27]

One of the arguments advanced by those commenting on the Digital Paradigm and its effects upon law and legal scholarship is to try to locate a form of functional equivalence between the Print and the Digital Paradigms. The reason for this lies in some of the unique qualities of the digital space itself. On a content level, superficially the content remains the same, but there are now elements affecting the presence of content and its stability that challenge the certainties that accompanied information in print. I shall develop this point in the second part of this discussion but now I shall turn to the identification of the qualities of digital communications systems – qualities that make it paradigmatically different from what has gone before.

Identifying Digital Qualities

Using Eisenstein’s approach, the qualities present in the digital communications paradigm and which distinguish it from the pre-digital era, can be identified, although in so saying, the inventory that I have compiled is by no means complete.  Nevertheless, these qualities all underlie the way in which content is communicated.  They are not necessarily unique to the Digital Paradigm because, in some respects, some of them at least were present in some form in the pre-digital era.  For example, some of the qualities identified by Eisenstein that were unique to print, such as dissemination, data collection, information reorganization, amplification and reinforcement have a continued manifestation in information communication technologies since the advent of the printing press.  As new technologies have arrived, some of these qualities have been enhanced.  Certainly the quality of dissemination has undergone a quantum leap in the Digital Paradigm from the Print Paradigm.[28]

My tentative inventory has identified some 14 qualities which dramatically differentiate digital technologies from those that have gone before.  They are as follows:

Continuing disruptive change.

In the past, as new communication technologies have become available, there has been a period in which the new technology has had an opportunity to “bed in” before the next significant change takes place.  For example, the advent of the printing press in 1450 was followed by its spread through Europe, but, apart from improvements in the technology, no new communications technology was present until the development of the electrical telegraph system by Samuel Morse, Joseph Henry and Alfred Vail in 1836.  Effectively, there had been a period of almost 400 years for the printing press to become accepted as a new means of communication.  The telegraph system addressed the tyranny of distance and was followed by Marconi’s long distance radio transmission in the last decade of the 19th century.  That was followed by developments in radio and, within a short time thereafter, the development of television.

It can be seen from this very brief overview that the time between new technological developments in communications has shortened.  Nevertheless, there has been a “breathing space” of increasing brevity between each one.  The advent of digital technologies and particularly the rise of the Internet has meant that, effectively, that breathing space has gone and continuing disruptive change is a reality. The nature of this change has been described in another context as “The Long Blur”[29] and, as new information systems have driven change, many earlier business practices have changed or, in some cases, become obsolete. Work habits and attitudes have changed, and the concept of a secure lifetime job has vanished along with associated concepts of loyalty to an employer and recognition of the loyal employee.  Although many new high paying jobs requiring exceptional skills and intelligence exist, many business models are now effectively service industries, of which, in some respects, the law may be considered one.[30]

In essence the law is an “information exchange” system with a number of “information flows” throughout it. And why should lawyers and Judges avoid changes in the way in which information is delivered or the impact of the qualities that underlie new communications technologies?  After all, both simply process information in a particular manner.

Permissionless Innovation

Associated with continuing disruptive change is the quality of permissionless innovation, particularly in so far as the Internet is concerned.  In some respects these two qualities are interlinked, and indeed it could be argued that permissionless innovation is what drives continuing disruptive change.  Permissionless innovation is the quality that allows entrepreneurs, developers and programmers to develop protocols, using standards that are available and that have been provided by Internet developers, to “bolt‑on” a new utility to the Internet.  Thus we see the rise of Tim Berners-Lee’s World Wide Web which, in the minds of many, represents the Internet as a whole.  Permissionless innovation enabled Shawn Fanning to develop Napster; Larry Page and Sergey Brin to develop Google; Mark Zuckerberg to develop Facebook; and Jack Dorsey, Evan Williams, Biz Stone and Noah Glass to develop Twitter; along with dozens of other utilities and business models that proliferate across the Internet.  There is no need to seek permission to develop these utilities.  Using the theory “if you build it, they will come”,[31] new means of communicating information are made available on the Internet.  Some succeed but many fail.  No regulatory criteria need to be met other than that the particular utility complies with basic Internet standards.

What permissionless innovation does allow is a constantly developing system of communication tools that change in sophistication and the various levels of utility that they enable.  It is also important to recognize that permissionless innovation underlies changing means of content delivery.

Delinearisation of information

This quality recognizes the effect of hypertext linking, although the idea behind it is not new, nor does it originate with Tim Berners-Lee and the development of the World Wide Web, for it was propounded as early as 1945 by Vannevar Bush.[32]  The reality of information delinearisation in many respects subtly changes approaches to intellectual activity that have their foundation in the printing press.  By virtue of the organization of information in print, certain approaches to and habits of intellectual activity were influenced.

I am not for one moment suggesting that the linear approach to human thought and analysis had its origins with the printing press, for certainly it did not.  But what print did was to enhance and solidify linear thinking.[33]  Where there is standardisation and fixity of information within print, it is possible to develop associated information location devices such as indices and tables of content.  Although tables of content were available within the scribal culture, such a table was only relevant to the particular volume, given that it was difficult, although not impossible, to achieve identical volumes.  The mass production of identical copies enabled by print meant that accurate, consistent indices and tables of content could be provided.  This meant that the linear approach to information was enhanced and emphasised, and although footnotes could take the reader to different references, to follow such a line would require a departure from the primary text as the reader first located the referenced text and then sought the information within.  In some respects the utilisation of footnotes was a form of “proto‑delinearisation”, if I can put it that way, but it was not until the development of the World Wide Web and the centralisation of information resources within the context of the Internet that full delinearisation became a reality.

The Internet brings the information to the reader, and I shall discuss shortly the ways in which the Internet enables that through other qualities.  The print paradigm essentially meant that the reader or scholar had to seek texts or other forms of information in a library or various other locations.  In essence the Internet brings the library to the scholar, and it is by virtue of this that delinearisation becomes significant.

By the use of hypertext links, the scholar or reader is immediately able to seek out the other information.  This means that the reading, or information acquisition, process – which is essentially linear in following a particular line of argument in the principal text – is interrupted as the scholar or reader follows the hypertext link to the other source of information.  This can be done instantaneously.  The equivalent within the context of the print paradigm would mean that the scholar or reader would stop at a particular footnote, go to the source of the information, say at a library, locate it, read it, consider it, and then return to the principal text.  This would prolong the reading of the principal text considerably.

However, the potential within delinearisation is that the primary text need no longer be considered the principal source of information, in that the gathering together of information by following hypertext links may result in a totally different approach to analysis than there was before.  Not only would the principal text be subject to scrutiny and critique, but it could be considered within the context of a vast range of additional information made possible by hypertext linking.

Delinearisation may well mean that a change could take place in the way in which an argument is developed and analysed.  Linear structure is present throughout the law. For example, current judgment writing style follows a pre-ordained linear structure involving an introduction, identification of issues – both factual and legal – identification of evidence or information relevant to the issues, discussion and analysis of the evidence, matching it to the issues and the law, and a conclusion.

Some judgments already utilise hypertext links, a practice enabled by new digital technologies. There are difficulties in this regard with the problem of “link rot”.[34]  Link rot occurs where a given URL no longer links to the material which is referenced, making a citation to that material worthless.  Neacsu observes that

  “many facets of print works such as fixity, uniformity, and authenticity are no longer automatically ensured, the concept of reliability has inevitably changed. However, because the fundamentals of Western scholarship are still the same, the need of accessibility of content-identical copies still remains.”[35]

 This is particularly the case with the ability to access cited material, exacerbated by the phenomenon of link rot. Neacsu makes a number of suggestions whereby this problem may be met.

a)      the use of Persistent Uniform Resource Locators (PURLs), which provide a persistent way of locating and identifying electronic documents using the redirect feature built in to the HTTP protocol (a minimal sketch of this redirect mechanism follows this list);

b)      archiving, including systems like the Internet Archive, while recognizing that the scope and running of that archive does not answer the needs of legal scholarship, in that there is no reliable coverage of sources cited in law review articles and no reliable institutional overview;

c)       law library preservation systems maintaining a digital archive that includes copies of all documents cited in all journals, including student-edited journals;

d)      a “Legal URL Archive”, which would involve the creation of a mirror site in which duplicates of cited documents would be created, with the objective of maintaining the document as it was when cited (addressing the issue of stability) and keeping the information publicly available (addressing the issue of accessibility).[36]
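To make the PURL mechanism in item (a) concrete, the following sketch is offered. It is my illustration, not part of Neacsu’s proposal: the persistent URL is a placeholder assumption, and the same request machinery is used to show how link rot might be detected. A PURL itself never changes; an HTTP redirect forwards the reader to wherever the cited document currently resides.

```python
# A minimal sketch, assuming a placeholder URL: resolving a persistent
# identifier by following HTTP redirects, and detecting possible link rot.
import urllib.request
import urllib.error

def resolve(url: str, timeout: float = 10.0) -> tuple[int, str]:
    """Follow redirects; return the final HTTP status code and final URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status, response.geturl()
    except urllib.error.HTTPError as e:
        return e.code, url      # e.g. 404 - the cited page has rotted
    except urllib.error.URLError:
        return 0, url           # the host itself is unreachable

status, final_url = resolve("https://purl.example.org/legal/cited-document")
if status == 200:
    print(f"Citation still resolves, currently at {final_url}")
else:
    print(f"Possible link rot: status {status}")
```

The design point is simply that the persistent identifier, not the shifting location, is what gets cited; the redirect table can be updated without disturbing the citation.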

In addition to the ability to link to other material there is the ability to embed other forms of content – video, audio, animations, diagrams and the like – within a judgment.[37]  This may well mean that in the future the strict linear form of analysis which I have described may become a little less simplistic and a little more “information rich”, involving a wider opportunity for the reader to explore some of the underlying support for a judgment and the analysis within it, in a way that is more immediate than was the case in the pre-digital paradigm, given the difficulties that one might have had in tracking down sources.

This is not to say that linear analysis is dead, but what it does mean is that approaches to analysis may change in subtle ways as the result of the ability to bring all of the information that supports an argument into the one place at the one time.

Information Persistence or Endurance.

It is recognised that once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and may be retained on any one of them by virtue of the quality of exponential dissemination discussed below, despite the phenomenon of “link rot.”[38] This has been summed up in another way by the phrase “the document that does not die.” Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere.  This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus.  Information persistence means that the information is there, but it may have become difficult to locate, and retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine.  The fact that information is persistent means that it is capable of location.

Dynamic Information

In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change.  It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others.

An example of dynamic information may be found in the on-line newspaper which may break a story at 10am, receive information on the topic by midday and by 1pm on the same day have modified the original story.  The static nature of print and the newspaper business model that it enabled meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle potentially may run on a 24 hour basis, with updates every five minutes.

Similarly, the ability that digital technologies provide for contributing to dialog on any topic, enabled in many communication protocols primarily as a result of Web 2.0, means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change.  This dynamic nature of information challenges the permanence that one may expect from persistence, and it is acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the “document that does not die”.[39]

Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer.  Thus there is the potential for information to be other than static.  If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version.  Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical.  Thus, authors and publishers can control content.

In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required.  In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference being that the original text vanishes and is replaced with the amended copy.  Thus one may, with reason, doubt the validity or authenticity of information as it is transmitted.
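A trivial sketch, using a placeholder filename of my own invention, illustrates the glossator point: a received digital copy can be amended and saved over, and the amended file carries no intrinsic sign that it differs from what was originally transmitted.

```python
# A minimal sketch, assuming a placeholder file "received_judgment.txt":
# the recipient amends the received copy in place, and the earlier version
# simply ceases to exist on that machine.
from pathlib import Path

copy = Path("received_judgment.txt")
text = copy.read_text(encoding="utf-8")

# The reader-turned-glossator alters the wording ...
amended = text.replace("dismissed", "allowed")

# ... and overwrites the file. Nothing in the document itself records the change.
copy.write_text(amended, encoding="utf-8")
```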

Dissociative Enablement

The quality of dissociative enablement has implications for user behaviour.  Another way of describing dissociative enablement could be disinhibition.  This quality inherent within Internet technologies enables an Internet user to engage in cyber-stalking, to embark on a discussion utilising language and tone that one would be reluctant to use to the correspondent or auditor face to face,[40] and enables the Internet criminal to commit fraud or other forms of Internet crime without having to confront the victim.  Dissociative enablement or disinhibition enables behaviours to take place on the Internet that might not otherwise take place within the physical context.  Dissociative enablement or disinhibition has an impact on the nature and quality of discourse within the Internet space.

In many respects, it enables what perhaps may be a more robust form of discourse than might otherwise be the case.  Whether or not that is a good thing is not for me to say.  Yet it is behaviour that this quality enables that cannot be ignored, and, of course, it is ultimately tied in with the delivery of content and the nature and quality thereof.

Participation and Interactivity

A further aspect of the digital environment that differs from the print paradigm is that of interactivity, and I have already made reference to this in my discussion about dynamic information.  Reading from the print media is essentially a passive activity, and any interactivity may be on the part of the reader making notes or writing down thoughts or concepts that develop as a result of the reading process.  In the digital environment the reader may interact with the text as it is presented.  In this respect the acquisition of information in the digital environment becomes associative and non-linear.

In some respects participation in the context of social interaction is associated with dissociative enablement and with the information dynamic.  But the Internet enables a greater degree of participation in dialog and discourse than might earlier have been the case.  In the pre-digital paradigm participation within a discussion or engagement with an issue may only have been available through the “letters to the editor” column or perhaps, if one were so motivated, pamphleteering.  The Internet now enables immediate participation within a debate and the ability to share one’s thoughts through the use of blogs, Twitter, Facebook and other forms of social media.  Furthermore, the ability to participate, engage in debate, seek out information and engage with others is probably the greatest opportunity yet to embark upon a form of participatory democracy.  On a global scale, it mirrors the Athenian form of participation and perhaps may even be the first time that the community has had such an opportunity to engage in this way.  The quality of participation is driving many governments towards considering on-line voting, recognising that the Internet enables an opportunity for greater engagement by the community with the political system.  It doesn’t stop there.  The participatory possibilities of the Internet could well mean that in the future juries would hear trials on-line rather than being physically present in a court room.

Volume and capacity

The data storage capabilities of digital systems enable the retention and storage of large quantities of information. Whereas books and other forms of recording information were limited by the number of pages or the length of a tape, the potential storage capabilities inherent in digital systems, while not limitless, are significantly greater than those of other media. This phenomenal increase in the amount of information available has a corresponding downside in that information location can, by virtue of volume alone, become difficult. This is resolved by other qualities of the technology. However, information volume is an issue that has an impact upon certain understandings that apply to law and legal principles and which I shall address at a later point in this paper.

Exponential dissemination

Dissemination was one of the leading qualities of print identified by Eisenstein, and it has been a characteristic of all information technologies since. What the Internet and digital technologies enable is a form of dissemination that has two elements. One element is the appearance that information is transmitted instantaneously to both an active (on-line recipient) and a passive (potentially on-line but awaiting) audience. Consider the example of an e-mail. The speed of transmission of emails seems to be instantaneous (in fact it is not), but that enhances our expectations of a prompt response and our concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender. Instant messaging is so-called because it is instant. A complex piece of information may be made available via a link on Twitter to a group of followers, and may then be retweeted to an exponentially larger audience.
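A trivial arithmetic sketch, using invented figures rather than measured data, shows why “exponential” is the apt word: if each recipient passes an item on to a fixed number of new recipients, the audience multiplies at every round of sharing.

```python
# A minimal sketch with assumed, illustrative figures (not measured data):
# each recipient reshares to a fixed number of new people, so reach grows
# multiplicatively with each round of forwarding or retweeting.
reshare_rate = 5        # assumption: each recipient forwards to 5 new people
generations = 6         # assumption: six rounds of resharing

audience = 1            # the original poster
reached = 0
for generation in range(1, generations + 1):
    audience *= reshare_rate
    reached += audience
    print(f"after round {generation}: {reached:,} people reached")

# final line printed: after round 6: 19,530 people reached
```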

The second element deals with what may be called the democratization of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and the communication media that have gone before. In the past, information dissemination has been an expensive business. Publishing, broadcast, record and CD production and the like are capital intensive businesses. It used to (and still does) cost a large amount of money, and required a significant infrastructure, to be involved in information gathering and dissemination. There were a few exceptions, such as very small scale publishing using duplicators, carbon paper and samizdat, but in these cases dissemination was very limited.[41]  Another aspect of early information communication technologies is that they involved a monolithic centralized communication to a distributed audience. The model essentially was one of “one to many” communication or information flow.[42]

The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow, with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognized that the Internet allows everyone to become a publisher. No longer is information dissemination centralized and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organizations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists.[43]  Once the information has been communicated it can “go viral”, a term used to describe the phenomenon of exponential dissemination as Internet users share information via e-mail, social networking sites or other Internet information sharing protocols. This in turn reinforces the earlier quality of Information Persistence or “the document that does not die”, in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve it or eliminate it.[44]

The “non-coherence” of information

If we consider a document – information written upon a piece of paper – it is quite easy for a reader to obtain access to that information long after it was created. The only things necessary are good eyesight and an understanding of the language in which the document is written.

Data in electronic format is dependent upon hardware and software. The data contained upon a medium such as a hard drive requires an interpreter to render it into human readable format. The interpreter is a combination of hardware and software. Unlike the paper document, the reader cannot create or manipulate electronic data into readable form without the proper hardware in the form of computers.[45]

Schafer and Mason warn of the danger of thinking of an electronic document as an object ‘somewhere there’ on a computer in the same way as a hard copy book is in a library. They consider that the ‘e-document’ is better understood as a process by which otherwise unintelligible pieces of data are distributed over a storage medium, are assembled, processed and rendered legible for a human user. Schafer and Mason observe that in this respect the document as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.[46]

Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional information that includes, but is not limited to, metadata and computer logs that may be relevant to any given file or document in electronic format.

This involvement of technology and machinery makes electronic documents paradigmatically different from ‘traditional documents.’ It is this mediation of a set of technologies that enables data in electronic format – at its simplest, positive and negative electromagnetic impulses recorded upon a medium – to be rendered into human readable form. This gives rise to other differentiation issues such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing programme or internet browser used.
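The point can be made concrete with a trivial sketch of my own (the byte string is an invented example): the same stored bytes only become legible text once a layer of software interprets them, and a different interpreter renders a different result.

```python
# A minimal sketch with an invented byte string: raw stored data is not a
# "document" until software interprets it, and the choice of interpreter
# changes what the human reader sees.
stored = b"caf\xc3\xa9"            # bytes as they might sit on a storage medium

print(stored)                      # b'caf\xc3\xa9' - not yet readable text
print(stored.decode("utf-8"))      # café  - rendered by one interpretation
print(stored.decode("latin-1"))    # cafÃ© - the same bytes, read another way
```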

The necessity for this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smart phones and tablets. These devices are necessary to “decode” information and allow for its comprehension and communication.

I made reference in the introduction to this paper to the issue of ‘functional equivalence’ and perhaps the only way in which an electronic document may be seen as ‘functionally equivalent’ to a paper based document may be in the presentation of information in readable form. In the case of a Firm of Solicitors v The District Court Auckland,[47] Heath J noted that s 198A of the Summary Proceedings Act 1957 was designed to deal with a paper based environment but that now more often than not, information is stored primarily in electronic form. He adopted a functional equivalence approach to executing a search warrant.

With respect I consider that ‘functional equivalence’ is an unhelpful concept, although to make the statute work in 2004, it was probably the only option available to Heath J. Functional equivalence can relate only to the end product and not to the inherent properties that underlie the way in which the material or information is created, stored, manipulated, re-presented and represented.

In the context of the New Zealand Search and Surveillance Act 2012 it is interesting that the complexity of electronic information is something that is capable of being searched for or ‘seized’ yet is described as an ‘intangible’ thing. The ultimate fruit of the search will be the representation of the information in comprehensible format, but what is seized is something paradigmatically different from mere information, the properties of which involve layers of information. It is clear that the legislation contemplates the end product – the content contained in the electronic data – yet the search also involves a number of aspects of the medium as well. In the ‘hardcopy’ paradigm the medium is capable of yielding information such as fingerprints or trace materials, but not to the same degree of complexity as its digital equivalent. Similarly, the complexities surrounding E-Discovery demonstrate that an entirely different approach is required from the traditional means of discovery.[48]  Although Marshall McLuhan intended an entirely different interpretation of the phrase, ‘the medium is the message,’[49] it is a truth of information in digital format.

 Format Obsolescence.

In the print and scribal paradigms, information was preserved as long as the medium remained stable. The Dead Sea Scrolls and early incunabula from the print paradigm provide examples. But, as I have observed above, no intermediate technology was required to comprehend the content.

The quality of continuing disruptive change means not only that digital technologies and communications protocols are in a state of change, but also that, within many of the programs that are commonly used, new versions become available with enhancements and often new formats. This is further complicated by the unwillingness of software developers and distributors to continue support for products that have been replaced by new versions. The problem of content access is further exacerbated when earlier formats for content or data storage are replaced, so that the information stored in those earlier formats cannot be accessed.

For example, Microsoft Word uses the file extension .doc for its native file format. However, the reality is that the .doc extension encompasses four distinct file formats:

  1. Word for DOS
  2. Word for Windows 1 and 2; Word 4 and 5 for Mac
  3. Word 6 and Word 95 for Windows; Word 6 for Mac
  4. Word 97 and later for Windows; Word 98 and later for Mac

Most current versions of Word recognize the fourth iteration of the .doc format, which is a Binary File Format implementing OLE (Object Linking and Embedding) structured storage to manage the structure of the file format. OLE behaves rather like a hard drive system and is made up of a number of key components: each Word document is composed of “big blocks” which are almost always (but do not have to be) 512 byte chunks. Thus a Word document’s file size will be a multiple of 512.

“Storages” are analogues of the directory on a disk drive, and point to other storages or “streams” which are similar to files on a disk. The text in a Word document is always contained in the “WordDocument” stream. The first big block in a Word document, known as the “header” block, provides important information as to the location of the major data structures in the document. “Property storages” provide metadata about the storages and streams in a doc file, such as where it begins and its name and so forth. The “File information block” contains information about where the text in a Word document starts, ends, what version of Word created the document and other attributes. Microsoft has published specifications for the Word 97-2003 Binary File Format but it is no longer the default, having been replaced by the Office Open XML standard, indicated by the newer .docx extension.
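As a rough illustration of the distinction just described, the sketch below (my own, with a placeholder filename) inspects a file’s opening bytes: the legacy binary .doc is an OLE structured-storage container with a fixed 8-byte signature, whereas the newer .docx (Office Open XML) is a ZIP archive.

```python
# A minimal sketch, assuming a placeholder filename "report.doc": telling a
# legacy OLE-based .doc apart from an OOXML .docx by their opening bytes.
import os

OLE_SIGNATURE = bytes.fromhex("D0CF11E0A1B11AE1")   # OLE compound file header
ZIP_SIGNATURE = b"PK\x03\x04"                        # OOXML (.docx) is a ZIP archive

def word_container(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(8)
    if header == OLE_SIGNATURE:
        # Legacy binary format, built from 512-byte "big blocks",
        # so the file size is typically a multiple of 512.
        multiple_of_512 = os.path.getsize(path) % 512 == 0
        return f"legacy binary .doc (size is a multiple of 512: {multiple_of_512})"
    if header.startswith(ZIP_SIGNATURE):
        return "Office Open XML (.docx) - a ZIP archive"
    return "unrecognised or obsolete format"

print(word_container("report.doc"))
```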

The problem is that if one wishes to open a Word document created by a version preceding Word 97 in a recent iteration of Word (say Word 2010) it will be blocked. There is a work-around but that involves a level of complexity that may discourage average users.[50] Although there may be converters available for older formats, again this adds an additional layer of complexity for those who are not adept at computer use.[51]

An additional layer of difficulty arises where there is a lack of interoperability with proprietary file formats from other data or text storage programs, and where those programs have been discontinued and are no longer available. This is a problem encountered particularly in E-Discovery when historic documents are sought. In addition, there may be hardware difficulties where data is stored on old media such as floppy disks. Modern computers no longer include a floppy disk drive – indeed data is rarely if ever stored on such low capacity media – and only allow USB storage devices to be used.

Because of the way in which information is encoded in the digital environment, the information exists, but is not available. Thus, the document is still “alive” but is in a state of suspended animation until its content can be accessed.

Format obsolescence does not challenge the concept of Information persistence or endurance because aspects of the Internet enable that. It is, however, a subset of the necessity for technological mediation between stored digital data and its rendering in a comprehensible form.

The last three qualities are interrelated.  These are:

Availability of Information

Searchability of Information

Retrievability of Information

As I have earlier suggested, these qualities are associated with persistence of information and the information dynamic, but what is important is that the Internet enables the information to come to the user.  No longer does the user have to go to the information.  This recognises one of the fundamental realities that characterise the nature of information within the Internet space, and one that must also be considered in terms of information and its use generally: the concept of “information flow” to which I have already made reference.

The Internet enables and enhances the flow of information towards the user, rather than the user directing him or herself towards the information.  This is recognised by the availability of information which, as I have suggested, is associated with information persistence.  What is significantly different with the Internet is that the information is constantly available, 24 hours a day, 7 days a week, 365 days of the year.  The Internet is always on – it is always “open”.  Information availability is not restricted by time or the presence of a librarian and is impeded only if the site where the information is held is down for some reason.

One of the problems with information, both in the sense of persistence and availability, is finding out where it is.  Before the Internet went “public”, when it was essentially a university and research based tool, users developed means of locating information, of which Gopher was one example.  The arrival of the World Wide Web resulted in the development of various search engines, of which Google has now become the dominant force.  Searchability of information means that the vast library of the Internet can reveal its treasures as long as one is competent in the use of a search engine.  Most users utilise fairly basic search terms, but more sophisticated use of search terms and query construction narrows the scope of the information sought and returns more precise results.  The important thing is that searchability of information means that information availability is enhanced.
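The effect of tighter query construction can be shown with a toy sketch of my own; the miniature “corpus” and the matching rule are invented for illustration and are not tied to any real search engine: requiring every term to match returns a smaller, more relevant set than matching any term.

```python
# A minimal sketch with an invented corpus: requiring all query terms to
# match (a narrower query) returns fewer, more relevant results than
# matching any single term (a broader query).
corpus = [
    "magna carta confirmed by successive royal statutes",
    "printing press as an agent of change in early modern europe",
    "doctrine of precedent and the history of law reporting",
    "link rot and the citation of internet sources in law reviews",
]

def search(terms: list[str], require_all: bool) -> list[str]:
    match = all if require_all else any
    return [doc for doc in corpus if match(term in doc for term in terms)]

print(search(["law", "precedent"], require_all=False))  # broad query: 2 results
print(search(["law", "precedent"], require_all=True))   # narrow query: 1 result
```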

The third part of the trilogy of course is retrievability.  This means that available information which has been located by a search is instantly retrievable and, importantly, that the information flow is towards the user rather than the reverse.

Some Observations

Before concluding this discussion about the qualities of the Digital Paradigm there are two comments that need to be made. The first is that, as I have already suggested, there is an overlap or merger between some of the qualities that I have identified. The quality of Information Persistence exists in tension with the dynamic nature of information, with format obsolescence and with digital information non-coherence. Certainly the Internet and its protocols give greater emphasis to Information Persistence than the other competing qualities, but it must be recognized that this example of a “qualities tension” means that any analysis of the impact of Digital Paradigm qualities must be a nuanced one. On the other hand, some approaches may only involve the application or consideration of some of the qualities. Internet-based analysis is not so likely to involve format obsolescence, which is very likely to arise in a consideration of E-Discovery approaches.

The second is about the wider issue of the effect the Internet may be having upon the way that we think.

Will delinearisation change the way that we think? This gives rise to the question of whether or not the internet changes us forever.  Underlying this theory is the concept of neuroplasticity – the ability of the brain to adapt to and learn from new stimuli.  The concept of neuroplasticity was picked up by Nicholas Carr in his book The Shallows: How the Internet is changing the way we think, read and remember.[52]  His book, based upon an earlier article that appeared in the Atlantic, has as its thesis that the internet is responsible for the dumbing down of society, based upon the way in which our minds respond both to the wealth of information and to its availability.

The neuroplasticity argument is picked up by Susan Greenfield,[53] who believes the web is an instant gratification engine, reinforcing behaviours and neuronal connections that are making adults more childlike and making kids hungry for information that is presented in a super-simplistic way but in fact reduces their understanding of it.  Greenfield is of the view that the web spoon-feeds us things to capture our attention. This means we are learning to constantly seek out material that stimulates us, and our plastic minds are being rewarded for our “quick click” behaviour.  We want new interactive experiences and we want them now.

This view is disputed by Aleks Krotoski,[54] who first observed that there is no evidential support for Greenfield’s propositions, which presuppose that once we have used the web we will be forever online and never log off again.  According to Greenfield, says Krotoski, we become connected to our computers and other devices in a co-dependent, exclusive, almost biological way, ignoring where, how and why we are connecting.  Krotoski, for example, disputes internet addiction, internet use disorder and neurological rewiring.

Like Krotoski, William Bernstein[55] rejects Carr’s thesis. Bernstein points out that neuroplasticity is a phenomenon well known to brain researchers. He then goes on to ask and answer Carr’s question

“Does the Web rewire your brain? You bet; so does everything you actively or passively experience. Literacy is possibly the most potent cerebral rewire of all; for five thousand years humans have been reassigning brain areas formerly needed for survival in the natural environment to the processing of printed abstractions. Some of this commandeered real estate has almost certainly been grabbed, in its turn, by the increasing role of computers and the Internet in everyday post-industrial life. Plus ça change.”[56]

Bernstein then goes on to examine Carr’s theory that Internet use decreases concentration on the matter at hand, emphasising the use of hyperlinks. Bernstein accepts that we have better information retention if it is placed in front of us on one page rather than chasing it through a maze of hypertext links. On the other hand, he observes, real life rarely supplies us with precisely the information that we need in one document. Those skilled at following informational threads through different sources will succeed more often than those spoon fed information.[57]

Bernstein finally confronts Carr’s argument in this way:

“Carr’s thesis almost automatically formulates its own counterargument: Life in the developed world increasingly demands non-rote, nonlinear thought. Shouldn’t learning to navigate hypertext skilfully enhance the ability to make rapid connections? Shouldn’t such abilities encourage the sort of nonlinear creative processing demanded by the modern work environment, and make us smarter, more productive, and ultimately more autonomous and fulfilled …

If the Web really is making Americans stupid, then shouldn’t citizens of more densely wired nations, such as Estonia, Finland and Korea, be hit even harder? The question answers itself.”[58]

In some respects Carr and Greenfield are using the “low hanging fruit” of technological fear[59] to advance their propositions.  Krotoski’s rejection of those views is, on the other hand, a little too absolute and in my view the answer lies somewhere in between.  The issue is a little more nuanced than whether or not the Internet is dumbing us down or whether or not there is any evidence of that.

My argument is that the impact of the internet lies in the way in which it redefines the use of information and the way we access it, process it, use it, respond to it and our expectations of it and its availability.

This may not seem to be as significant as Carr’s rewiring or Greenfield’s neuroplasticity, but it is, in my view, just as important.  Our decision making is based upon information.  Although some of our activity could be termed responses to stimuli, or might indeed be instinctive, most if not all of the stimuli to which we respond can in fact be defined as information.  The information that we obtain when crossing the road comes from our senses of sight and hearing, but in many other of our activities we require information upon which we may deliberate and to which we respond in making decisions about what we are going to do, buy and so on.

And paradigmatically different ways of information acquisition are going to change the way in which we use and respond to information. There are other changes taking place that arise from some of the fundamental qualities that underlie new digital communications technologies – and all communication technologies have such properties or qualities underlying and attaching to them, from the printing press through the telegraph, radio and television and into the digital paradigm.  It is just that digital systems are so fundamentally different in the way in which they operate, and in their pervasive nature, that they usher in a new paradigm.[60]

Law and Precedent in the Print and Digital Paradigms

The assumptions that underlie the doctrine of precedent provide an example of how Digital Paradigm qualities present a new challenge to the law.

For hundreds of years law was declared or “discovered” by Common Law Judges on a case by case basis.  Judges might follow the decisions of other Judges in similar cases, but because of the rarity of adequate written records and the distance between Courts, coupled with inadequate transportation and communication systems, there was considerable variance between and even within jurisdictions.  Law was highly localised and individualised.  The goal of the Monarch’s law may have been, as Maitland put it in the context of feudal contract, to swallow all other law, but it was no easy task to accomplish at the time.[61]

The advent of the Print Paradigm and the qualities of print affected the structure, the capabilities and functioning of law in various ways.  It is not “fine print” that characterises the law, but print itself.  Print effected and affected the organisation, growth and distribution of legal information.  The processes of law, the values of law and many of the doctrines of law required a means of communication that was superior to handwriting and handwritten manuscripts to store information.

Ethan Katsh puts forward the proposition that changes in the means used to communicate information are important to law because law has come to rely on the transmission of information in a particular form.  Katsh propounds that law does not simply produce information but structures, organises and regulates it.[62] It does this primarily through the medium of print.

Law before Gutenberg was different from law today in significant ways.  The printing press made it possible for the past to control the future as never before.  Prior to the printing press scribes merely took notes, under the Judges’ direction, of what was said and done.  Without a verbatim transcript of judicial proceedings, later Judges could not be certain what was said and done previously.  Thus, with most law residing in the minds of Judges and not in black and white on paper, Judges could innovate and invent while pretending to follow strict precedent.  This ended with the printing press, printed judicial decisions, printed positive law and especially printed constitutions.[63]

Printing and the qualities identified by Eisenstein enabled many copies of one text to be distributed throughout a community or a country.  It meant that the mistakes, errors and glosses that had previously been a characteristic of the scribal culture were no longer perpetuated.  It meant that the words that were printed and read by a person in London were the same as those read from the same edition by a person in New Orleans.  The printed word could not be changed.  Once it was on paper it was immutable.  Printing replaced the brittle oral and script forms of communication with a stable, secure and lasting medium.  Memory, so vital for the oral tradition, could now be committed to print.  Instead of looking for the earliest or original manuscript that had not received the attention of glossators, one seeking information would look in the latest print edition.

In the medieval period, oral contracts were often preferred over written ones. The nature of writing and the idea of placing reliance upon or consulting a written document was not common.  Memory was considered to be more trustworthy than something written, and practical questions were answered by oral testimony and not by reference to a document.[64] If there was a dispute over land ownership and a written charter needed interpretation or was contradicted by what was remembered, memory took precedence over written proof, and the principle that an oral witness deserved more credence than written evidence was a legal commonplace.

The development of movable type resulted in a product more fixed and stable than the work of a scribe.  Forgery and careless copying became less common.  Granted, printed works could contain errors, but a large number of standardised copies provided works that were not easily changed.  That in itself gave a sense of authority and authenticity that had been lacking earlier.  A reader could assume that the printed words were the words of the author.

The advent of print provided a keystone for the legal process.  The development of the common law by a system of precedent is expedited when lawyers and Judges have a common reference point and can rely on the fact that there are exact copies of a case or a statute in different places.  Thus, lawyers and Judges are assured that the language that they are using is identical to the language consulted by others.

Printing enabled the standardisation of legal information and the words on paper began to acquire an authority that had been lacking in the scribal period.  Thus law, previously associated with custom and the remembered words of Judges and what was contained in the Year Books, gave way to law based upon books. Printing was introduced into England in 1476 and five years later the first law books were printed.  In 1485 the printing of Parliamentary Session Laws began.

The history of the doctrine of judicial precedent is intimately bound up with the history of law reporting, and the development of law reporting as we know it could not have taken place without print.[65] By the Eighteenth Century the printed word was sufficiently reliable that:

 “Each single decision standing by itself had already become an authority which no succeeding Judge was at liberty to disregard.”[66]

In 1765 Lord Camden claimed that if the law was not found in the books it was not law.[67]  By the end of the Eighteenth Century the importance of Law Reports was such that Edmund Burke claimed “to put an end to the reports is to put an end to the law of England”.[68]

Thus the development of print and the development of precedent, a foundation stone of our common law legal structure, are inextricably linked.  Precedent provides fairness, in that like cases should be treated alike, and as an aid to judicial decision-making to prevent unnecessary reconsideration of established principles.[69]

The development of precedent provides certainty and security in the law so that citizens may rely upon it to order their affairs.  Yet, by the same token, the legal process is not renowned as innovative and has rarely been at the forefront of change.  Rather, it puts brakes on change by way of precedent.

Precedent has been adopted by the legal process to integrate legal change to the pace of change in society.  If the law is to become more tolerant of change the role of precedent will continue to evolve.  It will not disappear as a concept but it will not be the concept to which we have become accustomed.

The development of precedent has been somewhat serendipitous. Holdsworth observed that “One of the main conditions for the success of the system of case law is a limit on the number of case reports”[70] and Grant Gilmore[71] has observed that:

 “When the number of printed cases becomes like the number of grains of sand on the beach, a precedent-based case law system does not work and cannot be made to work … the theory of precedent depends, for its ideal operation, on the existence of a comfortable number of precedents, but not too many.”

The nature of the printing technology imposed a limitation on the number of cases that could be reported and printed, and on the speed with which they could be published.  Indeed, the authority of case law has been enhanced by a slow development in which reported decisions are not rapidly modified.  Leading cases not only settle a particular point of law but also add to the general authority of decisions because they settle a point with some finality.

Thus, the very nature of the print paradigm has placed certain boundaries upon the development of law and has allowed for the development of the doctrine of precedent to the point where we are today.

Diana Botluk describes the challenges posed by Internet publication in the following way:

“Publication on the Web can often bypass … traditional methods of filtering information for quality, thus making the end user of the information more responsible for the evaluation process.”[72]

The traditional methods to which she refers include determining that

a) “an authoritative source” has written or published the information;

b) that the information has been “authenticated by editorial review”; and

c) that it has been “evaluated by experts, reviewers, subject specialists or librarians.”[73]

Print and Precedent as a Brake on Change

Thus the law had an ally in working towards its goal of maintaining a measured pace of change.  The silent partner which has assisted in fostering a public image of law as an institution that is both predictable and flexible is the communications medium that has dominated the legal process for the past 500 years – the medium of print.  As the new digital media of the twentieth and twenty-first centuries have taken on some of the duties performed by print, one of the consequences will be to upset the balance that the law has worked so diligently to achieve over several centuries.[74]

The importance of publication – up until recently in print – as an authoritative concept is put into sharp focus by the following comment by Professor Robert Berring:

“The doctrines of the law are built from findable pieces of hard data that traditionally have been expressed in the form of published judicial decisions. The point of the search is to locate the nugget of authority that is out there and use it in constructing one’s argument.

Because legal researchers are so accustomed to this idea, it is difficult to realize how unique this concept is in the world of information. In most fields in the humanities or social sciences, a search of the literature will reveal certain orthodoxies or prevailing views, certain points in contention with each side having its own warrior-like adherents, but there are no points of primary authority. There are no nuggets of truth or treasure …. Legal researchers believe that there are answers out there that are not just powerfully persuasive, but are the law itself.”[75]

Precedent has been a brake on change. To continue the motoring metaphor, within the law it has also encouraged the rear-view mirror[76] as a mode of thinking about the present and future.  Paul Levinson suggests, in the context of media studies, that we frequently use backward looking metaphors for the new digital environment.  Realaudio becomes equated with “radio” – a metaphor enhanced by streaming content.  Research takes place in a “digital library”.  An online chat room is treated as a “café”. Information provided in a web browser is a “web page” and information that does not appear on the screen extends the print metaphor to one from the newspaper world – the information is “below the fold”.

These analogies, according to Levinson, call attention to the benefit of walking into the future with our eyes upon the past, but he also points out that the mirror may blind us to ways in which the new medium is not analogous to the media of the past.  If we use the Internet as a library, the analogy breaks down when the connection crashes and it becomes impossible to continue reading the text; if the lights go out in a real-world library, an alternative light source can be found.  Levinson demonstrates the problem in this way:

“If we stare too long into the rear-view mirror, focussing only on how the new medium relates to the media of the immediate past, we may crash head-on into an unseen, unexpected consequence.  On the other hand, if we look only straight and stiffly ahead, with no image or idea of where we are coming from, where we have just been, we cannot possibly have a clear comprehension of where we are going. …  A quick glance in the rear-view mirror might suggest that electronic ink is an ideal solution: it allows the convenience of paper, with the word processing and telecommunication possibilities of text on computers with screens.  But, on more careful examination, we find that we may not have been looking at the most relevant part of an immediately past environment.  One of the great advantages of words fixed on traditional paper is indeed that they are stationary with an “a”: we have come to assume, and indeed much of our society has come to rest upon the assumption, that the words in books, magazines, and newspapers will be there for us, in exactly the way we first saw them, any time we look at them again in the future.  Thus, the stationery as stationary, the book as reliable locus, is a function as important as their convenience in comparison to text on computers.  Of course, we may in the future develop electronic modes of text that provide security and continuity of text equivalent to that on paper – modes that in effect allow the liberation of text without any diminution of its reliability – but current electronic “inks” and “papers” are ink and paper only via vision in a rear-view mirror that occludes a crucial desirable component of the original.”[77]

Using Levinson’s rear-view mirror, and developing that metaphor further, we can recognise that we are in a state of movement and transition – moving away from the print paradigm and towards the digital paradigm, not yet divorced from the one and not fully attached to the other. It took a generation for print technology to move from the lectern-based bible of Gutenberg to the convenience of a handheld book that could be included in a traveller’s pack.  Henry VII recognised the value of the new technology when he came to the throne ten years after Caxton introduced the press, appointing a Stationer to the King – an office which later became the King’s Printer. Nearly 100 years after Caxton, Edmund Plowden recognised the damage that could be done to his reputation if he did not supervise the printing of his Commentaries. Lawyers and Judges were giving credit to printed material in the early Seventeenth Century[78] at the same time as Sir Edward Coke was ensuring his approach to the law would be disseminated and preserved by the printing of his Reports and Institutes.

In some respects this explains why we seek to explain and use new communications phenomena through the term “functional equivalence.” Functional equivalence is itself a manifestation of rear-view mirror thinking – an unwillingness to let go of the understandings of information that we had in the past. It roots us in an environment where those informational expectations no longer pertain – where the properties of the equivalent technology are no longer applicable or valid.

This reflection upon the transition from the scribal culture to that of print, whilst recognising that the two co-existed for a considerable period, demonstrates that we must adapt to new technologies and at the same time adapt the old. But we should not allow the values arising from the qualities of the old technology to infect or colour our understanding of, or approach to, the new. Certainly the use of precedent, by its very nature, involves use of the rear-view mirror. This is not to decry the importance and necessity of precedent as a means of creating certainty and consistency in the law. But, as the argument develops, it may be seen that we may lose those two elements of the law that we take so much for granted as we move into an environment of constant, dynamic and disruptive change.

We are so familiar with the paradigm of print that we do not give its ramifications or its qualities a second thought. The qualities of dissemination, standardisation, fixity of text and the opportunity to cross-reference to other printed sources go unnoticed. We have become inured to them. Yet they provide the foundation for our acceptance of printed law as reliable and authoritative. Earlier editions of the Bluebook – the American uniform system of citation, not unlike the rules provided in the New Zealand Law Style Guide – provided that citations should be to paper versions. Rule 18.2 provided:

This rule requires the use and citation of traditional printed sources, except when the information is not available in a printed source, or if the traditional source is obscure or hard to find and when the citation to an Internet source will substantially improve access to the same information contained in the traditional source. In the latter case, to the extent possible, the traditional source should be used and cited.[79]

Since that was written there have been two subsequent editions of the Bluebook. The directive preferring print sources remains, although print may be forgone if “there is a digital copy of the source available that is authenticated, official, or an exact copy of the printed source.”[80]

In its preference for printed sources, the Bluebook impliedly recognises that information in the Digital Paradigm, by its nature and with its different underlying qualities, presents an entirely different information environment.[81]

One of the most significant aspects of the Digital Paradigm is continuing disruptive change.  Moore’s Law[82] is as applicable to information in cyberspace as it is to the development of microprocessor technology. New information becomes available and is disseminated more quickly and exponentially via the Internet than it previously was through the print media.  The Internet enables the distribution of Court decisions within hours of delivery rather than the months that it took for cases to be edited and printed in law reports. The information flow normally experienced, where the student or lawyer would go to a library to access information, is reversed – the information is now delivered to a local device.

New information can be manipulated more quickly by virtue of the dynamic document and the opportunity for participation. In the legal environment new cases and new developments may be publicised more rapidly, but by the same token, unlike a paper document, a digital document “bears little evidence of its source or author”.[83] In addition, greater credit may be given to “image-based” formats, but Rumsey and Schwartz do not believe that “non-imaged” documents should receive the same treatment as paper.[84]

Continuing change challenges even the certainties that law librarians try to ascribe to certain formats – new information is constantly replacing old information and old information appears to be less and less relevant to the solution of modern problems. Our legal system, particularly in terms of the development of principle, has moved at a measured pace.  The availability of large amounts of new information and the change in perspective that that new information introduces creates challenges for a system that is accustomed to looking backwards towards precedent and that moves at a sedate pace.

The development of precedent is characterised by what could be referred to as landmark decisions or leading cases.  These settle a particular point of law.  They also add to the general authority of judicial decisions because they appear to settle the question with finality.  The digital environment provides us with more material in more recent cases that more swiftly modify the broad statements of principle contained in landmark decisions.

Other qualities also come into play, many of which challenge those of the print paradigm upon which the law relies. In my discussion about delinearisation of information I made reference to the fact that the primary text may no longer be considered the principal source of information, that text could be considered within a wider informational context, and that this could change the linear approach to analysis. This quality may underlie a challenge to a strict form of analysis based upon a previously accepted line of cases.

Information persistence and endurance are what the law requires for its certainty – something that the Print Paradigm has been able to give it – but a tension arises with dynamic information which constantly develops, grows and changes. This is associated with the quality of volume and capacity. The storage capacity of computer systems is so large as to be almost unlimited.  At a time when print libraries are nearing capacity with the amount of printed information available, digital systems can be seen as a blessing, but they also pose serious challenges to established legal thinking. One of these lies in the identification of the necessity for a critical mass of decisions for the development of a precedent-based principle.[85] The quality of high volume of decisions challenges this. The information is available, searchable and retrievable, and because of the higher volumes of caselaw available the minutiae of fact situations or the nuances of legal interpretation become apparent, eroding the earlier certainties that were present with the “critical mass” of information that was a characteristic of precedent.

The Digital Revolution and the Legal Process

In 1996, in his book The Future of Law: Facing the Challenges of Information Technology,[86] Richard Susskind suggested that we are between the phases of print and information technology and, in essence, in a transitional phase[87] similar to the co-existence of the scribal and print cultures in the law in the late Sixteenth and early Seventeenth Centuries.  He was of the view, correctly in my opinion, that in the days of the oral tradition and the scribal culture change was a rarity.  In the Digital Paradigm, information is dynamic and subject to regular alteration rather than remaining in static form, and from this arise a number of consequences or features. Susskind described them as unfortunate, although it may be that there is a certain inevitability arising from the qualities of information in the Digital Paradigm.

Hyper-regulation, the Internet and Too Much Law

One of the features identified by Susskind is what he describes as the hyper-regulated society – a phenomenon which, I suggest, is driven by the qualities of the Digital Paradigm.  Being hyper-regulated means that there is too much law for us to manage, and our current methods for managing legal materials are not capable of coping with the quantity and complexity of the law which governs us.  Another of the difficulties pointed to by Susskind is that hyper-regulation is aggravated by difficulties in the promulgation or notification of legislation and case law.  One of Lon Fuller’s requirements in The Morality of Law was that rules citizens are expected to observe must be publicised and made available; a failure to do so results in bad or, at worst, no law.

Although the digital environment has not been solely responsible for the hyper-regulation described by Susskind, certainly the ability to generate large quantities of text has moved from the print shop to the photocopier, then onwards to the word processor and high-capacity laser printer, and now to digital space via the Internet.  Technology allows text to be transmitted and disseminated at minimal cost.

The hyper-regulation that Susskind describes is further evidenced by the large volume of legal material that was once available in print but is now available, to an even greater extent, in electronic form from the Courts and from legislatures. Whilst one should resist the suggestion that such volume is overwhelming, certainly there is more material available for consideration. Pressures, particularly upon legislators and the Judiciary, to perform within set time frames simply mean that much potentially relevant material may well be overlooked.  A further consequence of the hyper-regulation described by Susskind is the wide variety of information resources provided by new technologies.  For example, television is no longer limited to a small number of network channels; cable TV, satellite systems, Internet TV and on-line content distribution services such as Hulu and Netflix allow an almost infinite number of sources of information.  The television is an Internet portal for the home.

Thus, we have before us a huge selection of informational alternatives.  The information received from each source may be different in appearance and content from the information received by others.  Furthermore, the nature and content of information will change.  The day may not be too far off when information from the Courts in terms of decided cases may become akin to watching a breaking news story on YouTube or social media as more and more information becomes available from the Courts and is disseminated or becomes the subject of commentary.

Thus, from hyper-regulation and the vast amount of material provided by the digital environment, which, by its nature, is dynamic and subject to rapid and constant change, a lawyer or Judge searching for relevant cases now has more material to sift through, more detail to assimilate and more flexibility in terms of potential arguments or outcomes.  The consequence of this could be to change the nature of legal argument from what could be described as a linear progression through a line of cases in the development of a precedent to the point where the authority of those cases is diminished or negated by the wealth of material available.  Holdsworth comments that a system of precedent:

“Will not work so satisfactorily if the number of Courts, whose decisions are reported, are multiplied.  The law is likely to be burdened with so great a mass of decisions of different degrees of excellence that its principles, so far from being made more certain by the decisions of new cases, will become sufficiently uncertain to afford abundant material for the infinite disputations of professors of general jurisprudence.  A limitation is needed in the number of reported cases …  English lawyers have hardly realised that it was a condition precedent for the satisfactory working of our system of case law.”[88]

The more cases that are available, the greater the flexibility in the creation of a legal argument; but the adverse consequence is that, in terms of developed principle, the link with precedent becomes more ephemeral. The delinear approach may introduce an alternative to the linear progression that has marked the development of principle.

The Internet has made more legal information available to more people more immediately than at any other time in human history. Although this fulfils the philosophical and societal ideals of bringing law to the people and providing for a fully informed populace, the implications for informational reliability and for precedent are substantial.

Internet availability of judgements at a number of levels means that decisions are accessible everywhere, cross-jurisdictionally. The prohibitions on the citation of unpublished opinions in the United States may well crumble in the face of this technological revolution.

It is clear that the increased availability of and access to judicial pronouncements, and the number of opinions and judgements that are available in addition to traditional hard-copy reported decisions, has serious ramifications both for the precedential value of those decisions and for the concept of precedent itself. There is no doubt that the law is unable to resist the tides of change.  The question is: during this transitional period, how may the law accommodate change and maintain its integrity in providing the rules that regulate the activities and relationships of citizens within the community?  The law traditionally looks back to precedent, but the digital environment means that the depth of field is shorter.  The problem is this: with the vast amount of material that is available, how can one maintain a precedent-based system that must rely upon dynamic, changing material rather than the reliability provided by the printed law report?

In addition, an overly large volume of decisions may mean that cases become determined not on a carefully refined and developed legal principle, but on factual similarities. The authority of precedent in the past has depended upon the fact that the legal process does not rapidly modify reported decisions.[89]

The Twilight of Precedent?

It seems that there may be two possible ways forward. One is based on the concept of functional equivalence. This solution focuses upon the content layer of the Digital Paradigm and effectively ignores the fact that its qualities make the nature of information and its communication different from what went before. In addition, content itself lacks the fixity or stability of printed text. It may be suggested that a number of rules might be developed by which the challenges posed by digital qualities may be met. Such an approach artificially tries to maintain a reality that is no longer present.

By the same token technological co-existence will allow the status quo to continue, at least for a reasonable period of time, in the same way that manuscript and scribal habits continued well past the introduction of the printing press. The pace of change, as suggested by Susskind, will overtake co-existence within a generation or so, if that, rather than over a period of centuries.

The real test will probably come as lawyers are drawn from the ranks of those commonly described as Digital Natives[90] – those who have grown up in the Digital Paradigm and know no other means of information communication apart from device driven digital ones.

A way in which functional equivalence may be maintained, however, is if the technology itself may provide an answer – a technological solution to the problems that digital qualities pose. This may be termed the “Charles Clark” solution deriving from his oft-quoted solution to challenges to intellectual property in the Digital Paradigm – “the answer to the machine is in the machine.”[91]

But what if the machine does not provide an answer and digital qualities do force a reassessment of precedent as a result of the challenges posed by the qualities of digital information systems?  What shape will precedent and the common law then take? Will the detailed principles developed by precedent become a series of broadly stated principles rather than the refined and intricate intermeshing of decisions that exists at present? Will the common law as we understand it wither, or perhaps be replaced by a rule-based system similar to that of some European countries? Given the suggestion that print sources incline one towards legal principles while keyword searches are more apt to generate groups of cases based upon similarities of fact,[92] will litigants, frustrated by lack of clarity, consistency and predictability of outcome where judges rely only upon fact-specific outcomes, turn to arbitrators and mediators who are quicker, cheaper and less troubled by the procedural arcana of a Court?

It may well be that by travelling the digital path (and that journey, once started, cannot be retraced) we are irrevocably committed to a course that will change the doctrine of precedent as we know it.


[1] Marshall McLuhan Understanding Media: The Extensions of Man (Sphere Books, London 1967)

[2] Ibid.

[3] Ibid. p. xxi

[4] Elizabeth Eisenstein The Printing Press as an Agent of Change  (Cambridge University Press, Cambridge, 1979) 2 Vols. Reference will be made to the 1 volume 1980 edition; Elizabeth Eisenstein, The Printing Revolution in Early Modern Europe (Cambridge University Press (Canto), Cambridge, 1993).

[5] Above n. 4

[6] Eisenstein The Printing Press as an Agent of Change above n. 4 p. 159.

[7] Thomas Jefferson observed that because a larger number of books were printed than available in manuscript, the chances of more copies surviving were greater. Thomas Jefferson to Ebenezer Hazard, 18 February 1791 in M.D Peterson (ed) Thomas Jefferson: Writings (Library of America, New York, 1984) p. 973. Jefferson also expounded on the preservative power of print in a letter to George Wythe dated 16 January 1796 stating of his researches into the laws of Virginia  “our experience has proved to us that a single copy, or a few, deposited in MS in the public offices cannot be relied on for any great length of time.” see p.1031

[8] See for example the debate between Adrian Johns and Elizabeth Eisenstein – Adrian Johns, “How to Acknowledge a Revolution” (2002) American Historical Review 106; Elizabeth Eisenstein, “An Unacknowledged Revolution Revisited”  (2002) American Historical Review 87; Elizabeth Eisenstein “A Reply – AHR Forum” (2002) American Historical Review 126. See also Adrian Johns, The Nature of the Book (University of Chicago Press, Chicago, 1998); David McKitterick, Print, Manuscript and the Search for Order 1450 – 1830 (Cambridge University Press, Cambridge, 2003); Eric J Leed “Elizabeth Eisenstein’s The Printing Press as an Agent of Change and the Structure of Communications Revolutions ”  (Review) (1982) 88 American Jnl of Sociology 413; Diederick Raven,“Elizabeth Eisenstein and the Impact of Printing” (1999) 6 European Review of History 223; Richard Teichgraeber “Print Culture” (1984) 5 History of European Ideas 323; William J Bouwsma “The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe” (Review) (1979) 84 American Historical Review 1356; Jack Censer “Publishing in Early Modern Europe” (2001) Jnl Social History p.629; Charles B Schmitt “The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe” (1980) 52 Jnl Modern Hist 110; Carolyn Marvin “The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe” (Review) (1979) 20 Technology and Culture 793 together with a recent collection of essays examining Eisenstein’s theory and its impact Sabrina Alcorn Baron, Eric N Lindquist, Eleanor F Shevlin  (eds) Agent of change : print culture studies after Elizabeth L. Eisenstein ( University of Massachusetts Press, Amherst, 2007)

[9] Eisenstein The Printing Press above n.4 generally p. 43 et seq; especially p.71 et seq. The Printing Revolution above n. 4 p. 42 et seq.

[10] Early print fonts, especially in legal works, imitated scribal forms but later gave way, in the seventeenth century, to the more legible roman font. Some of these early styles were difficult to read, even for the highly literate. This was the case with early legal texts in England which were printed in “black letter”. The use of the Roman font did not become common until the early seventeenth century. For an example of “black letter” see Totell’s published Year Books or William Fulbecke  Directive or Preparative to the Study of the Lawe (Thomas Wight, London 1600). The title page and introduction are in Roman. The text is in black letter. The difference is immediately apparent. On the other hand Michael Dalton The Countrey Justice (Society of Stationers, London 1613) is printed in Roman and is more easily readable than the black letter font.

[11] Note however the use by Tottel of mixed sheets from different printings. J.H. Baker “The Books of the Common Law” in Lotte Hellinga and J.B. Trapp (eds) The Cambridge History of the Book in Britain (Vol 3) (Cambridge University Press, Cambridge 1999)    p. 427 et seq. Thus some of his printings were compilations.  This suggests that the economics of printing may have contributed to circumstances that might have challenged the uniformity that standardisation required.

[12] Eisenstein The Printing Press above n. 4 p 80 et seq

[13] Ibid. p. 81.

[14] If errors were not detected print could facilitate the dissemination of false, incorrect or misleading information, despite Eisenstein’s claims that such problems could be met by error trapping which presupposed subsequent printings.

[15] The presence of errata did not guarantee their use. See for example the manual amendment by Lambarde to the 1569 printing of Bracton, Sheffield University Library RBR Q 347(B) Folio 115V. The errata addressed the very mistake that Lambarde noted in hand, suggesting that he had read the text without noting the errata.

[16] The Compleate Copyholder (T. Coates for W Cooke, London,1641) Wing C4912. For a further example see Douglas Osler “Graecum Legitur: A Star is Born” (1983) 2 Rechtshistorisches Journal 194 which demonstrates the falsehood perpetuated by the founder of legal humanism, Alciatus, that he had consulted non-existent manuscripts as well as a surviving manuscript of the Digest (the Florentine Codex) when in fact he had not.

[17] Eisenstein The Printing Press above n.4 p. 103.

[18] The issue of error was a matter which had an impact upon the reliability of printed legal material.

[19]  Eisenstein The Printing Press above n. 4 p. 107 et seq.

[20] Cited by J.A. Cochrane Dr Johnson’s Printer:The Life of William Strahan (Routledge and K Paul, London, 1964)  p.19 at n.2.  See also Eisenstein, The Printing Press above n.4 p. 112.

[21] (John Rastell, London, 1531) STC 9521.

[22] Tessa Watt Cheap Print and Popular Piety: 1550 – 1640  (Cambridge University Press, Cambridge, 1991) p. 7. In 1600 52% of East Anglian tradesmen and craftsmen could sign their names compared with 80% in London.  By the 1640’s roughly 30% of adult males in rural England could sign their names. David Cressy Literacy and the social order. Reading and writing in Tudor and Stuart England (Cambridge, Cambridge, 1980) p. 72. The upper classes, both nobility and gentry, men and women were on the whole literate in both English and French and often in Latin as well. John Feather A History of British Publishing (Croom Helm, London,  1988) p. 20.

[23] This does not automatically mean that there was understanding of some of the technical language. The ability to read does not necessarily import deep understanding.

[24] I.S. Williams, “He Creditted More the Printed Booke – Common Lawyers Receptivity to Print 1550 – 1640”  (2010) 28 Law and History Review 38.

[25] Lisa Gitelman “Introduction: Media as Historical Subjects: in Always Already New: Media, History and the Data of Culture (MIT Press, Cambridge, 2008) p. 7.

[26] I acknowledge that this is a very bald assertion. The argument is a little more nuanced and involves a consideration of the use of the printing press by Cromwell, the significant increase in legislative activity during the course of the English Reformation, the political and legal purpose of statutory preambles, the advantages of an authoritative source of law in printed form for governing authorities, all facilitated by underpinning qualities of print such as standardisation, fixity and dissemination. In brief, the argument is this. Sir Thomas Egerton’s Discourse on the principles of statutory interpretation (S.E. Thorne (ed) A Discourse on the Exposition & Understanding of Statutes with Sir Thomas Egerton’s additions (Selden Society, London, 1942)) was written in the middle part of the sixteenth century and heralded what was to become an important aspect of legal analysis – legal hermeneutics or the analysis of the language of a text – which depended first upon the textualisation of the law and secondly upon the availability of textualised law in multiple copies. Thorne states that the history of statutory interpretation starts in the sixteenth century, after Henry VIII’s legislative outburst accompanying the break with Rome. Print in and of itself seems to have had little causative effect upon the legislative and proclamatory activity of the Henrician Reformation; nevertheless the new technology was employed as a tool to disseminate the royal message. Thus detailed, textualised law, printed by the holder of the Royal Printing monopoly, became available in multiple copies, enabling a greater consideration of the legislation and, importantly, its purpose as stated in the preamble. Assuming an absence of print, and assuming the historical progress of the Henrician Reformation with Cromwell’s legislative programme, it is possible that hermeneutics may have developed – probably more slowly – and it would have relied on manuscript materials.  It must be remembered that some instances of conflict between print and manuscript copies of legislation were resolved in favour of the latter, but in either event the text, whether in manuscript or in print, required scrutiny.  Print made this process easier for the lawyers and Judges in that the wider availability of printed material allowed for closer and lengthy engagement with the text.  Thus it can be suggested that print contributed to the development of hermeneutics, although that development may have been one of the unintended consequences of the State’s interests in the printing of statutes.

Whilst print may not have had a direct impact upon changes in the nature of statutes themselves, it may have enabled them, and it certainly must have had an impact upon the distribution, consideration and analysis of statutory material, providing a fresh context within which statutes might be considered. Even though there may have been occasions where a manuscript version of a text was preferred to a printed one, the fact that such a comparison was taking place demonstrates that printed material was occupying an important place in the spectrum of legal information.

[27] Frederick Schauer & Virginia J. Wise “Nonlegal Information and the Delegalization of Law” (2000) 29 J Legal Stud 495 at pp. 512, 514

[28] Hence I have redefined it – see below under Exponential Dissemination

[29] Jim Dator, “Judicial Governance of the Long Blur” (2000) 33  Futures page 181 – 197

[30] For an examination of the changes in the legal profession and a possible future see Richard Susskind The End of Lawyers:Rethinking Legal Services (Oxford University Press, Oxford 2008) and Tomorrow’s Lawyers (Oxford University Press, Oxford 2013).

[31] In fact a misquote that has fallen into common usage from the movie Field of Dreams (Director and Screenplay by Phil Alden Robinson 1989). The correct quote is “If you build it he will come” (my emphasis) http://www.imdb.com/title/tt0097351/quotes (last accessed 4 November 2013)

[32] Vannevar Bush “As We May Think” The Atlantic, 1 July 1945. http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/?single_page=true

[33] For further discussion, especially in the context of reading, see Maryanne Wolf Proust and the Squid: The Story and Science of the Reading Brain (Harper Collins, New York 2007); Neil Postman The Disappearance of Childhood (Vintage/Random House, New York 1994); Walter Ong Orality and Literacy: The Technologising of the Word (Routledge, Oxford 2002); Neil Postman Amusing Ourselves to Death: Public Discourse in the Age of Showbusiness (Penguin Books, New York 1986); Sven Birkerts The Gutenberg Elegies: The Fate of Reading in an Electronic Age (Faber, Winchester MA, 1994); Sven Birkerts “Resisting the Kindle” (The Atlantic March 2009); and my discussion in “Why Do Jurors Go On-Line” The IT Countrey Justice July 27 2012 https://theitcountreyjustice.wordpress.com/2012/07/27/why-do-jurors-go-on-line/ (last accessed 7 November 2013)

[34] See for example Neacsu, Dana, “Google, Legal Citations, and Electronic Fickleness: Legal Scholarship in the Digital Environment” (June 2007). Available at SSRN: http://ssrn.com/abstract=991190 or http://dx.doi.org/10.2139/ssrn.991190; Rumsey, Mary “Runaway Train: Problems of Permanence, Accessibility and Stability in the Use of Web Resources in Law Review Citations” (2002) 94 Law Library Jnl 27; Susan Lyons “Persistent Identification of Electronic Documents and the Future of Footnotes” (2005) 97 Law Library Jnl 681; Zittrain, Jonathan and Albert, Kendra, Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citations (September 21, 2013). Available at SSRN: http://ssrn.com/abstract=2329161 or http://dx.doi.org/10.2139/ssrn.2329161  Adam Liptak “In Supreme Court Opinions, Web Links to Nowhere” New York Times 23 September 2013 http://www.nytimes.com/2013/09/24/us/politics/in-supreme-court-opinions-clicks-that-lead-nowhere.html?smid=re-share&_r=1&  (last accessed 22 October 2013) See also the discussion below under the headings “Information Persistence”,” Format Obsolescence” and the “Non-coherence of Information.”

[35] Ibid. Neacsu

[36] Ibid. P. 12 – 15

[37] See David Harvey “Using Digital Tools to Give Reasons” 2011 Paper presented at the Courts Technology Conference 2011;  The IT Countrey Justice June 8 2012 https://theitcountreyjustice.wordpress.com/2012/06/08/using-digital-tools-to-give-reasons/ (last accessed 11 November 2013) http://www.scribd.com/doc/96370049/The-Digital-Decision-Paper (last accessed 11 November 2013)

[38] Which may arise for any one of a number of reasons, many of which involve relocation of information rather than its total removal.  For a counter-point to link rot see the discussion on Exponential Dissemination below.

[39] Although for the other side of this coin see the quality of format obsolescence

[40] The recent disclosures made by the soi-disant group “Roastbusters” on Facebook about their sexual exploits with young women are an example of dissociative enablement. In the “real world” they may have communicated with a limited number of individuals within their peer group. Now they communicate their behaviour – and the consequent embarrassment of their victims – to the world. For example see “Roast Busters: Over 63k call for PM to take action” NZ Herald 11 November 2013 http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=11154984 (last accessed 11 November 2013). This article is one of a large number published since the story broke on TV3 news on 3 November 2013.

[41] See William Bernstein Masters of the Word (Atlantic Books, London 2013).  Especially chapter 8 “The Comrades Who Couldn’t Broadcast Straight” and pp.263 and following.

[42] Information flow in communication is important in analysing the impact of information. One of the most common errors in descriptions of the Internet is the suggestion that one should “go to” a certain address. The reality is that the enquirer goes nowhere. The information in fact flows in a direction opposite to that suggested, in that the website is downloaded to the enquirer’s computer. This is perhaps an obvious and particularly egregious example of a misunderstanding of information flows, especially in the technological sense. The practice of law is entirely about information flows, and this is an important element in determining, for example, the culpability of a juror for extra-curial communication. See “The Googling Juror: The fate of the Jury Trial in the Digital Paradigm” https://theitcountreyjustice.wordpress.com/2012/09/13/the-googling-juror-the-fate-of-the-jury-trial-in-the-digital-paradigm/ (last accessed 5 November 2013)

The “information flows” approach was developed by Professor Ian Cram. See Ian Cram “Twitt(er)ing Open Justice? (or threats to fair trials in 140 characters) – A Comparative Perspective and A Common Problem” (Unpublished paper delivered at Justice Wide Open Conference, City University London, 29 February 2012) see http://www.city.ac.uk/centre-for-law-justice-and-journalism/projects/open-justice-in-the-digital-era (last accessed 4 April 2012). I am indebted to Professor Cram for providing me with the paper that he presented at the City of London Conference and for his analysis of information flows. The full paper may be found at http://www.scribd.com/doc/97591724/Justice-Wide-Open-Ian-Cram-Twitt-er-ing-Open-Justice (last accessed 23 August 2012)

[43] The disclosure of Mayor of Auckland Len Brown’s affair with Bevan Cheung broke not via  newspapers, radio or television but via the Whaleoil blog – http://www.whaleoil.co.nz/2013/10/exclusive-len-brown-sordid-affair-run-town-hall/#axzz2jhkdzHRB (last accessed 5 November 2013)  Similarly the disclosure in October 2012 that Ministry of Social Development information kiosks were insecure was researched and exposed by Kenneth Ng on the Public Address blog http://publicaddress.net/8246 (last accessed 5 November 2013)

[44] This demonstrates that many of the qualities of the Digital Paradigm are interrelated.

[45] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ in Stephen Mason (gen ed) Electronic Evidence (3rd edn, LexisNexis Butterworths, London 2012) 2.05.

[46] Ibid.  2.06.

[47] [2004] 3 NZLR 748 at [110].

[48] E-Discovery demonstrates the interrelationships of Digital Paradigm qualities in that one of the basic problems within the discovery process is not only the “non-coherence” of information, but the volume of information spread across a vast array of platforms.

[49] Marshall McLuhan, Understanding Media above n.1

[50] File | Options | Trust Center | Trust Center Settings | File Block Settings allows the user to change the settings to enable opening files in earlier formats. Note that, if the user wants completely unrestricted access to these files, he or she needs to clear the check boxes for them so that whichever of the radio buttons is selected at the bottom does not apply to them. Alternatively, the user can leave them checked and opt to allow opening them in Protected View, with or without allowing editing.

[51] Some converters no longer work in new operating system environments. For example, converters designed for the Windows 32-bit system may be rendered obsolete by virtue of the fact that Windows 7 64-bit uses a different file path structure and registry keys.

[52] Nicholas Carr The Shallows: How the Internet is changing the way we think, read and remember. (Atlantic Books, London 2010). See also Nicholas Carr “Is Google Making Us Stupid” Atlantic Magazine 1 July 2008 http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/  (last accessed 31 May 2013)

[53] See especially Susan Greenfield “Living On-line is Changing Our Brains” New Scientist, 3 August 2011 http://www.newscientist.com/article/mg21128236.400-susan-greenfield-living-online-is-changing-our-brains.html (last accessed 31 May 2013). For this and for her assertions of “internet addiction” she has been criticised by Dr. Ben Goldacre for claiming that technology has adverse effects on the human brain without having published any research, and for retracting some claims when challenged. Goldacre suggested that “A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper”: Ben Goldacre, “Serious Claims Belong in a Serious Scientific Paper” The Guardian 21 October 2011 http://www.guardian.co.uk/commentisfree/2011/oct/21/bad-science-publishing-claims (last accessed 31 May 2013)

[54] Untangling the Web: What the Internet is Doing to You  (Faber, London 2013). Presentation by Aleks Krotoski at the Writers and Readers Festival, Auckland 19 May 2013. Personal discussion between the author and Aleks Krotoski 19 May 2013.

[55] Above n. 41 esp at p.323 and following.

[56] Ibid. P. 323-4.

[57] Ibid. p. 324.

[58] Ibid p. 324-6

[59] Sometimes referred to as “The Frankenstein Complex”

[60] See above for some of the qualities of digital information technologies.

[61] Jim Dator “Judicial governance of the Long Blur,” Futures , Vol. 35, No. 1, January 2001; Frederick Pollock and Frederick William Maitland The History of English Law Vol 1 2nd ed. (Cambridge University Press, Cambridge 1968) p. 460

[62] Ethan Katsh The Electronic Media and the Transformation of Law New York Oxford University Press 1989

[63] Ibid. page 12

[64] Michael T Clanchy – From Memory to Written Record: England 1066 to 1307 (Oxford, Blackwell, 1993).

[65] T Ellis Lewis, “History of Judicial Precedent”  (1930) 46 Law Quarterly Review 207.

[66] William Markby “Elements of Law” in Readings on the History and System of the Common Law in R Pound and T F T Plunknett (eds) (Rochester, NY; Lawyers Co-Operative 1927) p.125.

[67] Entick v Carrington [1765] EWHC KB J98; 19 Howell’s State Trials 1029 (1765).

[68] Quoted in Holdsworth, William, Some Lessons from our Legal History New York, Macmillan, 1928, p.18

[69] Schauer, F. Precedent (1987) 39 Stanford L.R. 571, 595 – 60.

[70] Some Lessons From Our Legal History above n. 67  page 19.

[71] Legal Realism: Its Cause and Cure, (1961) 70 Yale Law Journal 1037.

[72] Diana Botluk, Evaluating the Quality of Web Resources http://www.llrx.com/columns/webquality.htm published 3 April 2000 (last accessed 11 November 2013).

[73] Ibid.

[74] Katsh, above n. 62.

[75] Robert C Berring “ Collapse of the Structure of the Legal Research Universe: The Imperative of Digital Information” (1994) 69 Wash L. Rev. 9 at pp. 11 and 14

[76] Levinson, Paul Digital McLuhan – A Guide to the Information Millennium (Routledge, London, 1999). According to McLuhan’s laws of media, an environment obsolesces or reverses at the moment of fever pitch. The information environment, during its moment of superabundance, becomes obsolescent. It has passed into cliché, if we can see it at all. “When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We see the world through a rear-view mirror. We march backwards into the future.” Marshall McLuhan and Quentin Fiore The Medium is the Massage: An Inventory of Effects (Gingko, Berkeley 2001)

[77] Ibid.  p.176 – 177

[78] Ian Williams “He credited more the printed booke” above n. 24

[79] The Bluebook: A Uniform System of Citation (Columbia Law Review Ass’n et al. eds., 17th ed. 2000). See also the reasons for an early reluctance to cite Internet materials, including a lack of confidence in their reliability and accuracy. Many Web sites are transient, lack timely updates, or may have had their URLs changed. Thus many Internet sources … did not consistently satisfy traditional criteria for cite-worthiness. See Colleen Barger “On the Internet Nobody Knows You’re A Judge: Appellate Court’s Use of Internet Materials” (2002) Jnl Appellate Practice and Process 417 at 425.

[80] Ibid. 19th ed.

[81] Ethan Katsh Electronic Media and the Transformation of Law: Law in a Digital World New York, Oxford University Press, 1995. Richard Susskind, The Future of Law: Facing the Challenges of Information Technology (Oxford University Press Oxford 1996).

[82] Moore’s law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper: Gordon Moore “Cramming More Components onto Integrated Circuits” http://www.cs.utexas.edu/~fussell/courses/cs352h/papers/moore.pdf (last accessed 5 November 2013). His prediction has proven to be accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.

[83] Mary Rumsey and April Schwartz “Paper vs Electronic Sources for Law Review Cite Checking: Should Paper be the Gold Standard?” (2005) 97 Law Libr J 31 at p.42.

[84] Ibid. At 46.

[85] See the comments of Holdsworth and Gilmore above notes 69 and 70.

[86] Richard Susskind The Future of Law: Facing the Challenges of Information Technology (Oxford University Press, Oxford 1996)

[87] Ibid. p. 91

[88] “Some Lessons from our Legal History” op. cit  page 22

[89] Katsh, Ethan The Electronic Media and the Transformation of Law  New York; Oxford University Press; 1989 p.46

[90] The terms “digital native” and “digital immigrant” were first used by Marc Prensky in an article which suggested that students who were born into the Internet Age were no longer the people the educational system was designed to teach: Marc Prensky “Digital Natives, Digital Immigrants” (2001) 9 On the Horizon 1 http://www.emeraldinsight.com/journals.htm?issn=1074-&121&volume=9&issue=5&articleid=1532742&show=pdf; http://www.marcprensky.com/…/prensky%20-%20digital%20natives,%20digital%20immigrants%20-%20part1.pdf  (last accessed 23 February 2012).  For a brief introduction to the development of Prensky’s theory see Wikipedia “Digital Native” http://en.wikipedia.org/wiki/Digital_native (last accessed 23 February 2012)

Prensky spoke of the issues confronting education in the digital paradigm. He suggested that there was a growing culture of people who had grown up knowing nothing but the Internet, digital devices and seeking out information on-line. This group he called “Digital Natives” – those born after 1990. He contrasted this class with “Digital Immigrants” – those who had developed the information seeking and uses before the advent of the Internet. Digital Immigrants used digital communications systems but their thought processes were not as committed to them as Digital Natives. Although they could speak the same language as the Digital Natives, they had a different accent that derived from an earlier information paradigm.

Digital Immigrants have an approach to information that is based upon sequential thinking, single tasking and limited resources to enable communication, all underpinned by the fixity of text. For the Digital Immigrant text represents finality. A book is not to be reworked, and the authority of a text depends upon its finality. Information is presented within textual constraints that originate in the Print Paradigm.

Digital Natives inhabit a different information space. Everything is “multi” – multi-resource, multi-media, multi-tasking, parallel thinking. Information for the Digital Native may in its first instantiation be text but it lacks the fixity of text, relying rather on the dynamic, fluid, shifting qualities of the digital environment. Text does not mean finality. Text is malleable, copyable, moveable and text, like all other forms of information in the digital space, is there to be shared.

In the final analysis, the differences between Digital Immigrants and Digital Natives can be reduced to one fundamental proposition – it’s all about how we process information. For Digital Natives the information resources are almost without limitation, and the Digital Native mind shifts effortlessly between text, web-page hypertext links, YouTube clips, Facebook walls, Flickr and Tumblr, the terse abbreviated tweet or text message – and all of it not on a desktop or a laptop but on a handheld smartphone.

[91] Charles Clark ‘The Answer to the Machine is in the Machine’ in P. Bernt Hugenholtz (ed.), The Future of Copyright in a Digital Environment: Proceedings of the Royal Academy Colloquium organized by the Royal Netherlands Academy of Sciences (KNAW) and the Institute for Information Law (Amsterdam, 6-7 July 1995) (Kluwer Law International, The Hague, 1996).

[92] F Allan Hanson “From Key Numbers to Key Words: How Automation Has Transformed the Law” (2002) 94 Law Libr J 563 at 583.

Taxing Overseas On-Line Purchases

The issue of whether or not sales tax – known in New Zealand as Goods and Services Tax (GST) – should apply to overseas on-line purchases has hit the political radar. Articles from the New Zealand Herald addressing aspects of the debate may be seen here, here, here, here, here and here. Public responses to the proposal may be seen here.

As fortune would have it I had the pleasure of assessing an excellent LLB honours dissertation on this very topic. The dissertation discussed what the author described as a de minimis rule – Customs and the IRD have set a threshold of $400.00 beneath which transactions do not attract GST; transactions over $400 do. The basic reason for the de minimis figure is that, below it, the costs of collection exceed the tax collected, providing a nil or negative return for the tax collector.
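To make the economics of the de minimis figure concrete, here is a minimal sketch in Python. The 15% GST rate and the $400 threshold come from the discussion above; the per-parcel collection cost is a hypothetical figure of my own, chosen only to illustrate why collecting GST on low-value parcels can produce a nil or negative return.

```python
# Illustrative sketch only. GST_RATE and DE_MINIMIS reflect the figures discussed
# above; COLLECTION_COST is an assumed, hypothetical per-parcel cost of assessment.
GST_RATE = 0.15
DE_MINIMIS = 400.00
COLLECTION_COST = 45.00

def gst_charged(value_nzd: float) -> float:
    """GST that would be charged on an imported item at 15%."""
    return round(value_nzd * GST_RATE, 2)

def net_return_if_collected(value_nzd: float) -> float:
    """What the collector nets if it insists on collecting: tax minus collection cost."""
    return round(gst_charged(value_nzd) - COLLECTION_COST, 2)

for value in (100.00, 250.00, 400.00, 800.00):
    status = "GST collected" if value > DE_MINIMIS else "below de minimis - no GST"
    print(f"${value:>7.2f} item: GST ${gst_charged(value):>6.2f}, "
          f"net to collector ${net_return_if_collected(value):>7.2f} ({status})")
```

On these assumed numbers the collector would lose money on anything much below about $300, which is the rationale for the threshold; where the break-even point actually sits depends entirely on the true cost of collection.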

New Zealand retailers have been complaining for some time about the perceived disadvantage that they face from off-shore on-line retailers. One argument that is advanced, as may be seen from the articles referred to above, is that on-line transactions are more attractive because they are GST free (although in some cases the GST-free advantage is cancelled out by shipping costs).

There are a number of issues that arise in this discussion. At the moment there appears to be little published hard evidence to support retailers’ claims, and it would be helpful to the debate if that were made available. It would also be interesting to see what impact on-line sales within New Zealand are having upon “bricks and mortar” or “High Street” retailers. The playing field there is level in that both on-line and “High Street” retailers have to collect GST. There could be many other reasons apart from GST why the on-line retailer is more competitive than the “High Street” one. It would be interesting to see some evidence of these facets of the argument.

It may be seen from just that observation that the discussion is a very nuanced one, and the dissertation that I assessed dealt only with the legal and tax policy implications of the debate. I would urge the author, if he is reading this, to publish the dissertation either in a law journal or on-line, and I would welcome any comments that he wishes to make on this post.

I don’t presume to be an expert upon tax policy and the economics that surrounds it. But in this post, rather than engage or take a side in the debate, I should like to try to identify some of the issues that may need to be considered and that demonstrate that this is a nuanced and complex subject. I do this within a theme that has been present in many of my posts on this blog – the properties of the Digital Paradigm and how they must be considered in developing legal solutions in the Digital environment. I don’t express a view on the merits of either side, nor should this post be interpreted in that way; I simply try to identify some of the issues within the context of the Digital Paradigm.

1. The New Paradigm

One of the qualities of the Digital Paradigm is continuing disruptive change. After the printing press had bedded in and society had become accustomed to the changes that the press enabled in the communication of information, there was a time gap before the next information technology innovations such as telegraphy, the wireless and radio. There was time to adapt – take a breath – reassess certainties. In an environment of continuing disruptive change that time for a pause – David Lange’s “cup of tea” – no longer exists. Retailers face doing business in a paradigm where the only certainty is uncertainty and the only predictable thing is unpredictability, because change is a constant. I am sure that retailers are well aware of this. But it is a reality that must be faced.

One example of the way that change poses a challenge lies in the suggestion that if credit card transactions provide some sort of foundation for the applicability of GST to on-line transactions, consumers will merely move to on-line payment facilities such as PayPal. Presumably a PayPal transaction will in some way mask the nature of the transaction when it appears on the credit card. This demonstrates that the Digital Paradigm provides a number of alternative ways of effecting the transaction, which may add layers of difficulty to imposing a GST regime on on-line transactions – an issue that would need to be addressed. And, of course, in an environment of continuing disruptive change, who is to say that other payment methods may not be in development or may already have been developed? Bitcoin is one example that comes to mind.

So the first issue will be to consider making any rule flexible and adaptable enough to apply to a constantly changing environment.

2. Consistency and Fairness of Application

Any tax regime must be applied consistently and fairly, and the less complex it is the better. One of the major changes in the New Zealand tax overhaul of the 1980s was to rid the system of the myriad anomalies that clogged and complicated it.

The Overseas Traveller

Let us take the following hypothetical example to demonstrate an inconsistent approach to the applicability of GST to on-line transactions. Assume that GST applies to all on-line transactions irrespective of value. I travel on business to confer with and advise a client in Hong Kong. Because this is a business trip the costs of travel and accommodation are tax deductible, because I will pay tax and GST on the fee that I charge the client (assuming a normal charging regime based in New Zealand). I conclude my business and go for a walk. I pass a camera shop and see a remote wireless device for my camera. The price is $400.00. I purchase the item and return to New Zealand. Taxes payable in Hong Kong (if any) are refunded to me at Hong Kong International Airport at Lantau prior to departure. I need not declare the item, for it is for my personal use and is under the $700.00 threshold above which I am required to declare goods upon entering the country. I pay no GST.

The On-Line Purchaser

I go on-line and locate an on-line camera store operating from Hong Kong. I see that it has a remote wireless device for my camera. The cost of the item is $400.00 including shipping to New Zealand. I order it. Upon arrival in New Zealand the Customs department assesses GST at 15% – $60.00. The total cost of the transaction therefore amounts to $460.00.

This demonstrates the inconsistency of approach to the purchase of the same item, where the assessment for GST is dependent upon the method of the transaction. An across-the-counter purchase in Hong Kong attracts no GST; an on-line purchase does. It also demonstrates an inequality of treatment between the on-line purchaser and the overseas traveller. In this regard I note that “High Street” retailers have not addressed the damage done to their businesses by travellers sourcing goods overseas.
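For what it is worth, the arithmetic of the two hypothetical purchases reduces to a few lines – again a sketch in Python, assuming, as the example does, 15% GST on all on-line imports and none on goods brought home within the traveller’s concession:

```python
GST_RATE = 0.15  # assumed, per the hypothetical above

def landed_cost(price_nzd: float, bought_online: bool) -> float:
    """Total cost of the same item under the two channels in the example:
    carried home within the traveller's concession (no GST), or ordered
    on-line and assessed for GST at the border."""
    gst = price_nzd * GST_RATE if bought_online else 0.0
    return round(price_nzd + gst, 2)

print(landed_cost(400.00, bought_online=False))  # overseas traveller: 400.0
print(landed_cost(400.00, bought_online=True))   # on-line purchaser:  460.0
```

Same item, same price, a $60.00 difference driven solely by the channel through which the purchase was made.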

Perhaps the solution may be to remove the exemption granted to travellers. In that way there would be consistency of approach.

3. Competition

“High Street” retailers claim that they cannot compete with the price of goods available on-line. This is a highly complex area with a large number of associated issues, including the costs of a “bricks and mortar” operation versus a server and a distribution system – essentially different retailing models. In some respects, the problems faced by “High Street” retailers hark back to the issue of doing business in an environment of continuing disruptive change. There is probably only a small pricing advantage for the on-line retailer and consumer once one factors in not only the absence of GST but also the additional shipping costs. And that argument may well be applicable when we are talking about apples and apples. The problem with the “unfair competition” argument is that often there is no competition between the “High Street” and on-line retailer. Let me give a couple of examples:

a) I like a particular brand of shirts and ties. I discovered the brand on a visit to the United States. The brand is available in “High Street” outlets in the US and Australia. It is also available on-line through Macy’s website. The brand is not stocked in New Zealand. Thus there is no competition between the “High Street” retailer in New Zealand and Macy’s on-line, because the New Zealand retailer is not standing in the marketplace with the item that I want. Essentially what the Internet enables is consumer choice. That is another aspect of the continuing disruptive change that the Digital Paradigm enables. The consumer is no longer limited to the choice provided by the retailer.

b) The on-line book store. I suppose Amazon is the most obvious example of the on-line retailer and, because it began its business by selling books on-line, the discussion must turn to books. There can be no doubt that the book trade is suffering as a result of the challenges of the Digital Paradigm. I wonder if it may be a bit simplistic to say that retailers like Amazon have been responsible for the demise of Borders (both in NZ and the US) or the current troubles experienced by Barnes and Noble in the US. There are a number of challenges that booksellers face and not all of them will be resolved by imposing GST on on-line book purchases. The advent of the e-book and the proliferation of digital reading devices present challenges of availability and convenience that confront the Print Paradigm itself. I use the term Print Paradigm to describe hard copy printed books because the only resemblance that e-books have to printed works is that both employ text. Otherwise they are representatives of two entirely different paradigms.

This isn’t a discussion about the challenges faced by the printed book – that is a subject worthy of consideration all on its own – but it probably highlights the nature of continuing disruptive change within the Digital Paradigm.

Once again the issue of choice and whether there is actually a common market place between the High Street retailer and the on-line bookseller comes into focus. The on-line bookseller may be able to provide a title that is otherwise unavailable from the local High Street bookseller. The High Street bookseller can hardly complain of competitive unfairness if there is no competition. Only when the High Street bookseller is selling the same title that is available on-line can the issue of competitive unfairness be raised.

Thus, an issue that must be addressed and considered is whether or not there is in fact competitive unfairness in a marketplace where one player is not present.

The disadvantage of the on-line store is the lack of the shopping or retail therapy associated with considering and choosing, and of the wonderful opportunity that the High Street retailer provides to browse and come across the stuff you didn’t know about but find you wanted. Browsing the shelves of a bookstore beats browsing on Amazon – hands down.

4. Digital Products

The Digital Paradigm has introduced new digital products, especially in the field of books, music and movies. If one were to impose a GST regime on on-line purchases, the assumed paradigm is that of a physical product coming across the border. The problem is that digital products are a stream of bits that come across the borderless Internet. The only time that territoriality becomes relevant is when the digital product is received on the purchaser’s computer.

Perhaps this demonstrates the disruptive nature of continuing change in the Digital Paradigm. The fundamental thinking behind the imposition of GST on on-line purchases assumes a physical package coming across the border. That is perhaps an assumption that derives from the pre-Digital Paradigm and that may be obsolete. One must be careful about fundamental assumptions in a time of continuing Paradigmatic Change.

The reality is that Digital Product purchases – e-books, movies, music, games and the like – are going to be impossible to track for the purposes of collecting GST. And this demonstrates issues of fairness and inconsistency that have been discussed above.

Clearly the issue of taxing on-line purchases of digital products is going to have to be addressed, and the shift away from the physical sale of recorded music was considered at the recent Nethui.

5. Protectionism?

The final issue that occurs to me is whether or not the imposition of GST on all on-line purchases is a tax collection issue or a protectionist issue. The way the argument on behalf of the retailers is put forward in the news media is that a competitive advantage may be restored with the imposition of GST in that purchasers will be less inclined to go on-line shopping if they have to pay GST somewhere along the line.

I wonder if this argument is valid, because it seems to me that it ignores a number of the other factors that I have discussed above. The subtext of the argument seems to be that local retailers will benefit from the imposition of GST on on-line transactions and that purchasers, deterred from on-line shopping by the imposition of the tax, will return to the High Street market. I may misunderstand the argument, but it does seem that implicit in this approach is the use of GST as another form of trade tariff, albeit at an individual consumer level, and I do not understand the function of a transactions tax to be a deterrent to a particular type of consumption.

Certainly this is an issue that will have to be considered and resolved, and it may be that retailers will have to revisit this part of the argument. What is the real purpose of the move – a fairer taxation system or protectionism for local retailers?

Conclusion

I don’t offer any solutions. That is not the function of this post. Rather, it has been to identify some issues that may need to be considered by those who will participate in the debate.

But there is something else that this discussion emphasises. I have written on a number of occasions about the qualities of the Digital Paradigm and how these qualities challenge our assumptions about information, its use and its communication. In my view the various qualities of the new Digital Paradigm must be considered in any legal solution in which it is involved. That has been a constant theme in many of my posts on this blog and I hope that future posts will continue to develop that theme.

The IT Countrey Justice

14 July 2013

Law, Information and Technology

This is the text of a keynote speech that I gave to the New Zealand Law Society seminar on Technology Law held in Auckland on 20 June 2013. Because of appalling weather, the Wellington seminar was unable to be held but will (hopefully) be rescheduled later this year. The PowerPoint presentation used in the address may be found here.

It has been said that the only asset that a lawyer has is time. This comment probably originated to justify time costing, a practice which, over the years, has become highly contestable. But I would dispute the fundamental assertion. A lawyer’s stock in trade is not time but information. The law is no more and no less than an information acquiring, processing and sharing occupation. The law in itself is information that limits or allows certain activities. So that a lawyer may properly advise his client there is an information flow from client to lawyer. The lawyer may then be required to look up the law, in which case there is an information flow from the source of law, be it legislation or cases, to the lawyer. The lawyer then communicates the information to the client, and thus the original information flow is reversed. On the basis of the information (advice) received by the client, the client may make a choice as to the course of action that he or she follows.

Likewise Court proceedings are all about information. Information takes certain forms, be it pleadings, which inform the Court what the dispute is about; evidence, which informs the Court as to the strength of the assertions contained in the pleadings; submissions, by which the Court is informed as to the possible approaches that it may adopt in determining the outcome; and the decision, by which the Court informs the lawyers and the parties of the result. In the course of preparing the decision the Judge or Judges will embark upon their own information acquisition activities, looking up the law, checking the assertions or alternatively having recourse to an internal information exchange involving Judges’ Clerks.

In some way, shape or form these have always been fundamental realities of the practice of law.

Before the introduction of the printing press the law was primarily an oral culture supplemented by handwritten manuscript material. The introduction of the printing press enabled lawyers to approach and access information in a different way. The printing press was utilised by the State to print legislation incorporating lengthy preambles, which were as much political propaganda as anything else, particularly during the English Reformation.

Case books, which had formerly been handwritten, were reduced to print. In time the fundamental qualities underlying the content provided by the printing press – stability of text, fixity and standardisation of content, and widespread dissemination – allowed for the development of precedent, which could not happen in the absence of a reliable text to which reference could be made. It is also possible that legislation in print, and indeed the reduction of much legal information to print in the 16th century, allowed lawyers to focus more carefully upon textual analysis and the way in which the printed word could be interpreted, giving rise to principles of statutory interpretation.

The printing press is an example of the impact of an information technology on the law. My suggestion is that any new information technology is going to have an impact upon occupations or professions whose business is information. The rise of digital information technologies has already significantly changed the way in which we practise law. Early examples can be seen in the introduction of word processing and computer-based trust accounting. But these are early and rather clumsy examples of the way in which technology enables new ways of approaching information.

What we need to understand about new information technologies is that they have two major aspects.  One is the content layer and we are immediately familiar with this.  In fact it is probably the main thing that we think about when it comes to information technologies.  But there is more to it than that.  Every new information technology – and this has been the case from the printing press onwards – has its own particular properties or qualities that significantly differentiate it from other earlier information technologies.  This is particularly the case with digital information systems.  Examples of these properties are:

  • Persistence
  • Continuing change, or what you could refer to as the disruptive element
  • Dynamic information
  • Delinearisation of information
  • Dissociative enablement
  • Permissionless innovation
  • Availability
  • Participation
  • Searchability
  • Retrievability

Once you begin to understand the importance of the qualities or properties of a new information technology, you begin to get some insight into Marshall McLuhan’s comment “the medium is the message”.[1] Although we may be dazzled by the content – which McLuhan suggested was the piece of meat that attracts the lazy dog of a mind – we can begin to get some understanding of how it is that new information technologies are going to change not only our approaches towards information but also some of our fundamental behaviours. This may reach the point where even the values that we attribute to information, and that underlie certain behaviours, may themselves change. And this is the case with the law. One only needs to consider the rise of remote evidence-giving – spatial in the case of audio-visual links, temporal in the case of pre-recorded evidence – to understand the revolutionary impact that these forms of evidence-giving are going to have upon the traditional Court case. Technology can enable the “non-presence” of witnesses. Formerly, court cases have been all about the physical presence of all the “players”. Indeed, in the early days of pre-recorded evidence and audio-visual links Judges scrambled to find reasons why these technologies should not be used, emphasising among other things the importance of presence.[2]

However there are hidden sides to the impact of new technologies, and these lie in the way in which the properties of new technologies that I have described influence us. Marshall McLuhan said “we shape our tools and thereafter our tools shape us”, and this of course gives rise to the question of whether or not the internet changes us forever. Underlying this theory is the concept of neuroplasticity – the ability of the brain to adapt to and learn from new stimuli. The concept of neuroplasticity was picked up by Nicholas Carr in his book The Shallows: How the Internet is Changing the Way We Think, Read and Remember.[3] His book, based upon an earlier article that appeared in The Atlantic, has as its thesis that the internet is responsible for the dumbing down of society, based upon the way in which our minds respond both to the wealth of information and to its availability.

The neuroplasticity argument is picked up by Susan Greenfield,[4] who believes the web is an instant gratification engine, reinforcing behaviours and neuronal connections that are making adults more childlike and kids hungry for information that is presented in a super-simplistic way but that in fact reduces their understanding of it. Greenfield is of the view that the web spoon-feeds us things to capture our attention. This means we are learning to constantly seek out material that stimulates us, and our plastic minds are being rewarded by our “quick click” behaviour. We want new interactive experiences and we want them now.

This view is disputed by Aleks Krotoski,[5] who first observes that there is no evidential support for Greenfield’s propositions, which presuppose that once we have used the web we will be forever online and never log off again. According to Greenfield, says Krotoski, we become connected to our computers and other devices in a co-dependent, exclusive, almost biological way, ignoring where, how and why we are connecting. Krotoski, for example, disputes internet addiction, internet use disorder and neurological rewiring.

In some respects Carr and Greenfield are using the “low hanging fruit” of technological fear[6] to advance their propositions.  Krotoski’s rejection of those views is, on the other hand, a little too absolute and in my view the answer lies somewhere in between.  The issue is a little more nuanced than whether or not the Internet is dumbing us down or whether or not there is any evidence of that.

My argument is that the impact of the internet lies in the way in which it redefines the use of information – the way we access it, process it, use it and respond to it – and our expectations of it and of its availability.

This may not seem to be as significant as Carr’s rewiring or Greenfield’s neuroplasticity but it is, in my view, just as important. Much of our decision-making is based upon information. Although some of our activity could be termed responses to stimuli, or indeed might be instinctive, most if not all of the stimuli to which we respond can in fact be defined as information. The information that we obtain when crossing the road comes from our senses of sight and hearing, but in many other of our activities we require information upon which we may deliberate and to which we respond in making decisions about what we are going to do, buy and so on.

And paradigmatically different ways of information acquisition are going to change the way in which we use and respond to information.

Elizabeth Eisenstein argues this in considering the impact that the printing press had upon intellectual elites and the intellectual activity of the Early-modern period. The first information technology was an enabler – an agency of change – for the intellectuals of the Renaissance, the Reformation and the Scientific Revolution. And it had its own impact upon the intellectual elite of the English lawyers.[7]

I would suggest that in the digital information paradigm we are seeing similar, although not identical, changes. I am not talking about electronic land transfer and company registrations in and of themselves. These in my view represent what could be called content layer changes.

There are other changes taking place that arise from some of the fundamental qualities that underlie new digital communications technologies – and all communication technologies have these particular properties[8] or qualities underlying and attaching to them, from the printing press through to the wireless, radio and television and into the digital paradigm. It is just that digital systems are so fundamentally different in the way in which they operate, and in their pervasive nature, that they usher in a new paradigm.[9]

But to get back to legal practice. I mentioned land transfer transactions and company registration. What in fact is happening with these transactions is that information technology de-personalises them. For the lawyer sitting in his or her office, the transaction takes place with a few mouse clicks and the entry of a few authentication codes, and that’s an end to it. The transaction is de-personalised in the sense that this manner of closing a transaction – as the Americans put it – or settling a transaction means that the gathering together of the legal representatives of the various players to exchange documents, check discharges of mortgage and the like no longer occurs. These gatherings were important in terms of the culture of the pre-digital lawyer. They developed a sense of collegiality among the lawyers involved. They allowed for the development of trusted relationships based upon continued personal contact and often upon the making of an undertaking coupled with a handshake.

Whether that vacuum arising from the use of technology has been filled with something else is not for me to say and I make the observation not with any sense of nostalgia but as an example of the way in which technology induces changes.

Consider email. To discuss email in and of itself is to focus upon the content layer. What I would like you to think about are the changes in behaviour and routine enabled by email, along with the quality of the communication that takes place.

But there is more to technological change than the way in which we modify our behaviours in the routine of practice. Richard Susskind, a British lawyer, technology expert and futurist, has written four books – The Future of Law (1996), Transforming the Law (2000), The End of Lawyers? (2008) and Tomorrow’s Lawyers (2013). Susskind is of the view that the legal industry – note that he uses the word industry and not profession – is in an evolutionary state. Technology, or a better designed process, is reducing the need for expensive, artisan-trained lawyers. In many cases, by removing the lawyer from the value chain, cost goes down, quality goes up and service delivery time becomes faster. This is because, among other things, the legal services market is being upended by new entrants who are offering legal inputs and legal products to law firms, legal departments and average citizens. One example may be found in legal process outsourcing but there are many others.

Susskind argues that legal work is migrating from bespoke work[10] to standardised[11] to systemised[12] to packaged[13] to commoditised.[14] These changes are made possible by identifying recursive patterns in legal forms and judicial opinions, which enables the use of process and technology to routinise and scale very cheap and very high quality solutions to the myriad of legal needs.

Susskind points out that clients do not want to pay a lot of money for their legal solution. More significantly, there is more money outside the shrinking quantity of bespoke legal work. Susskind observes that the greatest profit-making opportunities are lodged between the systemised and packaged parts of the continuum. If an organisation can continuously innovate and create systemised or packaged solutions to legal issues and problems that can be sold over and over again to a large base of clients, the organisation can enjoy the prospect of “making money while you sleep”.[15]

This constant innovation approach is one of the challenges identified by Susskind, which he describes as the “more for less” challenge.[16] Liberalisation is another – the opening up of legal work beyond traditionally educated and qualified law practitioners, who are protected by legislation and who hold the monopoly on legal work. These changes may be found in our present law practitioners’ legislation along with current changes to legal aid. Although this movement is not worldwide, Susskind anticipates that when such liberalisation gives rise to legal businesses and legal services that better meet clients’ needs and a growing “more for less” challenge, this will have a ripple effect around the world.

But it is in the field of information technology that Susskind has interesting things to say, at least from my perspective. Although many lawyers have computer equipment in their offices, generally the legal profession, according to Susskind, has not been swift to embrace new systems or, having adopted them, to utilise their maximum potential. Many lawyers consider that IT is over-hyped, but few will have heard of Moore’s Law, which suggests that every two years or so the processing power of computers doubles and its cost is cut in half. The fact of the matter is that it is now foreseeable that the average desktop machine will have more processing power than all of humanity combined. It seems strange that it might take something like that to force lawyers to rethink some of their working practices.
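The doubling Susskind invokes compounds very quickly, which is the point of citing it. The following is a rough illustrative sketch only – the function name and the chosen year intervals are mine, not Susskind’s:

```python
# A rough illustration of the doubling described by Moore's Law as it is invoked
# here: processing power roughly doubles every two years, i.e. grows as 2 ** (years / 2).

def relative_power(years: float, doubling_period_years: float = 2.0) -> float:
    """Processing power after `years`, relative to a baseline of 1 today."""
    return 2 ** (years / doubling_period_years)

for years in (2, 10, 20):
    print(f"After {years:2d} years: {relative_power(years):6.0f}x today's power")
# After  2 years:      2x today's power
# After 10 years:     32x today's power
# After 20 years:   1024x today's power
```

It is the relentlessness of that curve, rather than any single extrapolated figure, that makes complacency about information technology risky for a profession whose business is information.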

Susskind considers it inconceivable that information technology, which is radically altering many aspects of our economy and society, may comfortably be ignored by a legal profession which may consider that legal work will be exempt from any change. That is a fatal view for a profession whose business is information. The fact of the matter is that information technology – a slave to the property of continuing disruptive change – enables participation via Web 2.0, where users become providers, readers become authors, recipients become participants and all users can contribute. New ways of finding information and producing it, and of collaborating with one another, whether as bloggers, users of social networks or contributors to shared online resources such as Wikipedia and YouTube, are developing. And because of continuing disruptive change there is no finishing line for IT or the internet. Examples of such change may be seen in the fact that three years ago very few people had heard of Twitter and seven years ago Facebook wasn’t on the map. The problem with resisting Twitter and new forms of communication in a profession where information is what we deal with is what Susskind calls “irrational rejectionism” – the dogmatic and visceral dismissal of a technology with which the sceptic has no direct personal experience.

As has so often been said, not least in science fiction movies, resistance is futile. We need to be open-minded as lawyers because we are living in an era of unprecedented technological change, and the question is not so much one of automating existing practices but whether lawyers can innovate and practise law in ways which could not have been done in the past.

I have just outlined a few issues that impact upon lawyers and the legal profession and the way in which technology presents challenges.  I could go on and consider the issue of legal education because if the profession is going to change then the training systems for the new professionals are going to have to be responsive to those changes as well.  But that’s another story.


[1] Marshall McLuhan Understanding Media: The Extensions of Man Critical Edition, W Terrence Gordon (ed) (Gingko Press, Berkeley CA 2003)

[2] Aeromotive v Page (High Court, Hamilton CP 31/99 16 May 2002 Harrison J) For discussion see David Harvey Internet.Law.NZ 3rd ed. (Wellington, LexisNexis, 2011) p 512 et seq

[3] (Atlantic Books, London 2010). See also Nicholas Carr “Is Google Making Us Stupid?” Atlantic Magazine 1 July 2008 http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ (last accessed 31 May 2013)

[4] See especially Susan Greenfield “Living On-line is Changing Our Brains” New Scientist, 3 August 2011 http://www.newscientist.com/article/mg21128236.400-susan-greenfield-living-online-is-changing-our-brains.html (last accessed 31 May 2013). For this and for her assertions of “internet addiction” she has been criticised by Dr Ben Goldacre for claiming that technology has adverse effects on the human brain without having published any research, and for retracting some claims when challenged. Goldacre suggested that “A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper”: Ben Goldacre “Serious Claims Belong in a Serious Scientific Paper” The Guardian 21 October 2011 http://www.guardian.co.uk/commentisfree/2011/oct/21/bad-science-publishing-claims (last accessed 31 May 2013)


[5] Untangling the Web: What the Internet is Doing to You  (Faber, London 2013). Presentation by Aleks Krotoski at the Writers and Readers Festival, Auckland 19 May 2013. Personal discussion between the author and Aleks Krotoski 19 May 2013.

[6] Sometimes referred to as “The Frankenstein Complex”

[7] See David Harvey The Law Emprynted and Englysshed: The Printing Press as an Agent of Change in Law and Legal Culture 1475 – 1642 (Unpublished PhD Thesis, Auckland University 2012) http://www.scribd.com/doc/103191773/The-Law-Emprynted-and-Englysshed-The-Printing-Press-as-an-Agent-of-Change-in-Law-and-Legal-Culture-1475-1642 (last accessed 31 May 2013)

[8] Eisenstein identified six qualities that print technology possessed that differentiated it from the scribal form of written communication of information. These are:

a) dissemination

b) standardisation

c) reorganization

d) data collection

e) fixity and preservation

f) amplification and reinforcement.

See Elizabeth Eisenstein The Printing Press as an Agent of Change, One Volume (Cambridge University Press, Cambridge 1979) esp. at Chapter 2, pp 71 – 126

[9] See above for some of the qualities of digital information technologies.

[10] Courtroom practice

[11] Common form documents for a merger

[12] Document assembly for estate planning

[13] A turnkey regulatory compliance programme

[14] Any IT based legal product that is undifferentiated in a market with many competitors

[15] For a discussion of challenges facing the legal profession, including a consideration of Susskind’s work, see William D. Henderson “A Blueprint for Change” (2013) 40 Pepp. L Rev 461.

[16] For discussion see Richard Susskind Tomorrow’s Lawyers (Oxford University Press, Oxford 2013) esp. at p 10 et seq.