The Marketplace of Ideas

I read Lizzie Marvelly’s “Words can hurt like sticks and stones” in the Herald for Saturday 8 April with interest. The theme of her argument is that with rights – such as the freedom of speech – come responsibilities. There is no difficulty with that. However, some of the arguments advanced give pause. The substance of the argument seems to be that there is freedom of speech – up to a point.

A recent statement of concern by Professor Paul Moon and a number of other prominent New Zealanders seems to have prompted the article. It is perhaps a little disappointing that Ms Marvelly devalues her argument by adopting a note of disdain when she refers to this group, describing them as a “fusty group of signatories of Moon’s missive – many of whom are long past their student days and unlikely to have faced either online abuse or the dangerous rhetoric of groups like the neo-masculinists or the alt-right.”

Further, she seems to be critical of the provisions of s. 14 of the New Zealand Bill of Rights Act 1990, suggesting that the freedom of expression – to seek, receive AND impart information – could not have contemplated the democratisation of expression enabled by online platforms.

Of course the history of freedom of expression goes much further back than 1990. And it has always been bound up with technology. The invention and use of the printing press was as revolutionary for the imparting of ideas as social media is today. It enabled the spread of the radical (and very controversial and unpopular) ideas of Martin Luther that led to the Reformation. And it attracted official interest from the beginning. The expression of dissent, be it religious or political, was severely suppressed in the days of the Tudors, the early and later Stuarts and the Commonwealth in England. The savage treatment visited upon those who expressed unpopular views is well recorded.

The move to a recognition of the freedom of speech came from the experiences of repressive tyrannies both in England and in the American colonies. The First Amendment to the United States Constitution arose as a response to the repressive conduct of the colonial power and to guarantee robust and open debate. Thomas Jefferson referred to the marketplace of ideas which freedom of speech enabled and within which ideas of doubtful or dubious value would fail.

I agree with Ms Marvelly that there are risks associated with the expression of an opinion. The contrary view may be expressed. That is what happens in the marketplace of ideas. But the marketplace should not be shut down just because some of the ideas may be controversial. And that is the problem. In the same way that a person has the right to express a point of view, so a potential audience has a right not to listen. They need not even examine what is on offer in the marketplace. But the important thing is that the idea, however controversial – even repugnant – should be able to be expressed and, in accordance with the Bill of Rights Act, there is a right to receive those ideas. It is up to the audience to choose whether to accept or endorse them.

The real test of one’s commitment to freedom of expression is in being willing to allow the expression of those views with which we do not agree. As Justice Oliver Wendell Holmes said in United States v Schwimmer 279 US 644 (1929) “if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.” The last phrase is the title of an excellent book which Ms Marvelly may profit from reading.

But the New Zealand Herald became the marketplace of ideas on this particular topic. Not only did it publish Ms Marvelly’s qualified approach to freedom of expression. It also published (Herald on Sunday 9 April 2017) a more expansive view of the freedom of expression by Heather du Plessis-Allan entitled “Being Offensive is not a Crime”, concerned with the silencing of free expression (she calls such episodes shout-downs); the theme of that article is that there is no right NOT to be offended. Indeed Salman Rushdie, whom Ms du Plessis-Allan quotes at the end of her article, said “What is freedom of expression? Without the freedom to offend it ceases to exist.” And then of course there is the article that started it all – “Free Speech Under Threat in NZ Universities” – in the Herald for 4 April 2017.

The abrogation of the freedom of expression, even partially, even if voluntarily assumed, is a burden on liberty. So I guess when I shop in the marketplace of ideas I prefer the more robust approach of Professor Moon and Ms du Plessis-Allan.

But by the same token it is fortunate and we should be grateful that we live in a society where the ideas expressed by Ms Marvelly were and are available for consideration.

Back to the Future – Google Spain and the Restoration of Partial and Practical Obscurity

Arising from the pre-digital paradigm are two concepts that had important implications for privacy. Their continued validity as a foundation for privacy protection has been challenged by the digital paradigm. The terms are practical and partial obscurity, both of which describe information accessibility and recollection in the pre-digital paradigm, and each of which marks a challenge posed by the digital paradigm, especially for privacy. The terms, as will become apparent, are interrelated.

Practical obscurity refers to the limited availability of information, which may be of a private or public nature[1]. Such information is usually in hard copy format, may be indexed, is held in a central location or locations, is frequently location-dependent in that the information in a particular location will refer only to the particular area served by that location, requires interaction with officials or bureaucrats to locate, and, finally, requires some knowledge of the particular file within which the information source lies. Practical obscurity means that information is not indexed on key words or key concepts but generally on the basis of individual files, or in relation to a named individual or named location. Thus it is necessary to have some prior knowledge of the information to enable a search for the appropriate file to be made.

Partial obscurity addresses information of a private nature which may earlier have been in the public arena – in a newspaper, a television or radio broadcast, or some other form of mass media communication. At a later date the information is recalled only in part: because memory cannot retain all the detail of all the information an individual receives, the particulars become subsumed. A broad sketch of the information remains, rendering the details obscure and leaving only the major heads of the information available in memory – hence the term partial obscurity. To recover particulars of the information will require resort to film, video, radio or newspaper archives, thus bringing into play the concept of practical obscurity. Partial obscurity may enable information which is subject to practical obscurity to be obtained more readily, because some of the informational references enabling the location of the practically obscure information can be provided.

The Digital Paradigm and Digital Information Technologies challenge these concepts. I have written elsewhere about the nature of the underlying properties or qualities of the digital medium that sits beneath the content or the “message”. Peter Winn has made the comment “When the same rules that have been worked out for the world of paper records are applied to electronic records, the result does not preserve the balance worked out between the competing policies in the world of paper records, but dramatically alters that balance.”[2]

A property present in digital technologies and very relevant to this discussion is that of searchability. Digital systems allow the retrieval of information with a search utility that can operate “on the fly” and may produce results that are more comprehensive than a mere index. The level of analysis that may be undertaken may be deeper than mere information drawn from the text itself. Writing styles and the use of language or “stock phrases” may themselves be analysed, thus allowing a more penetrating and efficient analysis of the text than was possible in print.

The most successful search engine is Google, which has been available since 1998. So pervasive and popular is Google’s presence that modern English has introduced the verb “to Google”, which means “To search for information about (a person or thing) using the Google search engine” or “To use the Google search engine to find information on the Internet”.[3] The ability to locate information using search engines returns us to the print-based properties of fixity and preservation and also enhances the digital property of “the document that does not die”.

A further property presented by digital systems is that of accessibility. If one has the necessary equipment – a computer, modem/router and an internet connection – information is accessible to an extent not possible in the pre-digital environment. In that earlier paradigm, information was located across a number of separate media. Some had the preservative quality of print. Some, such as television or radio, required personal attendance at a set time. In some cases information was located in a central repository like a library or archive. These are aspects of partial and practical obscurity.

The Internet and convergence reverse the pre-digital activity of information seeking, turning it into one of information obtaining. The inquirer need not leave his or her home or office and go to another location where the information may be. The information is delivered via the Internet. As a result, with the exception of the time spent locating the information via Google, more time can be spent considering, analysing or following up the information. Although this may be viewed as an aspect of information dissemination, the means of access is revolutionary.

Associated with this characteristic of informational activity is the way in which the Internet enhances the immediacy of information. Not only is the inquirer no longer required to leave his or her home or place of work, but the information can be delivered at a speed limited only by the download speed of an internet connection. Thus information which might once have involved a trip to a library, a search through index cards and a perusal of a number of books or articles may now, by means of the Internet, take a few keystrokes and mouse clicks and a few seconds to be presented on screen.

This enhances our expectations about the access to and availability of information. We expect the information to be available. If Google can’t locate it, it probably doesn’t exist online. If the information is available, it should be presented to us in seconds. Although material sought from Wikipedia may be information-rich, one of the most common complaints about accessibility is the time that it takes to download onto a user’s computer. Yet in the pre-digital age a multi-contributing information resource (an encyclopedia) could only be consulted at a library, and the time taken to access that information could be measured in hours, depending upon the location of the library and the efficiency of the transport system used.

Associated with accessibility of information is the fact that it can be preserved by the user. The video file can be downloaded. The image or the text can be copied. Although this has copyright implications, substantial quantities of content are copied and are preserved by users, and frequently may be employed for other purposes such as inclusion in projects or assignments or academic papers.  The “cut and paste” capabilities of digital systems are well known and frequently employed and are one of the significant consequences of information accessibility that the Internet allows.

The “Google Spain” Decision and the “Right to Be Forgotten”

The decision of the European Court of Justice in Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González has the potential to significantly change the informational landscape enabled by digital technologies. I do not intend to analyse the entire decision but rather focus on one aspect of it – the discussion about the so-called “right to be forgotten.” The restrictions placed on Google and other search engines, as opposed to the provider of the particular content, demonstrate a significant and concerning inconsistency of approach.

The complaint by Mr González was this. When an internet user entered Mr Costeja González’s name in the Google search engine, he or she would obtain links to two pages of La Vanguardia’s newspaper, of 19 January and 9 March 1998 respectively. In those publications was an announcement mentioning Mr Costeja González’s name in relation to a real-estate auction connected with attachment proceedings for the recovery of social security debts.

Mr González requested, first, that La Vanguardia be required either to remove or alter those pages so that the personal data relating to him no longer appeared or to use certain tools made available by search engines in order to protect the data.

Second, he requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that they ceased to be included in the search results and no longer appeared in the links to La Vanguardia. Mr González stated in this context that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.

The effect of the decision is that the Court was prepared to allow the particular information – the La Vanguardia report – to remain. The Court specifically did not require that material be removed, even though the argument advanced in respect of the claim against Google was essentially the same – the attachment proceedings had been fully resolved for a number of years and reference to them was now entirely irrelevant. What the Court did was to make it very difficult, if not almost impossible, for a person to locate the information with ease.

The Court’s exploration of the “right to be forgotten” was collateral to its main analysis about privacy, yet the “right to be forgotten” was developed as an aspect of privacy – a form of gloss on fundamental privacy principles. The issue was framed in this way. Should the various statutory and directive provisions be interpreted as enabling Mr González to require Google to remove, from the list of results displayed following a search made for his name, links to web pages published lawfully by third parties and containing true information relating to him, on the ground that that information may be prejudicial to him or that he wishes it to be ‘forgotten’ after a certain time? It was argued that the “right to be forgotten” was an element of Mr González’s privacy rights which overrode the legitimate interests of the operator of the search engine and the general interest in freedom of information.

The Court observed that even initially lawful processing of accurate information may, in the course of time, become incompatible with the privacy directive where that information is no longer necessary in the light of the purposes for which it was originally collected or processed. That is so in particular where the data appear to be inadequate, irrelevant or no longer relevant, or excessive in relation to those purposes and in the light of the time that has elapsed.

What the Court is saying is that notwithstanding that information may be accurate or true, it may no longer be sufficiently relevant and as a result be transformed into information which is incompatible with European privacy principles. The original reasons for the collection of the data may, at a later date, no longer pertain. It follows from this that individual privacy requirements may override any public interest that may have been relevant at the time that the information was collected.

In considering requests to remove links it was important to consider whether a data subject like Mr González had a right that the information relating to him personally should, at a later point in time, no longer be linked to his name by a list of results displayed following a search based on his name. In this connection, the issue of whether or not the information may be prejudicial to the “data subject” need not be considered. The information may be quite neutral in terms of effect. The criterion appears to be one of relevance at a later date.

Furthermore the privacy rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name.

One has to wonder about the use of language in this part of the decision. Certainly, the decision is couched in a very formalised and somewhat convoluted style that one would associate with a bureaucrat rather than a judge articulating reasons for a decision. But what does the Court mean when it says “as a rule”? Does it have the vernacular meaning of “usually”, or does it mean what it says – that the rule is that individual privacy rights override the economic interests of the search engine operator and the interest of the general public in being able to locate information? If the latter interpretation is correct, that is a very wide-ranging rule indeed.

However, the Court continued, that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question.

Thus if a person has a public profile, for example in the field of politics, business or entertainment, there may be a higher public interest in having access to information.

Finally the Court looked at the particular circumstances of Mr González. The information reflected upon Mr González’s private life. Its initial publication was some 16 years ago. Presumably the fact of attachment proceedings and a real estate auction for the recovery of social security debts was no longer relevant within the context of Mr González’s life at the time of the complaint. Thus the Court held that Mr González had established a right that that information should no longer be linked to his name by means of such a list.

“Accordingly, since in the case in point there do not appear to be particular reasons substantiating a preponderant interest of the public in having, in the context of such a search, access to that information, a matter which is, however, for the referring court to establish, the data subject may … require those links to be removed from the list of results.”

There is an interesting comment in this final passage. The ECJ decision is on matters of principle. It defines tests which the referring Court should apply. Thus the referring Court still has to consider on the facts whether there are particular reasons that may substantiate a preponderant public interest in the information, although the ECJ stated that it did not consider such facts to be present.

Matters Arising

There are a number of issues that arise from this decision. The reference to the “right to be forgotten” is made at an early stage in the discussion but the use of the phrase is not continued. It is developed as an aspect of privacy within the context of the continued use of data acquired for a relevant purpose at one point in time, but whose relevance may not be so crucial at a later point in time. One of the fundamental themes underlying most privacy laws is the collection and retention of data for a particular purpose. The ECJ has introduced an element of temporal relevance into that theme.

A second issue restates what I said before. The information about the attachment proceedings and real estate sale which Mr González faced in 1998 was still “at large” on the Internet. In the interests of a consistent approach, an order should have been made taking that information down. It was that information that was Mr González’s concern. Google was a data processor that made it easy to access that information. So the reference may not appear in a Google search, but the underlying and now “irrelevant” information still remains.

A third issue relates to access to historical information and to primary data. Historians value primary data. Letters, manuscripts, records and reports from times gone by allow us to reconstruct the social setting within which people carried out their daily lives and against which the great events of the powerful and the policy makers took place. One only has to attempt a research project covering a period, say, four hundred years ago to understand the huge problems that may be encountered as a result of gaps in information retained largely if not exclusively in manuscript form, most of which is unindexed. A search engine such as Google aids in the retrieval of relevant information. And it is a fact that social historians rely on the “stories” of individuals to illustrate a point or justify a hypothesis. The removal of references to these stories, or of the primary data itself, will be a sad loss to historians and social science researchers. What is concerning is that it is the “data subject” who is going to determine what the historical archive will contain – at least from an indexing perspective.

A fourth issue presents something of a conundrum. Imagine that A had information published about him 20 years ago regarding certain business activities that may have been controversial. Assume that 20 years later A has put all that behind him and is a respected member of the community, and his activities in the past bear no relevance to his present circumstances. Conceivably, following the approach of the ECJ, he might require Google to remove results relating to those events from searches on his name. Now assume a year or so later that A once again gets involved in a controversial business activity. Searches on his name would reveal the current controversy, but not the earlier one. His earlier activities would remain under a shroud – at least as far as Google searches are concerned. Yet it could be validly argued that his earlier activities are very relevant in light of his subsequent actions. How do we get that information restored to the Google search results? Does a news media organisation which has its own information resources, and thus may have some “institutional memory” of the earlier event, go to Google and request restoration of the earlier results?

The example I have given demonstrates how relevance may be a dynamic beast and may be a rather uncertain basis for something as elevated as a right and certainly as a basis for allowing a removal of results from a search engine as a collateral element of a privacy right.

Another interesting conundrum is presented for Mr González himself. By instituting proceedings he has highlighted the very problem that he wished to have removed from the search results. To make it worse for Mr González and his desire for the information about his 1998 activities to remain private, the decision of the ECJ has been the subject of wide-ranging international comment. The ECJ makes reference to his earlier difficulties, and given that the timing of those difficulties is a major consideration in the Court’s assessment of relevance, perhaps those difficulties have taken on a new and striking relevance in the context of the ECJ’s decision – a relevance which would eliminate any privacy interest he might have had but for the case. If Mr González wanted his name and affairs to remain difficult to find, his efforts have had precisely the opposite effect.

Conclusion

But there are other aspects of the decision that are more fundamental for the communication of information and for the rights to receive and impart information which are aspects of freedom of expression. What the decision does is restore the pre-digital concepts of partial and practical obscurity. The right to be forgotten can only be countered by the ability to be remembered, and no less a person than Sir Edward Coke in 1600 described memory as “slippery”. One’s recollection of a person or an event may alter over a period of time. The particular details of an event congeal into a generalised recollection. Often the absence of detail will result in a misinterpretation of the event.

Perhaps the most gloomy observation about the decision is its potential to emasculate the promise of the Internet and one of its greatest strengths – searchability of information – based upon privacy premises that were developed in the pre-Internet age, when privacy concerns involved the spectre of totalitarian state mass data collection on every citizen. In many respects the Internet presents a different scenario involving the gathering and availability of data frequently provided by the “data subject”, and the properties and qualities of digital technologies have remoulded our approaches to information and our expectations of it. The values underpinning pre-digital privacy expectations have undergone something of a shift in the “Information Age”, although there are occasional outraged outbursts at incidents of state-sponsored mass data gathering. One wonders whether the ECJ is tenaciously hanging on to pre-digital paradigm data principles, taking us back to a pre-digital model of practical and partial obscurity in the hope that it will prevail for the future. Or perhaps in the new Information Age we need to think again about the nature of privacy in light of the underlying qualities and properties of the Digital Paradigm.

 

[1] The term “practical obscurity” was used in US Department of Justice v Reporters Committee for Freedom of the Press 489 US 749 (1989).

[2] Peter A. Winn, “Online Court Records: Balancing Judicial Accountability and Privacy in an Age of Electronic Information” (2004) 79 Wash. L. Rev. 307, 315.

[3] Oxford English Dictionary

Towards an Internet Bill of Rights

 

Tim Berners-Lee, in an article in the Guardian of 12 March 2014, building on his comment, reported in the Guardian for 26 June 2013, that the Internet should be safeguarded from being controlled by governments or large corporations, claimed that an online “Magna Carta” is needed to protect and enshrine the independence of the internet. His argument is that the internet has come under increasing attack from governments and corporate influence. Although no examples were cited, this has been a developing trend. The comments by Nicolas Sarkozy at the G8 meetings in 2011, and the unsuccessful attempts by Russia, China and other nations via the ITU at the 2012 World Conference on International Telecommunications to establish wider governance and control of the internet from a national government point of view, provide examples. Sarkozy’s comments were rejected by British Prime Minister David Cameron and the then United States Secretary of State, Hillary Clinton. More recently, on 29 April 2014, Russia’s Parliament approved a package of sweeping restrictions on the Internet and blogging. Clearly there is an appetite for greater control by governments of the internet and, in the opinion of Berners-Lee, this must be resisted. He considers that what is needed is a global constitution or a Bill of Rights. He suggests that people generate a digital Bill of Rights for each country – a statement of principles that he hopes will be supported by public institutions, government officials and corporations. I should perhaps observe that what is probably intended is an Internet Bill of Rights rather than a Digital one. I say this because it could well be difficult to apply some concepts to all digital technologies, some of which have little to do with the Internet.

The important point that Berners-Lee makes is that there must be a neutral internet and that there must be certainty that it will remain so. Without an open or neutral internet there can be no open government, no good democracy, no good healthcare, no connected communities and no diversity of culture.  By the same token Berners-Lee is of the view that net neutrality is not just going to happen. It requires positive action.

But it is not only direct governmental control of the Internet that concerns Berners-Lee. An example of indirect government interference with the Internet, and of challenges to the utilisation of the new communications technology by individuals, is provided by the activities of the NSA and the GCHQ as revealed by the Snowden disclosures. There have been attempts to undermine encryption and to circumvent security tools, which pose challenges to the individual’s liberty to communicate frankly and openly and without State surveillance.

What Would An On-Line “Magna Carta” Address?

According to Berners-Lee, among the issues that would need to be addressed by an online “Magna Carta” would be those of privacy, free speech and responsible anonymity together with the impact of copyright laws and cultural-societal issues around the ethics of technology.  He freely acknowledges that regional regulation and cultural sensitivities would vary.  “Western democracy” after all is exactly that and its tenets, whilst laudable to its proponents, may not have universal appeal.

What is really required is a shared document of principle that could provide an international standard not so much for the values of Western democracy but for the values and importance that underlie an open Internet.

One of the things that Berners-Lee is keen to see changed is the connection between the US Department of Commerce and the internet addressing system – the IANA contract which controls the database of all domain names. Berners-Lee’s view was that the removal of this link, if one will forgive the pun, was long overdue and that the United States government could not have a place in running something which is non-national. He observed that there was momentum towards that uncoupling, but that there should be a continued multi-stakeholder approach and one where governments and corporates are kept at arm’s length. As it happened, within a week or so of Berners-Lee expressing these views the United States government advised that it was going to de-couple its involvement with the addressing system.

Another concern of Berners-Lee’s was the “balkanisation” of the internet, whereby countries or organisations would carve up digital space to work under their own rules, be it for censorship, regulation or commerce. Following the Snowden revelations there were indeed discussions along this line, where various countries, to avoid US intrusion into the communications of their citizens, suggested separate national “internets”. This division of a global communications infrastructure into one based upon national boundaries is anathema to the concept of an open internet and quite contrary to the views expressed by Mr Berners-Lee.

Is This New?

The idea of some form of Charter or principles that limit or define the extent of potential governmental interference in the Internet is not new. Perhaps what is remarkable is that Berners-Lee, who has been apolitical and concerned primarily with engineering issues surrounding the Internet and the World Wide Web, has, since 2013, spoken out on concerns regarding the future of the Internet and fundamental governance issues.

Governing the internet is a challenging undertaking. It is a decentralised, global environment, so governance mechanisms must account for many varied legal jurisdictions and national contexts. It is an environment which is evolving rapidly – legislation cannot keep pace with technological advances, and risks undermining future innovation. And it is shaped by the actions of many different stakeholders including governments, the private sector and civil society.

These qualities mean that the internet is not well suited to traditional forms of governance such as national and international law. Some charters and declarations have emerged as an alternative, providing the basis for self-regulation or co-regulation and helping to guide the actions of different stakeholders in a more flexible, bottom-up manner. In this sense, charters and principles operate as a form of soft law: standards that are not legally binding but which carry normative and moral weight.

Dixie Hawtin in her article “Internet Charters and Principles: Trends and Insights” summarises some of the steps that have been taken:

“Civil society charters and declarations

John Perry Barlow’s 1996 Declaration of Cyberspace Independence is one of the earliest and most famous examples. Barlow sought to articulate his vision of the internet as a space that is fundamentally different to the offline world, in which governments have no jurisdiction. Since then civil society has tended to focus on charters which apply human rights standards to the internet, and which define policy principles that are seen as essential to fulfilling human rights in the digital environment. Some take a holistic approach, such as the Association for Progressive Communications’ Internet Rights Charter (2006) and the Internet Rights and Principles Coalition’s (IRP) Charter of Human Rights and Principles for the Internet (2010). Others are aimed at distinct issues within the broader field, for instance, the Electronic Frontier Foundation’s Bill of Privacy Rights for Social Networks (2010), the Charter for Innovation, Creativity and Access to Knowledge (2009), and the Madrid Privacy Declaration (2009).

Initiatives targeted at the private sector

The private sector has a central role in the internet environment through providing hardware, software, applications and services. However, businesses are not bound by the same confines as governments (including international law and electorates), and governments are limited in their abilities to regulate businesses due to the reasons outlined above. A growing number of principles seek to influence private sector activities. The primary example is the Global Network Initiative, a multi-stakeholder group of businesses, civil society and academia which has negotiated principles that member businesses have committed themselves to follow to protect and promote freedom of expression and privacy. Some initiatives are developed predominantly by the private sector (such as the Aspen Institute International Digital Economy Accords which are currently being negotiated); others are a result of co-regulatory efforts with governments and intergovernmental organisations. The Council of Europe, for instance, has developed guidelines in partnership with the online search and social networking sectors. This is part of a much wider trend of initiatives seeking to hold companies to account to human rights standards in response to the challenges of a globalised world where the power of the largest companies can eclipse that of national governments. Examples of the wider trend include the United Nations Global Compact, and the Special Rapporteur on human rights and transnational corporations’ Protect, Respect and Remedy Framework.

 Intergovernmental organisation principles

There are many examples of principles and declarations issued by intergovernmental organisations, but in the past year a particularly noticeable trend has been the emergence of overarching sets of principles. The Organisation for Economic Co-operation and Development (OECD) released a Communiqué on Principles for Internet Policy Making in June 2011. The principles seek to provide a reference point for all stakeholders involved in internet policy formation. The Council of Europe has created a set of Internet Governance Principles which are due to be passed in September 2011. The document contains ten principles (including human rights, multi-stakeholder governance, network neutrality and cultural and linguistic diversity) which member states should uphold when developing national and international internet policies.

National level principles

At the national level too, some governments have turned to policy principles as an internet governance tool. Brazil has taken the lead in this area through its multi-stakeholder Internet Steering Committee, which has developed the Principles for the Governance and Use of the Internet – a set of ten principles including freedom of expression, privacy and respect for human rights. Another example is Norway’s Guidelines for Internet Neutrality (2009) which were developed by the Norwegian Post and Telecommunications Authority in collaboration with other actors such as internet service providers (ISPs) and consumer protection agencies”

 

A Starting Point – Initial Thoughts.

So what would be a starting point for the development of an internet or digital bill of rights?

Traditionally the “Bill of Rights” concept has been to act as a buffer between overweening government power on the one hand and individual liberties on the other. The first attempt at a form of Bill of Rights occurred at the end of the English Revolution (1642 – 1689) and imposed limits upon the Sovereign’s power.

The Age of Enlightenment and much of the philosophical thinking that took place in the late 17th and early 18th centuries resulted in statements or declarations of rights: by the American colonies in the Declaration of Independence; by the United States in Amendments 1-10 to the Constitution (referred to as the Bill of Rights); and, following the French Revolution, in the 1789 Declaration of the Rights of Man and the Citizen.

An essential characteristic of these statements was to define and restrict the interference of the State in the affairs of individuals and to guarantee certain freedoms and liberties. It seems to me that an Internet Bill of Rights would set out and define individual expectations of liberty and non-interference on the part of the State within the context of the communications media made available by the Internet.

But the function of Charters has developed beyond these Age of Enlightenment approaches, especially with the development of global and transnational institutions. Hawtin notes that:

“Civil society uses charters and principles to raise awareness about the importance of protecting freedom of expression and association online through policy and practice. The process of drafting these texts provides a valuable platform for dialogue and networking. For example, the IRP’s Charter of Human Rights and Principles for the Internet has been authored collaboratively by a wide range of individuals and organisations from different fields of expertise and regions of the world. The Charter acts as an important space, fostering dialogue about how human rights apply to the internet and forging new connections between people.

Building consensus around demands and articulating these in inspirational charters provide civil society with common positions and tools with which to push for change. This is demonstrated by the number of widely supported civil society statements which refer to existing charters issued over the past year. The Civil Society Statement to the eG8 and G8, which was signed by 36 different civil society groups from across the world, emphasises both the IRP’s 10 Internet Rights and Principles (derived from its Charter of Human Rights and Principles for the Internet) and the Declaration of the Assembly on the Right to Communication. The Internet Rights are Human Rights statement submitted to the Human Rights Council was signed by more than 40 individuals and organisations and reiterates APC’s Internet Rights Charter and the IRP’s 10 Internet Rights and Principles.

As charters and principles are used and reiterated, so their standing as shared norms increases. When charters and statements are open to endorsement by different organisations and individuals from around the world, this helps to give them legitimacy and demonstrate to policy makers that there is a wide community of people who are demanding change.

While the continuance of practices which are detrimental to internet freedom indicates that these initiatives have not, so far, been entirely successful, there are signs of improvements. Groups like APC and the IRP have successfully pushed human rights up the agenda in the Internet Governance Forum. Other groups are hoping to emulate these efforts to increase awareness about human rights in other forums. The At-Large Advisory Committee, for instance, is in the beginning stages of creating a charter of rights for use within the Internet Corporation for Assigned Names and Numbers (ICANN).”

Part of the problem with the “Charter Approach” is that there may be a proliferation of such instruments or proposals that may have the effect of diluting the moves for a universal approach. On the other hand, charters or statements of principle of a high quality, with an acceptance that lends legitimacy, may be more likely to attract adoption and advocacy by a growing majority of stakeholders. Some charters may be applicable to local circumstances. Those with a specific international orientation will attract a different audience and advocacy approach. As I understand it, Berners-Lee is suggesting a combination of the two – an international statement of principle incorporated into local law, recognising differences in cultural and customary norms. In some respects his approach has an air of the EU approach, whereby an EU requirement is adopted into local law – often with a shift in emphasis that takes into account local conditions.

However, what must be remembered is the difficulty with power imbalances, where economically and politically powerful groups may drive a local (or even international) process. What is required is a meaningful multi-stakeholder approach that recognises equality of arms and influence. Hawtin also observes that with the proliferation of charters and principles, governments and corporates may “cherry pick” those standards which accord with their own interests. Voluntary standards also have difficulties with engagement and enforcement.

A Starting Point – A Possible Framework

Because the Internet is primarily a means of communication of information – it is not referred to as ICT, or Information and Communication Technology, for nothing – what is being proposed is an extension or redefinition of the rights of freedom of expression guaranteed in national and international instruments such as the First Amendment to the United States Constitution, section 14 of the New Zealand Bill of Rights Act 1990, section 2 of the Canadian Charter of Rights and Freedoms and Article 19 of the Universal Declaration of Human Rights, to mention but a few. Thus an Internet Bill of Rights would have to be crafted as guaranteeing aspects or details of the freedom of expression, although the freedom of expression right also has attached to it other collateral rights such as the right to education, the right to freedom of association (in the sense of communicating with those with whom one is associated), the right to full participation in social, cultural and political life and the right to social and economic development. Perhaps a proper focus for attention should be upon the Internet as a means of facilitating the freedom of expression right.

This approach was the subject of the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank LaRue, to the General Assembly of the United Nations, in August 2011.

In that Report he made the following observations:

14. The Special Rapporteur reiterates that the framework of international human rights law, in particular the provisions relating to the right to freedom of expression, continues to remain relevant and applicable to the Internet. Indeed, by explicitly providing that everyone has the right to freedom of expression through any media of choice, regardless of frontiers, articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights were drafted with the foresight to include and accommodate future technological developments through which individuals may exercise this right.

 15. Hence, the types of information or expression that may be restricted under international human rights law in relation to offline content also apply to online content. Similarly, any restriction applied to the right to freedom of expression exercised through the Internet must also comply with international human rights law, including the following three-part, cumulative criteria:

(a) Any restriction must be provided by law, which must be formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and must be made accessible to the public;

(b) Any restriction must pursue one of the legitimate grounds for restriction set out in article 19, paragraph 3, of the International Covenant, namely (i) respect of the rights or reputation of others; or (ii) the protection of national security or of public order, or of public health or morals;

 (c) Any restriction must be proven as necessary and proportionate, or the least restrictive means to achieve one of the specified goals listed above.

The issue of the potential human right of access to the Internet was covered in this way:

61. Although access to the Internet is not yet a human right as such, the Special Rapporteur would like to reiterate that States have a positive obligation to promote or to facilitate the enjoyment of the right to freedom of expression and the means necessary to exercise this right, which includes the Internet. Moreover, access to the Internet is not only essential to enjoy the right to freedom of expression, but also other rights, such as the right to education, the right to freedom of association and assembly, the right to full participation in social, cultural and political life and the right to social and economic development.

 62. Recently, the Human Rights Committee, in its general comment No. 34 on the right to freedom of opinion and expression, also underscored that States parties should take all necessary steps to foster the independence of new media, such as the Internet, and to ensure access of all individuals thereto.

 63. Indeed, given that the Internet has become an indispensable tool for full participation in political, cultural, social and economic life, States should adopt effective and concrete policies and strategies, developed in consultation with individuals from all segments of society, including the private sector as well as relevant Government ministries, to make the Internet widely available, accessible and affordable to all.

In locating an Internet Bill of Rights within the concept of the freedom of expression, one must be careful to ensure that by defining subsets of the freedom of expression right, one does not impose limitations that may impinge upon the collateral rights identified by Mr LaRue.

Having made that observation, it is important to recall that an Internet Bill of Rights could guarantee the independence and neutrality of the means of communication – the Internet – and prohibit heavy-handed secretive surveillance and intrusive interference with that means of communication. Whilst it is acknowledged that there is a need for meaningful laws to protect the security of citizens both individually and as a group – and Mr LaRue recognises justified limitations on the freedom of expression in areas such as child pornography, direct and public incitement to commit genocide, advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, and incitement to terrorism – such laws cannot be intrusive into areas such as privacy or private activity and communication.

One of the problems about regulating the Internet – or indeed preventing the regulation of the internet – is to understand how it is used by end users. In the United States, Representative Issa (R-CA) and Senator Wyden (D-OR) developed an idea for a Digital Bill of Rights based upon ten principles:

  1. Freedom – The right to a free and uncensored Internet.
  2. Openness – The right to an open, unobstructed Internet.
  3. Equality – The right to equality on the Internet.
  4. Participation – The right to gather and participate in online activities.
  5. Creativity – The right to create and collaborate on the Internet.
  6. Sharing – The right to freely share their ideas.
  7. Access – The right to access the Internet equally, regardless of who they are or where they are.
  8. Association – The right to freely associate on the Internet.
  9. Privacy – The right to privacy on the Internet.
  10. Property – The right to benefit from what they create.

 

The Issa/Wyden categories are helpful in some respects, again as a starting point. One of the most significant things about their proposal lies not so much in the categorisation itself but in the observation that the way the Internet is used, within the wider activity of communication and social activity, must be understood.

Many of the Issa/Wyden principles are in fact subsets of the right to free expression. Within the right to free expression there is a right not only to the means of expressing an opinion – described in s. 14 of the New Zealand Bill of Rights Act 1990 as the right to impart information – but also the right to receive it.

The wording of the concept of “participation” in the Issa/Wyden proposal is important and in some respects reflects the LaRue concept of association within the Internet space. One must be careful, as Issa and Wyden have been, to ensure that the concepts applicable to the Internet space as a means of communication remain in view.

Expressions in favour of an Internet Bill of Rights have been put forward on the basis that the digital economy requires a reliable set of laws and procedures whereby individuals and corporations may do business and promote innovation. It is suggested that an Internet Bill of Rights could well establish a nation that enacted and guaranteed such rights as an innovative place within the digital environment – one which would guarantee citizens’ privacy and promote a digital economy. It may support a vision for a country as a data haven where people and businesses can have confidence that they have sovereignty over, and unfettered ownership of, their data and that it will be protected.

Stability and certainty, particularly within the commercial environment, are necessary prerequisites for flourishing commercial activity.  I wonder, however, whether or not the concept of an Internet Bill of Rights fits comfortably within the “nation state” model of a secure, predictable and certain place where people can do business.

The Internet Bill of Rights ideally would guarantee certain national minimum standards for Internet activity that could be mirrored worldwide. Examples of digital paradigm legislation which attempt to harmonise principles transnationally may be found in New Zealand in the Electronic Transactions Act, which has its genesis in international conventions, and the Unsolicited Electronic Messages Act, where the principles applied in similar legislation in Australia favour a particular opt-in model for the continued receipt of commercial electronic messages. Legislation in the United States (the CAN-SPAM Act) favours an opt-out approach based upon constitutional imperatives surrounding the First Amendment. Differing approaches to spam control based on local legal or cultural imperatives provide a good example of the difficulty in achieving international harmonisation of national laws.

It was suggested by Issa and Wyden that it was necessary for there to be an understanding of the Internet and how it is used. I suggest that in considering an Internet Bill of Rights the enquiry must go further. Not only must there be an understanding of how the Internet is used but also of how it works. Essentially this involves a recognition of the paradigmatic differences between the models and styles of communications media that existed before the Digital Age, and an understanding of the way in which the qualities, properties or, as one writer has put it, affordances of digital technologies work.

One of the present qualities of digital technologies, and particularly of the internet, is that of “permissionless innovation” – the ability to “bolt on” to the Internet backbone an application without seeking permission from any supervising or regulatory entity. This concept is reflected in items 2, 5, 6 and 10 of the Issa/Wyden list of rights. Permissionless innovation is inherent within digital technologies only because it is the existing default position, and one which could well change depending upon the level of government interference. Thus if one were to maintain net neutrality, integrity and the importance of innovation, the concept of permissionless innovation would have to be endorsed and protected.

A further matter to be considered is the way in which these various characteristics, affordances, properties or qualities impact upon human behaviour and upon expectations of information. Our current expectations relating to information – its use, availability, dynamic quality, accessibility and searchability – all impact upon our behaviours and responses within the context of the act of communication. “Information now” – an expectation of an immediate reply, an expectation of immediate access 24/7 – has developed as the result of the inherent and underlying properties of digital communication systems enabled by the Internet, email, instant messaging, internet telephony, Skype, mobile phone technology or otherwise.

The problem with the Issa/Wyden proposal is that it is cast within the very wide framework of guarantees for individual liberties. In this respect it reflects traditional “rights” instruments as being a definition of the boundaries between the individual and the State. In addressing the Internet – a medium of communication – there are some difficulties in this approach. Of the items that they identify, those of openness, freedom and access might be the focus of attention of an Internet Bill of Rights. The other aspects deal with issues that inhabit the content layer, yet the technological layers are the ones that are really the subject of potential threat from the State. The objective is summed up by InternetNZ, which seeks an open and uncapturable Internet. This objective recognises the medium rather than the message that it conveys. But by the same token, the medium is critical as a means of fostering the guarantee of freedom of expression.

Moving Forward

It seems to me that the proper focus of an Internet Bill of Rights is the technology that is the Internet. Berners-Lee recognises this when he refers to “net neutrality”, a term that is capable of a number of meanings. What must be guaranteed and recognised by States is that the means of communication must be left alone and should not be the subject of interference by domestic legal processes. An open and uncapturable Internet cannot be compromised by local rules governing technical standards which have worldwide application. It is perhaps this global aspect that confounds a traditional approach to Internet regulation: although it is possible for there to be local rules that interfere with Internet functionality, there should not be, given that such rules may impact upon the wider use of the Internet. Local interference with engineering or technical standards may have downstream implications for overall Internet use by those who are not subject to those local rules.

Recent efforts by the ITU to establish some form of regulatory or governance structure – allowing government restriction or blocking of information disseminated via the internet, and creating a global regime of monitoring internet communications, including the demand that those who send and receive information identify themselves – would have wide-ranging implications for Internet use. The proposal would also have allowed governments to shut down the internet if there is a belief that it may interfere in the internal affairs of other states, or that information of a sensitive nature might be shared. Although some of the proposals suggested less US control over the Internet – and the disengagement of the US Department of Commerce from involvement with ICANN is forthcoming – it is nevertheless of concern that wider interference with Internet traffic should be seriously proposed under the umbrella of an agency whose brief is essentially directed towards the efficient functioning of communications networks, rather than obstructing them.

That there is such an appetite for regulation and control present at an international forum is a matter of concern and probably underscores an increased urgency for a rights-based solution to be put in place.

There are two main areas where a Bill of Rights for the Internet could be explored. One is through the Internet Society operating as an umbrella for those that make up the Internet Ecosystem, including:

Technologists, engineers, architects, creatives, organizations such as the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) who help coordinate and implement open standards.

Global and local organizations that manage resources for global addressing capabilities such as the Internet Corporation for Assigned Names and Numbers (ICANN), including its operation of the Internet Assigned Numbers Authority (IANA) function, Regional Internet Registries (RIR), and Domain Name Registries and Registrars.

Operators, engineers, and vendors that provide network infrastructure services such as Domain Name Service (DNS) providers, network operators, and Internet Exchange Points (IXPs).

The other is the Internet Governance Forum, whose mission to “identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations” ideally encompasses discussions and recommendations around an Internet Bill of Rights. It seems to me that the development of a means by which the technical infrastructure of the Internet and the standards that underlie it – which have been in the hands of the IETF and the W3C – remain open, free and uncapturable should have some priority.

These are organisations that could properly address issues of how to maintain the neutrality and integrity of the engineering and technical aspects of the Internet – to identify and articulate, from a principled position, the technical aspects of the Internet that require protection by a statement of rights (which would be a non-interference approach), coupled with the definition of the technological means that can be employed to ensure the protection of those rights.

The objection to such a proposal would be that all power would rest with the engineers, but given that the principal objective of an engineer is to make things work, that can hardly be a bad thing. Maintaining a system in good working order would be preferable to arbitrary and capricious interference with the mechanics of communication by politicians or organs of the State.

This is a project that will have to be developed carefully and analytically, to ensure that what we have now continues and is not subverted or damaged, and that the potential it may have for humanity in the future as a means of relating to one another is not compromised. It seems to me that protection of the technology is the means by which Berners-Lee’s goal of net neutrality may be maintained.

 

David Harvey

12 May 2014

On-Line Speech Harms

A Sketch of Issues to be Considered in Legislating for the Digital Paradigm

This is a paper that was presented to the Bullying, Young People and the Law Symposium held under the auspices of the Alannah and Madeline Foundation in Melbourne, 18 – 19 July 2013. It was part of a New Zealand contribution to the symposium by Cate Brett of the Law Commission, Martin Cocker of Netsafe and the author. The presentation accompanying this paper may be found here.

Introduction

This paper argues that legislating for behaviour in the digital environment raises unique issues. Whereas legislating for the physical world has certain architectural and physical constraints, such constraints may not be present in the digital space, or may be so paradigmatically different that new considerations need to be employed. This paper considers firstly the qualities and properties of digital technologies that provide challenges for conventional legal processes. It then goes on to consider the New Zealand Law Commission proposals to deal with on-line speech harms and any limitations on the effectiveness of those provisions. It concludes with some thoughts about the application of values developed within one paradigm to those who live in another.

The Digital Paradigm

Marc Prensky, an American educator, spoke of the issues confronting education in the digital paradigm. He suggested that there was a growing culture of people who had grown up knowing nothing but the Internet, digital devices and seeking out information on-line. This group he called “Digital Natives” – those born after 1990. He contrasted this class with “Digital Immigrants” – those who had developed their information-seeking habits and uses before the advent of the Internet. Digital Immigrants used digital communications systems but their thought processes were not as committed to them as those of Digital Natives. Although they could speak the same language as the Digital Natives, they had a different accent that derived from an earlier information paradigm.

Digital Immigrants have an approach to information that is based upon sequential thinking, single tasking and limited resources to enable communication, all underpinned by the fixity of text. For the Digital Immigrant text represents finality. A book is not to be reworked, and the authority of a text depends upon its finality.[1] Information is presented within textual constraints that originate in the Print Paradigm.

Digital Natives inhabit a different information space. Everything is “multi” – multi-resource, multi-media, multi-tasking, parallel thinking. Information for the Digital Native may in its first instantiation be text but it lacks the fixity of text, relying rather on the dynamic, fluid, shifting qualities of the digital environment. Text does not mean finality. Text is malleable, copyable, moveable and text, like all other forms of information in the digital space, is there to be shared.

In the final analysis, the fundamental differences between Digital Immigrants and Digital Natives can be reduced to one fundamental proposition – it’s all about how we process information. For Digital Natives the information resources are almost without limitation, and the Digital Native mind shifts effortlessly between text, web-page hypertext links, YouTube clips, Facebook walls, Flickr and Tumblr, the terse, abbreviated tweet or text message – and all of it not on a desktop or a laptop but a handheld smartphone.

But there is more to this discussion than the content that media convergence enabled by digital technologies provides. Content, as McLuhan said, is “the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”[2] It is as important to understand how it is that digital information technologies work. We need to understand the underlying qualities or properties of digital technologies to understand the way in which they drive our information uses, activities and behaviours. Permit me a brief digression while I offer an example.

Information Technology Properties – The Printing Press

In her seminal work on the printing press – The Printing Press as an Agent of Change – Elizabeth Eisenstein identified six fundamental qualities introduced by print technology that dramatically challenged the way in which the scribal culture produced texts. These particular qualities were the enablers that underpinned the distribution of content that enhanced the developing Renaissance, that spread Luther’s ninety-five theses around Germany in the space of two weeks from the day that they were nailed to the church door at Wittenberg, and that allowed for the wide communication of scientific information, enabling experiment, comment, development and what we now know as the Scientific Revolution.

And it also happened in my own field, the law. Within 300 years of the introduction of the printing press by Gutenberg, the oral-memorial, customary-based, ever-changing law had to be recorded in a book for it to exist.

It would be fair to remark that Eisenstein’s approach was and still is contentious. But what is important is her identification of the paradigmatic differences between the scribal and print cultures based upon the properties or qualities of the new technologies. These qualities were responsible for the shift in the way that intellectuals and scholars approached information.

There were six features or qualities of print that significantly differentiated the new technology from scribal texts.

a) dissemination

b) standardisation

c) reorganization

d) data collection

e) fixity and preservation

f) amplification and reinforcement.

For example, dissemination of information was increased by printed texts not solely by volume but by way of availability, dispersal to different locations and cost. Dissemination allowed a greater spread of legal material to diverse locations, bringing legal information to a wider audience. The impact upon the accessibility of knowledge was enhanced by the greater availability of texts and, in time, by the development of clearer and more accessible typefaces.

Standardisation of texts, although not standardisation as it is understood by modern scholars, was enabled by print. Every text from a print run had identical or standardised content. Every copy had identical pagination and layout, along with identical information about the publisher and the date of publication. Standardised content allowed for a standardised discourse. In the scribal process errors could be perpetuated by copying, and frequently additional errors occurred in the course of that process. The omission of a word by a compositor, by contrast, was a “standardised” error of a kind that did not occur in the scribal culture, but it had a different impact and could be “cured” by the insertion of an “errata” note before the book was sold. Yet standardisation itself was not an absolute, and the printing of “errata” was not the complete answer to the problem of error. Interaction on the part of the reader was required to insert the “errata” at the correct place in the text.

In certain cases print could not only perpetuate error but it could be used actively to mislead or disseminate falsehood. The doubtful provenance of The Compleate Copyholder attributed to Sir Edward Coke is an example.[3] Standardisation, as a quality of print identified by Eisenstein, must be viewed in light of these qualifications.

Print allowed greater flexibility in the organization and reorganization of material and its presentation. Material was able to be better ordered using print than in manuscript codices. Innovations such as tables, catalogues, indices and cross-referencing material within the text were characteristics of print. Indexing, cross-referencing and ordering of material were seized upon by jurists and law printers.

Print provided an ability to access improved or updated editions with greater ease than in the scribal milieu by the collection, exchange and circulation of data among users, along with the error trapping to which reference has been made. This is not to say that print contained fewer errors than manuscripts. Print accelerated the error making process that was present in the scribal culture. At the same time dissemination made the errors more obvious as they were observed by more readers. Print created networks of correspondents and solicited criticism of each edition. The ability to set up a system of error-trapping, albeit informal, along with corrections in subsequent editions was a significant advantage attributed to print by the philosopher, David Hume, who commented that “The Power which Printing gives us of continually improving and correcting our Works in successive editions appears to me the chief advantage of that art.”[4]

Fixity and preservation are connected with standardisation. Fixity sets a text in place and time. Preservation, especially as a result of large volumes, allows the subsequent availability of that information to a wide audience. Any written record does this, but the volume of material available and the ability to disseminate enhanced the existing properties of the written record. For the lawyer, the property of fixity had a significant impact.

Fixity and the preservative power of print enabled legal edicts to become more available and more irrevocable. In the scribal period Magna Carta was published (proclaimed) bi-annually in every shire. However, by 1237 there was confusion as to which “Charter” was involved. In 1533, by looking at the “Tabula” of Rastell’s Grete Abregement of the Statutys a reader could see how often it had been confirmed in successive Royal statutes. It could no longer be said that the signing of a proclamation or decree was following “immemorial custom”. The printed version fixed “custom” in place and time. In the same way, a printed document could be referred to in the future as providing evidence of an example which a subsequent ruler or judge could adopt and follow. As precedents increased in permanence, the more difficult it was to vary an established “custom”. Thus fixity or preservation may describe a quality inherent in print as well as a further intellectual element that print imposed by its presence.

Although Eisenstein’s work was directed more towards the changing intellectual environment and activity that followed the advent of printing and printed materials, it should not be assumed that printing impacted only upon intellectual elites. Sixteenth and seventeenth century individuals were not as ignorant of their letters as may be thought. There are two aspects of literacy that must be considered. One is the ability to write; the other being the ability to read. Reading was taught before writing and it is likely that more people could read a broadside ballad than could sign their names. Writing was taught to those who remained in school from the ages of seven or eight, whereas reading was taught to those who attended up until the age of six and then were removed from school to join the labour force. Print made information more available to ordinary people who could read.

Another thing that we must remember is that media work on two levels. The first is that a medium is a technology that enables communication; the tools that we have to access media content are the associated delivery technologies.

The second level, and this is important, is that a medium has an associated set of protocols – social and cultural practices, including the values associated with information – that have grown up around the technology. Delivery systems are just machines, but the second level generates and dictates behaviour.[5]

Eisenstein’s argument is that when we go beneath the delivery system and look at the qualities or properties of a new information technology, we are considering what shapes and forms the basis for the changes in behaviour and in social and cultural practices. The qualities of a paradigmatically different information technology fundamentally change the way that we approach and deal with information. In many cases the change will be slow and imperceptible. Adaptation is usually a gradual process. Sometimes, subconsciously, the changes in the way that we approach information change our intellectual habits. Textual analysis had been an intellectual activity since information was first recorded in textual form. I contend that the development of principles of statutory interpretation, a specialised form of textual analysis, followed Thomas Cromwell’s dissemination and promulgation of the Reformation statutes, complete with preambles, in print.[6]

From all this it would be fair to ask – what’s the difference? What’s changed? All we’ve got is a bunch of machinery that allows us to do what we have always done: read, watch movies, and do the same things that we did with radio or television – the only difference being that it has all been brought together; there has been a convergence of the various delivery systems. And on the surface that is perfectly correct, because what you are talking about there is content. You are talking about the material that is delivered rather than looking at the delivery system.

The Medium Is….

Once we recognise that there are properties underlying an information technology that influence the way in which we address content, and that govern or moderate information activities, we begin to understand what Marshall McLuhan meant by his aphorism “The Medium is the Message.” Understanding the medium and the way it governs and moderates information activities allows us to understand the impact of digital communications technologies – a convergence of everything that has gone before – and the way in which they redefine the use of information: the way we access it, process it, use it, respond to it, and our expectations of it and of its availability.

The Properties of Digital Communications Technologies

Many of the properties that Eisenstein identified for print are present in digital technologies. Every new information technology – and this has been the case from the printing press onwards – has its own particular properties or qualities that significantly differentiate it from other earlier information technologies.

The properties that I identify are not an exclusive list. The identification of the properties or qualities of digital information technologies is very much a work in progress, but these are the ones that occur to me. Some of them have already been reflected in the preceding discussion, and I give a very brief description of what each property means. A more detailed analysis has yet to be developed.

  • Persistence – summed up in the phrase “the document that does not die” – once information is on the Internet it is more likely than not to remain there.

  • Continuing change – what could be called the disruptive element. Continuing disruptive change is a characteristic of the digital space; the idea of a “breathing space” between times of accelerated change no longer exists. This quality is linked to “permissionless innovation” below.

  • Delinearisation of information – in essence, the effect of hypertext links that allow and enable thinking to follow other than a strictly logical sequence, branching off into related (sometimes tenuously related) areas of information.

  • Dynamic information – the ability to cut, paste, alter, change and modify text once it has been placed in digital format – exemplified by the ability of on-line newspapers to update stories or significantly alter them as new information comes to hand.

  • Dissociative enablement – the ability to sit behind a screen and say and do things that one would never contemplate face to face or in “meat space”.

  • Permissionless innovation – you don’t need to ask to put a new tool or protocol on the Internet. Sir Tim Berners-Lee didn’t need anyone’s permission to bolt the World Wide Web onto the Internet; nor did Mark Zuckerberg with Facebook, Sergey Brin and Larry Page with Google, Jeff Bezos with Amazon or Jack Dorsey with Twitter. “If you build it, they will come” sums up this quality.

  • Availability – information comes to the user. The print paradigm localised book-based information in a library or a bookshop. The Internet brings information directly into the home.

  • Participation – a very wide concept which includes information and file sharing as well as the ability to comment on blog sites, post photos on Facebook, engage in Twitter exchanges, participate in IRC chatrooms and break new stories via a blog.

  • Searchability – related to the next quality, this is the first step in the information recovery process. A common feature of the Internet, both before it went commercial and thereafter, has been the effort to make some sense of the vast amount of information that is available. From Gopher to Google the quest for making information findable has been a constant, and it enables users to locate what they are looking for.

  • Retrievability – once the successful search has been carried out, the information can be readily and immediately obtained – associated with availability above.

This means that the information expectations of Digital Natives have been shaped and moulded by these qualities. Their uses and expectations of what happens in the on-line world are quite different to those of their parents (Digital Immigrants) or those of my generation (Digital Aliens). Thus any solution to on-line problems must be premised upon an understanding of the technology and the way that it shapes behaviours and values underlying those behaviours. The solution must also recognise another McLuhan aphorism – we shape our tools and thereafter our tools shape us.[7]

This of course gives rise to the question of whether or not the internet changes us forever. Underlying this theory is the concept of neuroplasticity – the ability of the brain to adapt to and learn from new stimuli. The concept of neuroplasticity was picked up by Nicholas Carr in his book The Shallows: How the Internet is Changing the Way We Think, Read and Remember.[8] His book, based upon an earlier article that appeared in the Atlantic, has as its thesis that the internet is responsible for the dumbing down of society, based upon the way in which our minds respond both to the wealth of information and to its availability.

The neuroplasticity argument is picked up by Susan Greenfield,[9] who believes the web is an instant gratification engine, reinforcing behaviours and neuronal connections that are making adults more childlike and leaving children hungry for information presented in a highly simplistic way that in fact reduces their understanding of it. Greenfield is of the view that the web spoon-feeds us things to capture our attention. This means we are learning constantly to seek out material that stimulates us, and our plastic minds are being rewarded for our “quick click” behaviour. We want new interactive experiences and we want them now.

This view is disputed by Aleks Krotoski,[10] who first observed that there is no evidential support for Greenfield’s propositions, which presuppose that once we have used the web we will be forever online and never log off again. According to Greenfield, says Krotoski, we become connected to our computers and other devices in a co-dependent, exclusive, almost biological way, ignoring where, how and why we are connecting. Krotoski, for example, disputes internet addiction, internet use disorder and neurological rewiring.

In some respects Carr and Greenfield are using the “low hanging fruit” of technological fear[11] to advance their propositions. Krotoski’s rejection of those views is, on the other hand, a little too absolute, and in my view the answer lies somewhere in between. The issue is a little more nuanced than whether or not the Internet is dumbing us down, or whether or not there is any evidence of that.

My argument is that the impact of the internet lies in the way in which it redefines the use of information and the way we access it, process it, use it, respond to it and our expectations of it and its availability.

This may not seem as significant as Carr’s rewiring or Greenfield’s neuroplasticity but it is, in my view, just as important. Our decision making is based upon information. Although some of our activity could be termed responses to stimuli, or indeed might be instinctive, most of the stimuli to which we respond – if not all of them – can in fact be defined as information. The information that we obtain when crossing the road comes from our senses of sight and hearing, but many of our other activities require information upon which we may deliberate and to which we respond in making decisions about what we are going to do, buy and so on.

And paradigmatically different ways of acquiring information are going to change the way in which we use and respond to information. There are other changes taking place that arise from some of the fundamental qualities that underlie new digital communications technologies – and all communication technologies have such properties or qualities underlying and attaching to them, from the printing press through to radio and television and into the digital paradigm. It is just that digital systems are so fundamentally different in the way in which they operate, and in their pervasive nature, that they usher in a new paradigm.[12]

Looking at Solutions

Thus if we seek a solution to some of the problems that involve Internet-based behaviour we must recognise these qualities and impacts of the digital communications technologies that underlie these behaviours. For example any solution must recognise:

    • The time factor – in “internet time” information moves faster than it does in the real world
    • Information is dynamic and spreads “virally”
    • “Dissociative enablement” means that people are going to behave differently when operating from the apparent anonymity of a private room or space and from behind a computer screen
    • Any remedy is going to be partial – given that information on the internet is going to remain in some shape or form (the quality of persistence or “the document that does not die”)
    • Normal civil and political rights, including a robust recognition of freedom of speech and expression, and the neutrality of the internet.
    • Restrictions on a free and open internet must be minimal.

The New Zealand Solution

The New Zealand solution set out in the Digital Speech Harms paper from the Law Commission takes a two-pronged approach. One prong involves the creation of a new offence. The other is a fast-track civil solution involving the creation of a Communications Tribunal.

A New Offence

The Law Commission considers that causing harm by the use of a communications device should be criminalised. The first thing that must be recognised is that the use of a communications device is not itself criminalised, nor should this be seen as an attempt to regulate the Internet. What is being addressed is behaviour involving the use of a communications device that causes harm to another.

The proposed language of the offence is as follows:

Causing harm by means of communication device

(1) A person (person A) commits an offence if person A sends or causes to be sent to another person (person B) by means of any communication device a message or other matter that is—

 (a) grossly offensive; or

 (b) of an indecent, obscene, or menacing character; or

 (c) knowingly false.

 (2) The prosecution must establish that—

 (a) person A either—

 (i) intended to cause person B substantial emotional distress; or

 (ii) knew that the message or other matter would cause person B substantial emotional distress; and

 (b) the message or other matter is one that would cause substantial emotional distress to someone in person B’s position; and

 (c) person B in fact saw the message or other matter in any electronic media.

 (3) It is not necessary for the prosecution to establish that the message or other matter was directed specifically at person B.

(4) In determining whether a message or other matter is grossly offensive, the court may take into account any factors it considers relevant, including—

 (a) the extremity of the language used:

 (b) the age and characteristics of the victim:

 (c) whether the message or other matter was anonymous:

 (d) whether the message or other matter was repeated:

 (e) the extent of circulation of the message or other matter:

 (f) whether the message or other matter is true or false:

 (g) the context in which the message or other matter appeared.

 (5) A person who commits an offence against this section is liable to imprisonment for a term not exceeding 3 months or a fine not exceeding $2,000.

(6) In this section, communication device means a device that enables any message or other matter to be communicated electronically.

The message set out in subsection (1) has to pass a very high threshold. Similarly the intention test in subsection (2) is high and the criteria in subparagraphs (a) – (c) are conjunctive. Each one must be proven to the criminal standard. Subsection (4) sets out matters that a Court may take into account, but these criteria are not exclusive.
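The conjunctive structure of the proposal can be illustrated with a short sketch. This is purely illustrative – the function names and boolean inputs are my own invention and carry none of the evidential and interpretive complexity of an actual prosecution – but it shows how the qualities in subsection (1) operate as alternatives, while failure of any single element under subsection (2) defeats the charge.

```python
# Hypothetical model of the proposed offence's structure.
# All names are illustrative only, not drawn from any statute.

def message_qualifies(grossly_offensive, indecent_obscene_menacing, knowingly_false):
    """Subsection (1): the message need have only ONE of these qualities."""
    return grossly_offensive or indecent_obscene_menacing or knowingly_false

def offence_proven(message_quality, intended_or_knew_distress,
                   would_distress_person_in_b_position, b_saw_message):
    """Subsection (2): criteria (a)-(c) are conjunctive.
    Every element must be proven to the criminal standard;
    failure of any single one defeats the charge."""
    return (message_quality
            and intended_or_knew_distress
            and would_distress_person_in_b_position
            and b_saw_message)

# A grossly offensive message sent with intent to distress, but which
# the complainant never in fact saw (subsection (2)(c)): no offence.
print(offence_proven(message_qualifies(True, False, False), True, True, False))  # False
```

On this model, even a grossly offensive message sent with the requisite intent is not an offence if the complainant never in fact saw it, which reflects the high threshold described above.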

One matter that must be taken into account is that the section would have to be interpreted and applied in accordance with the provisions of the New Zealand Bill of Rights Act 1990 (NZBORA). Now the section as it stands criminalises a certain quality of speech, thus engaging a consideration of the freedom of expression right guaranteed by s. 14 of NZBORA. That must take into account issues of a justified limitation upon the freedom of expression right. In my view the application of NZBORA would necessarily result in a very cautious approach by a Court. The evidence of the offending would have to be clear and unequivocal and could not really apply to a trivial matter.

A problem that arises with prosecutions for such an offence is the nature of the legal process, which rarely matches “Internet time”, and the fact that the section does not allow for the removal of any offending material, thus allowing the information to persist. The section addresses the behaviour, but the message may remain, preserved on the Internet.

The Communications Tribunal

The proposal for a Communications Tribunal, and for the powers and remedies that the Tribunal may bring into play, could well address some of the qualities of the digital environment, and possibly more effectively than a criminal prosecution, which, in my view, would be reserved only for the most extreme cases.

The Communications Tribunal:

a) would have a limited jurisdiction

b) could provide limited and specific remedies

c) would deal with content and not criminality

d) would operate “on the papers”

e) would be a remedy of last resort after a filtering process has been carried out by the Approved Agency

 

Communications Principles

On the face of it, the Communications Tribunal has some significant powers which, at first glance, appear to interfere dramatically with freedom of expression. The approach taken by the Tribunal must be within the context of the Communications Principles proposed by the Law Commission. These are:

Principle 1

A communication should not disclose sensitive personal facts about an individual.

Principle 2

A communication should not be threatening, intimidating, or menacing.

Principle 3

A communication should not be grossly offensive to a reasonable person in the complainant’s position.

Principle 4

A communication should not be indecent or obscene.

Principle 5

A communication should not be part of a pattern of conduct that constitutes harassment.

Principle 6

A communication should not make a false allegation.

Principle 7

A communication should not contain a matter that is published in breach of confidence.

Principle 8

A communication should not incite or encourage anyone to send a message to a person with the intention of causing that person harm.

Principle 9

A communication should not incite or encourage another person to commit suicide.

Principle 10

A communication should not denigrate a person by reason of his or her colour, race, ethnic or national origins, religion, ethical belief, gender, sexual orientation, or disability.

Matters that the Tribunal Would Have to Consider

In considering an application for relief the Tribunal would have to take into account the following:

(a) the content of the communication, its offensive nature, and the level of harm caused by it:

(b) the purpose of the communicator in communicating it:

(c) the occasion, context, and subject-matter of the communication:

(d) the extent to which the communication has spread beyond the original communicator and recipient:

(e) the age and vulnerability of the complainant:

(f) the truth or falsity of the statement:

(g) the extent to which the communication is of public interest:

(h) the conduct of the defendant, including any attempt by the defendant to minimise the harm caused:

(i) the conduct of the complainant, including the extent to which that conduct has contributed to the harm suffered.

The Law Commission also emphasised that in exercising its functions, the Tribunal should have regard to the importance of freedom of expression. Thus an analysis pursuant to the provisions of NZBORA would have to be undertaken.

The Orders that the Tribunal Might Make

(a) an order requiring that material specified in the order be taken down from any electronic media:

(b) an order to cease publishing the same, or substantially similar, communications in the future:

(c) an order not to encourage any other person to engage in similar communications with the complainant:

(d) a declaration that a communication breaches a communication principle:

(e) an order requiring that a factually incorrect statement in a communication be corrected:

(f) an order that the complainant be given a right of reply:

(g) an order to apologise to the complainant:

(h) an order requiring that the author of a particular communication be identified.

These orders or parts of them may apply to the following:

(a) the defendant:

(b) an internet service provider:

(c) a website host:

(d) any other person, if the Tribunal considers that the defendant is encouraging, or has encouraged, the other person to engage in offensive communication towards the complainant.

Transparency would be ensured in that the Tribunal must publish its decisions and the reasons for them. This is necessary because if there are to be interferences with freedom of expression, the reasons for such interference and its extent should be published and made known, to counter any suggestion of secret interference with freedom of speech.

As proposed, the Communications Tribunal would have the following advantages in dealing with on-line speech harms while at the same time recognising some of the disruptive qualities of the digital paradigm:

a) it would deal only with the most serious types of on-line speech harm, in that the Approved Agency would filter and deal with the majority of complaints.

b) it would provide a relatively swift response which would accord with “internet time” and at least attempt to mitigate some of the damage that could be done if the material in question was going, or was likely to go, viral. Having said that, the persistence of information on the Internet may well provide an element of frustration, but responding to the source of the speech harm is a significant first step.

c) it would involve an “on the papers” hearing, which would obviate the need for a full hearing with the parties present that would have to fit in around other Court work. This said, with modern technology such as Skype a “distributed hearing”, where the participants are located other than in the Court building, may be possible. New Zealand has specific legislation that allows this.[13]

d) it could provide a remedy by way of a take-down order, although it should be noted that the power would have to be exercised having regard to the freedom of expression provisions in NZBORA, and the correct analysis, based on a proportionality approach, would have to be undertaken.

e) an order of the Tribunal would constitute a Court order, which would receive recognition from providers such as Google or Facebook, and thereby the removal of offending content could be expedited.

The Present State of Play

The report has been received by the Minister. She has indicated that the recommendation for a Communications Tribunal will not be adopted; the proposed jurisdiction of the Tribunal will instead be assumed by the District Court.

Some of the issues that may arise, and should be addressed as the policy develops into a Bill, might include:

a) lack of specialist expertise in the field of digital communications law on the Bench and the need for specialised training

b) potential procedural delays if Communications complaints are subsumed as part of the normal Court process – a “fast track” may need to be considered

c) variation or possible lack of consistency in the application of principles and the types of orders that may be made

d) whether or not a process may be developed which takes into account the qualities and realities of the digital paradigm and which recognises that the nature of Internet-based communication is fundamentally different from, and potentially far more damaging than, conventional bullying “speech”.

One thing is clear: the activities of the Court in this area will be carefully scrutinised by lawyers, free speech advocates, Internet freedom advocates and the community in general.

A Cautionary Conclusion

There are some who follow the view of Edmund Burke – that each generation has a duty to succeeding generations. Because politics amounts to an intergenerational contract between one generation and the next, politicians should feel entrusted with the conservation of the past for future generations.

The problem is this – in a changing communications paradigm should digital immigrants tell digital natives how to live their lives in the digital environment?

The IT Countrey Justice

July 2013


[1] Ronald Collins and David Skover The Death of Discourse (Caroline Academic Press, Durham N.C. 2005)  p. xix. For a more detailed discussion of the difference between fixed and digital texts see Ronald Collins and David Skover “Paratexts” (1992) 44 Stanford Law Review 509.

[2] Marshall McLuhan Understanding Media: The Extensions of Man Critical Edition W Terrence Gordon (ed)(Gingko Press, Berkeley Ca 2003)

[3]The Compleate Copyholder (T. Coates for W Cooke, London,1641) Wing C4912.

[4] Cited by J.A. Cochrane Dr Johnson’s Printer: The Life of William Strahan (Routledge and K Paul, London, 1964) p.19 at n.2.

[5] Lisa Gitelman “Introduction: Media as Historical Subjects: in Always Already New: Media, History and the Data of Culture (MIT Press, Cambridge, 2008) p. 7.

[6] This is a very bald assertion. The argument is a little more nuanced and involves a consideration of the use of the printing press by Cromwell, the significant increase in legislative activity during the course of the English Reformation, the political and legal purpose of statutory preambles, the advantages of an authoritative source of law in printed form for governing authorities, all facilitated by underpinning qualities of print such as standardisation, fixity and dissemination.

[7] Marshall McLuhan Understanding Media: The Extensions of Man  above n. 2.

[8] (Atlantic Books, London 2010). See also Nicholas Carr “Is Google Making Us Stupid?” Atlantic Magazine 1 July 2008 http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ (last accessed 31 May 2013).

[9] See especially Susan Greenfield “Living On-line is Changing Our Brains” New Scientist, 3 August 2011 http://www.newscientist.com/article/mg21128236.400-susan-greenfield-living-online-is-changing-our-brains.html (last accessed 31 May 2013). For this, and for her assertions of “internet addiction”, she has been criticised by Dr. Ben Goldacre for claiming that technology has adverse effects on the human brain without having published any research, and for retracting some claims when challenged. Goldacre suggested that “A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper”: Ben Goldacre “Serious Claims Belong in a Serious Scientific Paper” The Guardian 21 October 2011 http://www.guardian.co.uk/commentisfree/2011/oct/21/bad-science-publishing-claims (last accessed 31 May 2013).

 

[10]Untangling the Web: What the Internet is Doing to You  (Faber, London 2013). Presentation by Aleks Krotoski at the Writers and Readers Festival, Auckland 19 May 2013. Personal discussion between the author and Aleks Krotoski 19 May 2013.

[11] Sometimes referred to as “The Frankenstein Complex”

[12] See above for some of the qualities of digital information technologies.

[13] The Courts (Remote Participation) Act 2010

Collisions in the Digital Paradigm: Information Rights and Copy Rights

A Sketch of Thoughts for the ADA Copyright Forum 2013

Judge David J. Harvey

A Judge of the District Court, New Zealand

 This discussion has been a developing project. It still has some way to go. It started as some notes for a keynote speech at the Australian Digital Alliance Forum on 1 March 2013 and formed the basis for a PowerPoint presentation together with some discussion points for a panel following the keynote.

 I had completed the notes for the keynote speech but was aware that the rights-based approach to gauging the applicability or strength of copyright protection required further development. I was fortunate enough to be invited to Kiwi Foo – a gathering of people to discuss issues of common interest organised by Nat Torkington and Russell Brown at Warkworth, north of Auckland, in the second weekend of February 2013. I took the opportunity to put the ideas before an audience and see what sort of reception they attracted and what further developments could take place. The session lasted for an hour although I am sure it could have lasted longer, but I was able to clarify some of my own thinking as well as benefitting from the collective wisdom of the group. I am very grateful to all those who attended the session and especially grateful to Lance Wiggs who recorded the various inputs and suggestions on a white board which I photographed with my iPad for further reference.

 I like to see how a proposal works and the Kiwi Foo session fed into that aspect of the development of this discussion and very much informed the latter part of this note where I move to consider how a rights-based approach to copyright would work.

 If this approach to copyright is to go further, much more work will need to be done to rigorously crystallise the basis for change (paradigmatic change in communications as a result of technology leads to changes in behaviours and values and their validity, which underpin the basis for rule making) and to examine the way in which a rights-based model may work. I see this as a collaborative undertaking and I welcome commentary and new ideas. It may well be that a rights-based model is not the way to go. An entirely different model or an entirely novel solution may emerge. But this is a debate worth having. Between 1695 and 1710 there was a debate about the way in which the trade and technology of printing should be governed. That debate culminated in the Statute of Anne and took place within the context of paradigmatic change in information communication by means of the first information technology. It seems appropriate that we address the issues of copyright anew in this time of paradigmatic change following the development of digital communications systems.

A copy of the Conference presentation (without media) may be found here Collisions in the Digital Paradigm Short

My keynote speech may be found on YouTube Here

 Introduction

 Copyright has collided with the digital paradigm and is in difficulty. There are reasons for this, and one of the principal ones is that copyright was developed under a different paradigm. But the copyright wars taking place at the moment are not new.  In fact they are part of a continuing story that goes right back to the advent of copyright.

In this discussion I shall outline some of the background to copyright. I argue that paradigmatic change challenges our assumptions about and expectations of information. I contend that the digital paradigm is so revolutionary that it undermines some of the values and assumptions that underlie traditional copyright thinking. There can be no doubt that there must be some protection for intellectual property rights. I will conclude by suggesting a possible approach.

Copyright has always been contentious. It creates tensions between content owners, who don’t think they have enough protection, and consumers, who think that content owners have too much protection. It is a tension as old as copyright itself. And although historically there have been examples of intellectual property protection before the Renaissance[1], the copyright debate began as a result of a relatively recent event in human intellectual history. Copyright is the child of the print paradigm. The printing press was the first information technology and it enabled revolutionary change in the way in which people approached and used information.

The printing press mechanised the production of text.  The paradigm that preceded it – what I refer to as the scribal culture – involved the creation of written information by hand.  The volume of written information was limited by the number of copies that were available.  There are a number of consequences of this.  One involved approaches to and expectations of information.  In many cases, because of the limited number of copies, information was located at a central point.  Scholars would necessarily have to travel to that information point, be it a library or a collection, where they could access the information, and return to their own homes to process it.  Necessarily they would take a copy of the information that they sought with them, transcribing it themselves.  This is the way in which information circulated in the pre-print world.  Copying was a reality.  It was the only way that information could be circulated and there was no concept of what we understand as “the copy right”.

Yet even the origin of our copyright has been contentious. Received wisdom suggests that it had its origins in the licensing regime that was part of the activities of the Stationers Company in England. I dispute this proposition.

The Stationers Company, Licensing and Industry Protection

When one carefully examines the activities of the Stationers Company, even before its incorporation in 1557, it is clear that its focus was directed towards the objective of industry protection for the benefit of its members and the control of the new means of reproducing information.[2]

The Stationers were a craft guild and had been in existence from the latter part of the 14th century.  Originally their guild incorporated everyone who was involved in the creation and manufacture of books.  Stationers were just one arm of the book production operation.  Primarily their role was in the sale and distribution of books that had been copied by scribes, illustrated by limners and bound by binders.  Indeed the formation of the guild suggests that the book trade was well developed and sufficiently competitive to make an early form of governance desirable. Guilds played a significant part in the economic and political life of a city, ensured that proper training for apprentices was undertaken and had a hierarchy of expertise within the guild itself.[3]

By their very nature the Stationers were interested in protecting their craft for the benefit of members of the guild and excluding from the practice of the craft those who weren’t.  Once the printing technology arrived, and after the Stationers were incorporated in 1557, the importance of this protection was enhanced.  After all, anyone who had the capital to obtain a printing press could set up in business as a printer, unsupervised by the Stationers, challenging their monopoly on the production of books and adversely impacting the financial and economic welfare of members of the Stationers Company.  The Stationers authorised certain printers to have the exclusive rights of printing certain books and these were registered in the Stationers Company register.

Now all this may be seen as a form of copyright, but in fact it was a means of ensuring that only members of the Stationers Company printed books. Any book that had not been registered with the Stationers Company could, at least prima facie, be viewed as the work of a non-member whose activity should be suppressed. After incorporation the Stationers Company was vested with considerable powers to ferret out printers who were not members of the company.

Alongside Stationers Company licensing was the grant of privileges by the Crown to certain printers to have the exclusive rights to print certain works.  This was done by means of a Royal Patent.  These patents could be very valuable.  The patent, for example, that allowed a printer to print a prayer book was extremely valuable because nobody else could print it.  A prayer book was essential in a society where church attendance was compulsory. The patent that was granted to Richard Totell to print common law books essentially meant that Totell had a monopoly over legal publishing for the latter half of the 16th century.

Because patents were an exercise of royal prerogative power, any disputes over the scope of patents would be litigated in the prerogative Court of Star Chamber.  Now it must be remembered that this litigation had nothing to do with authors’ rights but everything to do with the protection of the publisher and the developing industry. The Star Chamber Decrees of 1587 and 1634, which according to many commentators were directed more towards censorship than anything else, are in fact industry control mechanisms that arose out of litigation about patents, their infringement and scope.[4]  In fact if one considers carefully the background to the litigation, the enquiries that were carried out in the late 1570s and early 1580s and the concerns of the Stationers about “disorders” in the printing trade, it becomes abundantly clear that the Stationers were interested in keeping their monopoly over the use of the new information technology, excluding non-members from its use, and ensuring that members of the company received the economic benefits from it.

The disruptions of the Civil War from 1642 through to the Restoration meant essentially that there was a hiatus in the development of printing controls.  Following the Restoration a very rigorous system of print licensing, directed as much towards content as it was towards industry monopoly and control, followed the enactment of the Licensing Act 1662. The Act was enforced by the Stationers – continuing their control over the industry – and was renewed biennially through until 1694, when the licensing rules came to an end.

For a period of 15 years there was a debate about the control of publication of printed works. The focus of the debate began to shift from the publishers to the authors. The writers Jonathan Swift and Daniel Defoe were among the advocates for the author’s right to receive remuneration from the sale of his work.  In 1710, after considerable lobbying and debate, the first copyright statute was enacted – the Statute of Anne – and this was directed towards the new information technology of printing.

Since then copyright has been inextricably tied up with information technologies. It is really based on the use of technology rather than any underlying “property” principles, although it has been dressed up as such.

Scribal Culture Co-existence

Nothing is said in the Statute of Anne about manuscript works and I think that we’ve got to remember that the scribal culture co-existed with the developing print culture for a considerable period of time.[5]  It wasn’t until the advent of the typewriter that the individually created handwritten document effectively came to an end.  But we must remember that content was still available in manuscript form. A fundamental aspect of the scribal culture was that copying was a reality and effectively the only means by which manuscript works were circulated.

There were a number of reasons for the continuing interest in manuscripts.  Within the area of legal writing most lawyers who subsequently had their works printed – like Edmund Plowden[6] and Sir Edward Coke[7] – circulated their works among coteries of friends or fellow professionals within the Inns of Court. Manuscript publishing was for limited audiences.[8]  Printing addressed mass production.  So the Statute of Anne in fact reflects a recognition of the values of two cultures and the qualities of the printing press that differentiated it from the manuscript culture.

 Copyright Wars

Following the Statute of Anne there was a continuing debate about copyright. Publishers looked to other theories to protect their exclusive right to produce content, arguing in Miller v Taylor[9] that there was a common law copyright, which the Court upheld but which was later overturned in Donaldson v Beckett.[10]

When one looks at the litigation that took place in the early days of copyright – Miller v Taylor, Donaldson v Beckett, Tonson v Collins[11] – we must ask ourselves whether or not any of the litigants were authors, and the answer is no.  The battle then, and almost exclusively since, has been contended, at least on one side, by the publishing and distribution conglomerates.

There is a reason for that.  Commercial copying and distribution, starting with the printing press, was and is a capital intensive business.  Printing, radio broadcasting, television broadcasting, sound recording, movies are all capital intensive and require large corporate structures, capital investment and financing to publish and distribute the works that the various technologies allow.

Because copyright has called itself technology neutral – a theory which I would dispute vigorously – the principles that were developed in the early years of copyright that underpin the Statute of Anne have remained – principles that had their grounding in print technology.

Essentially conglomerates or monolithic organisations could feel relatively comfortable about their control and dissemination of their content.  The first real challenge to capital intensive complacency came in the form of the photocopier – a cheap, available and accessible means to copy printed works. Although the photocopier was a product of analogue technologies, and was just another type of printing press, it was the first alarm bell for print based copyright. It was one of the first examples of the empowerment of individuals to access information other than through established commercial outlets.[12] With the onset of the digital revolution more and more means have become available for individuals to create their own content or to copy that of others.

The conglomerates and the copyright corporates recognise that the power balance has shifted as a result of the new technologies to the point where everyone is able to copy.

Yet the legal battles that have been waged recently reflect what happened in the early days of copyright – the litigation is at the urging of the corporate and conglomerates and authors don’t really seem to feature at all.  Examples may be found in the cases of A & M Records v Napster[13]; Recording Industry Association of America v Diamond Multi Media[14]; Universal City Studios v Reimerdes and Corley[15]; MGM Studios v Grokster[16]; Sony Computer Entertainment v Edmunds[17]; Sony v Ball[18]; Sony Music Entertainment Australia Ltd v University of Tasmania[19]; Sony v Stevens.[20]

In some cases the response of the conglomerates has been to try to shut down the technology altogether – to resist technological change by banning the technology, thus further emphasising the association of copyright with technology. This is an example of vested interest complacency and a failure to understand McLuhan’s view about rear-view-mirror thinking – by the time you recognise the problem caused by a new technology it is generally too late. Examples may be found in the Betamax case – Sony Corporation of America v Universal City Studios[21] – and in the English case about twin-reel cassette tape recorders – CBS Songs v Amstrad.[22]

 Every copyright statute has in it provisions about infringement. However, those infringement remedies can really only be sought if it is economically feasible to do so. In today’s digital environment the costs of litigation are too high to pursue individual infringers, so copyright conglomerates have managed to obtain an additional infringement remedy – graduated response regimes to deal with file sharing. Let’s be clear about a few things. The first is that copyright owners would have preferred a “guilt by accusation” system with a reverse onus on the alleged infringer. It is just another way of saying that everyone who has a computer, or who downloads, or who has a file locker in the Cloud is a pirate. That was made clear in the original s. 92A debacle in New Zealand. The second thing is that a graduated response regime is economically beneficial for copyright owners. In New Zealand complaints of infringement must be accompanied by a $25.00 fee – rather less than the cost of instructing a silk and instituting High Court infringement proceedings. Let us be under no illusion about this. The only ones who benefit from the graduated response regime are copyright owners, and the cost savings are significant.

 The Answer to the Machine……

One of the problems that copyright theory faces is that we are now in a new information paradigm – a paradigm that is as different from the print and analogue paradigms as printing was from the scribal culture.  New copying technologies and digital systems challenge existing copyright thinking because digital technologies work on a premise so fundamental that it strikes right at the heart of copyright: copying is necessary for digital technologies to work – they cannot function without copying.

It was this reality that prompted Charles Clark to comment “the answer to the machine is in the machine.”[23]

Essentially what Clark was saying was that the fundamental problems created by digital technologies have a solution within the technology itself.  Content owners could take control of the copying that was necessary to make digital technologies work.  Thus developed what Kirby J referred to as para-copyright[24] – the development of technological protection measures (TPMs) and the legal protection of those measures, which meant that attempts at circumvention, or the provision of means of circumvention of TPMs, were considered on a par with copyright infringement itself.

One of the unintended consequences of TPMs may be seen in the cases of Sony v Edmunds[25] and Sony v Ball[26] in England. These decisions opened the door to copyright by contract. Content owners could impose technological protection measures which could be circumvented if the approved equipment was used. In addition owners could impose standard terms and conditions of sale and could write their own copyright contract that went far and away beyond the careful balance that had been achieved in legislation.  The copyright owners’ dream in Miller v Taylor[27] was finally becoming a reality.

Para-copyright protections actually challenge the developing concepts of fair use and any other concepts that may develop in the digital environment.  TPMs can lock up content far beyond the copyright term.  They are indiscriminate in their prevention of copying and, although they may claim to have a focus on copy protection, many TPMs are in fact used for access protection as well – something of an anomaly in a global world, an anomaly perpetuated by the regionalisation of content via Netflix, Hulu, Amazon Music and iTunes.

Clark’s adage about the answer lying in the machine runs up against a problem. Machines don’t operate on their own.  Machines are meant to be servants of people, and challenging Clark is McLuhan’s concept of technology-induced behavioural change, based on another adage – first we shape our tools and thereafter our tools shape us.[28] And the digital tools that have developed and are developing have already begun that shaping process. I shall develop that argument shortly.

 Welcome to the Machine[29]……Digital Natives, Information Expectations and Frustrations

I make no secret of the fact that I am an adopter of digital technology – a digital immigrant.  I am speaking to you as one who was brought up in the print paradigm.  In my childhood the main means of communication of information apart from the spoken word was by print – books and newspapers or by radio.  I remember the introduction of television.  I have grown up with that medium.  And I have seen the wonderful developments that computer based and digital information technologies can provide.  And I am an enthusiastic adopter of those technologies. My children and grandchildren are digital natives. They will grow up in a world where digital technology always has been around. The idea of a single function telephone that can only be used for vocal communication would seem to be an outrage to them. They are aware of the capabilities and potentials of the new technology and have certain expectations of information that run up against copyright law.  They know that certain seemingly harmless things are feasible even if the law does not permit them.

Digital natives – and I shall have more to say about them shortly – view copyright theory and the values of copyright that developed in the pre-digital world as atrophied and outdated. The position has been made worse by the “commodification” or “walmartisation” of intellectual property coupled with a failure by copyright owners and distributors to recognise that globalisation has been accelerated by the internet in a world where content is digital.

Digital natives find it difficult to understand why it is that they may be willing to pay for a product that copyright owners won’t let them purchase or access.  I can’t subscribe to Hulu because I live in the wrong part of the world.  I can’t download content because I live in the wrong part of the world.  Yet the internet and the globalisation of content and e-commerce have essentially made at least the commercial world a world without boundaries.[30]

A fundamental concept of contract law – that it is not the person who has the goods on their shelves but the person who wants to buy the goods who makes the offer, and that the seller has the right to refuse or to accept that offer – provides the basis for copyright owners to regionalise their product.  But the digital native doesn’t see it that way.  They are prepared to pay.  The copyright owner is not prepared to accept the money.  So let’s then look at another solution. We know another way to get the content. Let’s file share.

Some New Zealand television channels screen episodes of popular US shows a matter of days after they are screened in the United States.  That, to my view, is encouraging because it eliminates the necessity to download to find out what is going on in the show, and one could possibly avoid the “spoiler community” for a couple of days.[31] More importantly it is at last a recognition by the content owners that there is growing consumer outrage towards a regionalisation of product that might have been understandable in the days when the movie was carried in a can across the Pacific on a steam ship, but which today is instantly available.

In essence when we are looking at access to information and the distribution of information we are looking at aspects of expression – the essential element that engages the “copy right”. We need a new approach that recognises technological realities and what technology does to behaviour, to the values that underlie behaviour and to consequential expectations of information.

 We Shape Our Tools……

 Marc Prensky, an educationalist writing in the early 2000s, identified “digital natives” as those who have spent their entire lives surrounded by and using computers, video games, digital music players, video cams, cell phones and all the other tools and toys of the digital age.  Digital natives, said Prensky, are native speakers of the digital language of computers, video games and the internet.  But I’m not one of those.  As a digital immigrant I speak with a different accent from that of the digital native.  I have adapted to the new environment but I retain to a certain degree my accent – my foot in the past.  I know how things were.  That “accent” can be seen in such things as preferring a book with pages to a Kindle or an iPad, turning to the internet for information second rather than first, or even reading the manual for a programme rather than assuming that the programme itself will teach me how to use it.  The digital language is a new language for me, and a language learned later in life goes to a different part of the brain.

And that’s one of the interesting things that new technologies do for us.  They change us.  Sometimes we can recognise the changes that they make but there are other changes that are more difficult to recognise. They operate at a subconscious level.[32]

It may be surprising to know that learning to read is not something that comes naturally to people.  It isn’t like speech – our primary means of communication.  When you learn how to read what happens in the brain is that your neural pathways change.  And once they have changed they have changed forever.  Learning to write involves similar changes and what happens with both of those activities is that a remarkable amount of processing of information takes place and it all happens at a subconscious level.

You see, writing is a code.  It is a code for information that is initially conceived as an oral expression, is then rendered into phonetic alphabetical form, and when it is read is reprocessed so that it has meaning.  In the way in which we read and write we realise Marshall McLuhan’s comment that “We shape our tools and thereafter our tools shape us.”[33] And the use of new technologies shapes us in just that way – both behaviourally and physiologically.

 The Medium Is…….. Elizabeth Eisenstein and a Qualities Based Analysis of Print Media

Part of the problem is trying to identify what it is about our tools that allows these changes to happen or that enables them.  In her seminal work on the printing press – The Printing Press as an Agent of Change – Elizabeth Eisenstein identified six fundamental qualities that the print technology introduced that dramatically challenged the way in which the scribal culture produced texts.   These particular qualities were the enablers that underpinned the distribution of content that enhanced the developing Renaissance, that spread Luther’s 95 theses around Germany in the space of two weeks from the day that they were nailed to the church door at Wittenberg, and that allowed for the wide communication of scientific information which enabled experiment, comment, development and what we now know as the Scientific Revolution.

And it also happened in my own field, the law.  Within 300 years of the introduction of the printing press by Gutenberg, the oral-memorial, customary-based, ever-changing law had to be recorded in a book for it to exist.

It would be fair to remark that Eisenstein’s approach was and still is contentious. But what is important is her identification of the paradigmatic differences between the scribal and print cultures based upon the properties or qualities of the new technologies. These qualities were responsible for the shift in the way that intellectuals and scholars approached information.

There were six features or qualities of print that significantly differentiated the new technology from scribal texts.

 a) dissemination

b) standardisation

c) reorganization

d) data collection

e) fixity and preservation

f) amplification and reinforcement.

 For example, dissemination of information was increased by printed texts not solely by volume but by way of availability, dispersal to different locations and cost. In the legal sphere, dissemination allowed a greater spread of legal material to diverse locations, bringing legal information to a wider audience. The impact upon the accessibility of knowledge was enhanced by the greater availability of texts and, in time, by the development of clearer and more accessible typefaces.

Standardisation of texts, although not as it is understood by modern scholars, was enabled by print. Every text from a print run had an identical or standardised content. Every copy had identical pagination and layout, along with identical information about the publisher and the date of publication. Standardised content allowed for a standardised discourse. In the scribal process errors could be perpetuated by copying, and frequently in the course of that process additional ones occurred. The omission of one word by a compositor, by contrast, was a “standardised” error of a kind that did not occur in the scribal culture; it had a different impact and could be “cured” by the insertion of an “errata” note before the book was sold. Yet standardisation itself was not an absolute and the printing of “errata” was not the complete answer to the problem of error. Interaction on the part of the reader was required to insert the “errata” at the correct place in the text.

In certain cases print could not only perpetuate error but it could be used actively to mislead or disseminate falsehood. The doubtful provenance of The Compleate Copyholder attributed to Sir Edward Coke is an example.[34] Standardisation, as a quality of print identified by Eisenstein, must be viewed in light of these qualifications.

Print allowed greater flexibility in the organization and reorganization of material and its presentation. Material was able to be better ordered using print than in manuscript codices. Innovations such as tables, catalogues, indices and cross-referencing material within the text were characteristics of print. Indexing, cross-referencing and ordering of material were seized upon by jurists and law printers.

Print provided an ability to access improved or updated editions with greater ease than in the scribal milieu by the collection, exchange and circulation of data among users, along with the error trapping to which reference has been made. This is not to say that print contained fewer errors than manuscripts. Print accelerated the error making process that was present in the scribal culture. At the same time dissemination made the errors more obvious as they were observed by more readers. Print created networks of correspondents and solicited criticism of each edition. The ability to set up a system of error-trapping, albeit informal, along with corrections in subsequent editions was a significant advantage attributed to print by the philosopher, David Hume, who commented that “The Power which Printing gives us of continually improving and correcting our Works in successive editions appears to me the chief advantage of that art.”[35]

Fixity and preservation are connected with standardisation. Fixity sets a text in place and time. Preservation, especially as a result of large volumes, allows the subsequent availability of that information to a wide audience. Any written record does this, but the volume of material available and the ability to disseminate enhanced the existing properties of the written record. For the lawyer, the property of fixity had a significant impact.

Fixity and the preservative power of print enabled legal edicts to become more available and more irrevocable. In the scribal period Magna Carta was published (proclaimed) bi-annually in every shire. However, by 1237 there was confusion as to which “Charter” was involved. In 1533, by looking at the “Tabula” of Rastell’s Grete Abregement of the Statutys a reader could see how often it had been confirmed in successive Royal statutes. It could no longer be said that the signing of a proclamation or decree was following “immemorial custom”. The printed version fixed “custom” in place and time. In the same way, a printed document could be referred to in the future as providing evidence of an example which a subsequent ruler or judge could adopt and follow. As precedents increased in permanence, the more difficult it was to vary an established “custom”. Thus fixity or preservation may describe a quality inherent in print as well as a further intellectual element that print imposed by its presence.

Although Eisenstein’s work was directed more towards the changing intellectual environment and activity that followed the advent of printing and printed materials, it should not be assumed that printing impacted only upon intellectual elites. Sixteenth and seventeenth century individuals were not as ignorant of their letters as may be thought. There are two aspects of literacy that must be considered. One is the ability to write; the other is the ability to read. Reading was taught before writing, and it is likely that more people could read a broadside ballad than could sign their names. Writing was taught to those who remained in school from the ages of seven or eight, whereas reading was taught to those who attended only until the age of six, when they were removed from school to join the labour force. Proclamation of laws in print was therefore within the reach of a reasonable proportion of the population.

Another thing that we have got to remember is that media work on two levels. The first is that a medium is a technology that enables communication and the tools that we have to access media content are the associated delivery technologies.

The second level, and this is important, is that a medium has an associated set of protocols – social and cultural practices, including the values associated with information – that have grown up around the technology. Delivery systems are just machines, but the second level generates and dictates behaviour.[36]

Eisenstein’s argument is that when we go beneath the delivery system and look at the qualities or the properties of a new information technology, we are considering what shapes and forms the basis for the changes in behaviour and in social and cultural practices. The qualities of a paradigmatically different information technology fundamentally change the way that we approach and deal with information. In many cases the change will be slow and imperceptible. Adaptation is usually a gradual process. Sometimes, subconsciously, the changes in the way that we approach information change our intellectual habits. Textual analysis has been an intellectual activity ever since information was recorded in textual form. I contend that the development of principles of statutory interpretation, a specialised form of textual analysis, followed Thomas Cromwell’s dissemination and promulgation of the Reformation statutes, complete with preambles, in print.[37]

From all this it would be fair to ask – what’s the difference? What’s changed? All we’ve got is a bunch of machinery that allows us to do what we have always done, which is to read and watch movies and do the same things that we did with radio or the television – the only thing is that it’s all been brought together – there has been a convergence of the various delivery systems. And on the surface that’s perfectly correct, because what you are talking about there is content. You’re talking about the material that’s delivered rather than looking at the delivery system.

Another thing that Marshall McLuhan said – and he had a tendency to be a little bit opaque in some of the things that he said, and this is one of them – was that “the medium is the message”. Now a lot of people have taken that to mean that McLuhan didn’t really care too much about content, and he certainly did. But whenever you are looking at the delivery of information by a means other than orally, you have got to examine the way in which it was delivered.

Using Eisenstein’s approach I have managed to identify nine qualities (and there are probably more) which dramatically distinguish digital technologies from those that have gone before. They are:

    • Persistence
    • Continuing change, or what you could refer to as the disruptive element
    • Dynamic information
    • Dissociative enablement
    • Permissionless innovation
    • Availability
    • Participation
    • Searchability
    • Retrievability

Within these nine qualities of digital technologies will ultimately lie most of the answers to the question “where are we going?”

One sure thing follows from two of these qualities – the disruptive element, which recognises a state of continual change, and permissionless innovation, which means that new things will continue to be built on the backbone of the internet. Together they mean that we cannot be sure what’s around the corner. But the qualities of new technologies, if considered, will at least give us some idea of possible direction.

We look at the present through a rear-view mirror……

Now one of the problems that we have, particularly in my field of the law, is that you run up against a real tension with disruptive communication technologies that are continually changing as a result of permissionless innovation. The law is fundamentally a very conservative beast. Lawyers really don’t like change. The law must be certain, known and predictable. When you look at how lawyers work you can see this in a moment.

I’ll introduce this example with another of McLuhan’s adages: “We look at the present through a rear-view mirror. We march backwards into the future.”[38] Take the doctrine of precedent – using earlier decided cases to determine the outcome of a present problem. Now if that is not an example of driving forward using a rear-view mirror I don’t know what is. We look to the past to solve the problems of the future. The difficulty is that many of the decisions of the past, or the ways in which problems were resolved in the past, were based upon a society, a context and circumstances that existed then. And when you have paradigmatic change – when the world is turned upside down – the old rules cannot apply.

The other challenge to precedent that comes from the digital paradigm is this. Precedent depends upon the selection of a certain limited number of cases which are reported and which form the basis for the development of principle – a critical mass. In the print paradigm there was little problem with this. Law reporters and publishers carefully selected the cases that were going to appear in the reports. Unreported decisions were not seen as authoritative.

The qualities of the digital paradigm enable the collection and storage of vast amounts of legal information. Availability in vast data banks, together with searchability and retrievability, means that vast digital libraries become the first research stop for the digital native lawyer. Because of the volume of legal information that is available, the critical mass allowed by print has been upset. Precedent will become an exercise in fact comparison rather than principle analysis.

Much of the foundation of the development of attitudes to information and its communication was laid within a particular information paradigm – the print paradigm. We are now moving into the digital paradigm, and the qualities that Eisenstein identified in the print paradigm have been overtaken by the new qualities that I have suggested.

And so in the law what we do is that we anchor ourselves to the past while the world is changing around us.

Bringing it all back home…..[39]

Let me summarise the argument so far.

a) There are qualities that underlie the medium of communication of information

b) Those qualities dictate and influence behaviour and the development of social and cultural practices

c) The printing press – the first information technology – was an agent for a paradigm shift in relationships, behaviours and activities surrounding information. Many of our assumptions about information in general are grounded in the print paradigm e.g. stereotypes, “black letter law”, upper and lower case etc.

d) The printing press and the print paradigm were the basis for the development of concepts of copyright and were the specific target of the Statute of Anne.

e) The qualities of digital information systems are paradigmatically different from those of the print paradigm

f) These qualities are fundamentally altering our behaviours and values about, and our uses, expectations and relationships with, information.

And the question that follows from this is whether a system of rules that was based upon and derived from the values that flowed from the print paradigm has any relevance in the digital paradigm. The law loses credibility if it does not accord with the underlying values of a community – the consent of the governed. To maintain a system of rules that run counter to community values is oppression.

This does not mean that creators should not have some kind of protection for their creation. It means that we are going to have to find some other form of justification for the protection of intellectual property and the extent of that protection.

There are a number of international conventions – and I don’t include IP specific conventions such as Berne, WIPO, TRIPS and the like – that provide for the general protection of intellectual property rights. The Universal Declaration of Human Rights demands protection of the right of

“[e]veryone … to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he or she is the author.”[40]

The 2005 General Comment[41] on the equivalent article in the International Covenant on Economic, Social and Cultural Rights[42] emphasises the link between this right and the proposition that authors should enjoy an adequate standard of living, and that they are entitled to just remuneration. Among other things, the document requires us to take seriously the idea that liberty interests can be furthered by participation in functional markets for creative work.

But we must remember that copyright is fundamentally grounded upon expression, and we cannot overlook the provisions of Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which explicitly protects the media of expression and information and was intended to include after-arising technologies.[43] Article 19 has come into sharp focus following the report of Special Rapporteur Frank La Rue, who considered whether or not access to the internet constituted a human right qualifying for protection under Article 19.

Copyright theory needs to recognise and accept that freedom of expression involves not only the imparting of a particular point of view but also the reception of information. And as I have suggested, the Internet facilitates those rights, enhances them, and has had an impact upon the modelling of our information expectations and our consequent information-associated behaviours.

A recent case has recognised the freedom of expression in the context of copyright. In Ashby Donald and others v. France[44] the European Court of Human Rights clarified that a conviction based on copyright law for illegally reproducing or publicly communicating copyright protected material can be regarded as an interference with the right of freedom of expression and information under Article 10 of the European Convention. Such interference must be in accordance with the three conditions enshrined in the second paragraph of Article 10 of the Convention. This means that a conviction or any other judicial decision based on copyright law, restricting a person’s or an organisation’s freedom of expression, must be pertinently motivated as being necessary in a democratic society, apart from being prescribed by law and pursuing a legitimate aim. The case unambiguously declares Article 10 of the Convention applicable in copyright cases interfering with the right of freedom of expression and information of others, adding an external human rights perspective to the justification of copyright enforcement. However, due to the wide margin of appreciation available to the national authorities in this particular case, the impact of Article 10 is very modest.

I am suggesting that the ICCPR, or a rights based approach more generally, should be the starting point for measuring the strength and extent of any copyright protection afforded to one who engages in content expression. This approach to copyright is in line with the consequences and development of the new information paradigm. Ashby Donald v France gives weight to such an approach. The judgment in this case has confirmed that copyright enforcement, restrictions on the use of copyright protected works and sanctions based on copyright law can ultimately be regarded as interferences with the right of freedom of expression and information. This inevitably requires a balancing test between the rights involved. In terms of predictability of the outcome of such a balancing test, a clear set of criteria needs to be developed.

A rights based approach to copyright has been considered by Graeme Austin and Laurence Helfer[45] and Austin had this to say about the rights based approach:

 “Human rights certainly provide compelling reasons for being concerned about the public domain, reasons that go beyond getting more stuff more cheaply. Human rights law draws attention to a broader set of values: educational rights, environmental rights, the right to food, an adequate standard of health, indigenous peoples’ rights – with which any decent intellectual property system, any decent society, must contend. And human rights lawyers have crafted a powerful lens through which to analyse these issues – these are not just ad hoc distributive justice claims du jour. At the same time, however, human rights laws recognise the importance and the rights imperatives associated with functioning markets. Hence the recognition in many human rights instruments of the right of property.”[46]

Perhaps there should be consideration of a new copyright model that recognises content user rights against a backdrop of the right to receive and impart information and a truly balanced approach to information and expression that recognises that ideas expressed are building blocks for new ideas. Underpinning this must be a recognition on the part of content owners that the properties of new technologies dictate our responses, our behaviours, our values and our ways of thinking. These should not be seen as a threat but an opportunity. It cannot be a one-way street with traffic heading only in the direction dictated by content owners.

The reality is that the law will always be behind technology.  It will always be dealing with an historical problem.  The file sharing legislation in New Zealand is already out of date because one of the critical parts of the legislation is a definition of file sharing that ignores technology such as virtual private networks or magnet links.  Dr. Rebecca Giblin has already pointed out the legal inadequacies of some of the file sharing approaches that have been adopted in the United States.[47]

The law – like TPMs – is a very blunt instrument for a very nuanced area. My suggestion is the redevelopment and rethinking of broad principles that are in accord with the new paradigm rather than being anchored in an earlier one.

We Can Work it Out [48]

There are two ways in which Article 19 can be considered in developing a new model for copyright protection. The first is to measure the strength of any copyright rule against the right to receive and impart information and consider whether the rule is a proportionate limitation of the information right. The second approach, which is very similar to the first, is to use Article 19 as a basis to determine whether a copyright rule/protection is disproportionate to the amount of interference with the Article 19 right, and such a consideration would take place throughout the development of a rule.

In the second scenario, which is the one that I prefer, the engagement of Article 19 could occur at each of the following levels:

 a) policy formation

b) legislation

c) application/interpretation

d) litigation – for enforcement/infringement

 and therefore acts as an umbrella over all aspects of the lifecycle of a copyright rule from basis to enforcement.

Justification may be achieved by weighing competing interests. Any rule that interferes with the Article 19 right must be proportionate and limited only so far as is reasonable and necessary to fulfil the copyright owners’ interests. In addition a rights based approach avoids the absolutes that attach to property theory and the metaphors of “theft”, “piracy” and “trespass” that arise within that context.

Rather than operating as a default rule with a number of exceptions, copyright would fall within the wider scope of a justifiable but proportionate limitation on the freedom of expression. With this approach, fair use, for example, would not be an exception to copyright. It would constitute an element of the subsisting/continuing Article 19 right.

The proposal may be summarised in the following way:

 1. Copyright should not be seen as a property right – either actual or inchoate.

 2. A copyright owner’s rights should not be absolute.

 3. Copyright should be seen as an exception to the wider rights of freedom to receive and impart information guaranteed by Art. 19 ICCPR – and, given copyright does not engage until expression (according to current copyright theory),  it must be subject to the supremacy of Article 19.

 4. Interference with Article 19 rights requires justification by the “copyright owner”.[49]

 5. Once interference with the Art 19 right is justified, any restrictions to the general right and any advantages that accrue for the benefit of the “copyright owner” may be permitted to the extent that they are:

a) necessary to meet the copyright owner’s interests and justification, and

b) proportionate in terms of the extent of the interference.

 6. Concepts such as fair use, protection term, and remedies (and their extent) fall within the tests of necessity and proportionality rather than being exceptions to a copyright owner’s right.

 7. The following brief examples, which are implicated in current copyright models, may demonstrate the approach:

a) Access controls that have no copying implications would not be justifiable.

b) Restrictions on copying that is necessary for a technology to operate could not be considered justifiable.

c) Restrictions on format shifting (of any medium) could not be justified, given that a royalty had been paid at point of sale.

We want the World……

It may well be that it will take an equivalent 15 years, as was the case between 1695 and 1710, for us to develop a new copyright solution. My suggestion to you is that we must recognise that the values of the digital native regarding information have been moulded by the technologies that are available and that will continue to develop – technologies that make information instantly available; that make circumvention of restrictions easy; that allow for the widespread distribution of information in digital format and challenge the necessity for regionalisation of content; technologies that create an “information now” environment – we want the world and we want it – now![50] Perhaps a rights based approach may be a starting point.


[1] For a very early reference to a concern about intellectual property in dishes invented by caterers or cooks in the Greek colony of Sybaris see the Greek historian Phylarchus quoted by Athenaeus The Deipnosophists (C. Burton Gulick trans.) Heinemann 1927 p. 348-9; see also Martial “Rumour asserts, Fidentinus, that you recite my works to the crowd, just as if they were your own. If you wish they should be called mine, I will send you the poems gratis; if you wish them to be called yours, buy my disclaimer of them.” (Martial, Epigrams, trans. Walter C. A. Ker (London and New York, 1920-25), I, 46-47. See also the protection granted to Brunelleschi by the Florentine Republic on 19 June 1421, along with the patent statutes of the Venetian Republic in 1474. Interestingly most of the protections for authors’ works in Europe came after the introduction of the printing press – Sabellico’s protection for his book Decades rerum Venetarum was granted in 1486 and Petrus Franciscus de Ravenna’s grant for Foenix was made in 1491. A French system of privileges started in 1498.

[2] For a detailed examination of the activities of the Stationers and their role in the regulation of printing activities in England 1475 – 1642 see Chapter 3 D.J. Harvey The Law Emprynted and Englysshed (PhD thesis, unpublished) available at http://www.scribd.com/doc/103191773/The-Law-Emprynted-and-Englysshed-The-Printing-Press-as-an-Agent-of-Change-in-Law-and-Legal-Culture-1475-1642 (last accessed 29 January 2013)

[3] By the 1440s the Stationers were known as the “Mistery of Stationers” although they were known as Stationers before that. In 1407 they were delegated with the task of providing copies of religious books that had been approved by the authorities following the suppression of the Lollards – a group of religious non-conformists led initially by John Wyclif.

[4] The Decrees were in fact the decisions of the Court of Star Chamber designed to address the various issues that had arisen in a number of cases involving complaints of printing patent infringement and aimed to set in place rules and structures so that patent holders would continue to receive exclusivity.

[5] See Harold Love Scribal Publication in Seventeenth Century England (Clarendon Press, Oxford, 1993).

[6] Edmund Plowden Les comentaries, ou les reportes de Edmunde Plowden vn apprentice de le comen ley (Richard Tottell, London, 1571) STC 20040.

[7] Edward Coke, Les reports de Edward Coke L’attorney generall le Roigne de diuers resolutions & iudgements donnes auec graund deliberation, per les tresreuerendes iudges, & sages de la ley, de cases & matters en ley queux ne fueront vnques resolue, ou aiuges par deuant, & les raisons, & causes des dits resolutions & iudgements, durant les tresheureux regiment de tresillustre & renomes Roigne Elizabeth, le founteine de tout iustice, & la vie de la ley (Adam Islip, London, 1600) STC 5493. 11 subsequent volumes were printed under Coke’s supervision. The twelfth volume was published posthumously. See also the publication of The first part of the Institutes of the lawes of England. Or, A commentarie vpon Littleton, not the name of a lawyer onely, but of the law it selfe. (Adam Islip for the Stationers, London, 1628) STC 15784 which became a standard text on land law.

[8] In addition, manuscript circulation allowed the dissemination of unpopular or contentious political or religious content within a limited audience, away from the critical gaze of print licensors. The recognition of the power of the manuscript and its circulation among coteries can be seen in the activities of the Crown to secure the libraries of Thomas Norton, Sir Robert Cotton and Sir Edward Coke after their deaths.

[9] (1769) 4 Burr. 2303, 98 ER 201.

[10] (1774) 2 Brown’s Parl. Cases 129, 1 Eng. Rep. 837; 4 Burr. 2408, 98 Eng. Rep. 257 ; 17 Cobbett’s Parl. Hist. 953 (1813).

[11] 1 Wm. Blackstone 301, 96 ER. 169 [1761]. Reargued: 1 Wm. Blackstone 322, 96 ER 180 [1762].

[12] Although they could manually transcribe a book should they want to, that would amount to copyright infringement.

[13]  239 F.3d 1004 (2001).

[14] 180 F.3d 1072 (9th Cir. 1999).

[15] 273 F. 3d 429 – Court of Appeals, 2nd Circuit 2001.

[16]  545 U.S. 913 (2005).

[17] [2002] 55 IPR 429 (Ch).

[18] [2004] EWHC 1738 (Ch).

[19] (2003) 129 FCR 472.

[20]  (2005) HCA 58.

[21] 464 U.S. 417, 455, 104 S.Ct. 774, 78 L.Ed.2d 574 (1984).

[22] [1987] 3 All ER 151.

[23] Charles Clark ‘The Answer to the Machine is in the Machine’, in: P. Bernt Hugenholtz (ed.), The Future of copyright in a digital environment : proceedings of the Royal Academy Colloquium organized by the Royal Netherlands Academy of Sciences (KNAW) and the Institute for Information Law ; (Amsterdam, 6-7 July 1995), (Kluwer Law International, The Hague, 1996).

[24] Sony v Stevens above n. 19.

[25] Above n. 16.

[26] Above n. 17.

[27] Above n. 8.

[28] Marshall McLuhan Understanding Media: The Extensions of Man (Sphere Books, London, 1967).

[29] “Welcome to the Machine” Pink Floyd Wish You Were Here (1975 Pink Floyd Music Publishers Ltd., London, England) Track 2

[30] “But the Banshee brouhaha is yet another signal that modern viewers want more pick-and-choose flexibility. And also how hard it is to stamp something out on the intrawebs. For as I type, the first episode of Banshee is still available full and free to Kiwis through Cinemax’ website here “ (http://www.cinemax.com/banshee/video/?bctid=2083432700001)

Chris Keall “Sky TV gives HBO a nudge after hot new series Banshee put free online for Kiwis”  Keallhauled National Business Review Online 16 January 2013 http://www.nbr.co.nz/opinion/sky-tv-cops-role-youtube-episode-banshee-being-blocked-new-zealanders-CK (last accessed 16 January 2013)

[31] For a discussion of “spoilers” and television see Henry Jenkins Convergence Culture: Where Old and New Media Collide (New York University Press, New York, 2008), especially Chapter 1 “Spoiling Survivor – The Anatomy of a Knowledge Community” at p. 25 et seq.

[32] For a pessimistic view of the “rewiring” effect see Nicholas Carr “Is Google Making Us Stupid” The Atlantic July/August 2008  available on-line at http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ (last accessed 17 January 2013) and for a detailed approach see Nicholas Carr The Shallows: How the Internet is changing the way we think, read and remember (Atlantic Books, London, 2010).

[33] Above n. 27.

[34] The Compleate Copyholder (T. Coates for W Cooke, London,1641) Wing C4912.

[35] Cited by J.A. Cochrane Dr Johnson’s Printer: The Life of William Strahan (Routledge and K Paul, London, 1964) p.19 at n.2.

[36] Lisa Gitelman “Introduction: Media as Historical Subjects: in Always Already New: Media, History and the Data of Culture (MIT Press, Cambridge, 2008) p. 7.

[37] This is a very bald assertion. The argument is a little more nuanced and involves a consideration of the use of the printing press by Cromwell, the significant increase in legislative activity during the course of the English Reformation, the political and legal purpose of statutory preambles, the advantages of an authoritative source of law in printed form for governing authorities, all facilitated by underpinning qualities of print such as standardisation, fixity and dissemination.

[38] Marshall McLuhan and Quentin Fiore  The Medium is the Massage: An Inventory of Effects (Penguin, Harmondsworth 1967).

[39] The title of Bob Dylan’s fifth album released 27 March 1965 and released by Columbia.

[40] Universal Declaration of Human Rights GA Res 217A, A/810 (1948) art 27.

[41] Committee on Economic, Social and Cultural Rights General Comment No 17: The Right of Everyone to Benefit from the Protection of the Moral and Material Interests Resulting from Any Scientific, Literary or Artistic Production of Which He Is the Author E/C12/2005 (2005) art 15(1)(c).

[42] International Covenant on Economic, Social and Cultural Rights 993 UNTS 3 (opened for signature 19 December 1966, entered into force 3 January 1976).

[43] Article 19 reads as follows:

1. Everyone shall have the right to hold opinions without interference;

2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.

3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary:

(a) for the respect of the rights or reputations of others;

(b) for the protection of national security or of public order, or of public health or morals.

[44] ECHR Appl. nr. 36769/08.

[45] Laurence R Helfer and Graeme W Austin Human Rights and Intellectual Property: Mapping the Global Interface (Cambridge University Press, New York, 2011).

[46] Graeme W Austin “Property on the Line: Life on the Frontier Between Copyright and The Public Domain” [2012] 43 VULR 1 at 14.

[47] Rebecca Giblin Code Wars: 10 Years of P2P Software Litigation (Edward Elgar Publishing,  2011); Rebecca Giblin , “On the (New) New Zealand Graduated Response Law (and Why It’s Unlikely to Achieve Its Aims)” (2012) 62(4) Telecommunications Journal of Australia 54.1-54.14. Available at SSRN: http://ssrn.com/abstract=2198116 (last accessed 17 January 2013).

[48] “We Can Work it Out” John Lennon and Paul McCartney 1965, released as the B-side to the single “Day Tripper”.

Upon reflection, the lyrics may seem apposite to the current problem:

“Try to see it my way

Do I have to keep on talking till I can’t go on?

While you see it your way

Run the risk of knowing that our love may soon be gone”

[49] I use the terms “copyright” and “copyright owner” in this context only because I have not devised a label that aptly fits within the new model and that is not clumsy.

[50] “When the Music’s Over” Jim Morrison, Ray Manzarek, Robby Krieger and John Densmore (The Doors)  “Strange Days” The Doors Elektra Records 1967 Track 10.