Digital Data and Theft – Collisions in The Digital Paradigm IV

 

Under the law in New Zealand a digital file cannot be stolen. This follows from the Court of Appeal decision in Dixon v R [2014] NZCA 329 and turns on the way in which the Court interpreted various definitions contained in the Crimes Act, coupled with the nature of the charge.

Mr. Dixon, the appellant, had been employed by a security firm in Queenstown. One of the clients of the firm was Base Ltd which operated the Altitude Bar in Queenstown. Base had installed a closed circuit TV system in the bar.

In September 2011 the English rugby team was touring New Zealand as part of the Rugby World Cup. The captain of the team was Mr Tindall. Mr Tindall had recently married the Queen’s granddaughter. On 11 September, Mr Tindall and several other team members visited Altitude Bar. During the evening there was an incident involving Mr Tindall and a female patron, which was recorded on Base’s CCTV.

Mr Dixon found out about the existence of the footage of Mr Tindall and asked one of Base’s receptionists to download it onto the computer she used at work. She agreed, being under the impression that Mr Dixon required it for legitimate work purposes. The receptionist located the footage and saved it onto her desktop computer in the reception area. Mr Dixon subsequently accessed that computer, located the relevant file and transferred it onto a USB stick belonging to him.

Mr Dixon attempted to sell the footage but, when that proved unsuccessful, he posted it on a video-sharing site, resulting in a storm of publicity both in New Zealand and in the United Kingdom. At trial, Judge Phillips found that Mr Dixon had done this out of spite and to ensure that no one else would have the opportunity to make any money from the footage.

A complaint was laid with the Police and Mr Dixon was charged under s. 249(1)(a) of the Crimes Act 1961.

That section provides as follows:

249 Accessing computer system for dishonest purpose

(1) Every one is liable to imprisonment for a term not exceeding 7 years who, directly or indirectly, accesses any computer system and thereby, dishonestly or by deception, and without claim of right,—

(a) obtains any property, privilege, service, pecuniary advantage, benefit, or valuable consideration;

The indictment against Mr Dixon alleged that he had “accessed a computer system and thereby dishonestly and without claim of right obtained property.”

The issue before the Court was whether or not digital footage stored on a computer was “property” as defined in the Crimes Act.

“Property” is defined in section 2 of the Crimes Act in the following way:

property includes real and personal property, and any estate or interest in any real or personal property, money, electricity, and any debt, and any thing in action, and any other right or interest.

The Court considered the legislative history of the definition, noting that in the Bill that introduced the new computer crimes a separate definition of property specifically for those crimes had been provided. The definition was discarded by the Select Committee which rejected the suggestion that there should be different definitions of the word property for different offences.

The Court also noted that in the case of Davies v Police [2008] 1 NZLR 638 (HC) it was held that internet usage (the consumption of megabytes in the transmission of electronic data) is “property”, but in that case the Judge specifically distinguished internet usage from the information contained in the data. Thus, Dixon was the first case in which the Court had to consider “property” as defined in the context of “electronically stored footage or images”.

In considering the decision of the trial Judge, the Court was of the view that he had been influenced by the very wide definition of property and the inclusion of intangible things, and by the fact that the footage in question seemed to have all the normal attributes of personal property. The Court also observed that Base Ltd, which operated the CCTV system, did not lose the file. What it lost was the right to exclusive possession and control of it. The Court considered that the trial judge’s holding that the files were within the scope of the definition of property reflected “an intuitive response that in the modern computer age digital data must be property.” (para 20)

The Court concluded otherwise and held that digital files are not property within section 2, and therefore Mr Dixon did not obtain property and was charged under the wrong part of section 249(1)(a). Rather, held the Court, he should have been charged with accessing a computer and dishonestly and without claim of right obtaining a benefit.

The Court referred to the English decision of Oxford v Moss (1979) 68 Cr App R 183 which involved a University student who unlawfully acquired an examination paper, read its contents and returned it. The Court held that was not theft. The student had obtained the information on the paper – confidential it may have been, but it was not property, unlike the medium upon which it was written.

The Court of Appeal noted that Oxford v Moss was not a closely reasoned decision but it remained good law in England and had been followed by the Supreme Court of Canada in Stewart v R [1988] 1 SCR 963. Oxford v Moss had also been followed in New Zealand. In Money Managers Ltd v Foxbridge Trading Ltd (HC Hamilton CP 67/93 15 December 1993) Hammond J noted that traditionally the common law had refused to treat information as property, and in Taxation Review Authority 25 [1997] TRNZ 129 Judge Barber had to consider whether computer programs and software constituted goods for the purpose of the Goods and Services Tax Act 1985. He drew a distinction between the medium upon which information or data was stored – such as computer disks – and the information itself.

The Court considered the nature of confidential information and a line of cases that held that it was not property. The traditional approach had been to rely on the equitable cause of action for breach of confidence.

The Court went on to consider whether or not the digital footage might be distinguishable from confidential information. Once again it noted the distinction between the information or data and the medium, observing that a computer disk containing the information was property whilst the information contained upon it was not. It observed that a digital file arguably does have a physical existence in a way that information (in non-physical form) does not, citing the decision in R v Cox (2004) 21 CRNZ 1 CA at [49]. Cox was a case about intercepted SMS messages. The relevant observation was directed to the issue of whether or not an electronic file could be the subject of a search. The Court in Cox noted

“Nor do we see anything in the argument that the electronic data is not ‘a thing’. It has a physical existence even if ephemeral and that in any event the computer componentry on which it was stored was undoubtedly ‘a thing’.”

Any doubt on this particular issue has been resolved by the Search and Surveillance Act 2012. However, as I will discuss below, although a digital file does have a physical existence, it is not in coherent form. One of the subtexts to the Court of Appeal’s observations about the “electronically stored footage” was that, when stored electronically, the footage has a continuity similar to film footage. For reasons that I will discuss later, this is not the case.

The Court then went on to discuss the nature of information in the electronic space. The Court stated at [31]:

It is problematic to treat computer data as being analogous to information recorded in physical form. A computer file is essentially just a stored sequence of bytes that is available to a computer program or operating system. Those bytes cannot meaningfully be distinguished from pure information. A Microsoft Word document, for example, may appear to us to be the same as a physical sheet of paper containing text, but in fact is simply a stored sequence of bytes used by the Microsoft Word software to present the image that appears on the monitor.
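The Court’s Word document example can be made concrete. In the sketch below (a minimal illustration of my own; the file name is hypothetical), reading a .docx file in binary mode yields nothing that resembles the text on the screen – only a stored sequence of bytes, beginning with the signature of a ZIP container:

```python
# A minimal sketch: a "document" on disk is only a sequence of bytes.
# "judgment.docx" is a hypothetical file name.
with open("judgment.docx", "rb") as f:
    raw = f.read()

print(type(raw))   # <class 'bytes'> - a stored sequence of bytes
print(raw[:4])     # b'PK\x03\x04' - a .docx is in fact a ZIP container;
                   # nothing here resembles the text seen on the monitor
```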

Having reviewed the background to the extension of the definition of “property” following R v Wilkinson [1999] 1 NZLR 403 (CA), where it was held that credit extended by a bank was not capable of being stolen because the definition of things capable of being stolen was limited to moveable, tangible things, the Court took what it described as the orthodox approach. It noted that although the definition of “document” extended to electronic files, the word “document” did not appear in the definition of “property” (had it done so, the definition of property would have extended to electronic files), and that the Law Commission, in its hastily produced and somewhat flawed report Computer Misuse (NZLC R54, 1999), had referred to a possible redefinition of information as a property right. Parliament was taken to be aware of the large body of authority regarding the status of information; had it intended to change the legal position, it would have expressly said so by including a specific reference to computer-stored data.

This holding did not make section 249(1) of the Crimes Act meaningless. The section would still extend to cases where, for example, a defendant accesses a computer and uses credit card details to unlawfully obtain goods. In this case, the Court observed, Mr Dixon had been charged under the wrong part of the section.

It is clear that prosecuting authorities will have to move with care in future. Under the Dixon holding, someone who unlawfully obtains an e-book for a Kindle or other reader could not be charged with theft, because an e-book is information in digital form. If the same book in hard copy form were taken without payment and with the requisite intention from a bookstore, a charge of theft could follow.

Comment

There can be no doubt that the decision of the Court of Appeal is correct both technologically and in law, although I take a few minor points with the way in which the technological realities have been articulated.

The issue of where the property lies within the medium/information dichotomy has been with us for a considerable period of time. I can own the book, but I do not “own” the content, nor may I do with it as I wish, because it is the “property” of the author. The particular property right – the “copy right” – gives the author control over the use of the content of the book: the author may lose possession and control of the medium, but he or she does not lose control of the message.

But the “copy right” has its own special statute, and those legislatively created special property rights do not extend to the provisions of the Crimes Act – even although copyright owners frequently mouth the mantra that copyright infringement is “theft”. Clearly the decision in Dixon, emphasising the principle that information is not property for the purposes of theft, must put that myth to rest.

Information or Data in the Digital Space

To clearly understand the import of the decision in Dixon it is necessary to understand the nature of information or data in the digital space. The Court of Appeal refers to “information” because that is the basis of the “orthodox” conclusion that it reached. Information implies a certain continuity and coherence that derives from the way in which it was communicated in the pre-digital paradigm. Lawyers are so used to obtaining information associated primarily with paper that the medium takes second place to the message. Lawyers focus upon the “content layer” – an approach that must be reconsidered in the Digital Paradigm. For reasons which I shall develop, the word “data” can (and perhaps should) be substituted.

The properties of electronic and digital technologies and their product require a review of one’s approach to information. The nature of the print and paper-based medium as a means of recording and storing information and that of its digital equivalent are radically different. Apart from occasional incidents of forgery, with paper-based documents what you saw was what you got. There was no underlying information embedded or hidden in the document, as there is with metadata in the digital environment. The issue of the integrity of the information contained on the static medium was reasonably clear.

Electronic data is quite different from its pre-digital counterpart. Some of those differences may be helpful: electronic information may be easily copied and searched. But it must be remembered that electronic documents also pose some challenges.

Electronic data is dynamic and volatile. It is often difficult to ensure it has been captured and retained in such a way as to ensure its integrity. Unintentional modifications may be made simply by opening and reading data. Although the information that appears on the screen may not have been altered, some of the vital meta-data which traces the history of the file (and which can often be incredibly helpful in determining its provenance and which may be of assistance in determining the chronology of events and when parties knew what they knew) may have been changed.
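The point about unintentional modification can be illustrated with a short sketch of my own (the file name is hypothetical, and the behaviour depends on the filesystem – many systems mount with relatime or noatime options, so the access-time metadata may not change on every read):

```python
import os

# "cctv_footage.mpg" is a hypothetical file. Whether st_atime changes
# depends on filesystem mount options (relatime/noatime), so this is
# illustrative only.
path = "cctv_footage.mpg"

before = os.stat(path).st_atime    # last-access metadata before reading

with open(path, "rb") as f:
    f.read(1024)                   # merely opening and reading the file...

after = os.stat(path).st_atime     # ...may silently alter its metadata

print(before, after, after != before)
```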

To understand the difficulty that the electronic paradigm poses for our conception of data it is necessary to consider the technological implications of storing information in the digital space. It is factually and paradigmatically far removed from information recorded on a medium such as paper.

If we consider data as information written upon a piece of paper, it is quite easy for a reader to obtain access to that information long after it was created. The only things necessary are good eyesight and an understanding of the language in which the document is written. It is information in that it is comprehensible and the content informs. Electronic data in and of itself does not do that. It is incoherent and incomprehensible, scattered across the sectors of the medium upon which it is contained. In that state it is not information, in that it does not inform.

Data in electronic format is dependent upon hardware and software. The data contained upon a medium such as a hard drive requires an interpreter to render it into human readable format. The interpreter is a combination of hardware and software. Unlike the paper document, the reader cannot create or manipulate electronic data into readable form without the proper hardware in the form of computers.[1]
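A brief sketch of my own may illustrate this dependence on an interpreter: the very same bytes yield quite different “information” depending on the software that decodes them.

```python
# The same eight bytes, "interpreted" three different ways.
data = bytes([72, 105, 33, 0, 1, 255, 16, 66])

print(data.decode("latin-1"))                  # as text (mostly noise)
print(int.from_bytes(data, byteorder="big"))   # as a single large integer
print(list(data))                              # as raw numeric values

# Without an agreed interpreter (an encoding, a file format, a program)
# the bytes do not "inform" anyone.
```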

There is a danger in thinking of electronic data as an object ‘somewhere there’ on a computer, in the same way as a hard copy book is in a library. Because of the way in which electronic storage media are constructed, it is almost impossible for a complete file of electronic information to be stored in consecutive sectors of a medium. An electronic file is better understood as a process by which otherwise unintelligible pieces of data, distributed over a storage medium, are assembled, processed and rendered legible for a human user. In this respect the “information” or “file” as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.[2]
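The “file as process” idea can be modelled crudely in code. The following sketch is entirely my own simplification (no real filesystem works quite this way): the “file” exists only as scattered blocks and an allocation index, and becomes coherent only when a routine walks the index and reassembles the blocks.

```python
# Crude model of a fragmented file: blocks scattered across the "disk"
# in no particular order, tied together only by an allocation index.
disk_sectors = {
    907: b" scattered across",
    112: b"This file is",
    345: b" many sectors.",
}
allocation_index = [112, 907, 345]  # the order in which the blocks belong

def read_file(index, sectors):
    """Reassemble the blocks: the coherent 'file' exists only as the
    output of this process, not as a single object on the medium."""
    return b"".join(sectors[n] for n in index)

print(read_file(allocation_index, disk_sectors).decode("ascii"))
# -> This file is scattered across many sectors.
```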

Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional evidence that includes, but is not limited to, information such as metadata and computer logs that may be relevant to any given file or document in electronic format.

This involvement of technology and machinery makes electronic information paradigmatically different from traditional information where the message and the medium are one. It is this mediation of a set of technologies that enables data in electronic format – at its simplest, positive and negative electromagnetic impulses recorded upon a medium – to be rendered into human readable form. This gives rise to other differentiation issues such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing program or internet browser used.

The necessity for this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smart phones and tablets. These devices are necessary to “decode” information and allow for its comprehension and communication.

Thus, the subtext to the description of the electronically stored footage, which seems to suggest a coherence of data similar to that contained on a strip of film, cannot be sustained. The “electronically stored footage” is meaningless as data without a form of technological mediation to assemble and present the data in coherent form. The Court made reference to the problem of trying to draw an analogy between computer data and non-digital information or data, and referred to the example of the Word document. This is an example of the nature of “information as process” that I have described above. Nevertheless there is an inference of a coherence of information in a computer file that is not present in the electronic medium – references to a “sequence of bytes” are probably correct once the assembly of data prior to presentation on a screen has taken place – but the reality is that throughout the process of information display there is constant interactivity between the disk or medium interpreter, the code of the word processing program and the interpreter that is necessary to display the image on the screen.

In the final analysis there are two approaches to the issue of whether or not digital data is property for the purposes of theft. The first is the orthodox legal position taken by the Court of Appeal. The second is the technological reality of data in the digital space. Even although the new definition of property extends to intangibles such as electricity, it cannot apply to data in the digital space because of the incoherence of the data. Even although a file may be copied from one medium to another, it remains in an incoherent state. Even although it may be inextricably associated with a medium of some sort or another, it maintains that incoherent state until it is subjected to the mediation of hardware and software that I have described above. The Court of Appeal’s “information” based approach becomes even sharper when one substitutes the word “data” for “information”. Although there is a distinction between the medium and the data, the data requires a storage medium of some sort. And it is this medium that is capable of being stolen.

Although Marshall McLuhan intended an entirely different interpretation of the phrase, ‘the medium is the message,’[3] it is a truth of information in digital format.

 

[1] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ in Stephen Mason (gen ed) Electronic Evidence (3rd edn, LexisNexis Butterworths, London 2012) 2.05.

[2] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ 2.06.

[3] Marshall McLuhan, Understanding Media: The Extensions of Man (MIT Press, Cambridge MA, 1994) ch 1.

 

Internet Governance Theory – Collisions in the Digital Paradigm III

 

The various theories on internet regulation can be placed within a taxonomy. In the centre is the Internet itself. On one side are the formal theories based on traditional “real world” governance models. These are grounded in traditional concepts of law and territorial authority. Some of these models could well become part of an “uber-model” described as the “polycentric model” – a theory designed to address specific issues in cyberspace. Towards the middle are less formal but nevertheless structured models, largely technical or “code-based” in nature, which exercise a form of control over Internet operation.

 

On the other side are informal theories that emphasise non-traditional or radical models. These models tend to be technically based, private and global in character.

[Figure: Internet Governance Models]

 

What I would like to do is briefly outline aspects of each of the models. This will be a very “once over lightly” approach and further detail may be found in Chapter 3 of my text internet.law.nz. This piece also contains some new material on Internet Governance together with some reflections on how traditional sovereign/territorial governance models just won’t work within the context of the Digital Paradigm and the communications medium that is the Internet.

The Formal Theories

The Digital Realists

The “Digital Realist” school has been made famous by Judge Easterbrook’s comment that “there [is] no more a law of cyberspace than there [is] a ‘Law of the Horse.’” Easterbrook summed the theory up in this way:

“When asked to talk about “Property in Cyberspace,” my immediate reaction was, “Isn’t this just the law of the horse?” I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile. And if I did know something about computer networks, all I could do in discussing “Property in Cyberspace” would be to isolate the subject from the rest of the law of intellectual property, making the assessment weaker.

This leads directly to my principal conclusion: Develop a sound law of intellectual property, then apply it to computer networks.”

Easterbrook’s comment is a succinct summary of the general position of the digital realism school: that the internet presents no serious difficulties, so the “rule of law” can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a “cyber-specific” code of law.

Another advocate for the digital realist position is Jack Goldsmith. In “Against Cyberanarchy” he argues strongly against those whom he calls “regulation sceptics” who suggest that the state cannot regulate cyberspace transactions. He challenges their opinions and conclusions, arguing that regulation of cyberspace is feasible and legitimate from the perspective of jurisdiction and choice of law — in other words he argues from a traditionalist, conflict of laws standpoint. However, Goldsmith and other digital realists recognise that new technologies will lead to changes in government regulation; but they believe that such regulation will take place within the context of traditional governmental activity.

Goldsmith draws no distinction between actions in the “real” world and actions in “cyberspace” — they both have territorial consequences. If internet users in one jurisdiction upload pornography, facilitate gambling, or take part in other activities that are illegal in another jurisdiction and have effects there, then, Goldsmith argues, “The territorial effects rationale for regulating these harms is the same as the rationale for regulating similar harms in the non-internet cases”. The medium that transmitted the harmful effect, he concludes, is irrelevant.

The digital realist school is the most formal of all approaches because it argues that governance of the internet can be satisfactorily achieved by the application of existing “real space” governance structures, principally the law, to cyberspace. This model emphasises the role of law as a key governance device. Additional emphasis is placed on law being national rather than international in scope and deriving from public (legislation, regulation and so on) rather than private (contract, tort and so on) sources. Digital realist theorists admit that the internet will bring change to the law but argue that before the law is cast aside as a governance model it should be given a chance to respond to these changes. They argue that few can predict how legal governance might proceed. Given the law’s long history as society’s foremost governance model and the cost of developing new governance structures, a cautious, formal “wait and see” attitude is championed by digital realists.

The Transnational Model – Governance by International Law

The transnational school, although clearly still a formal governance system, demonstrates a perceptible shift away from the pure formality of digital realism. The two key proponents of the school, Burk and Perritt, suggest that governance of the internet can be best achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. They argue that international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively applied to the global entity of the internet.

The transnationalists suggest that the operation of the internet is likely to promote international legal harmonisation for two reasons.

First, the impact of regulatory arbitrage and the increased importance of the internet for business, especially the intellectual property industry, will lead to a transfer of sovereignty from individual states to international and supranational organisations. These organisations will be charged with ensuring broad harmonisation of information technology law regimes to protect the interests of developed states, lower trans-border costs to reflect the global internet environment, increase opportunities for transnational enforcement and resist the threat of regulatory arbitrage and pirate regimes in less developed states.

Secondly, the internet will help to promote international legal harmonisation through greater availability of legal knowledge and expertise to legal personnel around the world.

The transnational school represents a shift towards a less formal model than digital realism because it moves away from national towards international sources of authority. However, it still clearly belongs to the formalised end of the governance taxonomy on three grounds:

1.    its reliance on law as its principal governance methodology;

2.    the continuing public rather than private character of the authority on which governance rests; and

3.    the fact that although governance is by international law, in the final analysis, this amounts to delegated authority from national sovereign states.

 

National and UN Initiatives – Governance by Governments

This discussion will be a little lengthier because there is some history that serves to illustrate how governments may approach Internet governance.

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and text messages – brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that the role of social media during the Arab uprisings must be understood in a context of high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

The G8 Meeting in Deauville May 2011

In May 2011, at the G8 meeting in France, President Sarkozy issued a provocative call for stronger Internet regulation. M. Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution. This revolution did not have a flag, and M. Sarkozy acknowledged that the Internet belonged to everyone, citing the “Arab Spring” as a positive example. However, he warned the executives of Google, Facebook, Amazon and eBay who were present: “The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

M. Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs, but he did not go as far as M. Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.

However, the Deauville Communiqué did not go as far as M. Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, the security of networks, and a crackdown on trafficking in children for their sexual exploitation. It did not advocate state control of the Internet, but it staked out a role for governments. The communiqué stated:

“We discussed new issues such as the Internet which are essential to our societies, economies and growth. For citizens, the Internet is a unique information and education tool, and thus helps to promote freedom, democracy and human rights. The Internet facilitates new forms of business and promotes efficiency, competitiveness, and economic growth. Governments, the private sector, users, and other stakeholders all have a role to play in creating an environment in which the Internet can flourish in a balanced manner. In Deauville in 2011, for the first time at Leaders’ level, we agreed, in the presence of some leaders of the Internet economy, on a number of key principles, including freedom, respect for privacy and intellectual property, multi-stakeholder governance, cyber-security, and protection from crime, that underpin a strong and flourishing Internet. The “e-G8” event held in Paris on 24 and 25 May was a useful contribution to these debates….

The Internet and its future development, fostered by private sector initiatives and investments, require a favourable, transparent, stable and predictable environment, based on the framework and principles referred to above. In this respect, action from all governments is needed through national policies, but also through the promotion of international cooperation……

As we support the multi-stakeholder model of Internet governance, we call upon all stakeholders to contribute to enhanced cooperation within and between all international fora dealing with the governance of the Internet. In this regard, flexibility and transparency have to be maintained in order to adapt to the fast pace of technological and business developments and uses. Governments have a key role to play in this model.

We welcome the meeting of the e-G8 Forum which took place in Paris on 24 and 25 May, on the eve of our Summit and reaffirm our commitment to the kinds of multi-stakeholder efforts that have been essential to the evolution of the Internet economy to date. The innovative format of the e-G8 Forum allowed participation of a number of stakeholders of the Internet in a discussion on fundamental goals and issues for citizens, business, and governments. Its free and fruitful debate is a contribution for all relevant fora on current and future challenges.

We look forward to the forthcoming opportunities to strengthen international cooperation in all these areas, including the Internet Governance Forum scheduled next September in Nairobi and other relevant UN events, the OECD High Level Meeting on “The Internet Economy: Generating Innovation and Growth” scheduled next June in Paris, the London International Cyber Conference scheduled next November, and the Avignon Conference on Copyright scheduled next November, as positive steps in taking this important issue forward.”

The ITU Meeting in Dubai December 2012

The meeting of the International Telecommunication Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol, which was one of the important technologies that made the Internet possible, sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union – the ITU – a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai. ….

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, last September, the Shanghai Cooperation Organization – which counts China, Russia, Tajikistan, and Uzbekistan among its members – submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations – a multilateral treaty last revised in 1988 – in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.”

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was founded in 1865 and in the past has been concerned with technical communications issues such as the standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to ICT. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunications Regulations (ITRs) that had last been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012 the Final Acts of the Conference were published. The controversy centred on a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document prepared by a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was ultimately withdrawn, its governance model defined the Internet as an:

“international conglomeration of interconnected telecommunication networks,” and that “Internet governance shall be effected through the development and application by governments,” with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance.”

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the internet.” However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate or in some way govern and control the Internet. Although at first glance this may seem to be directed at the content layer, and to amount to a rather superficial attempt to embark upon the censorship of content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet, and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet, or to embark upon an incremental approach to regulation, have resulted in expressions of concern from another Internet pioneer, Sir Tim Berners-Lee, who, in addition to claiming that governments are suppressing online freedom, has issued a call for a Digital Magna Carta.

I have already written on the issue of a Digital Magna Carta or Bill of Rights here.

Clearly the efforts described indicate that some form of national government or collective government form of Internet Governance is on the agenda. Already the United Nations has become involved in the development of Internet Governance policy with the establishment of the Internet Governance Forum.

The Internet Governance Forum

The Internet Governance Forum describes itself as bringing

“people together from various stakeholder groups as equals, in discussions on public policy issues relating to the Internet. While there is no negotiated outcome, the IGF informs and inspires those with policy-making power in both the public and private sectors.  At their annual meeting delegates discuss, exchange information and share good practices with each other. The IGF facilitates a common understanding of how to maximize Internet opportunities and address risks and challenges that arise.

The IGF is also a space that gives developing countries the same opportunity as wealthier nations to engage in the debate on Internet governance and to facilitate their participation in existing institutions and arrangements. Ultimately, the involvement of all stakeholders, from developed as well as developing countries, is necessary for the future development of the Internet.”

The Internet Governance Forum is an open forum which has no members. It was established by the World Summit on the Information Society in 2006. Since then, it has become the leading global multi-stakeholder forum on public policy issues related to Internet governance.

Its UN mandate gives it convening power and the authority to serve as a neutral space for all actors on an equal footing. As a space for dialogue it can identify issues to be addressed by the international community and shape decisions that will be taken in other forums. The IGF can thereby be useful in shaping the international agenda and in preparing the ground for negotiations and decision-making in other institutions. The IGF has no power of redistribution, and yet it has the power of recognition – the power to identify key issues.

A small Secretariat was set up in Geneva to support the IGF, and the UN Secretary-General appointed a group of advisers, representing all stakeholder groups, to assist him in convening the IGF. The United Nations General Assembly agreed in December 2010 to extend the IGF’s mandate for another five years. The IGF is financed through voluntary contributions.

Zittrain describes the IGF as “diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where ‘stakeholders’ gather to express their views about Internet governance, which is now more fashionably known as ‘the creation of multi-stakeholder regimes’.”

Less Formal Yet Structured

The Engineering and Technical Standards Community

The internet governance models under discussion have in common the involvement of law or legal structures in some shape or form or, in the case of the cyber anarchists, an absence thereof.

Essentially internet governance falls within two major strands:

1.    The narrow strand involving the regulation of technical infrastructure and what makes the internet work.

2.    The broad strand dealing with the regulation of content, transactions and communication systems that use the internet.

The narrow strand regulation of internet architecture recognises that the operation of the internet and the superintendence of that operation involves governance structures that lack the institutionalisation that lies behind governance by law.

Although the internet has its origin with the United States Government, its development has seen little if any direct government involvement or oversight. The Defense Advanced Research Projects Agency (DARPA) was a funding agency providing money for development. It was not a governing agency, nor was it a regulator. Other agencies such as the Federal Networking Council and the National Science Foundation are not regulators; they are organisations that allow user agencies to communicate with one another. Although the United States Department of Commerce became involved with the internet once its potential commercial implications became clear, it too has maintained very much of a hands-off approach, and its involvement has primarily been with ICANN, with whom the Department has maintained a steady stream of Memoranda of Understanding over the years.

Technical control and superintendence of the internet rests with the network engineers and computer scientists who work out problems and provide solutions for its operation. There is no organisational charter. The structures within which decisions are made are informal, involving a network of interrelated organisations with names which at least give the appearance of legitimacy and authority. These organisations include the Internet Society (ISOC), an independent international non-profit organisation founded in 1992 to provide leadership in internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. The Internet Engineering Task Force (IETF) is a separate legal entity whose mission is to make the internet work better by producing high-quality, relevant technical documents that influence the way people design, use and manage the internet.

The Internet Architecture Board (IAB) is an advisory body to ISOC and also a committee of the IETF with an oversight role. Also housed within ISOC is the IETF Administrative Support Activity (IASA), which is responsible for the fiscal and administrative support of the IETF standards process. The IASA has a committee, the IETF Administrative Oversight Committee (IAOC), which carries out the responsibilities of the IASA in supporting the Internet Engineering Steering Group (IESG) and its working groups, the Internet Architecture Board (IAB), the Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has day-to-day operational responsibility for providing fiscal and administrative support through other activities, contractors and volunteers.

The central hub of these various organisations is the IETF. This organisation has no coercive power but is responsible for establishing internet standards, some of which, such as TCP/IP, are core standards and are non-optional. The compulsory nature of these standards comes not from any regulatory power but from the critical mass of network externalities involving internet users. Standards become economically mandatory, and there is an overall acceptance of IETF standards, which maintain the core functionality of the internet.
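The non-optional character of the core standards is visible to any programmer: every mainstream operating system exposes the IETF’s TCP/IP standards directly, and an ordinary network program simply assumes them. A minimal sketch (assuming outbound network access; example.com is a domain reserved for documentation and testing):

```python
import socket

# An ordinary program leans on IETF standards without asking:
# IP and TCP (RFC 9293) to connect, HTTP (RFC 9110) to converse.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(200).decode("ascii", errors="replace"))
```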

A characteristic of the IETF, and indeed of all the technical organisations involved in internet functionality, is the open process that theoretically allows any person to participate. The other characteristic of internet network organisations is the rough consensus by which decisions are made. Proposals are circulated in the form of a Request for Comments (RFC) to members of the internet engineering and scientific communities, and from this collaborative and consensus-based approach a new standard is agreed.

Given that the operation of the internet involves a technical process and the maintenance of the technical process depends on the activities of scientific and engineering specialists, it is fair to conclude that a considerable amount of responsibility rests with the organisations who set and maintain standards. Many of these organisations have developed considerable power structures around them without any formal governmental or regulatory oversight – an issue that may well need to be addressed. Another issue is whether these organisations have a legitimate basis to do what they are doing with such an essential infrastructure as the internet. The objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Their ability to work outside government bureaucracy enables greater efficiency.

However, the internet’s continued operation depends on a number of interrelated organisations which, while operating in an open and transparent manner in a technical collaborative consensus-based model, have little understanding of the public interest ramifications of their decisions. This aspect of internet governance is often overlooked. The technical operation and maintenance of the internet is superintended by organisations that have little or no interactivity with any of the formalised power structures that underlie the various “governance by law” models of internet governance. The “technical model” of internet governance is an anomaly arising not necessarily from the technology, but from its operation.

ICANN

Of those involved in the technical sphere of Internet governance, ICANN is perhaps the best known. Its governance of the “root” or addressing systems makes it a vital player in the Internet governance taxonomy and for that reason requires some detailed consideration.

ICANN – the Internet Corporation for Assigned Names and Numbers – was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a “chokehold” over the internet because it is the only aspect of the entire decentralised, global system of the internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities ICANN is able to choose who is able to access cyberspace and what they will see when they are there. ICANN’s control effectively amounts, in the words of David Post, to “network life or death”. Further, if ICANN chooses to impose conditions on access to the internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there.

The obvious implication for governance theorists is that the ICANN model is not a theory but a practical reality. ICANN is the first indigenous cyberspace governance institution to wield substantive power and demonstrate a real capacity for effective enforcement. Ironically, while other internet governance models have demonstrated a sense of purpose but an acute lack of power, ICANN has suffered from excess power and an acute lack of purpose. ICANN arrived at its present position almost, but not quite, by default and has been struggling to find a meaningful raison d’être ever since. In addition, it is pulled by opposing forces, all anxious to ensure that their vision of the new frontier prevails.

ICANN’s “democratic” model of governance has been attacked as unaccountable, anti-democratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN has been largely unresponsive to these criticisms and it has only been after concerted publicity campaigns by opponents that the board has publicly agreed to change aspects of the process.

As a governance model, a number of key points have emerged:

1.    ICANN demonstrates the internet’s enormous capacity for marshalling global opposition to governance structures that are not favourable to the interests of the broader internet community.

2.    Following on from point one, high profile, centralised institutions such as ICANN make extremely good targets for criticism.

3.    Despite enormous power and support from similarly powerful backers, public opinion continues to prove a highly effective tool, at least in the short run, for stalling the development of unfavourable governance schemes.

4.    ICANN reveals the growing involvement of commercial and governmental interests in the governance of the internet and their reluctance to be directly associated with direct governance attempts.

5.    ICANN demonstrates an inability to project its influence beyond its core functions to matters of general policy or governance of the internet.

ICANN lies within the less formal area of the governance taxonomy: although it operates with a degree of autonomy, it retains a formal character. Its power is internationally based (and although still derived from the United States government, there is a desire by the US to “de-couple” its involvement with ICANN). It has greater private rather than public sources of authority, in that its power derives from relationships with registries, ISPs and internet users rather than sovereign states. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes.

The Polycentric Model of Internet Governance

The Polycentric Model embraces, for certain purposes, all of the preceding models. It does not envelop them, but rather employs them for specific governance purposes.

This theory has been developed by Professor Scott Shackelford. In his article “Toward Cyberpeace: Managing Cyberattacks Through Polycentric Governance”, Shackelford locates Internet governance within the special context of cybersecurity and the maintenance of cyberpeace. He contends that the international community must come together to craft a common vision for cybersecurity while the situation remains malleable. Given the difficulties of accomplishing this in the near term, bottom-up governance and dynamic, multilevel regulation should be undertaken consistent with polycentric analysis.

While he sees a role for governments and commercial enterprises he proposes a mixed model. Neither governments nor the private sector should be put in exclusive control of managing cyberspace since this could sacrifice both liberty and innovation on the mantle of security, potentially leading to neither.

The basic notion of polycentric governance is that a group facing a collective action problem should be able to address it in whatever way they see fit, which could include using existing or crafting new governance structures; in other words, the governance regime should facilitate the problem-solving process.

The model demonstrates the benefits of self-organization, networking regulations at multiple levels, and the extent to which national and private control can co-exist with communal management.  A polycentric approach recognizes that diverse organizations and governments working at multiple levels can create policies that increase levels of cooperation and compliance, enhancing flexibility across issues and adaptability over time.

Such an approach, a form of “bottom-up” governance, contrasts with what may be seen as an increasingly state-centric approach to Internet governance and cybersecurity which has become apparent in fora such as the G8 Conference in Deauville in 2011 and the ITU Conference in Dubai in 2012. The approach also recognises that cyberspace has its own qualities or affordances, among them its decentralised nature and the continuing dynamic change flowing from permissionless innovation. To put it bluntly, it is difficult to foresee the effects of regulatory efforts, which are generally sluggish in development and enactment; the particular matter the regulation tried to address has often changed by the time the rules take effect, leaving the regulatory system no longer relevant. Polycentric regulation provides a multi-faceted response to cybersecurity issues in keeping with the complexity of crises that might arise in cyberspace.

So how should the polycentric model work? First, allies should work together to develop a common code of cyber conduct that includes baseline norms, with negotiations continuing on a harmonized global legal framework. Second, governments and CNI operators should establish proactive, comprehensive cybersecurity policies that meet baseline standards and require hardware and software developers to promote resiliency in their products without going too far and risking balkanization. Third, the recommendations of technical organizations such as the IETF should be made binding and enforceable when taken up as industry best practices. Fourth, governments and NGOs should continue to participate in U.N. efforts to promote global cybersecurity, but also form more limited forums to enable faster progress on core issues of common interest. And fifth, training campaigns should be undertaken to share information and educate stakeholders at all levels about the nature and extent of the cyber threat.

Code is Law

Located centrally within the taxonomy and closely related to the Engineering and Technical Standards category of governance models is the “code is law” model, developed by Harvard professor Lawrence Lessig and, to a lesser extent, Joel Reidenberg. The school in many ways encompasses the future of the internet governance debate. The system demonstrates a balance of opposing formal and informal forces and represents a paradigm shift in the way internet governance is conceived, because the school largely ignores the formal dialectic around which the governance debate is centred and has instead developed a new concept of “governance and the internet”. While Lessig’s work has been favourably received even by his detractors, it is still too early to see whether it is indeed a correct description of the future of internet governance, or merely a dead end. Certainly, it is one of the most discussed concepts of cyberspace jurisprudence.

Lessig asserts that human behaviour is regulated by four “modalities of constraint”: law, social norms, markets and architecture. Each of these modalities influences behaviour in different ways:

1.    law operates via sanction;

2.    markets operate via supply and demand and price;

3.    social norms operate via human interaction; and

4.    architecture operates via the environment.

Governance of behaviour can be achieved by any one or any combination of these four modalities. Law is unique among the modalities in that it can directly influence the others.

Lessig argues that in cyberspace, architecture is the dominant and most effective modality to regulate behaviour. The architecture of cyberspace is “code” — the hardware and software — that creates the environment of the internet. Code is written by code writers; therefore it is code writers, especially those from the dominant software and hardware houses such as Microsoft and AOL, who are best placed to govern the internet. In cyberspace, code is law in the imperative sense of the word. Code determines what users can and cannot do in cyberspace.

“Code is law” does not mean lack of regulation or governmental involvement, although any regulation must be carefully applied. Neil Weinstock Netanel argues that “contrary to the libertarian impulse of first generation cyberspace scholarship, preserving a foundation for individual liberty, both online and off, requires resolute, albeit carefully tailored, government intervention”. On this view, internet architecture and code effectively regulate individual activities and choices in the same way law does, and market actors need to use these regulatory technologies in order to gain a competitive advantage. Thus, it is the role of government to set the limits on private control to facilitate this.

The crux of Lessig’s theory is that law can directly influence code. Governments can regulate code writers and ensure the development of certain forms of code. Effectively, law and those who control it, can determine the nature of the cyberspace environment and thus, indirectly what can be done there. This has already been done. Code is being used to rewrite Copyright Law. Technological Protection Measures (TPMs) allow content owners to regulate the access and/or use to which a consumer may put digital content. Opportunities to exercise fair uses or permitted uses can be limited beyond normal user expectations and beyond what the law previously allowed for analogue content. The provision of content in digital format, the use of TPMs and the added support that legislation gives to protect TPMs effectively allows content owners to determine what limitations they will place upon users’ utilisation of their material. It is possible that the future of copyright lies not in legislation (as it has in the past) but in contract.
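The point can be made concrete with a small illustration. What follows is a minimal sketch in Python, not a description of any actual TPM: the package structure, the licence field and the checking logic are all invented for illustration. What it shows is that the permitted uses are decided by the code writer, not by the Copyright Act, and that a use the law might well permit is simply never offered:

    def open_content(package, requested_use):
        # The embedded licence, not the general law, decides what the
        # user may do with the content.
        licence = package["licence"]
        if requested_use not in licence["permitted_uses"]:
            raise PermissionError("use '%s' blocked by TPM" % requested_use)
        return package["content"]

    package = {
        "content": "digital footage ...",
        "licence": {"permitted_uses": ["view"]},  # copying is never on offer
    }

    print(open_content(package, "view"))   # permitted by the code
    try:
        open_content(package, "copy")      # perhaps a fair use under the Act,
    except PermissionError as err:         # but the code forbids it regardless
        print(err)

Whatever a court might later say about fair dealing, the user’s practical rights are exhausted by what the code permits at the moment of use.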

 

Informal Models and Aspects of Digital Liberalism

Digital liberalism is not so much a model of internet governance as it is a school of theorists who approach the issue of governance from roughly the same point on the political compass: (neo)-liberalism. Of the models discussed, digital liberalism is the broadest. It encompasses a series of heterogeneous theories that range from the cyber-independence writings of John Perry Barlow at one extreme, to the more reasoned private legal ordering arguments of Froomkin, Post and Johnson at the other. The theorists are united by a common “hands off” approach to the internet and a tendency to respond to governance issues from a moral, rather than a political or legal perspective.

Regulatory Arbitrage – “Governance by whomever users wish to be governed by”

The regulatory arbitrage school represents a shift away from the formal schools, and towards digital liberalism. “Regulatory arbitrage” is a term coined by the school’s principal theorist, Michael Froomkin, to describe a situation in which internet users “migrate” to jurisdictions with regulatory regimes that give them the most favourable treatment. Users are able to engage in regulatory arbitrage by capitalising on the unique geographically neutral nature of the internet. For example, someone seeking pirated software might frequent websites geographically based in a jurisdiction that has a weak intellectual property regime. On the other side of the supply chain, the supplier of gambling services might, despite residing in the United States, deliberately host his or her website out of a jurisdiction that allows gambling and has no reciprocal enforcement arrangements with the United States.

Froomkin suggests that attempts to regulate the internet face immediate difficulties because of the very nature of the entity that is to be controlled. He draws upon the analogy of the mythological Hydra, but whereas the beast was a monster, the internet may be predominantly benign. Froomkin identifies the internet’s resistance to control as being caused by the following two technologies:

1.    The internet is a packet-switching network. This makes it difficult for anyone, including governments, to block or monitor information originating from large numbers of users.

2.    Users have access to powerful military-grade cryptography on the internet that can, if used properly, make messages unreadable to anyone but the intended recipient.

As a result, internet users have access to powerful tools which can be used to enable anonymous communication. This is unless, of course, their governments impose strict access controls, run an extensive monitoring programme, or persuade their citizens not to use these tools through liability rules or the criminal law.
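Froomkin’s second point can be illustrated with a short sketch using the PyNaCl library (one library among many; the key names and the message are invented for illustration). A message sealed to the recipient’s public key is unreadable to any intermediary, state or private, who does not hold the matching private key:

    from nacl.public import PrivateKey, SealedBox

    # The recipient generates a key pair and publishes the public half.
    recipient_key = PrivateKey.generate()

    # Anyone may encrypt to the public key ...
    sealed = SealedBox(recipient_key.public_key).encrypt(b"meet at the usual place")

    # ... but only the holder of the private key can decrypt.
    plaintext = SealedBox(recipient_key).decrypt(sealed)
    print(plaintext.decode())

An eavesdropper who intercepts the sealed message sees only ciphertext, which is why, as Froomkin argues, a government that cannot control the endpoints can monitor the pipes to little effect.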

Froomkin’s theory is principally informal in character. Private users, rather than public institutions, are responsible for choosing the governance regime they adhere to. The mechanism that allows this choice is technical and works in opposition to legally based models. Finally, the model is effectively global, as users choose from a world of possibilities in deciding which particular regime(s) to submit to, rather than being confined to a single national regime. While undeniably informal, however, the model does not go as far as digital liberalism.

Unlike digital liberalists who advocate a separate internet jurisdiction encompassing a multitude of autonomous self-regulating regimes within that jurisdiction, Froomkin argues that the principal governance unit of the internet will remain the nation-state. He argues that users will be free to choose from the regimes of states rather than be bound to a single state, but does not yet advocate the electronic federalism model of digital liberalism.

Digital Libertarianism – Johnson and Post

Digital liberalism is the oldest of the internet governance models and represents the original response to the question: “How will the internet be governed?” Digital liberalism developed in the early 1990s as the internet began to show the first inklings of its future potential. The development of a Graphical User Interface together with web browsers such as Mosaic made the web accessible to the general public for the first time. Escalating global connectivity and a lack of understanding or reaction by world governments contributed to a sense of euphoria and digital freedom that was reflected in the development of digital liberalism.

In its early years digital liberalism evolved around the core belief that “the internet cannot be controlled” and that consequently “governance” was a dead issue. By the mid-1990s advances in technology and the first government attempts to control the internet saw this descriptive claim gradually give way to a competing normative claim that “the internet can be controlled but it should not be”. These claims are represented as the sub-schools of digital liberalism — cyberanarchism and digital libertarianism.

In “And How Shall the Net be Governed?” David Johnson and David Post posed the following questions:

Now that lots of people use (and plan to use) the internet, many — governments, businesses, techies, users and system operators (the “sysops” who control ID issuance and the servers that hold files) — are asking how we will be able to:

(1)   establish and enforce baseline rules of conduct that facilitate reliable communications and trustworthy commerce; and

(2)   define, punish and prevent wrongful actions that trash the electronic commons or impose harm on others.

In other words, how will cyberspace be governed, and by what right?

Post and Johnson point out that one of the advantages of the internet is its chaotic and ungoverned nature. As to the question of whether the net must be governed at all, they note that the three-judge Federal Court in Philadelphia that “threw out the Communications Decency Act on First Amendment grounds seemed thrilled by the ‘chaotic’ and seemingly ungovernable character of the net”. Post and Johnson argue that because of its decentralised architecture and lack of a centralised rule-making authority the net has been able to prosper. They assert that the freedom the internet allows and encourages has meant that sysops have been free to impose their own rules on users. However, the ability of the user to choose which sites to visit, and which to avoid, has meant that the tyranny of system operators has been avoided and the adverse effect of any misconduct by individual users has been limited.

 Johnson and Post propose the following four competing models for net governance:

1.    Existing territorial sovereigns seek to extend their jurisdiction and amend their own laws as necessary to attempt to govern all actions on the net that have substantial impacts upon their own citizenry.

2.    Sovereigns enter into multilateral international agreements to establish new and uniform rules specifically applicable to conduct on the net.

3.    A new international organisation can attempt to establish new rules — a new means of enforcing those rules and of holding those who make the rules accountable to appropriate constituencies.

4.    De facto rules may emerge as the result of the interaction of individual decisions by domain name and IP registries (dealing with conditions imposed on possession of an on-line address), by system operators (local rules to be applied, filters to be installed, who can sign on, with which other systems connection will occur) and users (which personal filters will be installed, which systems will be patronised and the like).

The first three models are centralised or semi-centralised systems and the fourth is essentially a self-regulatory and evolving system. In their analysis, Johnson and Post consider all four and conclude that territorial laws applicable to online activities where there is no relevant geographical determinant are unlikely to work, and international treaties to regulate, say, ecommerce are unlikely to be drawn up.

Johnson and Post proposed a variation of the third option — a new international organisation that is similar to a federalist system, termed “net federalism”.

In net federalism, individual network systems rather than territorial sovereignty are the units of governance. Johnson and Post observe that the law of the net has emerged, and can continue to emerge, from the voluntary adherence of large numbers of network administrators to basic rules of law (and dispute resolution systems to adjudicate the inevitable inter-network disputes), with individual users voting with their electronic feet to join the particular systems they find most congenial. Within this model multiple network confederations could emerge. Each may have individual “constitutional” principles — some permitting and some prohibiting, say, anonymous communications, others imposing strict rules regarding redistribution of information and still others allowing freer movement — enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.

Digital liberalism is clearly an informal governance model and for this reason has its attractions for those who enjoyed the free-wheeling approach to the internet in the early 1990s. It advocates almost pure private governance, with public institutions playing a role only in so much as they validate the existence and independence of cyber-based governance processes and institutions. Governance is principally to be achieved by technical solutions rather than legal process and occurs at a global rather than national level. Digital liberalism is very much the antithesis of the digital realist school and has been one of the two driving forces that have characterised the internet governance debate in the last decade.

Cyberanarchism – John Perry Barlow

In 1990, the FBI was involved in a number of actions against a perceived “computer security threat” posed by a Texas role-playing game developer named Steve Jackson. Following this, John Perry Barlow and Mitch Kapor formed the Electronic Frontier Foundation. Its mission statement says that it was “established to help civilize the electronic frontier; to make it truly useful and beneficial not just to a technical elite, but to everyone; and to do this in a way which is in keeping with our society’s highest traditions of the free and open flow of information and communication”.

One of Barlow’s significant contributions to thinking on internet regulation was the article “Declaration of the Independence of Cyberspace” which, although idealistic in expression and content, eloquently expresses a point of view held by many regarding efforts to regulate cyberspace. The declaration followed the passage of the Communications Decency Act. In “The Economy of Ideas: Selling Wine without Bottles on the Global Net”, Barlow challenges assumptions about intellectual property in the digital online environment. He suggests that the nature of the internet environment means that different legal norms must apply. While the theory has its attractions, especially for the young and the idealistic, the fact of the matter is that “virtual” actions are grounded in the real world, are capable of being subject to regulation and, subject to jurisdiction, are capable of being subject to sanction. Indeed, we only need to look at the Digital Millennium Copyright Act (US) and the Digital Agenda Act 2000 (Australia) to gain a glimpse of how, when confronted with reality, Barlow’s theory dissolves.

Regulatory Assumptions

In understanding how regulators approach the control of internet content, one must first understand some of the assumptions that appear to underlie any system of data network regulation.

First and foremost, sovereign states have the right to regulate activity that takes place within their own borders. This right to regulate is moderated by certain international obligations. There are, of course, difficulties in identifying the exact location of some online actions, but the internet functions only at the direction of the persons who use it. These people live, work, and use the internet while physically located within the territory of a sovereign state, and so it is unquestionable that states have the authority to regulate their activities.

A second assumption is that a data network infrastructure is critical to the continued development of national economies. Data networks are a regular business tool like the telephone. The key to the success of data networking infrastructure is its speed, widespread availability, and low cost. If this last point is in doubt, one need only consider that the basic technology of data networking has existed for more than 20 years. The current popularity of data networking, and of the internet generally, can be explained primarily by the radical lowering of costs related to the use of such technology. A slow or expensive internet is no internet at all.

The third assumption is that international trade requires some form of international communication. As more communication takes place in the context of data networking, then continued success in international trade will require sufficient international data network connections.

The fourth assumption is that there is a global market for information. While it is still possible to internalise the entire process of information gathering and synthesis within a single country, doing so is extremely costly. If such expensive systems represent the only source of information available, domestic businesses will be placed at a competitive disadvantage in the global marketplace.

The final assumption is that unpredictability in the application of the law or in the manner in which governments choose to enforce the law will discourage both domestic and international business activity. In fashioning regulations for the internet, it is important that the regulations are made clear and that enforcement policies are communicated in advance so that persons have adequate time to react to changes in the law.

Concluding Thoughts

Governance and the Properties of the Digital Paradigm

Regulating or governing cyberspace faces challenges that lie within the properties or affordances of the Digital Paradigm. To begin with, territorial sovereignty concepts, which have been the basis for most regulatory or governance activity, rely on physical and defined geographical realities. By its nature, a communications system like the Internet challenges that model. Although the Digital Realists assert that effectively nothing has changed, and that is true to a limited extent, the governance functions that can be exercised are only applicable to that part of cyberspace that sits within a particular geographical space. Because the Internet is a distributed system it is impossible for any one sovereign state to impose its will upon the entire network. It is for this reason that some nations are setting up their own networks, independent of the Internet. Although the perception is that the Internet is controlled by the US, the reality is that with nationally based “splinternets” sovereigns have greater ability to assert control over the network, both in terms of the content layer and the various technical layers beneath that make up the medium. The distributed network presents the first challenge to national or territorially based regulatory models.

Of course, aspects of sovereign power may be ceded by treaty or by membership of international bodies such as the United Nations. But does, say, the UN have the capacity to impose a worldwide governance system over the Internet? True, it created the IGF, but that organisation has no power and is a multi-stakeholder policy think tank. Any attempt at a global governance model requires international consensus and, as the ITU meeting in Dubai in December 2012 demonstrated, that is not forthcoming at present.

Two other affordances of the Digital Paradigm challenge the establishment of traditional regulatory or governance systems. Those affordances are continuing disruptive change and permissionless innovation. The very nature of the legislative process is measured. Often it involves cobbling together a consensus. All of this takes time, and by the time there is a crystallised proposition the mischief that the regulation is trying to address either no longer exists, or has changed, or has taken another form. The now limited usefulness (and therefore effectiveness) of the provisions of ss 122A–122P of the New Zealand Copyright Act 1994 demonstrates this proposition. Furthermore, the nature of the legislative process, involving reference to Select Committees and the prioritisation of other legislation within the time available in a Parliamentary session, means that a “swift response” to a problem is very rarely possible.

Permissionless innovation adds to the problem. As long as it continues (and there is no sign that the inventiveness of the human mind is likely to slow down) developers and software writers will continue to change the digital landscape, meaning that the target of a regulatory system may be continually moving, and certainty of law, a necessity in any society that operates under the Rule of Law, may be compromised. The file sharing provisions of the New Zealand Copyright Act again provide an example. The definition of file sharing is restricted to a limited number of software applications, most obviously BitTorrent. Workarounds such as virtual private networks and magnet links, along with anonymisation proxies, fall outside the definition. In addition, the definition addresses sharing and does not include a person who downloads but does not share by uploading infringing content.

Associated with disruptive change and permissionless innovation are some other challenges to traditional governance thinking. Participation and interactivity, along with exponential dissemination, emphasise the essentially bottom-up participatory nature of the Internet ecosystem. Indeed, this is reflected in the quality of permissionless innovation, whereby any coder may launch an app without any regulatory sign-off. The Internet is perhaps the greatest manifestation of democracy that there has been. It is the Agora of Athens on a global scale, a cacophony of comment, much of it trivial, but the fact is that everyone has the opportunity to speak and potentially to be heard. Spiro Agnew’s “silent majority” need be silent no longer. The events of the Arab Spring showed the way in which the Internet can be used to motivate populaces in the face of oppressive regimes. It seems unlikely that an “undemocratic” regulatory regime could be put in place absent the “consent of the governed” and, despite the usual level of apathy that attends political matters, it seems unlikely, given its participatory nature, that netizens would tolerate such interference.

Perhaps the answer to the issue of Internet Governance is already apparent – a combination of Lessig’s Code is Law and the technical standards organisations that actually make the Internet work, such as ISOC, the IETF and ICANN. Much criticism has been levelled at ICANN’s lack of accountability, but in many respects similar issues arise with the IETF and IAB, dominated as they are by groups of engineers. But in the final analysis, perhaps this is the governance model that is the most suitable. The objective of engineers is to make systems work at the most efficient level. Surely this is the sole objective of any regulatory regime. Furthermore, governance by technicians, if it can be called that, contains safeguards against political, national or regional capture. By all means, local governments may regulate content. But that is not the primary objective of Internet governance. Internet governance addresses the way in which the network operates. And surely that is an engineering issue rather than a political one.

 The Last Word

Perhaps the last word on the general topic of internet regulation should be left to Tsutomu Shimomura, a computational physicist and computer security expert who was responsible for tracking down the hacker Kevin Mitnick, an episode he recounted in the excellent book Takedown:

The network of computers known as the internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. That community was based on a shared sense of trust. Today, the electronic walls going up everywhere on the Net are the clearest proof of the loss of that trust and community. It’s a great loss for all of us.

Back to the Future – Google Spain and the Restoration of Partial and Practical Obscurity

Arising from the pre-digital paradigm are two concepts that had important implications for privacy. Their continued validity as a foundation for privacy protection has been challenged by the digital paradigm. The terms are practical and partial obscurity, both descriptive of information accessibility and recollection in the pre-digital paradigm, and of a challenge imposed by the digital paradigm, especially for privacy. The terms, as will become apparent, are interrelated.

Practical obscurity refers to the quality of availability of information which may be of a private or public nature[1].  Such information is usually in hard copy format, may be indexed, is in a central location or locations, is frequently location-dependent in that the information that is in a particular location will refer only to the particular area served by that location, requires interaction with officials or bureaucrats to locate the information and, finally, in terms of accessing the information, requires some knowledge of the particular file within which the information source lies. Practical obscurity means that information is not indexed on key words or key concepts but generally is indexed on the basis of individual files or in relation to a named individual or named location.  Thus, it is necessary to have some prior knowledge of information to enable a search for the appropriate file to be made.

Partial obscurity addresses information of a private nature which may earlier have been in the public arena, whether in a newspaper, a television or radio broadcast, or some other form of mass media communication, and which is, at a later date, recalled in part: because memory cannot retain all the detail of all the information an individual has received, the particulars become subsumed. Thus, a broad sketch of the information renders the details obscure, leaving only the major heads of the information available in memory, hence the term partial obscurity. To recover particulars of the information will require resort to film, video, radio or newspaper archives, thus bringing into play the concepts of practical obscurity. Partial obscurity may enable information which is subject to practical obscurity to be obtained more readily, because some of the informational references enabling the location of the practically obscure information can be provided.

The Digital Paradigm and Digital Information Technologies challenge these concepts. I have written elsewhere about the nature of the underlying properties or qualities of the digital medium that sits beneath the content or the “message”. Peter Winn has made the comment “When the same rules that have been worked out for the world of paper records are applied to electronic records, the result does not preserve the balance worked out between the competing policies in the world of paper records, but dramatically alters that balance.”[2]

A property present in digital technologies and very relevant to this discussion is that of searchability. Digital systems allow the retrieval of information with a search utility that can operate “on the fly” and may produce results that are more comprehensive than a mere index. The level of analysis may go deeper than information drawn from the text itself: writing styles and the use of language or “stock phrases” may be analysed, allowing a more penetrating and efficient examination of the text than was possible in print.
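A crude sketch shows why this property is corrosive of practical obscurity (the documents and queries are invented for illustration). A few lines of code build an inverted index over a body of records, after which any record mentioning a term can be retrieved with no prior knowledge of which file to consult:

    from collections import defaultdict

    documents = {
        "file_0417": "auction of property for recovery of social security debts",
        "file_0993": "report of a council planning hearing",
    }

    # Build an inverted index: every word points to the files containing it.
    index = defaultdict(set)
    for name, text in documents.items():
        for word in text.lower().split():
            index[word].add(name)

    # Keyword search "on the fly": no need to know the file in advance.
    print(index["auction"])  # -> {'file_0417'}
    print(index["hearing"])  # -> {'file_0993'}

Where the pre-digital searcher needed to know which file to ask for, the digital searcher needs only a word.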

The most successful search engine is Google, which has been available since 1998. So pervasive and popular is Google’s presence that modern English has acquired the verb “to Google”, meaning “To search for information about (a person or thing) using the Google search engine” or “To use the Google search engine to find information on the Internet”.[3] The ability to locate information using search engines returns us to the print-based properties of fixity and preservation and also enhances the digital property of “the document that does not die”.

A further property presented by digital systems is that of accessibility. If one has the necessary equipment – a computer, modem/router and an internet connection – information is accessible to an extent not possible in the pre-digital environment. In that earlier paradigm, information was located across a number of separate media. Some had the preservative quality of print. Some, such as television or radio, required personal attendance at a set time. In some cases information might be located in a central repository like a library or archive. These are aspects of partial and practical obscurity.

The Internet and convergence reverse the pre-digital activity of information seeking, turning it into one of information obtaining. The inquirer need not leave his or her home or office and go to another location where the information may be. The information is delivered via the Internet. As a result, with the exception of the time spent locating the information via Google, more time can be spent considering, analysing or following up the information. Although this may be viewed as an aspect of information dissemination, the means of access is revolutionarily different.

Associated with this characteristic of informational activity is the way in which the Internet enhances the immediacy of information. Not only is the inquirer no longer required to leave his or her home or place of work, but the information can be delivered at a speed limited only by the download speed of an internet connection. Information that might once have involved a trip to a library, a search through index cards and a perusal of a number of books or articles may now, by means of the Internet, take a few keystrokes and mouse clicks and a few seconds to be presented on screen.

This enhances our expectations about the access to and availability of information. We expect the information to be available. If Google can’t locate it, it probably doesn’t exist on-line. If the information is available it should be presented to us in seconds. Although material sought from Wikipedia may be information rich, one of the most common complaints about accessibility is the time that it takes to download onto a user’s computer. Yet in the pre-digital age a multi-contributing information resource (an encyclopedia) could only be consulted at a library, and the time taken to access that information could be measured in hours, depending upon the location of the library and the efficiency of the transport system used.

Associated with accessibility of information is the fact that it can be preserved by the user. The video file can be downloaded. The image or the text can be copied. Although this has copyright implications, substantial quantities of content are copied and are preserved by users, and frequently may be employed for other purposes such as inclusion in projects or assignments or academic papers.  The “cut and paste” capabilities of digital systems are well known and frequently employed and are one of the significant consequences of information accessibility that the Internet allows.

The “Google Spain” Decision and the “Right to Be Forgotten”

The decision of the European Court of Justice in Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González has the potential to significantly change the informational landscape enabled by digital technologies. I do not intend to analyse the entire decision but rather focus on one aspect of it – the discussion about the so-called “right to be forgotten.” The restrictions placed on Google and other search engines, as opposed to the provider of the particular content, demonstrate a significant and concerning inconsistency of approach.

The complaint by Mr González was this. When an internet user entered Mr Costeja González’s name in the Google search engine, he or she would obtain links to two pages of the La Vanguardia newspaper, of 19 January and 9 March 1998 respectively. In those publications was an announcement mentioning Mr Costeja González’s name in relation to a real-estate auction connected with attachment proceedings for the recovery of social security debts.

Mr González requested, first, that La Vanguardia be required either to remove or alter those pages so that the personal data relating to him no longer appeared or to use certain tools made available by search engines in order to protect the data.

Second, he requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that they ceased to be included in the search results and no longer appeared in the links to La Vanguardia. Mr González stated in this context that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.

The effect of the decision is that the Court was prepared to allow the particular information – the La Vanguardia report – to remain. The Court specifically did not require that material be removed, even though the argument advanced in respect of the claim against Google was essentially the same – the attachment proceedings had been fully resolved for a number of years and reference to them was now entirely irrelevant. What the Court did was to make it very difficult, if not almost impossible, for a person to locate the information with ease.

The Court’s exploration of the “right to be forgotten” was collateral to its main analysis about privacy, yet the “right to be forgotten” was developed as an aspect of privacy – a form of gloss on fundamental privacy principles. The issue was framed in this way. Should the various statutory and directive provisions be interpreted as enabling Mr González to require Google to remove, from the list of results displayed following a search made for his name, links to web pages published lawfully by third parties and containing true information relating to him, on the ground that that information may be prejudicial to him or that he wishes it to be “forgotten” after a certain time? It was argued that the “right to be forgotten” was an element of Mr González’s privacy rights which overrode the legitimate interests of the operator of the search engine and the general interest in freedom of information.

The Court observed that even initially lawful processing of accurate information may, in the course of time, become incompatible with the privacy directive where that information is no longer necessary in the light of the purposes for which it was originally collected or processed. That is so in particular where the data appear to be inadequate, irrelevant or no longer relevant, or excessive in relation to those purposes and in the light of the time that has elapsed.

What the Court is saying is that notwithstanding that information may be accurate or true, it may no longer be sufficiently relevant and as a result be transformed into information which is incompatible with European privacy principles. The original reasons for the collection of the data may, at a later date, no longer pertain. It follows from this that individual privacy requirements may override any public interest that may have been relevant at the time that the information was collected.

In considering requests to remove links it was important to consider whether a data subject like Mr González had a right that the information relating to him personally should, at a later point in time, no longer be linked to his name by a list of results displayed following a search based on his name. In this connection, the issue of whether or not the information may be prejudicial to the “data subject” need not be considered. The information may be quite neutral in terms of effect. The criterion appears to be one of relevance at a later date.

Furthermore the privacy rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name.

One has to wonder about the use of language in this part of the decision. Certainly, the decision is couched in a very formalised and somewhat convoluted style that one would associate with a bureaucrat rather than a judge articulating reasons for a decision. But what does the Court mean when it says “as a rule”? Does it have the vernacular meaning of “usually”, or does it mean what it says – that the rule is that individual privacy rights override the economic interests of the search engine operator and the interest of the general public in being able to locate information? If the latter interpretation is correct, that is a very wide-ranging rule indeed.

However, the Court continued, that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question.

Thus if a person has a public profile, for example in the field of politics, business or entertainment, there may be a higher public interest in having access to information.

Finally the Court looked at the particular circumstances of Mr González. The information reflected upon Mr González’s private life. Its initial publication was some 16 years earlier. Presumably the fact of attachment proceedings and a real estate auction for the recovery of social security debts was no longer relevant within the context of Mr González’s life at the time of the complaint. Thus the Court held that Mr González had established a right that the information should no longer be linked to his name by means of such a list.

“Accordingly, since in the case in point there do not appear to be particular reasons substantiating a preponderant interest of the public in having, in the context of such a search, access to that information, a matter which is, however, for the referring court to establish, [Mr González] may require those links to be removed from the list of results.”

There is an interesting comment in this final passage. The ECJ decision is on matters of principle. It defines tests which the referring Court should apply. Thus the referring Court still has to consider on the facts whether there are particular reasons that may substantiate a preponderant public interest in the information, although the ECJ stated that it did not consider such facts to be present.

Matters Arising

There are a number of issues that arise from this decision. The reference to the “right to be forgotten” is made at an early stage in the discussion but the use of the phrase is not continued. It is developed as an aspect of privacy within the context of the continued use of data acquired for a relevant purpose at one point in time, but the relevance of which may not be so crucial at a later point in time. One of the fundamental themes underlying most privacy laws is that of collection and retention of data for a particular purpose. The ECJ has introduced an element of temporal relevance into that theme.

A second issue restates what I said before. The information about the attachment proceedings and real estate sale which Mr González faced in 1998 was still “at large” on the Internet. In the interests of a consistent approach, an order should have been made taking that information down. It was that information that was Mr González’s concern. Google was merely a data processor that made it easy to access that information. So the reference may not appear in a Google search, but the underlying and now “irrelevant” information still remains.
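The asymmetry can be expressed in a few lines of code (a toy model with invented page and list names, not a description of Google’s systems). Delisting operates at the search layer, so the suppressed name query returns nothing while the source page remains on the web and is still returned for any other query:

    web_pages = {
        "lavanguardia.example/1998-notice":
            "attachment proceedings and real estate auction involving Mario Costeja González",
    }

    # Delisting binds the search engine, not the publisher.
    delisted_names = {"mario costeja gonzález"}

    def search(query):
        if query.lower() in delisted_names:
            return []   # the name query is suppressed ...
        return [url for url, text in web_pages.items()
                if query.lower() in text.lower()]

    print(search("Mario Costeja González"))  # -> []
    print(search("real estate auction"))     # -> ['lavanguardia.example/1998-notice']

The record is not forgotten; it has merely been made practically obscure again.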

A third issue relates to access to historical information and to primary data. Historians value primary data. Letters, manuscripts, records and reports from times gone by allow us to reconstruct the social setting within which people carried out their daily lives and against which the great events of the powerful and the policy makers took place. One only has to attempt a research project covering a period, say, four hundred years ago to understand the huge problems that may be encountered as a result of gaps in information retained largely if not exclusively in manuscript form, most of which is unindexed. A search engine such as Google aids in the retrieval of relevant information. And it is a fact that social historians rely on the “stories” of individuals to illustrate a point or justify an hypothesis. The removal of references to these stories, or of the primary data itself, will be a sad loss to historians and social science researchers. What is concerning is that it is the “data subject” who is going to determine what the historical archive will contain – at least from an indexing perspective.

A fourth issue presents something of a conundrum. Imagine that A had information published about him 20 years ago regarding certain business activities that may have been controversial. Assume that 20 years later A has put all that behind him and is a respected member of the community, and his activities in the past bear no relevance to his present circumstances. Conceivably, following the approach of the ECJ, he might require Google to remove search results referring to those events from queries on his name. Now assume a year or so later that A once again becomes involved in a controversial business activity. Searches on his name would reveal the current controversy, but not the earlier one. His earlier activities would remain under a shroud – at least as far as Google searches are concerned. Yet it could validly be argued that his earlier activities are very relevant in light of his subsequent actions. How do we get that information restored to the Google search results? Does a news media organisation which has its own information resources, and thus may have some “institutional memory” of the earlier event, go to Google and request restoration of the earlier results?

The example I have given demonstrates how relevance may be a dynamic beast and may be a rather uncertain basis for something as elevated as a right and certainly as a basis for allowing a removal of results from a search engine as a collateral element of a privacy right.

Another interesting conundrum is presented for Mr González himself. By instituting proceedings he has highlighted the very matter that he wished to have removed from the search results. To make it worse for Mr González and his desire that the information about his 1998 activities remain obscure, the decision of the ECJ has been the subject of wide-ranging international comment. The ECJ makes reference to his earlier difficulties and, given that the timing of those difficulties is a major consideration in the Court’s assessment of relevance, those business problems of 1998 have arguably taken on a new and striking relevance in the context of the ECJ’s decision, a relevance which would eliminate any privacy interest he might have had but for the case. If Mr González wanted his name and affairs to remain difficult to find, his efforts have had the opposite effect.

Conclusion

But there are other aspects of the decision that are more fundamental for the communication of information and for the rights to receive and impart information, which are aspects of freedom of expression. What the decision does is restore the pre-digital concepts of partial and practical obscurity. The right to be forgotten can only be countered by the ability to be remembered, and no less a person than Sir Edward Coke in 1600 described memory as “slippery”. One’s recollection of a person or an event may modify over a period of time. The particular details of an event congeal into a generalised recollection. Often the absence of detail will result in a misinterpretation of the event.

Perhaps the most gloomy observation about the decision is its potential to emasculate the promise of the Internet and one of its greatest strengths – the searchability of information – based upon privacy premises that were developed in the pre-Internet age, when privacy concerns involved the spectre of totalitarian state mass data collection on every citizen. In many respects the Internet presents a different scenario, involving the gathering and availability of data frequently provided by the “data subject”, and the properties and qualities of digital technologies have remoulded our approaches to information and our expectations of it. The values underpinning pre-digital privacy expectations have undergone something of a shift in the “Information Age”, although there are occasional outraged outbursts at incidents of state-sponsored mass data gathering. One wonders whether the ECJ is tenaciously hanging on to pre-digital paradigm data principles, taking us back to a pre-digital model of practical and partial obscurity in the hope that it will prevail for the future. Or perhaps in the new Information Age we need to think again about the nature of privacy in light of the underlying qualities and properties of the Digital Paradigm.

 

[1] The term “practical obscurity” was used in US Department of Justice v Reporters Committee for Freedom of the Press 489 US 749 (1989).

[2] Peter A. Winn, Online Court Records: Balancing Judicial Accountability and Privacy in an Age of Electronic Information, (2004) 79 Wash. L. Rev. 307, 315.

[3] Oxford English Dictionary

Towards an Internet Bill of Rights

 

Tim Berners-Lee, in an article in the Guardian of 12 March 2014, building on his comment, reported in the Guardian of 26 June 2013, that the Internet should be safeguarded from being controlled by governments or large corporations, claimed that an online “Magna Carta” is needed to protect and enshrine the independence of the internet. His argument is that the internet has come under increasing attack from governments and corporate influence. Although no examples were cited, this has been a developing trend. The comments by Nicolas Sarkozy at the G8 meetings in 2011 and the unsuccessful attempts by Russia, China and other nations, via the ITU at the 2012 World Conference on International Telecommunications, to establish wider governance and control of the internet from a national government point of view provide examples. Sarkozy’s comments were rejected by British Prime Minister David Cameron and the then United States Secretary of State, Hillary Clinton. More recently, on 29 April 2014, Russia’s Parliament approved a package of sweeping restrictions on the Internet and blogging. Clearly there is an appetite for greater control by governments of the internet and, in the opinion of Berners-Lee, this must be resisted. He considers that what is needed is a global constitution or a Bill of Rights. He suggests that people generate a digital Bill of Rights for each country – a statement of principles that he hopes will be supported by public institutions, government officials and corporations. I should perhaps observe that what is probably intended is an Internet Bill of Rights rather than a Digital one. I say this because it could well be difficult to apply some concepts to all digital technologies, some of which have little to do with the Internet.

The important point that Berners-Lee makes is that there must be a neutral internet and that there must be certainty that it will remain so. Without an open or neutral internet there can be no open government, no good democracy, no good healthcare, no connected communities and no diversity of culture.  By the same token Berners-Lee is of the view that net neutrality is not just going to happen. It requires positive action.

But it is not only direct governmental control of the Internet that concerns Berners-Lee. An example of indirect government interference with the Internet, and of challenges to the utilisation of the new communications technology by individuals, is provided by the activities of the NSA and the GCHQ as revealed by the Snowden disclosures. There have been attempts to undermine encryption and to circumvent security tools, attempts which challenge individual liberty to communicate frankly and openly and without State surveillance.

What Would An On-Line “Magna Carta” Address?

According to Berners-Lee, among the issues that would need to be addressed by an online “Magna Carta” would be those of privacy, free speech and responsible anonymity together with the impact of copyright laws and cultural-societal issues around the ethics of technology.  He freely acknowledges that regional regulation and cultural sensitivities would vary.  “Western democracy” after all is exactly that and its tenets, whilst laudable to its proponents, may not have universal appeal.

What is really required is a shared document of principle that could provide an international standard not so much for the values of Western democracy but for the values and importance that underlie an open Internet.

One of the things that Berners-Lee is keen to see changed is the connection between the US Department of Commerce and the internet addressing system – the IANA contract which controls the database of all domain names. Berners-Lee’s view was that the removal of this link, if one will forgive the pun, was long overdue and that the United States government could not have a place in running something which is non-national. He observed that there was momentum towards that uncoupling, but that there should be a continued multi-stakeholder approach, one where governments and corporates are kept at arm’s length. As it happened, within a week or so of Berners-Lee’s expression of these opinions the United States government advised that it was going to de-couple its involvement with the addressing system.

Another of Berners-Lee’s concerns was the “balkanisation” of the internet, whereby countries or organisations would carve up digital space to work under their own rules, be it for censorship, regulation or commerce. Following the Snowden revelations there were indeed discussions along this line, with various countries suggesting a separate national “internet” to avoid US intrusion into the communications of their citizens. This division of a global communications infrastructure into one based upon national boundaries is anathema to the concept of an open internet and quite contrary to the views expressed by Mr Berners-Lee.

Is This New?

The idea of some form of Charter or principles that limit or define the extent of potential governmental interference in the Internet is not new. Perhaps what is remarkable is that Berners-Lee, who has been apolitical and concerned primarily with engineering issues surrounding the Internet and the World Wide Web, has since 2013 spoken out on concerns regarding the future of the Internet and fundamental governance issues.

Governing the internet is a challenging undertaking. It is a decentralised, global environment, so governance mechanisms must account for many varied legal jurisdictions and national contexts. It is an environment which is evolving rapidly – legislation cannot keep pace with technological advances, and risks undermining future innovation. And it is shaped by the actions of many different stakeholders including governments, the private sector and civil society.

These qualities mean that the internet is not well suited to traditional forms of governance such as national and international law. Some charters and declarations have emerged as an alternative, providing the basis for self-regulation or co-regulation and helping to guide the actions of different stakeholders in a more flexible, bottom-up manner. In this sense, charters and principles operate as a form of soft law: standards that are not legally binding but which carry normative and moral weight.

Dixie Hawtin in her article “Internet Charters and Principles: Trends and Insights” summarises some of the steps that have been taken:

“Civil society charters and declarations

John Perry Barlow’s 1996 Declaration of Cyberspace Independence is one of the earliest and most famous examples. Barlow sought to articulate his vision of the internet as a space that is fundamentally different to the offline world, in which governments have no jurisdiction. Since then civil society has tended to focus on charters which apply human rights standards to the internet, and which define policy principles that are seen as essential to fulfilling human rights in the digital environment. Some take a holistic approach, such as the Association for Progressive Communications’ Internet Rights Charter (2006) and the Internet Rights and Principles Coalition’s (IRP) Charter of Human Rights and Principles for the Internet (2010). Others are aimed at distinct issues within the broader field, for instance, the Electronic Frontier Foundation’s Bill of Privacy Rights for Social Networks (2010), the Charter for Innovation, Creativity and Access to Knowledge (2009), and the Madrid Privacy Declaration (2009).

Initiatives targeted at the private sector

The private sector has a central role in the internet environment through providing hardware, software, applications and services. However, businesses are not bound by the same confines as governments (including international law and electorates), and governments are limited in their abilities to regulate businesses due to the reasons outlined above. A growing number of principles seek to influence private sector activities. The primary example is the Global Network Initiative, a multi-stakeholder group of businesses, civil society and academia which has negotiated principles that member businesses have committed themselves to follow to protect and promote freedom of expression and privacy. Some initiatives are developed predominantly by the private sector (such as the Aspen Institute International Digital Economy Accords which are currently being negotiated); others are a result of co-regulatory efforts with governments and intergovernmental organisations. The Council of Europe, for instance, has developed guidelines in partnership with the online search and social networking sectors. This is part of a much wider trend of initiatives seeking to hold companies to account to human rights standards in response to the challenges of a globalised world where the power of the largest companies can eclipse that of national governments. Examples of the wider trend include the United Nations Global Compact, and the Special Rapporteur on human rights and transnational corporations’ Protect, Respect and Remedy Framework.

 Intergovernmental organisation principles

There are many examples of principles and declarations issued by intergovernmental organisations, but in the past year a particularly noticeable trend has been the emergence of overarching sets of principles. The Organisation for Economic Co-operation and Development (OECD) released a Communiqué on Principles for Internet Policy Making in June 2011. The principles seek to provide a reference point for all stakeholders involved in internet policy formation. The Council of Europe has created a set of Internet Governance Principles which are due to be passed in September 2011. The document contains ten principles (including human rights, multi-stakeholder governance, network neutrality and cultural and linguistic diversity) which member states should uphold when developing national and international internet policies.

National level principles

At the national level too, some governments have turned to policy principles as an internet governance tool. Brazil has taken the lead in this area through its multi-stakeholder Internet Steering Committee, which has developed the Principles for the Governance and Use of the Internet – a set of ten principles including freedom of expression, privacy and respect for human rights. Another example is Norway’s Guidelines for Internet Neutrality (2009) which were developed by the Norwegian Post and Telecommunications Authority in collaboration with other actors such as internet service providers (ISPs) and consumer protection agencies.”

 

A Starting Point – Initial Thoughts

So what would be a starting point for the development of an internet or digital bill of rights?

Traditionally the “Bill of Rights” concept has been to act as a buffer between overweening government power on the one hand and individual liberties on the other. The first attempt at a form of Bill of Rights came at the end of the English Revolution (1642–1689) and imposed limits upon the Sovereign’s power.

The Age of Enlightenment and much of the philosophical thinking that took place in the late 17th and early 18th centuries resulted in statements or declarations of rights: by the American colonies in the Declaration of Independence; by the United States in Amendments 1–10 to the Constitution (referred to as the Bill of Rights); and, following the French Revolution, in the 1789 Declaration of the Rights of Man and the Citizen.

An essential characteristic of these statements was to define and restrict the interference of the State in the affairs of individuals and guarantee certain freedoms and liberties.  It seems to me that an Internet Bill of Rights would set out and define individual expectations of liberty and non-interference on the part of the State within the context of the communications media made available by the Internet.

But the function of charters has developed since the Age of Enlightenment, especially with the development of global and transnational institutions. Hawtin notes that:

“Civil society uses charters and principles to raise awareness about the importance of protecting freedom of expression and association online through policy and practice. The process of drafting these texts provides a valuable platform for dialogue and networking. For example, the IRP’s Charter of Human Rights and Principles for the Internet has been authored collaboratively by a wide range of individuals and organisations from different fields of expertise and regions of the world. The Charter acts as an important space, fostering dialogue about how human rights apply to the internet and forging new connections between people.

Building consensus around demands and articulating these in inspirational charters provide civil society with common positions and tools with which to push for change. This is demonstrated by the number of widely supported civil society statements which refer to existing charters issued over the past year. The Civil Society Statement to the eG8 and G8, which was signed by 36 different civil society groups from across the world, emphasises both the IRP’s 10 Internet Rights and Principles (derived from its Charter of Human Rights and Principles for the Internet) and the Declaration of the Assembly on the Right to Communication. The Internet Rights are Human Rights statement submitted to the Human Rights Council was signed by more than 40 individuals and organisations and reiterates APC’s Internet Rights Charter and the IRP’s 10 Internet Rights and Principles.

As charters and principles are used and reiterated, so their standing as shared norms increases. When charters and statements are open to endorsement by different organisations and individuals from around the world, this helps to give them legitimacy and demonstrate to policy makers that there is a wide community of people who are demanding change.

While the continuance of practices which are detrimental to internet freedom indicates that these initiatives have not, so far, been entirely successful, there are signs of improvements. Groups like APC and the IRP have successfully pushed human rights up the agenda in the Internet Governance Forum. Other groups are hoping to emulate these efforts to increase awareness about human rights in other forums. The At-Large Advisory Committee, for instance, is in the beginning stages of creating a charter of rights for use within the Internet Corporation for Assigned Names and Numbers (ICANN).”

Part of the problem with the “Charter Approach” is that there may be a proliferation of such instruments or proposals that may have the effect of diluting the moves for a universal approach. On the other hand, charters or statements of principle of a high quality with an acceptance that lends legitimacy may be more likely to attract adoption and advocacy by a growing majority of stakeholders. Some charters may be applicable to local circumstances. Those with a specific international orientation will attract a different audience and advocacy approach. As I understand it Berners-Lee is suggesting a combination of the two – an international statement of principle incorporated into local law recognising differences in cultural and customary norms. In some respects his approach seems to have an air of the EU approach whereby an EU requirement is adopted into local law – often with a shift in emphasis that takes into account local conditions.

However, what must be remembered is the difficulty with power imbalances where economically and politically powerful groups may drive a local (or even international) process. What is required is a meaningful multi-stakeholder approach that recognises equality of arms and influence. Hawtin also observes that with the proliferation of charters and principles, governments and corporates may “cherry pick” those standards which accord with their own interests. Voluntary standards have difficulties with engagement and enforcement.

A Starting Point – A Possible Framework

Because the Internet is primarily a means of communication of information – it’s not referred to as ICT or Information and Communication Technology for nothing – what is being proposed is an extension or redefinition of the rights of freedom of expression guaranteed in national and international instruments such as the First Amendment to the United States Constitution, section 14 of the New Zealand Bill of Rights Act 1990, section 2 of the Canadian Charter of Rights and Freedoms and Article 19 of the Universal Declaration of Human Rights, to mention but a few. Thus an Internet Bill of Rights would have to be crafted as guaranteeing aspects or details of the freedom of expression, although the freedom of expression right also has attached to it other collateral rights such as the right to education, the right to freedom of association (in the sense of communicating with those with whom one is associated), the right to full participation in social, cultural and political life and the right to social and economic development. Perhaps a proper focus for attention should be upon the Internet as a means of facilitating the freedom of expression right.

This approach was the subject of the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank LaRue, to the General Assembly of the United Nations, in August 2011.

In that Report he made the following observations:

14. The Special Rapporteur reiterates that the framework of international human rights law, in particular the provisions relating to the right to freedom of expression, continues to remain relevant and applicable to the Internet. Indeed, by explicitly providing that everyone has the right to freedom of expression through any media of choice, regardless of frontiers, articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights were drafted with the foresight to include and accommodate future technological developments through which individuals may exercise this right.

 15. Hence, the types of information or expression that may be restricted under international human rights law in relation to offline content also apply to online content. Similarly, any restriction applied to the right to freedom of expression exercised through the Internet must also comply with international human rights law, including the following three-part, cumulative criteria:

(a) Any restriction must be provided by law, which must be formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and must be made accessible to the public;

(b) Any restriction must pursue one of the legitimate grounds for restriction set out in article 19, paragraph 3, of the International Covenant, namely (i) respect of the rights or reputation of others; or (ii) the protection of national security or of public order, or of public health or morals;

 (c) Any restriction must be proven as necessary and proportionate, or the least restrictive means to achieve one of the specified goals listed above.

The issue of the potential human right of access to the Internet was covered in this way:

61. Although access to the Internet is not yet a human right as such, the Special Rapporteur would like to reiterate that States have a positive obligation to promote or to facilitate the enjoyment of the right to freedom of expression and the means necessary to exercise this right, which includes the Internet. Moreover, access to the Internet is not only essential to enjoy the right to freedom of expression, but also other rights, such as the right to education, the right to freedom of association and assembly, the right to full participation in social, cultural and political life and the right to social and economic development.

 62. Recently, the Human Rights Committee, in its general comment No. 34 on the right to freedom of opinion and expression, also underscored that States parties should take all necessary steps to foster the independence of new media, such as the Internet, and to ensure access of all individuals thereto.

 63. Indeed, given that the Internet has become an indispensable tool for full participation in political, cultural, social and economic life, States should adopt effective and concrete policies and strategies, developed in consultation with individuals from all segments of society, including the private sector as well as relevant Government ministries, to make the Internet widely available, accessible and affordable to all.

In locating an Internet Bill of Rights within the concept of the freedom of expression, one must be careful to ensure that by defining subsets of the freedom of expression right, one does not impose limitations that may impinge upon the collateral rights identified by Mr. LaRue.

Having made that observation, it is important to recall that an Internet Bill of Rights could guarantee the independence and neutrality of the means of communication – the Internet – and prohibit heavy-handed secretive surveillance and intrusive interference with that means of communication.  Whilst it is acknowledged that there is a need for meaningful laws to protect the security of citizens both individually and as a group – and Mr LaRue recognises justified limitations on the freedom of expression in areas such as child pornography, direct and public incitement to commit genocide, advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, and incitement to terrorism – such laws cannot be intrusive into areas such as privacy or private activity and communication.

One of the problems about regulating the Internet, or indeed preventing the regulation of the internet, is to understand how it is used by end users.  In the United States, Representative Issa (R-CA) and Senator Wyden (D-OR) developed an idea for a Digital Bill of Rights based upon ten principles:

  1. Freedom – The right to a free and uncensored Internet.
  2. Openness – The right to an open, unobstructed Internet.
  3. Equality – The right to equality on the Internet.
  4. Participation – The right to gather and participate in online activities.
  5. Creativity – The right to create and collaborate on the Internet.
  6. Sharing – The right to freely share their ideas.
  7. Access – The right to access the Internet equally, regardless of who they are or where they are.
  8. Association – The right to freely associate on the Internet.
  9. Privacy – The right to privacy on the Internet.
  10. Property – The right to benefit from what they create.

 

The Issa/Wyden categories are helpful in some respects, again as a starting point. One of the most significant things lies not so much in their categorisation as in their observation that the way the Internet is used within the wider context of communication and social activity must be understood.

Many of the Issa/Wyden principles are in fact subsets of the right to free expression.  Within the right to free expression there is a right not only to the means of expressing an opinion – described in s. 14 of the New Zealand Bill of Rights Act as the right to impart information – but also the right to receive it.

The wording of the concept of “participation” in the Issa/Wyden proposal is important and in some respects reflects the LaRue concept of association within the Internet space. One must be careful, as Issa and Wyden have been, to ensure that the concepts remain applicable to the Internet space as a means of communication.

Expressions in favour of an Internet Bill of Rights have been put forward on the basis that the digital economy requires a reliable set of laws and procedures whereby individuals and corporations may do business and promote innovation.  It is suggested that an Internet Bill of Rights could well establish a nation that enacted and guaranteed such rights as an innovative place within the digital environment – one which would guarantee citizens privacy and promote a digital economy. It may support a vision for a country as a data haven where people and businesses can have confidence that they have sovereignty over, and unfettered ownership of, their data and that it will be protected.

Stability and certainty, particularly within the commercial environment, are necessary prerequisites for flourishing commercial activity.  I wonder, however, whether or not the concept of an Internet Bill of Rights fits comfortably within the “nation state” model of a secure, predictable and certain place where people can do business.

The Internet Bill of Rights ideally would guarantee certain national minimum standards for Internet activity that could be mirrored worldwide.   Examples of digital paradigm legislation which attempts to harmonise principles transnationally may be found in New Zealand in the Electronic Transactions Act, which has its genesis in international Conventions, and in the Unsolicited Electronic Messages Act, where the principles applied in similar legislation in Australia favour a particular opt-in model for the continued receipt of commercial electronic messages. Legislation in the United States (the CAN-SPAM Act) favours an opt-out approach based upon constitutional imperatives surrounding the First Amendment. Differing approaches to spam control based on local legal or cultural imperatives provide a good example of the difficulty in achieving international harmonisation of national laws.

It was suggested by Issa and Wyden that it was necessary for there to be an understanding of the Internet and how it is used. I suggest that in considering an Internet Bill of Rights the enquiry must go further.  Not only must there be an understanding of how the Internet is used but also of how it works, and essentially this involves a recognition of the paradigmatic differences between models of communications media and styles that existed before the Digital Age, and an understanding of the way in which the qualities, properties or, as one writer has put it, the affordances of digital technologies work.

One of the present qualities of digital technologies, and particularly of the internet, is that of “permissionless innovation” – the ability to “bolt on” to the Internet backbone an application without seeking permission from any supervising or regulatory entity. This concept is reflected in items 2, 5, 6 and 10 of the Issa/Wyden list of rights.  Permissionless innovation is inherent within digital technologies only because it is the existing default position, and one which could well change depending upon the level of government interference.  Thus if one were to maintain the integrity of net neutrality and the importance of innovation, the concept of permissionless innovation would have to be endorsed and protected.

A further matter to be considered is the way in which these various characteristics, affordances, properties or qualities impact upon human behaviour and upon expectations of information.  Our current expectations relating to information, its use, availability, dynamic quality, accessibility and searchability all impact upon our behaviours and responses within the context of the act of communication.  “Information now” – an expectation of an immediate reply, an expectation of immediate access 24/7 – has developed as the result of the inherent and underlying properties of digital communication systems enabled by the Internet, email, instant messaging, internet telephony, Skype, mobile phone technology or otherwise.

The problem with the Issa/Wyden proposal is that it is cast within the very wide framework of guarantees for individual liberties. In this respect it reflects traditional “rights” instruments as being a definition of the boundaries between the individual and the State. In addressing the Internet – a medium of communication – there are some difficulties in this approach.  Of the items that they identify, openness, freedom and access are those that might be the focus of attention of an Internet Bill of Rights. The other aspects deal with issues that inhabit the content layer, yet the technological layers are the ones that are really the subject of potential threat from the State. The objective is summed up by InternetNZ, who seek an open and uncapturable Internet. This objective recognises the medium rather than the message that it conveys. But by the same token, the medium is critical as a means of fostering the guarantee of freedom of expression.

Moving Forward

It seems to me that the proper focus of an Internet Bill of Rights is that of the technology that is the Internet. Berners-Lee recognises this when he refers to “net neutrality”, which is a term that is capable of a number of meanings. What must be guaranteed and recognised by States is that the means of communication must be left alone and should not be the subject of interference by domestic legal processes. An open and uncapturable Internet cannot be compromised by local rules governing technical standards which have world wide application. It is perhaps this global aspect that confounds a traditional approach to Internet regulation: although it is possible for there to be local rules that interfere with Internet functionality, there should not be, given that such rules may impact upon the wider use of the Internet. Local interference with engineering or technical standards may have downstream implications for overall Internet use by those who are not subject to those local rules.

Recent efforts by the ITU to establish some form of regulatory or governance structure allowing government restriction or blocking of information disseminated via the internet, and to create a global regime of monitoring internet communications – including the demand that those who send and receive information identify themselves – would have wide-ranging implications for Internet use. The proposals would also have allowed governments to shut down the internet if there is the belief that it may interfere in the internal affairs of other states or that information of a sensitive nature might be shared.  Although some of the proposals suggested less US control over the Internet – and the disengagement of the US Department of Commerce from involvement with ICANN is forthcoming – it is nevertheless of concern that wider interference with Internet traffic should be seriously proposed under the umbrella of an agency whose brief is essentially directed towards the efficient functioning of communications networks, rather than obstructing them.

That there is such an appetite for regulation and control present at an international forum is a matter of concern and probably underscores an increased urgency for a rights-based solution to be put in place.

There are two main areas where the Bill of Rights for the Internet could be explored. One is through the Internet Society operating as an umbrella for those that make up the Internet Ecosystem including:

Technologists, engineers, architects, creatives, organizations such as the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) who help coordinate and implement open standards.

Global and local organizations that manage resources for global addressing capabilities such as the Internet Corporation for Assigned Names and Numbers (ICANN), including its operation of the Internet Assigned Numbers Authority (IANA) function, Regional Internet Registries (RIR), and Domain Name Registries and Registrars.

Operators, engineers, and vendors that provide network infrastructure services such as Domain Name Service (DNS) providers, network operators, and Internet Exchange Points (IXPs).

The other is the Internet Governance Forum, whose mission to “identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations” ideally encompasses discussions and recommendations around an Internet Bill of Rights. It seems to me that the development of a means by which the technical infrastructure of the Internet and the standards that underlie it – which have been in the hands of the IETF and the W3C – remain open, free and uncapturable should have some priority.

These are organisations that could properly address issues of how to maintain the neutrality and integrity of the engineering and technical aspects of the Internet – to identify and articulate, from a principled position, the technical aspects of the Internet that require protection by a statement of rights (which would be a non-interference approach), coupled with the definition of the technological means that can be employed to ensure the protection of those rights.

The objection to such a proposal would be that all power would rest with the engineers, but given that the principal objective of an engineer is to make things work, that can hardly be a bad thing. Maintaining a system in good working order would be preferable to arbitrary and capricious interference with the mechanics of communication by politicians or organs of the State.

This is a project that will have to be developed carefully and analytically to ensure that what we have now continues and is not subverted or damaged, and that the potential it may have for humanity in the future as a means of relating to one another is not compromised. It seems to me that protection of the technology is the means by which Berners-Lee’s goal of net neutrality may be maintained.

 

David Harvey

12 May 2014

E-Discovery and Asia Legal Big Data

I had the privilege of being invited to take part in the Asia Legal Big Data Symposium held at the Conrad Hotel in Hong Kong on 29 – 30 April 2014, and to share a place on a panel which included Registrar Lung Kim Wan of the Hong Kong High Court, Senior Assistant Registrar Yeong Zee Kin from the Singapore Supreme Court and Stephen Yu from Alix Partners. The focus of the Conference was upon the imminent release of a Practice Direction for the Hong Kong Courts addressing E-Discovery. Although the present Hong Kong Rules are sufficiently wide to deal with E-Discovery in a broad sense, a more focussed approach is proposed.

The panel in which I participated dealt with existing rules and how they work in Singapore and New Zealand, and how the general shape of the Hong Kong direction may appear. Stephen Yu was able to bring valuable technical knowledge into the mix in considering some of the tools and technological solutions that may be utilised in the E-Discovery process.

The Symposium itself was an abundance of riches and, as is so often the case, there were times when a difficult choice had to be made between which session to attend. Some of the sessions on data and information management within organisations were very interesting and helpful, emphasising the importance of how proper information management systems and policies can be helpful when a litigation hold is notified. Of particular interest was the way in which such policies may be used to resist spoliation allegations. A proper, principled information management policy may offer a reasonable explanation for why data is not immediately available or why it is no longer in existence.

It was also a pleasure to meet again Chris Dale from the E-Disclosure Information Project. I first met Chris in Singapore at a Conference a couple of years ago and we have kept in touch. Indeed I owe a debt to Chris for it was he who recommended my participation to the Conference organisers. Chris, as always, played a valuable part in the Hong Kong Conference, sharing his experiences and insights in the E-Discovery field, and was often able to point out some of the shortcomings in the way in which E-Discovery Rules are working. One observation that he made was in the context of E-Discovery as a process.

The process often starts before litigation actually begins – when in fact it is contemplated. Parties should start considering their E-Discovery obligations at this time. The various stages of the process (reflected in the EDRM diagram – EDRM means Electronic Discovery Reference Model) continue through to the presentation of documents at Court. I think I should point out that I do not consider the E-Discovery process to be of the “tick the boxes” type, nor one which involves a slavish adherence to a set step-by-step approach. In my view the process is in the form of a journey which carries on throughout the life of the litigation and which involves a number of steps or stages, together with an on-going obligation on the part of counsel to meet, confer and co-operate and a requirement by the Court, by way of Case Management Conferences, to ensure that discovery is reasonable and proportionate. The Court can keep a steady guiding hand on the wheel as the parties continue on the E-Discovery journey. Chris’s criticism of “E-Discovery as a process” was in the context of a slavish adherence to a step by step “plan” and I agree with that. But my view is that a process may have within it a certain flexibility. For example, a staged approach to electronic review may mean that different options become apparent as the review continues, allowing for modifications along the way.

A copy of my paper delivered to the Symposium may be found below, along with a copy of my presentation.

 

 

Presentation

Linking and the Law – Part 3


 

8.            Linking and Publication – Ramifications for Defamation

There have been a number of cases that address the issue of whether or not posting a link can amount to publication for the purposes of defamation. It is not surprising that there is some divergence of opinion between Courts, and in essence the conclusion can be summed up with the phrase “it depends”.

A number of recent cases have involved Google. The important thing to remember is that not all the cases involving Google involve linking. One of the important English cases (Tamiz v Google[62]) deals with comments placed on a blog hosted by Google, and whether Google is a publisher for the purposes of defamation. The Australian cases of Trkulja v Google[63] and Trkulja v Yahoo[64] involve the return of search results (particularly involving illustrations, the juxtaposition of which resulted in defamation by innuendo). Google’s general position is that it is a content neutral provider. Although this may be correct from a technological point of view, the matter becomes a little more complex when the way in which search results are displayed depends upon search algorithms developed by Google programmers and used in the delivery of search results. Whether snippets – the brief record of the contents of the relevant web-page – can be defamatory was the issue in Metropolitan International Schools Ltd. v. Designtechnica Corpn.,[65] which held that despite the presence of this brief information, Google was not a publisher, although Abbot JA in A v Google, in dismissing an application by Google for summary judgment striking out a claim for defamation, opined that it was arguable that Google was a publisher. He left open the possibility “to hold that a search engine is a publisher but with access to the defence of innocent dissemination”.  Part of that determination may involve asking whether the automatic search result process contains a “stamp of human intervention”.

Curiously enough none of the cases referred to above carry any discussion of liability for simply providing a link to defamatory material. In all the cases there has been a deeper issue about services that are provided by Google, or the manner in which search results are displayed. However, the case of Crookes v Newton,[66] a decision of the Supreme Court of Canada provides authority at the highest level for the treatment of links in defamation proceedings. Indeed, the finding of the Court on links could well provide guidance in other areas of law. Before embarking upon that discussion there is an early New Zealand case that requires consideration.

The case of International Telephone Link Pty Ltd v IDG Communications Ltd[67]  involved an application to strike out a claim for defamation regarding references in an article to a website which was created by a third party.

The article in question contained a summary of a number of allegations against the plaintiff that had been made by a Mr Leng on a website that he had created “as a warning to others”. At the end of the article was the URL for Mr Leng’s website.[68] The plaintiff claimed that the defendant republished the website publication by making reference to it. It should be emphasised that the article contained a report or summary of the allegations that the plaintiff considered defamatory.

Counsel for the defendant argued:

1. That the defendants cannot be regarded as having communicated the contents of the Website to anyone; a person does not communicate words nor convey their meaning by identifying where they can be found.

2. The defendants did not cause nor participate in the creation of the Website and are therefore not parties to the publication inherent in that creation.

3. That the references to the website were to the entirety of the site, although only portions thereof were defamatory.

Master Kennedy-Grant swiftly rejected the third submission, observing that few documents are defamatory in their entirety; it therefore did not matter that the Website was not defamatory in its entirety [page 5(2)]. The second submission was deemed to be irrelevant. He then went on to identify the crucial issue as whether it was arguable that the references to the website in the article were sufficient communication of the defamatory contents of the website to constitute publication of those contents, and referred to a number of cases from the late nineteenth and early twentieth centuries.

The cases relied upon by the Judge involved circumstances where attention was drawn to the existence of defamatory content. In Hird v Wood[69] a person who sat near a placard which allegedly defamed the plaintiffs and pointed to it was held to have published the contents of the placard, even though it was not shown that he had written or been a party to the writing of the offending words. In Lawrence v Newberry[70] a letter which was published in a newspaper referred readers of the letter to a speech that contained defamatory content. In Hird v Wood, there was an immediacy about what happened. In Lawrence v Newberry a curious person would have to obtain a copy of the text to the speech. Master Kennedy-Grant observed:

There does not seem to me any difference in principle between what had to happen in Lawrence v Newberry for the publishee to receive the information intended to be conveyed and what had to happen in this case. A hundred years ago the reader in question would have picked up a back number of The Times or gone to the reading room of the local library; today he or she would log onto the Net and access the Website.

He referred to other cases where there was held to be “publication by reference” and concluded:

(a) the authorities referred to by counsel favour the view that, even where there is a lack of immediacy between the reference and the possibility of reading the material referred to, there can be publication; and

(b) the question of whether there has been adoption or approval or repetition of the material referred to is essentially a question of fact and, as such, fit for determination at trial.

Significantly there was no discussion of the nature of a link as a means of referencing potentially defamatory content and the thrust of the rationale was based upon publication by reference. The Judge was unconcerned about the nature of the reference or how it was provided. All that was needed was for attention to be drawn to the defamatory content referred to.

One point of distinction in this case was the fact that there had been a summary of the defamatory content in the body of the article which appeared in Computerworld magazine. The decision does not make it clear whether the article appeared in an on-line version of the magazine. But the provision of the information provides a context for the provision of the link to a website where the actual details of the defamatory content may be found. It is not as though the link existed in isolation; rather, it provided a reference point for further information.

8.1          Crookes v Newton[71]

A similar situation arose in the case of Crookes v Newton.

It is settled law that to succeed in a defamation action, a plaintiff must first prove that defamatory words were published. Crookes v Newton holds that a hyperlink, by itself, is not publication of the content to which it refers. Publication will only occur if the hyperlink is presented in a way that repeats the defamatory content.

The facts in that case were these.

The appellant Crookes brought numerous defamation actions against various individuals and organizations alleging that he had been defamed in several articles on the internet. After those actions were commenced, the respondent Newton posted an article on his website which commented on the implications of the plaintiff’s defamation suits for operators of internet forums. The respondent’s article included hyperlinks to websites containing some of the allegedly defamatory articles that were the subject of the plaintiff’s actions. However, the respondent’s article did not reproduce or comment on the content in those articles.

The appellant discovered the respondent’s article and advised him to remove the hyperlinks. When the respondent refused, the appellant brought an action seeking damages for defamation on the basis that the hyperlinks constituted publication of the allegedly defamatory articles. There was evidence that the respondent’s article had been viewed 1,788 times, but no evidence as to how many times, if any, the hyperlinks in the article had been followed.

The Court[72] began by considering the development of the publication rule. The scope of the rule was wide indeed. In one case a person whose role was to manually operate a printing press was found liable for defamatory words contained in the publication, despite being unaware of its contents. The rigour of the publication rule was ameliorated by the “innocent dissemination” defence, allowing booksellers and librarians to avoid liability if they had no actual knowledge of alleged libel, were not aware of circumstances that would give cause to suspect a libel, and were not negligent in failing to discover the libel.

The majority then went on to consider the nature of hypertext links as a means of publication and returned to first principles: “To prove the publication element of defamation, a plaintiff must establish that the defendant has, by any act, conveyed defamatory meaning to a single third party who has received it.  Traditionally, the form the defendant’s act takes and the manner in which it assists in causing the defamatory content to reach the third party are irrelevant.  Applying this traditional rule to hyperlinks, however, would have the effect of creating a presumption of liability for all hyperlinkers. This would seriously restrict the flow of information on the Internet and, as a result, freedom of expression.”

The functionality of hyperlinks as a form of “publication” was then considered:

Hyperlinks are, in essence, references, which are fundamentally different from other acts of “publication”.  Hyperlinks and references both communicate that something exists, but do not, by themselves, communicate its content.  They both require some act on the part of a third party before he or she gains access to the content.  The fact that access to that content is far easier with hyperlinks than with footnotes does not change the reality that a hyperlink, by itself, is content-neutral.  Furthermore, inserting a hyperlink into a text gives the author no control over the content in the secondary article to which he or she has linked.

A hyperlink, by itself, should never be seen as “publication” of the content to which it refers.  When a person follows a hyperlink to a secondary source that contains defamatory words, the actual creator or poster of the defamatory words in the secondary material is the person who is publishing the libel. Only when a hyperlinker presents content from the hyperlinked material in a way that actually repeats the defamatory content, should that content be considered to be “published” by the hyperlinker.

Thus the Court is saying that hyperlinks are essentially content neutral references to material that hyperlinkers

a) have not created and

b) do not control.
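The majority’s distinction can be illustrated with a short, hypothetical HTML fragment (the URLs and text below are invented for illustration). A bare link is nothing more than an address; difficulty arises only when the text around the link repeats the sting of the material it points to:

    <!-- A bare hyperlink: a content-neutral reference. The markup contains
         only the address of the material, none of its content. On the
         majority's approach this is not "publication". -->
    <a href="https://example.com/contested-article">the article in question</a>

    <!-- A link whose surrounding text repeats the allegedly defamatory
         content. Here the hyperlinker has expressed the defamatory meaning
         and may be treated as having published it. -->
    <p>[the allegation repeated here] – see
    <a href="https://example.com/contested-article">the full story</a>.</p>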

Although a hyperlink communicates that information exists and may facilitate the transfer of information, it does not, by itself, communicate information. But the Court went on to consider a significantly wider issue – that of the internet itself.

The Internet cannot, in short, provide access to information without hyperlinks.  Limiting their usefulness by subjecting them to the traditional publication rule would have the effect of seriously restricting the flow of information and, as a result, freedom of expression.  The potential “chill” in how the Internet functions could be devastating, since primary article authors would unlikely want to risk liability for linking to another article over whose changeable content they have no control.  Given the core significance of the role of hyperlinking to the Internet, we risk impairing its whole functioning.  Strict application of the publication rule in these circumstances would be like trying to fit a square archaic peg into the hexagonal hole of modernity.[73]

However, this did not mean that there was a blanket defence available for those who hyperlinked to defamatory material. A hyperlink will constitute publication if it “presents content from the hyperlinked material in a way that actually repeats the defamatory content.” This might occur, for example, where a person inserts a hyperlink in text that repeats the defamatory content in the hyperlinked material. In these cases, the hyperlink would be more than a reference; it would be an expression of defamatory meaning.  However, this had not occurred in the present case, and the majority dismissed the appeal.

McLachlin C.J.C. and Fish J., whilst in substantial agreement with the majority, held that “a hyperlink should constitute publication if, read contextually, the text that includes the hyperlink constitutes adoption or endorsement of the specific content it links to.” A hyperlinker should be liable for linked defamatory content if the surrounding context communicates agreement with the linked content. In such cases, the hyperlink “ceases to be a mere reference and the content to which it refers becomes part of the published text itself.”

Deschamps J agreed with the outcome but differed considerably in approach. A blanket exclusion of all references from the scope of the publication rule erroneously treats all references alike. Deschamps J considered that the majority’s approach “disregards the fact that references vary greatly in how they make defamatory information available to readers and, consequently, in the harm they cause to reputations.” She proposed a solution that was nuanced and fact specific. A hyperlink would constitute publication if the plaintiff established two elements:

a) that the defendant “performed a deliberate act that made defamatory material readily available to a third party in a comprehensible form,” and

b) that “a third party received and understood the defamatory material.”

As to the first element, the burden would be upon the plaintiff to establish that the defendant played more than a passive instrumental role in making the information available. There would need to be reference to numerous factors bearing on the ease with which the referenced information could be accessed.

To establish the second element, plaintiffs would need to adduce direct evidence that a third party had received and understood the defamatory material, or convince the court to draw an inference to that effect based on the totality of the circumstances.

The difficulty with this approach, and with the contextual approach suggested by McLachlin CJ and Fish J, is that it would erode the “bright line” rule proposed by the majority, which provides certainty in this area. Deschamps J’s approach is fact driven, whereas the contextual approach is dependent on the presence of indicia of “adoption or endorsement,” the scope of which is inherently uncertain. If the “bright line” rule provided by the majority were not present, the proposals of the minority could have the effect of inhibiting the use of hyperlinks to contentious material, thus inhibiting the internet as a medium for free expression. This concern possibly encouraged the majority to establish their rule.

A further difficulty that follows from the minority approach is that it would shift the weight of litigation onto defendants in this difficult area. Although this is already the case with defamation as a “strict liability” tort, the effect of the minority approach would be to lower the threshold of proof for a plaintiff.  Internet users would be placed in the position of having to justify their conduct by reaching for the protection of a defence – and defamation proceedings are costly and beyond the resources of most. Although the wide availability of defences for hyperlinkers may, as Deschamps J. suggests, “dissuade overeager litigants from having a chilling effect on hyperlinking,” it would not deter plaintiffs who wish to stifle criticism by issuing gagging writs and intimidating defendants through costly litigation.

However, despite the welcome statement of the “bright line” rule by the majority, the case is not closed on hyperlinking. The Court expressly left open the question of whether the same principles apply to embedded or automatic hyperlinks. These hyperlinks automatically display referenced material with little or no prompting from the reader. They are distinguishable from the user-activated hyperlinks in Crookes, which require users to click on the hyperlink in order to access content.

The Court declined to comment on the legal implications of automatic or embedded hyperlinks.  It seems that they would constitute publication on the majority’s approach, because they make third party content appear as part of the website that the hyperlinker controls. The third party material that is “linked-to” becomes a part of the user’s site and thereby constitutes publication by the user.
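The technical difference the Court left open can be shown in the same way with a hypothetical fragment (again with invented URLs). A user-activated link fetches nothing until the reader clicks it; an embedded or automatic link instructs the browser to retrieve and display the third-party material as soon as the page loads:

    <!-- User-activated: the browser fetches nothing from the linked site
         until the reader clicks. -->
    <a href="https://example.com/third-party-page">see the material</a>

    <!-- Embedded/automatic: the browser requests and displays the
         third-party image as part of this page, with no act by the reader. -->
    <img src="https://example.com/third-party-image.jpg" alt="third-party material">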

8.2       Concluding Thoughts on Crookes v Newton

The cases suggest as a starting point that links can be content neutral, and that their use may not have any legal implications nor attract liability even if the material linked to is contentious. If the link acts as a reference point for “further reading or discussion” its function is clearly neutral.

The situation becomes different if the link goes beyond a reference point and adopts material in the linked-to site, or uses the site as an endorsement for the views expressed. This accords with the view of the majority in Crookes v Newton and was the situation in the case of International Telephone Link Pty Ltd v IDG Communications Ltd where the nature of the material linked-to was coloured by the commentary published by the defendant.

Although Kaplan J seemed to focus more on the “electronic civil disobedience” motive of the defendants in Universal City Studios v Reimerdes and Corley there can be little doubt that the use of links was anything but “content neutral” and was coloured by the commentary and actions of the defendants. It is for this reason that the differentiation between the defendants and an informational organ such as the LA Times becomes clear. In such a case the link has a referencing or “for further information” quality rather than the encouragement of an unlawful act.

Thus, although the “contextual” approach of McLachlin CJ and Fish J was that of a minority, nevertheless it cannot be denied that there is a contextual element that pervades the use of links. Although the decision of the majority provides welcome clarification, it is not absolute. The technologists may see links as mere code that enables internet navigation and is thereby content neutral. The law sees them in a more nuanced way.

9.            The European View – Svensson v Retriever Sverige AB

Linking, in the context of copyright infringement, has come under scrutiny in Europe. Under EU copyright laws, authors have the exclusive right to control the “communication to the public” and “the making available to the public” of their works, whilst performers, producers and others also have the exclusive right to control the “making available to the public” of their works. It is generally an infringement of those rights if others communicate or make available content without permission from rights holders to do so. The general approach is that the rights around the ‘communication to the public’ are said to “cover any such transmission or retransmission of a work to the public by wire or wireless means, including broadcasting” and “should not cover any other acts”.

In the case of Svensson v Retriever Sverige AB[74] the Court of Justice of the European Union (CJEU) was asked to provide a ruling on how that EU law applies in the case of hyperlinks. The issue was whether anyone, other than the holder of copyright in a certain work, who supplies a clickable link to the work on his website, communicates the work to the public. The Swedish court also asked whether the answer to that question changes if access to the content is restricted in some way or by the way the content is displayed after a link is clicked on.

The case began when a Swedish journalist, Svensson, wrote an article which was published by a Swedish newspaper in print and on the paper’s website. An e-commerce company, Retriever Sverige AB, ran a subscription service which gave its customers access to newspaper articles.

Svensson sued Retriever Sverige AB for “equitable remuneration”, alleging that Retriever had made his article available to Retriever subscribers through the search and alert functions on its website. He stated that this came within the copyright-relevant acts of either a communication to the public or the public performance of a work. Whatever the situation, Retriever needed his consent and he had not granted it.

Retriever denied any liability. One of its key arguments was that the linking mechanisms did not constitute copyright-relevant acts, and there was therefore no infringement of copyright law. The fact that a Retriever customer had to log in to Retriever’s website and then fill in a search term was also relevant.

The European Copyright Society[75] has argued in a submission to the Court that the act of hyperlinking to copyright material without permission ought not to constitute outright infringement. In the 18-page submission[76] the Society argues[77]:

“Clearly, hyperlinking involves some sort of act – an intervention. But it is not, for that reason alone, an act of communication. This is because there is no transmission. The act of communication rather is to be understood as equivalent to electronic ‘transmission’ of the work, or placing the work into an electronic network or system from which it can be accessed. This is because hyperlinks do not transmit a work (to which they link); they merely provide the viewer with information as to the location of a page that the user can choose to access or not. There is thus no communication of the work. As Abella J explained, speaking for the majority of the Supreme Court of Canada (in a case concerning hyperlinks and defamation):

‘Communicating something is very different from merely communicating that something exists or where it exists. The former involves dissemination of the content, and suggests control over both the content and whether the content will reach an audience at all, while the latter does not….

Hyperlinks … share the same relationship with the content to which they refer as do references. Both communicate that something exists, but do not, by themselves, communicate its content. And they both require some act on the part of a third party before he or she gains access to the content. The fact that access to that content is far easier with hyperlinks than with footnotes does not change the reality that a hyperlink, by itself, is content-neutral — it expresses no opinion, nor does it have any control over, the content to which it refers.’

The Society provided a very strong and technologically correct statement on the function of a link within the context of transmission and communication of a work.

“(a) Hyperlinks are not communications because establishing a hyperlink does not amount to “transmission” of  a work, and such transmission is a prerequisite for “communication”

(b) Even if transmission is not necessary for there to be a “communication”, the rights of the copyright owner apply only to communication to the public “of the work”, and whatever a hyperlink provides, it is not “of a work””[78]

In addition, the Society argued that the CJEU should hold that hyperlinking does not generally constitute a communication to the public of copyrighted content, regardless of the ‘framing’ given to the content when it appears after a hyperlink has been clicked on.

“In so far as there might be technical differences in some cases where the work is made available from the server of a person providing a hyperlink, it is our view that, even were there an act of communication or making available, such a communication or making available is not “to the public” because it is not to a “new” public – it is a public which already had the possibility of access to the material from the web. Just as an improved search-engine that improves the ability of users to locate material for which they are searching should not be required to obtain permission as a matter of copyright law, so providing links or access to material already publicly available should not be regarded as an act that requires any authorisation.”[79]

Reference was also made to domestic court cases from Germany and Norway which are consistent with the views advanced by the European Copyright Society.

In Paperboy,[80] the German Bundesgerichtshof found that the “paperboy search engine”, which searched newspaper websites and provided search results including hyperlinks, did not thereby infringe. The Court considered whether hyperlinking was “communication” under German law and under Article 3 of the Information Society Directive, 2001/29, concluding that there was no infringement. It observed:

[42] A person who sets a hyperlink to a website with a work protected under copyright law which has been made available to the public by the copyright owner, does not commit an act of exploitation under copyright law by doing so but only refers to the work in a manner which facilitates the access already provided …. He neither keeps the protected work on demand, nor does he transmit it himself following the demand by third parties. Not he, but the person who has put the work on the internet, decides whether the work remains available to the public. If the web page containing the protected work is deleted after the setting of the hyperlink, the hyperlink misses. Access to the work is only made possible through the hyperlink and therefore the work literally is made available to a user, who does not already know the URL as the precise name of the source of the webpage on the internet. This is however no different to a reference to a print or to a website in the footnote of a publication. 

[43] The Information Society Directive, …., has not changed the assessment of hyperlinks, as are in question here, under copyright law … According to Art.3(1) of the Information Society Directive Member States are obliged to provide authors with the exclusive right to authorise or prohibit any communication to the public of their works, including the making available to the public of their works in such a way that members of the public may access them from a place and a time individually chosen by them. This provision refers to the use of works in their communication to the public. The setting of hyperlinks is not a communication in this sense; it enables neither the (further) keeping available of the work nor the on-demand transmission of the work to the user.

In Napster.no[81], the Supreme Court of Norway held that the posting on a website (in this case, http://www.napster.no) of hyperlinks that led to unlawfully uploaded MP3 files did not necessarily constitute an act of making the files available to the public. It stated:

“[44] There has been no dispute that those uploading the music files carried out illegal copying and made the works publicly available. If the linking is regarded as making works publicly available, this will concern linking to both lawfully and unlawfully disclosed material. The conception of what constitutes making works publicly available must be the same in both cases…

[45] The appellants claim that the linking involved an independent and immediate access to the music. A [the respondent] for his part has  pointed out that the links only contained an address to a webpage and  that, by clicking on the link, the music file would be stored temporarily on the user’s own computer. Not until such storage took place would the user be able to play the music file or download it for later use.

[46] In my opinion, it is not decisive whether [direct/deep links] or [superficial links - links to the main page of the website] are involved, nor whether the user technically is “located” on his/her own computer, on napster.no, or has “moved” to the website to which the link leads. What must be decisive is how the technique functions – whether and how access is given.

[47] It cannot be doubted that simply making a website address known by rendering it on the internet is not making a work publicly available. This must be the case independent of whether the address concerns lawfully or unlawfully posted material…”

The European Copyright Society also referred to the case of Perfect 10 v Google Inc.,[82] a decision of the 9th Circuit Court of Appeals.

10.          Perfect 10 v Google – Moving Away from Reimerdes & Corley

The factual background was as follows:

Perfect 10 marketed and sold copyrighted images of nude models. Among other enterprises, it operated a subscription website on the Internet. Subscribers paid a monthly fee to view Perfect 10 images in a “members’ area” of the site. Subscribers had to use a password to log into the members’ area. Google did not include these password-protected images from the members’ area in Google’s index or database. Perfect 10 also licensed Fonestarz Media Limited to sell and distribute Perfect 10’s reduced-size copyrighted images for download and use on cell phones.

Some website publishers republished Perfect 10’s images on the Internet without authorization. Once this occurred, Google’s search engine automatically indexed the webpages containing these images and provided thumbnail versions of images in response to user inquiries. When a user clicked on the thumbnail image returned by Google’s search engine, the user’s browser accessed the third-party webpage and in-line linked to the full-sized infringing image stored on the website publisher’s computer. This image appeared, in its original context, on the lower portion of the window on the user’s computer screen, framed by information from Google’s web-page.

Perfect 10 sued Google claiming that the latter’s “Google Image search” infringed Perfect 10’s copyrighted photographs of nude models, when it provided users of the search engine with thumbnail versions of Perfect 10’s images, accompanied by hyperlinks to the website publisher’s page.

The Court commenced by examining the operation of Google’s search engine, the way in which search results were returned and the operation of the links provided with those results. It observed that there was no dispute that Google’s computers stored thumbnail versions of Perfect 10’s copyrighted images and communicated copies of the thumbnails to Google users. However, it also noted that Google did not display a full-sized infringing image when Google framed in-line linked images that appeared on a user’s computer screen.

“Because Google’s computers do not store the photographic images, Google does not have a copy of the images for purposes of the Copyright Act. In other words, Google does not have any “material objects … in which a work is fixed … and from which the work can be perceived, reproduced, or otherwise communicated” and thus cannot communicate a copy.”[83]

The Court went on to look at the way that the technology operated:

“Instead of communicating a copy of the image, Google provides HTML instructions that direct a user’s browser to a website publisher’s computer that stores the full-size photographic image. Providing these HTML instructions is not equivalent to showing a copy. First, the HTML instructions are lines of text, not a photographic image. Second, HTML instructions do not themselves cause infringing images to appear on the user’s computer screen. The HTML merely gives the address of the image to the user’s browser. The browser then interacts with the computer that stores the infringing image. It is this interaction that causes an infringing image to appear on the user’s computer screen. Google may facilitate the user’s access to infringing images. However, such assistance raises only contributory liability issues, see Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 545 U.S. 913, 929-30, 125 S.Ct. 2764, 162 L.Ed.2d 781 (2005), Napster, 239 F.3d at 1019, and does not constitute direct infringement of the copyright owner’s display rights.”

It was for that reason that the Court concluded that

“Google’s search engine communicates HTML instructions that tell a user’s browser where to find full-size images on a website publisher’s computer, but Google does not itself distribute copies of the infringing photographs. It is the website publisher’s computer that distributes copies of the images by transmitting the photographic image electronically to the user’s computer.”[84]
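The distinction the Court drew can be illustrated with a minimal HTML sketch (the addresses are hypothetical and are not drawn from the case):

<!-- A conventional hyperlink: nothing is fetched until the user clicks,
     and the image is then supplied by the publisher's own server. -->
<a href="http://publisher.example.com/photo.html">View the photograph</a>

<!-- An in-line link of the kind at issue in Perfect 10: the HTML is merely
     lines of text instructing the user's browser to fetch and display the
     full-size image directly from the publisher's server. The linking site
     itself stores no copy of the image. -->
<img src="http://publisher.example.com/images/photo-full.jpg"
     alt="Full-size image served by the website publisher's computer">

In both cases it is the publisher’s computer that stores and transmits the image; the linking site supplies only text.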

This case is useful in that it considers the nature of the technology in arriving at its conclusion. However, unlike Crookes v Newton, which attempted to lay down some bright-line rules about links and their general function within the legal framework, the decision in Perfect 10 is fact specific. Whilst, within the factual matrix of the case, Google was not liable for direct infringement, there remained the problem of assisting third-party websites in distributing their infringing copies of photographs to a worldwide market and assisting a worldwide audience of users to access infringing materials, for the purposes of Perfect 10’s contributory infringement claim.

In addition, the decision seems to be a step away from Judge Kaplan’s approach in Reimerdes & Corley, which relied on the “functional equivalence” approach. The Court in Perfect 10 effectively adopted a content-neutrality approach to the issue of direct infringement, focussing especially upon the nature of the code and the HTML instructions, which it held were not equivalent to showing a copy. Applying the approach in Reimerdes & Corley, that would amount to a “distinction without a difference”, but the more recent cases seem to demonstrate a shift away from such a rationale towards a more nuanced understanding of links and their use as a coded reference point.[85]

Return to Svensson

Which brings us back to Svensson and the recent decision of the Court of Justice of the European Union (CJEU). This decision could be viewed as a compromise. It contains some relief for those who link to content, but at the same time hedges that around with exceptions. Regrettably the decision seems to take little account of the technological underpinning of hyperlinking, nor the fact that links are in essence neutral. At the same time it must be remembered that the decision relates to EU copyright law and therefore cannot be said to contain rules of universal application.

In summary the Court held as follows:

      1. A clickable direct link to a copyright work made freely available on the internet with the authority of the copyright holder does not infringe.
      2. It makes no difference to that conclusion if a user clicking on the link is given the impression that the work is on the linking site.
      3. However, it seems that a clickable link will (unless saved by any applicable copyright exceptions) infringe if the copyright holder has not itself authorised the work to be made freely available on the internet.
      4. If the work is initially made available on the internet with restrictions so that only the site’s subscribers can access it, then a link that circumvents those restrictions will infringe (again subject to any applicable exceptions and further discussion below).
      5. The same is true where the work is no longer available on the site on which it was initially communicated, or where it was initially freely available and subsequently restricted, while being accessible on another site without the copyright holder’s authorisation.

Within the context of copyright law, if material is made freely available on the Internet by the copyright holder, there are no infringement implications. Linking to the content, assuming that a link amounts to an act of communication, does not constitute infringement. The Court said: “the provision on a website of clickable links to works freely available on another website does not constitute an act of communication to the public, as referred to in that provision.”

Graham Smith on his Cyberleagle blog makes the following comment:

“Taken at its face, that could suggest that a link to any freely available work does not infringe, regardless of whether the copyright holder initially authorised the work to be made freely available on the internet. That would broadly legitimise most links. But if that is right it is difficult to understand the numerous references in the judgment to whether the copyright holders authorised the initial communication to the public on the internet, and the potential audience contemplated when they did so.  It seems likely that the operative part should instead be understood to mean:
“…the provision on a website of clickable links to works freely available on another website, in circumstances where the copyright holder has authorised such works to be made freely available at [that]/ [an] internet location, does not constitute an ‘act of communication to the public’ … .”
The alternatives ‘that’/‘an’ reflect the possible uncertainty about the effect of the judgment on links to unauthorised copies where the copyright holder has authorised the work to be freely available at some other location on the internet.”

The situation begins to get complex if the copyright holder has not authorised the placing of the material on the Internet. Once again there is little that is contentious in this proposition. The placing of material on the Internet without the authorisation of the copyright holder (and making it available) involves acts of infringement unless one can fall within exceptions or claim a permitted act. The issue seems to become a little more complex, but in reality it is not, if the content is placed on the Internet with the approval of the copyright holder but is subject to restrictions which are subsequently circumvented. Clearly, unless one can fall within exceptions or establish a permitted use, there are infringement implications in providing such material.

There are a couple of observations that must be made about the CJEU decision. The first is that it is not a decision about the implications of hypertext links. There is no discussion about the technology or implications of hypertext links. In that respect the decision is a little disappointing. Secondly, what the decision IS about is the nature of communication within the context of copyright law. It is, therefore, a decision about copyright.

A factor which complicates the issue of communication is whether or not there has been communication to a “new public”. Laurence Eastham at the Society for Computers and the Law makes the following observations:

“The Court points out, however, that the communication must be directed at a new public, that is to say, at a public that was not taken into account by the copyright holders at the time the initial communication was authorised. According to the Court, there is no such ‘new public’ in the case of the site operated by Retriever Sverige. As the works offered on the site of the Göteborgs-Posten were freely accessible, the users of Retriever Sverige’s site must be deemed to be part of the public already taken into account by the journalists at the time the publication of the articles on the Göteborgs-Posten was authorised. That finding is not called into question by the fact that the internet users who click on the link have the impression that the work is appearing on Retriever Sverige’s site, whereas in fact it comes from the Göteborgs-Posten.

The Court concludes from this that the owner of a web site, such as that of Retriever Sverige, may, without the authorisation of the copyright holders, redirect internet users, via hyperlinks, to protected works available on a freely accessible basis on another site.

The position would be different, however, in a situation where the hyperlink permits users of the site on which that link appears to circumvent restrictions put in place by the site on which the protected work appears in order to restrict public access to that work to the latter site’s subscribers only, since in that situation, the users would not have been taken into account as potential public by the copyright holders when they authorised the initial communication.”

So is Svensson a helpful decision? The general reaction has been positive. Iain Connor makes the following assessment:

 “On the whole, this is a good decision for rights holders and consumers alike. It makes clear that you can provide links to content freely available on the web but that you need permission from the copyright holder in all other circumstances and so it puts rights holders in control of their business model.”

“The slight wrinkle is that where content is freely available, the decision appears to allow it to be ‘framed’ on a third party website,” he said. “However, it should be possible to manage the framing issue by robust website terms and conditions and other legal means such as the author’s right not to have his work falsely attributed to another.”

By focussing on communication, the Court avoided the thorny issue of the function and essential meaning of hypertext links. This was probably by design. By restricting their decision strictly to the ambit of the questions posed by the national court, the CJEU adopted a narrow focus on the issue, restricting the decision to the questions at hand and further restricting it to the strict framework of copyright law. The issues have been addressed strictly within that context, and the content neutrality (or partiality) of hypertext links need not, therefore, have been considered.

11.             Linking — issues arising

The cases that have been decided on the issue of linking do not establish with any degree of clarity that the provision of a link in all cases will automatically result in a copyright infringement. The mere provision of a link does not mean that a copy is made. Rather like a signpost, it directs a user to a particular site from which information may be obtained. The link itself does not involve copying material from another website.

Thus the mere provision of a link should not incur any direct liability for copyright infringement of website material. The work that is at the destination of the link is neither displayed nor communicated. The user is merely told where the work may be found.

When the site is accessed and temporary copies have been made into RAM, the matter is one between the copyright owner and the browser. Claims for secondary infringement against intermediaries will be futile.[86] The mere provision of a link without more does not implicate the link provider. It seems that in Universal City Studios v Reimerdes[87] and RIAA v Napster[88] the pursuit of the provider of the links or the operator of the Napster server, using US contributory infringement theory, was an expedient adopted rather than pursuing the millions of unidentified direct infringers. However, in the absence of any issues of contributory infringement, encouragement or interference with property rights that are inherent in framing and deep linking, the link provider commits no infringement.

Browsing webpages, if it is to be a “functional equivalent” of anything, may be the equivalent of reading a book in an environment akin to a public library. It thus falls outside the acts restricted by copyright, and there is no basis for secondary liability claims.

It may also be argued that anyone who places material on the internet without effective restrictions grants an implied licence to an internet user to make copies of the material and to any other website operator who links to it. The vast scope of the internet makes contractual solutions — agreements between owners of linking and linked-to sites — almost impossible.[89]

In this context, one must question whether the issue of implied licences is appropriate for issues involving copyright.[90]

First, it is unclear that an implied licence could apply in the case of a deep link that circumvents the main page and the advertising placed on it. It could be argued that the owner of a website grants a licence to browse the site, but in the way the creator of the website intended and designed it. However, one commentator suggested that the argument of an implied licence is still tenable even in the event of linking to deep pages in a website.[91]

Secondly, the use of web linking agreements that explicitly state that permission is required can avoid the argument for an implied licence. Such web agreements are becoming more common, and the implied licence then cannot work against the clearly expressed will of the copyright owner, as agreed with the author of the linking site.

However, even the defenders of these kinds of agreements recognise that the suggestion of a contract being necessary to link to a website seems contradictory to the ethos of the internet, and suggest that, for normal links, these kinds of agreements are unnecessary. One can link without permission and without having to give notice to the copyright owner.

Web linking agreements, according to this view, should still be useful for embedded links and especially for frames, and as a precautionary measure taken by the operator of a commercial website that wants to be sure that it will not face liability for the links that it is providing.

In addition, some owners have attempted to negate implied licence by inserting “Terms of Use” that explicitly deny the existence of implied licence to link, but these disclaimers are often inconspicuous, and may be considered unenforceable.

Thirdly, the implied licence doctrine is an aspect of contract law, essentially an estoppel doctrine, and it does not fit well with the traditional copyright law, because such a factual contract would not seem to arise between strangers who do not have a previous relation, legal or de facto.

Finally, in the case of copyrighted materials posted on webpages without the consent or the knowledge of the owner, it is clear that this argument cannot work because the implied licence is based on the premise that the owner knew that his or her material would be available on the internet.

In terms of copyright law, fair dealing is a far more satisfactory solution. It is clear that some form of statutory amendment would be required to the New Zealand Copyright Act 1994 because to attempt to apply the fair dealing provisions of that legislation[92] to webpages may strain the language of the statute.

Under the fair use defence in the US there is no infringement if the use is fair, even if the use violates one of the copyright owner’s exclusive rights. Four factors are considered in determining whether a use is fair:

1.   the purpose and character of the use, including whether such use is of a commercial nature or is for non-profit educational purposes;

2.   the nature of the copyrighted work;

3.   the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

4.   the effect of the use upon the potential market for or value of the copyrighted work.[93]

The purposes and character of linking are closely related to the purpose of the internet, which is to provide information freely to the public in an accessible manner. Where the nature of the linked page is commercial or non-fictional, and where the linked page is a small and insubstantial part of the plaintiff’s work, in the absence of a commercial or profit-making use, browsing should qualify as fair use. As far as the fourth requirement is concerned, linking could be said to work to the copyright owner’s advantage.

In the context of the world wide web the link expands the potential market of the linked site, because more people are able to easily find and access the site. Even if accessing a website through a link does make a copy of the whole work, this factor does not have to be decisive in the majority of cases and fair use can be successfully invoked.

So far the issue of simple linking has been considered. It is clear that the cases outlined recognise that different considerations apply to deep linking and framing that involves the diversion past a normal entry point to a website (along with any notices, terms or conditions of use, advertising and the like) to material of value to the user, or, in the case of framing, the apparent appropriation of another user’s material and inclusion as part of another, unrelated site.

The cases on deep linking demonstrate that a deep link potentially prejudices the linked-to site by circumventing advertising material and possibly prejudicing the linked-to site’s income, especially if advertising revenue is based on the number of times the page containing the advertising is accessed. Although the objective in providing a deep link may be to provide relevant information (which is the major purpose of the internet) nevertheless there is an expectation on the part of a site owner that access to the site will be reached via a home or starting page.

However, given the nature of the internet as an information-disseminating medium, and the essentially “public nature” of information available on the internet, and the fact that it is potentially available for world wide distribution and access, and given that in essence a link is no more and no less than a signpost, it is suggested that it need not follow as a matter of course that a deep link should be seen as a copyright infringement. Rather, greater value should be placed on the availability of information than on the fact that access to that information has been other than through a home page. As additional factors, the person or organisation behind the linked-to site:

•    knows the nature of the web and that the link is a means of access and navigation;

•    is taking advantage of the tremendous reach that the web offers; and

•    has available certain technological means of restricting access to parts of the site.[94]

Given the third factor, it would be a simple matter to establish copyright infringement if a person linking to the site used some sort of “hack” or “software solution” to gain access to the deep-linked material by circumventing the means employed to restrict access.

Another option is to place a higher burden of proof on a person alleging infringement by deep linking in terms either of standard of proof or of the criteria to be established to prove infringement. Among such criteria could be actual (as opposed to potential) damage or provable loss of revenue or patronage as an ingredient of the infringement rather than the quasi-strict liability copyright law approach that deems unauthorised copying to be infringement.

Framing creates another situation in terms of intellectual property. Framing is a function of software, having been introduced as a feature of Netscape 2. It is now widespread within browser software. Many sites require that for a user to access them, a frames-capable browser should be used. Unlike linking, however, framing is not a fundamental part of the architecture of the world wide web. In addition, framing is far closer to traditional print-based copyright theory than linking in that framing may give the impression that the web material framed in the window of site A belongs to and is a part of site A rather than in fact belonging to site B. In effect, framing without attribution is a clear appropriation of site B’s material and can be protected by ordinary copyright principles. To avoid such difficulties, many websites open a linked-to site in a completely new window, thus identifying the linked-to material by name and by URL.
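A hedged sketch may make the contrast concrete; the site names are hypothetical, and the markup reflects the frameset feature introduced with Netscape 2:

<!-- Framing: site B's page is displayed inside site A's window, which may
     give the impression that B's material belongs to and is part of site A. -->
<frameset cols="25%,75%">
  <frame src="http://site-a.example/menu.html">
  <frame src="http://site-b.example/article.html">
</frameset>

<!-- The alternative noted above: the linked-to site opens in a new window,
     identified by its own name and URL. -->
<a href="http://site-b.example/article.html" target="_blank">Site B's article</a>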

12.             The Threat to the Web

The challenging of links based on copyright theory raises a greater issue as far as the internet is concerned. This goes to the heart of the function of the world wide web, the architecture and environment of the internet and the way in which, if at all, such fundamental aspects of the new technology are going to be regulated or governed.

Linking is what gives the web its awesome power. It is an attraction for ordinary people who wish to obtain information quickly and easily without having to understand the mysteries of code or remember complex address parameters or details. Linking is an indispensable tool that allows internet users to benefit from information that is located on the web. To establish that a mere link or mere browsing infringes copyright would be the equivalent of killing the world wide web, which represents the internet to the majority of computer users.

Linking seems to have attracted the attention of copyright specialists because it provides a means by which potential infringements may take place by directing users to copyrighted material. Thus, although the link is merely what could be classified as an intermediate step in the process of potential infringement, it seems to have assumed a significance that goes beyond what it really is. Further, it gives copyright owners a convenient target — the owner of the linking site — rather than the ultimate consumer, and even then there may be some doubt as to whether there has in fact been an infringement by the mere accessing of a webpage without more.

Linking holds no mystery. A link is merely a line of code that allows a step to be taken. Tim Berners-Lee, who developed the world wide web at the European Organisation for Nuclear Research (CERN),[95] puts the matter thus: “the intention in the design of the web was that normal links should simply be references, with no implied meaning”.[96]

The contents of the linked document may contain meaning and often do but the link itself does not. A recommendation to go to a particular site followed by a link, or even the embedding of a hypertext reference (HREF) within the recommendation does not add any extra meaning to the link.

A useful analogy for a link is to treat it as a card index system in the library that directs a researcher or library user to a particular location within the shelves of the library. The library card carries no more information than is necessary to enable the user to satisfy him or her that the book is the one that is sought and to locate it.[97] A hypertext link does not even go this far. It only contains the information about the location of the information on the destination site and makes that site available to the computer user.
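A minimal sketch, using a hypothetical address, shows just how little a link contains:

<!-- Like a library card, the HREF attribute records only where the material
     is to be found; any meaning lies in the destination document itself. -->
<a href="http://example.com/judgments/decision.html">Read the decision</a>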

However, it is, as has already been stated, an essential part of the architecture of the internet. This then raises a question that relates to the way in which the law applies to the internet, and touches on an even deeper issue that is the purpose of law itself.

The purpose of the law is to regulate the behaviour of individuals within society. One of the areas that the law has been unable to regulate is the environment within which individuals operate. That environment is governed by what we may refer to as the “laws of nature” or the “laws of science”. It is, if you like, the architecture within which society operates.

Similarly with the internet — inherent within what has been called cyberspace is a fundamental architecture or system within which we may operate and without which the internet or parts of it cannot function. An example may be found in the TCP/IP protocol that allows different computers to communicate with one another. Another is the system of IP numbers that are assigned to machines on the internet. Although we may set rules for the assignation of IP numbers[98] the internet simply will not function without these two essential aspects of the internet environment or its architecture. This environment or architecture is not a part of nature as the real world environment is. It is created by human beings. However, that architecture sets the metes and bounds of the internet or cyberspace and receives its expression in code.

The world wide web built on existing internet protocols and added another dimension to the internet. However, what the world wide web actually is and its limitations are governed by the code that makes it operate. Part of that system of operations is hypertext linking, activated in code by the term HREF. Linking cannot take place without the HREF expression. Thus linking is an essential part of the architecture and the environment of the world wide web. To attempt to limit or regulate its use is rather like a Judge trying to slow the growth of a tree by judicial decree.

Lest it is suggested that it is not the architecture that is being regulated but the way in which people behave — that is, utilise the architecture — we must return to first principles and see what the HREF expression does. Unlike a mechanical creation, which may be used for good or ill, HREF allows only one thing and that is a hypertext link — a means of locating a page and bringing it into a user’s computer. To attempt to legislate or regulate what is an essential part of the web does violence to the environment within which the internet user may expect to operate. Indeed, the most extreme view might be that for the purposes of ensuring that internet users have certainty in terms of the lawfulness of their activities within the environment, the law should not concern itself with issues of the use of basic and necessary parts of the internet by using copyright theory to limit the use of these fundamental operators. The real issue should be with the use that the ultimate user may make of copyrighted material. If that causes copyright owners a problem in terms of detection and enforcement so be it. Back-door methods should not be used which do violence to the internet environment and simultaneously to legal principle.

In a sense the code or the architecture of the internet limits the way in which the law can be applied to regulate or govern it. The code, and the architecture that it provides, imposes its own regulatory metes and bounds, not only in terms of what may or may not be done, but in terms of the boundaries of any regulatory or governance system that may be imposed upon it by the law, whether pronounced by the Courts or by legislative bodies.

If, however, linking activity is going to be the subject of regulation under, say, principles of copyright, the effect upon the internet will be dramatic.[99]

First, the internet will cease to be the free information environment that it was originally conceived to be. Freedom of navigation for information on the internet will become restricted in the same way that “real space” is. The freedom that users enjoy to link to content on similar subjects whereby a collection of links on, say, copyright law are all brought together, may be compromised or indeed become impossible. Not only would the free information environment be restricted but the utility of the internet as a source of information would be hampered.

Secondly, the internet could become divided into a number of information “zones” of open and closed areas. Distinctions between sources of information may be made on a number of criteria, among them pricing considerations, the willingness of a user to provide information about himself or herself, the willingness of a user to accept additional information on products or services, whether the site is a commercial site or a non-commercial one and so on. In some respects this is already taking place. For example, when the New York Times Cyberlaw Journal existed online it was often linked to from legal sites. However, a visitor to that site for the first time was unable to access it until he or she registered, and that registration was specific to the particular machine. Thus, if the user wished to access the Cyberlaw Journal from another machine, the registration information (user name and password) had to be re-entered. The New York Times Cyberlaw Journal was free of charge, but the mechanism prevented casual access to the site by deep linking.

Other technological solutions may be available, such as requiring a password to gain access, or building dynamic webpages that only appear when the user uses a certain program. For commercial sites, which seek as wide a reach as possible, there may be feasibility issues that militate against this solution, although it may be satisfactory for non-commercial sites. The irony is that the complaints about deep linking arise mainly from commercial sites, whereas non-commercial sites are generally more attuned to the “information wants to be free” ethic that underpinned the early internet.

It is possible to program webpages to reject linking from unwelcome sources or users. In a sense this is a logical and acceptable solution, for it puts control of access to the site in the hands of the site owner. This enables the owner to obtain the exposure that is required while at the same time preventing unwelcome links, such as in the case of Havana House Cigars in New Zealand. Framing may also be prevented in that the frame may be “dissolved” thus enabling the user to see the entire page from its source and not as a part of another site.
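The “dissolving” of a frame has commonly been achieved with a short “frame-busting” script placed in the framed page. The following sketch illustrates the technique; it is offered by way of example only and is not drawn from any of the cases discussed:

<!-- If this page finds itself loaded inside another site's frame, it replaces
     the framing page with itself, so that the user sees the page from its
     source and not as part of another site. -->
<script type="text/javascript">
  if (window.top !== window.self) {
    window.top.location = window.self.location;
  }
</script>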

There is no doubt that there will be further litigation about links, and in the near future there will be some considered and possibly definitive solutions. However, those solutions may well further obscure this complex area. Decisions emanating from the US will have to resolve apparent conflicts between the Digital Millennium Copyright Act and the Constitution of the US, in particular the First Amendment. With legislation now in force in Australia another outcome may well be presented, for Australia has no constitutional equivalent of the First Amendment.[100]

Thus it is likely that there will be a jigsaw of rules and regulations limited by territorial jurisdictions and applicable in some areas and not in others. The casualties in the resolution of these conflicts will be the law, which, being territorially based, will be unable to provide consistency and certainty for an environment that does not know borders, and tragically the internet itself.

13.             Conclusion

There are those such as Goldsmith[101] who call for internet regulation and a system of governance. There are others, such as John Perry Barlow,[102] who see the internet as the last frontier for freedom, and who quail at any form of intrusion by the law.

It is necessary to remember that the internet is primarily a means of conveying information. It is not without reason that many emphasise the internet as an important component of the so-called “knowledge economy”. Issues of technology convergence and the use of the internet for business and commerce mean that necessarily the law has a place in the digital environment. In many cases existing legal rules govern these relationships. In some cases, new solutions will have to be devised. Information that is available in a book that can be borrowed from a public library should not be proscribed merely because that same information is available online and can be obtained by clicking on a hypertext link. By the same token, information available on the internet should not be proscribed merely because it is there, especially if it is legitimately available from another source.

What will require care is to ensure that the digital environment receives the same treatment as the “real world”, and that restrictions and inhibitors on activities and relationships that are not present in the real world do not become available in, or intrude upon, the online environment.


[62] [2013] EWCA Civ 68

[63] [2012] VSC 533

[64] [2012] VSC 88

[65] [2009] EWHC 1765

[66] [2011] SCC 47

[67] H C Auckland, 19 January 1998, CP 344/97 Master Kennedy-Grant

[68] The domain name of which was the subject of litigation – see NZ Post v Leng [1999] 3 NZLR 219

[69] (1894) 38 SJ 234 (CA).

[70] (1891) 64 LT 797

[71] [2011] SCC 47

[72] By a majority decision comprising Binnie, LeBel, Abella, Charron, Rothstein and Cromwell JJ.

[73] At para [36]

[74] Reference for a preliminary ruling from the Svea hovrätt (Sweden) lodged on 18 October 2012 – Nils Svensson, Sten Sjögren, Madelaine Sahlman, Pia Gadd v Retriever Sverige AB http://curia.europa.eu/juris/document/document.jsf?text=&docid=130286&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1258343 (last accessed 19 March 2013).

[75] The European Copyright Society is a year-old group of academics and scholars that, it has said, seeks to “promote their views of the overall public interest”. The group’s opinion on the issues before the CJEU was formed by 17 academics from across Europe, including Professor Lionel Bently from Cambridge, Professor Graeme B Dinwoodie of Oxford University and Professor Martin Kretschmer, the director of CREATe at the University of Glasgow.

[76] Opinion on the reference to the CJEU in Case C-466/12 Svensson – 15 February 2013 http://www.ivir.nl/news/European_Copyright_Society_Opinion_on_Svensson.pdf (last accessed 19 March 2013).

[77] Ibid. paras [35 – 36].

[78] Ibid. para [6 (a) – (b)]. The argument is developed later in the submission by a careful analysis of CJEU cases on the notion of communication and the significance of transmission – see paras [23 – 26]

[79] Ibid. para [55].

[80] Case I ZR 259/00 (17 July 2003) [2005] ECDR (7) 67, 77.

[81] (2006) IIC 120 (27 January 2005)

[82] 487 F. 3d 701 (2007)

[83] Ibid. p. 717

[84] Ibid. p. 718

[85] Having said that it must be noted that Perfect 10  was a decision of the 9th Circuit whereas Reimerdes & Corley was a decision of the 2nd Circuit. The matter will only be definitively resolved by accord between the decisions of the Circuits or a decision of the Supreme Court of the United States.

[86]      In Europe this exclusion was embodied in art 5.1 of the Proposal for a Directive on Copyright and Related Rights in the Information Society, approved 10 December 1997.

[87]      111 F Supp 2d 294 (SDNY 2000), 273 F 3d 429 (2d Cir NY 2001).

[88]      114 F Supp 2d 896 (ND Cal 2000), 239 F 3d 1004 (9th Cir 2001).

[89]      P Jakab “Framing Technology and Link Liability” Internet Law Symposium (1998) Pace Law Review 23–25; On contractual solutions see, “Weblinking agreements, Contracting Strategies and Model Provisions” (1997) Section of Business Law American Bar Association; M Luria “Controlling Web Advertising: Spamming, Linking, Framing and Privacy” (1997) 14/1 The Computer Lawyer, 10–22.

[90]      Although it is far more apposite to patent law. Indeed, the defence of implied licence in the US is an affirmative one and it cannot automatically be asserted that it is available. In order to determine whether such an implied non-exclusive licence exists, every objective fact concerning the transaction should be examined to determine whether it supports such a finding. Several objective inquiries can be made in this regard, including an assessment of whether the delivery of the copyrighted material was without warning that its further use would constitute copyright infringement. See Edward A Cavazos and Coe F Miles, “Copyright on the WWW: Linking and Liability” 4 Richmond Jnl of Law and Technology (1997) para 40 and following http://law.richmond.edu/jolt/v4i2/index.html (last accessed 21 March 2013).

[91]      Maureen O’Rourke, “Fencing Cyberspace: Drawing Borders in a Virtual World”, 82 Minn L Rev 609, 660.

[92]      Part 3, ss 40–93.

[93]      USCS 107.

[94]      Special log-in, registration, provision of information for the payment of a fee or a technological way of ensuring that access to the “deep-linked” page is only possible by way of the entry page.

[95]      http://public.web.cern.ch/Public/Welcome.html (last accessed 21 March 2013).

[96]      Tim Berners-Lee, “Links and Law” http://www.w3.org/DesignIssues/LinkLaw (last accessed 21 March 2013).

[97]      The analogy of a referencing system was used in Crookes v Newton [2011] SCC 47.

[98]      One of the functions of ICANN.

[99]      The Copyright Amendment (Digital Agenda) Act 2000 (Australia) which substantially amends the Copyright Act 1968 provides that copyrighted subject-matter is not infringed by making a temporary reproduction or copy of the subject-matter as part of the technical process of making or receiving a communication, provided that the making of the communication is not an infringement of copyright (ss 43A and 111A). Although this substantially clarifies the position as far as incidental copying associated with online activity is concerned, the legislation remained silent on the issue of hyperlinks. There is no relevant Australian case law on the issue although an early draft of the Digital Agenda Bill suggested that one of the objects of the legislation was to relieve uncertainty as to whether practices such as internet browsing and hyperlinks violated the Copyright Act. However, the matter has been left to the Courts. See Maree Sainsbury, “The Copyright Act in the Digital Age” 11 Jnl Law and Information Science 182.

[100]     For an example of conflicting outcomes between Australia and England see Sony Computer Entertainment v Owen [2002] WL 346974 (Ch D), [2002] EWHC 45 and Sony Computer Entertainment v Stevens (2002) FCA 906 (26 July 2002).

[101]     “Against Cyberanarchy” 65 U Chicago LR 1205.

[102]     “A Declaration of Independence of Cyberspace”, 8 February 1996, https://projects.eff.org/~barlow/Declaration-Final.html  (last accessed 21 March 2013).

Linking and the Law – Part 2 – A Diversion to TPMs


7    The New Zealand position — Technological Protection Measures, Anti-circumvention and communication

7.1             Introduction

The Digital Millennium Copyright Act came into force in the US in October 1998 as a response to the 1996 WIPO Copyright Treaty. Updated copyright legislation, including provisions relating to the anti-circumvention of copy protection, was enacted in the UK in 1988[54] and in New Zealand in 1994.[55] In this section I shall consider the provisions of the Copyright Act that deal with Technological Protection Measures or TPMs.[56] It will become clear as the discussion progresses that the New Zealand legislation as amended by the Copyright (New Technologies) Amendment Act 2008 addresses the issues that were raised in Reimerdes and Corley.

The discussion in this section is admittedly lengthy, but it is necessary to understand the approach to TPMs and the way in which the Legislature has attempted to address the problem. It is helpful within the context of linking because the decision in Reimerdes and Corley centred upon the provision of access to a TPM circumvention device. A consideration of the New Zealand position, especially following the 2008 amendments to the Copyright Act, will show significant differences in the approach to TPMs from that in the DMCA, which would mean that the approach to linking in Reimerdes and Corley need not necessarily be applicable in New Zealand.

7.2             The Former Section 226 of the Copyright Act 1994

The provisions of the former s 226 of the Copyright Act 1994 created a right in favour of a person issuing copies (effectively a publisher). That person had the same rights as a copyright owner, and the same remedies were available.

Subsection (2) of s 226 defined how a person “infringes” the new right. The elements of the prohibited activity were:

•   Making, selling, offering or exposing for sale or hire; or

•   Advertising for sale or hire;

•   Any device or means;

•   Specifically designed or adapted to circumvent the form of copy protection employed; or

•   Publishing information;

•   With the intention to enable or assist persons to circumvent that form of copy protection;

•   Knowing or having reason to believe that the devices, means, or information will be used to make infringing copies.

State of mind is significant. For the publishing of information there were two states of mind involved. First, there had to be an intention to enable or assist persons to circumvent copy protection — the specific intention. Secondly, it had to be proven that the publisher knew, or had reason to believe, that the information would be used to make infringing copies.

The prohibition on the distribution of circumvention devices involved proof of the same state of mind relating to the use of devices. There had to be knowledge (or reason to believe) that the device would be used to make infringing copies. In addition, the device had to be specifically designed or adapted to circumvent the copy protection employed. Knowledge would seem to follow from the specific design or adaptation of the device. One would hardly distribute a circumvention device specifically designed for that purpose if one did not know or have reason to believe that the device would be used for circumvention purposes.

As far as devices or means were concerned, it appeared that the use of those words extended not only to hardware devices that prevented copyright infringement taking place but also to software devices such as DeCSS.[57] As far as devices that have substantial non-infringing uses but incidentally include a circumvention device are concerned, the situation is a little more difficult. At present DVD and Blu-ray players have a built-in device that decrypts the CSS copy protection system. Imagine a DVD player/recorder that could not only play back material, but could record from a DVD as well. The CSS decoding system would be present for legitimate and authorised playback purposes. Thus the machine would be specifically designed to circumvent copy protection. But such a use would be authorised. Then there is the recording use. For liability to follow there would have to be specific knowledge on the part of the distributor of such a device that it would be used to make infringing copies. The mere presence of a circumvention means or device is not enough. It must be accompanied by the requisite knowledge or reason to believe.

The provision of information about circumvention means or devices was limited by two state of mind requirements that, arguably, would mean that the publication of, for example, academic research regarding circumvention technologies would not be caught by the section if:

•    the intention to enable or assist circumvention were absent; and/or

•    there was an absence of knowledge or reason to believe that the publication would be used to make infringing copies.

Thus, when we consider the examples, the scope of the former section was somewhat narrower than it first appeared.

7.3              The 2008 Amendment

The 2008 Amendment of s 226 and following amendments have made a number of changes. The first is that definitions have been provided. The second is that the essence of the former s 226 is retained in s 226A.

The focus of the new s 226 continues to be on the link between circumvention and copyright infringement and on the making, sale and hire of devices or information rather than on the act of actual circumvention. Actual circumvention is not prohibited, but any unauthorised use of the material that is facilitated by circumvention continues to be an infringement of copyright.

The new amendments recognise that consumers should be able to make use of materials under the permitted acts, or view or execute a non-infringing copy of a work. This is consistent with New Zealand’s position on parallel importation of legitimate goods; for example, genuine DVDs from other jurisdictions. New provisions have also been introduced to enable the actual exercise of permitted acts where TPMs have been applied.

What the new TPM provisions do is two-fold — broadly they prohibit and criminalise.

There is a prohibition of commercial conduct that undermines a TPM by putting a circumvention device into circulation or providing a service, including the publication of information, which relates to overriding TPM protection. Contravention has civil consequences — specifically, the issuer of the work protected by a TPM is protected as if the conduct were an infringement of copyright. The second leg is to make the prohibited conduct a criminal offence.[58]

There is a knowledge element for both the prohibition and the offence — the knowledge of the use to which the circumvention device or the service or published information will, or is likely to, be put.

There are, however, some limits on the prohibition where a circumvention device has a legitimate use.

7.3.1           The Definitions

There are three definitions which are applicable to ss 226A–226E. The first is a technological protection measure or TPM:[59]

TPM or technological protection measure

(a)    means any process, treatment, mechanism, device, or system that in the normal course of its operation prevents or inhibits the infringement of copyright in a TPM work; but

(b)   for the avoidance of doubt, does not include a process, treatment, mechanism, device, or system to the extent that, in the normal course of operation, it only controls any access to a work for non-infringing purposes (for example, it does not include a process, treatment, mechanism, device, or system to the extent that it controls geographic market segmentation by preventing the playback in New Zealand of a non-infringing copy of a work)

Significantly, the legislature differentiated between a TPM for the purposes of the prevention of infringement and one that relates to access to a work for non-infringing purposes. The example is given of the control of “geographic market segmentation”, which clearly relates to region protection in games or DVDs. Thus, if a person legitimately acquired a DVD that was coded for region 1, the region-coding device or process in the DVD player which would otherwise prevent the use of the DVD may be circumvented so that the non-infringing purpose of viewing the DVD can be carried out.

The second definition relates to a TPM circumvention device:[60]

TPM circumvention device means a device or means that—

(a)    is primarily designed, produced, or adapted for the purpose of enabling or facilitating the circumvention of a technological protection measure; and

(b)   has only limited commercially significant application except for its use in circumventing a technological protection measure

The primary purpose of the circumvention device must be to circumvent a TPM, bearing in mind that a TPM must be one that prevents infringement rather than controlling access for non-infringing purposes. In addition to being primarily designed, produced, or adapted for circumvention, the device must have only limited commercially significant application other than its use in circumventing a TPM.

Paragraphs (a) and (b) are conjunctive. A device that can circumvent a TPM but that also has other commercially significant applications, or, as the Americans put it, substantial non-infringing uses, will therefore fall outside the definition.

The third definition relates to a TPM work, which is defined as a copyright work that is protected by a TPM. A TPM work must be a copyright work, although it may well be that this cannot prevent an entrepreneur from locking up a public domain work with a TPM if there is some significance in the way in which the work has been typographically arranged.

7.3.2      The Operative Sections

Section 226A sets out the prohibited conduct in relation to a TPM, stating:

226A Prohibited conduct in relation to technological protection measure

(1) A person (A) must not make, import, sell, distribute, let for hire, offer or expose for sale or hire, or advertise for sale or hire, a TPM circumvention device that applies to a technological protection measure if A knows or has reason to believe that it will, or is likely to, be used to infringe copyright in a TPM work.

(2) A person (A) must not provide a service to another person (B) if—

(a)    A intends the service to enable or assist B to circumvent a technological protection measure; and

(b)   A knows or has reason to believe that the service will, or is likely to, be used to infringe copyright in a TPM work.

(3) A person (A) must not publish information enabling or assisting another person to circumvent a technological protection measure if A intends that the information will be used to infringe copyright in a TPM work.

Section 226A provides a useful example of modern statutory drafting techniques, clarifying the behaviour of each of the actors that the section addresses.

Section 226A(1) is identical in scope to the former s 226(1), with the exception that the definitions contained in the new s 226 impact upon the scope. Whereas the previous legislation referred to a form of copy protection, the definition of a TPM work, a TPM and a TPM circumvention device now govern.

Section 226A(2) relates to the publishing information limb of the former s 226, except that a new term (“service”) is used. This is undefined but clearly encompasses information.

Once again there are two limbs underlying the prohibition: the intention that the service enable or assist B to circumvent a TPM; and specific knowledge that the service will be, or is likely to be, used to infringe copyright in a TPM work.

If the service is provided for the purposes of university research, it is difficult to imagine that the second limb (knowledge of likely infringing use) could be satisfied; thus the prohibited conduct is not made out.

The use of the word “service” in s 226A(2) is new. The earlier iteration used the words “device” or “means”. Service is a very wide concept and, although s 226A(3) refers to the publication of information to enable or assist another person to circumvent a TPM, service extends the scope of s 226A(1) and in essence addresses any form of assistance enabling circumvention of a TPM, accompanied by knowledge or reason to believe that the assistance or service will be used to infringe copyright. There seems to be little doubt that a “service” could conceivably encompass a computer program or code.

Section 226A(3) relates specifically to the publication of information and, although the behaviour could be encompassed by a service, the legislature saw fit to make publication of information about TPM circumvention a discrete behaviour.

There is only one knowledge element in s 226A(3), as opposed to the two in s 226A(2): A, in publishing the information, must intend that the information be used to infringe copyright in a TPM work. Thus s 226A prohibits:

•    the making or distribution of a TPM circumvention device;

•    the provision of a service with the two limbs of intention to assist circumvention and knowledge that the service will be or likely to be used to circumvent; and

•    publication of information enabling circumvention if it is intended that information will be used to circumvent.

Unlike the original s 226, which was restricted in its language to commercial activity (sells, lets for hire, offers or exposes for sale or hire, or advertises for sale or hire), s 226A prohibits not only the making, selling, letting for hire, offering or exposing for sale, but also the importing or distributing of a TPM circumvention device. These terms can encompass an individual who downloads a TPM circumvention device from an off-shore site. This was not the case under the earlier legislation. Distribution is also prohibited. Thus if one makes a TPM circumvention device available for download from a website, and uses a link to facilitate delivery, such an action could fall within the ambit of “distribution”.

However, the provision of information has a commercial aspect to it, for the provision of such information must be “in the course of business” and is therefore of a narrower scope than had those words been omitted.

Section 226B sets out the rights that accrue to the issuer of a TPM work. These rights are what Kirby J referred to as para-copyright in Stevens v Kabushiki Kaisha Sony Computer Entertainment.[61] Essentially, the issuer of a TPM work has the same rights against a person who contravenes s 226A as the copyright owner has in respect of infringement. The provisions of the Copyright Act relating to delivery up in civil or criminal proceedings are available to the issuer of a TPM work, as are certain presumptions that are contained in ss 126–129 of the Copyright Act. The provisions of s 134 relating to disposing of infringing copies or objects apply as well, with the necessary modifications.

Absent from the 1994 version of s 226 was the offence of contravening s 226A. Section 226C creates that offence:

226C Offence of contravening section 226A

(1) A person (A) commits an offence who, in the course of business, makes, imports, sells, distributes, lets for hire, offers or exposes for sale or hire, or advertises for sale or hire, a TPM circumvention device that applies to a technological protection measure if A knows that it will, or is likely to, be used to infringe copyright in a TPM work.

(2) A person (A) commits an offence who, in the course of business, provides a service to another person (B) if—

(a)    A intends the service to enable or assist B to circumvent a technological protection measure; and

(b)   A knows that the service will, or is likely to, be used to infringe copyright in a TPM work.

(3) A person (A) commits an offence who, in the course of business, publishes information enabling or assisting another person to circumvent a technological protection measure if A intends that the information will be used to infringe copyright in a TPM work.

(4) A person who commits an offence under this section is liable on conviction on indictment to a fine not exceeding $150,000 or a term of imprisonment not exceeding 5 years or both.

The first important thing to note is that subs (4) requires the conviction to be on indictment, so the matter must be dealt with in the jury jurisdiction and cannot be dealt with summarily, although the position may well be altered by the provisions of the Criminal Procedure Act 2011.

Section 226C mirrors the prohibitions in s 226A, but the critical matter for an offence is that there is a commercial element — “in the course of business”.

Similarly, the provision of the service in subs (2) of 226C must have a commercial element as must the publication of information in subs (3).

This then brings the criminalisation of para-copyright into line with the provisions of s 135 of the Copyright Act, which relates to piracy or commercial infringement. Clearly, s 226C contemplates that the offence should relate to commercial activity involving TPMs. In this way the rather wider prohibitions contained in s 226A do not automatically lead to potential liability under s 226C.

Section 226D clarifies the position relating to the scope of the rights of the issuer of a TPM work. The operative part states:

226D When rights of issuer of TPM work do not apply

(1) The rights that the issuer of a TPM work has under section 226B do not prevent or restrict the exercise of a permitted act.

(2) The rights that the issuer of a TPM work has under section 226B do not prevent or restrict the making, importation, sale, or letting for hire of a TPM circumvention device to enable—

(a)    a qualified person to exercise a permitted act under Part 3 using a TPM circumvention device on behalf of the user of a TPM work; or

(b)   a person referred to in section 226E(3) to undertake encryption research.

(3) In this section and in section 226E, qualified person means—

(a)    the librarian of a prescribed library; or

(b)   the archivist of an archive; or

(c)    an educational establishment; or

(d)   any other person specified by the Governor-General by Order in Council on the recommendation of the Minister.

(4) A qualified person must not be supplied with a TPM circumvention device on behalf of a user unless the qualified person has first made a declaration to the supplier in the prescribed form.

The issuer of a TPM work cannot prevent or restrict the exercise of a permitted act. Nor do the issuer’s rights prevent or restrict the making, importation, sale or letting for hire of a TPM circumvention device to enable encryption research under s 226E(3), or to enable a qualified person to exercise a permitted act using a TPM circumvention device.

The legislation goes on to define “qualified person”; such a person must not be supplied with a TPM circumvention device on behalf of a user unless he or she has first made a declaration to the supplier in the prescribed form.

On their own the provisions of s 226D seem confusing, although it must be remembered that the provisions of s 226 and following do not prohibit the act of circumvention itself. Subsection (1) of s 226D makes it clear that circumvention may be permissible for the purposes of the exercise of a permitted act.

Section 226E takes the matter further.

226E User’s options if prevented from exercising permitted act by TPM

(1) Nothing in this Act prevents any person from using a TPM circumvention device to exercise a permitted act under Part 3.

(2) The user of a TPM work who wishes to exercise a permitted act under Part 3 but cannot practically do so because of a TPM may do either or both of the following:

(a)    apply to the copyright owner or the exclusive licensee for assistance enabling the user to exercise the permitted act:

(b)   engage a qualified person (see section 226D(3)) to exercise the permitted act on the user’s behalf using a TPM circumvention device, but only if the copyright owner or the exclusive licensee has refused the user’s request for assistance or has failed to respond to it within a reasonable time.

(3) Nothing in this Act prevents any person from using a TPM circumvention device to undertake encryption research if that person—

(a)    is either—

(i)     engaged in a course of study at an educational establishment in the field of encryption technology; or

(ii)    employed, trained, or experienced in the field of encryption technology; and

(b)   has either—

(i)     obtained permission from the copyright owner or exclusive licensee of the copyright to the use of a TPM circumvention device for the purpose of the research; or

(ii)    has taken, or will take, all reasonable steps to obtain that permission.

(4) A qualified person who exercises a permitted act on behalf of the user of a TPM work must not charge the user more than a sum consisting of the total of the cost of the provision of the service and a reasonable contribution to the qualified person’s general expenses.

Once again the section makes it clear that the act of circumvention to exercise a permitted act is not prohibited. Thus a person may use a circumvention device to copy a selection from a TPM work for the purposes of review, comment or inclusion (with attribution) in an academic work.

Subsection (2) qualifies that, however. If the user of a TPM work wishes to exercise a permitted act, he or she may use a TPM circumvention device to do so, but the subsection includes the words “but cannot practically do so because of a TPM”. It is unclear what this means. If a person’s access to a work to carry out a permitted act is prevented by a TPM, does subs (2) automatically apply? Or, if a circumvention device is available, may the user employ it to exercise the permitted act? Does subs (2) relate only to the situation where no circumvention device is available? Subsection (2), in providing certain options for the person who is stymied by a TPM, challenges the market failure theory of fair use.

A person wishing to do one of the permitted acts may apply to the copyright owner or exclusive licensee for assistance. The alternative is to engage a qualified person (see s 226D(3)) to exercise the permitted act on the user’s behalf using a circumvention device. But that can apply only if the copyright owner or exclusive licensee refuses the user’s request for assistance or fails to respond within a reasonable time.

A sensible interpretation of s 226E suggests that subs (2) must be followed if there is no readily available circumvention device enabling the user to exercise a permitted act.

It is also important to note that s 226E makes a specific exception for the use of circumvention devices to undertake encryption research in certain circumstances.

7.4              Comment

The new provisions of s 226 and following are indeed helpful. The incorporation of definitions which make it clear that TPMs are directed at the prevention of infringement rather than the control of access is to be welcomed (although s 226E seems to introduce a somewhat unnecessary level of complexity).

Underlying the whole issue of para-copyright is the fact that, in reality, TPMs are a somewhat blunt instrument for the purposes of copyright protection, presenting an “all or nothing” level of protection. TPMs cannot discriminate between a permitted and a prohibited use. They are applied internationally, whereas copyright law is territorial. TPMs place control of a technological rather than a legal nature in the hands of the copyright owner and, as already observed, provide a potential for market failure. Essentially, TPMs do not provide an absolute protection; rather, they impose another layer of protection that sits on top of the balance of interests created by statute, and muddy the waters between what is and is not allowed. The various options relating to behaviour regarding TPMs contained in ss 226B, 226D and 226E suggest that certain behaviours may be permissible while others are not. Clearly, the legislature did not want to impose a prohibition on the act of circumvention, but the various alternatives given in ss 226D and 226E seem to suggest prohibition.

The legislation, while addressing the issue of circumvention of TPMs, restricts prohibited conduct to the means by which copyright protection (rather than access prevention) may be circumvented. It therefore makes it clear that the provision of means by which access controls may be circumvented is not within the scope of prohibited conduct. This means that one may provide services, information and programs that assist in circumventing access protections. In this way the legislation addresses its target – the copy right – rather than allowing the engraftment of another “para-copyright” – the “prevention of access” right. This is eminently justifiable. Region coding is a means by which copyright owners facilitate distribution of their products. The only issue is one of market segmentation and the release strategy that copyright owners may have in place. There is no reason, in terms of copyright, why a person who legitimately acquires content in one geographical area should be prohibited from accessing it in another.

However, unlike the New Zealand legislation, the DMCA prohibits the circumvention of access control systems, despite there being no copyright implications and, to further complicate matters, criminalises such behaviour. It should be a matter of concern that, should international trade treaty negotiations result in the application of a DMCA-style anti-circumvention regime, the results will be:

a) the imposition of a foreign marketing system that goes far beyond those chosen, say, for the release of non-digital product such as movies and CDs;

b) the end of the parallel importing regime insofar as geographically segmented digital product is concerned; and

c) the criminalisation of behaviour that has nothing to do with copyright infringement and has no economic implications for content owners whatsoever.

Finally, it is still not clear whether licence terms or conditions of sale may override the way in which circumvention devices may be used in the limited situations provided in s 226. Unlike s 84, which statutorily negates such conditions, the legislation leaves the matter open. The legislature has gone to considerable lengths to ensure that the balance of interests that underlies copyright law is maintained. It seems unusual that those rights may be subverted by contractual arrangements.


[54]      The Copyright, Designs and Patents Act 1988.

[55]      The Copyright Act 1994 as amended by the Copyright (New Technologies) Amendment Act 2008.

[56]      Sections 226–226E.

[57]      CSS is the DVD content scrambling system that prevents the copying of the files on a DVD movie disc. DeCSS is the program that circumvents the content scrambling system.

[58]      The offence of contravening s 226A is set out in s 226C.

[59]      See the new s 226.

[60]      See the new s 226.

[61]      Stevens v Kabushiki Kaisha Sony Computer Entertainment [2005] HCA 58, (2005) 224 CLR 193, (2005) 221 ALR 448.