Under the law in New Zealand a digital file cannot be stolen. This follows from the Court of Appeal decision in Dixon v R [2014] NZCA 329 and depends upon the way in which the Court interpreted various definitions contained in the Crimes Act, coupled with the nature of the charge.
Mr. Dixon, the appellant, had been employed by a security firm in Queenstown. One of the clients of the firm was Base Ltd which operated the Altitude Bar in Queenstown. Base had installed a closed circuit TV system in the bar.
In September 2011 the English rugby team was touring New Zealand as part of the Rugby World Cup. The captain of the team was Mr Tindall. Mr Tindall had recently married the Queen’s granddaughter. On 11 September, Mr Tindall and several other team members visited Altitude Bar. During the evening there was an incident involving Mr Tindall and a female patron, which was recorded on Base’s CCTV.
Mr Dixon found out about the existence of the footage of Mr Tindall and asked one of Base’s receptionists to download it onto the computer she used at work. She agreed, being under the impression that Mr Dixon required it for legitimate work purposes. The receptionist located the footage and saved it onto her desktop computer in the reception area. Mr Dixon subsequently accessed that computer, located the relevant file and transferred it onto a USB stick belonging to him.
Mr Dixon attempted to sell the footage but, when that proved unsuccessful, he posted it on a video-sharing site, resulting in a storm of publicity both in New Zealand and in the United Kingdom. At trial, Judge Phillips found that Mr Dixon had done this out of spite and to ensure that no one else would have the opportunity to make any money from the footage.
A complaint was laid with the Police and Mr Dixon was charged under s. 249(1)(a) of the Crimes Act 1961.
That section provides as follows:
249 Accessing computer system for dishonest purpose
(1) Every one is liable to imprisonment for a term not exceeding 7 years who, directly or indirectly, accesses any computer system and thereby, dishonestly or by deception, and without claim of right,—
(a) obtains any property, privilege, service, pecuniary advantage, benefit, or valuable consideration;
The indictment against Mr Dixon alleged that he had “accessed a computer system and thereby dishonestly and without claim of right obtained property.”
The issue before the Court was whether or not digital footage stored on a computer was “property” as defined in the Crimes Act.
“Property” is defined in section 2 of the Crimes Act in the following way:
property includes real and personal property, and any estate or interest in any real or personal property, money, electricity, and any debt, and any thing in action, and any other right or interest.
The Court considered the legislative history of the definition, noting that in the Bill that introduced the new computer crimes a separate definition of property specifically for those crimes had been provided. The definition was discarded by the Select Committee which rejected the suggestion that there should be different definitions of the word property for different offences.
The Court also noted that in the case of Davies v Police [2008] 1 NZLR 638 (HC) it was held that internet usage (the consumption of megabytes in the transmission of electronic data) is “property” but in that case the Judge specifically distinguished internet usage from the information contained in the data. Thus, Dixon was the first case where the Court had to consider “property” as defined in the context of “electronically stored footage or images”.
In considering the decision of the trial Judge, the Court was of the view that he had been influenced by the very wide definition of property and the inclusion of intangible things, and by the fact that the footage in question seemed to have all the normal attributes of personal property. The Court also observed that Base Ltd, which operated the CCTV system, did not lose the file. What it lost was the right to exclusive possession and control of it. The Court considered that the trial judge’s holding that the files were within the scope of the definition of property reflected “an intuitive response that in the modern computer age digital data must be property.” (para 20)
The Court concluded otherwise and held that digital files are not property within section 2, and therefore Mr Dixon did not obtain property and was charged under the wrong part of section 249(1)(a). Rather, held the Court, he should have been charged with accessing a computer and dishonestly and without claim of right obtaining a benefit.
The Court referred to the English decision of Oxford v Moss (1979) 68 Cr App R 183 which involved a University student who unlawfully acquired an examination paper, read its contents and returned it. The Court held that was not theft. The student had obtained the information on the paper – confidential it may have been, but it was not property, unlike the medium upon which it was written.
The Court of Appeal noted that Oxford v Moss was not a closely reasoned decision but it remained good law in England and had been followed by the Supreme Court of Canada in Stewart v R [1988] 1 SCR 963. Oxford v Moss had also been followed in New Zealand. In Money Managers Ltd v Foxbridge Trading Ltd (HC Hamilton CP 67/93 15 December 1993) Hammond J noted that traditionally the common law had refused to treat information as property, and in Taxation Review Authority 25 [1997] TRNZ 129 Judge Barber had to consider whether computer programs and software constituted goods for the purpose of the Goods and Services Tax Act 1985. He drew a distinction between the medium upon which information or data was stored – such as computer disks – and the information itself.
The Court considered the nature of confidential information and a line of cases that held that it was not property. The traditional approach had been to rely on the equitable cause of action for breach of confidence.
The Court went on to consider whether or not the digital footage might be distinguishable from confidential information. Once again it noted the distinction between the information or data and the medium, observing that a computer disk containing the information was property whilst the information contained upon it was not. It observed that a digital file arguably does have a physical existence in a way that information (in non-physical form) does not, citing the decision in R v Cox (2004) 21 CRNZ 1 (CA) at [49]. Cox was a case about intercepted SMS messages. The relevant observation was directed to the issue of whether or not an electronic file could be the subject of a search. The Court in Cox noted:
“Nor do we see anything in the argument that the electronic data is not “a thing”. It has a physical existence even if ephemeral and that in any event the computer componentry on which it was stored was undoubtedly “a thing”.”
Any doubt on this particular issue has been resolved by the Search and Surveillance Act 2012. However, as I will discuss below, although a digital file does have a physical existence, it is not in coherent form. One of the subtexts to the Court of Appeal’s observations about the “electronically stored footage” was that, when stored electronically, the footage has a continuity similar to film footage. For reasons that I will discuss later, this is not the case.
The Court then went on to discuss the nature of information in the electronic space. The Court stated at [31]:
It is problematic to treat computer data as being analogous to information recorded in physical form. A computer file is essentially just a stored sequence of bytes that is available to a computer program or operating system. Those bytes cannot meaningfully be distinguished from pure information. A Microsoft Word document, for example, may appear to us to be the same as a physical sheet of paper containing text, but in fact is simply a stored sequence of bytes used by the Microsoft Word software to present the image that appears on the monitor.
The Court then reviewed the background to the extension of the definition of “property” following R v Wilkinson [1999] 1 NZLR 403 (CA), where it was held that credit extended by a bank was not capable of being stolen because the definition of things capable of being stolen was limited to moveable, tangible things. It noted that although the definition of “document” extends to electronic files, the word “document” does not appear in the definition of “property” (had it done so, the definition of property would have extended to electronic files). It also noted that the Law Commission, in its hastily produced and somewhat flawed report Computer Misuse (NZLC R54, 1999), had referred to a possible redefinition of information as a property right. Against that background the Court took what it described as the orthodox approach: Parliament was taken to be aware of the large body of authority regarding the status of information, and had it intended to change the legal position it would have expressly said so by including a specific reference to computer-stored data.
This holding did not make section 249(1) of the Crimes Act meaningless. The section would still extend to cases where, for example, a defendant accesses a computer and uses credit card details to unlawfully obtain goods. In this case, the Court observed, Mr Dixon had been charged under the wrong part of the section.
It is clear that prosecuting authorities will have to move with care in future. Under the Dixon holding, someone who unlawfully obtains an e-book for a Kindle or other reader could not be charged with theft, because an e-book is information in digital form. If the same book in hard copy form were taken without payment and with the requisite intention from a bookstore, a charge of theft could follow.
Comment
There can be no doubt that the decision of the Court of Appeal is correct both technologically and in law, although I do take a few minor points with the way in which the technological realities have been articulated.
The issue of where the property lies within the medium/information dichotomy has been with us for a considerable period of time. I can own the book, but I do not “own” the content, nor may I do with it as I wish, because it is the “property” of the author. The particular property right – the “copy right” – gives the author control over the use of the content of the book: the author may lose possession and control of the medium but he or she does not lose control of the message.
But the “copy right” has its own special statute and those legislatively created special property rights do not extend to the provisions of the Crimes Act – even although copyright owners frequently mouth the mantra that copyright infringement is “theft”. Clearly the decision in Dixon, emphasising the principle that information is not property for the purposes of theft, must put that myth to rest.
Information or Data in the Digital Space
To clearly understand the import of the decision in Dixon it is necessary to understand the nature of information or data in the digital space. The Court of Appeal refers to “information” because that is the basis of the “orthodox” conclusion that it reached. Information implies a certain continuity and coherence that derives from the way in which it was communicated in the pre-digital paradigm. Lawyers are so used to obtaining information associated primarily with paper that the medium takes second place to the message. Lawyers focus upon the “content layer” – an approach that must be reconsidered in the Digital Paradigm. For reasons which I shall develop, the word “data” can (and perhaps should) be substituted.
The properties of electronic and digital technologies and their product require a review of one’s approach to information. The nature of the print and paper-based medium as a means of recording and storing information and that of its digital equivalent are radically different. Apart from occasional incidents of forgery, with paper-based documents what you saw was what you got. There was no underlying information embedded or hidden in the document, as there is with metadata in the digital environment. The issue of the integrity of the information contained on the static medium was reasonably clear.
Electronic data is quite different from its pre-digital counterpart. Some of those differences may be helpful: electronic information may be easily copied and searched. But it must be remembered that electronic documents also pose some challenges.
Electronic data is dynamic and volatile. It is often difficult to ensure it has been captured and retained in such a way as to ensure its integrity. Unintentional modifications may be made simply by opening and reading data. Although the information that appears on the screen may not have been altered, some of the vital metadata which traces the history of the file (and which can often be incredibly helpful in determining its provenance, the chronology of events and when parties knew what they knew) may have been changed.
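The point about metadata changing through mere access can be seen in a minimal sketch (Python, using a hypothetical file name; whether the access time is actually updated depends on the operating system and how the storage volume is mounted, for example "noatime" or "relatime" options):

```python
import os
import time

# Hypothetical file name used purely for illustration.
path = "cctv_footage_notes.docx"

before = os.stat(path)
print("access time before:", time.ctime(before.st_atime))

# Merely open and read the file; nothing is edited or saved.
with open(path, "rb") as f:
    f.read()

after = os.stat(path)
print("access time after: ", time.ctime(after.st_atime))
print("metadata changed:  ", after.st_atime != before.st_atime)
```

Nothing visible on the screen has been altered, yet on many systems the file’s recorded history is no longer what it was.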
To understand the difficulty that the electronic paradigm poses for our conception of data it is necessary to consider the technological implications of storing information in the digital space. It is factually and paradigmatically far removed from information recorded on a medium such as paper.
If we consider data as information written upon a piece of paper, it is quite easy for a reader to obtain access to that information long after it was created. The only things necessary are good eyesight and an understanding of the language in which the document is written. It is information in that it is comprehensible and the content informs. Electronic data in and of itself does not do that. It is incoherent and incomprehensible, scattered across the sectors of the medium on which it is contained. In that state it is not information, in that it does not inform.
Data in electronic format is dependent upon hardware and software. The data contained upon a medium such as a hard drive requires an interpreter to render it into human-readable format. The interpreter is a combination of hardware and software. Unlike with a paper document, the reader cannot create or manipulate electronic data into readable form without the proper hardware in the form of computers.[1]
There is a danger in thinking of electronic data as an object ‘somewhere there’ on a computer in the same way as a hard copy book is in a library. Because of the way in which electronic storage media are constructed it is almost impossible for a complete file of electronic information to be stored in consecutive sectors of a medium. An electronic file is better understood as a process by which otherwise unintelligible pieces of data distributed over a storage medium are assembled, processed and rendered legible for a human user. In this respect the “information” or “file” as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.[2]
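The Court’s Word document example can be illustrated concretely. The following sketch (Python, with a hypothetical .docx file name) shows that what sits on the medium is simply a sequence of bytes, and that anything resembling readable text emerges only once software that understands the format mediates those bytes:

```python
import zipfile

# Hypothetical file name used purely for illustration.
path = "tindall_footage_notes.docx"

# Read the raw bytes as they sit on the storage medium.
with open(path, "rb") as f:
    raw = f.read(8)
print("raw bytes on disk:", raw.hex())  # e.g. "504b0304..." - not readable prose

# The same bytes, mediated by software that understands the .docx format
# (a .docx is in fact a ZIP container holding XML).
with zipfile.ZipFile(path) as container:
    xml = container.read("word/document.xml")
print("interpreted content begins:", xml[:80])
```

The first print is not “information” in the sense that it informs; only the second, mediated, rendering begins to approach that.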
Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional evidence that includes, but is not limited to, information such as metadata and computer logs that may be relevant to any given file or document in electronic format.
This involvement of technology and machinery makes electronic information paradigmatically different from traditional information where the message and the medium are one. It is this mediation of a set of technologies that enables data in electronic format – at its simplest, positive and negative electromagnetic impulses recorded upon a medium – to be rendered into human readable form. This gives rise to other differentiation issues such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing program or internet browser used.
The necessity for this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smart phones and tablets. These devices are necessary to “decode” information and allow for its comprehension and communication.
Thus, the subtext to the description of the electronically stored footage, which seems to suggest a coherence of data similar to that contained on a strip of film, cannot be sustained. The “electronically stored footage” is meaningless as data without a form of technological mediation to assemble and present the data in coherent form. The Court made reference to the problem of trying to draw an analogy between computer data and non-digital information or data and referred to the example of the Word document. This is an example of the nature of “information as process” that I have described above. Nevertheless there is an inference of coherence of information in a computer file that is not present in the electronic medium. References to a “sequence of bytes” are probably correct once the assembly of data prior to presentation on a screen has taken place, but the reality is that throughout the process of information display on a screen there is constant interactivity between the disk or medium interpreter, the code of the word processing program and the interpreter that is necessary to display the image on the screen.
In the final analysis there are two approaches to the issue of whether or not digital data is property for the purposes of theft. The first is the orthodox legal position taken by the Court of Appeal. The second is the technological reality of data in the digital space. Even although the new definition of property extends to intangibles such as electricity, it cannot apply to data in the digital space because of the incoherence of the data. Even although a file may be copied from one medium to another, it remains in an incoherent state. Even although it may be inextricably associated with a medium of some sort or another, it maintains that incoherent state until it is subjected to the mediation of hardware and software that I have described above. The Court of Appeal’s “information” based approach becomes even sharper when one substitutes the word “data” for “information”. Although there is a distinction between the medium and the data, the data requires a storage medium of some sort. And it is this medium that is capable of being stolen.
Although Marshall McLuhan intended an entirely different interpretation of the phrase, ‘the medium is the message,’[3] it is a truth of information in digital format.
[1] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ in Stephen Mason (gen ed) Electronic Evidence (3rd edn, LexisNexis Butterworths, London 2012) 2.05.
[2] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ 2.06.
[3] Marshall McLuhan, Understanding Media: The Extensions of Man (MIT Press, Cambridge, MA, 1994) ch 1.
The various theories on internet regulation can be placed within a taxonomy structure. In the centre is the Internet itself. On one side are the formal theories based on traditional “real world” governance models. These are grounded in traditional concepts of law and territorial authority. Some of these models could well become part of an “uber-model” described as the “polycentric model” – a theory designed to address specific issues in cyberspace. Towards the middle are less formal but nevertheless structured models. Largely technical or “code-based” in nature, they exercise a form of control over the operation of the Internet.
On the other side are informal theories that emphasise non-traditional or radical models. These models tend to be technically based, private and global in character.
Internet Governance Models
What I would like to do is briefly outline aspects of each of the models. This will be a very “once over lightly” approach and further detail may be found in Chapter 3 of my text internet.law.nz. This piece also contains some new material on Internet Governance together with some reflections on how traditional sovereign/territorial governance models just won’t work within the context of the Digital Paradigm and the communications medium that is the Internet.
The Formal Theories
The Digital Realists
The “Digital Realist” school has been made famous by Judge Easterbrook’s comment that “there [is] no more a law of cyberspace than there [is] a ‘Law of the Horse.’” Easterbrook summed the theory up in this way:
“When asked to talk about “Property in Cyberspace,” my immediate reaction was, “Isn’t this just the law of the horse?” I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile. And if I did know something about computer networks, all I could do in discussing “Property in Cyberspace” would be to isolate the subject from the rest of the law of intellectual property, making the assessment weaker.
This leads directly to my principal conclusion: Develop a sound law of intellectual property, then apply it to computer networks.”
Easterbrook’s comment is a succinct summary of the general position of the digital realism school: that the internet presents no serious difficulties, so the “rule of law” can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a “cyber-specific” code of law.
Another advocate for the digital realist position is Jack Goldsmith. In “Against Cyberanarchy” he argues strongly against those whom he calls “regulation sceptics”, who suggest that the state cannot regulate cyberspace transactions. He challenges their opinions and conclusions, arguing that regulation of cyberspace is feasible and legitimate from the perspective of jurisdiction and choice of law — in other words, he argues from a traditionalist, conflict of laws standpoint. Goldsmith and other digital realists recognise that new technologies will lead to changes in government regulation, but they believe that such regulation will take place within the context of traditional governmental activity.
Goldsmith draws no distinction between actions in the “real” world and actions in “cyberspace” — they both have territorial consequences. If internet users in one jurisdiction upload pornography, facilitate gambling, or take part in other activities that are illegal in another jurisdiction and have effects there then, Goldsmith argues, “The territorial effects rationale for regulating these harms is the same as the rationale for regulating similar harms in the non-internet cases”. The medium that transmitted the harmful effect, he concludes, is irrelevant.
The digital realist school is the most formal of all approaches because it argues that governance of the internet can be satisfactorily achieved by the application of existing “real space” governance structures, principally the law, to cyberspace. This model emphasises the role of law as a key governance device. Additional emphasis is placed on law being national rather than international in scope and deriving from public (legislation, regulation and so on) rather than private (contract, tort and so on) sources. Digital realist theorists admit that the internet will bring change to the law but argue that before the law is cast aside as a governance model it should be given a chance to respond to these changes. They argue that few can predict how legal governance might proceed. Given the law’s long history as society’s foremost governance model and the cost of developing new governance structures, a cautious, formal “wait and see” attitude is championed by digital realists.
The Transnational Model – Governance by International Law
The transnational school, although clearly still a formal governance system, demonstrates a perceptible shift away from the pure formality of digital realism. The two key proponents of the school, Burk and Perritt, suggest that governance of the internet can be best achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. They argue that international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively applied to the global entity of the internet.
The transnationalists suggest that the operation of the internet is likely to promote international legal harmonisation for two reasons.
First, the impact of regulatory arbitrage and the increased importance of the internet for business, especially the intellectual property industry, will lead to a transfer of sovereignty from individual states to international and supranational organisations. These organisations will be charged with ensuring broad harmonisation of information technology law regimes to protect the interests of developed states, lower trans-border costs to reflect the global internet environment, increase opportunities for transnational enforcement and resist the threat of regulatory arbitrage and pirate regimes in less developed states.
Secondly, the internet will help to promote international legal harmonisation through greater availability of legal knowledge and expertise to legal personnel around the world.
The transnational school represents a shift towards a less formal model than digital realism because it is a move away from national to international sources of authority. However, it still clearly belongs to the formalised end of the governance taxonomy on three grounds:
1. its reliance on law as its principal governance methodology;
2. the continuing public rather than private character of the authority on which governance rests; and
3. the fact that although governance is by international law, in the final analysis, this amounts to delegated authority from national sovereign states.
National and UN Initiatives – Governance by Governments
This discussion will be a little lengthier because there is some history that serves to illustrate how governments may approach Internet governance.
In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these calls followed the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and text messages – have brought about the concept of a ‘digital democracy’ in parts of North Africa affected by the uprisings. Others have claimed that in order to understand the role of social media during the Arab uprisings one must take into account the context of high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.
The G8 Meeting in Deauville May 2011
In May 2011, at the G8 meeting in France, President Sarkozy issued a provocative call for stronger Internet regulation. M. Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution. This revolution did not have a flag and M. Sarkozy acknowledged that the Internet belonged to everyone, citing the “Arab Spring” as a positive example. However, he warned the executives of Google, Facebook, Amazon and eBay who were present: “The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”
M. Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs. He did not, however, go as far as M. Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.
However, the Deauville Communiqué did not go as far as M. Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, the security of networks and a crackdown on trafficking in children for their sexual exploitation. It did not advocate state control of the Internet, but it did stake out a role for governments. The communiqué stated:
“We discussed new issues such as the Internet which are essential to our societies, economies and growth. For citizens, the Internet is a unique information and education tool, and thus helps to promote freedom, democracy and human rights. The Internet facilitates new forms of business and promotes efficiency, competitiveness, and economic growth. Governments, the private sector, users, and other stakeholders all have a role to play in creating an environment in which the Internet can flourish in a balanced manner. In Deauville in 2011, for the first time at Leaders’ level, we agreed, in the presence of some leaders of the Internet economy, on a number of key principles, including freedom, respect for privacy and intellectual property, multi-stakeholder governance, cyber-security, and protection from crime, that underpin a strong and flourishing Internet. The “e-G8” event held in Paris on 24 and 25 May was a useful contribution to these debates….
The Internet and its future development, fostered by private sector initiatives and investments, require a favourable, transparent, stable and predictable environment, based on the framework and principles referred to above. In this respect, action from all governments is needed through national policies, but also through the promotion of international cooperation……
As we support the multi-stakeholder model of Internet governance, we call upon all stakeholders to contribute to enhanced cooperation within and between all international fora dealing with the governance of the Internet. In this regard, flexibility and transparency have to be maintained in order to adapt to the fast pace of technological and business developments and uses. Governments have a key role to play in this model.
We welcome the meeting of the e-G8 Forum which took place in Paris on 24 and 25 May, on the eve of our Summit and reaffirm our commitment to the kinds of multi-stakeholder efforts that have been essential to the evolution of the Internet economy to date. The innovative format of the e-G8 Forum allowed participation of a number of stakeholders of the Internet in a discussion on fundamental goals and issues for citizens, business, and governments. Its free and fruitful debate is a contribution for all relevant fora on current and future challenges.
We look forward to the forthcoming opportunities to strengthen international cooperation in all these areas, including the Internet Governance Forum scheduled next September in Nairobi and other relevant UN events, the OECD High Level Meeting on “The Internet Economy: Generating Innovation and Growth” scheduled next June in Paris, the London International Cyber Conference scheduled next November, and the Avignon Conference on Copyright scheduled next November, as positive steps in taking this important issue forward.”
The ITU Meeting in Dubai December 2012
The meeting of the International Telecommunication Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol, one of the important technologies that made the Internet possible, sounded a warning when he said:
“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.
This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union – the ITU – a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai. ….
Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, last September, the Shanghai Cooperation Organization – which counts China, Russia, Tajikistan, and Uzbekistan among its members – submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”
As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations – a multilateral treaty last revised in 1988 – in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.”
The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was originally founded in 1865 and in the past has been concerned with technical communications issues such as the standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to ICT. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.
The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs) that had been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012 the Final Acts of the Conference were published. The controversy arose from a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document prepared by a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was ultimately withdrawn, the governance model it contained defined the Internet as an:
“international conglomeration of interconnected telecommunication networks,” and that “Internet governance shall be effected through the development and application by governments,” with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance.”
This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the internet.” However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.
Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate or in some way govern and control the Internet. Although at first glance this may seem to be directed at the content layer, and to amount to a rather superficial attempt to embark upon the censorship of content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet or embark upon an incremental approach to regulation have resulted in expressions of concern from another Internet pioneer, Sir Tim Berners-Lee, who, in addition to claiming that governments are suppressing online freedom, has issued a call for a Digital Magna Carta.
Clearly the efforts described indicate that some form of national government or collective government form of Internet Governance is on the agenda. Already the United Nations has become involved in the development of Internet Governance policy with the establishment of the Internet Governance Forum.
The Internet Governance Forum
The Internet Governance Forum describes itself as bringing
“people together from various stakeholder groups as equals, in discussions on public policy issues relating to the Internet. While there is no negotiated outcome, the IGF informs and inspires those with policy-making power in both the public and private sectors. At their annual meeting delegates discuss, exchange information and share good practices with each other. The IGF facilitates a common understanding of how to maximize Internet opportunities and address risks and challenges that arise.
The IGF is also a space that gives developing countries the same opportunity as wealthier nations to engage in the debate on Internet governance and to facilitate their participation in existing institutions and arrangements. Ultimately, the involvement of all stakeholders, from developed as well as developing countries, is necessary for the future development of the Internet.”
The Internet Governance Forum is an open forum which has no members. It was established by the World Summit on the Information Society in 2006. Since then, it has become the leading global multi-stakeholder forum on public policy issues related to Internet governance.
Its UN mandate gives it convening power and the authority to serve as a neutral space for all actors on an equal footing. As a space for dialogue it can identify issues to be addressed by the international community and shape decisions that will be taken in other forums. The IGF can thereby be useful in shaping the international agenda and in preparing the ground for negotiations and decision-making in other institutions. The IGF has no power of redistribution, and yet it has the power of recognition – the power to identify key issues.
A small Secretariat was set up in Geneva to support the IGF, and the UN Secretary-General appointed a group of advisers, representing all stakeholder groups, to assist him in convening the IGF. The United Nations General Assembly agreed in December 2010 to extend the IGF’s mandate for another five years. The IGF is financed through voluntary contributions.
Zittrain describes the IGF as “diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where “stakeholders” gather to express their views about Internet governance, which is now more fashionably known as “the creation of multi-stakeholder regimes.”
Less Formal Yet Structured
The Engineering and Technical Standards Community
The internet governance models under discussion have in common the involvement of law or legal structures in some shape or form or, in the case of the cyber anarchists, an absence thereof.
Essentially internet governance falls within two major strands:
1. The narrow strand involving the regulation of technical infrastructure and what makes the internet work.
2. The broad strand dealing with the regulation of content, transactions and communication systems that use the internet.
The narrow strand regulation of internet architecture recognises that the operation of the internet and the superintendence of that operation involves governance structures that lack the institutionalisation that lies behind governance by law.
Although the development of the internet had its origin with the United States Government, there has been little if any direct government involvement in or oversight of that development. The Defense Advanced Research Projects Agency (DARPA) was a funding agency providing money for development. It was not a governing agency nor was it a regulator. Other agencies such as the Federal Networking Council and the National Science Foundation are not regulators; they are organisations that allow user agencies to communicate with one another. Although the United States Department of Commerce became involved with the internet once its potential commercial implications became clear, it too has maintained very much a hands-off approach, and its involvement has primarily been with ICANN, with whom the Department has maintained a steady stream of Memoranda of Understanding over the years.
Technical control and superintendence of the internet rests with the network engineers and computer scientists who work out problems and provide solutions for its operation. There is no organisational charter. The structures within which decisions are made are informal, involving a network of interrelated organisations with names which at least give the appearance of legitimacy and authority. These organisations include the Internet Society (ISOC), an independent international non-profit organisation founded in 1992 to provide leadership in internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. The Internet Engineering Taskforce (IETF) is a separate legal entity which has as its mission to make the internet work better by producing high quality, relevant technical documents that influence the way people design, use and manage the internet.
The Internet Architecture Board (IAB) is an advisory body to ISOC and also a committee of the IETF, which has an oversight role. Also housed within ISOC is the IETF Administrative Support Activity (IASA), which is responsible for the fiscal and administrative support of the IETF standards process. The IASA has a committee, the IETF Administrative Oversight Committee (IAOC), which carries out the responsibilities of the IASA in supporting the Internet Engineering Steering Group (IESG) working groups, the Internet Architecture Board (IAB), the Internet Research Taskforce (IRTF) and the Internet Research Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has the day-to-day operational responsibility of providing fiscal and administrative support through other activities, contractors and volunteers.
The central hub of these various organisations is the IETF. This organisation has no coercive power, but is responsible for establishing internet standards, some of which, such as TCP/IP, are core standards and are non-optional. The compulsory nature of these standards does not come from any regulatory power, but from the critical mass of network externalities involving internet users. Standards become economically mandatory and there is an overall acceptance of IETF standards which maintain the core functionality of the internet.
A characteristic of the IETF, and indeed of all the technical organisations involved in internet functionality, is the open process that theoretically allows any person to participate. The other characteristic of internet network organisations is the rough consensus by which decisions are made. Proposals are circulated in the form of a Request for Comments to members of the internet, engineering and scientific communities, and from this collaborative and consensus-based approach a new standard is agreed.
Given that the operation of the internet involves a technical process and the maintenance of the technical process depends on the activities of scientific and engineering specialists, it is fair to conclude that a considerable amount of responsibility rests with the organisations who set and maintain standards. Many of these organisations have developed a considerable power structure without any formal governmental or regulatory oversight – an issue that may well need to be addressed. Another issue is whether these organisations have a legitimate basis to do what they are doing with such an essential infrastructure as the internet. The objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Their ability to work outside government bureaucracy enables greater efficiency.
However, the internet’s continued operation depends on a number of interrelated organisations which, while operating in an open and transparent manner in a technical collaborative consensus-based model, have little understanding of the public interest ramifications of their decisions. This aspect of internet governance is often overlooked. The technical operation and maintenance of the internet is superintended by organisations that have little or no interactivity with any of the formalised power structures that underlie the various “governance by law” models of internet governance. The “technical model” of internet governance is an anomaly arising not necessarily from the technology, but from its operation.
ICANN
Of those involved in the technical sphere of Internet governance, ICANN is perhaps the best known. Its governance of the “root” or addressing systems makes it a vital player in the Internet governance taxonomy and for that reason requires some detailed consideration.
ICANN is the Internet Corporation for Assigned Names and Numbers. This organisation was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a “chokehold” over the internet because it is the only aspect of the entire decentralised, global system of the internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities ICANN is able to choose who is able to access cyberspace and what they will see when they are there. ICANN’s control effectively amounts, in the words of David Post, to “network life or death”. Further, if ICANN chooses to impose conditions on access to the internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there.
The obvious implication for governance theorists is that the ICANN model is not a theory but a practical reality. ICANN is the first indigenous cyberspace governance institution to wield substantive power and demonstrate a real capacity for effective enforcement. Ironically, while other internet governance models have demonstrated a sense of purpose but an acute lack of power, ICANN has suffered from excess power and an acute lack of purpose. ICANN arrived at its present position almost, but not quite, by default and has been struggling to find a meaningful raison d’être ever since. In addition, it is pulled by opposing forces, all anxious to ensure their vision of the new frontier prevails.
ICANN’s “democratic” model of governance has been attacked as unaccountable, anti-democratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN has been largely unresponsive to these criticisms and it has only been after concerted publicity campaigns by opponents that the board has publicly agreed to change aspects of the process.
As a governance model, a number of key points have emerged:
1. ICANN demonstrates the internet’s enormous capacity for marshalling global opposition to governance structures that are not favourable to the interests of the broader internet community.
2. Following on from point one, high profile, centralised institutions such as ICANN make extremely good targets for criticism.
3. Despite enormous power and support from similarly powerful backers, public opinion continues to prove a highly effective tool, at least in the short run, for stalling the development of unfavourable governance schemes.
4. ICANN reveals the growing involvement of commercial and governmental interests in the governance of the internet and their reluctance to be directly associated with direct governance attempts.
5. ICANN demonstrates an inability to project its influence beyond its core functions to matters of general policy or governance of the internet.
ICANN lies within the less formal area of the governance taxonomy in that, although it operates with a degree of autonomy, it retains a formal character. Its power is internationally based (and although still derived from the United States government, there is a desire by the US to “de-couple” its involvement with ICANN). It has greater private rather than public sources of authority, in that its power derives from relationships with registries, ISPs and internet users rather than sovereign states. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes.
The Polycentric Model of Internet Governance
The Polycentric Model embraces, for certain purposes, all of the preceding models. It does not envelop them, but rather employs them for specific governance purposes.
This theory has been developed by Professor Scott Shackelford. In his article “Toward Cyberpeace: Managing Cyberattacks Through Polycentric Governance” Shackelford locates Internet governance within the special context of cybersecurity and the maintenance of cyberpeace. He contends that the international community must come together to craft a common vision for cybersecurity while the situation remains malleable. Given the difficulties of accomplishing this in the near term, bottom-up governance and dynamic, multilevel regulation should be undertaken consistent with polycentric analysis.
While he sees a role for governments and commercial enterprises he proposes a mixed model. Neither governments nor the private sector should be put in exclusive control of managing cyberspace since this could sacrifice both liberty and innovation on the mantle of security, potentially leading to neither.
The basic notion of polycentric governance is that a group facing a collective action problem should be able to address it in whatever way they see fit, which could include using existing or crafting new governance structures; in other words, the governance regime should facilitate the problem-solving process.
The model demonstrates the benefits of self-organization, networking regulations at multiple levels, and the extent to which national and private control can co-exist with communal management. A polycentric approach recognizes that diverse organizations and governments working at multiple levels can create policies that increase levels of cooperation and compliance, enhancing flexibility across issues and adaptability over time.
Such an approach, a form of “bottom-up” governance, contrasts with what may be seen as an increasingly state-centric approach to Internet governance and cybersecurity which has become apparent in fora such as the G8 Conference in Deauville in 2011 and the ITU Conference in Dubai in 2012. The approach also recognises that cyberspace has its own qualities or affordances, among them its decentralised nature along with the continuing dynamic change flowing from permissionless innovation. To put it bluntly, it is difficult to foresee the effects of regulatory efforts, which are generally sluggish in development and enactment, with the result that by the time they are in place the particular matter which regulation tried to address has changed, so that the regulatory system is no longer relevant. Polycentric regulation provides a multi-faceted response to cybersecurity issues in keeping with the complexity of crises that might arise in cyberspace.
So how should the polycentric model work? First, allies should work together to develop a common code of cyber conduct that includes baseline norms, with negotiations continuing on a harmonized global legal framework. Second, governments and CNI operators should establish proactive, comprehensive cybersecurity policies that meet baseline standards and require hardware and software developers to promote resiliency in their products without going too far and risking balkanization. Third, the recommendations of technical organizations such as the IETF should be made binding and enforceable when taken up as industry best practices. Fourth, governments and NGOs should continue to participate in U.N. efforts to promote global cybersecurity, but also form more limited forums to enable faster progress on core issues of common interest. And fifth, training campaigns should be undertaken to share information and educate stakeholders at all levels about the nature and extent of the cyber threat.
Code is Law
Located centrally within the taxonomy and closely related to the Engineering and Technical Standards category of governance models is the “code is law” model, developed by Harvard professor Lawrence Lessig and, to a lesser extent, Joel Reidenberg. The school encompasses in many ways the future of the internet governance debate. The system demonstrates a balance of opposing formal and informal forces and represents a paradigm shift in the way internet governance is conceived, because the school largely ignores the formal dialectic around which the governance debate is centred and has instead developed a new concept of “governance and the internet”. While Lessig’s work has been favourably received even by his detractors, it is still too early to see if it is indeed a correct description of the future of internet governance, or merely a dead end. Certainly, it is one of the most discussed concepts of cyberspace jurisprudence.
Lessig asserts that human behaviour is regulated by four “modalities of constraint”: law, social norms, markets and architecture. Each of these modalities influences behaviour in different ways:
1. law operates via sanction;
2. markets operate via supply and demand and price;
3. social norms operate via human interaction; and
4. architecture operates via the environment.
Governance of behaviour can be achieved by any one or any combination of these four modalities. Law is unique among the modalities in that it can directly influence the others.
Lessig argues that in cyberspace, architecture is the dominant and most effective modality to regulate behaviour. The architecture of cyberspace is “code” — the hardware and software — that creates the environment of the internet. Code is written by code writers; therefore it is code writers, especially those from the dominant software and hardware houses such as Microsoft and AOL, who are best placed to govern the internet. In cyberspace, code is law in the imperative sense of the word. Code determines what users can and cannot do in cyberspace.
“Code is law” does not mean lack of regulation or governmental involvement, although any regulation must be carefully applied. Neil Weinstock Netanel argues that “contrary to the libertarian impulse of first generation cyberspace scholarship, preserving a foundation for individual liberty, both online and off, requires resolute, albeit carefully tailored, government intervention”. On this view, internet architecture and code effectively regulate individual activities and choices in the same way law does, and market actors need to use these regulatory technologies in order to gain a competitive advantage. It is therefore the role of government to set the limits on private control so as to facilitate this.
The crux of Lessig’s theory is that law can directly influence code. Governments can regulate code writers and ensure the development of certain forms of code. Effectively, law, and those who control it, can determine the nature of the cyberspace environment and thus, indirectly, what can be done there. This has already been done. Code is being used to rewrite Copyright Law. Technological Protection Measures (TPMs) allow content owners to regulate access to digital content and the uses to which a consumer may put it. Opportunities to exercise fair uses or permitted uses can be limited beyond normal user expectations and beyond what the law previously allowed for analogue content. The provision of content in digital format, the use of TPMs and the added support that legislation gives to protect TPMs effectively allow content owners to determine what limitations they will place upon users’ utilisation of their material. It is possible that the future of copyright lies not in legislation (as it has in the past) but in contract.
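The point that code, rather than law, sets the operative limits can be illustrated with a deliberately simplified sketch. The following Python fragment is purely hypothetical; the class and licence flags are invented for illustration and do not correspond to any real TPM scheme. It refuses to copy even a short excerpt, an act that might well be a permitted fair dealing or fair use, simply because the rights holder has not enabled copying.

```python
# Hypothetical illustration only: a toy "protected content" wrapper showing how
# code, not law, can decide what a user may do with a digital work.

from dataclasses import dataclass


@dataclass
class Licence:
    can_view: bool = True
    can_copy: bool = False   # set by the rights holder, not by copyright law
    can_print: bool = False


class ProtectedContent:
    def __init__(self, data: bytes, licence: Licence):
        self._data = data
        self._licence = licence

    def view(self) -> bytes:
        if not self._licence.can_view:
            raise PermissionError("viewing disabled by licence")
        return self._data

    def copy_excerpt(self, start: int, end: int) -> bytes:
        # Even a short excerpt that might qualify as fair dealing or fair use
        # is refused here: the architecture, not a court, determines the result.
        if not self._licence.can_copy:
            raise PermissionError("copying disabled by licence")
        return self._data[start:end]


if __name__ == "__main__":
    work = ProtectedContent(b"some digital content", Licence())
    print(work.view())           # permitted by the code
    try:
        work.copy_excerpt(0, 4)  # refused by the code, whatever the law allows
    except PermissionError as err:
        print(err)
```

Trivial as it is, the sketch captures the asymmetry Lessig identifies: the user’s options are fixed at the level of the software, and the only way for law to restore them is to regulate the code writer.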
Informal Models and Aspects of Digital Liberalism
Digital liberalism is not so much a model of internet governance as it is a school of theorists who approach the issue of governance from roughly the same point on the political compass: (neo)-liberalism. Of the models discussed, digital liberalism is the broadest. It encompasses a series of heterogeneous theories that range from the cyber-independence writings of John Perry Barlow at one extreme, to the more reasoned private legal ordering arguments of Froomkin, Post and Johnson at the other. The theorists are united by a common “hands off” approach to the internet and a tendency to respond to governance issues from a moral, rather than a political or legal perspective.
Regulatory Arbitrage – “Governance by whomever users wish to be governed by”
The regulatory arbitrage school represents a shift away from the formal schools, and towards digital liberalism. “Regulatory arbitrage” is a term coined by the school’s principal theorist, Michael Froomkin, to describe a situation in which internet users “migrate” to jurisdictions with regulatory regimes that give them the most favourable treatment. Users are able to engage in regulatory arbitrage by capitalising on the unique geographically neutral nature of the internet. For example, someone seeking pirated software might frequent websites geographically based in a jurisdiction that has a weak intellectual property regime. On the other side of the supply chain, the supplier of gambling services might, despite residing in the United States, deliberately host his or her website out of a jurisdiction that allows gambling and has no reciprocal enforcement arrangements with the United States.
Froomkin suggests that attempts to regulate the internet face immediate difficulties because of the very nature of the entity that is to be controlled. He draws upon the analogy of the mythological Hydra, but whereas the beast was a monster, the internet may be predominantly benign. Froomkin identifies the internet’s resistance to control as being caused by the following two technologies:
1. The internet is a packet-switching network. Because data is broken into packets that may travel by many different routes before being reassembled at their destination, it is difficult for anyone, including governments, to block or monitor information originating from large numbers of users.
2. Users have access to powerful, military-grade cryptography which, if used properly, can make messages unreadable to anyone but the intended recipient (a brief sketch of this idea follows this list).
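Froomkin does not tie the point to any particular tool, but the effect of strong public-key cryptography can be sketched in a few lines of Python. The example below assumes the PyNaCl library (a binding to the NaCl/libsodium primitives); the key names and message are invented. Only the holder of the recipient’s private key can recover the plaintext; an interceptor who sees the ciphertext in transit learns nothing useful.

```python
# Minimal sketch, assuming the PyNaCl library (pip install pynacl).
# Demonstrates that a message encrypted for a recipient's public key can be
# read only by the holder of the matching private key.

from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

# Each party generates a key pair; only public keys are ever shared.
sender = PrivateKey.generate()
recipient = PrivateKey.generate()
eavesdropper = PrivateKey.generate()

# The sender encrypts using their own private key and the recipient's public key.
ciphertext = Box(sender, recipient.public_key).encrypt(b"meet at the usual place")

# The intended recipient can decrypt...
plaintext = Box(recipient, sender.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at the usual place'

# ...but anyone else, even holding the ciphertext and both public keys, cannot.
try:
    Box(eavesdropper, sender.public_key).decrypt(ciphertext)
except CryptoError:
    print("interception yields nothing readable")
```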
As a result of the above, internet users have access to powerful tools which can be used to enable anonymous communication. This is unless, of course, their governments impose strict access controls, run extensive monitoring programmes, or persuade their citizens not to use these tools through liability rules or the criminal law.
Froomkin’s theory is principally informal in character. Private users, rather than public institutions, are responsible for choosing the governance regime to which they adhere. The mechanism that allows this choice is technical and works in opposition to legally based models. Finally, the model is effectively global, as users choose from a world of possibilities to decide which particular regime(s) to submit to, rather than being confined to a single national regime. While undeniably informal, the model stops short of full digital liberalism.
Unlike digital liberalists who advocate a separate internet jurisdiction encompassing a multitude of autonomous self-regulating regimes within that jurisdiction, Froomkin argues that the principal governance unit of the internet will remain the nation-state. He argues that users will be free to choose from the regimes of states rather than be bound to a single state, but does not yet advocate the electronic federalism model of digital liberalism.
Digital Libertarianism – Johnson and Post
Digital liberalism is the oldest of the internet governance models and represents the original response to the question: “How will the internet be governed?” Digital liberalism developed in the early 1990s as the internet began to show the first inklings of its future potential. The development of a Graphical User Interface together with web browsers such as Mosaic made the web accessible to the general public for the first time. Escalating global connectivity and a lack of understanding or reaction by world governments contributed to a sense of euphoria and digital freedom that was reflected in the development of digital liberalism.
In its early years digital liberalism evolved around the core belief that “the internet cannot be controlled” and that consequently “governance” was a dead issue. By the mid-1990s advances in technology and the first government attempts to control the internet saw this descriptive claim gradually give way to a competing normative claim that “the internet can be controlled but it should not be”. These claims are represented as the sub-schools of digital liberalism — cyberanarchism and digital libertarianism.
In “And How Shall the Net be Governed?” David Johnson and David Post posed the following questions:
Now that lots of people use (and plan to use) the internet, many — governments, businesses, techies, users and system operators (the “sysops” who control ID issuance and the servers that hold files) — are asking how we will be able to:
(1) establish and enforce baseline rules of conduct that facilitate reliable communications and trustworthy commerce; and
(2) define, punish and prevent wrongful actions that trash the electronic commons or impose harm on others.
In other words, how will cyberspace be governed, and by what right?
Post and Johnson point out that one of the advantages of the internet is its chaotic and ungoverned nature. As to the question of whether the net must be governed at all, they use the example of the three-Judge Federal Court in Philadelphia that “threw out the Communications Decency Act on First Amendment grounds [and] seemed thrilled by the ‘chaotic’ and seemingly ungovernable character of the net”. Post and Johnson argue that because of its decentralised architecture and lack of a centralised rule-making authority the net has been able to prosper. They assert that the freedom the internet allows and encourages has meant that sysops have been free to impose their own rules on users. However, the ability of the user to choose which sites to visit, and which to avoid, has meant the tyranny of system operators has been avoided and the adverse effect of any misconduct by individual users has been limited.
Johnson and Post propose the following four competing models for net governance:
1. Existing territorial sovereigns seek to extend their jurisdiction and amend their own laws as necessary to attempt to govern all actions on the net that have substantial impacts upon their own citizenry.
2. Sovereigns enter into multilateral international agreements to establish new and uniform rules specifically applicable to conduct on the net.
3. A new international organisation can attempt to establish new rules — a new means of enforcing those rules and of holding those who make the rules accountable to appropriate constituencies.
4. De facto rules may emerge as the result of the interaction of individual decisions by domain name and IP registries (dealing with conditions imposed on possession of an on-line address), by system operators (local rules to be applied, filters to be installed, who can sign on, with which other systems connection will occur) and users (which personal filters will be installed, which systems will be patronised and the like).
The first three models are centralised or semi-centralised systems and the fourth is essentially a self-regulatory and evolving system. In their analysis, Johnson and Post consider all four and conclude that territorial laws applicable to online activities where there is no relevant geographical determinant are unlikely to work, and international treaties to regulate, say, ecommerce are unlikely to be drawn up.
Johnson and Post proposed a variation of the third option — a new international organisation that is similar to a federalist system, termed “net federalism”.
In net federalism, individual network systems rather than territorial sovereignty are the units of governance. Johnson and Post observe that the law of the net has emerged, and can continue to emerge, from the voluntary adherence of large numbers of network administrators to basic rules of law (and dispute resolution systems to adjudicate the inevitable inter-network disputes), with individual users voting with their electronic feet to join the particular systems they find most congenial. Within this model multiple network confederations could emerge. Each may have individual “constitutional” principles — some permitting and some prohibiting, say, anonymous communications, others imposing strict rules regarding redistribution of information and still others allowing freer movement — enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.
Digital liberalism is clearly an informal governance model and for this reason has its attractions for those who enjoyed the free-wheeling approach to the internet in the early 1990s. It advocates almost pure private governance, with public institutions playing a role only in so much as they validate the existence and independence of cyber-based governance processes and institutions. Governance is principally to be achieved by technical solutions rather than legal process and occurs at a global rather than national level. Digital liberalism is very much the antithesis of the digital realist school and has been one of the two driving forces that have characterised the internet governance debate in the last decade.
Cyberanarchism – John Perry Barlow
In 1990, the FBI were involved in a number of actions against a perceived “computer security threat” posed by a Texas role-playing game developer named Steve Jackson. Following this, John Perry Barlow and Mitch Kapor formed the Electronic Frontier Foundation. Its mission statement says that it was “established to help civilize the electronic frontier; to make it truly useful and beneficial not just to a technical elite, but to everyone; and to do this in a way which is in keeping with our society’s highest traditions of the free and open flow of information and communication”.
One of Barlow’s significant contributions to thinking on internet regulation was the article “A Declaration of the Independence of Cyberspace” which, although idealistic in expression and content, eloquently expresses a point of view held by many regarding efforts to regulate cyberspace. The declaration followed the passage of the Communications Decency Act. In “The Economy of Ideas: Selling Wine without Bottles on the Global Net”, Barlow challenges assumptions about intellectual property in the digital online environment. He suggests that the nature of the internet environment means that different legal norms must apply. While the theory has its attractions, especially for the young and the idealistic, the fact of the matter is that “virtual” actions are grounded in the real world, are capable of being subject to regulation and, subject to jurisdiction, are capable of being subject to sanction. Indeed, we only need to look at the Digital Millennium Copyright Act (US) and the Digital Agenda Act 2000 (Australia) to gain a glimpse of how, when confronted with reality, Barlow’s theory dissolves.
Regulatory Assumptions
In understanding how regulators approach the control of internet content, one must first understand some of the assumptions that appear to underlie any system of data network regulation.
First and foremost, sovereign states have the right to regulate activity that takes place within their own borders. This right to regulate is moderated by certain international obligations. Of course there are difficulties in identifying the exact location of some online actions, but the internet only functions at the direction of the persons who use it. These people live, work, and use the internet while physically located within the territory of a sovereign state, and so it is unquestionable that states have the authority to regulate their activities.
A second assumption is that a data network infrastructure is critical to the continued development of national economies. Data networks are a regular business tool like the telephone. The key to the success of data networking infrastructure is its speed, widespread availability, and low cost. If this last point is in doubt, one need only consider that the basic technology of data networking has existed for more than 20 years. The current popularity of data networking, and of the internet generally, can be explained primarily by the radical lowering of costs related to the use of such technology. A slow or expensive internet is no internet at all.
The third assumption is that international trade requires some form of international communication. As more communication takes place in the context of data networking, then continued success in international trade will require sufficient international data network connections.
The fourth assumption is that there is a global market for information. While it is still possible to internalise the entire process of information gathering and synthesis within a single country, this is an extremely costly process. If such expensive systems are the only source of information available, domestic businesses will be placed at a competitive disadvantage in the global marketplace.
The final assumption is that unpredictability in the application of the law or in the manner in which governments choose to enforce the law will discourage both domestic and international business activity. In fashioning regulations for the internet, it is important that the regulations are made clear and that enforcement policies are communicated in advance so that persons have adequate time to react to changes in the law.
Concluding Thoughts
Governance and the Properties of the Digital Paradigm
Regulating or governing cyberspace faces challenges that lie within the properties or affordances of the Digital Paradigm. To begin with, territorial sovereignty concepts, which have been the basis for most regulatory or governance activity, rely on physical and defined geographical realities. By its nature, a communications system like the Internet challenges that model. Although the Digital Realists assert that effectively nothing has changed, and that is true to a limited extent, the governance functions that can be exercised are only applicable to that part of cyberspace that sits within a particular geographical space. Because the Internet is a distributed system it is impossible for any one sovereign state to impose its will upon the entire network. It is for this reason that some nations are setting up their own networks, independent of the Internet. Although the perception is that the Internet is controlled by the US, the reality is that with nationally based “splinternets” sovereigns have greater ability to assert control over the network, both in terms of the content layer and the various technical layers beneath that make up the medium. The distributed network presents the first challenge to national or territorially based regulatory models.
Of course aspects of sovereign power may be ceded by treaty or by membership of international bodies such as the United Nations. But does, say, the UN have the capacity to impose a worldwide governance system over the Internet? True, it created the IGF, but that organisation has no binding power and is essentially a multi-stakeholder policy think tank. Any attempt at a global governance model requires international consensus and, as the ITU meeting in Dubai in December 2012 demonstrated, that is not forthcoming at present.
Two other affordances of the Digital Paradigm challenge the establishment of traditional regulatory or governance systems. Those affordances are continuing disruptive change and permissionless innovation. The very nature of the legislative process is measured. Often it involves cobbling together a consensus. All of this takes time, and by the time there is a crystallised proposition the mischief that the regulation is trying to address either no longer exists or has changed or taken another form. The now limited usefulness (and therefore effectiveness) of the provisions of ss 122A – P of the New Zealand Copyright Act 1994 demonstrates this proposition. Furthermore, the nature of the legislative process, involving reference to Select Committees and the prioritisation of other legislation within the time available in a Parliamentary session, means that a “swift response” to a problem is very rarely possible.
Permissionless innovation adds to the problem because, as long as it continues, and there is no sign that the inventiveness of the human mind is likely to slow down, developers and software writers will continue to change the digital landscape. The target of a regulatory system may therefore be continually moving, and certainty of law, a necessity in any society that operates under the Rule of Law, may be compromised. Again, the file sharing provisions of the New Zealand Copyright Act provide an example. The definition of file sharing is restricted to a limited number of software applications – most obviously BitTorrent. Workarounds such as virtual private networks and magnet links, along with anonymisation proxies, fall outside the definition. In addition, the definition addresses sharing and does not capture a person who downloads but does not share by uploading infringing content.
Associated with disruptive change and permissionless innovation are some other challenges to traditional governance thinking. Participation and interactivity, along with exponential dissemination, emphasise the essentially bottom-up participatory nature of the Internet ecosystem. Indeed this is reflected in the quality of permissionless innovation, where any coder may launch an app without any regulatory sign-off. The Internet is perhaps the greatest manifestation of democracy that there has been. It is the Agora of Athens on a global scale, a cacophony of comment, much of it trivial, but the fact is that everyone has the opportunity to speak and potentially to be heard. Spiro Agnew’s “silent majority” need be silent no longer. The events of the Arab Spring showed the way in which the Internet can be used in the face of oppressive regimes in motivating populaces. It seems unlikely that an “undemocratic” regulatory regime could be put in place absent the “consent of the governed”; despite the usual level of apathy in political matters, netizens, given the participatory nature of the medium, would be unlikely to tolerate such interference.
Perhaps the answer to the issue of Internet Governance is already apparent – a combination of Lessig’s Code is Law and the technical standards organisations that actually make the Internet work, such as ISOC, the IETF and ICANN. Much criticism has been levelled at ICANN’s lack of accountability, but in many respects similar issues arise with the IETF and IAB, dominated as they are by groups of engineers. But in the final analysis, perhaps this is the governance model that is the most suitable. The objective of engineers is to make systems work at the most efficient level. Surely this is the sole objective of any regulatory regime. Furthermore, governance by technicians, if it can be called that, contains safeguards against political, national or regional capture. By all means, local governments may regulate content. But that is not the primary objective of Internet governance. Internet governance addresses the way in which the network operates. And surely that is an engineering issue rather than a political one.
The Last Word
Perhaps the last word on the general topic of internet regulation should be left to Tsutomu Shimomura, a computational physicist and computer security expert who was responsible for tracking down the hacker Kevin Mitnick, an episode he recounted in the excellent book Takedown:
The network of computers known as the internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. That community was based on a shared sense of trust. Today, the electronic walls going up everywhere on the Net are the clearest proof of the loss of that trust and community. It’s a great loss for all of us.