Lessons Unlearned

The Christchurch Call was a meeting co-hosted by New Zealand’s Prime Minister, Jacinda Ardern, and French President Emmanuel Macron, held in Paris on 15 May 2019. It is a global call which aims to “bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism.”[1] It is intended to be an ongoing process.

This piece was written at the end of last year and for one reason or another – and primarily the Covid-19 crisis – has languished. I post it now as the first anniversary of the Call approaches. The overall context is that of Internet Regulation – content or technology – and the difficulties that presents.

Introduction

The Christchurch Call is not the first attempt to regulate or control Internet based content. It will not be the last. And, despite its aim to reduce or eliminate the use of social media to organize and promote terrorism and violent extremism, it carries within it the seeds of its own downfall. The reason is, like so many efforts before it, the target of the Christchurch Call is content rather than technology.

Calls to regulate content and access to it have been around since the Internet went public.

The Christchurch Call is eerily familiar, not because of what motivated and inspired it, but because it represents an effort by Governments and States to address perceived problems posed by Internet based content.

In 2011 a similar effort was led by then French President Nicolas Sarkozy at the economic summit at Deauville – is it a coincidence that once again the French are leaders in the present initiative? So what was the Deauville initiative all about?

Deauville May 2011

The Background

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011, which became known as the “Arab Spring”, seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness.

There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some commentators contend that digital technologies and other forms of communication — videos, cellular phones, blogs, photos and SMS messages — brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that the role of social media during the Arab Spring must be understood in the context of high rates of unemployment and corrupt political regimes, which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

Sarkozy’s Initiative

In May 2011 at the first e-G8 Forum, before the G8 summit in France, President Nicolas Sarkozy issued a provocative call for stronger Internet regulation. Mr Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution.

This revolution did not have a flag and Mr Sarkozy acknowledged that the Internet belonged to everyone, citing the Arab Spring as a positive example. However, he warned executives of Google, Facebook, Amazon and eBay who were present:

“The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

Mr Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures who were suspected of having had extramarital affairs, but he did not go as far as Mr Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.

However, the Deauville Communique did not extend as far as Mr Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, security of networks, and a crackdown on trafficking in children for sexual exploitation; however it did not advocate state control of the Internet but staked out a role for governments.

Deauville was not an end to the matter. The appetite for Internet regulation by domestic governments had just been whetted. This was demonstrated by the events at the ITU meeting in Dubai in 2012.

The ITU meeting in Dubai December 2012

The meeting of the International Telecommunications Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol which was one of the important technologies that made the Internet possible, sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union — the ITU — a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai….”

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries saw an opportunity to create regulatory authority over the Internet. In June 2012, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, in September 2012, the Shanghai Cooperation Organization — which counts China, Russia, Tajikistan, and Uzbekistan among its members — submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there was a strong possibility that the ITU would significantly amend the International Telecommunication Regulations — a multilateral treaty last revised in 1988 — in a way that authorized increased ITU and member state control over the Internet. These proposals, if they had been implemented, would have changed the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.

What is the ITU?

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was originally founded in 1865 and in the past has been concerned with technical communications issues such as standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources and the fostering of sustainable, affordable access to information and communication technology. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs), last revised in 1988, were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012, the Final Acts of the Conference were published. The controversy arose from a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was withdrawn, its governance model had defined the Internet as an “international conglomeration of interconnected telecommunication networks”, provided that “Internet governance shall be effected through the development and application by governments”, and gave member states “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance”.

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the Internet.”

However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

From the Conference three different visions of political power vis-à-vis the Internet became clear. Cyber sovereignty states such as Russia, China and Saudi Arabia advocated that the mandate of the ITU be extended to include Internet governance issues. The United States and allied, predominantly Western, states were of the view that the current multi-stakeholder processes should remain in place. States such as Brazil, South Africa and Egypt rejected the concept of Internet censorship and closed networks but expressed concern at what appeared to be United States dominance of aspects of Internet management.

In 2014 at the NETmundial Conference the multi-stakeholder model was endorsed, recognising that the Internet was a global resource and should be managed in the public interest.

The Impact of International Internet Governance

Issues surrounding Internet Governance are important in this discussion because issues of Internet control will directly impact upon content delivery and will thus have an impact upon freedom of expression in its widest sense. 

Rules surrounding global media governance do not exist. The current model, based on localised rule systems, and the lack of harmonisation arise from differing cultural and social perceptions as to media content. Although Internet-based technologies provide the means for a level of technical regulation, such as code itself, digital rights management and Internet filtering, the larger issue of control of the distribution system poses an entirely novel set of issues that have not been encountered by traditional localised print and broadcast systems.

The Internet separates the medium from the message and issues of Internet governance will have a significant impact upon the means and scope of content delivery. From the perspective of media freedom and freedom of expression, Internet governance is a matter that will require close attention. As matters stand at the moment the issue of who rules the channels of communication is a work in progress.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate, or in some way govern and control, the Internet. Although at first glance this may seem to be directed at the content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but also the various transmission and protocol layers of the Internet and possibly even the backbone itself. The Christchurch Call is merely a continuation of that desire by governments to regulate and control the Internet.

Resisting Regulation

The early history of the commercial Internet reveals a calculated effort to ensure that the new technology was not the subject of regulation. The Progress and Freedom Foundation, established in 1993, had an objective of ensuring that, unlike radio or television, the new medium would lie beyond the realm of government regulation. At a meeting in 1994, attended by futurists Alvin Toffler and Esther Dyson along with George Keyworth, President Reagan’s former science adviser, a Magna Carta for the Knowledge Age contended that although the industrial age may have required some form of regulation, the knowledge age did not. If there was to be an industrial policy for the knowledge age, it should focus on removing barriers to competition and massively deregulating the telecommunications and computing industries.

On 8 February 1996 the objectives of the Progress and Freedom Foundation became a reality when President Clinton signed the Telecommunications Act. This legislation effectively deregulated the entire communications industry, allowed for the subsequent consolidation of media companies and prohibited regulation of the Internet. On the same day, as a statement of disapproval that the US government would regulate even by deregulating, John Perry Barlow released his Declaration of Independence of Cyberspace from the World Economic Forum in Davos, Switzerland.

Small wonder that the United States of America resists attempts at Internet regulation. But the problem is more significant than the will or lack of will to regulate. The problem lies within the technology itself, and although efforts such as Deauville, Dubai, the NETmundial Conference and the Christchurch Call may focus on content, content is merely what Marshall McLuhan termed the juicy piece of meat carried by the burglar to distract the watchdog of the mind. To regulate content requires an understanding and appreciation of some of the deeper aspects or qualities of the new communications technology. Once these are understood, the magnitude of the task becomes apparent and the practicality of effectively achieving regulation of communications runs up against the fundamental values of Western liberal democracies.

Permissionless Innovation

One characteristic of the Digital Paradigm is that of permissionless innovation. No approvals are needed for developers to connect an application or a platform to the backbone of the Internet. All that is required is that the application comply with standards set by Internet engineers; essentially these standards ensure that an application will be compatible with Internet protocols.

No licences are required to connect an application. No regulatory approvals are needed. A business plan need not be submitted for bureaucratic fiat. Permissionless innovation has been a characteristic of the Internet and it has allowed the Internet to grow. It allowed for the development of the Hypertext Transfer Protocol, which in turn enabled the World Wide Web – the most familiar aspect of the Internet today. It allowed for the development of a myriad of social media platforms. It co-exists with another quality of the Internet, that of continuing disruptive change – the reality that the environment is not static and does not stand still.
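To make this concrete, the following is a minimal sketch (in Python, with an illustrative address and port) of how little is involved in attaching a new service to the Internet: all that is required is that it speak the open, standardised protocols – here HTTP over TCP.

    # A minimal sketch of permissionless innovation: attaching a new service to
    # the Internet requires nothing more than speaking the open, standardised
    # protocols (here HTTP over TCP). No licence, approval or business plan is
    # involved. The address and port below are illustrative placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Any standards-compliant client (a browser, curl, a crawler)
            # can reach this service without anyone's permission.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"A new utility, bolted onto the Internet.\n")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()

Anything that can speak the protocol – however novel its purpose – becomes part of the network the moment it starts listening.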

Targeting the most popular social media platforms will only address a part of the problem. Permissionless innovation means that the leading platforms may modify their algorithms to try to capture extreme content, but this is a less than subtle solution and is prone to the error of false positives.

Permissionless innovation and the ability to develop and continue to develop other social media platforms brings into play Michael Froomkin’s theory of regulatory arbitrage – where users will migrate to the environment that most suits them. Should the major players so regulate their platforms that desired aspects are no longer available, users may choose to use other platforms which will be more “user friendly” or attuned to their needs.

The question that arises from this aspect of the Digital Paradigm is how one regulates permissionless innovation, given its critical position in the development of communications protocols. To constrain it, to tie it up in the red tape that accompanies broadcast licences and the like, would strangle technological innovation, evolution and development. To interfere with permissionless innovation would choke off the continuing promise of the Internet as a developing communications medium.

Content Dynamics

An aspect of content on the Internet is what could be termed persistence of information. Once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and may be retained on any one of them by the quality of exponential dissemination discussed below, despite the phenomenon of “link rot.” It has been summed up in another way by the phrase “the document that does not die.” Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere. This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus. Information persistence means that the information is there, but if it has become difficult to locate, retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine. The fact that information is persistent means that it is capable of location.

In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change.  It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others.

An example of dynamic information may be found in the on-line newspaper which may break a story at 10am, receive information on the topic by midday and by 1pm on the same day have modified the original story.  The static nature of print and the newspaper business model that it enabled meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle potentially may run on a 24 hour basis, with updates every five minutes.

Similarly, the ability that digital technologies provide for contributing to dialogue on any topic, enabled in many communication protocols primarily as a result of Web 2.0, means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change.  This dynamic nature of information challenges the permanence that one may expect from persistence, and it is acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the “document that does not die”.

Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer.  Thus there is the potential for information to be other than static.  If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version.  Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical until the next edition.  In the digital paradigm authors and publishers can control content from minute to minute.

In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required.  In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference being that the original text vanishes and is replaced with the amended copy.  Thus one may, with reason, doubt the validity or authenticity of information as it is transmitted.

Let us assume for the moment that a content moderation policy by a search engine or a social media platform can be developed that will identify extreme content and return a “null” result. Such policies will often, if not always, have identifiable gaps. If the policy relates to breaches of terms of use, how often are those breaches subject to human review, which is often more nuanced than an algorithm? Often “coded language” may be used as an alternative to explicit extreme content. Because of the context-specific nature of the coded language, and the fact that it is not typically directed at a vulnerable group, targeted posts would in most instances not trigger social media platform content rules even if they were more systematically flagged. In addition, the existence of “net centers” that coordinate attacks using hundreds of accounts results in broad dissemination of harmful posts which are harder to remove. Speech that is removed may be reposted using different accounts. Finally, the content moderation policies of some social media providers do not provide a means for considering the status of the speaker in evaluating the harmful impact the speech may have, and it is widely recognized in the social science literature that speakers with authority have greater influence on behavior.
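To make the gap concrete, consider a deliberately naive keyword filter of the kind an algorithmic first pass might resemble. The blocked terms and sample posts below are invented purely for illustration; the point is that such a filter flags innocent uses of a word (false positives) while missing “coded language” entirely.

    # A deliberately naive keyword filter, sketched only to illustrate the limits
    # of purely algorithmic moderation. The terms and sample posts are invented.
    BLOCKED_TERMS = {"attack", "bomb"}

    def flag(post: str) -> bool:
        # Word-level matching: no context, no assessment of intent.
        words = post.lower().replace(",", " ").split()
        return any(term in words for term in BLOCKED_TERMS)

    samples = [
        "The film was a box office bomb",               # flagged: a false positive
        "We attack the problem from first principles",  # flagged: a false positive
        "Bring the party favours to the usual spot",    # hypothetical coded language: not flagged
    ]

    for post in samples:
        print(flag(post), "-", post)

Real moderation systems are far more sophisticated than this, but the structural problem remains: context and intent are not visible to a pattern match.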

Exponential Dissemination

Dissemination was one of the leading qualities of print identified by Elizabeth Eisenstein in her study of the printing press as an agent of change, and it has been a characteristic of all information technologies since. What the internet and digital technologies enable is a form of dissemination that has two elements.

One element is the appearance that information is transmitted instantaneously to both an active (on-line recipient) and a passive (potentially on-line but awaiting) audience. Consider the example of an email. The speed of transmission of emails seems to be instantaneous (in fact it is not), but that enhances our expectations of a prompt response and concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender. Instant messaging is so called because it is instant, and a complex piece of information may be made available via a link on Twitter to a group of followers, which may then be retweeted to an exponentially larger audience.

The second element deals with what may be called the democratization of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and communication media that have gone before. In the past information dissemination has been an expensive business. Publishing, broadcast, record and CD production and the like are capital intensive businesses. It used to (and still does) cost a large amount of money and require a significant infrastructure to be involved in information gathering and dissemination. There were a few exceptions, such as very small scale publishing using duplicators, carbon paper and samizdats, but in these cases dissemination was very small. Another aspect of early information communication technologies is that they involved a monolithic, centralized communication to a distributed audience. The model essentially was one of “one to many” communication or information flow.

The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow  with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognized that the Internet allows everyone to become a publisher. No longer is information dissemination centralized and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organizations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists. Once the information has been communicated it can “go viral” a term used to describe the phenomenon of exponential dissemination as Internet users share information via e-mail, social networking sites or other Internet information sharing protocols. This in turn exacerbates the earlier quality of Information Persistence or “the document that does not die” in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve it or eliminate it.
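The arithmetic of exponential dissemination is easily sketched. Assume, purely for illustration, that each recipient passes an item on to a fixed number of new people and that audiences do not overlap; the potential reach then grows geometrically with each round of sharing.

    # Illustrative arithmetic only: assumes each recipient shares the item with a
    # fixed number of new, non-overlapping recipients. Real networks overlap
    # heavily, but the geometric shape of the growth is the point.
    def potential_reach(shares_per_person: int, rounds: int) -> int:
        # Sum of a geometric series: 1 + k + k^2 + ... + k^rounds
        return sum(shares_per_person ** r for r in range(rounds + 1))

    for rounds in (1, 3, 5, 8):
        print(rounds, potential_reach(10, rounds))
    # With ten shares per person, eight rounds of re-sharing already yields a
    # potential audience of over 100 million people.

It is this geometry, rather than the prominence of any single platform, that makes widely shared content so difficult to retrieve or eliminate.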

It can be seen from this discussion that dissemination is not limited to the “on-line establishment” of Facebook, Twitter or Instagram, and trying to address the dissemination of extreme content by attacking it through “established” platforms will not eliminate it – it will just slow down the dissemination process. It will present an obstruction, and in fact on-line censorship is just that – an obstruction to the information flow on the Internet. It was John Gilmore who said “The Net interprets censorship as damage and routes around it.” Primarily because State-based censorship is based on a centralized model and the dissemination of information on the Internet is based upon a distributed one, what effectively happens on the Internet is content redistribution, which is a reflection both of Gilmore’s adage and of the quality of exponential dissemination.

The Dark Web

Finally there is the aspect of the Internet known as the Dark Web. If the searchable web comprises 10% of available Internet content, there is content that is not amenable to search, known as the Deep Web, which encompasses sites such as LexisNexis and Westlaw if one seeks an example from the legal sphere.

The Deep Web is not the Dark Web. The Dark Web is altogether different. It is more difficult to reach than the surface or deep web, since it’s only accessible through special browsers such as the Tor browser. The dark web is the unregulated part of the internet. No organization, business or government is in charge of it or able to apply rules. This is exactly the reason why the dark web is commonly associated with illegal practices. It’s impossible to reach the dark web through a ‘normal’ browser, such as Google Chrome or Mozilla Firefox. Even in the Tor browser you won’t be able to find any ‘dark’ websites ending in .com or .org. Instead, URLs usually consist of a random mix of letters and numbers and end in .onion. Moreover, the URLs of websites on the dark net change regularly. If there are difficulties in regulating content via social media platforms, to do so via the Dark Web would be impossible. Yet it is within that environment that most of the extreme content may be found.

Effective Regulation

The Christchurch Call has had some very positive effects. It has drawn attention, yet again, to the problem of dissemination of extreme and terrorist content online. It should be remembered that this is not a new issue and has been in the sights of politicians since Deauville, although in New Zealand, as far back as 1993, there were proposals to deal with the problem of the availability of pornography online.

Another positive outcome of the Christchurch Call has been to increase public awareness and corporate acceptance of the necessity for there to be some standards of global good citizenship on the part of large and highly profitable Internet based organisations. It is not enough for a company to have as its guiding light “do no evil”; more is required, including steps to ensure that its services are not facilitating the doing of evil by others.

At the moment the Christchurch Call has adopted, at least in public, a velvet glove approach, although it is not hard to imagine that in some of the closed meetings the steel fist has been if not threatened at least uncovered. There are a number of ways that the large conglomerates might be persuaded to toe a more responsible line. One is to introduce the concept of an online duty of care as has been suggested in the United Kingdom. Although this sounds like a comfortable and simple concept, anyone who has spent some time studying the law of torts will understand that the duty of care is a highly nuanced and complex aspect of the law of obligations, and one which will require years of litigation and development before it achieves a satisfactory level of certainty.

Another way to have conglomerates toe the line is to increase the costs of doing business. Although it is in a different sphere – that of e-commerce – the recent requirement by the New Zealand Government that overseas vendors collect GST is an example, although I was highlighting this issue 20 years ago. Governments do not have a tendency to move fast, although they do have a tendency to break things once the sleeping giant awakes.

Yet these various moves and others like them are really rather superficial and only scratch the surface of the content layer of the Internet. The question must be asked – how serious are the governments of the Christchurch Call in regulating not simply access to content but the means by which content is accessed – the technology?

The lessons of history give us some guidance. The introduction of the printing press into England was followed by 120 years of unsuccessful attempts to control the content of printed material. It was not until the Star Chamber Decrees of 1634 that the Stuart monarchy put in place some serious and far-reaching regulatory requirements to control not what was printed (although that too was the subject of the 1634 provisions) but how it was printed. The way in which the business and process of printing was regulated gave the State unprecedented control not only over content but also over the means of production and dissemination of that content. The reaction against this – a process spanning many years – led to our present values that underpin freedom of the press and freedom of expression.

As new communications technologies have been developed the State has interested itself in imposing regulatory requirements. There is no permissionless innovation available in setting up a radio station or television network. The State has had a hand of varying degrees of heaviness throughout the development and availability of both these media. In 1966 there was a tremendous issue about whether or not a ship that was to be the platform for the unlicensed and therefore “pirate” radio station, Radio Hauraki would be allowed to sail. The State unsuccessfully tried to prevent this.

Once upon a time in New Zealand (and still in the United Kingdom) anyone who owned a television set had to pay a broadcasting fee. This ostensibly would be applied to the development of content but is indicative of the level of control that the State exerted. And it was not a form of content regulation. It was regulation that was applied to access to the technology.

More recently we are well aware of the so-called “Great Firewall of China” – a massive state-sponsored means of controlling the technology to prevent access to content. And conglomerates such as Google have found that if they want to do business in China they must play by Chinese rules.

The advocacy of greater technological control has come from Russia, Brazil, India and some of the Arab countries. These States I think understand the import of McLuhan’s paradox of technology and content. The issue is whether or not the Christchurch Call is prepared to take that sort of radical step and proceed to consider technological regulation rather than step carefully around the edges of the problem.

Of course, one reason why at least some Western democracies would not wish to take such an extreme step lies in their own reliance upon the Internet as a means of doing business, be it by way of using the Internet for the collection of census data, for providing taxation services or for online access to benefits and other government services. Indeed the use of the Internet by politicians, who deploy their own form of argumentative speech, has become the norm. Often, however, we find that the level of political debate is as banal and clichéd as the platforms that are used to disseminate it. But to put it simply, where would politicians be in the second decade of the 21st Century without access to Facebook, Twitter or Instagram (or whatever new flavour of platform arises as a result of permissionless innovation)?

Conclusion

I think it is safe to say that the Christchurch Call is no more and no less than a very well managed and promoted public relations exercise that is superficial and will have little long term impact. It will go down in history as part of a continuing story that really started with Deauville and that will continue.

Only when Governments are prepared to learn and apply the lessons about the Internet and the way that it works will we see effective regulatory steps instituted.

And then, when that occurs, we will realise that democracy and the freedom that we have to hold and express our own opinions are really in trouble.


[1] Internet NZ “The Christchurch Call: helping important voices be heard” https://internetnz.nz/Christchurch-Call (Last accessed 2 January 2020)


Fearing Technology Giants

On 15 January 2018 opinion writer Deborah Hill Cone penned a piece entitled “Why tech giants need a kick in the software”.

Not a lot of it is very original, and it echoes many of the arguments in Jonathan Taplin’s “Move Fast and Break Things.” I have already critiqued some of Taplin’s propositions in my earlier post Misunderstanding the Internet. Over the Christmas break I revisited Mr. Taplin’s book. It is certainly not a work of scholarship; rather it is a pejorative-filled polemic that in essence calls for regulation of Internet platforms to preserve certain business and economic models that are challenged by the new paradigm. Mr. Taplin comes from a background of involvement primarily in the music industry, and the realities of the digital paradigm have hit that industry very hard. But, as was the case with the film industry, music took an inordinate amount of time to adapt to the new paradigm and develop new business models. That now seems to be happening with iTunes and Spotify, and the movie industry seems to have recognised other models of online distribution such as Netflix, Hulu and other on-demand streaming services.

For Mr. Taplin these new business models are not enough. His argument is that artists should have an expectation that they should draw the same level of income that they enjoyed in the pre-digital age. And that ignores the fact that the whole paradigm has changed.

But Mr. Taplin directs most of his argument against the Internet giants – Facebook, Google, Amazon and the like – and singles out their creators and financiers as members of a libertarian conspiracy dedicated to eliminating competition, although to conflate monopolism with libertarianism has its own problems.

Much of Mr. Taplin’s argument uses labels and generalisations which do not stand up to scrutiny. For example, he frequently cites Ayn Rand, whom he describes as a libertarian, as one of the philosophical foundations for the direction travelled by the Internet Giants. In fact Ms. Rand’s philosophy was that of objectivism rather than libertarianism. Indeed, libertarianism has its own subsets. In using the term, does Mr. Taplin refer to Thomas Jefferson’s flavour of libertarianism or that advocated by John Stuart Mill in his classic “On Liberty”? It is difficult to say.

Another problem for Mr Taplin is his brief discussion of the right to be forgotten. He says (at page 98) “In Europe, Google continues to challenge the “right to be forgotten” – customers’ ability to eliminate false articles written about them from Google’s search engine.” (The emphasis is mine).

The Google Spain Case which gave rise to the right to be forgotten discussion was not a case about a false article or false information. In fact the article that Sr Costeja González wished to deindex was true. It was an advertisement regarding his financial affairs that was published in La Vanguardia newspaper in Barcelona some years before. The reason why deindexing was sought was because the article was no longer relevant, given Sr Costeja González’s improved fortunes. To characterise the right as a means of removing false information, and Google’s resistance as a desire to retain it, misunderstands the nuances of the right to be forgotten.

One thing is clear. Mr. Taplin wants regulation and the nature of the regulation that he seeks is considerable and of such a nature that it might stifle much of the stimulus to creativity that the Internet allows. I have already discussed some of these concepts in other posts but in summary there must be an understanding not of the content that is delivered via Internet platforms but rather of the underlying properties or affordances of digital technologies.

One of these is the fact that digital technologies cannot operate without copying. From the moment a user switches on a computer or a digital device to the moment that device is shut down, copying takes place. Quite simply, the device won’t work without copying. This is a challenge to concepts of intellectual property that developed after the first information technology – the printing press. The press allowed for mechanised copying and challenged the earlier manual copying processes that characterised the scribal paradigm of information communication.
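A small sketch makes the point concrete: even the simplest handling of a piece of digital content involves copying it, quite apart from any copy a recipient chooses to keep. The throwaway files below are placeholders so that the sketch is self-contained.

    # Even the simplest handling of digital content involves copying.
    # A temporary working directory keeps the sketch self-contained.
    import os
    import tempfile

    workdir = tempfile.mkdtemp()
    original = os.path.join(workdir, "original.txt")
    with open(original, "wb") as f:
        f.write(b"some digital content")

    with open(original, "rb") as f:
        data = f.read()              # copy 1: the bytes are read into memory

    duplicate = bytearray(data)      # copy 2: a further duplicate in memory

    with open(os.path.join(workdir, "copy.txt"), "wb") as f:
        f.write(duplicate)           # copy 3: written back to storage

    # Transmitting the file across a network would create still more copies in
    # the buffers of every intermediate machine; the technology cannot avoid it.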

Now we have a digital system that challenges the assumptions that content “owners” have had about control of their product. And the digital horse has bolted and a new paradigm is in place that has altered behaviours, attitudes, expectations and values surrounding information. And can regulation hold back the flood? One need only look at the file sharing provisions of the Copyright Act 1994 in New Zealand. These provisions were put in place, as the name suggests, to combat file sharing. They are now out of date and were little used when introduced. Technology has overtaken them. The provisions were used sporadically by the music industry and, despite extensive lobbying, not at all by the movie industry.

Two other affordances that underlie digital technologies are linked. The first is that of permissionless innovation which is interlinked with the second – continuing disruptive change.  Indeed it could be argued that permissionless innovation is what drives continuing disruptive change.

Permissionless innovation is the quality that allows entrepreneurs, developers and programmers to develop protocols using standards that are available and that have been provided by Internet developers to “bolt‑on” a new utility to the Internet.

Thus we see the rise of Tim Berners-Lee’s World Wide Web which, in the minds of many, represents the Internet as a whole. Permissionless innovation enabled Shawn Fanning to develop Napster; Larry Page and Sergey Brin to develop Google; Mark Zuckerberg to develop Facebook; and Jack Dorsey, Evan Williams, Biz Stone and Noah Glass to develop Twitter, along with dozens of other utilities and business models that proliferate on the Internet. There is no need to seek permission to develop these utilities. Using the theory “if you build it, they will come”[1] new means of communicating information are made available on the Internet. Some succeed but many fail[2]. No regulatory criteria need to be met other than that the particular utility complies with basic Internet standards.

What permissionless innovation does allow is a constantly developing system of communication tools that change in sophistication and the various levels of utility that they enable.  It is also important to recognize that permissionless innovation underlies changing means of content delivery.

So are these the aspects of the Internet and its associated platforms that are to be regulated? If the Internet Giants are to be reined in, the affordances of the Internet that give them sustenance must be denied them. But in doing that, it may well be that the advantages of the Internet will be lost. So the answer I would give to Mr Taplin is to be careful what you wish for.

This rather long introduction leads me to a consideration of Ms. Hill Cone’s slightly less detailed analysis that nevertheless seizes upon Mr Taplin’s themes. Her list of “things to loathe” follows, along with some of my own observations.

1.) These companies (Apple, Alphabet, Facebook, Amazon) have simply been allowed to get unhealthily large and dominant with barely any checks or balances. The tech firms are more powerful than the telco AT&T ever was, yet regulators do nothing (AT&T was split up). In this country the Commerce Commission spent millions fighting to stop one firm, NZME (publisher of the New Zealand Herald) from merging with another Fairfax (Now called Stuff), a sideshow, while they appear stubbornly uninterested in tackling the real media dominance battle: how Facebook broke the media. I know we’re just little old New Zealand, but we still have sovereignty over our nation, surely? [Commerce Commission chairman] Mark Berry? Can’t you do something? The EU at least managed to fine Google a couple of lazy billion.

Taplin deals with this argument in an extensive analysis of the way in which antitrust law in the United States has become somewhat toothless. He attributes this to the teachings of Robert Bork and the Chicago School of law and economics.

Ms Hill Cone’s critique suggests that there is something wrong with large corporate conglomerates per se – that simply because something has become too big it must be bad and therefore should be regulated – rather than identifying a particular mischief and then deciding whether regulation is necessary, and I emphasise the word necessary.

2.) Some of these tech companies have got richer and richer exploiting the creative content of writers and artists who create things of real value and who can no longer earn a living from doing so.

This is straight out of the Taplin playbook which I have discussed above. I don’t think it has been suggested that artists are not earning. They are – perhaps not to the level that they used to, and perhaps not from sales or remuneration from Spotify tracks. But what Taplin points out – and this is how paradigmatic change drives behavioural change – is that artists are moving back to live performance to earn an income. Gone are the days when the artist could rely on recorded performances. So Ms Hill Cone’s critique may be partially correct as it applies to the earlier expectation of making an income.

3.) Mark Zuckerberg’s mea culpa, announced in the last few days that Facebook is going to focus on what he called “meaningful interaction”, is like a drug dealer offering a cut-down dose of its drug, hoping addicts won’t give up the drug completely. Even Zuckerberg’s former mentor, investor Roger McNamee said in the Guardian that all Zuckerberg is doing is deflecting criticism and leaving users “in peril.”

The pejorative analogy of the drug dealer ignores the fact that no one is required to sign up to Facebook. It is, after all, a choice. And in some respects, Zuckerberg’s announcement is an example of continuing disruptive change that affects Internet Giants as much as it does a startup.

4.) These companies have created technology and thrown it out there, without any sense of responsibility for its potential impact. It’s time for them to be held accountable. Last week Jana Partners, a Wall Street investment firm, wrote to Apple pushing it to look at its products’ health effects, especially on children. Even Facebook founder Sean Parker has recently admitted “God knows what [technology] is doing to our children’s brains.”

The target here is that of permissionless innovation. Upon what basis is it necessary to regulate permissionless innovation? Or does Ms Hill Cone wish to wrap up the Internet in regulatory red tape? As far as the effects of social media are concerned, I think what worries many digital immigrants and indeed digital deniers is that all social media does is enable communication – which is what people do. It is an alternative to face to face conversation, telephone, snail mail, email, smoke signals etc. We need to accept that new technologies drive behavioural change.

5.) While it’s funny when the bong-sucking entrepreneur Erlich Bachman says in the HBO comedy Silicon Valley: “We’re walking in there with three foot c**ks covered in Elvis dust!” in reality, many of these firms have a repugnant, arrogant and ignorant culture. In the upcoming Vanity Fair story “Oh. My god, this is so f***ed up: inside Silicon Valley’s secretive orgiastic dark side” insiders talked about the creepy tech parties in which young women are exploited and harassed by tech guys who are still making up for getting bullied at school. (Just as bad, they use the revolting term “cuddle puddles”) The romantic image of scrappy, visionary nerds inventing the future in a garage has evolved into a culture of entitled frat boys behaving badly. “Too much swagger and not enough self-awareness,” as one investor said.

I somehow don’t think that the bad behaviours described here are limited to tech companies. I am sure that in her days as a business journalist (and a very good one too) Ms Hill Cone saw examples of the behaviours she condemns in any number of enterprises.

6.) These giant companies suck millions in profits out of our country but do little to participate as good corporate citizens. If they even have an office here at all, it is tiny. And don’t get started on how much tax they pay. A few years ago Google’s New Zealand operation consisted of three people who would fly back and forth from Sydney to manage sales over here. Apparently, Apple has opened a Wellington office and lured “several employees” from Weta Digital. But there is little transparency about how or where these companies do business or how to hold them accountable. There is no local number to call, there is no local door to knock on. And don’t hold your breath that our children might get good jobs working for any of these corporations.

This criticism goes to the tax problem and probably has underneath it a much larger debate about the purposes and morality of the tax system. The classic statement, since modified, appears in Inland Revenue Commissioners v Duke of Westminster [1936] AC 1:

“Every man is entitled if he can to order his affairs so that the tax attaching under the appropriate Acts is less than it otherwise would be. If he succeeds in ordering them so as to secure this result, then, however unappreciative the Commissioners of Inland Revenue or his fellow tax-payers may be of his ingenuity, he cannot be compelled to pay an increased tax.”

There can be no doubt that the tax laws will be changed to close the existing loophole so that the income derived by Google and Apple from their NZ activities will be subject to NZ tax. But Ms Hill Cone goes further and suggests that these companies should have a physical presence – a local door to knock on. This is the digital paradigm. It is no longer necessary to have a suite of offices in a CBD building paying rent.

7.) Mark Zuckerberg preaches that Facebook’s mission is to connect people. But Johann Hari’s new book Lost Connections: Uncovering the real causes of depression and the unexpected solutions, out this week, provides convincing evidence that in the digital age people are more lonely than ever. Hari argues the very companies which are trying to “fix” loneliness – Facebook, for example – are the ones which have made people feel more disconnected and depressed in the first place.

The book cited by Ms Hill Cone is by a journalist writing about depression. Apparently the diagnosis attributed his depression to a chemical imbalance in his brain, whereas he discovered, after investigating some of the social science evidence, that depression and anxiety are caused by key problems with the way that we live. He uncovered nine causes of depression and anxiety and offers seven solutions to the problems. Much of the book is about the author and the problems that he had with the treatment he received. His book is as much a critique of the pharmaceutical industry as anything. It is described in the Guardian as a flawed study. Certainly it cannot be said that Hari’s argument is directed towards the suggestion that social media platforms are causative of depression.

8.) Is all this technology really making the world a better place? At this week’s CES (Consumer Electronics Show) in Las Vegas some of the innovations were positive but a lot of them were really, quite dumb. Do you really need a robot that will fold your laundry or a suitcase that will follow you? Or a virtual reality headset that will make you feel like you are flying on a dinosaur (Okay, maybe that one would be fun.)

Point taken. A lot of inventions are not going to make the world a better place. On the other hand many do. Think Thomas Alva Edison and then think about the Edsel motor vehicle. Ms Hill Cone accepts that some of the innovations were positive and the positive ones will probably survive the “Dragon’s Den” of funding rounds and the market.

These eight points were advanced by Ms Hill Cone as reasons why tech companies should get their comeuppance, as she puts it. It is difficult to decide whether the article is merely a rant or a restatement of some deeper concerns about Tech Giants. If it is the latter, more thorough analysis is required. But unless regulation is absolutely necessary and identifies and addresses a particular mischief, in my view it is not the answer.

But Ms Hill Cone is not alone. Later in January a significant beneficiary of Silicon Valley, Marc Benioff, compared the crisis of trust facing tech giants to the financial crisis of a decade ago. He suggested that Google, Facebook and other dominant firms pose a threat, and he made these comments at the World Economic Forum in Davos. He suggested that what is needed is more regulation, and his call was backed by Sir Martin Sorrell, who suggested that Apple, Facebook, Amazon, Google, Microsoft, and China’s Alibaba and Tencent had become too big. Sir Martin compared Amazon founder Jeff Bezos to a modern John D. Rockefeller.

One of the suggestions by Sir Martin was that Google and Facebook were media companies, echoing concerns that had been expressed by Rupert Murdoch. The argument is that as the Internet Giants get bigger, it is not a fair fight. And then, of course, there were the criticisms that the Internet Giants had become so big that they were unaware of the nefarious use of their services by those who would spread fake news.

George Soros added his voice to the calls for regulation in two separate pieces. At the Davos forum he suggested that Facebook and Google have become “obstacles to innovation” and are a “menace” to society whose “days are numbered”. As mining companies exploited the physical environment, so social media companies exploited the social environment.

“This is particularly nefarious because social media companies influence how people think and behave without them even being aware of it. This has far-reaching adverse consequences on the functioning of democracy, particularly on the integrity of elections.”

In addition to skewing democracy, social media companies “deceive their users by manipulating their attention and directing it towards their own commercial purposes” and “deliberately engineer addiction to the services they provide”. The latter, he said, “can be very harmful, particularly for adolescents”.

He considers that the Internet Giants are unlikely to change without regulation. He compared social media companies to casinos, accusing them of deceiving users “by manipulating their attention” and “deliberately engineering addiction” to their services, and arguing that they should be broken up. As a basis for following the model that was applied in the break-up of AT&T, Soros suggested that the fact that the Internet Giants are near-monopoly distributors makes them public utilities and should subject them to more stringent regulation, aimed at preserving competition, innovation and fair and open access.

Soros pointed to steps that had been taken in Europe where he described regulators as more farsighted than those in the US when it comes to social policies, referring to the work done by EU Competition Commissioner Margrethe Vestager, who hit Google with a 2.4 billion euro fine ($3 billion) in 2017 after the search giant was found in violation of antitrust rules.

Even more recently, in light of the indictments proffered by Special Counsel Mueller against a number of Russians who attempted to interfere with the US election of 2016 and who used social media to do so, a call has gone up to regulate social media so that this does not happen again. Of course that is a knee-jerk reaction that seems to forget the rights of freedom of expression enshrined in international conventions, in domestic legislation and in the First Amendment to the US Constitution, which protects freedom of speech and under which political speech has been given the highest level of protection in subsequent cases. But nevertheless, the call goes out to regulate.

Facebook has responded to these concerns by reducing the news feeds that may be provided, and more recently in New Zealand Google has restructured its tax arrangements. Both of these steps represent a response by the Internet Giants to public concern – perhaps an indication of a willingness to self-regulate.

The urge to regulate is a strong one especially on the part of those who favour the status quo. There can be little doubt that ultimately what is sought is control of the digital environment. The content deliverers like Facebook and Google will be first, but thereafter the architecture – the delivery system that is the Internet that must be free and open – will increasingly come under a form of regulatory control that will have little to do with operational efficiency.

Of course, content is low-hanging fruit. Marshall McLuhan recognised that when he observed that the “content” of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind. I doubt very much that content is the real target. Nicolas Sarkozy called for regulation of the Internet in 2011, so the urge to regulate is not new by any means.

At the risk of being labelled a technological determinist, I suggest that trying to impose regulatory structures that preserve the status quo inhibits innovation and creativity as much as, if not more than, the outcome that is said to follow if we leave the Internet Giants alone. Rather, I suggest that we should recognise that the changes that are being wrought are paradigmatic. There will be a transformation of the way in which we use communication systems after the current disruption that is being experienced. That means that what comes out the other end may not be immediately recognisable to those of us whose values and predispositions were formed during the analog or pre-digital paradigm.

On the other hand those who reject technological determinism still recognise the inevitability of change. Mark Kurlansky in his excellent book “Paper: Paging through history” argues that technologies have arisen to meet societal needs. It is futile to denounce the technology itself. Rather you have to change the operation of society for which the technology was created.  For every new technology there are detractors, those who see the new invention destroying everything that is good in the old.

To suggest that regulation will preserve the present – if indeed it is worth preserving – is rear-view-mirror thinking at its worst. Rather, we should be looking at the opportunities and advantages that the new paradigm presents. And this isn’t going to be done by wishing for a world that used to be, because that is what regulation will do – it will freeze the inevitable development of the new paradigm.

__________________________________________________________________________________________

[1] In fact a misquote that has fallen into common usage from the movie Field of Dreams (Director and Screenplay by Phil Alden Robinson 1989). The correct quote is “If you build it he will come” (my emphasis) http://www.imdb.com/title/tt0097351/quotes (last accessed 3 February 2015).

[2] See for example Andrew Keen, The Internet is Not the Answer (Atlantic Books, London, 2015)

Memory Illusions and Cybernannies

Over the last week I read a couple of very interesting books. One was Dr Julia Shaw’s The Memory Illusion. Dr Shaw describes herself as a “memory hacker” and has a YouTube presence where she explains a number of the issues that arise in her book.

The other book was The Cyber Effect by Dr Mary Aiken, who reminds us on a number of occasions in every chapter that she is a trained cyberpsychologist and cyberbehavioural specialist, and who was a consultant for CSI: Cyber which, having watched a few episodes, I abandoned. Regrettably I don’t see that qualification as a recommendation, but that is a subjective view and I put it to one side.

Both books were fascinating. Julia Shaw’s book, in my view, should be required reading for lawyers and judges. We place a considerable amount of emphasis upon memory, assisted by the way in which a witness presents him or herself – what we call demeanour. Demeanour has been well and truly discredited by Robert Fisher QC in an article entitled “The Demeanour Fallacy” [2014] NZ Law Review 575. The issue has also been covered by Chris Gallavin in a piece entitled “Demeanour Evidence as the backbone of the adversarial process” Lawtalk Issue 834, 14 March 2014 http://www.lawsociety.org.nz/lawtalk/issue-837/demeanour-evidence-as-the-backbone-of-the-adversarial-process

A careful reading of The Memory Illusion is rewarding although worrisome. The chapter on false memories, evidence and the way in which investigators may conclude that “where there is smoke there is fire” along with suggestive interviewing techniques is quite disturbing and horrifying at times.

But the book is more than that, although the chapter on false memories, particularly the discussion of memory retrieval techniques, was very interesting. The book examines the nature of memory and how memories develop and shift over time, often in a deceptive way. It also emphasises how the power of suggestion can influence memory. What does this mean – that everyone is a liar to some degree? Of course not. A liar is a person who tells a falsehood knowing it to be false. Slippery memory, as Sir Edward Coke described it, means that we believe what we are saying to be true even though, objectively, it is not.

A skilful cross-examiner knows how to work on memory and highlight its fallibility. If the lawyer can get the witness in a criminal trial to acknowledge that he or she cannot be sure, the battle is pretty well won. But even the most skilful cross-examiner will benefit from a reading of The Memory Illusion. It will add a number of additional arrows to the forensic armoury. For me the book emphasises the risks of determining criminal liability on memory or recalled facts alone. A healthy amount of scepticism, and a reluctance to take an account simply and uncritically at face value, are lessons I draw from the book.

The Cyber Effect is about how technology is changing human behaviour. Although Dr Aiken starts out by stating the advantages of the Internet and new communications technologies, I fear that within a few pages the problems start with the suggestion that cyberspace is an actual place. Dr Aiken answers that question unequivocally in the affirmative, but it clearly is not. I am not sure that it would be helpful to try and define cyberspace – it is many things to many people. The term was coined by William Gibson in his astonishingly insightful Neuromancer, and in subsequent books Gibson imagines the network (I use the term generically) as a place. But it isn’t. The Internet is no more and no less than a transport system to which a number of platforms and applications have been bolted. Its purpose is communication. But it is communication plus interactivity, and it is that upon which Aiken relies to support her argument. If that gives rise to a “place” then may I congratulate her imagination. The printing press – a form of mechanised writing that revolutionised intellectual activity in early modern Europe – didn’t create a new “place”. It enabled alternative means of communication. The Printing Press was the first Information Technology. And it was roundly criticised as well.

Although the book purports to explain how new technologies influence human behaviour, it doesn’t really offer a convincing argument. I have often quoted the phrase attributed to McLuhan – we shape our tools and thereafter our tools shape us – and I was hoping for a rational expansion of that theory. It was not to be. Instead it was a collection of horror stories about people’s problems with technology. And so we get stories of kids with technology, the problems of cyberbullying, the issues of online relationships, the misnamed Deep Web when she really means the Dark Web – all the familiar tales attributing all sorts of bizarre behaviours to technology – which is correct – and suggesting that this could become the norm.

What Dr Aiken fails to see is that by the time we recognise the problems with the technology it is too late. I assume that Dr Aiken is a Digital Immigrant, and she certainly espouses the view that our established values are slipping away in the face of an unrelenting onslaught of cyber-bad stuff. But as I say, the changes have already taken place. By the end of the book she makes her position clear (although she misquotes the words Robert Bolt attributed to Thomas More in A Man for All Seasons, which the historical More would never have said). She is pro-social order in cyberspace, even if that means governance or regulation, and she makes no apology for that.

Dr Aiken is free to hold her position and to advocate it, and she argues her case well in her book. But it is all a bit unrelenting, a bit tiresome, these tales of Internet woe. It is clear that if Dr Aiken had her way the very qualities that distinguish the Digital Paradigm from what has gone before, including continuous disruptive and transformative change and permissionless innovation, would be hobbled and restricted in a Nanny Net.

For another review of The Cyber Effect see here

Internet Governance Theory – Collisions in the Digital Paradigm III

 

The various theories on internet regulation can be placed within a taxonomy. In the centre is the Internet itself. On one side are the formal theories based on traditional “real world” governance models. These are grounded in traditional concepts of law and territorial authority. Some of these models could well become part of an “uber-model” described as the “polycentric model” – a theory designed to address specific issues in cyberspace. Towards the middle are less formal but nevertheless structured models, largely technical or “code-based” in nature, which exercise a form of control over Internet operation.

 

On the other side are informal theories that emphasise non-traditional or radical models. These models tend to be technically based, private and global in character.

[Figure: Internet Governance Models]
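For readers who prefer a schematic view, the taxonomy can also be expressed as a simple data structure. The sketch below, in Python, is mine and not part of any formal classification; the grouping of models simply follows the discussion in the remainder of this piece.

# A rough sketch of the governance taxonomy described above, running from the
# formal end of the spectrum to the informal end. The groupings are my own.
INTERNET_GOVERNANCE_TAXONOMY = {
    "formal": [
        "digital realism",
        "transnational model (international law)",
        "national and UN initiatives",
    ],
    "less formal but structured": [
        "engineering and technical standards community",
        "ICANN",
        "polycentric governance",
        "code is law",
    ],
    "informal": [
        "regulatory arbitrage",
        "digital libertarianism (Johnson and Post)",
        "cyberanarchism (Barlow)",
    ],
}

for grouping, models in INTERNET_GOVERNANCE_TAXONOMY.items():
    print(f"{grouping}: {', '.join(models)}")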

 

What I would like to do is briefly outline aspects of each of the models. This will be a very “once over lightly” approach and further detail may be found in Chapter 3 of my text internet.law.nz. This piece also contains some new material on Internet Governance together with some reflections on how traditional sovereign/territorial governance models just won’t work within the context of the Digital Paradigm and the communications medium that is the Internet.

The Formal Theories

The Digital Realists

The “Digital Realist” school has been made famous by Judge Easterbrook’s comment that “there [is] no more a law of cyberspace than there [is] a ‘Law of the Horse.’” Easterbrook summed the theory up in this way:

“When asked to talk about “Property in Cyberspace,” my immediate reaction was, “Isn’t this just the law of the horse?” I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile. And if I did know something about computer networks, all I could do in discussing “Property in Cyberspace” would be to isolate the subject from the rest of the law of intellectual property, making the assessment weaker.

This leads directly to my principal conclusion: Develop a sound law of intellectual property, then apply it to computer networks.”

Easterbrook’s comment is a succinct summary of the general position of the digital realism school: that the internet presents no serious difficulties, so the “rule of law” can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a “cyber-specific” code of law.

Another advocate for the digital realist position is Jack Goldsmith. In “Against Cyberanarchy” he argues strongly against those whom he calls “regulation sceptics” who suggest that the state cannot regulate cyberspace transactions. He challenges their opinions and conclusions, arguing that regulation of cyberspace is feasible and legitimate from the perspective of jurisdiction and choice of law — in other words he argues from a traditionalist, conflict of laws standpoint. However, Goldsmith and other digital realists recognise that new technologies will lead to changes in government regulation; but they believe that such regulation will take place within the context of traditional governmental activity.

Goldsmith draws no distinction between actions in the “real” world and actions in “cyberspace” — they both have territorial consequences. If internet users in one jurisdiction upload pornography, facilitate gambling, or take part in other activities that are illegal in another jurisdiction and have effects there then, Goldsmith argues, “The territorial effects rationale for regulating these harms is the same as the rationale for regulating similar harms in the non-internet cases”. The medium that transmitted the harmful effect, he concludes, is irrelevant.

The digital realist school is the most formal of all approaches because it argues that governance of the internet can be satisfactorily achieved by the application of existing “real space” governance structures, principally the law, to cyberspace. This model emphasises the role of law as a key governance device. Additional emphasis is placed on law being national rather than international in scope and deriving from public (legislation, regulation and so on) rather than private (contract, tort and so on) sources. Digital realist theorists admit that the internet will bring change to the law but argue that before the law is cast aside as a governance model it should be given a chance to respond to these changes. They argue that few can predict how legal governance might proceed. Given the law’s long history as society’s foremost governance model and the cost of developing new governance structures, a cautious, formal “wait and see” attitude is championed by digital realists.

The Transnational Model – Governance by International Law

The transnational school, although clearly still a formal governance system, demonstrates a perceptible shift away from the pure formality of digital realism. The two key proponents of the school, Burk and Perritt, suggest that governance of the internet can be best achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. They argue that international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively applied to the global entity of the internet.

The transnationalists suggest that the operation of the internet is likely to promote international legal harmonisation for two reasons.

First, the impact of regulatory arbitrage and the increased importance of the internet for business, especially the intellectual property industry, will lead to a transfer of sovereignty from individual states to international and supranational organisations. These organisations will be charged with ensuring broad harmonisation of information technology law regimes to protect the interests of developed states, lower trans-border costs to reflect the global internet environment, increase opportunities for transnational enforcement and resist the threat of regulatory arbitrage and pirate regimes in less developed states.

Secondly, the internet will help to promote international legal harmonisation through greater availability of legal knowledge and expertise to legal personnel around the world.

The transnational school represents a shift towards a less formal model than digital realism because it is a move away from national to international sources of authority. However, it still clearly belongs to the formalised end of the governance taxonomy on three grounds:

1.    its reliance on law as its principal governance methodology;

2.    the continuing public rather than private character of the authority on which governance rests; and

3.    the fact that although governance is by international law, in the final analysis, this amounts to delegated authority from national sovereign states.

 

National and UN Initiatives – Governance by Governments

This discussion will be a little lengthier because there is some history that serves to illustrate how governments may approach Internet governance.

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and text messages – have brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that the role of social media during the Arab uprisings must be understood in the context of the high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events, and during the uprising in Egypt then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook; on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

The G8 Meeting in Deauville May 2011

In May 2011, at the G8 meeting in France, President Sarkozy issued a provocative call for stronger Internet regulation. M. Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution. This revolution did not have a flag, and M. Sarkozy acknowledged that the Internet belonged to everyone, citing the “Arab Spring” as a positive example. However, he warned the executives of Google, Facebook, Amazon and eBay who were present: “The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

Mr Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs, but he did not go as far as M. Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.

However, the Deauville Communique did not go as far as M. Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, the security of networks and a crackdown on trafficking in children for their sexual exploitation. It did not advocate state control of the Internet, but it did stake out a role for governments. The communique stated:

“We discussed new issues such as the Internet which are essential to our societies, economies and growth. For citizens, the Internet is a unique information and education tool, and thus helps to promote freedom, democracy and human rights. The Internet facilitates new forms of business and promotes efficiency, competitiveness, and economic growth. Governments, the private sector, users, and other stakeholders all have a role to play in creating an environment in which the Internet can flourish in a balanced manner. In Deauville in 2011, for the first time at Leaders’ level, we agreed, in the presence of some leaders of the Internet economy, on a number of key principles, including freedom, respect for privacy and intellectual property, multi-stakeholder governance, cyber-security, and protection from crime, that underpin a strong and flourishing Internet. The “e-G8” event held in Paris on 24 and 25 May was a useful contribution to these debates….

The Internet and its future development, fostered by private sector initiatives and investments, require a favourable, transparent, stable and predictable environment, based on the framework and principles referred to above. In this respect, action from all governments is needed through national policies, but also through the promotion of international cooperation……

As we support the multi-stakeholder model of Internet governance, we call upon all stakeholders to contribute to enhanced cooperation within and between all international fora dealing with the governance of the Internet. In this regard, flexibility and transparency have to be maintained in order to adapt to the fast pace of technological and business developments and uses. Governments have a key role to play in this model.

We welcome the meeting of the e-G8 Forum which took place in Paris on 24 and 25 May, on the eve of our Summit and reaffirm our commitment to the kinds of multi-stakeholder efforts that have been essential to the evolution of the Internet economy to date. The innovative format of the e-G8 Forum allowed participation of a number of stakeholders of the Internet in a discussion on fundamental goals and issues for citizens, business, and governments. Its free and fruitful debate is a contribution for all relevant fora on current and future challenges.

We look forward to the forthcoming opportunities to strengthen international cooperation in all these areas, including the Internet Governance Forum scheduled next September in Nairobi and other relevant UN events, the OECD High Level Meeting on “The Internet Economy: Generating Innovation and Growth” scheduled next June in Paris, the London International Cyber Conference scheduled next November, and the Avignon Conference on Copyright scheduled next November, as positive steps in taking this important issue forward.”

 The ITU Meeting in Dubai December 2012

The meeting of the International Telecommunication Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol, which was one of the important technologies that made the Internet possible, sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union – the ITU – a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai. ….

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, last September, the Shanghai Cooperation Organization – which counts China, Russia, Tajikistan, and Uzbekistan among its members – submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations – a multilateral treaty last revised in 1988 – in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.”

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was founded in 1865 and in the past has been concerned with technical communications issues such as the standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to ICT. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs) that had last been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012 the Final Acts of the Conference were published. The controversy lay in a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was ultimately withdrawn, the governance model it advanced defined the Internet as an:

“international conglomeration of interconnected telecommunication networks,” and that “Internet governance shall be effected through the development and application by governments,” with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance.”

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the internet.” However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate, or in some way govern and control, the Internet. Although at first glance this may seem to be directed at the content layer, and to amount to a rather superficial attempt to embark upon censorship of content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet, and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet, or to embark upon an incremental approach to regulation, have resulted in expressions of concern from another Internet pioneer, Sir Tim Berners-Lee who, in addition to claiming that governments are suppressing online freedom, has issued a call for a Digital Magna Carta.

I have already written on the issue of a Digital Magna Carta or Bill of Rights here.

Clearly the efforts described indicate that some form of national government or collective government form of Internet Governance is on the agenda. Already the United Nations has become involved in the development of Internet Governance policy with the establishment of the Internet Governance Forum.

The Internet Governance Forum

The Internet Governance Forum describes itself as bringing

“people together from various stakeholder groups as equals, in discussions on public policy issues relating to the Internet. While there is no negotiated outcome, the IGF informs and inspires those with policy-making power in both the public and private sectors.  At their annual meeting delegates discuss, exchange information and share good practices with each other. The IGF facilitates a common understanding of how to maximize Internet opportunities and address risks and challenges that arise.

The IGF is also a space that gives developing countries the same opportunity as wealthier nations to engage in the debate on Internet governance and to facilitate their participation in existing institutions and arrangements. Ultimately, the involvement of all stakeholders, from developed as well as developing countries, is necessary for the future development of the Internet.”

The Internet Governance Forum is an open forum which has no members. It was established by the World Summit on the Information Society in 2006. Since then, it has become the leading global multi-stakeholder forum on public policy issues related to Internet governance.

Its UN mandate gives it convening power and the authority to serve as a neutral space for all actors on an equal footing. As a space for dialogue it can identify issues to be addressed by the international community and shape decisions that will be taken in other forums. The IGF can thereby be useful in shaping the international agenda and in preparing the ground for negotiations and decision-making in other institutions. The IGF has no power of redistribution, and yet it has the power of recognition – the power to identify key issues.

A small Secretariat was set up in Geneva to support the IGF, and the UN Secretary-General appointed a group of advisers, representing all stakeholder groups, to assist him in convening the IGF. The United Nations General Assembly agreed in December 2010 to extend the IGF’s mandate for another five years. The IGF is financed through voluntary contributions.

Zittrain describes the IGF as one of a number of “diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where ‘stakeholders’ gather to express their views about Internet governance, which is now more fashionably known as ‘the creation of multi-stakeholder regimes’”.

Less Formal Yet Structured

The Engineering and Technical Standards Community

The internet governance models under discussion have in common the involvement of law or legal structures in some shape or form or, in the case of the cyber anarchists, an absence thereof.

Essentially internet governance falls within two major strands:

1.    The narrow strand involving the regulation of technical infrastructure and what makes the internet work.

2.    The broad strand dealing with the regulation of content, transactions and communication systems that use the internet.

The narrow strand regulation of internet architecture recognises that the operation of the internet and the superintendence of that operation involves governance structures that lack the institutionalisation that lies behind governance by law.

The development of the internet, although it had its origins with the United States Government, has involved little if any direct government control or oversight. The Defense Advanced Research Projects Agency (DARPA) was a funding agency providing money for development. It was not a governing agency, nor was it a regulator. Other agencies such as the Federal Networking Council and the National Science Foundation are not regulators; they are organisations that allow user agencies to communicate with one another. Although the United States Department of Commerce became involved with the internet once its potential commercial implications became clear, it too has maintained very much a hands-off approach, and its involvement has primarily been with ICANN, with whom the Department has maintained a steady stream of Memoranda of Understanding over the years.

Technical control and superintendence of the internet rests with the network engineers and computer scientists who work out problems and provide solutions for its operation. There is no organisational charter. The structures within which decisions are made are informal, involving a network of interrelated organisations with names which at least give the appearance of legitimacy and authority. These organisations include the Internet Society (ISOC), an independent international non-profit organisation founded in 1992 to provide leadership in internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. The Internet Engineering Task Force (IETF) is a separate legal entity whose mission is to make the internet work better by producing high-quality, relevant technical documents that influence the way people design, use and manage the internet.

The Internet Architecture Board (IAB) is an advisory body to ISOC and also a committee of the IETF, which has an oversight role. Also housed within ISOC is the IETF Administrative Support Activity (IASA), which is responsible for the fiscal and administrative support of the IETF Standards Process. The IASA has a committee, the IETF Administrative Oversight Committee (IAOC), which carries out the responsibilities of the IASA, supporting the working groups of the Internet Engineering Steering Group (IESG), the Internet Architecture Board (IAB), the Internet Research Task Force (IRTF) and its Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has the day-to-day operational responsibility of providing fiscal and administrative support through other activities, contractors and volunteers.

The central hub of these various organisations is the IETF. This organisation has no coercive power, but is responsible for establishing internet standards, some of which, such as TCP/IP, are core standards and are non-optional. The compulsory nature of these standards does not come from any regulatory power, but from the critical mass of network externalities involving internet users. Standards become economically mandatory, and there is an overall acceptance of IETF standards, which maintain the core functionality of the internet.
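The point about network externalities can be illustrated with a minimal sketch, in Python, of an ordinary networked program. Nothing in it is drawn from the IETF documents themselves, and the host name is purely illustrative; the point is simply that even the most trivial piece of software depends, without asking, on the IETF’s core standards – DNS for name resolution, TCP/IP for transport and the HTTP RFCs for the request itself – and opting out of them is not a realistic choice.

import socket

HOST = "example.org"   # illustrative host, resolved through the DNS hierarchy
PORT = 80              # HTTP, standardised through the IETF's RFC series

# getaddrinfo() performs a DNS lookup; the socket itself speaks TCP over IP.
family, sock_type, proto, _, sockaddr = socket.getaddrinfo(
    HOST, PORT, type=socket.SOCK_STREAM)[0]

with socket.socket(family, sock_type, proto) as sock:
    sock.connect(sockaddr)
    # A bare-bones HTTP request, framed according to the relevant RFCs.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    print(sock.recv(200).decode(errors="replace"))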

A characteristic of the IETF, and indeed of all the technical organisations involved in internet functionality, is the open process that theoretically allows any person to participate. The other characteristic of internet network organisations is the rough consensus by which decisions are made. Proposals are circulated in the form of a Request for Comments to members of the internet, engineering and scientific communities, and from this collaborative and consensus-based approach a new standard is agreed.

Given that the operation of the internet involves a technical process, and that the maintenance of that process depends on the activities of scientific and engineering specialists, it is fair to conclude that a considerable amount of responsibility rests with the organisations who set and maintain standards. Many of these organisations have developed considerable power structures without any formal governmental or regulatory oversight – an issue that may well need to be addressed. Another issue is whether these organisations have a legitimate basis to do what they are doing with such an essential infrastructure as the internet. The objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Its ability to work outside government bureaucracy enables greater efficiency.

However, the internet’s continued operation depends on a number of interrelated organisations which, while operating in an open and transparent manner in a technical collaborative consensus-based model, have little understanding of the public interest ramifications of their decisions. This aspect of internet governance is often overlooked. The technical operation and maintenance of the internet is superintended by organisations that have little or no interactivity with any of the formalised power structures that underlie the various “governance by law” models of internet governance. The “technical model” of internet governance is an anomaly arising not necessarily from the technology, but from its operation.

ICANN

Of those involved in the technical sphere of Internet governance, ICANN is perhaps the best known. Its governance of the “root” or addressing systems makes it a vital player in the Internet governance taxonomy and for that reason requires some detailed consideration.

ICANN – the Internet Corporation for Assigned Names and Numbers – was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a “chokehold” over the internet because it is the only aspect of the entire decentralised, global system of the internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities ICANN is able to choose who is able to access cyberspace and what they will see when they are there. ICANN’s control effectively amounts, in the words of David Post, to “network life or death”. Further, if ICANN chooses to impose conditions on access to the internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there.

The obvious implication for governance theorists is that the ICANN model is not a theory but a practical reality. ICANN is the first indigenous cyberspace governance institution to wield substantive power and demonstrate a real capacity for effective enforcement. Ironically, while other internet governance models have demonstrated a sense of purpose but an acute lack of power, ICANN has suffered from excess power and an acute lack of purpose. ICANN arrived at its present position almost, but not quite, by default and has been struggling to find a meaningful raison d’être since. In addition it is pulled by opposing forces, all anxious to ensure that their vision of the new frontier prevails.

ICANN’s “democratic” model of governance has been attacked as unaccountable, anti-democratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN has been largely unresponsive to these criticisms and it has only been after concerted publicity campaigns by opponents that the board has publicly agreed to change aspects of the process.

As a governance model, a number of key points have emerged:

1.    ICANN demonstrates the internet’s enormous capacity for marshalling global opposition to governance structures that are not favourable to the interests of the broader internet community.

2.    Following on from point one, high profile, centralised institutions such as ICANN make extremely good targets for criticism.

3.    Despite enormous power and support from similarly powerful backers, public opinion continues to prove a highly effective tool, at least in the short run, for stalling the development of unfavourable governance schemes.

4.    ICANN reveals the growing involvement of commercial and governmental interests in the governance of the internet and their reluctance to be openly associated with direct governance attempts.

5.    ICANN demonstrates an inability to project its influence beyond its core functions to matters of general policy or governance of the internet.

ICANN lies within the less formal area of the governance taxonomy: although it operates with a degree of autonomy, it retains a formal character. Its power is internationally based (and although still derived from the United States government, there is a desire by the US to “de-couple” its involvement with ICANN). It has greater private rather than public sources of authority, in that its power derives from relationships with registries, ISPs and internet users rather than sovereign states. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes.

The Polycentric Model of Internet Governance

The Polycentric Model embraces, for certain purposes, all of the preceding models. It does not envelop them, but rather employs them for specific governance purposes.

This theory has been developed by Professor Scott Shackelford who, in his article “Toward Cyberpeace: Managing Cyberattacks Through Polycentric Governance”, locates Internet Governance within the specific context of cybersecurity and the maintenance of cyberpeace. He contends that the international community must come together to craft a common vision for cybersecurity while the situation remains malleable. Given the difficulties of accomplishing this in the near term, bottom-up governance and dynamic, multilevel regulation should be undertaken consistent with polycentric analysis.

While he sees a role for governments and commercial enterprises he proposes a mixed model. Neither governments nor the private sector should be put in exclusive control of managing cyberspace since this could sacrifice both liberty and innovation on the mantle of security, potentially leading to neither.

The basic notion of polycentric governance is that a group facing a collective action problem should be able to address it in whatever way they see fit, which could include using existing or crafting new governance structures; in other words, the governance regime should facilitate the problem-solving process.

The model demonstrates the benefits of self-organization, networking regulations at multiple levels, and the extent to which national and private control can co-exist with communal management.  A polycentric approach recognizes that diverse organizations and governments working at multiple levels can create policies that increase levels of cooperation and compliance, enhancing flexibility across issues and adaptability over time.

Such an approach, a form of “bottom-up” governance, contrasts with what may be seen as an increasingly state-centric approach to Internet Governance and cybersecurity which has become apparent in fora such as the G8 Conference in Deauville in 2011 and the ITU Conference in Dubai in 2012. The approach also recognises that cyberspace has its own qualities or affordances, among them its decentralised nature and the continuing dynamic change flowing from permissionless innovation. To put it bluntly, it is difficult to foresee the effects of regulatory efforts, which are generally sluggish in development and enactment, with the result that by the time regulation is in place the particular matter it was designed to address has changed and the regulatory system is no longer relevant. Polycentric regulation provides a multi-faceted response to cybersecurity issues in keeping with the complexity of crises that might arise in cyberspace.

So how should the polycentric model work? First, allies should work together to develop a common code of cyber conduct that includes baseline norms, with negotiations continuing on a harmonized global legal framework. Second, governments and CNI operators should establish proactive, comprehensive cybersecurity policies that meet baseline standards and require hardware and software developers to promote resiliency in their products without going too far and risking balkanization. Third, the recommendations of technical organizations such as the IETF should be made binding and enforceable when taken up as industry best practices. Fourth, governments and NGOs should continue to participate in U.N. efforts to promote global cybersecurity, but also form more limited forums to enable faster progress on core issues of common interest. And fifth, training campaigns should be undertaken to share information and educate stakeholders at all levels about the nature and extent of the cyber threat.

Code is Law

Located centrally within the taxonomy, and closely related to the Engineering and Technology category of governance models, is the “code is law” model designed by Harvard professor Lawrence Lessig and, to a lesser extent, Joel Reidenberg. The school encompasses in many ways the future of the internet governance debate. It demonstrates a balance of opposing formal and informal forces and represents a paradigm shift in the way internet governance is conceived, because the school largely ignores the formal dialectic around which the governance debate is centred and has instead developed a new concept of “governance and the internet”. While Lessig’s work has been favourably received even by his detractors, it is still too early to see whether it is indeed a correct description of the future of internet governance, or merely a dead end. Certainly, it is one of the most discussed concepts of cyberspace jurisprudence.

Lessig asserts that human behaviour is regulated by four “modalities of constraint”: law, social norms, markets and architecture. Each of these modalities influences behaviour in different ways:

1.    law operates via sanction;

2.    markets operate via supply and demand and price;

3.    social norms operate via human interaction; and

4.    architecture operates via the environment.

Governance of behaviour can be achieved by any one or any combination of these four modalities. Law is unique among the modalities in that it can directly influence the others.

Lessig argues that in cyberspace, architecture is the dominant and most effective modality to regulate behaviour. The architecture of cyberspace is “code” — the hardware and software — that creates the environment of the internet. Code is written by code writers; therefore it is code writers, especially those from the dominant software and hardware houses such as Microsoft and AOL, who are best placed to govern the internet. In cyberspace, code is law in the imperative sense of the word. Code determines what users can and cannot do in cyberspace.

“Code is law” does not mean lack of regulation or governmental involvement, although any regulation must be carefully applied. Neil Weinstock Netanel argues that “contrary to the libertarian impulse of first generation cyberspace scholarship, preserving a foundation for individual liberty, both online and off, requires resolute, albeit carefully tailored, government intervention”. Internet architecture and code effectively regulate individual activities and choices in the same way law does, and market actors need to use these regulatory technologies in order to gain a competitive advantage. Thus, it is the role of government to set the limits on private control to facilitate this.

The crux of Lessig’s theory is that law can directly influence code. Governments can regulate code writers and ensure the development of certain forms of code. Effectively, law, and those who control it, can determine the nature of the cyberspace environment and thus, indirectly, what can be done there. This has already been done. Code is being used to rewrite Copyright Law. Technological Protection Measures (TPMs) allow content owners to regulate the access and/or use to which a consumer may put digital content. Opportunities to exercise fair uses or permitted uses can be limited beyond normal user expectations and beyond what the law previously allowed for analogue content. The provision of content in digital format, the use of TPMs and the added support that legislation gives to protect TPMs effectively allows content owners to determine what limitations they will place upon users’ utilisation of their material. It is possible that the future of copyright lies not in legislation (as it has in the past) but in contract.
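What that looks like in practice can be shown with a short, purely illustrative sketch in Python. None of the names here (Licence, ALLOWED_REGION and so on) come from any real TPM scheme; they are hypothetical. The point is that the conditions of use are set unilaterally in code by the content owner, so a use the law might otherwise permit is simply unavailable if the code does not allow it.

from dataclasses import dataclass
from datetime import date

@dataclass
class Licence:                      # hypothetical licence record
    expires: date
    region: str
    devices_used: int

ALLOWED_REGION = "NZ"               # conditions chosen by the content owner,
MAX_DEVICES = 3                     # not by copyright legislation

def open_content(licence: Licence, today: date) -> bytes:
    # Code, not law, decides whether the content can be used at all.
    if today > licence.expires:
        raise PermissionError("licence expired")
    if licence.region != ALLOWED_REGION:
        raise PermissionError("not licensed for this region")
    if licence.devices_used >= MAX_DEVICES:
        raise PermissionError("device limit reached")
    return b"decrypted content"     # stand-in for the real decryption step

print(open_content(Licence(date(2030, 1, 1), "NZ", 1), date.today()))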

 

Informal Models and Aspects of Digital Liberalism

Digital liberalism is not so much a model of internet governance as it is a school of theorists who approach the issue of governance from roughly the same point on the political compass: (neo)-liberalism. Of the models discussed, digital liberalism is the broadest. It encompasses a series of heterogeneous theories that range from the cyber-independence writings of John Perry Barlow at one extreme, to the more reasoned private legal ordering arguments of Froomkin, Post and Johnson at the other. The theorists are united by a common “hands off” approach to the internet and a tendency to respond to governance issues from a moral, rather than a political or legal perspective.

Regulatory Arbitrage – “Governance by whomever users wish to be governed by”

The regulatory arbitrage school represents a shift away from the formal schools, and towards digital liberalism. “Regulatory arbitrage” is a term coined by the school’s principal theorist, Michael Froomkin, to describe a situation in which internet users “migrate” to jurisdictions with regulatory regimes that give them the most favourable treatment. Users are able to engage in regulatory arbitrage by capitalising on the unique geographically neutral nature of the internet. For example, someone seeking pirated software might frequent websites geographically based in a jurisdiction that has a weak intellectual property regime. On the other side of the supply chain, the supplier of gambling services might, despite residing in the United States, deliberately host his or her website out of a jurisdiction that allows gambling and has no reciprocal enforcement arrangements with the United States.

Froomkin suggests that attempts to regulate the internet face immediate difficulties because of the very nature of the entity that is to be controlled. He draws upon the analogy of the mythological Hydra, but whereas the beast was a monster, the internet may be predominantly benign. Froomkin identifies the internet’s resistance to control as being caused by the following two technologies:

1.    The internet is a packet-switching network. This makes it difficult for anyone, including governments, to block or monitor information originating from large numbers of users.

2.    Powerful military-grade cryptography is available to internet users and can, if used properly, make messages unreadable to anyone but the intended recipient.

As a result, internet users have access to powerful tools which can be used to enable anonymous communication – unless, of course, their governments impose strict access controls, run an extensive monitoring programme, or can persuade their citizens not to use these tools through liability rules or the criminal law.
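The second of those technologies can be illustrated with a minimal sketch in Python, using the widely available third-party “cryptography” package. The message and the key handling are purely illustrative, and the sketch assumes the correspondents have already shared the key between themselves.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret known only to the correspondents
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                       # all an intermediary or regulator can see
print(cipher.decrypt(token))       # what the intended recipient recovers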

Froomkin’s theory is principally informal in character. Private users, rather than public institutions, are responsible for choosing the governance regime they adhere to. The mechanism that allows this choice is technical and works in opposition to legally based models. Finally, the model is effectively global, as users choose from a world of possibilities in deciding which particular regime(s) to submit to, rather than being bound to a single national regime. While undeniably informal, the theory stops short of the full digital liberalist position.

Unlike digital liberalists who advocate a separate internet jurisdiction encompassing a multitude of autonomous self-regulating regimes within that jurisdiction, Froomkin argues that the principal governance unit of the internet will remain the nation-state. He argues that users will be free to choose from the regimes of states rather than be bound to a single state, but does not yet advocate the electronic federalism model of digital liberalism.

Digital Libertarianism – Johnson and Post

Digital liberalism is the oldest of the internet governance models and represents the original response to the question: “How will the internet be governed?” Digital liberalism developed in the early 1990s as the internet began to show the first inklings of its future potential. The development of a Graphical User Interface together with web browsers such as Mosaic made the web accessible to the general public for the first time. Escalating global connectivity and a lack of understanding or reaction by world governments contributed to a sense of euphoria and digital freedom that was reflected in the development of digital liberalism.

In its early years digital liberalism evolved around the core belief that “the internet cannot be controlled” and that consequently “governance” was a dead issue. By the mid-1990s advances in technology and the first government attempts to control the internet saw this descriptive claim gradually give way to a competing normative claim that “the internet can be controlled but it should not be”. These claims are represented as the sub-schools of digital liberalism — cyberanarchism and digital libertarianism.

In “And How Shall the Net be Governed?” David Johnson and David Post posed the following questions:

Now that lots of people use (and plan to use) the internet, many — governments, businesses, techies, users and system operators (the “sysops” who control ID issuance and the servers that hold files) — are asking how we will be able to:

(1)   establish and enforce baseline rules of conduct that facilitate reliable communications and trustworthy commerce; and

(2)   define, punish and prevent wrongful actions that trash the electronic commons or impose harm on others.

In other words, how will cyberspace be governed, and by what right?

Post and Johnson point out that one of the advantages of the internet is its chaotic and ungoverned nature. As to the question of whether the net must be governed at all, they note that the three-judge Federal Court in Philadelphia that threw out the Communications Decency Act on First Amendment grounds “seemed thrilled by the ‘chaotic’ and seemingly ungovernable character of the net”. Post and Johnson argue that because of its decentralised architecture and lack of a centralised rule-making authority the net has been able to prosper. They assert that the freedom the internet allows and encourages has meant that sysops have been free to impose their own rules on users. However, the ability of the user to choose which sites to visit, and which to avoid, has meant that the tyranny of system operators has been avoided and the adverse effect of any misconduct by individual users has been limited.

 Johnson and Post propose the following four competing models for net governance:

1.    Existing territorial sovereigns seek to extend their jurisdiction and amend their own laws as necessary to attempt to govern all actions on the net that have substantial impacts upon their own citizenry.

2.    Sovereigns enter into multilateral international agreements to establish new and uniform rules specifically applicable to conduct on the net.

3.    A new international organisation can attempt to establish new rules — a new means of enforcing those rules and of holding those who make the rules accountable to appropriate constituencies.

4.    De facto rules may emerge as the result of the interaction of individual decisions by domain name and IP registries (dealing with conditions imposed on possession of an on-line address), by system operators (local rules to be applied, filters to be installed, who can sign on, with which other systems connection will occur) and users (which personal filters will be installed, which systems will be patronised and the like).

The first three models are centralised or semi-centralised systems and the fourth is essentially a self-regulatory and evolving system. In their analysis, Johnson and Post consider all four and conclude that territorial laws applicable to online activities where there is no relevant geographical determinant are unlikely to work, and that international treaties to regulate, say, e-commerce are unlikely to be drawn up.

Johnson and Post proposed a variation of the third option — a new international organisation that is similar to a federalist system, termed “net federalism”.

In net federalism, individual network systems rather than territorial sovereignty are the units of governance. Johnson and Post observe that the law of the net has emerged, and can continue to emerge, from the voluntary adherence of large numbers of network administrators to basic rules of law (and dispute resolution systems to adjudicate the inevitable inter-network disputes), with individual users voting with their electronic feet to join the particular systems they find most congenial. Within this model multiple network confederations could emerge. Each may have individual “constitutional” principles — some permitting and some prohibiting, say, anonymous communications, others imposing strict rules regarding redistribution of information and still others allowing freer movement — enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.

Digital liberalism is clearly an informal governance model and for this reason has its attractions for those who enjoyed the free-wheeling approach to the internet in the early 1990s. It advocates almost pure private governance, with public institutions playing a role only in so much as they validate the existence and independence of cyber-based governance processes and institutions. Governance is principally to be achieved by technical solutions rather than legal process and occurs at a global rather than national level. Digital liberalism is very much the antithesis of the digital realist school and has been one of the two driving forces that has characterised the internet governance debate in the last decade.

Cyberanarchism – John Perry Barlow

In 1990, the FBI were involved in a number of actions against a perceived “computer security threat” posed by a Texas role-playing game developer named Steve Jackson. Following this, John Perry Barlow and Mitch Kapor formed the Electronic Frontier Foundation. Its mission statement says that it was “established to help civilize the electronic frontier; to make it truly useful and beneficial not just to a technical elite, but to everyone; and to do this in a way which is in keeping with our society’s highest traditions of the free and open flow of information and communication”.

One of Barlow’s significant contributions to thinking on internet regulation was the article “A Declaration of the Independence of Cyberspace” which, although idealistic in expression and content, eloquently expresses a point of view held by many regarding efforts to regulate cyberspace. The declaration followed the passage of the Communications Decency Act. In “The Economy of Ideas: Selling Wine without Bottles on the Global Net”, Barlow challenges assumptions about intellectual property in the digital online environment. He suggests that the nature of the internet environment means that different legal norms must apply. While the theory has its attractions, especially for the young and the idealistic, the fact of the matter is that “virtual” actions are grounded in the real world, are capable of being subject to regulation and, subject to jurisdiction, are capable of being subject to sanction. Indeed, we only need to look at the Digital Millennium Copyright Act (US) and the Copyright Amendment (Digital Agenda) Act 2000 (Australia) to gain a glimpse of how, when confronted with reality, Barlow’s theory dissolves.

Regulatory Assumptions

To understand how regulators approach the control of internet content, one must first recognise some of the assumptions that appear to underlie any system of data network regulation.

First and foremost, sovereign states have the right to regulate activity that takes place within their own borders. This right to regulate is moderated by certain international obligations. Of course there are difficulties in identifying the exact location of particular actions, but the internet functions only at the direction of the people who use it. These people live, work, and use the internet while physically located within the territory of a sovereign state, and so it is unquestionable that states have the authority to regulate their activities.

A second assumption is that a data network infrastructure is critical to the continued development of national economies. Data networks are a regular business tool like the telephone. The key to the success of data networking infrastructure is its speed, widespread availability, and low cost. If this last point is in doubt, one need only consider that the basic technology of data networking has existed for more than 20 years. The current popularity of data networking, and of the internet generally, can be explained primarily by the radical lowering of costs related to the use of such technology. A slow or expensive internet is no internet at all.

The third assumption is that international trade requires some form of international communication. As more communication takes place in the context of data networking, then continued success in international trade will require sufficient international data network connections.

The fourth assumption is that there is a global market for information. While it is still possible to internalise the entire process of information gathering and synthesis within a single country, this is an extremely costly process. If such expensive systems represent the only source of information available, domestic businesses will be placed at a competitive disadvantage in the global marketplace.

The final assumption is that unpredictability in the application of the law or in the manner in which governments choose to enforce the law will discourage both domestic and international business activity. In fashioning regulations for the internet, it is important that the regulations are made clear and that enforcement policies are communicated in advance so that persons have adequate time to react to changes in the law.

Concluding Thoughts

Governance and the Properties of the Digital Paradigm

Regulating or governing cyberspace faces challenges that lie within the properties or affordances of the Digital Paradigm. To begin with, territorial sovereignty concepts, which have been the basis for most regulatory or governance activity, rely on physical and defined geographical realities. By its nature, a communications system like the Internet challenges that model. Although the Digital Realists assert that effectively nothing has changed, and that is true to a limited extent, the governance functions that can be exercised are only applicable to that part of cyberspace that sits within a particular geographical space. Because the Internet is a distributed system it is impossible for any one sovereign state to impose its will upon the entire network. It is for this reason that some nations are setting up their own networks, independent of the Internet. Although part of the motivation is the perception that the Internet is controlled by the US, the reality is that nationally based “splinternets” give sovereigns greater ability to assert control over the network, both in terms of the content layer and the various technical layers beneath it that make up the medium. The distributed network presents the first challenge to national or territorially based regulatory models.

Of course, aspects of sovereign power may be ceded by treaty or by membership of international bodies such as the United Nations. But does, say, the UN have the capacity to impose a worldwide governance system over the Internet? True, it created the IGF, but that organisation has no power and is a multi-stakeholder policy think tank. Any attempt at a global governance model requires international consensus and, as the ITU meeting in Dubai in December 2012 demonstrated, that is not forthcoming at present.

Two other affordances of the Digital Paradigm challenge the establishment of traditional regulatory or governance systems. Those affordances are continuing disruptive change and permissionless innovation. The legislative process is by its very nature measured. Often it involves cobbling together a consensus. All of this takes time, and by the time there is a crystallised proposition the mischief that the regulation is trying to address either no longer exists or has changed or taken another form. The now limited usefulness (and therefore effectiveness) of the provisions of ss 122A–122P of the New Zealand Copyright Act 1994 demonstrates this proposition. Furthermore, the nature of the legislative process, involving reference to Select Committees and the prioritisation of other legislation within the time available in a Parliamentary session, means that a “swift response” to a problem is very rarely possible.

Permissionless innovation adds to the problem. As long as it continues, and there is no sign that the inventiveness of the human mind is likely to slow down, developers and software writers will continue to change the digital landscape. The target of a regulatory system may therefore be continually moving, and certainty of law, a necessity in any society that operates under the Rule of Law, may be compromised. Again, the file sharing provisions of the New Zealand Copyright Act provide an example. The definition of file sharing is restricted to a limited number of software applications, most obviously BitTorrent. Workarounds such as virtual private networks and magnet links, along with anonymisation proxies, fall outside the definition. In addition, the definition addresses sharing and does not capture a person who downloads infringing content but does not share it by uploading.

Associated with disruptive change and permissionless innovation are some other challenges to traditional governance thinking. Participation and interactivity, along with exponential dissemination, emphasise the essentially bottom-up participatory nature of the Internet ecosystem. Indeed, this is reflected in the quality of permissionless innovation, where any coder may launch an app without any regulatory sign-off. The Internet is perhaps the greatest manifestation of democracy that there has been. It is the Agora of Athens on a global scale, a cacophony of comment, much of it trivial, but everyone has the opportunity to speak and potentially to be heard. Spiro Agnew’s “silent majority” need be silent no longer. The events of the Arab Spring showed the way in which the Internet can be used to motivate populaces in the face of oppressive regimes. It seems unlikely that an “undemocratic” regulatory regime could be put in place absent the “consent of the governed”; despite the usual level of apathy in political matters, and given the participatory nature of the medium, netizens would be unlikely to tolerate such interference.

Perhaps the answer to the issue of Internet Governance is already apparent – a combination of Lessig’s “Code is Law” and the technical standards organisations that actually make the Internet work, such as ISOC, the IETF and ICANN. Much criticism has been levelled at ICANN’s lack of accountability, but in many respects similar issues arise with the IETF and the IAB, dominated as they are by groups of engineers. But in the final analysis, perhaps this is the governance model that is the most suitable. The objective of engineers is to make systems work at the most efficient level. Surely this is the sole objective of any regulatory regime. Furthermore, governance by technicians, if it can be called that, contains safeguards against political, national or regional capture. By all means, local governments may regulate content. But that is not the primary objective of Internet governance. Internet governance addresses the way in which the network operates. And surely that is an engineering issue rather than a political one.
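
The “Code is Law” idea can be shown in miniature. The sketch below is purely illustrative, in Python; the host names, the blocklist and the function are invented for the example and do not reflect any actual operator’s policy. It shows how a rule embedded in an intermediary’s software operates as de facto regulation: the policy is enforced by the architecture itself, before any legal process is engaged.

```python
# Illustrative only: "code is law" in miniature. A network intermediary
# whose embedded policy, not any statute, decides what traffic passes.
# The domain names and the blocklist below are hypothetical.

from urllib.parse import urlparse

# Hypothetical policy chosen by the system operator, not by a legislature.
BLOCKED_HOSTS = {"example-banned-site.test"}


def is_request_permitted(url: str) -> bool:
    """Return True if the operator's embedded policy allows this request."""
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_HOSTS


if __name__ == "__main__":
    for url in ("https://example.com/page",
                "https://example-banned-site.test/forum"):
        verdict = "allowed" if is_request_permitted(url) else "blocked"
        print(f"{url}: {verdict}")
```

The point of the sketch is not the triviality of the filter but where the rule lives: a user subject to it experiences the block as a property of the network, whatever the content of the local law.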

 The Last Word

Perhaps the last word on the general topic of internet regulation should be left to Tsutomu Shimomura, a computational physicist and computer security expert who was responsible for tracking down the hacker Kevin Mitnick, a pursuit he recounted in the excellent book Takedown:

The network of computers known as the internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. That community was based on a shared sense of trust. Today, the electronic walls going up everywhere on the Net are the clearest proof of the loss of that trust and community. It’s a great loss for all of us.