The Christchurch Call was a meeting co-hosted by New Zealand’s Prime Minister, Jacinda Ardern, and French President Emmanuel Macron, held in Paris on 15 May 2019. It is a global call which aims to “bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism.” It is intended to be an ongoing process.
This piece was written at the end of last year and for one reason or another – and primarily the Covid-19 crisis – has languished. I post it now as the first anniversary of the Call approaches. The overall context is that of Internet Regulation – content or technology – and the difficulties that presents.
The Christchurch Call is not the first attempt to regulate or control Internet-based content. It will not be the last. And, despite its aim to reduce or eliminate the use of social media to organise and promote terrorism and violent extremism, it carries within it the seeds of its own downfall. The reason is that, like so many efforts before it, the target of the Christchurch Call is content rather than technology.
Calls to regulate content and access to it have been around since the Internet went public.
The Christchurch Call is eerily familiar, not because of what motivated and inspired it, but because it represents an effort by Governments and States to address perceived problems posed by Internet based content.
In 2011 a similar effort was led by then French President Nicolas Sarkozy at the economic summit at Deauville – is it a coincidence that once again the French are leaders in this present initiative? So what was the Deauville initiative all about?
Deauville May 2011
In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and SMS messages – have brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that in order to understand the role of social media during the Arab Spring one must take account of the context of high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then-President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.
In May 2011 at the first e-G8 Forum, before the G8 summit in France, President Nicolas Sarkozy issued a provocative call for stronger Internet regulation. Mr Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution.
This revolution did not have a flag and Mr Sarkozy acknowledged that the Internet belonged to everyone, citing the Arab Spring as a positive example. However, he warned executives of Google, Facebook, Amazon and eBay who were present:
“The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”
Mr Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures who are suspected of having had extramarital affairs, but he did not go as far as Mr Sarkozy who was pushing for a “civilized Internet” implying wide regulation.
However, the Deauville Communique did not extend as far as Mr Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, security of networks, and a crackdown on trafficking in children for sexual exploitation; however it did not advocate state control of the Internet but staked out a role for governments.
Deauville was not an end to the matter. The appetite for Internet regulation by domestic governments had just been whetted. This was demonstrated by the events at the ITU meeting in Dubai in 2012.
The ITU meeting in Dubai December 2012
The meeting of the International Telecommunications Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol which was one of the important technologies that made the Internet possible, sounded a warning when he said:
“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.
This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union — the ITU — a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai….”
Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries saw an opportunity to create regulatory authority over the Internet. In June 2012, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, in September 2012, the Shanghai Cooperation Organization — which counts China, Russia, Tajikistan, and Uzbekistan among its members — submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”
As a result of these efforts, there was a strong possibility that the ITU would significantly amend the International Telecommunication Regulations — a multilateral treaty last revised in 1988 — in a way that authorizes increased ITU and member state control over the Internet. These proposals, if they had been implemented, would have changed the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.
What is the ITU?
The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was originally founded in 1865 and in the past has been concerned with technical communications issues such as standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources and the fostering of sustainable, affordable access to information and communication technology. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.
The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs) that had been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012, the Final Acts of the Conference were published. The controversy arose from a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was withdrawn, its governance model had defined the Internet as an “international conglomeration of interconnected telecommunication networks”, and provided that “Internet governance shall be effected through the development and application by governments”, with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance”.
This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the Internet.”
However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.
From the Conference three different versions of political power vis-à-vis the Internet became clear. Cyber sovereignty states such as Russia, China and Saudi Arabia advocated that the mandate of the ITU be extended to include Internet governance issues. The United States and allied predominantly Western states were of the view that the current multi-stakeholder processes should remain in place. States such as Brazil, South Africa and Egypt rejected the concept of Internet censorship and closed networks but expressed concern at what appeared to be United States dominance of aspects of Internet management.
In 2014 at the NETmundial Conference the multi-stakeholder model was endorsed, recognising that the Internet was a global resource and should be managed in the public interest.
The Impact of International Internet Governance
Issues surrounding Internet governance are important in this discussion because Internet control will directly impact upon content delivery and will thus have an impact upon freedom of expression in its widest sense.
Rules surrounding global media governance do not exist. The current model, based on localised rule systems, and the lack of harmonisation arise from differing cultural and social perceptions as to media content. Although Internet-based technologies have the means to provide a level of technical regulation, such as code itself, digital rights management and Internet filtering, the larger issue of control of the distribution system poses an entirely novel set of issues that have not been encountered by traditional localised print and broadcast systems.
The Internet separates the medium from the message and issues of Internet governance will have a significant impact upon the means and scope of content delivery. From the perspective of media freedom and freedom of expression, Internet governance is a matter that will require close attention. As matters stand at the moment the issue of who rules the channels of communication is a work in progress.
Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate, or in some way govern and control, the Internet. Although at first glance this may seem to be directed at the content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet and possibly even the backbone itself. The Christchurch Call is merely a continuation of that desire by governments to regulate and control the Internet.
The early history of the commercial Internet reveals a calculated effort to ensure that the new technology was not the subject of regulation. The Progress and Freedom Foundation, established in 1993, had an objective of ensuring that, unlike radio or television, the new medium would lie beyond the realm of government regulation. At a meeting in 1994, attended by futurists Alvin Toffler and Esther Dyson along with George Keyworth, President Reagan’s former science adviser, a Magna Carta for the Knowledge Age contended that although the industrial age may have required some form of regulation, the knowledge age did not. If there was to be an industrial policy for the knowledge age, it should focus on removing barriers to competition and massively deregulating the telecommunications and computing industries.
On 8 February 1996 the objectives of the Progress and Freedom Foundation became a reality when President Clinton signed the Telecommunications Act. This legislation effectively deregulated the entire communications industry, allowed for the subsequent consolidation of media companies and prohibited regulation of the Internet. On the same day, as a statement of disapproval that the US government would regulate even by deregulating, John Perry Barlow released his Declaration of the Independence of Cyberspace from the World Economic Forum in Davos, Switzerland.
Small wonder that the United States of America resists attempts at Internet regulation. But the problem is more significant than the will or lack of will to regulate. The problem lies within the technology itself and although efforts such as Deauville, Dubai, the NetMundial Conference and the Christchurch Call may focus on content, this is merely what Marshall McLuhan termed the meat that attracts the lazy dog of the mind. To regulate content requires an understanding and appreciation of some of the deeper aspects or qualities of the new communications technology. Once these are understood, the magnitude of the task becomes apparent and the practicality of effectively achieving regulation of communications runs up against the fundamental values of Western liberal democracies.
One characteristic of the Digital Paradigm is that of permissionless innovation. No approvals are needed for developers to connect an application or a platform to the backbone of the Internet. All that is required is that the application comply with standards set by Internet engineers, and essentially these standards ensure that an application will be compatible with Internet protocols.
No licences are required to connect an application. No regulatory approvals are needed. A business plan need not be submitted for bureaucratic fiat. Permissionless innovation has been a characteristic of the Internet and has allowed it to grow. It allowed for the development of the Hypertext Transfer Protocol, which in turn enabled the World Wide Web – the most familiar aspect of the Internet today. It allowed for the development of a myriad of social media platforms. It co-exists with another quality of the Internet, that of continuing disruptive change – the reality that the environment is not static and does not stand still.
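The openness described above can be made concrete. HTTP – the protocol on which the Web runs – is a plain-text standard that anyone may implement without a licence or prior approval, simply by following the specification. The sketch below (the host name is purely illustrative) builds a valid HTTP/1.1 request and reads the status code out of a response by hand, with no special library and no permission sought from anyone:

```python
# Anyone may "speak" HTTP simply by following the published standard.
# No licence, registration or approval is required - only conformance.

def build_request(host: str, path: str = "/") -> str:
    """Assemble a minimal, standard-conformant HTTP/1.1 GET request."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"           # Host header is mandatory in HTTP/1.1
            "Connection: close\r\n"
            "\r\n")                       # blank line ends the header section

def parse_status(response: str) -> int:
    """Extract the numeric status code from a raw HTTP response."""
    # The status line looks like: "HTTP/1.1 200 OK"
    status_line = response.split("\r\n", 1)[0]
    return int(status_line.split(" ")[1])

print(build_request("example.org").splitlines()[0])   # GET / HTTP/1.1
print(parse_status("HTTP/1.1 200 OK\r\n\r\n"))        # 200
```

Any server that follows the same standard will understand that request – which is the whole point: interoperability flows from the open specification, not from any grant of permission.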
Targeting the most popular social media platforms will only address part of the problem. Permissionless innovation means that the leading platforms may modify their algorithms to try to capture extreme content, but this is a less than subtle solution and is prone to the error of false positives.
Permissionless innovation and the ability to develop and continue to develop other social media platforms brings into play Michael Froomkin’s theory of regulatory arbitrage – where users will migrate to the environment that most suits them. Should the major players so regulate their platforms that desired aspects are no longer available, users may choose to use other platforms which will be more “user friendly” or attuned to their needs.
The question that arises from this aspect of the Digital Paradigm is how one regulates permissionless innovation, given its critical position in the development of communications protocols. To constrain it, to tie it up in the red tape that accompanies broadcast licences and the like, would strangle technological innovation, evolution and development, and with them the continuing promise of the Internet as a developing communications medium.
An aspect of content on the Internet is what could be termed the persistence of information. Once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and may be retained on any one of them by the quality of exponential dissemination discussed below, despite the phenomenon of “link rot.” It has been summed up in another way by the phrase “the document that does not die.” Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere. This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus. Information persistence means that the information is there even if it has become difficult to locate, and retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine. The fact that information is persistent means that it is capable of location.
In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change. It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others.
An example of dynamic information may be found in the on-line newspaper which may break a story at 10am, receive information on the topic by midday and by 1pm on the same day have modified the original story. The static nature of print and the newspaper business model that it enabled meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle potentially may run on a 24 hour basis, with updates every five minutes.
Similarly, the ability that digital technologies provide for dialogue on any topic, enabled in many communication protocols primarily as a result of Web 2.0, means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change. This dynamic nature of information challenges the permanence that one may expect from persistence, and it must be acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the “document that does not die”.
Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer. Thus there is the potential for information to be other than static. If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version. Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical until the next edition. In the digital paradigm authors and publishers can control content from minute to minute.
In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required. In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference being that the original text vanishes and is replaced with the amended copy. Thus one may, with reason, doubt the validity or authenticity of information as it is transmitted.
Dissemination was one of the leading qualities of print identified by Elizabeth Eisenstein in her study of the printing press as an agent of change, and it has been a characteristic of all information technologies since. What the internet and digital technologies enable is a form of dissemination that has two elements.
One element is the appearance that information is transmitted instantaneously to both an active (online recipient) and a passive (potentially online but awaiting) audience. Consider the example of an email. The speed of transmission of email seems to be instantaneous (in fact it is not) but that enhances our expectations of a prompt response and concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender. Instant messaging is so called because it is instant, and a complex piece of information may be made available via a link on Twitter to a group of followers and then retweeted to an exponentially larger audience.
The second element deals with what may be called the democratization of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and communication media that have gone before. In the past information dissemination has been an expensive business. Publishing, broadcast, record and CD production and the like are capital-intensive businesses. It used to (and still does) cost a large amount of money and require a significant infrastructure to be involved in information gathering and dissemination. There were a few exceptions, such as very small-scale publishing using duplicators, carbon paper and samizdats, but in these cases dissemination was very limited. Another aspect of early information communication technologies is that they involved a monolithic centralized communication to a distributed audience. The model essentially was one of “one to many” communication or information flow.
The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognized that the Internet allows everyone to become a publisher. No longer is information dissemination centralized and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organizations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists. Once the information has been communicated it can “go viral” – a term used to describe the phenomenon of exponential dissemination as Internet users share information via email, social networking sites or other Internet information-sharing protocols. This in turn exacerbates the earlier quality of Information Persistence or “the document that does not die”, in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve or eliminate it.
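The contrast between the two models can be put in rough numbers. The following deliberately simplified sketch – all figures are illustrative assumptions, not measurements – compares a centralised broadcaster that reaches a fixed audience each cycle with a sharing network in which every current holder of the information forwards it to just two new people per cycle:

```python
# Toy comparison of "one to many" broadcast with "many to many" sharing.
# All numbers are illustrative assumptions, not empirical measurements.

def broadcast_reach(audience_per_cycle: int, cycles: int) -> int:
    """Centralised model: a fixed audience is reached in each cycle."""
    return audience_per_cycle * cycles

def sharing_reach(initial: int, forwards_per_holder: int, cycles: int) -> int:
    """Distributed model: every current holder forwards to new people each cycle."""
    total = initial
    newly_reached = initial
    for _ in range(cycles):
        newly_reached *= forwards_per_holder  # each holder passes it on
        total += newly_reached
    return total

for cycles in (1, 5, 10, 15):
    print(cycles, broadcast_reach(10_000, cycles), sharing_reach(1, 2, cycles))
```

On these toy assumptions a single initial recipient forwarding to only two others overtakes a 10,000-per-cycle broadcaster within about twenty cycles – which is the arithmetic behind “going viral”.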
It can be seen from this discussion that dissemination is not limited to the “on-line establishment” of Facebook, Twitter or Instagram, and trying to address the dissemination of extreme content by attacking it through “established” platforms will not eliminate it – it will just slow down the dissemination process. It will present an obstruction, for online censorship is just that – an obstruction to the information flow on the Internet. It was John Gilmore who said “The Net interprets censorship as damage and routes around it.” Primarily because State-based censorship is based on a centralized model while the dissemination of information on the Internet is based upon a distributed one, what effectively happens on the Internet is content redistribution, which is a reflection both of Gilmore’s adage and the quality of exponential dissemination.
The Dark Web
Finally there is the aspect of the Internet known as the Dark Web. The searchable web comprises perhaps 10% of available Internet content; beyond it there is content that is not amenable to search, known as the Deep Web, which encompasses sites such as LexisNexis and Westlaw if one seeks an example from the legal sphere.
The Deep Web is not the Dark Web. The Dark Web is altogether different. It is more difficult to reach than the surface or deep web, since it’s only accessible through special browsers such as the Tor browser. The dark web is the unregulated part of the internet. No organization, business or government is in charge of it or able to apply rules. This is exactly the reason why the dark web is commonly associated with illegal practices. It’s impossible to reach the dark web through a ‘normal’ browser, such as Google Chrome or Mozilla Firefox. Even in the Tor browser you won’t be able to find any ‘dark’ websites ending in .com or .org. Instead, URLs usually consist of a random mix of letters and numbers and end in .onion. Moreover, the URLs of websites on the dark net change regularly. If there are difficulties in regulating content via social media platforms, to do so via the Dark Web would be impossible. Yet it is within that environment that most of the extreme content may be found.
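The “random mix of letters and numbers” has, in fact, a precise shape. Current (v3) Tor onion services use a hostname of 56 base32 characters (the letters a–z and the digits 2–7) followed by “.onion”; the name encodes the service’s public key rather than being registered with any naming authority, which is precisely why no central body can revoke it. A minimal format check, offered only as a sketch:

```python
import re

# A v3 onion-service hostname is 56 base32 characters (a-z, 2-7) + ".onion".
# The hostname is derived from the service's public key, so there is no
# registry that could be ordered to take it down.
ONION_V3 = re.compile(r"^[a-z2-7]{56}\.onion$")

def looks_like_onion_address(host: str) -> bool:
    """Return True if the hostname has the shape of a v3 onion address."""
    return ONION_V3.match(host.lower()) is not None

print(looks_like_onion_address("a" * 56 + ".onion"))  # True: correct shape
print(looks_like_onion_address("example.com"))        # False: ordinary DNS name
```

Note that the check only tests the shape of the name; whether a service actually exists behind it can only be discovered from inside the Tor network itself.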
The Christchurch Call has had some very positive effects. It has drawn attention, yet again, to the problem of dissemination of extreme and terrorist content online. It should be remembered that this is not a new issue and has been in the sights of politicians since Deauville, although in New Zealand, as far back as 1993, there were proposals to deal with the problems posed by the availability of pornography online.
Another positive outcome of the Christchurch Call has been to increase public awareness and corporate acceptance of the necessity for there to be some standards of global good citizenship on the part of large and highly profitable Internet-based organisations. It is not enough for a company to have as its guiding light “don’t be evil”; more is required, including steps to ensure that its services are not facilitating the doing of evil by others.
At the moment the Christchurch Call has adopted, at least in public, a velvet glove approach, although it is not hard to imagine that in some of the closed meetings the steel fist has been, if not threatened, at least uncovered. There are a number of ways that the large conglomerates might be persuaded to toe a more responsible line. One is to introduce the concept of an online duty of care, as has been suggested in the United Kingdom. Although this sounds like a comfortable and simple concept, anyone who has spent some time studying the law of torts will understand that the duty of care is a highly nuanced and complex aspect of the law of obligations, and one which will require years of litigation and development before it achieves a satisfactory level of certainty.
Another way to have conglomerates toe the line is to increase the costs of doing business. Although it is in a different sphere – that of e-commerce – the recent requirement by the New Zealand Government upon overseas vendors to impose GST is an example, although I was highlighting this issue 20 years ago. Governments do not have a tendency to move fast although they do have a tendency to break things once the sleeping giant awakes.
Yet these various moves and others like them are really rather superficial and only scratch the surface of the content layer of the Internet. The question must be asked – how serious are the governments of the Christchurch Call about regulating not simply access to content but the means by which content is accessed – the technology?
The lessons of history give us some guidance. The introduction of the printing press into England was followed by 120 years of unsuccessful attempts to control the content of printed material. It was not until the Star Chamber Decrees of 1634 that the Stuart monarchy put in place some serious and far-reaching regulatory requirements to control not what was printed (although that too was the subject of the 1634 provisions) but how it was printed. The way in which the business and process of printing was regulated gave the State unprecedented control not only over content but over the means of production and dissemination of that content. The reaction against this – a process spanning many years – led to our present values that underpin freedom of the press and freedom of expression.
As new communications technologies have been developed the State has interested itself in imposing regulatory requirements. There is no permissionless innovation available in setting up a radio station or television network. The State has had a hand of varying degrees of heaviness throughout the development and availability of both these media. In 1966 there was a tremendous issue about whether or not a ship that was to be the platform for the unlicensed and therefore “pirate” radio station Radio Hauraki would be allowed to sail. The State unsuccessfully tried to prevent this.
Once upon a time in New Zealand (and still in the United Kingdom) anyone who owned a television set had to pay a broadcasting fee. This ostensibly would be applied to the development of content but is indicative of the level of control that the State exerted. And it was not a form of content regulation. It was regulation that was applied to access to the technology.
More recently we are well aware of the so-called “Great Firewall of China” – a massive state-sponsored means of controlling the technology to prevent access to content. And conglomerates such as Google have found that if they want to do business in China they must play by Chinese rules.
The advocacy of greater technological control has come from Russia, Brazil, India and some of the Arab countries. These States I think understand the import of McLuhan’s paradox of technology and content. The issue is whether or not the Christchurch Call is prepared to take that sort of radical step and proceed to consider technological regulation rather than step carefully around the edges of the problem.
Of course, one reason why at least some Western democracies would not wish to take such an extreme step lies in their own reliance upon the Internet as a means of doing business, be it by way of using the Internet for the collection of census data, for providing taxation services or online access to benefits and other government services. Indeed the use of the Internet by politicians who use their own form of argumentative speech has become the norm. Often, however, we find that the level of political debate is as banal and cliched as the platforms that are used to disseminate it. But to put it simply, where would politicians be in the second decade of the 21st Century without access to Facebook, Twitter or Instagram (or whatever new flavour of platform arises as a result of permissionless innovation)?
I think it is safe to say that the Christchurch Call is no more and no less than a very well managed and promoted public relations exercise that is superficial and will have little long-term impact. It will go down in history as part of a continuing story that really started with Deauville and will continue long after it.
Only when Governments are prepared to learn and apply the lessons about the Internet and the way that it works will we see effective regulatory steps instituted.
And then, when that occurs, will we realise that democracy and the freedom that we have to hold and express our own opinions is really in trouble.