Covid 19 and the Future: Utopia or Dystopia

Once again an article by Simon Wilson has piqued my interest. In my post “The Culture of Idealised Individualism” I ventured to suggest that he is a bit preachy, a bit righteous, at times a bit of a high-horsed moralist. Certainly, I said, much of his thinking is left of centre. And as I emphasised in that post, this is still a democracy and he is entitled to his opinion and to express it. He has a soap-box in the form of the NZ Herald. I have this blog, with a rather less extensive reach. Yet Mr Wilson recently put forward certain arguments and propositions that should be answered or challenged.

Mr Wilson’s piece in the NZ Herald for 5 May 2020 is entitled “Covid 19 Coronavirus: Simon Wilson: Is this the death of neoliberalism?” It is an interesting piece but is primarily a polemic against a rather ill-defined view of neo-liberalism, coupled with a hope for some utopian collectivist future – a better society – under a benevolent Government that will look after our every need.

Allow me to unpick a few things. First, in the preceding paragraph I used the word “utopian”. The meaning usually ascribed to that word is an imaginary place or commonwealth enjoying a perfect social, legal and political system, as depicted in a book published in 1516 by Sir Thomas More.

Wilson’s words

“What we are doing now has the makings of a great achievement of civilisation. Those societies that get their pandemic response right have the chance to become more resilient, less burdened by their current failings, better able to face the next crisis and the next”

sound like a search for Utopia.

But was More’s Utopia a perfect society? Did he intend it to be a blueprint for an ideal commonwealth? Quite the contrary. More was a lawyer, and one of the skills that he learned at the Inns of Court – the training ground for members of the legal profession – was case putting. Case putting was a form of argument that was employed when one wanted to demonstrate the futility or impossibility of a certain proposition. It is a form of demonstrative oratory – one of the tools of rhetoric.

More demonstrated that his Utopia was not possible by the use of irony and ambivalence. “Utopia” from the Greek means “no place” – rather like Samuel Butler’s “Erewhon” which, of course, is – very nearly – “nowhere” spelled backwards.

Behind what is ostensibly a serious text is satire. Ruskin considered it one of the most really mischievous books ever written and Erasmus, a contemporary and correspondent of More, suggested that one should read it if one wanted to laugh. A perfect society? I don’t think so.

But – and this is my second point – the word Utopia provides us with another – an opposite – and that is the word “dystopia” or, as John Stuart Mill put it, “too bad to be practicable.”

The word is frequently used in speculative fiction to describe a world in which we would not like to live – one that we should strive to avoid.

Mr Wilson refers to the concept of dystopia in his article, quoting a libertarian MP at Westminster who suggested that a bill being introduced would usher in a dystopian society. Wilson went on to argue that the measures being implemented are anything but that, and that the steps being taken will build a better society. He suggests that New York is an example of dystopia.

Mr Wilson is incorrect. The society in which we would rather not live has been forced upon us. The spread of a virulent disease, the illness and sudden deaths of many victims, the stress on public health systems, the disruption of movement, the interference with trade, the closure of borders – all are aspects of a dystopian world.

And the unprecedented intrusion of the State into the lives of citizens, the prohibitions on freedom of movement and assembly, the indirect demeaning of any criticism or questioning – all are examples of a society in which we would rather not live.

We are in a dystopia. Who really wants to live in this locked-down or partially locked-down world? We have been gradually sliding into dystopia since Covid 19 spread from its source to infect the world.

The dystopia is going to continue. The free society that we have enjoyed has come to an end. It is unlikely to return in an instantly recognizable form.

It has frequently been observed throughout this crisis that the Government has interfered with civil liberties and the ordinary lives of New Zealanders to an extent not seen since World War II – in fact I would suggest that the 1951 Waterfront Crisis, with the invocation of the Public Safety Conservation Act (now fortunately repealed), was a more recent serious interference with civil liberties.

Dystopia encompasses more than unpalatable social situations. A reading of the many science and speculative fiction works on the topic presents a number of scenarios. One, favoured by Orwell (“1984”), Robert Heinlein (“Revolt in 2100”), Margaret Atwood (“The Handmaid’s Tale”), Ray Bradbury (“Fahrenheit 451”) and Aldous Huxley (“Brave New World”), suggests a political dystopia.

Film has also presented some graphic portrayals of dystopian societies. Based on the works of Philip K. Dick, “Blade Runner” and “Minority Report” are two examples.

“Soylent Green”, based on Harry Harrison’s “Make Room! Make Room!”, propounded a society that literally fed on itself as the oceans died. There were disturbing aspects of voluntary euthanasia, with rather ghastly consequences that made for a shocking climax.

“Logan’s Run”, which propounded that everyone over 30 was a burden and therefore should be eliminated, was very eerie – made more so by the initial panic over the risk of Covid-19 to those of us over 70, as if we could not assess the risk ourselves.

Ours is not as bad as these imagined dystopias, but compared with the life that we enjoyed, the freedoms that we had and the relatively light hand of the State on our affairs, what we are living in now is certainly dystopian.

I do not share Mr Wilson’s optimism that this is going to herald a new and better society. I see a continuing dystopia of increasing State interference in the lives of citizens, more State control over and limitations upon the freedoms that we have taken for granted for so long.

The main point of Mr Wilson’s article is to trumpet the end of neo-liberalism although, as I have said, he doesn’t clearly define what he means. Roughly defined, neo-liberalism is a modified form of liberalism tending to favour free-market capitalism. Presumably he is calling for a return to greater State control of the economy and greater State involvement in the lives of citizens, citing the rush of corporates to the Government for assistance.

Certainly in this crisis the Government has a role. But let us not forget the purpose of the Government. It is to serve the people, not to control them. Those who work for the Government are not called “public servants” for nothing.

The Government exists to protect the rights of the people: to protect them from foreign and domestic threats, to protect their persons and property within a defined and clear Rule of Law framework, and to allow individuals to choose for themselves, within the law, how they will live their lives both socially and economically. The role of the Government is therefore very limited.

At the moment the involvement of the Government in the lives of its citizens is highly invasive – reminiscent of a dystopia – and the current situation will extend into Alert Level 2. And how long will that last? How long will we be subjected to decrees and proclamations from bureaucrats in Wellington? Do we really need to be patted on the head and told how good we have been by those who are meant to serve us? Do we really need to be told that, because of the idiocy of the few, all of us may suffer restrictions? That sounds like patronising schoolteacher-speak to me.

So how long will it be? Until we get a vaccine? Or some other equally distant event? By the time we finally emerge into Alert Level 0 – if we ever do – the population will be so habituated to the 1:00 pm update that free will and freedom of choice will have vanished.

It will be the Government that will be telling us how to live our lives – as I said in an earlier post:

“what to buy, how we should do this and how we should do that, and gradually we are allowing other people to do our thinking for us. The time will come when no longer will we make our own decisions, but some “big brother” will tell us what to do and what to think. We will be told who is good and who is bad, whom we shall love and whom we shall hate.”

I am sure that this is not the result that Mr Wilson wants. Nor do I believe that, in his heart of hearts, he wants to see an end to freedom of enterprise, individual initiative, individual thinking and innovation, and all the other aspects of a free and open society – especially the freedoms that he enjoys as a journalist to question authority and to speak truth to power.

It may be that the Government can provide, during this crisis, some direction. But it should have an exit strategy – mainly for itself. And we should know now what that exit strategy is. The resources that the Government has deployed should be viewed as temporary only – not as some initial investment with a view to maintaining control long after the crisis is over.

Mr Wilson’s rosy view of the future – of the opportunity that Covid 19 has presented – sounds hopeful on the surface – Utopian almost. But as we now know Utopia is an illusion.

The collectivist solution proposed by Mr Wilson, with its reduced focus upon the individual and an overly regulated and directed society – both politically and economically – is, to those who value liberty, initiative, innovation and individualism, a recipe for a continued dystopia.


Lessons Unlearned

The Christchurch Call was a meeting co-hosted by New Zealand’s Prime Minister, Jacinda Ardern, and French President, Emmanuel Macron, held in Paris on 15 May 2019. It is a global call which aims to “bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism.”[1] It is intended to be an ongoing process.

This piece was written at the end of last year and for one reason or another – primarily the Covid-19 crisis – has languished. I post it now as the first anniversary of the Call approaches. The overall context is that of Internet regulation – of content or of technology – and the difficulties that it presents.

Introduction

The Christchurch Call is not the first attempt to regulate or control Internet based content. It will not be the last. And, despite its aim to reduce or eliminate the use of social media to organize and promote terrorism and violent extremism, it carries within it the seeds of its own downfall. The reason is, like so many efforts before it, the target of the Christchurch Call is content rather than technology.

Calls to regulate content and access to it have been around since the Internet went public.

The Christchurch Call is eerily familiar, not because of what motivated and inspired it, but because it represents an effort by Governments and States to address perceived problems posed by Internet based content.

In 2011 a similar effort was led by then French President Nicolas Sarkozy at the economic summit at Deauville – is it a coincidence that once again the French are leaders in the present initiative? So what was the Deauville initiative all about?

Deauville May 2011

The Background

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to the anti-government protests that spread across the Middle East, following a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness.

There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and SMS messages – brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that to understand the role of social media during the Arab Spring one must consider the context of high unemployment and corrupt political regimes, which gave rise to dissent movements within the region.

There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then-President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

Sarkozy’s Initiative

In May 2011 at the first e-G8 Forum, before the G8 summit in France, President Nicolas Sarkozy issued a provocative call for stronger Internet regulation. Mr Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution.

This revolution did not have a flag and Mr Sarkozy acknowledged that the Internet belonged to everyone, citing the Arab Spring as a positive example. However, he warned executives of Google, Facebook, Amazon and eBay who were present:

“The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

Mr Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs. But he did not go as far as Mr Sarkozy, who was pushing for a “civilised Internet”, implying wide regulation.

However, the Deauville Communiqué did not extend as far as Mr Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, security of networks, and a crackdown on trafficking in children for sexual exploitation; however, it did not advocate state control of the Internet, instead staking out a role for governments.

Deauville was not an end to the matter. The appetite for Internet regulation by domestic governments had just been whetted. This was demonstrated by the events at the ITU meeting in Dubai in 2012.

The ITU meeting in Dubai December 2012

The meeting of the International Telecommunications Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol – one of the important technologies that made the Internet possible – sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union — the ITU — a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai….”

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries saw an opportunity to create regulatory authority over the Internet. In June 2012, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, in September 2012, the Shanghai Cooperation Organization — which counts China, Russia, Tajikistan, and Uzbekistan among its members — submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there was a strong possibility that the ITU would significantly amend the International Telecommunication Regulations — a multilateral treaty last revised in 1988 — in a way that authorizes increased ITU and member state control over the Internet. These proposals, if they had been implemented, would have changed the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.

What is the ITU?

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. Founded in 1865, it has in the past been concerned with technical communications issues such as the standardisation of communications protocols (one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to information and communication technology. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunications Regulations (ITRs) that had been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012, the Final Acts of the Conference were published.

The controversy arose from a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was ultimately withdrawn, its governance model had defined the Internet as an “international conglomeration of interconnected telecommunication networks”, provided that “Internet governance shall be effected through the development and application by governments”, and gave member states “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance”.

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the Internet.”

However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

From the Conference three different versions of political power vis-à-vis the Internet became clear. Cyber sovereignty states such as Russia, China and Saudi Arabia advocated that the mandate of the ITU be extended to include Internet governance issues. The United States and allied predominantly Western states were of the view that the current multi-stakeholder processes should remain in place. States such as Brazil, South Africa and Egypt rejected the concept of Internet censorship and closed networks but expressed concern at what appeared to be United States dominance of aspects of Internet management.

In 2014 at the NETmundial Conference the multi-stakeholder model was endorsed, recognising that the Internet was a global resource and should be managed in the public interest.

The Impact of International Internet Governance

Issues surrounding Internet governance are important in this discussion because Internet control will directly impact upon content delivery, and will thus have an impact upon freedom of expression in its widest sense.

Rules surrounding global media governance do not exist. The current model, based on localised rule systems, and the lack of harmonisation arise from differing cultural and social perceptions as to media content. Although Internet-based technologies have the means to provide a level of technical regulation – code itself, digital rights management and Internet filtering – the larger issue of control of the distribution system poses an entirely novel set of issues that have not been encountered by traditional localised print and broadcast systems.

The Internet separates the medium from the message and issues of Internet governance will have a significant impact upon the means and scope of content delivery. From the perspective of media freedom and freedom of expression, Internet governance is a matter that will require close attention. As matters stand at the moment the issue of who rules the channels of communication is a work in progress.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate, or in some way govern and control, the Internet. Although at first glance this may seem to be directed at the content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet, and possibly even the backbone itself. The Christchurch Call is merely a continuation of that desire by governments to regulate and control the Internet.

Resisting Regulation

The early history of the commercial Internet reveals a calculated effort to ensure that the new technology was not the subject of regulation. The Progress and Freedom Foundation, established in 1993, had as an objective ensuring that, unlike radio or television, the new medium would lie beyond the realm of government regulation. A 1994 meeting attended by futurists Alvin Toffler and Esther Dyson, along with George Keyworth, President Reagan’s former science adviser, produced a “Magna Carta for the Knowledge Age”, which contended that although the industrial age may have required some form of regulation, the knowledge age did not. If there was to be an industrial policy for the knowledge age, it should focus on removing barriers to competition and massively deregulating the telecommunications and computing industries.

On 8 February 1996 the objectives of the Progress and Freedom Foundation became a reality when President Clinton signed the Telecommunications Act. This legislation effectively deregulated the entire communications industry, allowed for the subsequent consolidation of media companies and prohibited regulation of the Internet. On the same day, as a statement of disapproval that the US government would regulate even by deregulating, John Perry Barlow released his Declaration of the Independence of Cyberspace from the World Economic Forum in Davos, Switzerland.

Small wonder that the United States of America resists attempts at Internet regulation. But the problem is more significant than the will or lack of will to regulate. The problem lies within the technology itself, and although efforts such as Deauville, Dubai, the NETmundial Conference and the Christchurch Call may focus on content, content is merely what Marshall McLuhan termed the juicy piece of meat that distracts the watchdog of the mind. To regulate content requires an understanding and appreciation of some of the deeper aspects or qualities of the new communications technology. Once these are understood, the magnitude of the task becomes apparent and the practicality of effectively achieving regulation of communications runs up against the fundamental values of Western liberal democracies.

Permissionless Innovation

One characteristic of the Digital Paradigm is that of permissionless innovation. No approvals are needed for developers to connect an application or a platform to the backbone of the Internet. All that is required is that the application comply with standards set by Internet engineers, and essentially these standards ensure that an application will be compatible with Internet protocols.

No licences are required to connect an application. No regulatory approvals are needed. A business plan need not be submitted for bureaucratic fiat. Permissionless innovation has been a characteristic of the Internet and it has allowed the Internet to grow. It allowed for the development of the Hypertext Transfer Protocol, which in turn enabled the World Wide Web – the most familiar aspect of the Internet today. It allowed for the development of a myriad of social media platforms. And it co-exists with another quality of the Internet – that of continuing disruptive change – the reality that the environment is not static and does not stand still.
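
To make the point concrete, here is a minimal sketch – a toy web service written against Python’s standard library, with an invented port and message. Nothing in it requires a licence, a registration or anyone’s approval; the only gatekeeper is conformance to the protocol itself.

# A minimal HTTP service using only Python's standard library.
# Nothing below requires a licence, registration or approval:
# the only "gatekeeper" is conformance to the HTTP protocol itself.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"A new service joins the Internet: no permission asked.\n"
        self.send_response(200)  # a standard HTTP status line
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any free port will do; no central authority allocates it to us.
    HTTPServer(("", 8080), HelloHandler).serve_forever()

Point a browser at the machine and the service simply exists. Contrast that with the licensing regime for a radio transmitter, discussed later in this piece.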

Targeting the most popular social media platforms will only address part of the problem. Permissionless innovation means that the leading platforms may modify their algorithms to try to capture extreme content, but this is a less than subtle solution and is prone to the error of false positives.

Permissionless innovation and the ability to develop and continue to develop other social media platforms brings into play Michael Froomkin’s theory of regulatory arbitrage – where users will migrate to the environment that most suits them. Should the major players so regulate their platforms that desired aspects are no longer available, users may choose to use other platforms which will be more “user friendly” or attuned to their needs.

The question that arises from this aspect of the Digital Paradigm is how one regulates permissionless innovation, given its critical position in the development of communications protocols. To constrain it, to tie it up in the red tape that accompanies broadcast licences and the like, would strangle technological innovation, evolution and development, and with them the continuing promise of the Internet as a developing communications medium.

Content Dynamics

An aspect of content on the Internet is what could be termed the persistence of information. Once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and be retained on any one of them – a consequence of the quality of exponential dissemination discussed below – despite the phenomenon of “link rot”. This has been summed up in another way by the phrase “the document that does not die”. Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere. This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus. Information persistence means that the information is there, but if it has become difficult to locate, retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine. The fact that information is persistent means that it is capable of location.
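
That archaeological expedition can even be automated. The sketch below asks the Internet Archive’s public Wayback Machine “availability” endpoint whether a page that has vanished from its original home still persists in archived form. The endpoint is a documented public API; the page being checked is invented for illustration.

# A small exercise in digital archaeology: asking the Internet Archive's
# public "availability" endpoint whether a vanished page still persists
# in archived form. The page URL below is invented for illustration.
import json
import urllib.parse
import urllib.request

def find_snapshot(url):
    query = urllib.parse.urlencode({"url": url})
    endpoint = "https://archive.org/wayback/available?" + query
    with urllib.request.urlopen(endpoint) as response:
        data = json.load(response)
    # The API reports the closest archived snapshot, if one exists.
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

if __name__ == "__main__":
    print(find_snapshot("example.com/some-deleted-page"))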

In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change.  It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others.

An example of dynamic information may be found in the online newspaper, which may break a story at 10am, receive further information on the topic by midday, and by 1pm on the same day have modified the original story. The static nature of print, and the newspaper business model that it enabled, meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle may run on a 24-hour basis, with updates every five minutes.

Similarly, the capacity that digital technologies provide for contributing dialogue on any topic – enabled in many communication protocols, primarily as a result of Web 2.0 – means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change. This dynamic nature of information challenges the permanence that one may expect from persistence, and it is acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the “document that does not die”.

Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer.  Thus there is the potential for information to be other than static.  If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version.  Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical until the next edition.  In the digital paradigm authors and publishers can control content from minute to minute.

In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required. In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference being that the original text vanishes and is replaced with the amended copy. Thus one may, with reason, doubt the validity or authenticity of information as it is transmitted.

Let us assume for the moment that a search engine or a social media platform can develop a content moderation policy that will identify extreme content and return a “null” result. Such policies will often, if not always, have identifiable gaps. If the policy relates to breaches of terms of use, how often are those breaches subject to human review, which is often more nuanced than an algorithm? Often “coded language” may be used as an alternative to overtly extreme content. Because of the context-specific nature of coded language, and the fact that it is not typically directed at a vulnerable group, such posts would in most instances not trigger social media platform content rules even if they were more systematically flagged. In addition, the existence of “net centers” that coordinate attacks using hundreds of accounts results in broad dissemination of harmful posts, which are harder to remove. Speech that is removed may be reposted using different accounts. Finally, the content moderation policies of some social media providers do not provide a means for considering the status of the speaker in evaluating the harmful impact the speech may have, and it is widely recognised in the social science literature that speakers with authority have greater influence on behaviour.
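
The gap is easy to demonstrate. What follows is a deliberately naive keyword filter of the kind an algorithmic first pass resembles; the blocklist and the sample posts are invented for illustration. It flags overt wording, misses coded language entirely, and produces a false positive on an innocent figurative use.

# A deliberately naive keyword filter, illustrating the gaps described above.
# The blocklist and the sample posts are invented for illustration only.
BLOCKLIST = {"attack", "destroy"}

def flag(post):
    # Flag a post if it contains any blocked term: no context, no nuance.
    words = set(post.lower().split())
    return bool(words & BLOCKLIST)

posts = [
    "we will attack them at dawn",         # flagged: overt wording
    "time for a spot of housecleaning",    # missed: coded language
    "let's attack this problem together",  # flagged: a false positive
]

for post in posts:
    print(flag(post), "-", post)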

Exponential Dissemination

Dissemination was one of the leading qualities of print identified by Elizabeth Eisenstein in her study of the printing press as an agent of change, and it has been a characteristic of all information technologies since. What the Internet and digital technologies enable is a form of dissemination that has two elements.

One element is the appearance that information is transmitted instantaneously to both an active (online) audience and a passive (potentially online but awaiting) audience. Consider the example of an email. The transmission of an email seems to be instantaneous (in fact it is not), and that enhances our expectation of a prompt response and our concern when there is not one. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender. Instant messaging is so called because it is instant, and a complex piece of information may be made available via a link on Twitter to a group of followers, and then retweeted to an exponentially larger audience.
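
The arithmetic behind “exponentially larger” is worth making explicit. The sketch below assumes, purely for illustration, that each new reader passes an item on to five others; after only eight rounds of sharing the audience is approaching half a million.

# Illustrative arithmetic for exponential dissemination. The figures are
# invented: assume each new reader shares the item with five more people.
SHARES_PER_READER = 5
ROUNDS = 8

audience = 1     # the original poster
new_readers = 1
for round_no in range(1, ROUNDS + 1):
    new_readers *= SHARES_PER_READER  # each new reader shares onward
    audience += new_readers
    print("after round", round_no, ":", audience, "total readers")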

The second element deals with what may be called the democratisation of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and the communication media that have gone before. In the past, information dissemination was an expensive business. Publishing, broadcasting, record and CD production and the like are capital-intensive businesses. It used to (and still does) cost a large amount of money, and required a significant infrastructure, to be involved in information gathering and dissemination. There were a few exceptions, such as very small-scale publishing using duplicators and carbon paper – the samizdat model – but in these cases dissemination was very limited. Another aspect of early information communication technologies is that they involved a monolithic, centralised communication to a distributed audience. The model was essentially one of “one to many” communication or information flow.

The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow, with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognised that the Internet allows everyone to become a publisher. No longer is information dissemination centralised and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organisations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by citizen journalists. Once information has been communicated it can “go viral” – a term used to describe the phenomenon of exponential dissemination as Internet users share information via email, social networking sites or other Internet information-sharing protocols. This in turn exacerbates the earlier quality of information persistence, or “the document that does not die”, in that once information has been subjected to exponential dissemination it is almost impossible to retrieve or eliminate it.

It can be seen from this discussion that dissemination is not limited to the “online establishment” of Facebook, Twitter or Instagram, and trying to address the dissemination of extreme content by attacking it through “established” platforms will not eliminate it – it will just slow the dissemination process down. It will present an obstruction, and in fact online censorship is just that – an obstruction to the information flow on the Internet. It was John Gilmore who said “The Net interprets censorship as damage and routes around it.” Primarily because State-based censorship is built on a centralised model while the dissemination of information on the Internet is built upon a distributed one, what effectively happens on the Internet is content redistribution – a reflection both of Gilmore’s adage and of the quality of exponential dissemination.

The Dark Web

Finally there is the aspect of the Internet known as the Dark Web. If the searchable web comprises 10% of available Internet content, there is also content that is not amenable to search, known as the Deep Web, which encompasses sites such as LexisNexis and Westlaw, if one seeks an example from the legal sphere.

The Deep Web is not the Dark Web. The Dark Web is altogether different. It is more difficult to reach than the surface or deep web, since it is only accessible through special browsers such as the Tor browser. The Dark Web is the unregulated part of the Internet. No organisation, business or government is in charge of it or able to apply rules. This is exactly the reason why the Dark Web is commonly associated with illegal practices. It is impossible to reach the Dark Web through a “normal” browser such as Google Chrome or Mozilla Firefox, and even in the Tor browser you won’t find any “dark” websites ending in .com or .org. Instead, URLs usually consist of a random mix of letters and numbers and end in .onion. Moreover, the URLs of websites on the dark net change regularly. If there are difficulties in regulating content via social media platforms, to do so via the Dark Web would be impossible. Yet it is within that environment that most of the extreme content may be found.
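
For the curious, the “random mix of letters and numbers” has a precise shape: current (version 3) onion-service addresses are 56 base32 characters followed by .onion. A small sketch, with invented sample names, makes the pattern visible.

# Current (v3) onion-service addresses are 56 base32 characters
# (a-z and 2-7) followed by ".onion". The sample names are invented.
import re

ONION_V3 = re.compile(r"^[a-z2-7]{56}\.onion$")

samples = [
    "abcdefghijklmnopqrstuvwxyz234567abcdefghijklmnopqrstuvwx.onion",  # v3-shaped
    "example.com",  # an ordinary surface-web name
]

for name in samples:
    print(bool(ONION_V3.match(name)), "-", name)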

Effective Regulation

The Christchurch Call has had some very positive effects. It has drawn attention, yet again, to the problem of the dissemination of extreme and terrorist content online. It should be remembered that this is not a new issue: it has been in the sights of politicians since Deauville, and in New Zealand, as far back as 1993, there were proposals to deal with the problem of the availability of pornography online.

Another positive outcome of the Christchurch Call has been to increase public awareness and corporate acceptance of the necessity for some standards of global good citizenship on the part of large and highly profitable Internet-based organisations. It is not enough for a company to have “do no evil” as its guiding light; more is required, including steps to ensure that its services are not facilitating the doing of evil by others.

At the moment the Christchurch Call has adopted, at least in public, a velvet-glove approach, although it is not hard to imagine that in some of the closed meetings the steel fist has been, if not threatened, at least uncovered. There are a number of ways in which the large conglomerates might be persuaded to toe a more responsible line. One is to introduce the concept of an online duty of care, as has been suggested in the United Kingdom. Although this sounds like a comfortable and simple concept, anyone who has spent some time studying the law of torts will understand that the duty of care is a highly nuanced and complex aspect of the law of obligations, and one which will require years of litigation and development before it achieves a satisfactory level of certainty.

Another way to have the conglomerates toe the line is to increase the costs of doing business. Although it is in a different sphere – that of e-commerce – the recent requirement by the New Zealand Government that overseas vendors collect GST is an example, although I was highlighting this issue 20 years ago. Governments do not have a tendency to move fast, although they do have a tendency to break things once the sleeping giant awakes.

Yet these various moves, and others like them, are really rather superficial and only scratch the surface of the content layer of the Internet. The question must be asked – how serious are the governments of the Christchurch Call about regulating not simply access to content, but the means by which content is accessed – the technology?

The lessons of history give us some guidance. The introduction of the printing press into England was followed by 120 years of unsuccessful attempts to control the content of printed material. It was not until the Star Chamber Decrees of 1634 that the Stuart monarchy put in place serious and far-reaching regulatory requirements to control not just what was printed (although that too was the subject of the 1634 provisions) but how it was printed. The way in which the business and process of printing was regulated gave the State unprecedented control not only over content but over the means of production and dissemination of that content. The reaction against this – a process spanning many years – led to our present values that underpin freedom of the press and freedom of expression.

As new communications technologies have been developed, the State has interested itself in imposing regulatory requirements. There is no permissionless innovation available in setting up a radio station or television network. The State has had a hand of varying degrees of heaviness throughout the development and availability of both these media. In 1966 there was a tremendous controversy about whether the ship that was to be the platform for the unlicensed – and therefore “pirate” – radio station Radio Hauraki would be allowed to sail. The State unsuccessfully tried to prevent it.

Once upon a time in New Zealand (and still in the United Kingdom) anyone who owned a television set had to pay a broadcasting fee. This ostensibly would be applied to the development of content, but it is indicative of the level of control that the State exerted. And it was not a form of content regulation. It was regulation applied to access to the technology.

More recently we are well aware of the so-called “Great Firewall of China” – a massive state-sponsored means of controlling the technology to prevent access to content. And conglomerates such as Google have found that if they want to do business in China they must play by Chinese rules.

The advocacy of greater technological control has come from Russia, Brazil, India and some of the Arab countries. These States, I think, understand the import of McLuhan’s paradox of technology and content. The issue is whether or not the Christchurch Call is prepared to take that sort of radical step and proceed to consider technological regulation, rather than stepping carefully around the edges of the problem.

Of course, one reason why at least some Western democracies would not wish to take such an extreme step lies in their own reliance upon the Internet as a means of doing business, be it by using the Internet for the collection of census data, for providing taxation services, or for online access to benefits and other government services. Indeed, the use of the Internet by politicians for their own form of argumentative speech has become the norm. Often, however, we find that the level of political debate is as banal and clichéd as the platforms that are used to disseminate it. But to put it simply, where would politicians be in the second decade of the 21st century without access to Facebook, Twitter or Instagram (or whatever new flavour of platform arises as a result of permissionless innovation)?

Conclusion

I think it is safe to say that the Christchurch Call is no more and no less than a very well managed and promoted public relations exercise – superficial, and likely to have little long-term impact. It will go down in history as part of a continuing story that really started with Deauville, and that story will continue.

Only when Governments are prepared to learn and apply the lessons about the Internet and the way that it works will we see effective regulatory steps instituted.

And then, when that occurs, we will realise that democracy, and the freedom that we have to hold and express our own opinions, is really in trouble.


[1] Internet NZ “The Christchurch Call: helping important voices be heard” https://internetnz.nz/Christchurch-Call (Last accessed 2 January 2020)