Lessons Unlearned

The Christchurch Call was a meeting co-hosted by New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron, held in Paris on 15 May 2019. It’s a global call which aims to “bring together countries and tech companies in an attempt to bring to an end the ability to use social media to organise and promote terrorism and violent extremism.”[1] It is intended to be an ongoing process.

This piece was written at the end of last year and for one reason or another – and primarily the Covid-19 crisis – has languished. I post it now as the first anniversary of the Call approaches. The overall context is that of Internet Regulation – content or technology – and the difficulties that presents.

Introduction

The Christchurch Call is not the first attempt to regulate or control Internet-based content. It will not be the last. And, despite its aim to reduce or eliminate the use of social media to organize and promote terrorism and violent extremism, it carries within it the seeds of its own downfall. The reason is that, like so many efforts before it, the target of the Christchurch Call is content rather than technology.

Calls to regulate content and access to it have been around since the Internet went public.

The Christchurch Call is eerily familiar, not because of what motivated and inspired it, but because it represents an effort by Governments and States to address perceived problems posed by Internet-based content.

In 2011 a similar effort was led by then French President Nicolas Sarkozy at the economic summit at Deauville – is it a coincidence that once again the French are leaders in this present initiative? So what was the Deauville initiative all about?

Deauville May 2011

The Background

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these were driven by the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some critics contend that digital technologies and other forms of communication – videos, cellular phones, blogs, photos and SMS messages – have brought about the concept of a “digital democracy” in parts of North Africa affected by the uprisings. Others have claimed that the role of social media during the Arab Spring can only be understood against a background of high rates of unemployment and corrupt political regimes which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then-President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

Sarkozy’s Initiative

In May 2011 at the first e-G8 Forum, before the G8 summit in France, President Nicolas Sarkozy issued a provocative call for stronger Internet regulation. Mr Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution.

This revolution did not have a flag and Mr Sarkozy acknowledged that the Internet belonged to everyone, citing the Arab Spring as a positive example. However, he warned executives of Google, Facebook, Amazon and eBay who were present:

“The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

Mr Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures who are suspected of having had extramarital affairs, but he did not go as far as Mr Sarkozy who was pushing for a “civilized Internet” implying wide regulation.

However, the Deauville Communiqué did not extend as far as Mr Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, security of networks, and a crackdown on trafficking in children for sexual exploitation; however it did not advocate state control of the Internet but staked out a role for governments.

Deauville was not an end to the matter. The appetite for Internet regulation by domestic governments had just been whetted. This was demonstrated by the events at the ITU meeting in Dubai in 2012.

The ITU meeting in Dubai December 2012

The meeting of the International Telecommunications Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, the co-developer with Robert Kahn of the TCP/IP protocol which was one of the important technologies that made the Internet possible, sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union — the ITU — a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai….”

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries saw an opportunity to create regulatory authority over the Internet. In June 2012, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, in September 2012, the Shanghai Cooperation Organization — which counts China, Russia, Tajikistan, and Uzbekistan among its members — submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there was a strong possibility that the ITU would significantly amend the International Telecommunication Regulations — a multilateral treaty last revised in 1988 — in a way that authorizes increased ITU and member state control over the Internet. These proposals, if they had been implemented, would have changed the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.

What is the ITU?

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was originally founded in 1865 and in the past has been concerned with technical communications issues such as standardisation of communications protocols (which was one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources and the fostering of sustainable, affordable access to information and communication technology. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunications Regulations (ITR) that had been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012, the Final Acts of the Conference were published. The controversy arose from a proposal, contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates, to redefine the Internet as a system of government-controlled, state-supervised networks. Although the proposal was ultimately withdrawn, its governance model had defined the Internet as an “international conglomeration of interconnected telecommunication networks”, provided that “Internet governance shall be effected through the development and application by governments”, and gave member states “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance”.

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the Internet.”

However, the Treaty did not receive universal acclaim. United States Ambassador Kramer announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica, and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

From the Conference three different visions of political power vis-à-vis the Internet became clear. Cyber sovereignty states such as Russia, China and Saudi Arabia advocated that the mandate of the ITU be extended to include Internet governance issues. The United States and allied predominantly Western states were of the view that the current multi-stakeholder processes should remain in place. States such as Brazil, South Africa and Egypt rejected the concept of Internet censorship and closed networks but expressed concern at what appeared to be United States dominance of aspects of Internet management.

In 2014 at the NETmundial Conference the multi-stakeholder model was endorsed, recognising that the Internet was a global resource and should be managed in the public interest.

The Impact of International Internet Governance

Issues surrounding Internet Governance are important in this discussion because issues of Internet control will directly impact upon content delivery and will thus have an impact upon freedom of expression in its widest sense. 

Rules surrounding global media governance do not exist. The current model based on localised rule systems, and the lack of harmonisation, arise from differing cultural and social perceptions of media content. Although Internet-based technologies have the means to provide a level of technical regulation – code itself, digital rights management and Internet filtering – the larger issue of control of the distribution system poses an entirely novel set of issues that have not been encountered by traditional localised print and broadcast systems.

The Internet separates the medium from the message and issues of Internet governance will have a significant impact upon the means and scope of content delivery. From the perspective of media freedom and freedom of expression, Internet governance is a matter that will require close attention. As matters stand at the moment the issue of who rules the channels of communication is a work in progress.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate, or in some way govern and control, the Internet. Although at first glance this may seem to be directed at the content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but the various transmission and protocol layers of the Internet and possibly even the backbone itself. The Christchurch Call is merely a continuation of that desire by governments to regulate and control the Internet.

Resisting Regulation

The early history of the commercial Internet reveals a calculated effort to ensure that the new technology was not the subject of regulation. The Progress and Freedom Foundation, established in 1993, had an objective of ensuring that, unlike radio or television, the new medium would lie beyond the realm of government regulation. At a meeting in 1994, attended by futurists Alvin Toffler and Esther Dyson along with George Keyworth, President Reagan’s former science adviser, a document styled a Magna Carta for the Knowledge Age was produced. It contended that although the industrial age may have required some form of regulation, the knowledge age did not. If there was to be an industrial policy for the knowledge age, it should focus on removing barriers to competition and massively deregulating the telecommunications and computing industries.

On 8 February 1996 the objectives of the Progress and Freedom Foundation became a reality when President Clinton signed the Telecommunications Act. This legislation effectively deregulated the entire communications industry, allowed for the subsequent consolidation of media companies and prohibited regulation of the Internet. On the same day, as a statement of disapproval that the US government would even regulate by deregulating, John Perry Barlow released his Declaration of Independence of Cyberspace from the World Economic Forum in Davos, Switzerland.

Small wonder that the United States of America resists attempts at Internet regulation. But the problem is more significant than the will or lack of will to regulate. The problem lies within the technology itself and although efforts such as Deauville, Dubai, the NetMundial Conference and the Christchurch Call may focus on content, this is merely what Marshall McLuhan termed the meat that attracts the lazy dog of the mind. To regulate content requires an understanding and appreciation of some of the deeper aspects or qualities of the new communications technology. Once these are understood, the magnitude of the task becomes apparent and the practicality of effectively achieving regulation of communications runs up against the fundamental values of Western liberal democracies.

Permissionless Innovation

One characteristic of the Digital Paradigm is that of permissionless innovation. No approvals are needed for developers to connect an application or a platform to the backbone of the Internet. All that is required is that the application comply with standards set by Internet engineers, and essentially these standards ensure that an application will be compatible with Internet protocols.

No licences are required to connect an application. No regulatory approvals are needed. A business plan need not be submitted for bureaucratic fiat. Permissionless innovation has been a characteristic of the Internet and it has allowed the Internet to grow. It allowed for the development of the Hypertext Transfer Protocol, which in turn enabled the World Wide Web – the most familiar aspect of the Internet today – and a myriad of social media platforms. It co-exists with another quality of the Internet which is that of continuing disruptive change – the reality that the environment is not static and does not stand still.

Targeting the most popular social media platforms will only address part of the problem. Permissionless innovation means that the leading platforms may modify their algorithms to try to capture extreme content, but this is a blunt solution and is prone to the error of false positives.

Permissionless innovation and the ability to develop and continue to develop other social media platforms brings into play Michael Froomkin’s theory of regulatory arbitrage – where users will migrate to the environment that most suits them. Should the major players so regulate their platforms that desired aspects are no longer available, users may choose to use other platforms which will be more “user friendly” or attuned to their needs.

The question that arises from this aspect of the Digital Paradigm is how one regulates permissionless innovation, given its critical position in the development of communications protocols. To constrain it, to tie it up in the red tape that accompanies broadcast licences and the like, would strangle technological innovation, evolution and development, and with them the continuing promise of the Internet as a developing communications medium.

Content Dynamics

An aspect of content on the Internet is what could be termed persistence of information. Once information reaches the Internet it is very difficult to remove, because it may spread through the vast network of computers that comprise the Internet and be retained on any one of them – a consequence of the quality of exponential dissemination discussed below – despite the phenomenon of “link rot.”  It has been summed up in another way by the phrase “the document that does not die.” Although on occasions it may be difficult to locate information, the quality of information persistence means that it will be on the Internet somewhere.  This emphasises the quality of permanence of recorded information that has been a characteristic of that form of information ever since people started putting chisel to stone, wedge to clay or pen to papyrus.  Information persistence means that the information is there even if it has become difficult to locate; retrieving it may resemble the digital equivalent of an archaeological expedition, although the spade and trowel are replaced by the search engine.  The fact that information is persistent means that it is capable of location.

In some respects the dynamic nature of information challenges the concept of information persistence because digital content may change.  It could be argued that this seems to be more about the nature of content, but the technology itself underpins and facilitates this quality as it does with many others.

An example of dynamic information may be found in the on-line newspaper which may break a story at 10am, receive information on the topic by midday and by 1pm on the same day have modified the original story.  The static nature of print and the newspaper business model that it enabled meant that the news cycle ran from edition to edition. The dynamic quality of information in the Digital Paradigm means that the news cycle potentially may run on a 24 hour basis, with updates every five minutes.

Similarly, the ability that digital technologies provide for dialogue on any topic – enabled in many communication protocols, primarily as a result of Web 2.0 – means that an initial statement may undergo a considerable amount of debate, discussion and dispute, resulting ultimately in change.  This dynamic nature of information challenges the permanence that one may expect from persistence, and it is acknowledged immediately that there is a significant tension between the dynamic nature of digital information and the concept of the “document that does not die”.

Part of the dynamic of the digital environment is that information is copied when it is transmitted to a user’s computer.  Thus there is the potential for information to be other than static.  If I receive a digital copy I can make another copy of it or, alternatively, alter it and communicate the new version.  Reliance upon the print medium has been based upon the fact that every copy of a particular edition is identical until the next edition.  In the digital paradigm authors and publishers can control content from minute to minute.

In the digital environment individual users may modify information at a computer terminal to meet whatever need may be required.  In this respect the digital reader becomes something akin to a glossator of the scribal culture, the difference being that the original text vanishes and is replaced with the amended copy.  Thus one may reasonably doubt the validity or authenticity of information as it is transmitted.

Let us assume for the moment that a search engine or a social media platform can develop a content moderation policy that will identify extreme content and return a “null” result. Such policies will often, if not always, have identifiable gaps. If the policy relates to breaches of terms of use, how often are these breaches subject to human review, which is often more nuanced than an algorithm? Often “coded language” may be used as an alternative to overtly extreme content. Because of the context-specific nature of coded language, and the fact that it is not typically directed at a vulnerable group, targeted posts would in most instances not trigger social media platform content rules even if they were more systematically flagged. In addition, the existence of “net centers” that coordinate attacks using hundreds of accounts results in broad dissemination of harmful posts which are harder to remove. Speech that is removed may be reposted using different accounts. Finally, the content moderation policies of some social media providers do not provide a means for considering the status of the speaker in evaluating the harmful impact the speech may have, and it is widely recognized in the social science literature that speakers with authority have greater influence on behavior.

Exponential Dissemination

Dissemination was one of the leading qualities of print identified by Elizabeth Eisenstein in her study of the printing press as an agent of change, and it has been a characteristic of all information technologies since. What the Internet and digital technologies enable is a form of dissemination that has two elements.

One element is the appearance that information is transmitted instantaneously to both an active (on-line recipient) and a passive (potentially on-line but awaiting) audience. Consider the example of an e-mail. The speed of transmission of emails seems to be instantaneous (in fact it is not), but that enhances our expectation of a prompt response and our concern when there is none. More important, however, is that a matter of interest to one email recipient may mean that the email is forwarded to a number of recipients unknown to the original sender. Instant messaging is so-called because it is instant, and a complex piece of information may be made available via a link by Twitter to a group of followers which may then be retweeted to an exponentially larger audience.

The second element deals with what may be called the democratization of information dissemination. This aspect of exponential dissemination exemplifies a fundamental difference between digital information systems and communication media that have gone before. In the past information dissemination has been an expensive business. Publishing, broadcast, record and CD production and the like are capital intensive businesses. It used to (and still does) cost a large amount of money and require a significant infrastructure to be involved in information gathering and dissemination. There were a few exceptions, such as very small scale publishing using duplicators, carbon paper and samizdats, but in these cases dissemination was very small. Another aspect of early information communication technologies is that they involved a monolithic centralized communication to a distributed audience. The model essentially was one of “one to many” communication or information flow.

The Internet turns that model on its head. The Internet enables a “many to many” communication or information flow with the added ability on the part of recipients of information to “republish” or “rebroadcast”. It has been recognized that the Internet allows everyone to become a publisher. No longer is information dissemination centralized and controlled by a large publishing house, a TV or radio station or indeed the State. It is in the hands of users. Indeed, news organizations regularly source material from Facebook, YouTube or from information that is distributed on the Internet by Citizen Journalists. Once the information has been communicated it can “go viral”, a term used to describe the phenomenon of exponential dissemination as Internet users share information via e-mail, social networking sites or other Internet information sharing protocols. This in turn exacerbates the earlier quality of Information Persistence or “the document that does not die” in that once information has been subjected to Exponential Dissemination it is almost impossible to retrieve it or eliminate it.
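The scale implied by exponential dissemination can be illustrated with a toy calculation. The forwarding factor and round counts below are illustrative assumptions, not empirical figures; the point is simply how quickly a “many to many” flow outruns any centralised takedown effort.

```python
# Toy model of exponential dissemination: if each recipient forwards an
# item on to a handful of new readers, the audience grows geometrically.
# All parameters here are hypothetical, chosen only for illustration.

def audience_after_rounds(initial_recipients: int,
                          forwards_per_person: int,
                          rounds: int) -> int:
    """Total people reached after `rounds` of forwarding."""
    total = initial_recipients
    newly_reached = initial_recipients
    for _ in range(rounds):
        newly_reached *= forwards_per_person  # each new reader forwards again
        total += newly_reached
    return total

# One email sent to 10 people, each forwarding to 5 others, six rounds deep:
print(audience_after_rounds(10, 5, 6))  # → 195310
```

Even with these modest assumed numbers, six rounds of forwarding reach nearly 200,000 people, which is why removing content at a single central point does so little once dissemination has begun.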

It can be seen from this discussion that dissemination is not limited to the “on-line establishment” of Facebook, Twitter or Instagram, and trying to address the dissemination of extreme content by attacking it through “established” platforms will not eliminate it – it will only slow down the dissemination process. It will present an obstruction, for in fact on-line censorship is just that – an obstruction to the information flow on the Internet. It was John Gilmore who said “The Net interprets censorship as damage and routes around it.” Primarily because State-based censorship is based on a centralized model while the dissemination of information on the Internet is based upon a distributed one, what effectively happens on the Internet is content redistribution, which is a reflection both of Gilmore’s adage and of the quality of exponential dissemination.

The Dark Web

Finally there is the aspect of the Internet known as the Dark Web. If the searchable web comprises 10% of available Internet content, there is also content that is not amenable to search, known as the Deep Web, which encompasses sites such as LexisNexis and Westlaw, if one seeks an example from the legal sphere.

The Deep Web is not the Dark Web. The Dark Web is altogether different. It is more difficult to reach than the surface web or the Deep Web, since it’s only accessible through special browsers such as the Tor browser. The Dark Web is the unregulated part of the Internet. No organization, business or government is in charge of it or able to apply rules. This is exactly the reason why the Dark Web is commonly associated with illegal practices. It’s impossible to reach the Dark Web through a ‘normal’ browser such as Google Chrome or Mozilla Firefox. Even in the Tor browser you won’t be able to find any ‘dark’ websites ending in .com or .org. Instead, URLs usually consist of a random mix of letters and numbers and end in .onion. Moreover, the URLs of websites on the Dark Web change regularly. If there are difficulties in regulating content via social media platforms, to do so via the Dark Web would be impossible. Yet it is within that environment that most of the extreme content may be found.

Effective Regulation

The Christchurch Call has had some very positive effects. It has drawn attention, yet again, to the problem of dissemination of extreme and terrorist content online. It should be remembered that this is not a new issue and has been in the sights of politicians since Deauville, although in New Zealand, as far back as 1993, there were proposals to deal with the problem of the availability of pornography online.

Another positive outcome of the Christchurch Call has been to increase public awareness and corporate acceptance of the necessity for there to be some standards of global good citizenship on the part of large and highly profitable Internet based organisations. It is not enough for a company to have as its guiding light “do no evil”; more is required, including steps to ensure that its services are not facilitating the doing of evil by others.

At the moment the Christchurch Call has adopted, at least in public, a velvet glove approach, although it is not hard to imagine that in some of the closed meetings the steel fist has been if not threatened at least uncovered. There are a number of ways that the large conglomerates might be persuaded to toe a more responsible line. One is to introduce the concept of an online duty of care as has been suggested in the United Kingdom. Although this sounds like a comfortable and simple concept, anyone who has spent some time studying the law of torts will understand that the duty of care is a highly nuanced and complex aspect of the law of obligations, and one which will require years of litigation and development before it achieves a satisfactory level of certainty.

Another way to have conglomerates toe the line is to increase the costs of doing business. Although it is in a different sphere – that of e-commerce – the recent requirement by the New Zealand Government that overseas vendors collect GST is an example, although I was highlighting this issue 20 years ago. Governments do not have a tendency to move fast, although they do have a tendency to break things once the sleeping giant awakes.

Yet these various moves and others like them are really rather superficial and only scratch the surface of the content layer of the Internet. The question must be asked: how serious are the governments of the Christchurch Call in regulating not simply access to content but the means by which content is accessed – the technology?

The lessons of history give us some guidance. The introduction of the printing press into England was followed by 120 years of unsuccessful attempts to control the content of printed material. It was not until the Star Chamber Decrees of 1634 that the Stuart monarchy put in place some serious and far-reaching regulatory requirements to control not what was printed (although that too was the subject of the 1634 provisions) but how it was printed. The way in which the business and process of printing was regulated gave the State unprecedented control not only over content but over the means of production and dissemination of that content. The reaction against this – a process spanning many years – led to our present values that underpin freedom of the press and freedom of expression.

As new communications technologies have been developed the State has interested itself in imposing regulatory requirements. There is no permissionless innovation available in setting up a radio station or television network. The State has had a hand of varying degrees of heaviness throughout the development and availability of both these media. In 1966 there was a tremendous issue about whether or not a ship that was to be the platform for the unlicensed – and therefore “pirate” – radio station Radio Hauraki would be allowed to sail. The State unsuccessfully tried to prevent this.

Once upon a time in New Zealand (and still in the United Kingdom) anyone who owned a television set had to pay a broadcasting fee. This ostensibly would be applied to the development of content but is indicative of the level of control that the State exerted. And it was not a form of content regulation. It was regulation that was applied to access to the technology.

More recently we are well aware of the so-called “Great Firewall of China” – a massive state-sponsored means of controlling the technology to prevent access to content. And conglomerates such as Google have found that if they want to do business in China they must play by Chinese rules.

The advocacy of greater technological control has come from Russia, Brazil, India and some of the Arab countries. These States, I think, understand the import of McLuhan’s paradox of technology and content. The issue is whether or not the Christchurch Call is prepared to take that sort of radical step and consider technological regulation rather than step carefully around the edges of the problem.

Of course, one reason why at least some Western democracies would not wish to take such an extreme step lies in their reliance upon the Internet themselves as a means of doing business, be it by way of using the Internet for the collection of census data, for providing taxation services or online access to benefits and other government services. Indeed the use of the Internet by politicians who use their own form of argumentative speech has become the norm. Often, however, we find that the level of political debate is as banal and cliched as the platforms that are used to disseminate it. But to put it simply, where would politicians be in the second decade of the 21st Century without access to Facebook, Twitter or Instagram (or whatever new flavor of platform arises as a result of permissionless innovation).

Conclusion

I think it is safe to say that the Christchurch Call is no more and no less than a very well managed and promoted public relations exercise that is superficial and will have little long-term impact. It will go down in history as part of a continuing story that really started with Deauville and will continue.

Only when Governments are prepared to learn and apply the lessons about the Internet and the way that it works will we see effective regulatory steps instituted.

And then, when that occurs, will we realise that democracy and the freedom that we have to hold and express our own opinions is really in trouble.


[1] Internet NZ “The Christchurch Call: helping important voices be heard” https://internetnz.nz/Christchurch-Call (Last accessed 2 January 2020)

Dangerous Speech – some legislative proposals

Preface

This piece was written in April 2019. I sat on it for a while and then published it on the Social Science Research Network. It has attracted some interest since it was posted and was recently listed on SSRN’s Top Ten download list for LSN: Criminal Offenses & Defenses. As at 21 January a copy had been downloaded 21 times and there have been 180 abstract views.

Of more interest is the fact that a colleague in the United States has used the paper as a teaching aid for his First Amendment teaching course on the case of Terminiello v City of Chicago 337 U.S. 1 (1949). Terminiello held that a “breach of peace” ordinance of the City of Chicago that banned speech which “stirs the public to anger, invites dispute, brings about a condition of unrest, or creates a disturbance” was unconstitutional under the First and Fourteenth Amendments to the United States Constitution.

My piece, which I have decided to publish on this blog, deals primarily with the position under NZ law. I had not come across Terminiello but it is interesting to see that it comes largely to a similar conclusion. It is a real thrill that the paper has been found to be useful for teaching purposes.

Abstract

This paper considers steps that can be taken to legislate against hate speech.

The first issue is the term “hate speech” itself. In light of the proposals advanced, this emotive and largely meaningless term should be replaced with “dangerous speech”, which more adequately encapsulates the nature of the harm that the law should address.

The existing criminal provisions relating to what I call communications offences are outlined. Proposals are advanced for an addition to the Crimes Act to fill what appears to be a gap in the communications offences and which should be available to both individuals and groups. A brief discussion then follows about section 61 of the Human Rights Act and section 22 of the Harmful Digital Communications Act. It is suggested that major changes to these pieces of legislation are unnecessary.

Communications offences inevitably involve a tension with the freedom of expression under the New Zealand Bill of Rights Act. The discussion demonstrates that the proposals advanced are a justifiable limitation on freedom of expression, but also emphasises that a diverse society must inevitably contain a diversity of opinion which should be freely expressed.

 Introduction

The Context

In the early afternoon of 15 March 2019 a gunman armed with semi-automatic military style weapons attacked two mosques in Christchurch where people had gathered to pray. There were 50 deaths. The alleged gunman was apprehended within about 30 minutes of the attacks. It was found that he had live streamed his actions via Facebook. The stream was viewed by a large number of Facebook members and was shared across Internet platforms.

It also transpired that the alleged gunman had sent a copy of his manifesto entitled “The Great Replacement: Towards a New Society” to a number of recipients using Internet based platforms. Copies of both the live stream and the manifesto have been deemed objectionable by the Chief Censor.[1]

In addition it appears that the alleged gunman participated in discussions on Internet platforms such as 4Chan and 8Chan, which are known for some of their discussion threads advocating White Supremacy and Islamophobic tropes.

The Reaction

There can be no doubt that what was perpetrated in Christchurch amounted to a hate crime. What has followed has been an outpouring of concern primarily at the fact that the stream of the killings was distributed via Facebook and more widely via the Internet.

The response by Facebook has been less than satisfactory, although it would appear that in developing their Livestream facility they were then unable to monitor and control the traffic across it – a digital social media equivalent of Frankenstein’s creature.

However, the killings have focused attention on the wider issue of hate speech and the adequacy of the law to deal with this problem.

Whither “Hate” Speech

The problem with the term “hate speech” is that it is difficult, if not impossible, to define.

Any speech that advocates, incites and intends physical harm to another person must attract legal sanction. It is part of the duty of government to protect its citizens from physical harm.

In such a situation, it matters not that the person against whom the speech is directed is a member of a group or not. All citizens, regardless of any specific identifying characteristics are entitled to be protected from physical harm or from those who would advocate or incite it.

Certain speech may cause harm that is not physical. Such harm may be reputational, economic or psychological. The law provides a civil remedy for such harms.

At the other end of the spectrum – ignoring speech that is anodyne – is the speech that prompts the response “I am offended” – what has been described as the veto statement.[2] From an individual perspective this amounts to a perfectly valid statement of opinion. It may not address the particular argument or engage in any meaningful debate. If anything it is a statement of disengagement akin to “I don’t like what I am hearing.”

Veto Statements

The difficulty arises when such a veto statement claims offence to a group identity. Such groups could include the offended woman, the offended homosexual, the offended person of colour or some other categorization based on the characteristics of a particular group. The difficulty with such veto statements – characterizing a comment as “racist” is another form of veto of the argument – is that they legitimize the purely subjective act of taking offence, generally with negative consequences for others.

Should speech be limited, purely because it causes offence? There are many arguments against this proposition. That which protects people’s rights to say things I find objectionable or offensive is precisely what protects my right to object.  Do we want to live in a society that is so lacking in robustness that we are habitually ready to take offence? Do we want our children to be educated or socialized in this way? Do we desire our children to be treated as adults, or our adults to be treated as children? Should our role model be the thin-skinned individual who cries “I am offended” or those such as Mandela, Baldwin or Gandhi who share the theme that although something may be grossly offensive, it is beneath my dignity to take offence? Those who abuse me demean themselves.

It may well be that yet another veto statement is applied to the mix. What right does a white, privileged, middle-class old male – a member of a secure group – have to say this? It is my opinion that the marginalization of the “I’m offended” veto statement at least opens the door to proper debate and disagreement.

Furthermore, the subjective taking of offence based on group identity ignores the fact that we live in a diverse and cosmopolitan society. The “I’m offended” veto statement discourages diversity and, in particular, diversity of opinion. One of the strengths of our society is its diversity and multi-cultural nature. Within this societal structure are a large number of different opinions. For members of one group to shut down the opinions of another on the basis of mere offence is counter to the diverse society that we celebrate.

The term “hate speech” is itself a veto statement and often an opposing view is labelled as “hate speech”. The problem with this approach seems to be that the listener hates what has been said and therefore considers the proposition must be “hate speech”. This is arrant nonsense. The fact that we may find a proposition hateful to our moral or philosophical sense merely allows us to choose not to listen further. But it does not mean that because I find a point of view hateful that it should be shut down. As Justice Holmes said in US v Schwimmer[3] “if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.”

Our commitment to freedom of expression lies not in allowing others the freedom to say things with which we agree, but in allowing them the right to say things with which we absolutely disagree.

Finally, in considering the nature of the veto statement “I’m offended”, or the categorization of a comment as “hate speech”, where lies the harm? Is anybody hurt? The harm in fact comes in trying to shut down the debate with the use of the veto statement.

Aspects of “Harm”

However, recent thinking has had a tendency to extend the concept of harm suffered by individuals. It is accepted that the law should target physical harm, but should it protect an individual from any sort of harm? Catharine MacKinnon has formulated a view, based on the work of J.L. Austin, that many words or sentiments are essentially indistinguishable from deeds and that, therefore, sexist or misogynistic language should be regarded as a form of violence.[4] This form of assaultive speech can be extended to be available to any group based on distinguishing characteristics or identity.

The emphasis is upon the subjectivity of the person offended. What offence there may be is in the sphere of feelings. It may follow from this that if I do not feel I have been offended then I have not been offended. If we reverse the proposition, only the individual may judge whether or not they have been offended. I would suggest that this element of subjectivity is not the proper interest of the law.

The problem is that such an extension of potentially harmful speech becomes equated with “hate speech” and virtually encompasses any form of critical dialogue. To conflate offence with actual harm means that any sort of dialogue may be impossible.

To commit an offence of violence is to perform an action with objective, observable detrimental physical consequences, the seriousness of which requires the intervention of the law. To give offence is to perform an action – the making of a statement – the seriousness of which is in part dependent upon another person’s interpretation of it.

An example may be given by looking at Holocaust denial. Those who deny the Holocaust may insult the Jewish people. That may compound the injury that was caused by the event itself. But the insult is not identical to the injury. To suggest otherwise is to invite censorship. The denial of the Holocaust is patently absurd. But it needs to be debated as it was when Deborah Lipstadt challenged the assertions of David Irving. In an action brought by Irving for defamation his claims of Holocaust denial were examined and ultimately ridiculed.[5]

Jeremy Waldron is an advocate for limits on speech. He argues that since the aim of “hate speech” is to compromise the dignity of those at whom it is targeted it should be subject to restrictions.[6] Waldron argues that public order means more than an absence of violence but includes the peaceful order of civil society and a dignitary order of ordinary people interacting with one another in ordinary ways based upon an arms-length respect.

So what does Waldron mean by dignity? He relies upon the case of Beauharnais v Illinois[7] where the US Supreme Court upheld the constitutionality of a law prohibiting any material that portrayed “depravity, criminality, unchastity or lack of virtue of a class of citizens, of any race, colour, creed or religion.” On this basis Waldron suggests that those who attack the basic social standing and reputation of a group should be deemed to have trespassed upon that group’s dignity and be subject to prosecution. “Hate speech” regulation, he argues, should be aimed at preventing attacks on dignity and not merely offensive viewpoints. Using this approach I could say that Christianity is an evil religion but I could not say Christians are evil people.

The problem with Waldron’s “identity” approach is that the dignity of the collective is put before the dignity of its individual members. This raises the difficulty of what may be called “groupthink”. If I think of myself primarily as a member of a group I have defined my identity by my affiliation rather than by myself. This group affiliation suggests a certain fatalism – that possibilities are exhausted, perhaps from birth, and that one cannot change. This runs directly against Martin Luther King’s famous statement in which he rejected identity based on race and preferred an individual assessment.

“I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.”

The problem with the proposition that the state should protect its citizens against what Waldron calls “group defamation” is that it runs the risk of its citizens becoming infantilised – that in fact such an approach undermines their individual dignity by assuming that they cannot answer for themselves.

Rather than encouraging people to be thin-skinned, what is required in a world of increasingly intimate diversity is to learn how to be more thick-skinned and to recognize and celebrate the difference that lies in diversity. As Ronald Dworkin put it, no one has a right not to be offended and in fact we should not take offence too readily. In a free society I may be free to feel offended but should not use that offence to interfere with the freedoms of another.

Dangerous Speech

It will be by now apparent that my view is that “hate speech” is a term that should be avoided, although I accept that it is part of the lexicon, whether we like it or not. Perhaps it might be proper to focus upon the type of speech that society should consider to be unacceptable and that warrants the interference of law.

Any interference must be based on reasonableness and demonstrable justification, given that the right of freedom of expression under the Bill of Rights Act is the subject of interference. To warrant such interference I suggest that rather than use the term “hate speech” the threshold for the interference of the law could be termed “dangerous speech” – speech that presents a danger to an individual or group of individuals.

The intentional advocacy or inciting of physical harm may be classified as “dangerous speech” and justifies the intervention of the law. It is non-specific and available both to individuals and the groups identified in the Human Rights Act. In certain circumstances – where there is incitement to or advocacy of actual physical harm, the intervention of the criminal law is justified.

The law also deals with psychological harm of a special type – serious emotional distress. That is a test in the Harmful Digital Communications Act (HDCA). That legislation applies only to online speech. That may be a lesser form of “dangerous speech” but within the context of the provisions of section 22 HDCA such interference is justified. The elements of intention, actual serious emotional distress and the mixed subjective/objective test provide safeguards that could be considered to be a proportionate interference with the freedom of expression and would harmonise the remedies presently available for online speech with those in the physical world.

There are a number of other provisions in the law that deal with forms of speech or communication harms. Some of these warrant discussion because they demonstrate the proper themes that the law should address.

Existing Communications Offences – a summary

The law has been ambivalent towards what could be called speech crimes. Earlier this year the crime of blasphemous libel was removed from the statute book. Sedition and offences similar to it were removed in 2008. Criminal libel was removed as long ago as 1993.

The Crimes Act 1961

At the same time the law has recognized that it must turn its face against those who would threaten to commit offences. Thus section 306 criminalises threatening to kill or do grievous bodily harm to any person, or sending or causing to be received a letter or writing threatening to kill or cause grievous bodily harm. The offence requires knowledge of the contents of the communication.

Section 307 criminalises a letter or writing threatening to destroy or damage any property or to injure any animal, where there is knowledge of the contents of the communication and it is done without lawful justification or excuse and without claim of right.

It will be noted that the type of communication in section 306 may be oral or written but for a threat to damage property the threat must be in writing.

Section 307A is a complicated section.[8] It was added to the Act in 2003 and was part of a number of measures enacted to deal with terrorism after the September 11 2001 tragedy. It has received attention in one case since its enactment – that of Police v Joseph.[9]

Joseph was charged with a breach of s 307A(1)(b) of the Crimes Act 1961 in that he, without lawful justification or reasonable excuse and intending to cause a significant disruption to something that forms part of an infrastructure facility in New Zealand namely New Zealand Government buildings, did communicate information that he believed to be about an act namely causing explosions likely to cause major property damage.

Mr. Joseph, a secondary school student at the time, created a video clip that lasted a little over three minutes. He used his laptop and sent messages of threats to the New Zealand Government accompanied by some images that linked the language with terrorism, such as pictures of the aerial attack on the World Trade Centre and images of Osama Bin Laden. The message:[10]

  • threatened a terror attack on the New Zealand Government and New Zealand Government buildings.
  • claimed that large amounts of explosives had been placed in hidden locations on all buildings.
  • warned that New Zealand Government websites would be taken down.
  • threatened the hacking of New Zealand’s media websites.
  • threatened to disclose all Government secrets that have not been released to Wikileaks nor the public.
  • warned that obstruction would lead to harm.

The clip demanded that the New Zealand Government repeal or refrain from passing an amendment to the Copyright Act 1994. It was posted on 6 September 2010 and a deadline was set for 11 September 2010. The clip was attributed to the hacktivist group known as Anonymous.

The clip was posted to YouTube. It was not available to the public by means of a search. It was unlisted and could only be located by a person who was aware of the link to the particular clip.

The clip came to the attention of the Government Communications Security Bureau (GCSB) on 7 September 2010 who passed the information on to the Police Cybercrime Unit to commence an investigation. An initial communication from the GCSB on the morning of 7 September postulated that the clip could be a “crackpot random threat” and confirmed that its communication was “completely outside the Anonymous MO”.[11]

The site was quickly disabled and Mr. Joseph was spoken to by the Police. He made full admissions of his involvement.

The real issue at the trial was one of intent. The intention had to be a specific one. The Judge found that the intention of the defendant was to have his message seen and observed on the Internet and, although his behaviour in uploading the clip to YouTube in an Internet café and using an alias could be seen as pointing to an awareness of unlawful conduct it did not, however, point to proof of the intention to cause disruption of the level anticipated by the statute. It transpired that the defendant was aware that the clip would probably be seen by the authorities and also that he expected that it would be “taken down”.

The offence prescribed in section 308 does involve communication as well as active behavior. It criminalises the breaking or damaging or the threatening to break or damage any dwelling with a specific intention – to intimidate or to annoy. Annoyance is a relatively low level reaction to the behavior. A specific behavior – the discharging of firearms that alarms or intends to alarm a person in a dwelling house – again with the intention to intimidate or annoy – is provided for in section 308(2).

The Summary Offences Act

The Summary Offences Act contains the offence of intimidation in section 21. Intimidation may be by words or behavior. The “communication” aspect of intimidation is provided in section 21(1) which states:

Every person commits an offence who, with intent to frighten or intimidate any other person, or knowing that his or her conduct is likely to cause that other person reasonably to be frightened or intimidated,—

  • threatens to injure that other person or any member of his or her family, or to damage any of that person’s property;

Thus, there must be a specific intention – to frighten or intimidate – together with a communicative element – the threat to injure the target or a member of his or her family, or damage property.

In some respects section 21 represents a conflation of elements of sections 307 and 308 of the Crimes Act together with a lesser harm threatened – that of injury – than appears in section 306 of that Act.

However, there is an additional offence which cannot be overlooked in this discussion and it is that of offensive behavior or language provided in section 4 of the Summary Offences Act.

The language of the section is as follows:

(1) Every person is liable to a fine not exceeding $1,000 who,—

(a) in or within view of any public place, behaves in an offensive or disorderly manner; or

(b) in any public place, addresses any words to any person intending to threaten, alarm, insult, or offend that person; or

(c) in or within hearing of a public place,—

(i)  uses any threatening or insulting words and is reckless whether any person is alarmed or insulted by those words; or

(ii) addresses any indecent or obscene words to any person.

(2) Every person is liable to a fine not exceeding $500 who, in or within hearing of any public place, uses any indecent or obscene words.

(3) In determining for the purposes of a prosecution under this section whether any words were indecent or obscene, the court shall have regard to all the circumstances pertaining at the material time, including whether the defendant had reasonable grounds for believing that the person to whom the words were addressed, or any person by whom they might be overheard, would not be offended.

(4) It is a defence in a prosecution under subsection (2) if the defendant proves that he had reasonable grounds for believing that his words would not be overheard.

In some respects the consequences of the speech suffered by the auditor (for the essence of the offence relies upon oral communication) resemble those provided in section 61 of the Human Rights Act.

Section 4 was considered by the Supreme Court in the case of Morse v Police.[12] Valerie Morse was convicted in the District Court of behaving in an offensive manner in a public place, after setting fire to the New Zealand flag at the Anzac Day dawn service in Wellington in 2007.

In the District Court, High Court and Court of Appeal offensive behavior was held to mean behaviour capable of wounding feelings or arousing real anger, resentment, disgust or outrage in the mind of a reasonable person of the kind actually subjected to it in the circumstances. A tendency to disrupt public order was not required to constitute behaviour that was offensive. Notwithstanding the freedom of expression guaranteed by NZBORA, the behavior was held to be offensive within the context of the ANZAC observance.

The Supreme Court held that offensive behaviour must be behaviour which gives rise to a disturbance of public order. Although the Judges agreed that disturbance of public order is a necessary element of offensive behaviour under s 4(1)(a), they differed as to the meaning of “offensive” behaviour. The majority considered that offensive behaviour must be capable of wounding feelings or arousing real anger, resentment, disgust or outrage, objectively assessed, provided that it is to an extent which impacts on public order and is more than those subjected to it should have to tolerate. Furthermore it will be seen that a mixed subjective/objective test is present in that the anger, resentment, disgust or outrage must be measured objectively – how would a reasonable person in this situation respond?

It is important to note that in addition to the orality or behavioural quality of the communication – Anderson J referred to it as behavioural expression[13] –  it must take place in or within view of a public place. It falls within that part of the Summary Offences Act that is concerned with public order and conduct in public places. Finally, offensive behavior is behavior that does more than merely create offence.

Observations on Communications Offences

In some respects these various offences occupy points on a spectrum. Interestingly, the offence of offensive behavior has the greatest implications for freedom of expression or expressive behavior, in that the test incorporates a subjective element on the part of the observer. But it also carries the lightest penalty, and as a summary offence can be seen to be the least serious on the spectrum. The section could be applied in the case of oral or behavioural expression against individuals or groups based on colour, race, national or ethnic origin, religion, gender, disability or sexual orientation as long as the tests in Morse are met.

At the other end of the spectrum is section 306 dealing with threats to kill or cause grievous bodily harm, which carries with it a maximum sentence of 7 years imprisonment. This section is applicable to all persons irrespective of colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as are sections 307 and 308, section 21 of the Summary Offences Act and section 22 of the Harmful Digital Communications Act, which could all occupy intermediate points on the spectrum based on the elements of the offence and the consequences that may attend upon a conviction.

There are some common themes to sections 306, 307, 308 of the Crimes Act and section 21 of the Summary Offences Act.

First, there is the element of fear that may be caused by the behavior. Even though intimidation is not specifically an element of the offences under sections 306 and 307, there is a fear that the threat may be carried out.

Secondly there is a specific consequence prescribed – grievous bodily harm or damage to or destruction of property.

Thirdly there is the element of communication or communicative behavior that has the effect of “sending a message”.

These themes assist in the formulation of a speech-based offence that is a justifiable limitation on free speech, that recognizes that there should be some objectively measurable and identifiable harm that flows from the speech, but that does not stifle robust debate in a free and democratic society.

A Possible Solution

There is a change that could be made to the law which would address what appears to be something of a gulf between the type of harm contemplated by section 306 and lesser, yet just as significant harms.

I propose that the following language could cover the advocacy or intentional incitement of actual physical injury against individuals or groups. Injury is a lesser physical harm than grievous bodily harm and fills a gap between serious emotional distress present in the HDCA and the harm contemplated by section 306.

The language of the proposal is technology neutral. It could cover the use of words or communication either orally, in writing, electronically or otherwise. Although I dislike the use of the words “for the avoidance of doubt” in legislation for they imply a deficiency of clarity of language in the first place, there could be a definition of words or communication to include the use of electronic media.

The language of the proposal is as follows:

It is an offence to use words or communication that advocates or intends to incite actual physical injury against an individual or group of individuals based upon, in the case of a group, identifiable particular characteristics of that group

This proposal would achieve a number of objectives. It would capture speech or communications that cause or threaten to cause harm of a lesser nature than grievous bodily harm stated in section 306.

The proposal is based upon ascertaining an identifiable harm caused by the speech or communicative act. This enables the nature of the speech to be crystallised in an objective manner rather than the unclear, imprecise and potentially inconsistent use of the umbrella term “hate speech.”

The proposal would cover speech, words or communication across all media. It would establish a common threshold for words or communication below which an offence would be committed.

The proposal would cover any form of communicative act which was the term used by Anderson J in Morse and which the word “expression” used in section 14 of NZBORA encompasses.

The tension between freedom of expression and the limitations that may be imposed by law is acknowledged. It would probably need to be stated, although it should not be necessary, that in applying the provisions of the section the Court would have to have regard to the provisions of the New Zealand Bill of Rights Act 1990.

Other Legislative Initiatives

The Human Rights Act

There has been consideration of expanding other legislative avenues to address the problem of “dangerous” speech. The first avenue lies in the Human Rights Act which prohibits the incitement of disharmony on the basis of race, ethnicity, colour or national origins. One of the recent criticisms of the legislation is that it does not apply to incitement for reasons of religion, gender, disability or sexual orientation.[14]

Before considering whether such changes need to be made – a different consideration to whether they should be made – it is important to understand how the Human Rights Act works in practice. The Act prohibits a number of discriminatory practices in relation to various activities and services.[15] It also prohibits indirect discrimination which is an effects based form of activity.[16] Victimisation or less favourable treatment based on making certain disclosures is prohibited.[17] Discrimination in advertising along with provisions dealing with sexual or racial harassment are the subject of provisions.[18]

The existing provisions relating to racial disharmony as a form of discrimination and racial harassment are contained in section 61 and 63 of the Act.[19]

There are two tests under section 61. One is an examination of the content of the communication: is it threatening, abusive or insulting? If that has been established, the next test is to consider whether it is likely to:

  1. excite hostility against, or
  2. bring into contempt,

any group of persons either in or coming to New Zealand on the ground of colour, race or ethnic or national origins.

These provisions could well apply to “dangerous speech”. Is it necessary, therefore, to extend the existing categories in section 61 to include religion, gender, disability or sexual orientation?

Religion

Clearly, if religion were added, threatening, abusive or insulting language about adherents of the Islamic faith would fall within the first limb of the section 61(1) test. But is it necessary that religion be added? And should this be simply because a religious group was targeted?

The difficulty with extending the prohibition on threatening, abusive or insulting language to groups based upon religion is that not only would Islamophobic “hate speech” be caught, but so too would the anti-Christian, anti-West, anti-“Crusader” rhetoric of radical Islamic jihadi groups. Would the recent remarks by Winston Peters condemning the implementation of strict sharia law in Brunei that would allow the stoning of homosexuals and adulterers be considered speech that insults members of a religion?[20]

A further difficulty with religious-based speech is that doctrinal differences often lead to strongly voiced disagreement. Doctrinal heresy will frequently be identified as having certain consequences in the afterlife. Doctrinal disputes, often expressed in strong terms, have been characteristic of religious discourse for centuries. Indeed the history of the development of the freedom of expression and the freedom of the press was often set in the context of religious debate and dissent.

It may well be that to add a category of religion or religious groups will have unintended consequences and have the effect of stifling or chilling debate about religious belief.

An example of the difficulty that may arise with restrictions on religious speech may be demonstrated by the statement “God is dead.” This relatively innocuous statement may be insulting or abusive to members of theist groups who would find a fundamental aspect of their belief system challenged. For some groups such a statement may be an invitation to violence against the speaker. Yet the same statement could be insulting or abusive to atheists as well simply for the reason that for God to be dead presupposes the existence of God which challenges a fundamental aspect of atheist belief.

This example illustrates the danger of placing religious discourse into the unlawful categories of discrimination.

If it were to be determined that religious groups would be added to those covered by section 61, stronger wording relating to the consequences of speech should be applicable to such groups. Instead of merely “exciting hostility against” or “bringing into contempt” based upon religious differences, perhaps the wording should be “advocating and encouraging physical violence against”.

This would have the effect of being a much stronger test than exists at present under section 61 and recognizes the importance of religious speech and doctrinal dispute.

Gender, Disability or Sexual Orientation

The Human Rights Act already has provisions relating to services-based discrimination on these additional grounds. The question is whether or not there is any demonstrated need to extend the categories protected under section 61 to these groups.

Under the current section 61 test, any threatening, abusive or insulting language directed towards or based upon gender, disability or sexual orientation could qualify as “hate speech” if the speech was likely to excite hostility against or bring into contempt a group of persons. The difficulty lies not so much with threatening language, which is generally clear and easy to determine, but with language which may be abusive or insulting.

Given the sensitivities that many have and the ease with which many are “offended” it could well be that a softer and less robust approach may be taken to what constitutes abusive or insulting language.

For this reason the test surrounding the effect of such speech needs to be abundantly clear. If the categories protected by section 61 are to be extended there must be a clear causative nexus between the speech and the exciting of hostility or the bringing into contempt. Alternatively the test could be strengthened, as suggested above, by replacing the test of exciting hostility or bringing into contempt with “advocating and encouraging physical violence against”.

It should be observed that section 61 covers groups that fall within the protected categories. Individuals within those groups have remedies available to them under the provisions of the Harmful Digital Communications Act 2015.

The Harmful Digital Communications Act 2015

The first observation that must be made is that the Harmful Digital Communications Act 2015 (HDCA) is an example of Internet Exceptionalism in that it deals only with speech communicated via electronic means. It does not cover speech that may take place in a physical public place, by a paper pamphlet or other form of non-electronic communication.

The justification for such exceptionalism was considered by the Law Commission in the Ministerial Briefing Paper.[21] It was premised upon the fact that digital information is pervasive, its communication is not time limited and can take place at any time – thus extending the reach of the cyber-bully – and it is often shared among groups with consequent impact upon relationships. These are some of the properties of digital communications systems to which I have made reference elsewhere.[22]

A second important feature of the HDCA is that the remedies set out in the legislation are not available to groups. They are available only to individuals. Individuals are defined as “natural persons” and applications for civil remedies can only be made by an “affected individual” who alleges that he or she has suffered or will suffer harm as a result of a digital communication.[23] Under section 22 – the offence section – the victim of an offence is the individual who is the target of a posted digital communication.[24]

The HDCA provides remedies for harmful digital communications. A harmful digital communication is one which:

  1. is communicated electronically, and includes any text message, writing, photograph, picture, recording, or other matter;[25] and
  2. causes harm – that is, serious emotional distress.

In addition there are ten communications principles.[26] Section 6(2) of the Act requires the Court to take these principles into account in performing functions or exercising powers under the Act.

For the purposes of a discussion about “dangerous speech” principles 2, 3, 8 and 10 are relevant. Principle 10 extends the categories present in section 61 of the Human Rights Act to include those discussed above.

The reason for the difference is that the consequences of a harmful digital communication are more of an individual and personal nature. Harm or serious emotional distress must be caused. This may warrant an application for an order pursuant to section 19 of the Act – what may be described as a civil enforcement order. A precondition to an application for any of the orders pursuant to section 19 is that the matter must be considered by the Approved Agency – presently Netsafe.[27] If Netsafe is unable to resolve the matter, then it is open to the affected individual to apply to the District Court.

The orders that are available are not punitive but remedial in nature. They include an order that the communication be taken down or access to it be disabled; that there be an opportunity for a reply or for an apology; that there be a form of restraining order so that the defendant is prohibited from re-posting the material or encouraging others to do so.

In addition, orders may be made against online content hosts requiring them to take material down, or to disclose the details and particulars of a subscriber who may have posted a harmful digital communication. Internet Service Providers (described in the legislation as IPAPs) may be required to provide details of an anonymous subscriber to the Court.

It should be noted that the element of intending harm need not be present on the part of the person posting the electronic communication. In such a situation the material is measured against the communications principles along with evidence that the communication has caused serious emotional distress.

Section 22 – Causing harm by posting a digital communication

The issue of intentional causation of harm is covered by section 22 of the Act. A mixed subjective-objective test is required for the assessment of content. The elements necessary for an offence under section 22 HDCA are as follows:

A person must post a digital communication with a specific intention – that it cause harm to a victim;

It must be proven that the posting of the communication would cause harm to an ordinary reasonable person in the position of the victim;

Finally, the communication must cause harm to the victim.

Harm is defined as serious emotional distress. In addition the Court may take a number of factors into account in determining whether a post may cause harm:

  1. the extremity of the language used:
  2. the age and characteristics of the victim:
  3. whether the digital communication was anonymous:
  4. whether the digital communication was repeated:
  5. the extent of circulation of the digital communication:
  6. whether the digital communication is true or false:
  7. the context in which the digital communication appeared.

The requirement that harm be intended as well as caused has been the subject of some criticism. If there has been an intention to cause harm, is it necessary that there be proof that harm was caused? Similarly, surely it is enough that harm was caused even if it were not intended?

As to the first proposition it must be remembered that section 22 criminalises a form of expression. The Law Commission was particularly concerned that the bar should be set high, given the freedom of expression provisions in section 14 of the New Zealand Bill of Rights Act 1990. If expression is to be criminalised, the consequences of that expression must warrant the involvement of the criminal law and must be accompanied by the requisite mens rea or intention.

As to the second proposition, the unintended causation of harm is covered by the civil enforcement provisions of the legislation. To eliminate the element of intention would make the offence one of strict liability – an outcome reserved primarily for regulatory or public interest types of offence.

The Harmful Digital Communications Act and “Dangerous Speech”

Could the HDCA in its current form be deployed to deal with “dangerous speech”? The first thing to be remembered is that the remedies in the legislation are available only to individuals. Thus if there were a post directed towards members of a group, an individual member of that group could consider proceedings.

Would that person be “a victim” within the meaning of section 22? It is important to note that the indefinite article is used rather than the definite one. Conceivably if a post were made about members of a group the collective would be the target of the communication and thus every individual member of that collective could make a complaint and claim to be a target of the communication under section 22(4).

To substantiate the complaint it would be necessary to prove that the communication caused serious emotional distress,[28] which may arise from the cumulation of a number of factors.[29] Whether the communication fulfilled the subjective/objective test in section 22(1)(b) would, it is suggested, be clear if the communication amounted to “hate speech”, taking into account the communications principles, along with the factors in section 22(2)(a)–(g). The issue of intention to cause harm could be discerned either directly or by inference from the nature of the language used in the communication.

In addition it is suggested that the civil remedies would also be available to a member of a group to whom “dangerous speech” was directed. Even though a group may be targeted, an individual member of the group would qualify as an affected individual if serious emotional distress were suffered. A consideration of the communications principles and whether or not the communication was in breach of those principles would be a relatively straightforward matter of interpretation.

The Harmful Digital Communications Act in Action

Although the legislation was principally targeted at cyber-bullying by young people, most of the prosecutions under the Act have arisen in the context of relationship failures or breakdowns and have often involved the transmission of intimate images or videos – a form of what the English refer to as “revenge porn”. There have been a relatively large number of prosecutions under section 22 – something that was not anticipated by the Law Commission in its Briefing Paper.[30]

Information about the civil enforcement process is difficult to obtain. Although the Act is clear that decisions in proceedings, including reasons, must be published,[31] there are, to my knowledge, no decisions available on any website.

From my experience there are two issues that arise regarding the civil enforcement process. The first is the way the cases come before the Court. When the legislation was enacted the then Minister of Justice, Judith Collins, considered that the Law Commission recommendation that there be a Communications Tribunal to deal with civil enforcement applications was not necessary and that the jurisdiction under the legislation would form part of the normal civil work of the District Court.

Because of pressures on the District Court, civil work does not receive the highest priority and Harmful Digital Communications applications take their place as part of the ordinary business of the Court. This means that the purpose of the Act in providing a quick and efficient means of redress for victims is not being fulfilled.[32] One case involving communications via Facebook in January of 2017 has been the subject of several part-heard hearings and has yet to be concluded. Even if the Harmful Digital Communications Act is not to be deployed to deal with “dangerous speech”, it is suggested that consideration be given to the establishment of a Communications Tribunal as recommended by the Law Commission so that hearings of applications can be fast-tracked.

The second issue surrounding the civil enforcement regime involves jurisdiction over off-shore online content hosts such as Facebook, Twitter, Instagram and the like. Although Facebook and Google have been cited as parties and have been served in New Zealand, they do not acknowledge the jurisdiction of the Court but nevertheless indicate a willingness to co-operate with its requests without submitting to its jurisdiction.

In my view the provisions of Subpart 3 of Part 6 of the District Court Rules would be applicable. These provisions allow service outside New Zealand as a means of establishing the jurisdiction of the New Zealand Courts. The provisions of Rule 6.23 relating to service without leave are not applicable and, as the law stands, the leave of the Court would have to be sought to serve an offshore online content host. This is a complex process that requires a number of matters to be addressed about a case before leave may be granted. Once leave has been granted there may be a protest to the jurisdiction by the online content host before the issue of jurisdiction could be established.

One possible change to the law might be an amendment to Rule 6.23 allowing service of proceedings under the HDCA without the leave of the Court. There would still be the possibility that there would be a protest to the jurisdiction but if that could be answered it would mean that the Courts would be able to properly make orders against offshore online content hosts.

Are Legislative Changes Necessary?

It will be clear by now that the law relating to “dangerous speech” in New Zealand does not require major widespread change or reform. What changes may be needed are relatively minor and maintain the important balance contained in the existing law between protecting citizens or groups from speech that is truly harmful and ensuring that the democratic right to freedom of expression is preserved.

The Importance of Freedom of Expression

The New Zealand Bill of Rights Act 1990

The New Zealand Bill of Rights Act 1990 (NZBORA) provides at section 14

“Everyone has the right to freedom of expression, including the freedom to seek, receive, and impart information and opinions of any kind in any form.”

This right is not absolute. It is subject to section 5 which provides “the rights and freedoms contained in this Bill of Rights may be subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.”

Section 4 reinforces the concept of Parliamentary supremacy. If a specific piece of legislation conflicts or is inconsistent with NZBORA, the specific piece of legislation prevails. Thus, specific pieces of legislation which impose restrictions or limitations upon freedom of expression – such as the Human Rights Act 1993 and the Harmful Digital Communications Act 2015 – prevail although if an enactment can be given a meaning that is consistent with the rights and freedoms contained in NZBORA, that meaning shall be preferred to any other meaning.[33]

This then provides a test for considering limitations or restrictions on the rights under NZBORA. Limitations must be reasonable and must be demonstrably justified within the context of a free and democratic society.

Thus, when we consider legislation that may impinge upon or limit the freedom of expression the limitation must be

  1. Reasonable
  2. Demonstrably justified
  3. Yet recognizing that we live in a free and democratic society.

The justified limitations test contains within it a very real tension. On the one hand there is a limitation on a freedom. On the other there is a recognition of freedom in that we live in a free and democratic society. I would suggest that although NZBORA does not use this language, the emphasis upon a free and democratic society, and the requirement of reasonableness and demonstrable justification imports an element of necessity. Is the limitation of the freedom necessary?

The problem with freedom of expression is that it is elusive. What sort of limitations on the freedom of expression may be justified?

Freedom of Expression in Practice

The reality with freedom of expression is that it is most tested when we hear things with which we disagree. It is not limited to the comfortable space of agreeable ideas.

Salman Rushdie said that without the freedom to offend, the freedom of expression is nothing. Many critics in current debates seem to conflate the freedom to express ideas with the validity of those ideas, and their adverse judgment of the latter leads them to deny the former.

The case of Redmond-Bate v DPP [1999] EWHC Admin 733[34] concerned two women who were arrested for preaching on the steps of a church. Sedley LJ made the following comments:[35]

“I am unable to see any lawful basis for the arrest or therefore the conviction. PC Tennant had done precisely the right thing with the three youths and sent them on their way. There was no suggestion of highway obstruction. Nobody had to stop and listen. If they did so, they were as free to express the view that the preachers should be locked up or silenced as the appellant and her companions were to preach. Mr. Kealy for the prosecutor submitted that if there are two alternative sources of trouble, a constable can properly take steps against either. This is right, but only if both are threatening violence or behaving in a manner that might provoke violence. Mr. Kealy was prepared to accept that blame could not attach for a breach of the peace to a speaker so long as what she said was inoffensive. This will not do. Free speech includes not only the inoffensive but the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative provided it does not tend to provoke violence. Freedom only to speak inoffensively is not worth having. What Speakers’ Corner (where the law applies as fully as anywhere else) demonstrates is the tolerance which is both extended by the law to opinion of every kind and expected by the law in the conduct of those who disagree, even strongly, with what they hear. From the condemnation of Socrates to the persecution of modern writers and journalists, our world has seen too many examples of state control of unofficial ideas. A central purpose of the European Convention on Human Rights has been to set close limits to any such assumed power. We in this country continue to owe a debt to the jury which in 1670 refused to convict the Quakers William Penn and William Mead for preaching ideas which offended against state orthodoxy.”

One way of shutting down debate and the freedom of expression is to deny a venue, as we saw in the unwise decision of Massey University Vice-Chancellor Jan Thomas to deny Mr Don Brash a chance to speak on campus. Auckland Council did the same with the recent visit by the speakers Lauren Southern and Stefan Molyneux.

Lord Justice Sir Stephen Sedley (who wrote the judgment in Redmond-Bate v DPP above), writing extrajudicially, commented on platform denial in this way:

” A great deal of potentially offensive speech takes place in controlled or controllable forums – schools, universities, newspapers, broadcast media – which are able to make and enforce their own rules. For these reasons it may be legitimate to criticise a periodical such as Charlie Hebdo for giving unjustified offence – for incivility, in other words – without for a moment wanting to see it or any similarly pungent periodical penalised or banned. Correspondingly, the “no platform” policies adopted by many tertiary institutions and supported in general by the National Union of Students are intended to protect minorities in the student body from insult or isolation. But the price of this, the stifling of unpopular or abrasive voices, is a high one, and it is arguable that it is healthier for these voices to be heard and challenged. Challenge of course brings its own problems: is it legitimate to shout a speaker down? But these are exactly the margins of civility which institutions need to think about and manage. They are not a justification for taking sides by denying unpopular or abrasive speakers a platform.”[36]

So the upshot of all this is that we should be careful not to overreact in efforts to control, monitor, stifle or censor speech with which we disagree but which may not cross the high threshold of “dangerous speech”. We should be equally careful about trying to hobble the Internet platforms and the ISPs. Because of the global distributed nature of the Internet it would be wrong for anyone to impose their local values upon a worldwide communications network. The only justifiable solution would be one that involved international consensus and a recognition of the importance of freedom of expression.

Conclusion

The function of government is to protect its citizens from harm and to hold those who cause harm accountable. By the same token a free exchange of ideas is essential in a healthy and diverse democracy. In such a way diversity of opinion is as essential as the diversity of those who make up the community.

I have posited a solution that recognizes and upholds freedom of expression and yet acknowledges that there is a threshold below which untrammeled freedom of expression can cause harm. It is when expression falls below that threshold that the interference of the law is justified.

I have based my proposal upon an identifiable and objective consequence – speech which is dangerous – rather than upon the term “hate speech”. Indeed there are some who suggest that mature democracies should move beyond “hate speech” laws.[37] Ash suggests that it is impossible to reach a conclusive verdict upon the efficacy of “hate speech” laws, and that there is scant evidence that mature democracies with extensive hate speech laws manifest any less racism, sexism or other kinds of prejudice than those with few or no such laws.[38] Indeed, it has been suggested that the application of “hate speech” laws has been unpredictable and disproportionate. A further problem is that such laws tend to encourage people to take offence rather than learn to live with the fact that there is a diversity of opinions, or to ignore it, or to deal with it by speaking back – preferably with reasoned argument rather than veto statements.

It is for this reason that I have approached the problem from the perspective of objective, identifiable harm rather than wrestling with the very fluid concept of “hate speech.” For that I may be criticized for ducking the issue. The legal solution proposed is a suggested way of confronting the issue rather than ducking it. It preserves freedom of expression as an essential element of a healthy and functioning democracy yet recognizes that there are occasions when individuals and members of groups may be subjected to physical danger arising from forms of expression.

What is essential is that the debate should be conducted in a measured, objective and unemotive manner. Any interference with freedom of expression must be approached with a considerable degree of care. An approach based upon an objectively identifiable danger rather than an emotive concept such as “hate” provides a solution.

[1] Presumably on the grounds that they depict, promote or encourage crime or terrorism or that the publication is injurious to the public good. See the definition of objectionable in the Films Videos and Publications Classification Act 1993

[2] Timothy Garton Ash Free Speech: Ten Principles for a Connected World (Atlantic Books, London 2016) p. 211

[3] US v Schwimmer 279 US 644 (1929)

[4] Daphne Patai Heterophobia: sexual harassment and the future of feminism (Rowman and Littlefield, Lanham 1998).

[5] See Irving v Penguin Books Ltd [2000] EWHC QB 115.

[6] Jeremy Waldron The Harm in Hate Speech (Harvard University Press, Cambridge, 2012) p. 120.

[7] Beauharnais v Illinois 343 US 250 (1952).

[8] Section 307A reads as follows:

307A Threats of harm to people or property

(1)           Every one is liable to imprisonment for a term not exceeding 7 years if, without lawful justification or reasonable excuse, and intending to achieve the effect stated in subsection (2), he or she—

(a)           threatens to do an act likely to have 1 or more of the results described in subsection (3); or

(b)           communicates information—

(i)            that purports to be about an act likely to have 1 or more of the results described in subsection (3); and

(ii)           that he or she believes to be false.

(2)           The effect is causing a significant disruption of 1 or more of the following things:

(a)           the activities of the civilian population of New Zealand:

(b)           something that is or forms part of an infrastructure facility in New Zealand:

(c)            civil administration in New Zealand (whether administration undertaken by the Government of New Zealand or by institutions such as local authorities, District Health Boards, or boards of trustees of schools):

(d)           commercial activity in New Zealand (whether commercial activity in general or commercial activity of a particular kind).

(3)           The results are—

(a)           creating a risk to the health of 1 or more people:

(b)           causing major property damage:

(c)            causing major economic loss to 1 or more persons:

(d)           causing major damage to the national economy of New Zealand.

(4)           To avoid doubt, the fact that a person engages in any protest, advocacy, or dissent, or engages in any strike, lockout, or other industrial action, is not, by itself, a sufficient basis for inferring that a person has committed an offence against subsection (1).

[9] [2013] DCR 482. For a full discussion of this case see David Harvey Collisions in the Digital Paradigm: Law and rulemaking in the Internet Age (Hart Publishing, Oxford, 2017) at p. 268 and following.

[10] Police v Joseph above at [2].

[11] Ibid at [7].

[12] [2011] NZSC 45.

[13] Ibid at para [123].

[14] See Human Rights Commission chief legal advisor Janet Bidois quoted in Michelle Duff “Hate crime law review fast-tracked following Christchurch mosque shootings” Stuff 30 March 2019. https://www.stuff.co.nz/national/christchurch-shooting/111661809/hate-crime-law-review-fasttracked-following-christchurch-mosque-shooting

[15] Human Rights Act 1993 sections 21 – 63.

[16] Ibid section 65.

[17] Ibid section 66.

[18] Ibid sections 67 and 69.

[19] The provisions of section 61(1) state:

(1)           It shall be unlawful for any person—

(a)           to publish or distribute written matter which is threatening, abusive, or insulting, or to broadcast by means of radio or television or other electronic communication words which are threatening, abusive, or insulting; or

(b)           to use in any public place as defined in section 2(1) of the Summary Offences Act 1981, or within the hearing of persons in any such public place, or at any meeting to which the public are invited or have access, words which are threatening, abusive, or insulting; or

(c)            to use in any place words which are threatening, abusive, or insulting if the person using the words knew or ought to have known that the words were reasonably likely to be published in a newspaper, magazine, or periodical or broadcast by means of radio or television,—

being matter or words likely to excite hostility against or bring into contempt any group of persons in or who may be coming to New Zealand on the ground of the colour, race, or ethnic or national origins of that group of persons.

It should be noted that Internet based publication is encompassed by the use of the words “or other electronic communication”.

[20] Derek Cheng “Winston Peters criticizes Brunei for imposing strict Sharia law” NZ Herald 31 March 2019 https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12217917

[21] New Zealand Law Commission Ministerial Briefing Paper Harmful Digital Communications: The adequacy of the current sanctions and remedies (New Zealand Law Commission, Wellington, August 2012) https://www.lawcom.govt.nz/sites/default/files/projectAvailableFormats/NZLC%20MB3.pdf (last accessed 26 April 2019)

[22] See David Harvey Collisions in the Digital Paradigm: Law and Rulemaking in the Internet Age (Hart Publishing, Oxford, 2017) especially at Chapter 2.

[23] Harmful Digital Communications Act 2015 section 11.

[24] Harmful Digital Communications Act 2015 section 22(4).

[25] It may also include a consensual or non-consensual intimate video recording

[26] Harmful Digital Communications Act 2015 section 6. These principles are as follows:

Principle 1  A digital communication should not disclose sensitive personal facts about an individual.

Principle 2  A digital communication should not be threatening, intimidating, or menacing.

Principle 3  A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.

Principle 4 A digital communication should not be indecent or obscene.

Principle 5  A digital communication should not be used to harass an individual.

Principle 6  A digital communication should not make a false allegation.

Principle 7  A digital communication should not contain a matter that is published in breach of confidence.

Principle 8  A digital communication should not incite or encourage anyone to send a message to an individual for the purpose of causing harm to the individual.

Principle 9  A digital communication should not incite or encourage an individual to commit suicide.

Principle 10 A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability.

[27] http://netsafe.org.nz

[28] Harmful Digital Communications Act Section 22(1)(c)

[29] See Police v B [2017] NZHC 526.

[30] For some of the statistics on prosecutions under the Act see Nikki MacDonald “Revenge Porn: Is the Harmful Digital Communications Act Working?” 9 March 2019 https://www.stuff.co.nz/national/crime/110768981/revenge-porn-is-the-harmful-digital-communications-act-working

[31] Harmful Digital Communications Act Section 16(4)

[32] Harmful Digital Communications Act Section 3(b)

[33] See New Zealand Bill of Rights Act section 6. Note also that the Harmful Digital Communications Act provides at section 6 that in performing its functions or exercising powers under the Act the Approved Agency and the Courts must act consistently with the rights and freedoms provided in NZBORA.

[34] [1999] EWHC Admin 733.

[35] Ibid at para [20].

[36] Stephen Sedley Law and the Whirligig of Time (Hart Publishing, Oxford, 2018) p. 176-177. The emphasis is mine.

[37] For example see Timothy Garton Ash Free Speech: Ten Principles for a Connected World (Atlantic, London 2016) especially at 219 and following.

[38] Ibid.

Do Social Network Providers Require (Further?) Regulation – A Commentary

This is a review and commentary of the Sir Henry Brooke Student Essay Prize winning essay for 2019. The title of the essay topic was “Do Social Network Providers Require (Further?) Regulation?”

Sir Henry Brooke was a Court of Appeal judge in England. In retirement he became a tireless campaigner on issues including access to justice. His post-judicial renown owed much to his enthusiastic adoption of digital technology: he spearheaded early initiatives for technology in the courts and was the first Chair of the British and Irish Legal Information Institute (BAILII), a website that provides access to English and Irish case and statute law. Upon his retirement many came to know him through his blog and tweets. He drafted significant sections of the Bach Commission’s final report on access to justice, and acted as patron to a number of justice organisations including the Public Law Project, Harrow Law Centre and Prisoners Abroad.

The SCL (Society for Computers and Law) Sir Henry Brooke Student Essay Prize honours his legacy. For 2019 the designated essay question was a 2,000 to 2,500 word response to the prompt “Do social network providers require (further?) regulation?” The winner was Robert Lewis from the University of Law. His essay considers some of the regulatory responses to social media. His starting point is the events of 15 March 2019 in Christchurch.

The first point that he makes is that

“(h)orrors such as Christchurch should be treated cautiously: they often lead to thoughtless or reflexive responses on the part of the public and politicians alike.”

One of his concerns is the possibility of regulation by outrage, given the apparent lack of accountability of social networking platforms.

He then goes on to examine some examples of legislative and legal responses following 15 March and demonstrates the problem with reflexive responses. He starts with the classification of the livestream footage and the manifesto posted by the alleged shooter. He refers to a warning by the Department of Internal Affairs that those in possession of the material should delete it.

He then examines some of the deeper ramifications of the decision. Classification instantly rendered any New Zealander with the video still in his computer’s memory cache, or in any of his social media streams, knowingly or not, potentially guilty of a criminal offence under s 131 of the Films, Videos, and Publications Classification Act 1993. He comments

“Viewing extracts of  the footage shown on such websites was now illegal in New Zealand, as was the failure to have adequately wiped your hard drive having viewed the footage prior to its classification. A significant proportion of the country’s population was, in effect, presented with a choice: collective self-censorship or criminality.”

Whilst he concedes that the decision may have been an example of civic responsibility, in his opinion it did not make good law. Mr. Lewis points out that the legislation was enacted in 1993 just as the Internet was going commercial. His view is that the law targets film producers, publishers and commercial distributors, pointing out that

“these corporate entities have largely been supplanted by the social network providers who enjoy broad exemptions from the law, which has instead been inverted to criminalise “end users”, namely the public which the law once served to protect.”

He also made observations about the maximum penalties which are minimal against the revenue generated by social media platforms.

He then turned his attention to the arrest of a 22-year-old man charged with sharing the objectionable video online. He commented

“that faced with mass public illegality, and a global corporation with minimal liability, New Zealand authorities may have sought to make an example of a single individual. Again, this cannot be good law.”

Mr. Lewis uses this as a springboard for a discussion of the “safe harbour” provisions of the Communications Decency Act (US) and EU Directive 2000/31/EC, which shield social network providers from liability for the material they publish or distribute.

Mr Lewis gives a telling example of some of the difficulties created by social media platforms releasing state secrets, and the use of that released information as evidence in unrelated cases. He observes

“The regulatory void occupied by social network providers neatly mirrors another black hole in Britain’s legal system: that of anti-terrorism and state security. The social network providers can be understood as part of the state security apparatus, enjoying similar privileges, and shrouded in the same secrecy. The scale of their complicity in data interception and collection is unknown, as is the scale and level of the online surveillance this apparatus currently performs. The courts have declared its methods unlawful on more than one occasion and may well do so again.”

A theme that becomes clear from his subsequent discussion is that the current situation of apparently unregulated social media networks is evidence of a collision between law designed for a pre-digital environment and expectations about how the law should apply in the digital paradigm. For example, he observes that

“The newspapers bear legal responsibility for their content. British television broadcasters are even under a duty of impartiality and accuracy. In contrast, social network providers are under no such obligations. The recent US Presidential election illustrates how invidious this is.”

He also takes a tilt at those who describe the Internet as “the Wild West”.

“This is an unfortunate phrase. The “wild west” was lawless: the lands of the American west, prior to their legal annexation by the United States, were without legal systems, and any pre-annexation approximation of one was illegal in and of itself. In contrast, the social network providers reside in highly developed, and highly regulated, economies where they are exempted from certain legal responsibilities. These providers have achieved enormous concentrations of capital and political influence for precisely this reason.”

He concludes with the observation that unlawful behaviour arises from a failure to apply the law as it exists and ends with a challenge:

“ In England, this application – of a millennium-old common law tradition to a modern internet phenomenon such as the social networks – is the true task of the technology lawyer. The alternative is the status quo, a situation where the online publishing industry has convinced lawmakers “that its capacity to distribute harmful material is so vast that it cannot be held responsible for the consequences of its own business model.””

The problem that I have with this essay is that it identifies a number of difficulties but, apart from proposing that the solution lies in the hands of technology lawyers, offers no coherent solution. It cites examples of outdated laws, of the difficulty of retroactive solutions, and of the mixed blessings and problems accompanying social media platforms. The question really is whether the benefits these new communications platforms provide outweigh their disadvantages. There are a number of factors which should be considered.

First, we must recognize that in essence social media platforms enhance and enable communication and the free exchange of ideas – albeit that they may be banal, maudlin or trivial – which is a value of the democratic tradition.

Secondly, we must recognize, and should not resent, the fact that social media platforms are able to monetise the mere presence of users of the service. This seems to be done in a number of ways, some of which may appear arcane, but they reflect the basic concept of what Robert A. Heinlein called TANSTAAFL – there ain’t no such thing as a free lunch. Users should not expect a service provided by others to be absolutely free.

Thirdly, we must put aside doctrinaire criticisms of social media platforms as overwhelmingly large businesses with global reach. Doing business on the Internet per se involves being in a business with global reach. The Internet extends beyond our traditional Westphalian concepts of borders, sovereignty and jurisdiction.

Fourthly, we must recognize that the Digital Paradigm by its very nature has within it various aspects – I have referred to them elsewhere as properties – that challenge and contradict many of our earlier pre-digital expectations of information and services. In this respect many of our rules which have a basis in underlying qualities of earlier paradigms and the values attaching to them are not fit for purpose. But does this mean that we adapt those rules to the new paradigm and import the values (possibly no longer relevant) underpinning them or should we start all over with a blank slate?

Fifthly, we must recognize that two of the realities in digital communications have been permissionless innovation – a concept that allows a developer to bolt an application on to the backbone – and associated with that innovation, continuous disruptive change.

These are two of the properties I have mentioned above. What we must understand is that if we start to interfere with, say, permissionless innovation and tie the Internet up with red tape, we may be, if not destroying, then seriously inhibiting the further development of this communications medium. Such a solution would, of course, be attractive to totalitarian regimes that do not share democratic values such as freedom of expression.

Sixthly, we have to accept that disruptive change in communications methods, behaviours and values is a reality. Although it may be comfortable to yearn for a nostalgic but non-existent pre-digital Golden Age, by the time such yearning is expressed it is already too late. If we drive focused on the rear-view mirror we are not going to recognize the changes on the road ahead. Thus, the reality of modern communications is that ideas to which we may not have been exposed by monolithic mainstream media are now being made available. Extreme views, which may, in another paradigm, have been expressed within a small coterie, are now accessible to all who wish to read or see them. This may be an uncomfortable outcome for many, but it does not mean that these views have only just begun to be expressed. They have been around for some time. It is just that the property of exponential dissemination means that these views are now available. And because of the nature of the Internet, many of these views may not in any event be available to all or even searchable, located, as many of them are, away from the gaze of search engines on the Dark Web.

Seventhly, it is only once we understand not only the superficial content layer but also the deeper implications of the digital paradigm – McLuhan expressed it as “the medium is the message” – that we can begin to develop the regulatory strategies we need.

Eighthly, in developing regulatory strategies we must ask ourselves whether they are NECESSARY. What evil are the policies meant to address? As I have suggested above, the fact that a few social media and digital platforms are multi-national organisations with revenue streams greater than the GDP of a small country is not a sufficient basis for regulation per se – unless the regulating authority wishes to maintain its particular power base. But then, who is to say that Westphalian sovereignty has not had its day? Furthermore, it is my clear view that any regulatory activity must be the minimum required to address the particular evil. And care must be taken to avoid the “unintended consequences” to which Mr Lewis has referred and some of which I have mentioned above.

Finally, we are faced with an almost insoluble problem when it comes to regulation in the Digital Paradigm. It is this. The legislative and regulatory process is slow, although the changes to New Zealand’s firearms legislation post 15 March could be said to have been made with unusual haste; the effect was that the actions of one person resulted in relieving a large percentage of the population of their lawfully acquired property. Normally the pace of legislative or regulatory change is slow, deliberative and time consuming.

On the other hand, change in the digital paradigm is extremely fast. For example, when I started my PhD thesis in 2004 I contemplated doing something about digital technologies. As it happens I didn’t, and looked at the printing press instead. But by the time my PhD was conferred, social media had happened. And now legislators are looking at social media as if it were new, but by Internet standards it is a mature player. The next big thing is already happening, and by the time we have finally worked out what we are going to do about social media, artificial intelligence will be demanding attention. And by the time legislators get their heads around THAT technology in all its multiple permutations, something else – perhaps quantum computing – will be with us.

I am not saying therefore that regulating social media should be put in the “too hard” basket but that what regulation there is going to be must be focused, targeted, necessary, limited to a particular evil and done with a full understanding of the implications of the proposed regulatory structures.

Facebook and the Printing Press

A recent article in the New Zealand Herald cites historian Niall Ferguson as drawing comparisons between the early days of the printing press and the current freewheeling Digital Paradigm. The argument is that we should learn from the lessons of history.

There is no comparison between the technologies.

To suggest that the printing press enjoyed the “permissionless innovation” afforded by internet and digital technologies ignores the fact that in England the press was under the control of the Stationers Guild (a Company after 1556), which licensed what printers could print and kept a very close eye on what printers did. Indeed, its control was such that the Universities of Oxford and Cambridge were the only sites of presses outside London.
Then there was state regulation of printing, which took a number of forms. The Royal Stationer – later the Royal Printer – was responsible for printing the King’s view on things: statutes, proclamations and the like. Thomas Cromwell used the press to great effect during the English Reformation. It was he who used preambles in statutes to identify the “mischief” that the statute was intended to remedy.
After the incorporation of the Stationers (during the reign of Mary I) it was anticipated that the Company would aid the State by using its newly granted search powers to root out the printers of heretical tracts. However, the power was deployed to root out unlicensed printers who were not members of the Stationers.
There were also many other efforts by the State to regulate content, some more successful than others. The Star Chamber Decrees of 1587 and 1634 were rather dramatic examples. The Decrees were in fact judgments of the Court in cases involving printing disputes.
Just prior to the Civil War the power of Star Chamber was nullified, and printers enjoyed considerable freedom and lack of regulation, but this did not last once Oliver Cromwell and the Puritans gathered strength.
After the Restoration there was significant regulation both of printers and of the content of the press by means of Licensing Acts, the first of which was passed in 1662 and renewed regularly thereafter until 1694. Charles II’s enforcer as far as print was concerned was a phanatick (to use the spelling adopted by Neal Stephenson in his Baroque Cycle) by the name of Roger L’Estrange – a very nasty piece of work both by the standards of his time and ours.
In 1694 the Licensing Acts came to an end, primarily as a result of political strife within a greater context, and until 1710 there was a lack of restriction on printing. This all changed when the focus moved from the printer to the author as the person who should have control of content; the Statute of Anne was the first Copyright Act.
So to say that there is a parallel between Silicon Valley’s freedom to develop platforms and bolt them on to the Internet and the early history of the printing press is wrong. Indeed, the whole structure of the communications technologies is different. The printing press was the technology, and essentially books, magazines, pamphlets and papers were the medium. Today the Internet is the communications technology, and Facebook, Twitter, blogs and the like are platforms bolted on to it. The absence of red tape (what I call permissionless innovation) is what has enabled the growth of the Internet and the proliferation of platforms.
The call is for regulation, but regulation of what? Better to have a regulatory plan in place that we can discuss rather than disembodied pleas to “do something”. Perhaps we could turn to history, but I think we have moved on from the semi-absolutist model of the Tudors and Stuarts.

Fearing Technology Giants

On 15 January 2018 opinion writer Deborah Hill Cone penned a piece entitled “Why tech giants need a kick in the software”.

Not a lot of it is very original; it echoes many of the arguments in Jonathan Taplin’s “Move Fast and Break Things.” I have already critiqued some of Taplin’s propositions in my earlier post Misunderstanding the Internet. Over the Christmas break I revisited Mr. Taplin’s book. It is certainly not a work of scholarship; rather, it is a pejorative-filled polemic that in essence calls for regulation of Internet platforms to preserve certain business and economic models that are challenged by the new paradigm. Mr. Taplin comes from a background of involvement primarily in the music industry, and the realities of the digital paradigm have hit that industry very hard. But, as was the case with the film industry, music took an inordinate amount of time to adapt to the new paradigm and develop new business models. That is now happening with iTunes and Spotify, and the movie industry has recognised other models of online distribution such as Netflix, Hulu and other on-demand streaming services.

For Mr. Taplin these new business models are not enough. His argument is that artists should have an expectation that they should draw the same level of income that they enjoyed in the pre-digital age. And that ignores the fact that the whole paradigm has changed.

But Mr. Taplin directs most of his argument against the Internet giants – Facebook, Google, Amazon and the like and singles out their creators and financiers as members of a libertarian conspiracy dedicated to eliminating competition – although to conflate monopolism with libertarianism has its own problems.

Much of Mr. Taplin’s argument uses labels and generalisations which do not stand up to scrutiny. For example he frequently cites one of the philosophical foundations for the direction travelled by the Internet Giants as Ayn Rand whom he describes as a libertarian. In fact Ms. Rand’s philosophy was that of objectivism rather than libertarianism. Indeed, libertarianism has its own subsets. In using the term does Mr. Taplin refer to Thomas Jefferson’s flavour of libertarianism or that advocated by John Stuart Mill in his classic “On Liberty”?  It is difficult to say.

Another problem for Mr Taplin is his brief discussion of the right to be forgotten. He says (at page 98) “In Europe, Google continues to challenge the “right to be forgotten” – customers’ ability to eliminate false articles written about them from Google’s search engine.” (The emphasis is mine.)

The Google Spain case which gave rise to the right to be forgotten discussion was not a case about a false article or false information. In fact the article that Sr Costeja González wished to deindex was true. It was an advertisement regarding his financial affairs published in the La Vanguardia newspaper in Barcelona some years before. Deindexing was sought because the article was no longer relevant, given Sr Costeja González’s improved fortunes. To characterise Google’s resistance as a refusal to remove false information misunderstands the nuances of the right to be forgotten.

One thing is clear. Mr. Taplin wants regulation and the nature of the regulation that he seeks is considerable and of such a nature that it might stifle much of the stimulus to creativity that the Internet allows. I have already discussed some of these concepts in other posts but in summary there must be an understanding not of the content that is delivered via Internet platforms but rather of the underlying properties or affordances of digital technologies.

One of these is the fact that digital technologies cannot operate without copying. From the moment a user switches on a computer or a digital device to the moment that device is shut down, copying takes place. Quite simply, the device won’t work without copying. This is a challenge to concepts of intellectual property that developed after the first information technology – the printing press. The press allowed for mechanised copying and challenged the earlier manual copying processes that characterised the scribal paradigm of information communication.

Now we have a digital system that challenges the assumptions that content “owners” have had about control of their product. And the digital horse has bolted and a new paradigm is in place that has altered behaviours, attitudes, expectations and values surrounding information. And can regulation hold back the flood? One need only look at the file sharing provisions of the Copyright Act 1994 in New Zealand. These provisions were put in place, as the name suggests, to combat file sharing. They are now out of date and were little used when introduced. Technology has overtaken them. The provisions were used sporadically by the music industry and, despite extensive lobbying, not at all by the movie industry.

Two other affordances that underlie digital technologies are linked. The first is that of permissionless innovation which is interlinked with the second – continuing disruptive change.  Indeed it could be argued that permissionless innovation is what drives continuing disruptive change.

Permissionless innovation is the quality that allows entrepreneurs, developers and programmers to develop protocols using standards that are available and that have been provided by Internet developers to “bolt‑on” a new utility to the Internet.

Thus we see the rise of Tim Berners-Lee’s World Wide Web which, in the minds of many, represents the Internet as a whole.  Permissionless innovation enabled Shawn Fanning to develop Napster; Larry Page and Sergey Brin to develop Google; Mark Zuckerberg to develop Facebook; and Jack Dorsey, Evan Williams, Biz Stone and Noah Glass to develop Twitter, along with dozens of other utilities and business models that proliferate on the Internet.  There is no need to seek permission to develop these utilities.  Using the theory “if you build it, they will come”[1], new means of communicating information are made available on the Internet.  Some succeed but many fail[2].  No regulatory criteria need to be met other than that the particular utility complies with basic Internet standards.

What permissionless innovation does allow is a constantly developing system of communication tools that change in sophistication and the various levels of utility that they enable.  It is also important to recognize that permissionless innovation underlies changing means of content delivery.

So are these the aspects of the Internet and its associated platforms that are to be regulated? If the Internet Giants are to be reined in, the affordances of the Internet that give them sustenance must be denied them. But in doing that, the advantages of the Internet may well be lost. So the answer I would give to Mr Taplin is: be careful what you wish for.

This rather long introduction leads me to a consideration of Ms. Hill Cone’s slightly less detailed analysis, which nevertheless seizes upon Mr Taplin’s themes. Her list of “things to loathe” follows, along with some of my own observations.

1.) These companies (Apple, Alphabet, Facebook, Amazon) have simply been allowed to get unhealthily large and dominant with barely any checks or balances. The tech firms are more powerful than the telco AT&T ever was, yet regulators do nothing (AT&T was split up). In this country the Commerce Commission spent millions fighting to stop one firm, NZME (publisher of the New Zealand Herald) from merging with another Fairfax (Now called Stuff), a sideshow, while they appear stubbornly uninterested in tackling the real media dominance battle: how Facebook broke the media. I know we’re just little old New Zealand, but we still have sovereignty over our nation, surely? [Commerce Commission chairman] Mark Berry? Can’t you do something? The EU at least managed to fine Google a couple of lazy bill.

Taplin deals with this argument in an extensive analysis of the way in which antitrust law in the United States has become somewhat toothless. He attributes this to the teachings of Robert Bork and the Chicago School of law and economics.

Ms Hill Cone’s critique suggests that there is something wrong with large corporate conglomerates: that simply because something has become too big it must be bad and should therefore be regulated. Better to identify a particular mischief and then decide whether regulation is necessary – and I emphasise the word necessary.

2.) Some of these tech companies have got richer and richer exploiting the creative content of writers and artists who create things of real value and who can no longer earn a living from doing so.

This is straight out of the Taplin playbook which I have discussed above. I don’t think it has been suggested that artists are not earning. They are – perhaps not to the level that they used to, and perhaps not from sales or remuneration from Spotify tracks. But what Taplin points out – and this is how paradigmatic change drives behavioural change – is that artists are moving back to live performance to earn an income. Gone are the days when the artist could rely on recorded performances. So Ms Hill Cone’s critique may be partially correct as it applies to the earlier expectation of making an income.

3.) Mark Zuckerberg’s mea culpa, announced in the last few days that Facebook is going to focus on what he called “meaningful interaction”, is like a drug dealer offering a cut-down dose of its drug, hoping addicts won’t give up the drug completely. Even Zuckerberg’s former mentor, investor Roger McNamee said in the Guardian that all Zuckerberg is doing is deflecting criticism and leaving users “in peril.”

The pejorative analogy of the drug dealer ignores the fact that no one is required to sign up to Facebook. It is, after all, a choice. And in some respects, Zuckerberg’s announcement is an example of continuing disruptive change, which affects the Internet Giants as much as it does a startup.

4.) These companies have created technology and thrown it out there, without any sense of responsibility for its potential impact. It’s time for them to be held accountable. Last week Jana Partners, a Wall Street investment firm, wrote to Apple pushing it to look at its products’ health effects, especially on children. Even Facebook founder Sean Parker has recently admitted “God knows what [technology] is doing to our children’s brains.”

The target here is that of permissionless innovation. Upon what basis is it necessary to regulate permissionless innovation? Or does Ms Hill Cone wish to wrap up the Internet with regulatory red tape? As far as the effects of social media are concerned, I think what worries many digital immigrants, and indeed digital deniers, is that all social media does is enable communication – which is what people do. It is an alternative to face-to-face conversation, telephone, snail mail, email, smoke signals and so on. We need to accept that new technologies drive behavioural change.

5.) While it’s funny when the bong-sucking entrepreneur Erlich Bachman says in the HBO comedy Silicon Valley: “We’re walking in there with three foot c**ks covered in Elvis dust!” in reality, many of these firms have a repugnant, arrogant and ignorant culture. In the upcoming Vanity Fair story “Oh. My god, this is so f***ed up: inside Silicon Valley’s secretive orgiastic dark side” insiders talked about the creepy tech parties in which young women are exploited and harassed by tech guys who are still making up for getting bullied at school. (Just as bad, they use the revolting term “cuddle puddles”) The romantic image of scrappy, visionary nerds inventing the future in a garage has evolved into a culture of entitled frat boys behaving badly. “Too much swagger and not enough self-awareness,” as one investor said.

I somehow don’t think that the bad behaviours described here are limited to tech companies. I am sure that in her days as a business journalist (and a very good one too) Ms Hill Cone saw examples of the behaviours she condemns in any number of enterprises.

6.) These giant companies suck millions in profits out of our country but do little to participate as good corporate citizens. If they even have an office here at all, it is tiny. And don’t get started on how much tax they pay. A few years ago Google’s New Zealand operation consisted of three people who would fly back and forth from Sydney to manage sales over here. Apparently, Apple has opened a Wellington office and lured “several employees” from Weta Digital. But there is little transparency about how or where these companies do business or how to hold them accountable. There is no local number to call, there is no local door to knock on. And don’t hold your breath that our children might get good jobs working for any of these corporations.

This criticism goes to the tax problem and probably has underneath it a much larger debate about the purposes and morality of the tax system. The classic statement, since modified, appears in the case of Inland Revenue Commissioners v Duke of Westminster [1936] AC 1:

“Every man is entitled if he can to order his affairs so that the tax attaching under the appropriate Acts is less than it otherwise would be. If he succeeds in ordering them so as to secure this result, then, however unappreciative the Commissioners of Inland Revenue or his fellow tax-payers may be of his ingenuity, he cannot be compelled to pay an increased tax.”

There can be no doubt that the tax laws will be changed to close the existing loophole, so that the income derived by Google and Apple from their NZ activities will be subject to NZ tax. But Ms Hill Cone goes further and suggests that these companies should have a physical presence – a local door to knock on. This is the digital paradigm. It is no longer necessary to have a suite of offices in a CBD building paying rent.

7.) Mark Zuckerberg preaches that Facebook’s mission is to connect people. But Johann Hari’s new book Lost Connections: Uncovering the real causes of depression and the unexpected solutions, out this week, provides convincing evidence that in the digital age people are more lonely than ever. Hari argues the very companies which are trying to “fix” loneliness – Facebook, for example – are the ones which have made people feel more disconnected and depressed in the first place.

The book cited by Ms Hill Cone is by a journalist writing about depression. Apparently the diagnosis for his depression was that it arose from a chemical imbalance in his brain, whereas he discovered after investigating some of the social science evidence that depression and anxiety are caused by key problems with the way that we live. He uncovered nine causes of depression and anxiety and offers seven solutions to the problems. Much of the book is about the author and the problems that he had with the treatment he received. His book is as much a critique of the pharmaceutical industry as anything. It is described in the Guardian as a flawed study. Certainly it cannot be said that Hari’s argument is directed towards the suggestion that social media platforms are causative of depression.

8.) Is all this technology really making the world a better place? At this week’s CES (Consumer Electronics Show) in Las Vegas some of the innovations were positive but a lot of them were really quite dumb. Do you really need a robot that will fold your laundry or a suitcase that will follow you? Or a virtual reality headset that will make you feel like you are flying on a dinosaur? (Okay, maybe that one would be fun.)

Point taken. A lot of inventions are not going to make the world a better place. On the other hand many do. Think Thomas Alva Edison and then think about the Edsel motor vehicle. Ms Hill Cone accepts that some of the innovations were positive and the positive ones will probably survive the “Dragon’s Den” of funding rounds and the market.

These eight points were advanced by Ms Hill Cone as reasons why tech companies should get their comeuppance, as she puts it. It is difficult to decide whether the article is merely a rant or a restatement of some deeper concerns about Tech Giants. If it is the latter there should be more thorough analysis. But unless it is absolutely necessary and identifies and addresses a particular mischief, in my view regulation is not the answer.

But Ms Hill Cone is not alone. Later in January a significant beneficiary of Silicon Valley, Marc Benioff, compared the crisis of trust facing tech giants to the financial crisis of a decade ago. He suggested that Google, Facebook and other dominant firms pose a threat, and he made these comments at the World Economic Forum in Davos. He suggested that what is needed is more regulation, and his call was backed by Sir Martin Sorrell, who suggested that Apple, Facebook, Amazon, Google, Microsoft, and China’s Alibaba and Tencent had become too big. Sir Martin compared Amazon founder Jeff Bezos to a modern John D. Rockefeller.

One of the suggestions by Sir Martin was that Google and Facebook were media companies, echoing concerns that had been expressed by Rupert Murdoch. The argument is that as the Internet Giants get bigger, it is not a fair fight. And then, of course, there were the criticisms that the Internet Giants had become so big that they were unaware of the nefarious use of their services by those who would spread fake news.

George Soros added his voice to the calls for regulation in two pieces here and here. At the Davos forum he suggested that Facebook and Google have become “obstacles to innovation” and are a “menace” to society whose “days are numbered”. As mining companies exploited the physical environment, so social media companies exploited the social environment.

“This is particularly nefarious because social media companies influence how people think and behave without them even being aware of it. This has far-reaching adverse consequences on the functioning of democracy, particularly on the integrity of elections.”

In addition to skewing democracy, social media companies “deceive their users by manipulating their attention and directing it towards their own commercial purposes” and “deliberately engineer addiction to the services they provide”. The latter, he said, “can be very harmful, particularly for adolescents”.

He considers that the Internet Giants are unlikely to change without regulation. He compared social media companies to casinos, accusing them of deceiving users “by manipulating their attention” and “deliberately engineering addiction” to their services, arguing that they should be broken up. As a basis for this, following the model that was applied in the break-up of AT&T, Soros suggested that the fact that the Internet Giants are near-monopoly distributors makes them public utilities and should subject them to more stringent regulation, aimed at preserving competition, innovation and fair and open access.

Soros pointed to steps that had been taken in Europe where he described regulators as more farsighted than those in the US when it comes to social policies, referring to the work done by EU Competition Commissioner Margrethe Vestager, who hit Google with a 2.4 billion euro fine ($3 billion) in 2017 after the search giant was found in violation of antitrust rules.

Even more recently, in light of the indictments proffered by Special Counsel Mueller against a number of Russians who attempted to interfere with the US election of 2016 and who used social media to do so, a call has gone up to regulate social media so that this does not happen again. Of course that is a knee-jerk reaction that seems to forget the rights of freedom of expression enshrined in international convention and domestic legislation, and in the First Amendment to the US Constitution, which protects freedom of speech and under which political speech has been given the highest level of protection in subsequent cases. But nevertheless, the call goes out to regulate.

Facebook has responded to these concerns by reducing the news feeds that may be provided, and more recently in New Zealand Google has restructured its tax arrangements. Both of these steps represent a response by the Internet Giants to public concern – perhaps an indication of a willingness to self-regulate.

The urge to regulate is a strong one especially on the part of those who favour the status quo. There can be little doubt that ultimately what is sought is control of the digital environment. The content deliverers like Facebook and Google will be first, but thereafter the architecture – the delivery system that is the Internet that must be free and open – will increasingly come under a form of regulatory control that will have little to do with operational efficiency.

Of course, content is low-hanging fruit. Marshall McLuhan recognised that when he said that the “content” of a medium is like “the juicy piece of meat carried by the burglar to distract the watchdog of the mind.” I doubt very much that content is the real target. Nicolas Sarkozy called for regulation of the Internet in 2011, so the urge to regulate is not new by any means.

At the risk of being labelled a technological determinist, I suggest that trying to impose regulatory structures that preserve the status quo inhibits innovation and creativity as much if not more than the suggestion that such an outcome will happen if we leave the Internet Giants alone. Rather I suggest that we should recognise that the changes that are being wrought are paradigmatic. There will be a transformation of the way in which we use communication systems after the current disruption that is being experienced. That means that what comes out the other end may not be immediately recognisable to those of us whose values and predispositions were formed during the analog or pre-digital paradigm.

On the other hand, those who reject technological determinism still recognise the inevitability of change. Mark Kurlansky in his excellent book Paper: Paging Through History argues that technologies have arisen to meet societal needs. It is futile to denounce the technology itself. Rather you have to change the operation of the society for which the technology was created. For every new technology there are detractors, those who see the new invention destroying everything that is good in the old.

To suggest that regulation will preserve the present – if indeed it is worth preserving – is rear view mirror thinking at its worst. Rather we should be looking at the opportunities and advantages that the new paradigm presents. And this isn’t going to be done by wishing for a world that used to be, because that is what regulation will do – it will freeze the inevitable development of the new paradigm.


Memory Illusions and Cybernannies

A while back I read a couple of very interesting books. One was Dr Julia Shaw’s The Memory Illusion. Dr Shaw describes herself as a “memory hacker” and has a YouTube presence where she explains a number of the issues that arise in her book.

The other book was The Cyber Effect by Dr Mary Aiken who reminds us on a number of occasions in every chapter that she is a trained cyberpsychologist and cyberbehavioural specialist and who was a consultant for CSI-Cyber which, having watched a few episodes, I abandoned. Regrettably I don’t see that qualification as a recommendation, but that is a subjective view and I put it to one side.

Both books were fascinating. Julia Shaw’s book in my view should be required reading for lawyers and judges. We place a considerable amount of emphasis upon memory, assisted by the way in which a witness presents him or herself – what we call demeanour. Demeanour has been well and truly discredited by Robert Fisher QC in an article entitled “The Demeanour Fallacy” [2014] NZ Law Review 575. The issue has also been covered by Chris Gallavin in a piece entitled “Demeanour Evidence as the backbone of the adversarial process” Lawtalk Issue 834 14 March 2014 http://www.lawsociety.org.nz/lawtalk/issue-837/demeanour-evidence-as-the-backbone-of-the-adversarial-process

A careful reading of The Memory Illusion is rewarding although worrisome. The chapter on false memories, evidence and the way in which investigators may conclude that “where there is smoke there is fire” along with suggestive interviewing techniques is quite disturbing and horrifying at times.

But the book is more than that, although the chapter on false memories, particularly the discussions about memory retrieval techniques, was very interesting. The book examines the nature of memory and how memories develop and shift over time, often in a deceptive way. The book also emphasises how the power of suggestion can influence memory. What does this mean – that everyone is a liar to some degree? Of course not. A liar is a person who tells a falsehood knowing it to be false. Slippery memory, as Sir Edward Coke described it, means that what we are saying we believe to be true even although, objectively, it is not.

A skilful cross-examiner knows how to work on memory and highlight its fallibility. If the lawyer can get the witness in a criminal trial to acknowledge that he or she cannot be sure, the battle is pretty well won. But even the most skilful cross-examiner will benefit from a reading of The Memory Illusion. It will add a number of additional arrows to the forensic armoury. For me the book emphasises the risks of determining criminal liability on memory or recalled facts alone. A healthy amount of scepticism and a reluctance to take an account simply and uncritically at face value is a lesson I draw from the book.

The Cyber Effect is about how technology is changing human behaviour. Although Dr Aiken starts out by stating the advantages of the Internet and new communications technologies, I fear that within a few pages the problems start with the suggestion that cyberspace is an actual place. Although Dr Aiken answers unequivocally in the affirmative it clearly is not. I am not sure that it would be helpful to try and define cyberspace – it is many things to many people. The term was coined by William Gibson in his astonishingly insightful Neuromancer and in subsequent books Gibson imagines the network (I use the term generically) as a place. But it isn’t. The Internet is no more and no less than a transport system to which a number of platforms and applications have been bolted. Its purpose – communication. But it is communication plus interactivity and it is that upon which Aiken relies to support her argument. If that gives rise to a “place” then may I congratulate her imagination. The printing press – a form of mechanised writing that revolutionised intellectual activity in Early-modern Europe – didn’t create a new “place”. It enabled alternative means of communication. The Printing Press was the first Information Technology. And it was roundly criticised as well.

Although the book purports to explain how new technologies influence human behaviour it doesn’t really offer a convincing argument. I have often quoted the phrase attributed to McLuhan – we shape our tools and thereafter our tools shape us – and I was hoping for a rational expansion of that theory. It was not to be. Instead it was a collection of horror stories about how people and technology have had problems. And so we get stories of kids with technology, the problems of cyberbullying, the issues of on-line relationships, the misnamed Deep Web when she really means the Dark Web – all the familiar tales attributing all sorts of bizarre behaviours to technology – which is correct – and suggesting that this could become the norm.

What Dr Aiken fails to see is that by the time we recognise the problems with the technology it is too late. I assume that Dr Aiken is a Digital Immigrant, and she certainly espouses the cause that our established values are slipping away in the face of an unrelenting onslaught of cyber-bad stuff. But as I say, the changes have already taken place. By the end of the book she makes her position clear (although she misquotes the comments Robert Bolt attributed to Thomas More in A Man for All Seasons which the historical More would never have said). She is pro-social order in cyberspace, even if that means governance or regulation and she makes no apology for that.

Dr Aiken is free to hold her position and to advocate it and she argues her case well in her book. But it is all a bit unrelenting, all a bit tiresome these tales of Internet woe. It is clear that if Dr Aiken had her way the very qualities that distinguish the Digital Paradigm from what has gone before, including continuous disruptive and transformative change and permissionless innovation, will be hobbled and restricted in a Nanny Net.

For another review of The Cyber Effect see here

All Data is Created Equal


I must acknowledge the assistance I have received from an excellent unpublished dissertation by Reuel Baptista whose insights into and examinations of potential regulatory outcomes for Net Neutrality are worthy of consideration.

Net Neutrality is an emotive subject for many who are involved in the workings of the Internet and the provision of Internet services and access. It essentially asserts that the transport layer of the Internet – the means by which data moves across the Internet – should be non-discriminatory as to content and treat all data packets equally regardless of nature or origin.

It is a concept that has been developed primarily by Internet engineers, but since the Internet went public in the 1990s it has been the subject of challenge, primarily from commercial entities. There are examples, particularly from the US, of data discrimination and preferential treatment of data in certain circumstances.

The location of the concept of Net Neutrality in Internet legal theory has been generally considered as a governance issue, and so it is. Yet despite opportunities to review or address issues of Net Neutrality, in the Government’s recent consultation paper on the shape of the delivery of telecommunications services post-2019 no mention was made of Net Neutrality.

This state of affairs was also referred to by the Commerce Commission in its determination of the application for merger between Sky and Vodafone where it said at para 90:

Unlike in a number of other jurisdictions, New Zealand does not have any specific laws requiring TSPs to treat all internet traffic equally (known as ‘net neutrality’). This means that TSPs can discriminate between different types of traffic, either by:

90.1 not carrying certain types of content; or

90.2 limiting the speed at which certain content is carried (known as ‘throttling’), which impacts the quality of the content.

Despite this, for New Zealand providers Net Neutrality is not really an issue – at least not yet. This doesn’t mean that it won’t become an issue some way down the track, and the concern must be that by the time ISPs start discriminating between content and allocating preferential bandwidth, it will be too late to do anything about it.

But the reality is that there is more to Net Neutrality than treating data equally. It helps address the negative effects of discriminatory practices such as blocking, paid prioritisation and zero rating. Competition within the fixed line broadband and content markets, recognition of human rights and a country’s standing in the online economy are all affected by network neutrality. The tension is that there is a need to prevent big or monolithic ISPs from abusing their power but allow them to optimise the Internet for subsequent waves of innovation and efficiency. Other countries have had this debate and have introduced network neutrality into their telecommunications regulatory framework.

It is therefore interesting to read Juha Saarinen’s piece in this morning’s Herald where he suggests that net neutrality no longer matters. He locates his discussion against a background of developing content delivery systems which use geography to enhance speedy delivery. He points out that big services providers can afford to put data centres near customers and cache content there. Others use content delivery networks such as Akamai, Amazon Web Service, and Cloudflare that sit between the customer and the service provider. This, he says, violates Net Neutrality as it makes some sites seem to perform better than others.

With respect, I disagree. That argument is not based on the non-discriminatory treatment of data packets across the Internet but rather is based upon geography and location of data.

Saarinen goes on to dismiss Net Neutrality as an important idea a few years ago but today “we’re probably better off expending our energy elsewhere, like how to keep a diverse and competitive internet provider and Telco market alive in New Zealand.”

So does Saarinen suggest that we kick Net Neutrality to the kerb?

The reality is that in fact, as I have already suggested, it is an essential part of the regulatory and governance processes necessary to ensure a competitive internet provider and Telco market. Net neutrality is an integral part of that activity.

With the Telecommunications Act review in progress, this is the right time for New Zealand to formally adopt network neutrality as part of our telecommunications regulatory framework. As Susan Chalmers said at a law conference in 2015:

“The thicket of commercial agreements between content and applications providers and ISPs must not be allowed to develop to such an extent that there will be no political will left to clear a path for [network] neutrality.”

The rapid pace of change in the online world means there may not be another opportunity to discuss network neutrality regulation for some time.

Further Obscurity on the Internet – Collisions in the Digital Paradigm VIII


Introduction

Yet again a Court of law has made an order against Google, requiring it to deindex search results in a particular case. This example does not deal with the so-called “right to be forgotten” but with issues surrounding efforts by one company to infringe the intellectual property rights of another. But Google’s involvement in this case was not as a party to the action. They were not involved. No wrongdoing by them was alleged. All they did was provide index links via their automated processes. These links were to the infringers. An injunction was sought to compel de-indexing not just in the country where the case was heard but worldwide.

Equustek v Jack

Equustek v Jack came before the British Columbia Supreme Court in 2014. The circumstances of the case were these.

Equustek manufactured electronic networking devices for industrial use.  A company named Datalink created a competing product. Equustek claimed that one of its former employees conspired with Datalink, and the competing product used Equustek’s trade secrets and trademarks.

Equustek commenced proceedings against Datalink and a number of individual defendants.  The Datalink defendants did not play any part in the litigation and their defences were struck out but they continued to sell products from a number of websites.

Pending trial the Supreme Court made a number of interlocutory orders against the defendants including an order prohibiting the defendants from dealing with Equustek’s intellectual property. Even the issue of a criminal arrest warrant against one of the defendants did not stop the sale of the disputed products on the web from undisclosed locations.

So far the case is procedurally unremarkable. But what happened next is quite extraordinary. Equustek turned to Google and asked it to stop indexing the defendant’s websites worldwide. Google voluntarily removed 345 URLs from search results on Google.ca. But the problem remained. Almost all the infringing material was still available online. So Equustek took the matter a step further.

Remember, Google was not a party to the original suit. They had not been involved in the allegations of intellectual property infringement. Google’s response to Equustek’s approach was a co-operative one. They did not have to comply with Equustek’s request.

Equustek sought an order from the Court restraining Google from displaying any part of the websites with which it was concerned on any search results worldwide. The order was in the nature of an interlocutory injunction. The grounds for the application were that Google’s search engine facilitated the defendants’ ongoing breach of court orders.

Google argued that the court did not have jurisdiction over Google or should decline jurisdiction; in any event it should not issue the requested injunction. The Court observed that the application raised novel questions about the Court’s authority to make such an order against a global internet service provider.

The court held that it had jurisdiction over Google because Google, through its search engine and advertising business, carried on business in British Columbia. This in itself is not remarkable. It is consistent with the theory of connection with the forum jurisdiction and the concept of the grounding of activities in the forum state that gives rise to a Court’s jurisdictional competence. Cases abound arising from e-commerce and Internet based business activities.

The court considered that Google’s search engine websites were not passive information sites, but rather were interactive and displayed targeted advertisements. The court noted that this rationale might give every state in the world jurisdiction over Google’s search services, but noted that was a consequence of a multinational doing business on a global scale rather than from a flaw in the territorial competence analysis.

Again this is a reality of jurisdictional theory. In the Australian defamation case of Dow Jones v Gutnick it was observed that a cause of action might lie in every country where publication of the defamatory article had taken place. Mr Gutnick undertook to commence only in Australia because that is where his reputation lay and needed to be vindicated.

The court also refused to decline jurisdiction over Google, because Google failed to establish that another jurisdiction (California) was a more appropriate forum, and because the court could effectively enforce its order against Google outside Canada. This is what is called a forum non conveniens argument. It arises not in the context of whether a court has jurisdiction at all, but where jurisdiction may lie in two states (in this case British Columbia, Canada and California, United States of America) and the question is which court should properly hear the case.

The Court found that it had authority to grant an injunction with extra-territorial effect against a non-party resident in a foreign jurisdiction if it is just or convenient to do so.

The judge observed that new circumstances require adaptation of existing remedies – an aspect of the reality of e-commerce with its potential for abuse. This would be especially so if there was to be any credibility and integrity of Court orders.

The court then considered the test for ordering an injunction against a third party, modifying the standard test to require:

 (1) a good arguable case or fair question to be tried (which relates to the plaintiff’s claim against the defendant); and

 (2) a balancing of the interests (irreparable harm and convenience) of the plaintiff and the non-party to whom the injunction would apply.

The court identified a number of relevant considerations, including:

  1. whether the third party is somehow involved in the defendant’s wrongful acts;
  2. whether the order against the third party is the only practicable means to obtain the relief sought;
  3. whether the third party can be indemnified for the costs to which it will be exposed by the order;
  4. whether the interests of justice favour the granting of the order; and
  5. the degree to which the interests of persons other than the applicant and the non-party could be affected.

The court granted the injunction against Google, requiring it to block the defendants’ websites (identified in the court order) from Google’s search engine results worldwide, finding that Google was unwittingly facilitating the defendants’ ongoing breaches of court orders and that there was no other practical way to stop the defendants.

Google appealed to the British Columbia Court of Appeal, which upheld the order issued at first instance.

Equustek v Google

The Court of Appeal observed that it is unusual for courts to grant remedies against persons who are not parties to an action. The reasons for this are obvious – most civil claims are concerned with the vindication of a right, and the remedial focus will be on that right. Further, notions of justice demand that procedural protections be afforded to a person against whom a remedy is sought. The usual method of providing such protections is to require the claimant to bring an action against the respondent, giving the respondent the rights of a party.

However, this does not mean that the Courts are powerless to issue orders against non-parties. What is known as a Norwich Pharmacal order was cited as an example. There are, in fact, many types of orders that are routinely made against non-parties – subpoenas to witnesses, summonses for jury duty and garnishing orders are common examples. Many of these orders have a statutory basis or are purely procedural, but others derive from the inherent powers of the court or are more substantive in nature.

The Appeal Court observed that Canadian courts have jurisdiction to grant injunctions in cases where there is a justiciable right, even if the court is not, itself, the forum where the right will be determined. Canadian courts have also long recognized that injunctions aimed at maintaining order need not be directed solely to the parties to the litigation.

Google argued that the Court should not grant an injunction with extraterritorial effect. It submitted

As a matter of law, the court is not competent to regulate the activities of non-residents in foreign jurisdictions. This competence-limiting rule is dictated both by judicial pragmatism and considerations of comity. The pragmatic consideration is that the court should not make an order that it cannot enforce. The comity consideration is that the court refrains from purporting to direct the activities of persons in other jurisdictions and expects courts in other jurisdictions to reciprocate.

The Court did not accept that the case law establishes the broad proposition that the court is not competent to regulate the activities of non-residents in foreign jurisdictions.

The Court noted that the case exhibited a sufficient real and substantial connection to British Columbia to be properly within the jurisdiction of the Province’s courts.

From a comity perspective, the question must be whether, in taking jurisdiction over the matter, British Columbia courts have failed to pay due respect to the right of other courts or nations. The only comity concern that was articulated in this case was the concern that the order made by the trial judge could interfere with freedom of expression in other countries. For that reason, there had to be considerable caution in making orders that might place limits on expression in another country. The Court stated that where there is a realistic possibility that an order with extraterritorial effect may offend another state’s core values, the order should not be made.

In considering the issue of freedom of expression the Court noted that there was no realistic assertion that the judge’s order would offend the sensibilities of any other nation.

It was not suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offended the core values of any nation. The Court noted that the order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

The Court also noted that there were a number of cases where orders had been made with international implications. Cases such as APC v. Auchan Telecom, 11/60013, Judgment (28 November 2013) (Tribunal de Grande Instance de Paris); McKeogh v. Doe (Irish High Court, case no. 20121254P); Mosley v. Google, 11/07970, Judgment (6 November 2013) (Tribunal de Grande Instance de Paris); Max Mosley v. Google (see “Case Law, Hamburg District Court: Max Mosley v. Google Inc”, online: Inforrm’s Blog) and the ECJ decision in Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González, C-131/12 [2014], CURIA are well known to Internet lawyers.

Some of the cases involving extraterritorial implications have been controversial, such as La Ligue contre le racisme et l’antisémitisme c. La Société YAHOO!Inc., Tribunal de Grande Instance de Paris (May 22, 2000 and November 20, 2000), Court File No. 00/05308 and YAHOO! INC. v. La Ligue contre le racisme et l’antisémitisme, 169 F.Supp. 2d 1181 (N. Dist. Cal., 2001) rev’d 379 F.3d 1120 (9th Cir., 2004) and 433 F.3d 1199 (9th Cir. en banc, 2006)).

This extensive case law does indicate that courts in other countries do not see extraterritorial orders as being unnecessarily intrusive or contrary to the interests of comity.

Commentary

Google appealed to the Supreme Court of Canada and leave to appeal has been granted. Thus, there is one more act to this drama to be played out.

One issue that will need to be resolved is whether the order that was made can even be granted against a third party not involved in any wrongful activity. If so, the test to obtain such an order will need to be determined, as well as its geographic and temporal scope.

What about the issue of access to justice? In many areas of law, courts have expressed concern that effective remedies should not be limited to individuals or companies with deep pockets. The type of order granted against Google is certainly an effective additional remedy from a plaintiff’s perspective. But are only large corporates with a high profile expected to be the respondents in cases such as these, simply because of their size and prominence? Only Google seems to be a party in this case – no other search engine features.

Furthermore, what are the boundaries of a Canadian court’s territorial jurisdiction? May a Canadian court order a search engine company in California to prevent users in other countries from viewing entire websites? It is also expected that Google will raise constitutional issues, specifically whether blocking search results limits access to information or freedom of expression on the Internet.

But there is more to the case than this. It involves the ability to locate Internet based information that is facilitated by search engines. This case has the same impact on the Internet as Google Spain – its consequence is de-indexing of information.

The decision is unremarkable for its application of conflict of laws theory. But having said that, the issue of extraterritoriality is a complex one, and the fact that other jurisdictions and Courts have made extraterritorial orders that may or may not be enforceable does not mean that such an order is correct or justified in law. The anti-Nazi organisations LICRA and UEJF found this out when Yahoo, having had extraterritorial orders made against it in France, came to the US Courts seeking a declaration that they were unenforceable. Would Google be on less firm ground if it adopted a similar course of action against Equustek – assuming that a US Court has jurisdiction?

Throttling the Web

The development of the World Wide Web was, in the vision of Tim Berners-Lee, intended to make information available and to create a method of accessing and sharing stored information. Yet it had already become clear, even pre-Web, that locating information was a problem, and the solution lay in developing search engines as a means of locating a specific piece or pieces of information. Menu-based tools such as Gopher provided a form of solution in the pre-graphical, pre-Web environment, and a number of search engines such as Altavista, Lycos, FindWhat, GoTo, Excite, Infoseek, RankDex, WebCrawler, Yahoo, Hotbot, Inktomi and AskJeeves provided assistance in locating elusive content. However, the entry of Google into the marketplace, and the development of innovative search algorithms, meant that Google became the default source for locating information.

What must be remembered is that Google is a search and indexing engine. It does not store the source information, other than in cached form. Using some advanced mathematics, founders Larry Page and Sergey Brin developed a method for measuring the links across websites by ranking a website more highly when other sites linked to it. Putting it very simply, the algorithm measured the popularity of a webpage. Utilising the hypertext link of Berners-Lee, Google locates content and enables a user to access it.
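The link-counting idea described above can be sketched in a few lines of code. The following is a minimal, illustrative version of an iterative rank calculation in the spirit of Page and Brin’s approach – a page’s rank grows when other, well-ranked pages link to it. The three-page “web”, the damping factor and the function name are hypothetical teaching examples, not Google’s actual implementation:

```python
# A minimal sketch of the rank-by-inbound-links idea: each page starts with
# equal rank, and on every iteration passes a share of its rank along its
# outbound links. The damping factor models a reader occasionally jumping
# to a random page. All names and values here are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # a link passes on rank
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, so B ranks highest.
web = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # → B
```

In this toy graph, B accumulates rank from two linking pages while A receives none, which is precisely the “popularity” measure the text describes.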

As a lawyer/technologist, I see Equustek v Google in the same way as I saw Google Spain – as a clog on progress that may slow the development and promise of information systems that depend upon a reliable search facility to locate information on the greatest central source of information that the world has ever known. The propositions that underlie Google Spain and Equustek, and the application of law in this area, amount to a real and significant collision in the Digital Paradigm. Perhaps it is time for the Courts to understand that an automated indexing system that is completely content neutral and involves no human input into the way it identifies and indexes content should be seen as simply an intermediary and no more. Google is able to monetise its search engine, but to suggest that its search engine is not a passive information system – but rather is interactive and displays targeted advertisements – is, in my respectful view, insufficient justification in and of itself to require a de-indexing of search results.

Linking and the Law – Part 2 – A Diversion to TPMs

Linking and the Law

PART 2

7    The New Zealand position — Technological Protection Measures, Anti-circumvention and communication

7.1             Introduction

The Digital Millennium Copyright Act came into force in the US in October 1998 as a response to the 1996 WIPO Copyright Treaty. Updated copyright legislation, including provisions relating to anti-circumvention of copy protection, was enacted in the UK in 1988[54] and in New Zealand in 1994.[55] In this section I shall consider the provisions of the Copyright Act that deal with Technological Protection Measures or TPMs.[56] It will become clear as the discussion progresses that the New Zealand legislation as amended by the Copyright (New Technologies) Amendment Act 2008 addresses the issues that were raised in Reimerdes and Corley.

The discussion in this section is admittedly lengthy but necessary to understand the approach to TPMs and the way that the Legislature has attempted to address the problem. It is helpful within the context of linking because the decision in Reimerdes and Corley centred around providing access to TPM circumvention technology. A consideration of the New Zealand position, especially following the 2008 amendments to the Copyright Act, will show significant differences in approach to TPMs from that in the DMCA, which would mean that the approach to linking in Reimerdes and Corley need not necessarily be applicable in New Zealand.

7.2             The Former Section 226 of the Copyright Act 1994

The provisions of the former s 226 of the Copyright Act 1994 created a right in favour of a person issuing copies (effectively a publisher). That person had the same rights as a copyright owner, and the same remedies were available.

Subsection (2) of s 226 defined how a person “infringes” the new right. The elements of the prohibited activity were:

•   Making, selling, offering or exposing for sale or hire; or

•   Advertising for sale or hire;

•   Any device or means;

•   Specifically designed or adapted to circumvent the form of copy protection employed; or

•   Publishing information;

•   With the intention to enable or assist persons to circumvent that form of copy protection;

•   Knowing or having reason to believe that the devices, means, or information will be used to make infringing copies.

State of mind is significant. For the publishing of information there were two states of mind involved. First, there had to be an intention to enable or assist persons to circumvent copy protection — the specific intention. Secondly, it had to be proven that the publisher knew, or had reason to believe, that the information would be used to make infringing copies.

The prohibition on the distribution of circumvention devices involved proof of the same state of mind relating to the use of devices. There had to be knowledge (or reason to believe) that the device would be used to make infringing copies. In addition, the device had to be specifically designed or adapted to circumvent the copy protection employed. Knowledge would seem to follow from the specific design or adaptation of the device. One would hardly distribute a circumvention device specifically designed for that purpose if one did not know or have reason to believe that the device would be used for circumvention purposes.

As far as devices or means were concerned, it appeared that the use of those words extended not only to hardware devices that prevented copyright infringement taking place but also to software devices such as DeCSS.[57] As far as devices that have substantial non-infringing uses but incidentally include a circumvention device are concerned, the situation is a little more difficult. At present DVD and Blu-Ray players have a built-in device that decrypts the CSS copy protection system. Imagine a DVD player/recorder that could not only play back material, but could record from a DVD as well. The CSS decoding system would be present for legitimate and authorised playback purposes. Thus the machine would be specifically designed to circumvent copy protection. But such a use would be authorised. Then there is the recording use. For liability to follow there would have to be specific knowledge on the part of the distributor of such a device that it would be used to make infringing copies. The mere presence of a circumvention means or device is not enough. It must be accompanied by the requisite knowledge or reason to believe.

The provision of information about circumvention means or devices was limited by two state of mind requirements that, arguably, would mean that the publication of, for example, academic research regarding circumvention technologies would not be caught by the section if:

•    the intention to enable or assist circumvention were absent; and/or

•    there was an absence of knowledge or reason to believe that the publication would be used to make infringing copies.

Thus, when we consider the examples, the scope of the former section was somewhat narrower than it first appeared.

7.3              The 2008 Amendment

The 2008 Amendment of s 226 and following amendments have made a number of changes. The first is that definitions have been provided. The second is that the essence of the former s 226 is retained in s 226A.

The focus of the new s 226 continues to be on the link between circumvention and copyright infringement and on the making, sale and hire of devices or information rather than on the act of actual circumvention. Actual circumvention is not prohibited, but any unauthorised use of the material that is facilitated by circumvention continues to be an infringement of copyright.

The new amendments recognise that consumers should be able to make use of materials under the permitted acts, or view or execute a non-infringing copy of a work. This is consistent with New Zealand’s position on parallel importation of legitimate goods; for example, genuine DVDs from other jurisdictions. New provisions have also been introduced to enable the actual exercise of permitted acts where TPMs have been applied.

What the new TPM provisions do is two-fold — broadly they prohibit and criminalise.

There is a prohibition of commercial conduct that undermines the TPM by putting a circumvention device into circulation or providing a service, including the publication of information which relates to overriding TPM protection. Contravention has civil consequences — specifically, the issuer of the work protected by a TPM is protected as if the conduct were an infringement of copyright. The second leg is to make the prohibited conduct a criminal offence.[58]

There is a knowledge element for both the prohibition and the offence — the knowledge of the use to which the circumvention device or the service or published information will, or is likely to, be put.

There are, however, some limits on the prohibition where a circumvention device has a legitimate use.

7.3.1           The Definitions

There are three definitions which are applicable to ss 226A–226E. The first is a technological protection measure or TPM:[59]

TPM or technological protection measure

(a)    means any process, treatment, mechanism, device, or system that in the normal course of its operation prevents or inhibits the infringement of copyright in a TPM work; but

(b)   for the avoidance of doubt, does not include a process, treatment, mechanism, device, or system to the extent that, in the normal course of operation, it only controls any access to a work for non-infringing purposes (for example, it does not include a process, treatment, mechanism, device, or system to the extent that it controls geographic market segmentation by preventing the playback in New Zealand of a non-infringing copy of a work)

Significantly, the legislature differentiated between a TPM for the purposes of the prevention of infringement and one that relates to access to a work for non-infringing purposes. The example is given of the control of “geographic market segmentation”, which clearly relates to a region protection in games or DVDs. Thus, if a person legitimately acquired a DVD that was coded for region 1, the region coding device or process in the DVD player which would otherwise prevent the use of the DVD may be circumvented so that the non-infringing purpose of viewing the DVD could be carried out.

The second definition relates to a TPM circumvention device:[60]

TPM circumvention device means a device or means that—

(a)    is primarily designed, produced, or adapted for the purpose of enabling or facilitating the circumvention of a technological protection measure; and

(b)   has only limited commercially significant application except for its use in circumventing a technological protection measure

The primary purpose of the circumvention device must be to circumvent a TPM — bearing in mind that the TPM must prevent infringement rather than control access for non-infringing purposes — and, as well as being primarily designed, produced or adapted for that purpose, it must have limited commercially significant application other than its use in circumventing a TPM.

Both paras (a) and (b) are conjunctive; it may well be that a TPM circumvention device may have other commercially significant applications or, as the Americans put it, substantial non-infringing uses.

The third definition relates to a TPM work, which is defined as a copyright work that is protected by a TPM. A TPM work must be a copyright work, but it may well be that this does not prevent an entrepreneur from locking up a public domain work with a TPM if there is some significance in the way in which the work has been typographically arranged.

7.3.2      The Operative Sections

Section 226A sets out the prohibited conduct in relation to a TPM, stating:

226A Prohibited conduct in relation to technological protection measure

(1) A person (A) must not make, import, sell, distribute, let for hire, offer or expose for sale or hire, or advertise for sale or hire, a TPM circumvention device that applies to a technological protection measure if A knows or has reason to believe that it will, or is likely to, be used to infringe copyright in a TPM work.

(2) A person (A) must not provide a service to another person (B) if—

(a)    A intends the service to enable or assist B to circumvent a technological protection measure; and

(b)   A knows or has reason to believe that the service will, or is likely to, be used to infringe copyright in a TPM work.

(3) A person (A) must not publish information enabling or assisting another person to circumvent a technological protection measure if A intends that the information will be used to infringe copyright in a TPM work.

Section 226A provides a useful example of modern statutory drafting techniques by clarifying the behaviours of certain actors that the section addresses.

Section 226A(1) is identical in scope to the former s 226(1), with the exception that the definitions contained in the new s 226 impact upon the scope. Whereas the previous legislation referred to a form of copy protection, the definition of a TPM work, a TPM and a TPM circumvention device now govern.

Section 226A(2) relates to the publishing information limb of the former s 226, except that a new term (“service”) is used. This is undefined but clearly encompasses information.

Once again there are two limbs underlying the prohibition: the intention that the service enable or assists circumvention of TPM; and specific knowledge that the service will or is likely to be used to infringe copyright in a TPM work.

If the service is provided for the purposes of university research, it is difficult to imagine that limb (b) could be satisfied, and thus the prohibited conduct is not complete.

The use of the word “service” in s. 226A(2) is new. The earlier iteration used the words “device” or “means”. Service is a very wide concept, and although s. 226A(3) refers to the publication of information to enable or assist another person to circumvent a TPM, service extends the scope of s. 226A(1) and in essence addresses any form of assistance enabling circumvention of a TPM accompanied by knowledge or reason to believe that the assistance or service will be used to infringe copyright. There seems to be little doubt that a “service” could conceivably encompass a computer program or code.

Section 226A(3) relates specifically to the publication of information, and although the behaviour could be encompassed by a service, the legislature saw fit to make publication of information about TPM circumvention a discrete behaviour.

There is only one knowledge element in s 226A(3), as opposed to the two in s 226A(2): A, the publisher of the information, must intend that the information will be used to infringe copyright in a TPM work. Thus s 226A prohibits:

•    the making or distribution of a TPM circumvention device;

•    the provision of a service with the two limbs of intention to assist circumvention and knowledge that the service will be or likely to be used to circumvent; and

•    publication of information enabling circumvention if it is intended that information will be used to circumvent.

Unlike the original s. 226, which was restricted in its language to commercial activity (sells, lets for hire, offers or exposes for sale or hire, or advertises for sale or hire), s. 226A prohibits not only the making, selling, letting for hire, or offering or exposing for sale, but also the importing or distributing of a TPM circumvention device. These terms can encompass an individual who downloads a TPM circumvention device from an off-shore site, which was not the case under the earlier legislation. Distribution is also prohibited. Thus if one makes a TPM circumvention device available for download from a website, and uses a link to facilitate delivery, such an action could fall within the ambit of “distribution”.

However, the provision of information has a commercial aspect to it, for the provision of such information must be “in the course of business” and is therefore of a narrower scope than had those words been omitted.

Section 226B sets out the rights that accrue to the issuer of a TPM work. These rights are what Kirby J referred to as para copyright in Stevens v Kabushiki Kaisha Sony Computer Entertainment.[61] Essentially, the issue of a TPM work has the same rights against a person who contravenes s 226A as the copyright owner has in respect of infringement. The provisions of the Copyright Act relating to delivery up in civil or criminal proceedings is available to the issuer of a TPM work as are certain presumptions that are contained in ss 126–129 of the Copyright Act. The provisions of s 134 relating to disposing of infringing copies or objects applies as well with the necessary modifications.

Absent from the 1994 version of s 226 was the offence of contravening s 226A. Section 226C creates that offence:

226C Offence of contravening section 226A

(1) A person (A) commits an offence who, in the course of business, makes, imports, sells, distributes, lets for hire, offers or exposes for sale or hire, or advertises for sale or hire, a TPM circumvention device that applies to a technological protection measure if A knows that it will, or is likely to, be used to infringe copyright in a TPM work.

(2) A person (A) commits an offence who, in the course of business, provides a service to another person (B) if—

(a)    A intends the service to enable or assist B to circumvent a technological protection measure; and

(b)   A knows that the service will, or is likely to, be used to infringe copyright in a TPM work.

(3) A person (A) commits an offence who, in the course of business, publishes information enabling or assisting another person to circumvent a technological protection measure if A intends that the information will be used to infringe copyright in a TPM work.

(4) A person who commits an offence under this section is liable on conviction on indictment to a fine not exceeding $150,000 or a term of imprisonment not exceeding 5 years or both.

The first important thing to note is that subs (4) requires the conviction to be on indictment, so the matter must be dealt with in the jury jurisdiction and cannot be dealt with summarily, although the position may well be altered by the provisions of the Criminal Procedure Act 2011.

Section 226C mirrors the prohibitions in s 226A, but the critical matter for an offence is that there is a commercial element — “in the course of business”.

Similarly, the provision of the service in subs (2) of 226C must have a commercial element as must the publication of information in subs (3).

This then brings the criminalisation of para-copyright into line with the provisions of s 135 of the Copyright Act, which relates to piracy or commercial infringement. Clearly, s 226C contemplates that the offence should relate to commercial activity involving TPMs. In this way the rather wider prohibitions contained in s. 226A do not automatically lead to potential liability under s. 226C.

Section 226D clarifies the position relating to the scope of the rights of the issuer of a TPM work. The operative part states:

226D When rights of issuer of TPM work do not apply

(1) The rights that the issuer of a TPM work has under section 226B do not prevent or restrict the exercise of a permitted act.

(2) The rights that the issuer of a TPM work has under section 226B do not prevent or restrict the making, importation, sale, or letting for hire of a TPM circumvention device to enable—

(a)    a qualified person to exercise a permitted act under Part 3 using a TPM circumvention device on behalf of the user of a TPM work; or

(b)   a person referred to in section 226E(3) to undertake encryption research.

(3) In this section and in section 226E, qualified person means—

(a)    the librarian of a prescribed library; or

(b)   the archivist of an archive; or

(c)    an educational establishment; or

(d)   any other person specified by the Governor-General by Order in Council on the recommendation of the Minister.

(4) A qualified person must not be supplied with a TPM circumvention device on behalf of a user unless the qualified person has first made a declaration to the supplier in the prescribed form.

The issuer of a TPM work cannot prevent or restrict the exercise of a permitted act. Nor can the prohibition prevent or restrict the making, importation, sale or letting for hire of a TPM circumvention device to enable encryption research under s 226E(3), or to enable a qualified person to exercise a permitted act using a TPM circumvention device.

The legislation goes on to define “qualified person”, who, in this case, is required to make a declaration relating to certain matters.

On their own the provisions of s 226D may seem confusing, given that the provisions of s 226 and following do not prohibit the act of circumvention itself. Subsection (1) of s 226D makes it clear that circumvention may be permissible for the purposes of the exercise of a permitted act.

Section 226E takes the matter further.

226E User’s options if prevented from exercising permitted act by TPM

(1) Nothing in this Act prevents any person from using a TPM circumvention device to exercise a permitted act under Part 3.

(2) The user of a TPM work who wishes to exercise a permitted act under Part 3 but cannot practically do so because of a TPM may do either or both of the following:

(a)    apply to the copyright owner or the exclusive licensee for assistance enabling the user to exercise the permitted act:

(b)   engage a qualified person (see section 226D(3)) to exercise the permitted act on the user’s behalf using a TPM circumvention device, but only if the copyright owner or the exclusive licensee has refused the user’s request for assistance or has failed to respond to it within a reasonable time.

(3) Nothing in this Act prevents any person from using a TPM circumvention device to undertake encryption research if that person—

(a)    is either—

(i)     engaged in a course of study at an educational establishment in the field of encryption technology; or

(ii)    employed, trained, or experienced in the field of encryption technology; and

(b)   has either—

(i)     obtained permission from the copyright owner or exclusive licensee of the copyright to the use of a TPM circumvention device for the purpose of the research; or

(ii)    has taken, or will take, all reasonable steps to obtain that permission.

(4) A qualified person who exercises a permitted act on behalf of the user of a TPM work must not charge the user more than a sum consisting of the total of the cost of the provision of the service and a reasonable contribution to the qualified person’s general expenses.

Once again the section makes it clear that the act of circumvention to exercise a permitted act is not prohibited. Thus a person may use a circumvention device to copy a selection from a TPM work for the purposes of review, comment or inclusion (with attribution) in an academic work.

Subsection (2) glosses over that, however. If the user of a TPM work wishes to exercise a permitted act, he or she may use a TPM circumvention device to do so, but the subsection includes the words “but cannot practically do so because of a TPM”. It is unclear what this means. If a person’s access to a work to carry out a permitted act is prevented by a TPM, does subs (2) automatically apply? Or, if a circumvention device is available, may the user use that circumvention device to exercise the permitted act? Does subs (2) relate to the situation where there is no circumvention device available? Subsection (2), in providing certain options for the person who is stymied by a TPM, challenges the market failure theory of fair use.

A person wishing to do one of the permitted acts may apply to the copyright owner or licensee for assistance. The alternative is to engage a qualified person (see s 226D(3)) to exercise a permitted act on the user’s behalf using a circumvention device. But that option is available only if the copyright owner or exclusive licensee refuses the user’s request for assistance or fails to respond within a reasonable time.

A sensible interpretation of s 226E suggests that subs (2) must be followed if there is no readily available circumvention device enabling the user to exercise a permitted act.
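The conditional structure of s 226E(2) can be illustrated as a simple decision rule. The following is a minimal sketch only; the enum and function names are hypothetical and the statute’s open-textured terms (such as “a reasonable time”) are reduced here to a bare three-way classification for illustration.

```python
# Hypothetical sketch of the s 226E(2) decision flow: a user blocked by a
# TPM may first apply to the copyright owner (or exclusive licensee) for
# assistance, and may engage a qualified person under s 226E(2)(b) only if
# that request is refused or not answered within a reasonable time.

from enum import Enum, auto

class OwnerResponse(Enum):
    ASSISTED = auto()      # owner/licensee enabled the permitted act
    REFUSED = auto()       # owner refused the request for assistance
    NO_RESPONSE = auto()   # no reply within a reasonable time

def may_engage_qualified_person(response: OwnerResponse) -> bool:
    """Engaging a qualified person is available only after the owner has
    refused the request or failed to respond within a reasonable time."""
    return response in (OwnerResponse.REFUSED, OwnerResponse.NO_RESPONSE)

# If the owner assists, the permitted act proceeds without circumvention.
assert may_engage_qualified_person(OwnerResponse.ASSISTED) is False
assert may_engage_qualified_person(OwnerResponse.REFUSED) is True
assert may_engage_qualified_person(OwnerResponse.NO_RESPONSE) is True
```

The sketch makes the drafting point visible: s 226E(2)(b) is a fallback, not a free-standing entitlement, since it is gated on the owner’s refusal or silence.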

It is also important to note that s 226E makes a specific exception for the use of circumvention devices to undertake encryption research in certain circumstances.

7.4              Comment

The new provisions of s 226 and following are indeed helpful. The incorporation of definitions making it clear that TPMs are for the purposes of preventing infringement rather than controlling access is to be welcomed (although s 226E seems to introduce a somewhat unnecessary level of complexity).

Underlying the whole issue of para-copyright is the fact that, in reality, TPMs are a somewhat blunt instrument for the purposes of copyright protection, presenting an “all or nothing” level of protection. TPMs cannot discriminate between a permitted and a prohibited use. They are applied internationally, whereas copyright law is territorial. TPMs place in the hands of the copyright owner a control of a technological rather than a legal nature and, as already observed, provide a potential for market failure. Essentially, TPMs do not provide an absolute protection; rather they impose another layer of protection that sits on top of the balance of interests created by statute, and muddy the waters between what is and is not allowed. The various options relating to behaviour regarding TPMs contained in ss 226B, 226D and 226E suggest that certain behaviours may be permissible while others are not. Clearly, the legislature did not want to impose a prohibition on the act of circumvention, but the various alternatives given in ss 226D and 226E seem to suggest prohibition.

The legislation, in addressing the issue of circumvention of TPMs, restricts prohibited conduct to the means by which copyright protection (rather than access prevention) may be circumvented. It therefore makes it clear that the provision of means by which access controls may be circumvented is not within the scope of prohibited conduct. This means that one may provide services, information and programs that assist in circumventing access protections. In this way the legislation addresses its target – the copy right – rather than allowing the engraftment of another “para-copyright” – the “prevention of access” right. This is eminently justifiable. Region coding is a means by which copyright owners facilitate distribution of their products. The only issue is one of market segmentation and the release strategy that copyright owners may have in place. There is no reason, in terms of copyright, why a person who legitimately acquires content in one geographical area should be prohibited from accessing it in another.

However, unlike the New Zealand legislation, the DMCA prohibits the circumvention of access control systems, despite there being no copyright implications, and, to further complicate matters, criminalises such behaviour. It should be a matter of concern that, should international trade treaty negotiations result in the application of a DMCA-style anti-TPM circumvention regime, the results would be:

a) the imposition of a foreign marketing system that goes far beyond the release strategies chosen, say, for non-digital products such as movies and CDs;

b) the end of the parallel importing regime insofar as geographically segmented digital product is concerned; and

c) the criminalisation of behaviour that has nothing to do with copyright infringement and has no economic implications for content owners whatsoever.

Finally, it is still not clear whether licence terms or conditions of sale may override the way in which circumvention devices may be used in the limited situations provided in s 226. Unlike s 84, which statutorily negates such conditions, the matter is left open. The legislature has gone to considerable lengths to ensure the balance of interests that underlies copyright law is maintained. It seems unusual that those rights may be subverted by contractual arrangements.


[54]      The Copyright, Designs and Patents Act 1988.

[55]      The Copyright Act 1994 as amended by the Copyright (New Technologies) Amendment Act 2008.

[56]      Sections 226 – 226E.

[57]      CSS is the DVD content scrambling system that prohibits the copying of the files on a DVD movie disk. DeCSS is the program that circumvents the content scrambling system.

[58]      The offence of contravening s 226A is set out in s 226C.

[59]      See the new s 226.

[60]      See the new s 226.

[61]      Stevens v Kabushiki Kaisha Sony Computer Entertainment [2005] HCA 58, (2005) 224 CLR 193, (2005) 221 ALR 448.