Dangerous Speech – some legislative proposals

Preface

This piece was written in April 2019. I sat on it for a while and then published it on the Social Science Research Network. It has attracted some interest since it was posted and was recently listed on SSRN’s Top Ten download list for LSN: Criminal Offenses & Defenses. As at 21 January a copy had been downloaded 21 times and there had been 180 abstract views.

Of more interest is the fact that a colleague in the United States has used the paper as a teaching aid in his First Amendment course, for the case of Terminiello v City of Chicago 337 U.S. 1 (1949). Terminiello held that a “breach of peace” ordinance of the City of Chicago that banned speech which “stirs the public to anger, invites dispute, brings about a condition of unrest, or creates a disturbance” was unconstitutional under the First and Fourteenth Amendments to the United States Constitution.

My piece, which I have decided to publish on this blog, deals primarily with the position under New Zealand law. I had not come across Terminiello, but it is interesting to see that my piece comes largely to a similar conclusion. It is a real thrill that it has been found to be useful for teaching purposes.

Abstract

This paper considers steps that can be taken to legislate against hate speech.

The first issue is the term “hate speech” itself. In light of the proposals advanced, I argue that this emotive and largely meaningless term should be replaced with “dangerous speech”, which more adequately encapsulates the nature of the harm that the law should address.

The existing criminal provisions relating to what I call communications offences are outlined. Proposals are advanced for an addition to the Crimes Act to fill what appears to be a gap in the communications offences, an addition that should be available to both individuals and groups. A brief discussion then follows about section 61 of the Human Rights Act and section 22 of the Harmful Digital Communications Act. It is suggested that major changes to these pieces of legislation are unnecessary.

Communications offences inevitably involve a tension with the freedom of expression under the New Zealand Bill of Rights Act. The discussion demonstrates that the proposals advanced are a justifiable limitation on freedom of expression, but also emphasises that a diverse society must inevitably contain a diversity of opinion which should be freely expressed.

 Introduction

The Context

In the early afternoon of 15 March 2019 a gunman armed with semi-automatic military style weapons attacked two mosques in Christchurch where people had gathered to pray. There were 50 deaths. The alleged gunman was apprehended within about 30 minutes of the attacks. It was found that he had live streamed his actions via Facebook. The stream was viewed by a large number of Facebook members and was shared across Internet platforms.

It also transpired that the alleged gunman had sent a copy of his manifesto entitled “The Great Replacement: Towards a New Society” to a number of recipients using Internet based platforms. Copies of both the live stream and the manifesto have been deemed objectionable by the Chief Censor.[1]

In addition it appears that the alleged gunman participated in discussions on Internet platforms such as 4Chan and 8Chan, which are known for discussion threads advocating White Supremacy and Islamophobic tropes.

The Reaction

There can be no doubt that what was perpetrated in Christchurch amounted to a hate crime. What has followed has been an outpouring of concern primarily at the fact that the stream of the killings was distributed via Facebook and more widely via the Internet.

The response by Facebook has been less than satisfactory, although it would appear that, having developed their Livestream facility, they were then unable to monitor and control the traffic across it – a digital social media equivalent of Frankenstein’s creature.

However, the killings have focused attention on the wider issue of hate speech and the adequacy of the law to deal with this problem.

Whither “Hate” Speech

The problem with the term “hate speech” is that it is difficult, if not impossible, to define.

Any speech that advocates, incites and intends physical harm to another person must attract legal sanction. It is part of the duty of government to protect its citizens from physical harm.

In such a situation, it matters not whether the person against whom the speech is directed is a member of a group. All citizens, regardless of any specific identifying characteristics, are entitled to be protected from physical harm and from those who would advocate or incite it.

Certain speech may cause harm that is not physical. Such harm may be reputational, economic or psychological. The law provides a civil remedy for such harms.

At the other end of the spectrum – ignoring speech that is anodyne – is the speech that prompts the response “I am offended” – what has been described as the veto statement.[2] From an individual perspective this amounts to a perfectly valid statement of opinion. It may not address the particular argument or engage in any meaningful debate. If anything it is a statement of disengagement akin to “I don’t like what I am hearing.”

Veto Statements

The difficulty arises when such a veto statement claims offence to a group identity. Such groups could include the offended woman, the offended homosexual, the offended person of colour or some other categorization based on the characteristics of a particular group. The difficulty with such veto statements – characterizing a comment as “racist” is another form of veto of the argument – is that they legitimize the purely subjective act of taking offence, generally with negative consequences for others.

Should speech be limited, purely because it causes offence? There are many arguments against this proposition. That which protects people’s rights to say things I find objectionable or offensive is precisely what protects my right to object.  Do we want to live in a society that is so lacking in robustness that we are habitually ready to take offence? Do we want our children to be educated or socialized in this way? Do we desire our children to be treated as adults, or our adults to be treated as children? Should our role model be the thin-skinned individual who cries “I am offended” or those such as Mandela, Baldwin or Gandhi who share the theme that although something may be grossly offensive, it is beneath my dignity to take offence? Those who abuse me demean themselves.

It may well be that yet another veto statement is applied to the mix. What right does a white, privileged, middle-class old male – a member of a secure group – have to say this? It is my opinion that the marginalization of the “I’m offended” veto statement would at least open the door to proper debate and disagreement.

Furthermore, the subjective taking of offence based on group identity ignores the fact that we live in a diverse and cosmopolitan society. The “I’m offended” veto statement discourages diversity and, in particular, diversity of opinion. One of the strengths of our society is its diversity and multi-cultural nature. Within this societal structure are a large number of different opinions. For members of one group to shut down the opinions of another on the basis of mere offence is counter to the diverse society that we celebrate.

The term “hate speech” is itself a veto statement and often an opposing view is labelled as “hate speech”. The problem with this approach seems to be that the listener hates what has been said and therefore considers the proposition must be “hate speech”. This is arrant nonsense. The fact that we may find a proposition hateful to our moral or philosophical sense merely allows us to choose not to listen further. But it does not mean that because I find a point of view hateful that it should be shut down. As Justice Holmes said in US v Schwimmer[3] “if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.”

Our commitment to freedom of expression lies not in allowing others the freedom to say things with which we agree, but in allowing them the right to say things with which we absolutely disagree.

Finally, in considering the nature of the veto statement “I’m offended”, or the categorization of a comment as “hate speech”, where lies the harm? Is anybody hurt? The harm in fact comes in trying to shut down the debate with the use of the veto statement.

Aspects of “Harm”

However, recent thinking has had a tendency to extend the concept of harm suffered by individuals. It is accepted that the law should target physical harm, but should it protect an individual from any sort of harm? Catharine MacKinnon has formulated a view, based on the work of J.L. Austin, that many words or sentiments are essentially indistinguishable from deeds and that, therefore, sexist or misogynistic language should be regarded as a form of violence.[4] This form of assaultive speech can be extended to be available to any group based on distinguishing characteristics or identity.

The emphasis is upon the subjectivity of the person offended. What offence there may be is in the sphere of feelings. It may follow from this that if I do not feel I have been offended then I have not been offended. If we reverse the proposition, only the individual may judge whether or not they have been offended. I would suggest that this element of subjectivity is not the proper concern of the law.

The problem is that such an extension of potentially harmful speech becomes equated with “hate speech” and virtually encompasses any form of critical dialogue. To conflate offence with actual harm means that any sort of dialogue may be impossible.

To commit an offence of violence is to perform an action with objective, observable detrimental physical consequences, the seriousness of which requires the intervention of the law. To give offence is to perform an action – the making of a statement – the seriousness of which is in part dependent upon another person’s interpretation of it.

An example may be given by looking at Holocaust denial. Those who deny the Holocaust may insult the Jewish people. That may compound the injury that was caused by the event itself. But the insult is not identical to the injury. To suggest otherwise is to invite censorship. The denial of the Holocaust is patently absurd. But it needs to be debated as it was when Deborah Lipstadt challenged the assertions of David Irving. In an action brought by Irving for defamation his claims of Holocaust denial were examined and ultimately ridiculed.[5]

Jeremy Waldron is an advocate for limits on speech. He argues that since the aim of “hate speech” is to compromise the dignity of those at whom it is targeted it should be subject to restrictions.[6] Waldron argues that public order means more than an absence of violence but includes the peaceful order of civil society and a dignitary order of ordinary people interacting with one another in ordinary ways based upon an arms-length respect.

So what does Waldron mean by dignity? He relies upon the case of Beauharnais v Illinois[7] where the US Supreme Court upheld the constitutionality of a law prohibiting any material that portrayed “depravity, criminality, unchastity or lack of virtue of a class of citizens, of any race, colour, creed or religion.” On this basis Waldron suggests that those who attack the basic social standing and reputation of a group should be deemed to have trespassed upon that group’s dignity and be subject to prosecution. Laws against “hate speech”, he argues, should be aimed at preventing attacks on dignity and not merely offensive viewpoints. Using this approach I could say that Christianity is an evil religion but I could not say Christians are evil people.

The problem with Waldron’s “identity” approach is that the dignity of the collective is put before the dignity of its individual members. This raises the difficulty of what may be called “groupthink”. If I think of myself primarily as a member of a group I have defined my identity by my affiliation rather than by myself. This group affiliation suggests a certain fatalism: that possibilities are exhausted, perhaps from birth, and that one cannot change. This runs directly against Martin Luther King’s famous statement in which he rejected identity based on race and preferred an individual assessment.

“I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.”

The problem with the proposition that the state should protect its citizens against what Waldron calls “group defamation” is that it runs the risk of its citizens becoming infantilised – that in fact such an approach undermines their individual dignity by assuming that they cannot answer for themselves.

Rather than encouraging people to be thin-skinned, what is required in a world of increasingly intimate diversity is to learn how to be more thick-skinned and to recognize and celebrate the difference that lies in diversity. As Ronald Dworkin put it, no one has a right not to be offended and in fact we should not take offence too readily. In a free society I may be free to feel offended but should not use that offence to interfere with the freedoms of another.

Dangerous Speech

It will be by now apparent that my view is that “hate speech” is a term that should be avoided, although I accept that it is part of the lexicon, whether we like it or not. Perhaps it might be proper to focus upon the type of speech that society should consider to be unacceptable and that warrants the interference of law.

Any interference must be based on reasonableness and demonstrable justification, given that the right of freedom of expression under the Bill of Rights Act is the subject of interference. To warrant such interference I suggest that rather than use the term “hate speech” the threshold for the interference of the law could be termed “dangerous speech” – speech that presents a danger to an individual or group of individuals.

The intentional advocacy or inciting of physical harm may be classified as “dangerous speech” and justifies the intervention of the law. It is non-specific and available both to individuals and the groups identified in the Human Rights Act. In certain circumstances – where there is incitement to or advocacy of actual physical harm, the intervention of the criminal law is justified.

The law also deals with psychological harm of a special type – serious emotional distress. That is a test in the Harmful Digital Communications Act (HDCA). That legislation applies only to online speech. That may be a lesser form of “dangerous speech” but within the context of the provisions of section 22 HDCA such interference is justified. The elements of intention, actual serious emotional distress and the mixed subjective/objective test provide safeguards that could be considered to be a proportionate interference with the freedom of expression and would harmonise the remedies presently available for online speech with those in the physical world.

There are a number of other provisions in the law that deal with forms of speech or communication harms. Some of these warrant discussion because they demonstrate the proper themes that the law should address.

Existing Communications Offences – a summary

The law has been ambivalent towards what could be called speech crimes. Earlier this year the crime of blasphemous libel was removed from the statute book. Sedition and offences similar to it were removed in 2008. Criminal libel was removed as long ago as 1993.

The Crimes Act 1961

At the same time the law has recognized that it must turn its face against those who would threaten to commit offences. Thus section 306 criminalises threatening to kill or do grievous bodily harm to any person, or sending or causing to be received a letter or writing threatening to kill or cause grievous bodily harm. The offence requires knowledge of the contents of the communication.

Section 307 criminalises a letter or writing threatening to destroy or damage any property or to injure any animal, where there is knowledge of the contents of the communication and it is done without lawful justification or excuse and without claim of right.

It will be noted that the type of communication in section 306 may be oral or written but for a threat to damage property the threat must be in writing.

Section 307A is a complicated section.[8] It was added to the Act in 2003 and was part of a number of measures enacted to deal with terrorism after the September 11 2001 tragedy. It has received attention in one case since its enactment – that of Police v Joseph.[9]

Joseph was charged with a breach of s 307A(1)(b) of the Crimes Act 1961 in that he, without lawful justification or reasonable excuse and intending to cause a significant disruption to something that forms part of an infrastructure facility in New Zealand namely New Zealand Government buildings, did communicate information that he believed to be about an act namely causing explosions likely to cause major property damage.

Mr. Joseph, a secondary school student at the time, created a video clip that lasted a little over three minutes. He used his laptop and sent messages of threats to the New Zealand Government accompanied by some images that linked the language with terrorism, such as pictures of the aerial attack on the World Trade Centre and images of Osama Bin Laden. The message:[10]

  • threatened a terror attack on the New Zealand Government and New Zealand Government buildings.
  • claimed that large amounts of explosives had been placed in hidden locations on all buildings.
  • warned that New Zealand Government websites would be taken down.
  • threatened the hacking of New Zealand’s media websites.
  • threatened to disclose all Government secrets that had not been released to Wikileaks or the public.
  • warned that obstruction would lead to harm.

The clip demanded that the New Zealand Government repeal or refrain from passing an amendment to the Copyright Act 1994. It was posted on 6 September 2010 and a deadline was set for 11 September 2010. The clip was attributed to the hacktivist group known as Anonymous.

The clip was posted to YouTube. It was not available to the public by means of a search. It was unlisted and could only be located by a person who was aware of the link to the particular clip.

The clip came to the attention of the Government Communications Security Bureau (GCSB) on 7 September 2010 who passed the information on to the Police Cybercrime Unit to commence an investigation. An initial communication from the GCSB on the morning of 7 September postulated that the clip could be a “crackpot random threat” and confirmed that its communication was “completely outside the Anonymous MO”.[11]

The site was quickly disabled and Mr. Joseph was spoken to by the Police. He made full admissions of his involvement.

The real issue at the trial was one of intent. The intention had to be a specific one. The Judge found that the intention of the defendant was to have his message seen and observed on the Internet and that, although his behaviour in uploading the clip to YouTube in an Internet café and using an alias could be seen as pointing to an awareness of unlawful conduct, it did not point to proof of the intention to cause disruption of the level anticipated by the statute. It transpired that the defendant was aware that the clip would probably be seen by the authorities and also that he expected that it would be “taken down”.

The offence prescribed in section 308 does involve communication as well as active behaviour. It criminalises the breaking or damaging, or the threatening to break or damage, of any dwelling with a specific intention – to intimidate or to annoy. Annoyance is a relatively low-level reaction to the behaviour. A specific behaviour – the discharging of firearms that alarms or intends to alarm a person in a dwelling house – again with the intention to intimidate or annoy – is provided for in section 308(2).

The Summary Offences Act

The Summary Offences Act contains the offence of intimidation in section 21. Intimidation may be by words or behaviour. The “communication” aspect of intimidation is provided in section 21(1) which states:

Every person commits an offence who, with intent to frighten or intimidate any other person, or knowing that his or her conduct is likely to cause that other person reasonably to be frightened or intimidated,—

  • threatens to injure that other person or any member of his or her family, or to damage any of that person’s property;

Thus, there must be a specific intention – to frighten or intimidate – together with a communicative element – the threat to injure the target or a member of his or her family, or damage property.

In some respects section 21 represents a conflation of elements of sections 307 and 308 of the Crimes Act, together with a lesser harm threatened – that of injury – than appears in section 306 of that Act.

However, there is an additional offence that cannot be overlooked in this discussion: offensive behaviour or language, provided for in section 4 of the Summary Offences Act.

The language of the section is as follows:

(1) Every person is liable to a fine not exceeding $1,000 who,—

(a) in or within view of any public place, behaves in an offensive or disorderly manner; or

(b) in any public place, addresses any words to any person intending to threaten, alarm, insult, or offend that person; or

(c) in or within hearing of a public place,—

(i)  uses any threatening or insulting words and is reckless whether any person is alarmed or insulted by those words; or

(ii) addresses any indecent or obscene words to any person.

(2) Every person is liable to a fine not exceeding $500 who, in or within hearing of any public place, uses any indecent or obscene words.

(3) In determining for the purposes of a prosecution under this section whether any words were indecent or obscene, the court shall have regard to all the circumstances pertaining at the material time, including whether the defendant had reasonable grounds for believing that the person to whom the words were addressed, or any person by whom they might be overheard, would not be offended.

(4) It is a defence in a prosecution under subsection (2) if the defendant proves that he had reasonable grounds for believing that his words would not be overheard.

In some respects the consequences of the speech suffered by the auditor (for the essence of the offence relies upon oral communication) resemble those provided in section 61 of the Human Rights Act.

Section 4 was considered by the Supreme Court in the case of Morse v Police.[12] Valerie Morse was convicted in the District Court of behaving in an offensive manner in a public place, after setting fire to the New Zealand flag at the Anzac Day dawn service in Wellington in 2007.

In the District Court, High Court and Court of Appeal, offensive behaviour was held to mean behaviour capable of wounding feelings or arousing real anger, resentment, disgust or outrage in the mind of a reasonable person of the kind actually subjected to it in the circumstances. A tendency to disrupt public order was not required to constitute behaviour that was offensive. Notwithstanding the freedom of expression guaranteed by NZBORA, the behaviour was held to be offensive within the context of the Anzac observance.

The Supreme Court held that offensive behaviour must be behaviour which gives rise to a disturbance of public order. Although the Judges agreed that disturbance of public order is a necessary element of offensive behaviour under s 4(1)(a), they differed as to the meaning of “offensive” behaviour. The majority considered that offensive behaviour must be capable of wounding feelings or arousing real anger, resentment, disgust or outrage, objectively assessed, provided that it is to an extent which impacts on public order and is more than those subjected to it should have to tolerate. Furthermore, it will be seen that a mixed subjective/objective test is present in that the anger, resentment, disgust or outrage must be measured objectively – how would a reasonable person in this situation respond?

It is important to note that in addition to the orality or behavioural quality of the communication – Anderson J referred to it as behavioural expression[13] – it must take place in or within view of a public place. It falls within that part of the Summary Offences Act that is concerned with public order and conduct in public places. Finally, offensive behaviour is behaviour that does more than merely create offence.

Observations on Communications Offences

In some respects these various offences occupy points on a spectrum. Interestingly, the offence of offensive behaviour has the greatest implications for freedom of expression or expressive behaviour, in that the test incorporates a subjective element on the part of the observer. But it also carries the lightest penalty and, as a summary offence, can be seen to be the least serious on the spectrum. The section could be applied in the case of oral or behavioural expression against individuals or groups based on colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as long as the tests in Morse are met.

At the other end of the spectrum is section 306, dealing with threats to kill or cause grievous bodily harm, which carries with it a maximum sentence of 7 years imprisonment. This section is applicable to all persons irrespective of colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as are sections 307 and 308, section 21 of the Summary Offences Act and section 22 of the Harmful Digital Communications Act, which could all occupy intermediate points on the spectrum based on the elements of the offence and the consequences that may attend upon a conviction.

There are some common themes to sections 306, 307, 308 of the Crimes Act and section 21 of the Summary Offences Act.

First, there is the element of fear that may be caused by the behaviour. Even though intimidation is not specifically an element of the offences under sections 306 and 307, there is a fear that the threat may be carried out.

Secondly, there is a specific consequence prescribed – grievous bodily harm or damage to or destruction of property.

Thirdly, there is the element of communication or communicative behaviour that has the effect of “sending a message”.

These themes assist in the formulation of a speech-based offence that is a justifiable limitation on free speech, that recognizes that there should be some objectively measurable and identifiable harm that flows from the speech, but that does not stifle robust debate in a free and democratic society.

A Possible Solution

There is a change that could be made to the law which would address what appears to be something of a gulf between the type of harm contemplated by section 306 and lesser, yet still significant, harms.

I propose that the following language could cover the advocacy or intentional incitement of actual physical injury against individuals or groups. Injury is a lesser physical harm than grievous bodily harm and fills a gap between serious emotional distress present in the HDCA and the harm contemplated by section 306.

The language of the proposal is technology neutral. It could cover the use of words or communication either orally, in writing, electronically or otherwise. Although I dislike the use of the words “for the avoidance of doubt” in legislation for they imply a deficiency of clarity of language in the first place, there could be a definition of words or communication to include the use of electronic media.

The language of the proposal is as follows:

It is an offence to use words or communication that advocates or intends to incite actual physical injury against an individual or group of individuals based upon, in the case of a group, identifiable particular characteristics of that group.

This proposal would achieve a number of objectives. It would capture speech or communications that cause or threaten to cause harm of a lesser nature than the grievous bodily harm stated in section 306.

The proposal is based upon ascertaining an identifiable harm caused by the speech or communicative act. This enables the nature of the speech to be crystallised in an objective manner rather than the unclear, imprecise and potentially inconsistent use of the umbrella term “hate speech.”

The proposal would cover speech, words or communication across all media. It would establish a common threshold for words or communication beyond which an offence would be committed.

The proposal would cover any form of communicative act which was the term used by Anderson J in Morse and which the word “expression” used in section 14 of NZBORA encompasses.

The tension between freedom of expression and the limitations that may be imposed by law is acknowledged. It would probably need to be stated, although it should not be necessary, that in applying the provisions of the section the Court would have to have regard to the provisions of the New Zealand Bill of Rights Act 1990.

Other Legislative Initiatives

The Human Rights Act

There has been consideration of expanding other legislative avenues to address the problem of “dangerous” speech. The first avenue lies in the Human Rights Act which prohibits the incitement of disharmony on the basis of race, ethnicity, colour or national origins. One of the recent criticisms of the legislation is that it does not apply to incitement for reasons of religion, gender, disability or sexual orientation.[14]

Before considering whether such changes need to be made – a different consideration to whether they should be made – it is important to understand how the Human Rights Act works in practice. The Act prohibits a number of discriminatory practices in relation to various activities and services.[15] It also prohibits indirect discrimination which is an effects based form of activity.[16] Victimisation or less favourable treatment based on making certain disclosures is prohibited.[17] Discrimination in advertising along with provisions dealing with sexual or racial harassment are the subject of provisions.[18]

The existing provisions relating to racial disharmony as a form of discrimination and racial harassment are contained in sections 61 and 63 of the Act.[19]

There are two tests under section 61. The first is an examination of the content of the communication: is it threatening, abusive or insulting? If that has been established, the next test is to consider whether it is:

  1. likely to excite hostility against, or
  2. likely to bring into contempt,

any group of persons either in or coming to New Zealand on the ground of colour, race or ethnic or national origins.

These provisions could well apply to “dangerous speech”. Is it necessary, therefore, to extend the existing categories in section 61 to include religion, gender, disability or sexual orientation?

Religion

Clearly, if one were to add religion, threatening, abusive or insulting language about adherents of the Islamic faith would fall within the first limb of the section 61(1) test. But is it necessary that religion be added? And should this be simply because a religious group was targeted?

The difficulty with including threatening, abusive or insulting language against groups based upon religion is that not only would Islamophobic “hate speech” be caught, but so too would the anti-Christian, anti-West, anti-“Crusader” rhetoric of radical Islamic jihadi groups. Would the recent remarks by Winston Peters condemning the implementation of strict sharia law in Brunei that would allow the stoning of homosexuals and adulterers be considered speech that insults members of a religion?[20]

A further difficulty with religious-based speech is that doctrinal differences can lead to differences of opinion that are strongly voiced. Often doctrinal heresy will be identified as having certain consequences in the afterlife. Doctrinal disputes, often expressed in strong terms, have been a characteristic of religious discourse for centuries. Indeed, the history of the development of the freedom of expression and the freedom of the press often lies in the context of religious debate and dissent.

It may well be that adding a category of religion or religious groups will have unintended consequences, stifling or chilling debate about religious belief.

An example of the difficulty that may arise with restrictions on religious speech may be demonstrated by the statement “God is dead.” This relatively innocuous statement may be insulting or abusive to members of theist groups who would find a fundamental aspect of their belief system challenged. For some groups such a statement may be an invitation to violence against the speaker. Yet the same statement could be insulting or abusive to atheists as well simply for the reason that for God to be dead presupposes the existence of God which challenges a fundamental aspect of atheist belief.

This example illustrates the danger of placing religious discourse into the unlawful categories of discrimination.

If it were to be determined that religious groups should be added to those covered by section 61, stronger wording relating to the consequences of speech should apply to such groups. Instead of merely “exciting hostility against” or “bringing into contempt” based upon religious differences, perhaps the wording should be “advocating and encouraging physical violence against”.

This would have the effect of being a much stronger test than exists at present under section 61 and recognizes the importance of religious speech and doctrinal dispute.

Gender, Disability or Sexual Orientation

The Human Rights Act already has provisions relating to services-based discrimination on these additional grounds. The question is whether or not there is any demonstrated need to extend the categories protected under section 61 to these groups.

Under the current section 61 test, any threatening, abusive or insulting language directed towards or based upon gender, disability or sexual orientation could qualify as “hate speech” if the speech was likely to excite hostility against or bring into contempt a group of persons. The difficulty lies not so much with threatening language, which is generally clear and easy to determine, but with language which may be abusive or insulting.

Given the sensitivities that many have and the ease with which many are “offended” it could well be that a softer and less robust approach may be taken to what constitutes abusive or insulting language.

For this reason the test surrounding the effect of such speech needs to be abundantly clear. If the categories protected by section 61 are to be extended there must be a clear causative nexus between the speech and the exciting of hostility or the bringing into contempt. Alternatively, the test could be strengthened as suggested above, replacing the test of exciting hostility or bringing into contempt with “advocating and encouraging physical violence against”.

It should be observed that section 61 covers groups that fall within the protected categories. Individuals within those groups have remedies available to them under the provisions of the Harmful Digital Communications Act 2015.

The Harmful Digital Communications Act 2015

The first observation that must be made is that the Harmful Digital Communications Act 2015 (HDCA) is an example of Internet Exceptionalism in that it deals only with speech communicated via electronic means. It does not cover speech that may take place in a physical public place, by a paper pamphlet or other form of non-electronic communication.

The justification for such exceptionalism was considered by the Law Commission in the Ministerial Briefing Paper.[21] It was premised upon the fact that digital information is pervasive, its communication is not time limited and can take place at any time – thus extending the reach of the cyber-bully – and it is often shared among groups with consequent impact upon relationships. These are some of the properties of digital communications systems to which I have made reference elsewhere.[22]

A second important feature of the HDCA is that the remedies set out in the legislation are not available to groups. They are available only to individuals. Individuals are defined as “natural persons” and applications for civil remedies can only be made by an “affected individual” who alleges that he or she has suffered or will suffer harm as a result of a digital communication.[23] Under section 22 – the offence section – the victim of an offence is the individual who is the target of a posted digital communication.[24]

The HDCA provides remedies for harmful digital communications. A harmful digital communication is one which

  1. Is a digital communication communicated electronically and includes any text message, writing, photograph, picture, recording, or other matter[25]
  2. Causes harm – that is, serious emotional distress.

In addition there are ten communications principles[26]. Section 6(2) of the Act requires the Court to take these principles into account in performing functions or exercising powers under the Act.

For the purposes of a discussion about “dangerous speech” principles 2, 3, 8 and 10 are relevant. Principle 10 extends the categories present in section 61 of the Human Rights Act to include those discussed above.

The reason for the difference is that the consequences of a harmful digital communication are more of an individual and personal nature. Harm or serious emotional distress must be caused. This may warrant an application for an order pursuant to section 19 of the Act – what may be described as a civil enforcement order. A precondition to an application for any of the orders pursuant to section 19 is that the matter must be considered by the Approved Agency – presently Netsafe.[27] If Netsafe is unable to resolve the matter, then it is open to the affected individual to apply to the District Court.

The orders that are available are not punitive but remedial in nature. They include an order that the communication be taken down or access to it be disabled; that there be an opportunity for a reply or for an apology; that there be a form of restraining order so that the defendant is prohibited from re-posting the material or encouraging others to do so.

In addition orders may be made against online content hosts requiring them to take material down along with the disclosure of the details and particulars of a subscriber who may have posted a harmful digital communication. Internet Service Providers (described in the legislation as IPAPs) may be required to provide details of an anonymous subscriber to the Court.

It should be noted that the element of intending harm need not be present on the part of the person posting the electronic communication. In such a situation the material is measured against the communications principles along with evidence that the communication has caused serious emotional distress.

Section 22 – Causing harm by posting a digital communication

The issue of intentional causation of harm is covered by section 22 of the Act. A mixed subjective-objective test is required for an assessment of content. The elements necessary for an offence under section 22 HDCA are as follows:

A person must post a digital communication with a specific intention – that it cause harm to a victim;

It must be proven that the posting of the communication would cause harm to an ordinary reasonable person in the position of the victim;

Finally, the communication must cause harm to the victim.

Harm is defined as serious emotional distress. In addition, the Court may take a number of factors into account in determining whether a post may cause harm:

  1. the extremity of the language used:
  2. the age and characteristics of the victim:
  3. whether the digital communication was anonymous:
  4. whether the digital communication was repeated:
  5. the extent of circulation of the digital communication:
  6. whether the digital communication is true or false:
  7. the context in which the digital communication appeared.

The requirement that harm be intended as well as caused has been the subject of some criticism. If there has been an intention to cause harm, is it necessary that there be proof that harm was caused? Similarly, surely it is enough that harm was caused even if it were not intended?

As to the first proposition it must be remembered that section 22 criminalises a form of expression. The Law Commission was particularly concerned that the bar should be set high, given the New Zealand Bill of Rights Act 1990 provisions in section 14 regarding freedom of expression. If expression is to be criminalized the consequences of that expression must warrant the involvement of the criminal law and must be accompanied by the requisite mens rea or intention.

As to the second proposition, the unintended causation of harm is covered by the civil enforcement provisions of the legislation. To eliminate the element of intention would make the offence one of strict liability – an outcome reserved primarily for regulatory or public interest types of offence.

The Harmful Digital Communications Act and “Dangerous Speech”

Could the HDCA in its current form be deployed to deal with “dangerous speech”? The first thing to be remembered is that the remedies in the legislation are available only to individuals. Thus if there were a post directed towards members of a group, an individual member of that group could consider proceedings.

Would that person be “a victim” within the meaning of section 22? It is important to note that the indefinite article is used rather than the definite one. Conceivably if a post were made about members of a group the collective would be the target of the communication and thus every individual member of that collective could make a complaint and claim to be a target of the communication under section 22(4).

To substantiate the complaint it would be necessary to prove that the communication caused serious emotional distress,[28] which may arise from a cumulation of a number of factors.[29] Whether the communication fulfilled the subjective/objective test in section 22(1)(b) would, it is suggested, be clear if the communication amounted to “hate speech”, taking into account the communications principles, along with the factors set out in section 22(2)(a)–(g). The issue of intention to cause harm could be discerned either directly or by inference from the nature of the language used in the communication.

In addition it is suggested that the civil remedies would also be available to a member of a group to whom “dangerous speech” was directed. Even though a group may be targeted, an individual member of the group would qualify as an affected individual if serious emotional distress were suffered. A consideration of the communications principles and whether or not the communication was in breach of those principles would be a relatively straightforward matter of interpretation.

The Harmful Digital Communications Act in Action

Although the legislation was principally directed towards cyber-bullying by young people, most of the prosecutions under the Act have arisen in the context of relationship failures or breakdowns and often have involved the transmission of intimate images or videos – a form of what the English refer to as “revenge porn”. There have been a relatively large number of prosecutions under section 22 – something that was not anticipated by the Law Commission in its Briefing Paper.[30]

Information about the civil enforcement process is difficult to obtain. Although the Act is clear that decisions in proceedings, including reasons, must be published,[31] there are no decisions available on any website to my knowledge.

From my experience there are two issues that arise regarding the civil enforcement process. The first is the way the cases come before the Court. When the legislation was enacted the then Minister of Justice, Judith Collins, considered that the Law Commission recommendation that there be a Communications Tribunal to deal with civil enforcement applications was not necessary and that the jurisdiction under the legislation would form part of the normal civil work of the District Court.

Because of pressures on the District Court, civil work does not receive the highest priority and Harmful Digital Communications applications take their place as part of the ordinary business of the Court. This means that the purpose of the Act in providing a quick and efficient means of redress for victims is not being fulfilled.[32] One case involving communications via Facebook in January of 2017 has been the subject of several part-heard hearings and has yet to be concluded. Even if the Harmful Digital Communications Act is not to be deployed to deal with “dangerous speech”, it is suggested that consideration be given to the establishment of a Communications Tribunal as recommended by the Law Commission so that hearings of applications can be fast-tracked.

The second issue surrounding the civil enforcement regime involves jurisdiction over off-shore online content hosts such as Facebook, Twitter, Instagram and the like. Although Facebook and Google have been cited as parties and have been served in New Zealand, they do not submit to the jurisdiction of the Court, but they nevertheless indicate a willingness to co-operate with requests made by the Court.

In my view the provisions of Subpart 3 of Part 6 of the District Court Rules would be applicable. These provisions allow service outside New Zealand as a means of establishing the jurisdiction of the New Zealand Courts. The provisions of Rule 6.23 relating to service without leave are not applicable and, as the law stands, the leave of the Court would have to be sought to serve an offshore online content host. This is a complex process that requires a number of matters to be addressed about a case before leave may be granted. Once leave has been granted there may be a protest to the jurisdiction by the online content host before the issue of jurisdiction could be established.

One possible change to the law might be an amendment to Rule 6.23 allowing service of proceedings under the HDCA without the leave of the Court. There would still be the possibility that there would be a protest to the jurisdiction but if that could be answered it would mean that the Courts would be able to properly make orders against offshore online content hosts.

Are Legislative Changes Necessary?

It will be clear by now that the law relating to “dangerous speech” in New Zealand does not require major widespread change or reform. What changes may be needed are relatively minor and maintain the important balance contained in the existing law between protecting citizens or groups from speech that is truly harmful and ensuring that the democratic right to freedom of expression is preserved.

The Importance of Freedom of Expression

The New Zealand Bill of Rights Act 1990

The New Zealand Bill of Rights Act 1990 (NZBORA) provides at section 14

“Everyone has the right to freedom of expression, including the freedom to seek, receive, and impart information and opinions of any kind in any form.”

This right is not absolute. It is subject to section 5 which provides “the rights and freedoms contained in this Bill of Rights may be subject only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.”

Section 4 reinforces the concept of Parliamentary supremacy. If a specific piece of legislation conflicts or is inconsistent with NZBORA, the specific piece of legislation prevails. Thus, specific pieces of legislation which impose restrictions or limitations upon freedom of expression – such as the Human Rights Act 1993 and the Harmful Digital Communications Act 2015 – prevail although if an enactment can be given a meaning that is consistent with the rights and freedoms contained in NZBORA, that meaning shall be preferred to any other meaning.[33]

This then provides a test for considering limitations or restrictions on the rights under NZBORA. Limitations must be reasonable and must be demonstrably justified within the context of a free and democratic society.

Thus, when we consider legislation that may impinge upon or limit the freedom of expression the limitation must be

  1. Reasonable
  2. Demonstrably justified
  3. Yet recognizing that we live in a free and democratic society.

The justified limitations test contains within it a very real tension. On the one hand there is a limitation on a freedom. On the other there is a recognition of freedom in that we live in a free and democratic society. I would suggest that although NZBORA does not use this language, the emphasis upon a free and democratic society, and the requirement of reasonableness and demonstrable justification imports an element of necessity. Is the limitation of the freedom necessary?

The problem with freedom of expression is that it is elusive. What sort of limitations on the freedom of expression may be justified?

Freedom of Expression in Practice

The reality with freedom of expression is that it is most tested when we hear things with which we disagree. It is not limited to the comfortable space of agreeable ideas.

Salman Rushdie said that without the freedom to offend, the freedom of expression is nothing. Many critics of current debates seem to conflate the freedom to express ideas with the validity of those ideas, and their judgement on the latter leads them to deny the freedom to express them.

The case of Redmond-Bate v DPP[34] concerned two women who were arrested for preaching on the steps of a church. Sedley LJ made the following comments:[35]

“I am unable to see any lawful basis for the arrest or therefore the conviction. PC Tennant had done precisely the right thing with the three youths and sent them on their way. There was no suggestion of highway obstruction. Nobody had to stop and listen. If they did so, they were as free to express the view that the preachers should be locked up or silenced as the appellant and her companions were to preach. Mr. Kealy for the prosecutor submitted that if there are two alternative sources of trouble, a constable can properly take steps against either. This is right, but only if both are threatening violence or behaving in a manner that might provoke violence. Mr. Kealy was prepared to accept that blame could not attach for a breach of the peace to a speaker so long as what she said was inoffensive. This will not do. Free speech includes not only the inoffensive but the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative provided it does not tend to provoke violence. Freedom only to speak inoffensively is not worth having. What Speakers’ Corner (where the law applies as fully as anywhere else) demonstrates is the tolerance which is both extended by the law to opinion of every kind and expected by the law in the conduct of those who disagree, even strongly, with what they hear. From the condemnation of Socrates to the persecution of modern writers and journalists, our world has seen too many examples of state control of unofficial ideas. A central purpose of the European Convention on Human Rights has been to set close limits to any such assumed power. We in this country continue to owe a debt to the jury which in 1670 refused to convict the Quakers William Penn and William Mead for preaching ideas which offended against state orthodoxy.”

One way of shutting down debate and the freedom of expression is to deny a venue, as we have seen in the unwise decision of Massey University Vice-Chancellor Jan Thomas to deny Mr Don Brash a chance to speak on campus. Auckland City did the same with the recent visit by the speakers Lauren Southern and Stefan Molyneux.

Lord Justice Sir Stephen Sedley (who wrote the judgment in Redmond-Bate v DPP above), writing privately, commented on platform denial in this way:

“A great deal of potentially offensive speech takes place in controlled or controllable forums – schools, universities, newspapers, broadcast media – which are able to make and enforce their own rules. For these reasons it may be legitimate to criticise a periodical such as Charlie Hebdo for giving unjustified offence – for incivility, in other words – without for a moment wanting to see it or any similarly pungent periodical penalised or banned. Correspondingly, the “no platform” policies adopted by many tertiary institutions and supported in general by the National Union of Students are intended to protect minorities in the student body from insult or isolation. But the price of this, the stifling of unpopular or abrasive voices, is a high one, and it is arguable that it is healthier for these voices to be heard and challenged. Challenge of course brings its own problems: is it legitimate to shout a speaker down? But these are exactly the margins of civility which institutions need to think about and manage. They are not a justification for taking sides by denying unpopular or abrasive speakers a platform.”[36]

So the upshot of all this is that we should be careful not to overreact in efforts to control, monitor, stifle or censor speech with which we disagree but which may not cross the high threshold of “dangerous speech”. And we should certainly be careful about trying to hobble the Internet platforms and the ISPs. Because of the global distributed nature of the Internet it would be wrong for anyone to impose their local values upon a worldwide communications network. The only justifiable solution would be one that involved international consensus and a recognition of the importance of freedom of expression.

Conclusion

The function of government is to protect its citizens from harm and to hold those who cause harm accountable. At the same time a free exchange of ideas is essential in a healthy and diverse democracy. In such a democracy, diversity of opinion is as essential as the diversity of those who make up the community.

I have posited a solution that recognizes and upholds freedom of expression and yet recognizes that there is a threshold beyond which untrammeled freedom of expression can cause harm. It is when expression crosses that threshold that the intervention of the law is justified.

I have based my proposal upon an identifiable and objective consequence – speech which is dangerous – rather than the term “hate speech”. Indeed there are some who suggest that mature democracies should move beyond “hate speech” laws.[37] Ash suggests that it is impossible to reach a conclusive verdict upon the efficacy of “hate speech” laws and that there is scant evidence that mature democracies with extensive hate speech laws manifest any less racism, sexism or other kinds of prejudice than those with few or no such laws.[38] Indeed, it has been suggested that the application of “hate speech” laws has been unpredictable and disproportionate. A further problem with “hate speech” laws is that they tend to encourage people to take offence rather than learn to live with the fact that there is a diversity of opinions, or to ignore offending speech, or to deal with it by speaking back – preferably with reasoned argument rather than veto statements.

It is for this reason that I have approached the problem from the perspective of objective, identifiable harm rather than wrestling with the very fluid concept of “hate speech.” For that I may be criticized for ducking the issue. The legal solution proposed is a suggested way of confronting the issue rather than ducking it. It preserves freedom of expression as an essential element of a healthy and functioning democracy yet recognizes that there are occasions when individuals and members of groups may be subjected to physical danger arising from forms of expression.

What is essential is that the debate should be conducted in a measured, objective and unemotive manner. Any interference with freedom of expression must be approached with a considerable degree of care. An approach based upon an objectively identifiable danger rather than an emotive concept such as “hate” provides a solution.

[1] Presumably on the grounds that they depict, promote or encourage crime or terrorism or that the publication is injurious to the public good. See the definition of “objectionable” in the Films, Videos, and Publications Classification Act 1993.

[2] Timothy Garton Ash Free Speech: Ten Principles for a Connected World (Atlantic Books, London 2016) p. 211

[3] US v Schwimmer 279 US 644 (1929)

[4] Daphne Patai Heterophobia: sexual harassment and the future of feminism (Rowman and Littlefield, Lanham 1998).

[5] See Irving v Penguin Books Ltd [2000] EWHC QB 115.

[6] Jeremy Waldron The Harm in Hate Speech (Harvard University Press, Cambridge 2012) p. 120.

[7] Beauharnais v Illinois 343 US 250 (1952).

[8] Section 307A reads as follows:

307A Threats of harm to people or property

(1)           Every one is liable to imprisonment for a term not exceeding 7 years if, without lawful justification or reasonable excuse, and intending to achieve the effect stated in subsection (2), he or she—

(a)           threatens to do an act likely to have 1 or more of the results described in subsection (3); or

(b)           communicates information—

(i)            that purports to be about an act likely to have 1 or more of the results described in subsection (3); and

(ii)           that he or she believes to be false.

(2)           The effect is causing a significant disruption of 1 or more of the following things:

(a)           the activities of the civilian population of New Zealand:

(b)           something that is or forms part of an infrastructure facility in New Zealand:

(c)            civil administration in New Zealand (whether administration undertaken by the Government of New Zealand or by institutions such as local authorities, District Health Boards, or boards of trustees of schools):

(d)           commercial activity in New Zealand (whether commercial activity in general or commercial activity of a particular kind).

(3)           The results are—

(a)           creating a risk to the health of 1 or more people:

(b)           causing major property damage:

(c)            causing major economic loss to 1 or more persons:

(d)           causing major damage to the national economy of New Zealand.

(4)           To avoid doubt, the fact that a person engages in any protest, advocacy, or dissent, or engages in any strike, lockout, or other industrial action, is not, by itself, a sufficient basis for inferring that a person has committed an offence against subsection (1).

[9] [2013] DCR 482. For a full discussion of this case see David Harvey Collisions in the Digital Paradigm: Law and rulemaking in the Internet Age (Hart Publishing, Oxford, 2017) at p. 268 and following.

[10] Police v Joseph above at [2].

[11] Ibid at [7].

[12] [2011] NZSC 45.

[13] Ibid at para [123].

[14] See Human Rights Commission chief legal advisor Janet Bidois quoted in Michelle Duff “Hate crime law review fast-tracked following Christchurch mosque shootings” Stuff 30 March 2019. https://www.stuff.co.nz/national/christchurch-shooting/111661809/hate-crime-law-review-fasttracked-following-christchurch-mosque-shooting

[15] Human Rights Act 1993 sections 21 – 63.

[16] Ibid section 65.

[17] Ibid section 66

[18] Ibid sections 67 and 69.

[19] The provisions of section 61(1) state:

(1)           It shall be unlawful for any person—

(a)           to publish or distribute written matter which is threatening, abusive, or insulting, or to broadcast by means of radio or television or other electronic communication words which are threatening, abusive, or insulting; or

(b)           to use in any public place as defined in section 2(1) of the Summary Offences Act 1981, or within the hearing of persons in any such public place, or at any meeting to which the public are invited or have access, words which are threatening, abusive, or insulting; or

(c)            to use in any place words which are threatening, abusive, or insulting if the person using the words knew or ought to have known that the words were reasonably likely to be published in a newspaper, magazine, or periodical or broadcast by means of radio or television,—

being matter or words likely to excite hostility against or bring into contempt any group of persons in or who may be coming to New Zealand on the ground of the colour, race, or ethnic or national origins of that group of persons.

It should be noted that Internet based publication is encompassed by the use of the words “or other electronic communication”.

[20] Derek Cheng “Winston Peters criticizes Brunei for imposing strict Sharia law” NZ Herald 31 March 2019 https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12217917

[21] New Zealand Law Commission Ministerial Briefing Paper Harmful Digital Communications: The adequacy of the current sanctions and remedies (New Zealand Law Commission, Wellington, August 2012) https://www.lawcom.govt.nz/sites/default/files/projectAvailableFormats/NZLC%20MB3.pdf (last accessed 26 April 2019).

[22] See David Harvey Collisions in the Digital Paradigm: Law and Rulemaking in the Internet (Hart Publishing, Oxford, 2017) especially at Chapter 2

[23] Harmful Digital Communications Act 2015 section 11.

[24] Harmful Digital Communications Act 2015 section 22(4).

[25] It may also include a consensual or non-consensual intimate video recording.

[26] Harmful Digital Communications Act 2015 section 6. These principles are as follows:

Principle 1  A digital communication should not disclose sensitive personal facts about an individual.

Principle 2  A digital communication should not be threatening, intimidating, or menacing.

Principle 3  A digital communication should not be grossly offensive to a reasonable person in the position of the affected individual.

Principle 4 A digital communication should not be indecent or obscene.

Principle 5  A digital communication should not be used to harass an individual.

Principle 6  A digital communication should not make a false allegation.

Principle 7  A digital communication should not contain a matter that is published in breach of confidence.

Principle 8  A digital communication should not incite or encourage anyone to send a message to an individual for the purpose of causing harm to the individual.

Principle 9  A digital communication should not incite or encourage an individual to commit suicide.

Principle 10 A digital communication should not denigrate an individual by reason of his or her colour, race, ethnic or national origins, religion, gender, sexual orientation, or disability.

[27] http://netsafe.org.nz

[28] Harmful Digital Communications Act Section 22(1)(c)

[29] See Police v B [2017] NZHC 526.

[30] For some of the statistics on prosecutions under the Act see Nikki MacDonald “Revenge Porn: Is the Harmful Digital Communications Act Working?” 9 March 2019 https://www.stuff.co.nz/national/crime/110768981/revenge-porn-is-the-harmful-digital-communications-act-working

[31] Harmful Digital Communications Act Section 16(4)

[32] Harmful Digital Communications Act Section 3(b)

[33] See New Zealand Bill of Rights Act section 6. Note also that the Harmful Digital Communications Act provides at section 6 that in performing its functions or exercising powers under the Act the Approved Agency and the Courts must act consistently with the rights and freedoms provided in NZBORA.

[34] [1999] EWHC Admin 733.

[35] Ibid at para [20].

[36] Stephen Sedley Law and the Whirligig of Time (Hart Publishing, Oxford, 2018) p. 176-177. The emphasis is mine.

[37] For example see Timothy Garton Ash Free Speech: Ten Principles for a Connected World (Atlantic, London 2016) especially at 219 and following.

[38] Ibid.

Do Social Network Providers Require (Further?) Regulation – A Commentary

This is a review of, and commentary on, the winning essay in the 2019 Sir Henry Brooke Student Essay Prize. The essay topic was "Do Social Network Providers Require (Further?) Regulation?"

Sir Henry Brooke was a Court of Appeal judge in England who became a tireless campaigner in retirement on issues including access to justice. His post-judicial renown owed much to his enthusiastic adoption of digital technology: he spear-headed early initiatives for technology in the courts and was the first Chair of the British and Irish Legal Information Institute (BAILII), a website that provides access to English and Irish case and statute law. Upon his retirement many came to know him through his blog and tweets. He drafted significant sections of the Bach Commission’s final report on access to justice, and acted as patron to a number of justice organisations including the Public Law Project, Harrow Law Centre and Prisoners Abroad.

The SCL (Society for Computers and Law) Sir Henry Brooke Student Essay Prize honours his legacy. The designated question for 2019, to be answered in 2,000-2,500 words, was “Do social network providers require (further?) regulation?” The winner was Robert Lewis from the University of Law. His essay considers some of the regulatory responses to social media. His starting point is the events of 15 March 2019 in Christchurch.

The first point that he makes is that

“(h)orrors such as Christchurch should be treated cautiously: they often lead to thoughtless or reflexive responses on the part of the public and politicians alike.”

One of his concerns is the possibility of regulation by outrage, given the apparent lack of accountability of social networking platforms.

He then goes on to examine some examples of legislative and legal responses following 15 March and demonstrates the problem with reflexive responses. He starts with the classification of the live stream footage and the manifesto posted by the alleged shooter. He referred to a warning by the Department of Internal Affairs that those in possession of the material should delete it.

He then examines some of the deeper ramifications of the decision. Classification instantly rendered any New Zealander with the video still in his computer’s memory cache, or in any of his social media streams, knowingly or not, potentially guilty of a criminal offence under s 131 of the Films, Videos, and Publications Classification Act 1993. He comments

“Viewing extracts of  the footage shown on such websites was now illegal in New Zealand, as was the failure to have adequately wiped your hard drive having viewed the footage prior to its classification. A significant proportion of the country’s population was, in effect, presented with a choice: collective self-censorship or criminality.”

Whilst he concedes that the decision may have been an example of civic responsibility, in his opinion it did not make good law. Mr. Lewis points out that the legislation was enacted in 1993 just as the Internet was going commercial. His view is that the law targets film producers, publishers and commercial distributors, pointing out that

“these corporate entities have largely been supplanted by the social network providers who enjoy broad exemptions from the law, which has instead been inverted to criminalise “end users”, namely the public which the law once served to protect.”

He also made observations about the maximum penalties, which are minimal when set against the revenue generated by social media platforms.

He then turned his attention to the arrest of a 22-year-old man charged with sharing the objectionable video online. He commented

“that faced with mass public illegality, and a global corporation with minimal liability, New Zealand authorities may have sought to make an example of a single individual. Again, this cannot be good law.”

Mr. Lewis uses this as a springboard for a discussion of the “safe harbour” provisions of the Communications Decency Act (US) and EU Directive 2000/31/EC, which shield social network providers from liability for the content they publish or distribute.

Mr Lewis gives a telling example of some of the difficulties encountered by the actions of social media platforms in releasing state secrets and the use of that released information as evidence in unrelated cases. He observes

“The regulatory void occupied by social network providers neatly mirrors another black hole in Britain’s legal system: that of anti-terrorism and state security. The social network providers can be understood as part of the state security apparatus, enjoying similar privileges, and shrouded in the same secrecy. The scale of their complicity in data interception and collection is unknown, as is the scale and level of the online surveillance this apparatus currently performs. The courts have declared its methods unlawful on more than one occasion and may well do so again.”

A theme that becomes clear from his subsequent discussion is that the current situation with apparently unregulated social media networks is evidence of a collision between the applicability of the law designed for a pre-digital environment and the challenges to the expectations of the applicability of the law in the digital paradigm. For example, he observes that

“The newspapers bear legal responsibility for their content. British television broadcasters are even under a duty of impartiality and accuracy. In contrast, social network providers are under no such obligations. The recent US Presidential election illustrates how invidious this is.”

He also takes a tilt at those who describe the Internet as “the Wild West”.

“This is an unfortunate phrase. The “wild west” was lawless: the lands of the American west, prior to their legal annexation by the United States, were without legal systems, and any pre-annexation approximation of one was illegal in and of itself. In contrast, the social network providers reside in highly developed, and highly regulated, economies where they are exempted from certain legal responsibilities. These providers have achieved enormous concentrations of capital and political influence for precisely this reason.”

He concludes with the observation that unlawful behaviour arises from a failure to apply the law as it exists and ends with a challenge:

“ In England, this application – of a millennium-old common law tradition to a modern internet phenomenon such as the social networks – is the true task of the technology lawyer. The alternative is the status quo, a situation where the online publishing industry has convinced lawmakers “that its capacity to distribute harmful material is so vast that it cannot be held responsible for the consequences of its own business model.””

The problem that I have with this essay is that it identifies a number of difficulties but, apart from suggesting that the solution lies in the hands of technology lawyers, offers no coherent solution. It cites examples of outdated laws, of the difficulty of retroactive solutions, and of the mixed blessings and problems accompanying social media platforms. The question really is whether the benefits that these new communications platforms provide outweigh the disadvantages. There are a number of factors which should be considered.

First, we must recognize that in essence social media platforms enhance and enable communication and the free exchange of ideas – albeit that they may be banal, maudlin or trivial – which is a value of the democratic tradition.

Secondly, we must recognize, and should not resent, the fact that social media platforms are able to monetise the mere presence of users of the service. This is done in a number of what may appear to be arcane ways, but they reflect the basic concept of what Robert A. Heinlein called TANSTAAFL: there ain’t no such thing as a free lunch. Users should not expect a service provided by others to be absolutely free.

Thirdly, we must put aside doctrinaire criticisms of social media platforms as overwhelming big businesses that have global reach. Doing business on the Internet per se involves being in a business with global reach. The Internet extends beyond our traditional Westphalian concepts of borders, sovereignty and jurisdiction.

Fourthly, we must recognize that the Digital Paradigm by its very nature has within it various aspects – I have referred to them elsewhere as properties – that challenge and contradict many of our earlier pre-digital expectations of information and services. In this respect many of our rules which have a basis in underlying qualities of earlier paradigms and the values attaching to them are not fit for purpose. But does this mean that we adapt those rules to the new paradigm and import the values (possibly no longer relevant) underpinning them or should we start all over with a blank slate?

Fifthly, we must recognize that two of the realities in digital communications have been permissionless innovation – a concept that allows a developer to bolt an application on to the backbone – and associated with that innovation, continuous disruptive change.

These are two of the properties I have mentioned above. What we must understand is that if we start to interfere with, say, permissionless innovation and tie the Internet up with red tape, we may be, if not destroying, then seriously inhibiting the further development of this communications medium. Such a solution would, of course, be attractive to totalitarian regimes that do not share democratic values such as freedom of expression.

Sixthly, we have to accept that disruptive change in communications methods, behaviours and values is a reality. Although it may be comfortable to yearn for a nostalgic but non-existent pre-digital Golden Age, by the time such yearning becomes expressed it is already too late. If we drive focused on the rear-view mirror we are not going to recognize the changes on the road ahead. Thus, the reality of modern communications is that ideas to which we may not have been exposed by monolithic mainstream media are now being made available. Extreme views, which in another paradigm may have been expressed within a small coterie, are now accessible to all who wish to read or see them. This may be an uncomfortable outcome for many, but it does not mean that these views have only just begun to be expressed. They have been around for some time. It is just that the property of exponential dissemination means that these views are now widely available. And because of the nature of the Internet, many of these views may not in any event be available to all, or even searchable, located as many of them are away from the gaze of search engines on the Dark Web.

Seventhly, it is only once we understand not only the superficial content layer but also the deeper implications of the digital paradigm (McLuhan expressed it as “the medium is the message”) that we can begin to develop the regulatory strategies we need.

Eighthly, in developing regulatory strategies we must ask ourselves whether they are NECESSARY. What evil are the policies meant to address? As I have suggested above, the fact that a few social media and digital platforms are multi-national organisations with revenue streams greater than the GDP of a small country is not a sufficient basis for regulation per se – unless the regulating authority wishes to maintain its particular power base. But then, who is to say that Westphalian sovereignty has not had its day? Furthermore, it is my clear view that any regulatory activity must be the minimum required to address the particular evil. And care must be taken to avoid the “unintended consequences” to which Mr Lewis has referred and some of which I have mentioned above.

Finally, we are faced with an almost insoluble problem when it comes to regulation in the Digital Paradigm. It is this. The legislative and regulatory process is slow, although the changes to New Zealand’s firearms legislation post 15 March could be said to have been made with unusual haste. The effect has been that the actions of one person have resulted in relieving a large percentage of the population of their lawfully acquired property. Normally the pace of legislative or regulatory change is slow, deliberative and time consuming.

On the other hand, change in the digital paradigm is extremely fast. For example, when I started my PhD thesis in 2004 I contemplated doing something about digital technologies. As it happens I didn’t, and looked at the printing press instead. But by the time my PhD was conferred, social media had happened. And now legislators are looking at social media as if it were new, but by Internet standards it is a mature player. The next big thing is already happening, and by the time we have finally worked out what we are going to do about social media, artificial intelligence will be demanding attention. And by the time legislators get their heads around THAT technology in all its multiple permutations, something else – perhaps quantum computing – will be with us.

I am not saying therefore that regulating social media should be put in the “too hard” basket but that what regulation there is going to be must be focused, targeted, necessary, limited to a particular evil and done with a full understanding of the implications of the proposed regulatory structures.

Facebook and the Printing Press

A recent article in the New Zealand Herald cites historian Niall Ferguson as drawing comparisons between the early days of the printing press and the current freewheeling Digital Paradigm. The argument is that we should learn from the lessons of history.

There is no comparison between the technologies.

To suggest that the printing press enjoyed the “permissionless innovation” afforded by internet and digital technologies ignores the fact that in England the press was under the control of the Stationers Guild (a Company after 1556), which licensed what printers could print and kept a very close eye on what printers did. Indeed, their control was such that the Universities of Oxford and Cambridge were the only sites of presses outside London.
Then there was state regulation of printing that took a number of forms. The Royal Stationer – later the Royal Printer – was responsible for printing the King’s view on things – statutes, proclamations and other such. Thomas Cromwell used the press to great effect during the English Reformation. It was he who used preambles in Statutes to identify the “mischief” that the statute was intended to remedy.
After the incorporation of the Stationers (during the reign of Mary I) it was anticipated that the Company would aid the State using its newly granted search powers to root out the printers of heretical tracts. However the power was deployed to root out unlicensed printers who were not members of the Stationers.
There were also many other efforts by the State to regulate content, some more successful than others. The Star Chamber Decrees of 1587 and 1634 were rather dramatic examples. The Decrees were in fact judgments of the Court in cases involving printing disputes.
Just prior to the Civil War that power of Star Chamber was nullified and printers enjoyed considerable freedom and lack of regulation but it did not last once Oliver Cromwell and the Puritans gathered strength.
After the Restoration there was significant regulation both of printers and of the content of the press by means of the Licensing Acts, the first of which was passed in 1662 and which were renewed regularly thereafter until 1694. Charles II’s enforcer as far as print was concerned was a phanatick (to use the spelling adopted by Neal Stephenson in his Baroque Cycle) by the name of Roger L’Estrange – a very nasty piece of work both by the standards of his time and ours.
In 1694 the Licensing Acts came to an end, primarily as a result of political strife within a greater context, and until 1710 there was a lack of restriction on printing. This all changed when the focus moved from the printer to the author as the person who should have control of content, and the Statute of Anne became the first Copyright Act.
So to say that there is a parallel between Silicon Valley’s freedom to develop platforms and bolt them on to the Internet and the early history of the printing press is wrong. Indeed, the whole structure of the communications technologies is different. The printing press was the technology, and books, magazines, pamphlets and papers were essentially the medium. Today the Internet is the communications technology, and Facebook, Twitter, blogs and the like are platforms bolted on to it. The absence of red tape (what I call permissionless innovation) is what has enabled the growth of the Internet and the proliferation of platforms.
The call is for regulation, but regulation of what? Better to have a regulatory plan in place that we can discuss rather than disembodied pleas to “do something”. Perhaps we could turn to history, but I think we have moved on from the semi-absolutist model of the Tudors and Stuarts.

Fearing Technology Giants

On 15 January 2018 opinion writer Deborah Hill Cone penned a piece entitled “Why tech giants need a kick in the software”

Not a lot of it is very original; it echoes many of the arguments in Jonathan Taplin’s “Move Fast and Break Things”, some of whose propositions I have already critiqued in my earlier post Misunderstanding the Internet. Over the Christmas break I revisited Mr. Taplin’s book. It is certainly not a work of scholarship; rather it is a pejorative-filled polemic that in essence calls for regulation of Internet platforms to preserve certain business and economic models that are challenged by the new paradigm. Mr. Taplin comes from a background primarily in the music industry, and the realities of the digital paradigm have hit that industry very hard. But, as was the case with the film industry, music took an inordinate amount of time to adapt to the new paradigm and develop new business models. That now seems to be happening with iTunes and Spotify, and the movie industry seems to have recognised other models of online distribution such as Netflix, Hulu and other on-demand streaming services.

For Mr. Taplin these new business models are not enough. His argument is that artists should have an expectation that they should draw the same level of income that they enjoyed in the pre-digital age. And that ignores the fact that the whole paradigm has changed.

But Mr. Taplin directs most of his argument against the Internet giants – Facebook, Google, Amazon and the like and singles out their creators and financiers as members of a libertarian conspiracy dedicated to eliminating competition – although to conflate monopolism with libertarianism has its own problems.

Much of Mr. Taplin’s argument uses labels and generalisations which do not stand up to scrutiny. For example, he frequently cites Ayn Rand, whom he describes as a libertarian, as one of the philosophical foundations for the direction travelled by the Internet Giants. In fact Ms. Rand’s philosophy was objectivism rather than libertarianism. Indeed, libertarianism has its own subsets. In using the term does Mr. Taplin refer to Thomas Jefferson’s flavour of libertarianism or that advocated by John Stuart Mill in his classic “On Liberty”? It is difficult to say.

Another problem for Mr Taplin is his brief discussion of the right to be forgotten. He says (at page 98): “In Europe, Google continues to challenge the “right to be forgotten” – customers’ ability to eliminate false articles written about them from Google’s search engine.” (The emphasis is mine.)

The Google Spain case which gave rise to the right to be forgotten discussion was not a case about a false article or false information. In fact the article that Sr Costeja-Gonzalez wished to have deindexed was true. It was an advertisement regarding his financial affairs that had been published in the La Vanguardia newspaper in Barcelona some years before. Deindexing was sought because the article was no longer relevant to Sr Costeja-Gonzalez’s improved fortunes. To characterise Google’s position as resistance to the removal of false information misunderstands the nuances of the right to be forgotten.

One thing is clear. Mr. Taplin wants regulation and the nature of the regulation that he seeks is considerable and of such a nature that it might stifle much of the stimulus to creativity that the Internet allows. I have already discussed some of these concepts in other posts but in summary there must be an understanding not of the content that is delivered via Internet platforms but rather of the underlying properties or affordances of digital technologies.

One of these is the fact that digital technologies cannot operate without copying. From the moment a user switches on a computer or a digital device to the moment that device is shut down, copying takes place. Quite simply, the device won’t work without copying. This is a challenge to concepts of intellectual property that developed after the first information technology – the printing press. The press allowed for mechanised copying and challenged the earlier manual copying processes that characterised the scribal paradigm of information communication.

Now we have a digital system that challenges the assumptions that content “owners” have had about control of their product. And the digital horse has bolted and a new paradigm is in place that has altered behaviours, attitudes, expectations and values surrounding information. And can regulation hold back the flood? One need only look at the file sharing provisions of the Copyright Act 1994 in New Zealand. These provisions were put in place, as the name suggests, to combat file sharing. They are now out of date and were little used when introduced. Technology has overtaken them. The provisions were used sporadically by the music industry and, despite extensive lobbying, not at all by the movie industry.

Two other affordances that underlie digital technologies are linked. The first is that of permissionless innovation which is interlinked with the second – continuing disruptive change.  Indeed it could be argued that permissionless innovation is what drives continuing disruptive change.

Permissionless innovation is the quality that allows entrepreneurs, developers and programmers to develop protocols using standards that are available and that have been provided by Internet developers to “bolt‑on” a new utility to the Internet.

Thus we see the rise of Tim Berners-Lee’s World Wide Web which, in the minds of many, represents the Internet as a whole.  Permissionless innovation enabled Shawn Fanning to develop Napster; Larry Page and Sergey Brin to develop Google; Mark Zuckerberg to develop Facebook and Jack Dorsey, Evan Williams, Biz Stone and Noah Glass to develop Twitter along with dozens of other utilities and business models that proliferate across the Internet.  There is no need to seek permission to develop these utilities.  Using the theory “if you build it, they will come”[1] new means of communicating information are made available on the Internet.  Some succeed but many fail[2].  No regulatory criteria need to be met other than that the particular utility complies with basic Internet standards.

What permissionless innovation does allow is a constantly developing system of communication tools that change in sophistication and the various levels of utility that they enable.  It is also important to recognize that permissionless innovation underlies changing means of content delivery.

So are these the aspects of the Internet and its associated platforms that are to be regulated? If the Internet Giants are to be reined in the affordances of the Internet that give them sustenance must be denied them. But in doing that, it may well be that the advantages of the Internet may be lost. So the answer I would give to Mr Taplin is to be careful what you wish for.

This rather long introduction leads me to a consideration of Ms. Hill Cone’s slightly less detailed analysis, which nevertheless seizes upon Mr Taplin’s themes. Her list of “things to loathe” follows, along with some of my own observations.

1.) These companies (Apple, Alphabet, Facebook, Amazon) have simply been allowed to get unhealthily large and dominant with barely any checks or balances. The tech firms are more powerful than the telco AT&T ever was, yet regulators do nothing (AT&T was split up). In this country the Commerce Commission spent millions fighting to stop one firm, NZME (publisher of the New Zealand Herald), from merging with another, Fairfax (now called Stuff), a sideshow, while they appear stubbornly uninterested in tackling the real media dominance battle: how Facebook broke the media. I know we’re just little old New Zealand, but we still have sovereignty over our nation, surely? [Commerce Commission chairman] Mark Berry? Can’t you do something? The EU at least managed to fine Google a couple of lazy billion.

Taplin deals with this argument in an extensive analysis of the way in which antitrust law in the United States has become somewhat toothless. He attributes this to the teachings of Robert Bork and the Chicago School of law and economics.

Ms Hill Cone’s critique suggests that there is something wrong with large corporate conglomerates per se: that simply because something has become too big it must be bad and therefore should be regulated, rather than first identifying a particular mischief and then deciding whether regulation is necessary – and I emphasise the word necessary.

2.) Some of these tech companies have got richer and richer exploiting the creative content of writers and artists who create things of real value and who can no longer earn a living from doing so.

This is straight out of the Taplin playbook, which I have discussed above. I don’t think it has been suggested that artists are not earning. They are – perhaps not to the level that they used to, and perhaps not from sales or from remuneration from Spotify tracks. But what Taplin points out – and this is how paradigmatic change drives behavioural change – is that artists are moving back to live performance to earn an income. Gone are the days when the artist could rely on recorded performances. So Ms Hill Cone’s critique may be partially correct as it applies to earlier expectations of making an income.

3.) Mark Zuckerberg’s mea culpa, announced in the last few days, that Facebook is going to focus on what he called “meaningful interaction”, is like a drug dealer offering a cut-down dose of its drug, hoping addicts won’t give up the drug completely. Even Zuckerberg’s former mentor, investor Roger McNamee, said in the Guardian that all Zuckerberg is doing is deflecting criticism and leaving users “in peril.”

The pejorative analogy of the drug dealer ignores the fact that no one is required to sign up to Facebook. It is, after all, a choice. And in some respects, Zuckerberg’s announcement is an example of continuing disruptive change, which affects the Internet Giants as much as it does a startup.

4.) These companies have created technology and thrown it out there, without any sense of responsibility for its potential impact. It’s time for them to be held accountable. Last week Jana Partners, a Wall Street investment firm, wrote to Apple pushing it to look at its products’ health effects, especially on children. Even Facebook founder Sean Parker has recently admitted “God knows what [technology] is doing to our children’s brains.”

The target here is permissionless innovation. Upon what basis is it necessary to regulate permissionless innovation? Or does Ms Hill Cone wish to wrap up the Internet in regulatory red tape? As far as the effects of social media are concerned, I think what worries many digital immigrants, and indeed digital deniers, is that all social media does is enable communication – which is what people do. It is an alternative to face to face, telephone, snail mail, email, smoke signals and the rest. We need to accept that new technologies drive behavioural change.

5.) While it’s funny when the bong-sucking entrepreneur Erlich Bachman says in the HBO comedy Silicon Valley: “We’re walking in there with three foot c**ks covered in Elvis dust!” in reality, many of these firms have a repugnant, arrogant and ignorant culture. In the upcoming Vanity Fair story “Oh. My god, this is so f***ed up: inside Silicon Valley’s secretive orgiastic dark side” insiders talked about the creepy tech parties in which young women are exploited and harassed by tech guys who are still making up for getting bullied at school. (Just as bad, they use the revolting term “cuddle puddles”) The romantic image of scrappy, visionary nerds inventing the future in a garage has evolved into a culture of entitled frat boys behaving badly. “Too much swagger and not enough self-awareness,” as one investor said.

I somehow don’t think that the bad behaviours described here is limited to tech companies. I am sure that in her days as a business journalist (and a very good one too) Ms Hill Cone saw examples of the behaviours she condemns in any number of enterprises.

6.) These giant companies suck millions in profits out of our country but do little to participate as good corporate citizens. If they even have an office here at all, it is tiny. And don’t get started on how much tax they pay. A few years ago Google’s New Zealand operation consisted of three people who would fly back and forth from Sydney to manage sales over here. Apparently, Apple has opened a Wellington office and lured “several employees” from Weta Digital. But there is little transparency about how or where these companies do business or how to hold them accountable. There is no local number to call, there is no local door to knock on. And don’t hold your breath that our children might get good jobs working for any of these corporations.

This criticism goes to the tax problem and probably has underneath it a much larger debate about the purposes and morality of the tax system. The classic statement, since modified, is stated in the case of Inland Revenue Commissioners v Duke of Westminster [1936] AC 1 where it was stated:

“Every man is entitled if he can to order his affairs so that the tax attaching under the appropriate Acts is less than it otherwise would be. If he succeeds in ordering them so as to secure this result, then, however unappreciative the Commissioners of Inland Revenue or his fellow tax-payers may be of his ingenuity, he cannot be compelled to pay an increased tax.”

There can be no doubt that the tax laws will be changed to close the existing loophole so that the income derived by Google and Apple from their New Zealand activities will be subject to New Zealand tax. But Ms Hill Cone goes further and suggests that these companies should have a physical presence – a local door to knock on. This is the digital paradigm. It is no longer necessary to have a suite of offices in a CBD building paying rent.

7.) Mark Zuckerberg preaches that Facebook’s mission is to connect people. But Johann Hari’s new book Lost Connections: Uncovering the real causes of depression and the unexpected solutions, out this week, provides convincing evidence that in the digital age people are more lonely than ever. Hari argues the very companies which are trying to “fix” loneliness – Facebook, for example – are the ones which have made people feel more disconnected and depressed in the first place.

The book cited by Ms Hill Cone is by a journalist writing about depression. The diagnosis for his depression was supposedly a chemical imbalance in his brain, whereas he discovered, after investigating some of the social science evidence, that depression and anxiety are caused by key problems with the way that we live. He identifies nine causes of depression and anxiety and offers seven solutions. Much of the book is about the author and the problems he had with the treatment he received. It is as much a critique of the pharmaceutical industry as anything, and it was described in the Guardian as a flawed study. Certainly it cannot be said that Hari argues that social media platforms are causative of depression.

8.) Is all this technology really making the world a better place? At this week’s CES (Consumer Electronics Show) in Las Vegas some of the innovations were positive but a lot of them were really, quite dumb. Do you really need a robot that will fold your laundry or a suitcase that will follow you? Or a virtual reality headset that will make you feel like you are flying on a dinosaur (Okay, maybe that one would be fun.)

Point taken. A lot of inventions are not going to make the world a better place. On the other hand many do. Think Thomas Alva Edison and then think about the Edsel motor vehicle. Ms Hill Cone accepts that some of the innovations were positive and the positive ones will probably survive the “Dragon’s Den” of funding rounds and the market.

These eight points were advanced by Ms Hill Cone as reasons why tech companies should get their comeuppance, as she puts it. It is difficult to decide whether the article is merely a rant or a restatement of some deeper concerns about Tech Giants. If it is the latter, more thorough analysis is required. But unless regulation is absolutely necessary and identifies and addresses a particular mischief, in my view it is not the answer.

But Ms Hill Cone is not alone. Later in January a significant beneficiary of Silicon Valley, Marc Benioff, compared the crisis of trust facing tech giants to the financial crisis of a decade ago. Speaking at the World Economic Forum in Davos, he suggested that Google, Facebook and other dominant firms pose a threat and that what is needed is more regulation. His call was backed by Sir Martin Sorrell, who suggested that Apple, Facebook, Amazon, Google, Microsoft, and China’s Alibaba and Tencent had become too big. Sir Martin compared Amazon founder Jeff Bezos to a modern John D. Rockefeller.

One of the suggestions by Sir Martin was that Google and Facebook were media companies, echoing concerns that had been expressed by Rupert Murdoch. The argument is that as the Internet Giants get bigger, it is not a fair fight. And then, of course, there were the criticisms that the Internet Giants had become so big that they were unaware of the nefarious use of their services by those who would spread fake news.

George Soros added his voice to the calls for regulation in two pieces here and here. At the Davos forum he suggested that Facebook and Google have become “obstacles to innovation” and are a “menace” to society whose “days are numbered”. As mining companies exploited the physical environment, so social media companies exploited the social environment.

“This is particularly nefarious because social media companies influence how people think and behave without them even being aware of it. This has far-reaching adverse consequences on the functioning of democracy, particularly on the integrity of elections.”

In addition to skewing democracy, social media companies “deceive their users by manipulating their attention and directing it towards their own commercial purposes” and “deliberately engineer addiction to the services they provide”. The latter, he said, “can be very harmful, particularly for adolescents”.

He considers that the Internet Giants are unlikely to change without regulation. He compared social media companies to casinos, accusing them of deceiving users “by manipulating their attention” and “deliberately engineering addiction” to their services, and arguing that they should be broken up. Drawing on the model applied in the break-up of AT&T, Soros suggested that the fact that the Internet Giants are near-monopoly distributors makes them public utilities, and that this should subject them to more stringent regulation aimed at preserving competition, innovation and fair and open access.

Soros pointed to steps that had been taken in Europe where he described regulators as more farsighted than those in the US when it comes to social policies, referring to the work done by EU Competition Commissioner Margrethe Vestager, who hit Google with a 2.4 billion euro fine ($3 billion) in 2017 after the search giant was found in violation of antitrust rules.

Even more recently, in light of the indictments proffered by Special Prosecutor Mueller against a number of Russians who attempted to interfere with the US election of 2016, and who used social media to do so, a call has gone up to regulate social media so that this does not happen again. Of course that is a knee-jerk reaction that seems to forget the rights of freedom of expression enshrined in both international convention and domestic legislation, and in the First Amendment to the US Constitution, which protects freedom of speech and under which political speech has been given the highest level of protection in subsequent cases. But nevertheless, the call goes out to regulate.

Facebook has responded to these concerns by reducing the news feeds that may be provided and, more recently in New Zealand, Google has restructured its tax arrangements. Both of these steps represent a response by the Internet Giants to public concern – perhaps an indication of a willingness to self-regulate.

The urge to regulate is a strong one, especially on the part of those who favour the status quo. There can be little doubt that ultimately what is sought is control of the digital environment. The content deliverers like Facebook and Google will be first, but thereafter the architecture – the delivery system that is the Internet, which must remain free and open – will increasingly come under a form of regulatory control that will have little to do with operational efficiency.

Of course, content is low-hanging fruit. Marshall McLuhan recognised that when he observed that “the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.” I doubt very much that content is the real target. Nicolas Sarkozy called for regulation of the Internet in 2012, so the urge to regulate is not new by any means.

At the risk of being labelled a technological determinist, I suggest that trying to impose regulatory structures that preserve the status quo inhibits innovation and creativity as much as, if not more than, leaving the Internet Giants alone would. Rather, I suggest that we should recognise that the changes being wrought are paradigmatic. There will be a transformation of the way in which we use communication systems after the current disruption has run its course. That means that what comes out the other end may not be immediately recognisable to those of us whose values and predispositions were formed during the analog or pre-digital paradigm.

On the other hand those who reject technological determinism still recognise the inevitability of change. Mark Kurlansky in his excellent book “Paper: Paging through history” argues that technologies have arisen to meet societal needs. It is futile to denounce the technology itself. Rather you have to change the operation of society for which the technology was created.  For every new technology there are detractors, those who see the new invention destroying everything that is good in the old.

To suggest that regulation will preserve the present – if indeed it is worth preserving – is rear view mirror thinking at its worst. Rather we should be looking at the opportunities and advantages that the new paradigm presents. And this isn’t going to be done by wishing for a world that used to be, because that is what regulation will do – it will freeze the inevitable development of the new paradigm.

__________________________________________________________________________________________

[1] In fact a misquote that has fallen into common usage from the movie Field of Dreams (Director and Screenplay by Phil Alden Robinson 1989). The correct quote is “If you build it he will come” (my emphasis) http://www.imdb.com/title/tt0097351/quotes (last accessed 3 February 2015).

[2] See for example  Andrew Keen The Internet is Not the Answer (Atlantic Books, London 2015)

Memory Illusions and Cybernannies

A while back I read a couple of very interesting books. One was Dr Julia Shaw’s The Memory Illusion. Dr Shaw describes herself as a “memory hacker” and has a YouTube presence where she explains a number of the issues that arise in her book.

The other book was The Cyber Effect by Dr Mary Aiken who reminds us on a number of occasions in every chapter that she is a trained cyberpsychologist and cyberbehavioural specialist and who was a consultant for CSI-Cyber which, having watched a few episodes, I abandoned. Regrettably I don’t see that qualification as a recommendation, but that is a subjective view and I put it to one side.

Both books were fascinating. Julia Shaw’s book in my view should be required reading for lawyers and judges. We place a considerable amount of emphasis upon memory, assisted by the way in which a witness presents him or herself – what we call demeanour. Demeanour has been well and truly discredited by Robert Fisher QC in an article entitled “The Demeanour Fallacy” [2014] NZ Law Review 575. The issue has also been covered by Chris Gallavin in a piece entitled “Demeanour Evidence as the backbone of the adversarial process” Lawtalk Issue 834, 14 March 2014 http://www.lawsociety.org.nz/lawtalk/issue-837/demeanour-evidence-as-the-backbone-of-the-adversarial-process

A careful reading of The Memory Illusion is rewarding although worrisome. The chapter on false memories, evidence and the way in which investigators may conclude that “where there is smoke there is fire” along with suggestive interviewing techniques is quite disturbing and horrifying at times.

But the book is more than that, although the chapter on false memories, particularly the discussions about memory retrieval techniques, was very interesting. The book examines the nature of memory and how memories develop and shift over time, often in a deceptive way. The book also emphasises how the power of suggestion can influence memory. What does this mean – that everyone is a liar to some degree? Of course not. A liar is a person who tells a falsehood knowing it to be false. Slippery memory, as Sir Edward Coke described it, means that what we are saying we believe to be true even though, objectively, it is not.

A skilful cross-examiner knows how to work on memory and highlight its fallibility. If the lawyer can get the witness in a criminal trial to acknowledge that he or she cannot be sure, the battle is pretty well won. But even the most skilful cross-examiner will benefit from a reading of The Memory Illusion. It will add a number of additional arrows to the forensic armoury. For me the book emphasises the risks of determining criminal liability on memory or recalled facts alone. A healthy amount of scepticism and a reluctance to take an account simply and uncritically at face value is a lesson I draw from the book.

The Cyber Effect is about how technology is changing human behaviour. Although Dr Aiken starts out by stating the advantages of the Internet and new communications technologies, I fear that within a few pages the problems start with the suggestion that cyberspace is an actual place. Although Dr Aiken answers unequivocally in the affirmative it clearly is not. I am not sure that it would be helpful to try and define cyberspace – it is many things to many people. The term was coined by William Gibson in his astonishingly insightful Neuromancer and in subsequent books Gibson imagines the network (I use the term generically) as a place. But it isn’t. The Internet is no more and no less than a transport system to which a number of platforms and applications have been bolted. Its purpose – communication. But it is communication plus interactivity and it is that upon which Aiken relies to support her argument. If that gives rise to a “place” then may I congratulate her imagination. The printing press – a form of mechanised writing that revolutionised intellectual activity in Early-modern Europe – didn’t create a new “place”. It enabled alternative means of communication. The Printing Press was the first Information Technology. And it was roundly criticised as well.

Although the book purports to explain how new technologies influence human behaviour it doesn’t really offer a convincing argument. I have often quoted the phrase attributed to McLuhan – we shape our tools and thereafter our tools shape us – and I was hoping for a rational expansion of that theory. It was not to be. Instead it was a collection of horror stories about how people and technology have had problems. And so we get stories of kids with technology, the problems of cyberbullying, the issues of on-line relationships, the misnamed Deep Web when she really means the Dark Web – all the familiar tales attributing all sorts of bizarre behaviours to technology – which is correct – and suggesting that this could become the norm.

What Dr Aiken fails to see is that by the time we recognise the problems with the technology it is too late. I assume that Dr Aiken is a Digital Immigrant, and she certainly espouses the cause that our established values are slipping away in the face of an unrelenting onslaught of cyber-bad stuff. But as I say, the changes have already taken place. By the end of the book she makes her position clear (although she misquotes the comments Robert Bolt attributed to Thomas More in A Man for All Seasons which the historical More would never have said). She is pro-social order in cyberspace, even if that means governance or regulation and she makes no apology for that.

Dr Aiken is free to hold her position and to advocate it and she argues her case well in her book. But it is all a bit unrelenting, all a bit tiresome these tales of Internet woe. It is clear that if Dr Aiken had her way the very qualities that distinguish the Digital Paradigm from what has gone before, including continuous disruptive and transformative change and permissionless innovation, will be hobbled and restricted in a Nanny Net.

For another review of The Cyber Effect see here

All Data is Created Equal


I must acknowledge the assistance I have received from an excellent unpublished dissertation by Reuel Baptista whose insights into and examinations of potential regulatory outcomes for Net Neutrality are worthy of consideration.

Net Neutrality is an emotive subject for many who are involved in the workings of the Internet and the provision of Internet services and access. It essentially asserts that the transport layer of the Internet – the means by which data moves across the Internet – should be non-discriminatory as to content and treat all data packets equally regardless of nature or origin.

It is a concept that has been developed primarily by Internet engineers, but since the Internet went public in the 1990s it has been the subject of challenge, primarily from commercial entities. There are examples, particularly from the US, of data discrimination and preferential treatment of data in certain circumstances.

The location of the concept of Net Neutrality in Internet legal theory has generally been considered a governance issue, and so it is. Yet despite opportunities to review or address issues of Net Neutrality, the Government’s recent consultation paper on the shape of the delivery of telecommunications services post-2019 made no mention of it.

This state of affairs was also referred to by the Commerce Commission in its determination of the application for merger between Sky and Vodafone where it said at para 90:

Unlike in a number of other jurisdictions, New Zealand does not have any specific laws requiring TSPs to treat all internet traffic equally (known as ‘net neutrality’). This means that TSPs can discriminate between different types of traffic, either by:

90.1 not carrying certain types of content; or

90.2 limiting the speed at which certain content is carried (known as ‘throttling’), which impacts the quality of the content.
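As a purely illustrative aside, the “throttling” the Commission describes in 90.2 is, in engineering terms, a rate limiter. A minimal sketch of the classic token-bucket scheme follows – all names and figures here are hypothetical, not drawn from any actual TSP practice:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a packet passes only while tokens
    remain; tokens refill continuously at rate_bytes_per_sec."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # carried at full speed
        return False      # queued or dropped: the traffic is throttled

# A neutral network applies the same bucket to all traffic; a
# discriminatory one keys the bucket on where the content comes from.
bucket = TokenBucket(rate_bytes_per_sec=1_000_000, burst_bytes=10_000)
print(bucket.allow(1_500))   # a small packet fits within the burst allowance
```

The point of the sketch is simply that discrimination is a configuration choice: the same mechanism that shapes all traffic evenly can, with one extra lookup, slow only a disfavoured provider’s content.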

Despite this, for New Zealand providers Net Neutrality is not really an issue – at least not yet. This doesn’t mean that it won’t become an issue some way down the track, and the concern must be that, when ISPs start discriminating between content and allocating preferential bandwidth, it will by then be too late to do anything about it.

But the reality is that there is more to Net Neutrality than treating data equally. It helps address the negative effects of discriminatory practices such as blocking, paid prioritisation and zero rating. Competition within the fixed-line broadband and content markets, recognition of human rights and a country’s standing in the online economy are all affected by network neutrality. The tension is that there is a need to prevent big or monolithic ISPs from abusing their power while allowing them to optimise the Internet for subsequent waves of innovation and efficiency. Other countries have had this debate and have introduced network neutrality into their telecommunications regulatory frameworks.

It is therefore interesting to read Juha Saarinen’s piece in this morning’s Herald where he suggests that net neutrality no longer matters. He locates his discussion against a background of developing content delivery systems which use geography to enhance speedy delivery. He points out that big services providers can afford to put data centres near customers and cache content there. Others use content delivery networks such as Akamai, Amazon Web Service, and Cloudflare that sit between the customer and the service provider. This, he says, violates Net Neutrality as it makes some sites seem to perform better than others.

With respect, I disagree. That argument is not based on the non-discriminatory treatment of data packets across the Internet but rather is based upon geography and location of data.

Saarinen goes on to dismiss Net Neutrality as an important idea a few years ago but today “we’re probably better off expending our energy elsewhere, like how to keep a diverse and competitive internet provider and Telco market alive in New Zealand.”

So does Saarinen suggest that we kick Net Neutrality to the kerb?

The reality is, as I have already suggested, that Net Neutrality is an essential part of the regulatory and governance processes necessary to ensure a competitive internet provider and Telco market. It is integral to that activity.

With the Telecommunications Act review in progress, this is the right time for New Zealand to formally adopt network neutrality as part of our telecommunications regulatory framework. As Susan Chalmers said at a law conference in 2015:

“The thicket of commercial agreements between content and applications providers and ISPs must not be allowed to develop to such an extent that there will be no political will left to clear a path for [network] neutrality.”

The rapid pace of change in the online world means there may not be another opportunity to discuss network neutrality regulation for some time.
