THERE’S SOMETHING HAPPENING HERE?

Introduction

Bruce Cotterill wrote an opinion piece for the New Zealand Herald, published on Saturday 5 November 2022 and entitled “Free speech – worth speaking up for.” It presented some important and compelling arguments in support of the importance and necessity of freedom of speech.

Mr Cotterill’s article attracted some comment. Even something as fundamentally important as freedom of speech is a contentious topic. Critics of advocates of free speech use the ability to express themselves freely in opposition. If it were not for free speech they would be unable to do so. That in itself demonstrates the vital importance of freedom of expression.

One critic of Mr Cotterill’s piece took him to task for conflating freedom of speech issues with disinformation. The reasoning is clear. There is a move afoot to point out and deal with disinformation. That in itself is a freedom of speech issue. No matter how wrong-headed a point of view might be, if there is no immediacy of physical harm caused by its expression, freedom of expression allows it to be communicated.

I should observe at this stage that rather than the term “Free speech” I prefer to use “freedom of expression.” There are two reasons for this.

The first is that it is the term used in section 14 of the New Zealand Bill of Rights Act 1990.

The second is that the right as expressed in section 14 recognises that freedom of expression is a two way street. There is the right to impart information and opinions of any kind in any form – what could be called the “outward flow”. There is also the right to seek and receive information and opinions of any kind in any form – what could be called the “inward flow”.

In the discussion that follows I go another step further than Mr Cotterill and conflate what is referred to generically as “hate speech” with disinformation. Both concepts have freedom of expression implications. My reasons for conflating the concepts will become clear in what follows.

My discussion commences with a prologue, highlighting some of the remarks made by the Prime Minister of New Zealand, Ms Jacinda Ardern, at the United Nations General Assembly.

These remarks set the stage for the discussion that follows. The starting point for that discussion is the announcement by the Minister of Justice Ms Kiri Allan that “hate speech” legislation – legislation that has had a gestation period that would rival that of a blue whale – will be enacted by the general election in 2023.

The discussion then moves to consider two documentaries that were screened on television during the week of 31 October 2022. One was entitled “Web of Chaos”. The other was the final episode of the series “A Question of Justice” and addressed hate crimes.

I then go on to make some observations about the climate of fear that has continued to develop in New Zealand, fed not only by documentaries such as “Fire and Fury” and “Web of Chaos” but also by some disturbing and sonorous remarks by the Director of the New Zealand Security Intelligence Service, Ms Rebecca Kitteridge.

Taken collectively these various events and pronouncements provide a backdrop against which a discussion of hate speech legislation, mis/disinformation and the tension with the freedom of expression is going to take place.

I pose a question – taken from the opening lines of a 1967 song by Buffalo Springfield entitled “For What It’s Worth” – “There’s something happening here?”

Prologue

On 23 September 2022, Prime Minister Ardern addressed the United Nations General Assembly. She spoke generally of the issues of the day before segueing into a discussion of the new weapons of war, referring to cyber-attacks, prolific disinformation and the manipulation of communities and societies.

The cyberattacks are easily understood. It was the second part that was concerning because the weapons to which Ms Ardern referred were words.

She was quick to reassure her audience, acknowledging that “even the most light-touch approaches to disinformation could be misinterpreted as being hostile to the values of free speech we value so highly”.

Yet within moments she retreated from that view when she posed the rhetorical question  “How do you tackle climate change, if people do not believe it exists?”

The answer becomes clear when you line that comment up against the claim made during the height of the COVID pandemic that the Government was the sole source of truth. The answer is to shut down speech that is hostile to the received wisdom of the Government.

If there is to be a move towards further restrictions of speech – and this is in the wind following the announcement during the week of 30 October that the Minister of Justice will introduce “hate speech” legislation before the next election – who is to decide what speech should be restricted? When does opinion become misinformation? What is an accurate opinion as opposed to an inaccurate one? When does mis/disinformation become “hate speech?” If the law manages to shut down one side of an argument the community is the poorer for being unable to evaluate an alternative view.

Two Documentaries

On 1 November 2022, TV1 screened the documentary “Web of Chaos”. The following day, Prime screened the fourth instalment of the series “A Question of Justice” which addressed hate crimes.

I shall start my consideration of the documentaries by explaining why I conflate disinformation and hate speech.

The predominant theme of “Web of Chaos” is that of disinformation and the way that online networks have enabled its spread. Sadly, at no time is disinformation defined. This is curious because much of the documentary contains interviews or commentary from two academics involved in The Disinformation Project. One of these academics is Ms. Kate Hannah.

Ms Hannah describes how people are drawn into mis/disinformation networks in different ways. She refers to the “trad wife” viewpoint. She claims that a white Christian pseudo-Celtic pseudo-Nordic ideology lies behind this viewpoint. They (presumably the proponents of this ideology) use Pinterest and Instagram to draw in other women who are interested in interior design, children’s clothing, knitting and healthy food for children.

From this innocent start people are drawn in towards a set of white nationalist ideas. Fair-skinned children with braids are a danger signal according to Ms Hannah. She did not explain why this was the case.

She then referred to the association of these ideas with a toxic masculinity which had

 ”…very fixed ideas about gender roles, race, ethnic identity, national identity, nationalism and rights to  things like free speech – very influenced by a totally US centric model.” (“Web of Chaos” at 21.5) 

In essence these characteristics, according to Hannah, derive from US based alt-right perspectives.

If I understand Ms Hannah’s position correctly, disinformation is associated with extremist ideologies. These ideologies are nationalistic, white supremacist and far right.

This may be viewed alongside the material presented in the documentary by Professor Lisa Ellis, Political Philosopher, Otago University. She commented on some aspects leading to the rise of the Nazis in 1930s Germany. The racist hatred of the Nazis is reflected in some modern extremist organisations. Ms Hannah and Professor Ellis focus on the Far Right, but similar racist hatred is expressed in other ideologies represented by Al Qaeda or ISIS.

The Stuff documentary “Fire and Fury” – which I have written about here – dealt with the rise of disinformation and the way in which that led to radical and violent action and extreme expressions of hatred especially towards politicians.

The very clear message from these sources is that disinformation and racial hatred or hate speech are two sides of the same coin. According to Ms Hannah they are inextricably intertwined. One inevitably leads to the other. It seems that any discussion of disinformation ultimately ends up in a consideration of hate speech or extremist speech.

In her address to the opening of New Zealand’s Hui on Countering Terrorism and Violent Extremism – He Whenua Taurikura – Ms Ardern made a similar association between disinformation and violent extremism. I discuss this in detail below.

It is for those reasons that I conflate disinformation and hate speech as both worthy of consideration in a discussion about freedom of expression.

1 November 2022 – Web of Chaos – TV 1

This TV programme was described as “A deep dive into the world of disinformation, exploring why it’s spreading at pace throughout Aotearoa and the world, with specialists warning of striking consequences for social cohesion and democracy.”

In many respects, both in the manner of presentation and the content presented it bore a close relationship to the “Fire and Fury” documentary put out by Stuff. It starts with a recognition of the way in which online platforms can enable communities but then rapidly descends into a critique of what is described as cultish behaviour.

Kate Hannah was joined by Dr. Sanjana Hattotuwa, also of the Disinformation Project and assisted by David Farrier, described as a journalist and podcaster. Farrier tracks the development of Internet communication from the early days of discussion groups to the current world of social media platforms and algorithm driven content.

A fair section of the programme focusses upon the Wellington Protests of February – March 2022, covering the same material as “Fire and Fury” and expressing similar concerns about perceptions of violent radicalism or extremism. A concern expressed by Dr Hattotuwa is that the Internet provides a means of communication and connection between previously isolated radicals. He describes it as the algorithmic amplification of psychosis.

Although it is not clearly explained, there is ample evidence to establish that social media platforms use algorithms in the background. These algorithms are designed to track the search or interest patterns of a user and then provide more information of a similar type. The problem is that as the user follows a particular interest, more and more information associated with that interest will be provided. This can be troublesome if the user’s interests are oriented towards violence or extremism. More problematic is the situation where a user may hover around the edges of extremist content but be served up more and more content of that nature.
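By way of illustration only, the following minimal sketch (in Python) shows the kind of interest-tracking feedback loop described above. The data, names and weighting are entirely hypothetical and assumed for the example; it does not represent any platform’s actual recommendation system.

from collections import Counter

# Hypothetical interest-tracking recommender (illustrative only).
def update_interests(interests, clicked_topics):
    # Each engagement strengthens the matching interest.
    for topic in clicked_topics:
        interests[topic] += 1

def recommend(interests, catalogue, n=5):
    # Serve items whose topics match the user's strongest interests first.
    ranked = [topic for topic, _ in interests.most_common()]
    feed = []
    for topic in ranked:
        feed.extend(catalogue.get(topic, []))
    return feed[:n]

interests = Counter()
catalogue = {"knitting": ["pattern 1", "pattern 2"],
             "fringe politics": ["clip 1", "clip 2", "clip 3"]}
update_interests(interests, ["fringe politics"])  # a single click on edgier content...
print(recommend(interests, catalogue))            # ...and the feed is now dominated by it

The point of the sketch is the loop itself: every item served and clicked strengthens the interest that produced it, so the feed narrows towards whatever the user already engages with – the dynamic the documentary describes.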

Both Dr Hattotuwa and Ms Hannah immerse themselves in the vast amount of what comprises misinformation, disinformation and radical extremism online.

Dr Hattotuwa subscribes to 130 Telegram channels and groups. He concedes he does not read everything that comes across his screen but claims that, because of the way he organizes the information, he gets an insight into the mindset of the people who frequent the channels.

Dr Hattotuwa discussed what he calls toxic information and commentary, including material directed at the Prime Minister. What was extraordinary was the suggestion that this toxic informational landscape was being used by 350,000 New Zealanders – all grooming and harvesting. Dr Hattotuwa emphasizes “It is here. It is amongst you” (“Web of Chaos” at 29.30). No evidence is offered to support either the numbers or the assertion.

Ms Hannah expressed concerns about death threats that she received and records the ritualistic washing of hands she undertakes before she examines archival material – a form of symbolic disengagement from reading unpleasant material. She does the same when investigating information on the computer. Dr Hattotuwa describes how he has two showers a day to symbolically wash away the detritus of the online material he has been viewing. These actions on the part of two individuals who are meant to be carrying out dispassionate and objective research are interesting if only for the level of subjectivity they introduce.

Marc Daalder – a reporter on technology and the Far Right, which must be a clear indicator of something other than an objective perspective – suggests that although there may not be funding of extreme groups in New Zealand, the Internet allows the importation and availability of this material.

Ms Hannah suggests that groups are using New Zealand as a laboratory for disinformation strategies to see if they work.

The documentary offers no solutions other than to have Professor Ellis observe that today’s Digital Natives are less likely to be taken in by mis/disinformation and Conspiracy theories. She holds out some hope for the future.

What the documentary does do is to further enhance the aura of fear that was generated by the “Fire and Fury” piece, identifying what is perceived as a problem but leaving the door open as to solutions.

The conflation of disinformation with hate speech suggests that whatever proposals there may be for restricting or limiting hate speech should be applied equally to disinformation and possibly even misinformation. This would result in a significant limitation upon the freedom of expression.

Ms Hannah and Dr Hattotuwa expressed their views in the “Fire and Fury” documentary as well as the “Web of Chaos” documentary. They are entitled to express their views. My suggestion is that those views should be approached with caution. Although they may be able to point to evidence of what they describe as mis/disinformation, the way in which they interpret that evidence gives me some cause for concern.

Certainly they are neither dispassionate nor objective about their topic. This is evidenced by the reactions that they have to the content of the material that they view. They clearly are responding subjectively to it. They make value judgements rather than empirical or descriptive ones.

One astonishing connection, to which I have referred above, was made by Ms Hannah. In her discussion of the connection between white nationalism and the slide towards extremism she said that an identifier of the groups of which she was critical involved the “advocacy of rights to things like free speech.” (My emphasis)

I trust Ms Hannah does not stand by that generalization. The implication is clear. If one is an advocate of rights such as free speech, one is a right-wing extremist, supporting white nationalism or white supremacy.

That conclusion cannot be supported by the facts. Those who advocate liberty are not extremists. Those who advocate freedom of expression are not far-right wing. For example, an examination of the Council of the Free Speech Union reveals some commentators who occupy a position on the Left of the political spectrum.

Ms Hannah’s sweeping generalisation does neither her argument nor her credibility any good. Dr Hattotuwa’s unsupported assertion that 350,000 people subscribe to the toxic informational network does little for dispassionate analysis or objectivity.

Indeed, examples such as this cause one to examine with a more critical lens the assertions and validity of material that emanates from the Disinformation Project.

Indeed the whole tone of the “Web of Chaos” documentary had a whiff of hysteria to it. Suggestions of a far-Right conspiracy peddling disinformation with the objective of destroying democracy echo the themes underlying “Fire and Fury”.

This was my conclusion on that documentary:

What the Fire and Fury documentary seeks to do is re-channel that fear to a form of opposition to and distrust of the contrarian movement. But after viewing the documentary I was left with an uncomfortable feeling. In all the talk about the weird conspiracy theories put about by the contrarians perhaps the underlying theme of the documentary is a conspiracy theory itself and it seemed to come from Kate Hannah who is one of the heads of the Disinformation Project. She implies that the real threat to democracy comes from a few people given to euphemistic language who make no secret of their views, who are openly all over social media, making no secret of their views and who are well known to Police and the Security Services. Do we really need to fear this vocal minority?

Perhaps Fire and Fury is an example of a mainstream media-based conspiracy theory based on fear and should be treated as such. Or perhaps it is rather a tale told by an idiot, full of Sound and Fury signifying nothing.

One writer described “Fire and Fury” as an example of agitprop. I am driven to agree. I ascribe the same word to the “Web of Chaos” documentary.

2 November 2022 – A Question of Justice – Hate Crimes

The documentary programme “A Question of Justice – Hate Crimes” was the fourth in a series which examined aspects of the New Zealand justice system. Earlier episodes focused on the role of victims in the system, the over-representation of Maori in the criminal justice system and whether there should be degrees of the crime of murder.

The style of the series was to take a case or a couple of cases as exemplars of a problem and then carry out an investigation focusing on the issues raised by those cases.

The episode on hate crimes focused on the Christchurch mosque attacks and the killing of Jae Hyeon Kim by white supremacists. The programme examined the nature of hate crimes and the proposals by the Royal Commission on the mosque attacks surrounding hate speech.

The documentary used an “investigative team” approach: the team reported back, developed an itemized set of problems or shortcomings, and then examined possible solutions. Each episode focused on a particular case or cases.

The investigators themselves acted as reporters and were clearly neutral. Occasionally questions about shortcomings in the system might arise but these were stratagems for further lines of inquiry rather than criticism or advocacy for a particular point of view or outcome.

Documentary maker Bryan Bruce, who leads the series, said of the style of the show:

“I try not to go into any investigation with a ‘stance’. What I try to do is formulate questions that hopefully will get to the core of an issue. Then I talk to a whole lot of people wiser than me to try and find the answer”

Speaking of the first programme in the series about victims, Bruce observed:

“If I had to pick one thing that surprised me, it would be that I had always wrongly assumed the State prosecutes an offender to get justice for the victim. In fact, the prosecutor prosecutes the offender on behalf of the Crown and no one actually represents the victim in court… and that’s something I think we need to look at.”

Bruce stated that the overall purpose of the series was to use

“case studies to examine the law by which we are all bound. Viewers, I hope, will find it engaging but the purpose in making the series was not to produce sheer entertainment.”

The tone of the series was more that of the traditional documentary. It was generally dispassionate and objective, identifying problems and at times suggesting possible solutions without advocating any particular outcome.

In this respect the approach to hate speech differed from that of “Web of Chaos” or “Fire and Fury”. In many respects the “Question of Justice” episode benefitted from a more measured and less emotional approach.

Rather than use dramatic footage and video tricks, it focused upon the nature of the problem and, although not specifically identifying it as such, the way in which the Royal Commission had addressed hate speech and the various tensions between freedom of expression and speech which incited hatred and violent action towards others. In this respect one was left with a sense that reason and objectivity predominated, and that some sense had been brought into the debate.

It would have been helpful if the documentary had detailed the solutions offered by the Royal Commission. I have written on the Royal Commission proposals here.

One of the matters addressed in the Commission’s report was the abandonment of the word “incite”. It suggested that the term “stirring up” was a better one, describing the way in which speech could potentially be transformed into action. However, the documentary closed by focusing on the term “incite”.

One thing that the documentary did not do was attempt to define “hate” or “hate speech”. In this respect it left an interpretative door wide open. It recognized the tension between freedom of expression and harmful speech. It acknowledged the difficulty of deciding where to draw the line. But the wider association of “hate speech” and “disinformation” that has been touted by “Fire and Fury” and “Web of Chaos” remains.

31 October – 1 November 2022 New Zealand’s Hui on Countering Terrorism and Violent Extremism – He Whenua Taurikura

The focus of the hui was the prevention of terrorism and violent extremism. In her opening remarks, Prime Minister Ardern referred to threats to our security. Second and third on the list of the five top threats of most concern to New Zealanders were misinformation and hacking – a reprise of the concerns that she mentioned in her United Nations speech. She went on to say:

  • “Greater efforts are needed to detect dis-information campaigns and networks, and disrupt them, while calling out those that sponsor this activity. We are committed to working with communities, media, academia, civil society, the private sector – especially our social media platforms to counter the threat of disinformation, and I will talk about this and the Christchurch Call in the second part of my speech today.”

In discussing the Christchurch Call, Ms Ardern said:

“There must always be space for radical ideas; these are valued and vital in Aotearoa New Zealand as a free, open, democratic and progressive society.”

This reiterated the acknowledgement of the importance of freedom of expression that she made at the UN. She continued:

“However, when dehumanising and hateful ideas are part of ideologies that include hate and intolerance toward specific groups or communities, promoting or enabling violence, these may indicate a path toward violent extremism.”

To deal with this problem she itemized the importance of research into the problems arising from the online environment upon which we are dependent, and the importance of the international effort – the Christchurch Call.

Using the collective power of the national governments that have joined the Call, the objective is to bring pressure to bear upon technology platforms to change the online and societal landscape.

Ms Ardern then went on to talk about the development of a Strategic Framework for Preventing and Countering Violent Extremism, which includes solutions and approaches developed by society for society. A prevention framework includes a fund for preventing and countering violent extremism. The fund, over three years, will provide grants to civil society and community organisations to support them to deliver initiatives for building resilience to violent extremism and radicalisation.

Finally she stressed the importance of talking about national security, and in this respect the hui was addressed by SIS Director Ms Rebecca Kitteridge.

Ms Kitteridge made the following statement:

“Recognising a potential warning sign and then alerting NZSIS or Police could be the vital piece in the puzzle that ultimately saves lives.”

To that end the SIS has published a guide called “Know the Signs” to help identify terrorists. The Guide is directed towards violent extremism rather than non-violent forms of extremism. Ms Kitteridge suggests that if a person sees something that is “off” or that worries or concerns them, they should consult the guide and try to work out whether the person is on the road to perpetrating an attack.

The guide lists 50 signs from the very obvious (like writing messages on a weapon) to a person who is developing an “us versus them” world view. The SIS is monitoring some 40 – 50 potential terrorists but now a new suspicious class has emerged – those driven by politics. Ms Kitteridge suggests that this could be motivated by the measures that the Government took over COVID or other policies that are interpreted as infringing on rights – what Ms Kitteridge describes as a hot mess of ideologies and beliefs fuelled by conspiracy theories.

It is clear that the publication of the guide means that the SIS recognizes that it cannot do its work alone and that it needs the help of the public.

In the introduction to the Guide Ms Kitteridge states:

“I am asking all New Zealanders to look out for concerning behaviours or activities that could be easily observed, and to report them. You may be uniquely placed to see the signs, and to help NZSIS to understand the true threat an individual poses.”

Paul Spoonley obviously buys into the SIS proposal but sees it as a first step. He sees a problem in upskilling people to understand what it is that they are seeing.

So citizens are being encouraged to monitor friends, family, neighbours and those around them, and must be watchful for the “signs”. They must be upskilled to recognize the “signs”. This air of suspicion is grounded upon fear. This has echoes of the “Red Scare” in the USA between 1917 and 1920. The Red Scare was the promotion of a widespread fear of a potential rise of communism, anarchism or other leftist ideologies by a society or state.

There was a second Red Scare in the USA from 1947 to 1957, associated with the rise of McCarthyism, the fear of Soviet espionage in US Government agencies and the “witch-hunts” that followed. Fear and suspicion characterized both of these periods. History is repeating itself, but on these shores.

The Fear Factor

When the COVID pandemic hit, the Government was able to obtain compliance with a draconian suspension of our rights and liberties. It did this within a context of a climate of fear. The fear was that if the restrictions were not put in place people would contract COVID and die.

The fear factor was a part of the Government strategy through the vaccination programme, the mandates that were imposed, and on to the so-called “traffic-light” system.

It became apparent, after the numbers began to subside, that the fears of death had been overstated. The “fear factor” was received with skepticism on the part of the public, which was prepared to assume risk and take its own measures to protect its health and wellbeing.

Now the fear factor has shifted. The shift has been a gradual one. Instead of the fear of disease and death, what is being advanced is a fear of attacks upon democracy and our way of life – the scare tactics that were applied in the US with the fear of the Communist menace and infiltration.

This narrative began during the pandemic and was highlighted during the vaccine mandates. Those who resisted the mandates – the anti-vaxxers – were viewed as a contrarian threat to the Government line that emanated from “the podium of truth.”

This has morphed into a fear of the erosion of democracy arising from disinformation. The likelihood of terrorism in our own backyard. The need for vigilance. An insidious vaguely identified threat to our way of life.

This fear is magnified by messaging from our politicians. It is suggested that the election next year will be a different one as politicians – at least from the Government – are afraid of walking the streets and canvassing for votes as they once did. An air of hostility is abroad – or at least that is the narrative.

The cultivation of this atmosphere of fear enables the Government to justify erosions of liberty. One example of this will be to target “hate speech” and its close relative, disinformation. A fearful public will be more willing to accept interference with the freedom of expression if it may be seen to address a problem that will supposedly lessen or reduce the fear.

There is a wider issue arising from the climate of fear. I have already addressed it in some detail in an earlier post entitled “Fear Itself”. In that post I conclude with a consideration of the vested interest of mainstream media in promoting the “narrative of truth”. I said there:

Finally it is of interest to observe how vexed the mainstream news media get with the issue of mis/disinformation. Because of the warnings emanating from the Disinformation Project, the Chief Censor’s Office and the University of Auckland Centre for Informed Futures, the news media are quick to fan the flames of fear and perhaps overdramatise the significance of the message. But perhaps there is an unstated interest that the news media might have in campaigning against mis/disinformation. In the past they have been the organs of reliable information and their editing and checking systems ensure this.

The Disinformation Project study indicates that on 10 February 2022 misinformation (as they define it) overtook NZ Media for the first time. Perhaps mainstream media has some territory to protect in the contest for the information audience and in fact what they are doing is campaigning strongly against the purveyors of mis/disinformation not to alert the public or perform some altruistic public interest goal but to do whatever they can to protect their own turf, their position as the purveyors of “truth” (despite significant column inches dedicated to “opinion”) and, not least, their advertising revenues and income streams.

I also made some observations on the fear factor engendered by the agitprop “Fire and Fury” documentary. In that piece I said:

It is a matter of comment in mainstream media that some of the leading lights of Voices for Democracy and other contrarian groups are putting themselves forward for election in the upcoming local body elections. Some of them have done so before. None of them have so far been elected. Yet there is concern about contrarians exercising their democratic right to stand for election. As I understand it the availability of democratic process does not depend on the quality of your beliefs, although those beliefs may cause rejection by the electorate.

So where does this leave us? Certainly during the early days of the Covid-19 pandemic the Government was able to prey on public fears of the outbreak of plague and imminent death to justify lockdowns and to enable the acceptance of discriminatory treatment of citizens based on their vaccination status. The initial response was unplanned but necessary. But we are past that now.

What the Fire and Fury documentary seeks to do is re-channel that fear to a form of opposition to and distrust of the contrarian movement. But after viewing the documentary I was left with an uncomfortable feeling. In all the talk about the weird conspiracy theories put about by the contrarians perhaps the underlying theme of the documentary is a conspiracy theory itself and it seemed to come from Kate Hannah who is one of the heads of the Disinformation Project. She implies that the real threat to democracy comes from a few people given to euphemistic language who make no secret of their views, who are openly all over social media, making no secret of their views and who are well known to Police and the Security Services. Do we really need to fear this vocal minority?

Perhaps “Fire and Fury” is an example of a mainstream media-based conspiracy theory based on fear and should be treated as such. Or perhaps it is rather a tale told by an idiot, full of Sound and Fury signifying nothing.

Conclusion – What it Is is Becoming Clear

The debate about so-called “hate” or “dangerous speech” must take place in a calm and objective environment. I realise that this is a sentiment based more on hope than reality, for the subject is an emotive one.

But the debate must not take place against a backdrop of fear which may mean that the solutions proposed are more extreme than the problem itself.

The growing panic on the part of some about misinformation and disinformation feeds into the wider landscape of concerns about “messaging” and, as I have argued, seems to have fed into the “hate speech” milieu, with calls for regulation.

Comments like “disinformation corrodes the foundation of liberal democracy” – made by Ms Ardern – add to the scaremongering, softening up the populace so that they become pliable and amenable to greater restrictions on the freedom of expression and ultimately their liberty. It won’t just be about “hate speech.” The net will become incrementally and subtly wider to catch other forms of dissident and contrarian opinion.

Indeed, as Thomas Jefferson said “eternal vigilance is the price we pay for liberty” (1817) but perhaps not the form of vigilance suggested by Ms. Kitteridge.   

We must be vigilant to ensure our liberty, and its foundation stone freedom of expression, is not further eroded.

­­­­­­­­­­­­­­­­_____________________________________________________________

Postscript

The title of this post is taken from the first line of a song recorded by Buffalo Springfield in 1966 entitled “For What It’s Worth”. The lyrics follow:

There’s something happening here

But what it is ain’t exactly clear

There’s a man with a gun over there

Telling me I got to beware

I think it’s time we stop

Children, what’s that sound?

Everybody look, what’s going down?

There’s battle lines being drawn

Nobody’s right if everybody’s wrong

Young people speaking their minds

Getting so much resistance from behind

It’s time we stop

Hey, what’s that sound?

Everybody look, what’s going down?

What a field day for the heat

A thousand people in the street

Singing songs and they carrying signs

Mostly say, “Hooray for our side”

It’s time we stop

Hey, what’s that sound?

Everybody look, what’s going down?

Paranoia strikes deep

Into your life it will creep

It starts when you’re always afraid

Step out of line, the men come and take you away

We better stop

Hey, what’s that sound?

Everybody look, what’s going down?


Testing Expression

Introduction

There seems to be an ambivalence in New Zealand about freedom of expression. Although the right to communicate and receive information is guaranteed by section 14 of the New Zealand Bill of Rights Act 1990, the exercise of that right in certain circumstances is questioned. Indeed there seems to be a shift towards banning or censoring some manifestations of expression. In this piece I outline the approach that should be adopted to controversial speech, and the rare circumstances in which censorship – an extreme remedy – should be contemplated. The approach that I have developed owes much to the material in Professor Nadine Strossen’s excellent book “Hate: Why We Should Resist It with Free Speech, Not Censorship”.

The Approach

There are two major principles that must guide an assessment of whether or not an expression should be stifled, censored or punished. These principles are known as the emergency principle and the viewpoint neutrality principle. They have developed in the United States but can operate as useful guidelines for an approach to the application of the freedom of expression guarantees in the New Zealand Bill of Rights Act 1990.

As freedom of expression jurisprudence developed in the United States of America, the Supreme Court held that a government could punish speech based on a feared “bad” or “harmful” tendency. This was based on a vague, general fear that the speech might indirectly contribute to some possible harm at some indefinite future time. This could be called the “harmful tendency” test. This test allowed the State to punish speech that contained ideas that it opposed or did not favour. That included speech that criticized government policies or officials.

The “harmful tendency” approach was rejected by the US Supreme Court in the early twentieth century. It was replaced by a stricter test known as the “emergency” test. Under this test the State could punish speech only when it poses an emergency – that is, when it directly, demonstrably and imminently causes certain specific, objectively ascertainable serious harms that cannot be averted by any means other than censorship. One such other means is what has been described as “counterspeech”.

Counterspeech counters or responds to speech with a message that the speaker rejects. Counterspeech may address various audiences including the speaker and those who share the speaker’s views, the people whom the speech disparages and the general public. It may include denunciations and refutations of the message. It may provide support for persons whom the speech disparages. It may include information that seeks to alter the views of the speaker and those who may be sympathetic to those views. If speech does not satisfy the emergency test, the proper response is counterspeech.

Speech should not be the subject of State interference solely because the message is unpleasant, discomforting, disfavoured or feared to be dangerous by the State. This is known as “content or viewpoint neutrality”. This approach prevents the State from regulating speech simply because the speech’s message, idea or viewpoint is unpleasant, discomforting, offensive, disfavoured or feared to be dangerous by government officials or community members. That approach – what could be called “viewpoint discriminatory” regulation – would attack not only individual liberty but also democratic principles. Officials could use it to suppress unpopular ideas or information or to manipulate public debate.

Censoring speech because it is disfavoured, no matter how deeply, violates the viewpoint neutrality principle. That principle is also violated when the State suppresses speech about public issues. This can include “hate speech” simply because its views might have a disturbing impact upon the emotions or psyches of some audience members. The State may not punish “hate speech” or speech with other messages simply because of its offensive, discomforting, disfavoured, disturbing or feared message.

Counterspeech is available to address such messages. Only when the speech crosses the threshold into the emergency test – that is, when it directly, demonstrably and imminently causes certain specific, objectively ascertainable serious harms that cannot be averted by any means other than censorship – may the State intervene.

I referred to “hate speech” in the preceding paragraph. I have put it in quotation marks. This is because the term lacks specificity of meaning. Its generally understood core meaning is speech that expresses hateful or discriminatory views about certain groups that historically have been subject to discrimination such as people of colour, Jews, Muslims, women and LGBTQ persons, or about certain characteristics that have been the basis for discrimination such as race, gender, religion and sexual orientation. It is not speech that the listener hates to hear. Only when the speech crosses the threshold and satisfies the emergency test should the State intervene. It is for that reason that I prefer to refer to such speech as dangerous speech because it poses a clear and present danger of serious physical harm.

In New Zealand we have a number of State interventions in the area of speech regulation. These can be found in the Films, Videos and Publications Classification Act 1993, the Harmful Digital Communications Act 2015 and the various sections of the Crimes Act 1961 and the Summary Offences Act 1981 dealing with threatening language or behaviour.

Some of these pieces of legislation provide examples of the emergency test in action. For the provisions of the Harmful Digital Communications Act to be engaged serious emotional distress (harm) must be suffered. Criminal penalties are attracted if the person posting the digital material has the requisite intention to post the material with the associated intention of causing serious emotional distress. Thus actual harm is an element that engages legislative intervention. Mere offence or disfavour is not sufficient.

The declaring of material to be objectionable under the Films, Videos and Publications Classification Act 1993 leans towards a harmful tendency test. Material may be objectionable if it describes, depicts, expresses, or otherwise deals with matters such as sex, horror, crime, cruelty, or violence in such a manner that the availability of the publication is likely to be injurious to the public good. This definition suggests that the particular publication may be injurious to the public good – not as an imminent threat – but at some indefinite future time.

The Classification Office is careful to ensure that its determinations fall within the ambit of the categories expressed in the definition of objectionable. Recently, however, there have been a couple of examples where political expression – albeit abhorrent – has been classified as objectionable. However, unless the level of abhorrence comes within the statute it can be addressed by counterspeech.

One of the difficulties facing freedom of expression in New Zealand lies in the climate of fear that has been generated over the period of the Covid pandemic. There has been fear about the consequences of the disease, fear of the consequences if the various directives of the government are not complied with, and fear arising from the expression of contrary views.

Anti-vax sentiments have morphed into anti-government protests and those who express contrarian views have been accused of spreading misinformation and disinformation. All of these views are in the main disfavoured, disturbing or adding to the climate of fear. So much so that the former Chief Censor lent the weight of his office to a publication about misinformation and disinformation entitled “The Edge of the Infodemic – Challenging Misinformation in Aotearoa”.

One wonders whether the Chief Censor of the time wished to see misinformation come within his ambit and be subject to classification or even being classed as objectionable. It is difficult to see how misinformation or disinformation could fall within the emergency test. Although it may be disfavoured, wrong-headed or disturbing it falls within the scope of viewpoint neutrality, best met with counterspeech.

The “Harmful Tendency” In Action

A recent demonstration of the overreaction of the public to forms of expression, the rise of the harmful tendency approach and the belief that the State should intervene is chilling and concerning. Rather than addressing the problem with counterspeech or some such similar demonstration, citizens required the Police to investigate incidents involving the flying of flags.

In Wanaka the investigation involved a red flag with a white circle. Inside the circle was a three pointed icon. What could this have been? Some far-right white supremacist coven, perhaps. It was reported as a racist flag. But no. The flag in fact was a Klingon battle flag from the TV series Star Trek. The Police investigated nevertheless.

The second flag that was investigated was a little more confrontational. A flag was flying from a dwelling bearing the insignia of the gang Black Power along with the iconic clenched fist salute. It was what was written below the salute that caused concern. It was the “N” word but instead of ending “er” it just ended with “a”.

So concerned were the Police that they referred the flag to the Censor in an effort to have it declared objectionable. Quite properly the application was refused.

Although these cases may seem insignificant or trivial in themselves there is a deeper level of concern. Are we becoming too precious about taking offence? Are we leaning towards a “harmful tendency” position? Is the answer to something with which we disagree to complain to the authorities or try to shut it down? That is not what freedom of expression in a democratic society is all about.

That these sentiments seem to be surfacing should be no surprise. The Government holds itself out as the sole source of truth and any disagreement is cast as misinformation or disinformation. Some elements of the media demonise contrary opinions and there seems to be a developing trend to silence or cancel opposing points of view simply because they are perceived to be disagreeable or offensive, rather than engaging with the issue.

The reason that is advanced for failing to engage with the issue is that to do so merely gives oxygen to a contrary point of view, but only by discussion and challenge can the holders of contrary views understand and perhaps even accept they are wrong.

We need to be more robust in the way that we deal with views with which we disagree. We must remember that those expressing such views have as much right to express their sentiments as we have to express ours. And we must remember that the only time speech should be censored is if there is a clear, immediate and present danger that it may cause harm. If the ideas that are the subject of speech are controversial, offensive or disfavoured the remedy lies in debate or persuasion and not the intervention of the State.

Media Safety? Responding to Tohatoha

On 25 July 2022 a new online safety code came into effect. It was drawn up and agreed between a number of online players such as Netsafe, NZTech, Meta (owner of Facebook, Instagram and WhatsApp), Google (owner of YouTube), Amazon (owner of Twitch), Twitter and TikTok.

The Code obliges tech companies to actively reduce harmful content on relevant digital platforms and services in New Zealand as the country grapples with what Netsafe calls a 25 per cent increase in complaints about harmful content over the past year.

It has drawn criticism from InternetNZ and Tohatoha. One of the criticisms is that the Code is very much a work in progress. This cannot be seen as a problem. Any attempt to address harmful content on digital platforms in a dynamic and ever-changing environment such as the Internet must be a continuing and developing task that organically morphs to deal with changes in the digital and content ecosystem.

However, there are other concerns surrounding the development of the Safety Code and the way in which it is to be funded and administered, the most concerning being what seems to be a conflict of interest.

As to the development of the Safety Code, the concern is that consultation and the process of development were limited. The process was conducted primarily through the agency of Netsafe, which co-ordinated it. Accordingly there seems to have been little input from other agencies such as Tohatoha and InternetNZ, at least until the first draft was released in February 2022. Neither civil society organisations nor community representatives were engaged to the same extent. The view is that online safety must be developed with the community at the forefront. The perception is that there was a “coziness” between Netsafe (which will appoint the Administrator) and the corporates.

This criticism is directed primarily at the legitimacy of the Online Safety Code. It suggests, quite properly, that the online community should have been involved from the outset rather than consulted from time to time. The Code would have greater acceptance had it been developed from the ground up with deep involvement by the wider community. Doubtless there were consultations, and certainly a draft of the Code was released in February 2022, but that was a call for comment on a developed proposal rather than a request for detailed input on the devising of the proposal itself.

There should have been a greater level of engagement with the wider community in the development of the proposal if only to ensure that there would be consensus on what was ultimately devised and a level of acceptance of the legitimacy of the Code. As matters stand, those who were not deeply involved will be able to stand on the side-lines and criticise as indeed organisations like Tohatoha and InternetNZ are already doing. Given that situation the legitimacy of the Code, at least as far as the wider community is concerned, is questionable.

Another of the criticisms is associated with that of legitimacy and is directed to what is perceived as a conflict of interest.

The key conflict of interest is that Netsafe would be taking funding from the very organisations it is set up to regulate. In addition, the big platforms know that there is a government media regulation review underway. The Code is perceived as an attempt to undermine what should be the public process of the media regulation review, which is conducted by Government; any legislation emanating from such a review would go through the Select Committee process and the scrutiny of Parliament, the media and the general public. The perception is that, in developing the Code as essentially a non-Government process, Netsafe is undermining democratic processes in collusion with tech platforms.

This criticism has a number of difficulties. Taken to its logical conclusion, it suggests that any form of industry regulation must be government-led. This ignores the various industries and interests that have developed their own methodologies for regulating their own operations in the wider and more public sense. After all, who better to develop a regulatory system than those who have an intimate knowledge of what is to be regulated and who can devise something workable? Involving government would add layers of complexity and an absence of specialist knowledge.

But to be fair, this is not the first time that a review of media regulatory structures has been proposed. In 2011 the New Zealand Law Commission released an Issues Paper entitled “The News Media Meets ‘New Media’: Rights, Responsibilities and Regulation in the Digital Age”. This was in response to a Government request for a review of the legal and regulatory environment in which New Zealand’s news media and other communicators are operating in the digital era. After a lengthy consultation period which was punctuated by a further paper recommending the enactment of Harmful Digital Communications legislation, in 2013 the final report was released.

What had happened over the lengthy consultation period was that those active in the digital space including mainstream media looked at the regulatory structures that were discussed by the Law Commission in the Issues Paper. There were existing regulatory bodies like the Advertising Standards Authority and the Press Council (which were industry funded and voluntary bodies) and the Broadcasting Standards Authority which was a Government Agency. There were no bodies that dealt specifically with the online space. It was clear to those involved in the dissemination of information online – mainstream media as well as bloggers and the alternative online media – that a regulatory model was on the way. To try and provide an alternative to a government led initiative the Online Media Standards Authority was set up. This was a private organisation, funded by the media itself. Membership was voluntary. It had a complaints process and the Tribunal hearing complaints was chaired by a retired High Court Judge. It dealt with complaints about online media on the same basis as the Press Council dealt with mainstream news organisations.

When the Law Commission report finally came out in 2013 it recommended a new converged standards body, folding the functions of the Press Council, the Broadcasting Standards Authority and the newly formed Online Media Standards Authority (OMSA) into one standards body – the News Media Standards Authority or NMSA. This would be established to enforce standards across all publishers of news including linear and non-linear broadcasters, web publishers and the print media.

The NMSA and the regulatory model proposed by the Law Commission did not come to pass. As it happened OMSA recognised that in some respects its role was redundant, that there was a very low level of work for it and that it should merge with the Press Council, which is what happened. The name of the new regulatory body – still voluntary, still funded by the media – is the New Zealand Media Council or NZMC. The members of the Council are drawn from a wide array of backgrounds and the Chair is the Hon Raynor Asher QC, a former High Court and Court of Appeal Judge.

This example demonstrates that there is nothing sinister in organisations establishing and funding their own regulatory structures, even when there is Government interest in the background. As I have suggested before, it is often preferable for an industry to regulate itself rather than submit to some “one size fits all” model proposed by Government.

This, then, leads to some concerns that I have regarding the critique delivered by Tohatoha and endorsed by a number of other bodies, including InternetNZ.

Tohatoha says:

“In our view, this is a weak attempt to pre-empt regulation – in New Zealand and overseas – by promoting an industry-led model that avoids the real change and real accountability needed to protect communities, individuals and the health of our democracy, which is being subjected to enormous amounts of disinformation designed to increase hate and destroy social cohesion.”

The statement goes on to say:

“We badly need regulation of online content developed through a government-led process. Only government has the legitimacy and resourcing needed to bring together the diverse voices needed to develop a regulatory framework that protects the rights of internet users, including freedom of expression and freedom from hate and harassment.”[1]

These statements must give cause for concern. The first concern is the suggestion that there should be regulation of content on the Internet. The second is that this should be through a government-led process. I have already commented on the problems that Government brings to the table in the field of regulation. For Government to be involved in the regulation of news media, or indeed any medium that involves the communication of ideas, is something that requires a great deal of care. Already Government is involved in a number of areas, such as the enactment of the Films, Videos and Publications Classification Act and the Harmful Digital Communications Act. In addition there is Government involvement in the broadcasting spectrum surrounding the licensing of frequencies under the Radiocommunications Act 1989 (and regulations made thereunder), the Telecommunications Act 2001 and the Broadcasting Act 1989.

It seems to me that Tohatoha has overemphasized its advocacy role and overlooked the implications of what it is suggesting. It is clear that by suggesting regulation of content it means a form of control of content. There is another word for this and it is censorship. That a government should lead such regulatory (censorship) process is of even more concern.

Censorship has always been on the side of authoritarianism, conformity, ignorance and the status quo. Advocates for free speech have always been on the side of making societies more democratic, more diverse, more tolerant, more educated and more open to progress.[2]

Finally there is a concern about a loss of social cohesion. By this term what is really meant is a form of coerced conformity and, as John Stuart Mill recognized, the most dire threat to freedom comes from social conformity, which leads to a shortage of diversity – of inclination, interest, talent and opinion – and makes eccentricity a reproach.


[1] https://www.tohatoha.org.nz/2022/07/statement-on-the-release-of-the-aotearoa-code-of-practice-for-online-safety-and-harms/

[2] Erwin Chemerinsky and Howard Gillman, Free Speech on Campus (Yale University Press, 2017) p. 27.

Fear Itself?

Introduction

This is another piece about misinformation and disinformation. I have already written about these issues here and here. In this piece I discuss a paper recently released by the Disinformation Project. I consider the definitions that are used and offer a slightly more nuanced approach to the meaning of the terms “misinformation” and “disinformation”. I then go on to discuss some of the available remedies for problems arising from the dissemination of disinformation and the way in which fear seems to be weaponised to achieve the goal of “social cohesion”. I close with an observation about vested interests and the campaign against disinformation.

Definitional Issues

The working paper “The murmuration of information disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest” from the Disinformation Project[1] captured media attention and is itself an interesting study.

I have previously been rather critical of the way in which the terms “misinformation” and “disinformation” have been bandied about. The authors of the working paper, however, have defined their terms:

Misinformation is false information that was not created with the intent to harm people.

Disinformation is false information that was created with the intent to harm a person, community, or organisation.

The material that is available from the Disinformation Project website does not offer any discussion of how these definitions were settled upon, although it is fair to say that similar definitions have appeared in other publications.

Regrettably, the definitions both suffer from a lack of nuance. The nature of the information is not clarified. The definitions do not state whether or not the information conveyed is a statement of fact or opinion. Furthermore the definitions fail to recognise that often a fact may be determined by a process of inference or conclusion based on other existing facts. It may well be that upon further analysis an inferential conclusion may be erroneous. Whether or not it should be described as false gives rise to another issue. The use of the word “false” suggests a fraudulent, dishonest or morally questionable motive. Yet an inferential conclusion may be reached honestly and in good faith.

The definition of “misinformation” goes on to suggest that the information (which may be incorrect) was created, the suggestion being that it derived from imagination rather than from other pieces of evidence or sources. In my view, rather than the word “created”, the word “communicated” should be used; it more properly crystallises the nature of the problem.

A person may develop some information either from imagination or from other evidential sources but may do nothing with it. In that respect the information, irrespective of its correctness, is passive. Only when it is communicated and comprehended by an audience does the information become active.

The definition of misinformation also contains the element of motive. A person may analyse a number of facts and arrive at a conclusion. That conclusion may be communicated. The conclusion may be incorrect or misleading. But the communication may have been made in good faith as to the correctness or veracity of the conclusion. In such circumstances, the motive for the communication of the information does not matter.

If one is looking for a more nuanced definition of “misinformation” that incorporates the above matters it could read “misinformation is information that is communicated and that is erroneous.”

That definition avoids the issue of motive and the use of the rather loaded word “false”.

“Disinformation” as defined also creates some issues. A simpler way to describe disinformation is to say that it is a lie. In that context the word “false”, as used in the definition, is a correct term. I have more difficulty with the issue of intention: the intention must be to harm a person, community, or organisation.

A Matter of Harm

I wonder if harm is the correct term. In the context of the Harmful Digital Communications Act, harm is defined as “serious emotional distress” which would be a satisfactory, albeit limited, definition for a person or a community. However, it would not be applicable to an organisation.

Harm could also mean some form of adverse consequence which causes loss or damage. In this respect the communication of false information with the intention of causing loss or damage resembles a crime involving dishonesty. In this respect it could be argued that section 240(1)(d) of the Crimes Act 1961 is applicable. This reads:

“Every one is guilty of obtaining by deception or causing loss by deception who, by any deception and without claim of right….. causes loss to any other person.”

Deception is defined as follows:

(a)     a false representation, whether oral, documentary, or by conduct, where the person making the representation intends to deceive any other person and—

(i)  knows that it is false in a material particular; or

(ii) is reckless as to whether it is false in a material particular; or

(b)     an omission to disclose a material particular, with intent to deceive any person, in circumstances where there is a duty to disclose it; or

(c)     a fraudulent device, trick, or stratagem used with intent to deceive any person.

Thus it would seem that the communication of false information would fall within the ambit of deception. It is accompanied by the necessary intention and if it causes loss/harm then the offence would be available.

However, as I understand it from the material that is available on the Disinformation Project website and the various commentaries on the “Murmuration” paper, the harm that is contemplated is more inchoate and nebulous.

The paper states:

“Disinformation highlights differences and divisions that can be used to target and scapegoat, normalise prejudices, harden us-versus-them mentalities, and justify violence.

Disinformation and its focus on social division are at risk of cementing increasingly angry, anxious and antagonistic ways around how we interact with one another, eroding social cohesion and cooperation.

This has dangerous implications for our individual and collective safety.”

Thus, the harm that is perceived is that of divisiveness, antagonism, prejudice and possible physical danger resulting from the use of language that is inciteful. There is concern at the erosion of social cohesion and co-operation.

This theme is picked up by David Fisher in his analysis of the paper. Fisher suggests that the trafficking of false and misleading information should be elevated to the level of national security. With respect I consider such a statement to be unnecessarily shrill and the proposal to be unwarranted. The underlying theme of Fisher’s analysis is that the dissemination of disinformation, some of which originates from overseas sources, poses a threat to established institutions and processes. He cites local body elections and the general election next year which could see a rise in disinformation.

Fisher states:

When it comes to next year’s general election – which attracts much higher public engagement – expect to experience friction as a growing faction with a discordant perception of reality bangs into those who retain faith in the way we live.

The concerns that are voiced by the Disinformation Project and by Fisher express a fear that society is under threat from the spread of disinformation, primarily from a cluster of 12 groups on Facebook and other social media platforms.

These concerns carry an implicit message that “something must be done”. For some of the disinformation concerns there are already remedies. I categorise these remedies available under existing law as “communications offences”. I have discussed them in an earlier post entitled “Dangerous Speech” but I shall summarise these remedies here.

Existing Remedies

Threats of violence or of harm are covered by sections 306 to 307A of the Crimes Act.

Section 307A would seem to be a possible answer to the consequences of disinformation although the language of the section is difficult.

The relevant portions of the section read as follows:

Every one is liable to imprisonment for a term not exceeding 7 years if, without lawful justification or reasonable excuse, and intending to achieve the effect stated in subsection (2), he or she:…..

communicates information—

  •  that purports to be about an act likely to have 1 or more of the results described in subsection (3); and
  •  that he or she believes to be false.

Subsection (2) which deals with the effects that are sought to be achieved reads as follows:

The effect is causing a significant disruption of 1 or more of the following things:

  •  the activities of the civilian population of New Zealand:
  •  something that is or forms part of an infrastructure facility in New Zealand:
  •  civil administration in New Zealand (whether administration undertaken by the Government of New Zealand or by institutions such as local authorities, District Health Boards, or boards of schools):
  •  commercial activity in New Zealand (whether commercial activity in general or commercial activity of a particular kind).

The results that are likely to occur are set out in subsection (3) which reads as follows:

The results are—

  •  creating a risk to the health of 1 or more people:
  •  causing major property damage:
  •  causing major economic loss to 1 or more persons:
  •  causing major damage to the national economy of New Zealand.

However, subsection (4) creates an exception and exempts certain activities from the effect of s. 307A. It reads:

“To avoid doubt, the fact that a person engages in any protest, advocacy, or dissent, or engages in any strike, lockout, or other industrial action, is not, by itself, a sufficient basis for inferring that a person has committed an offence against subsection (1).” (The emphasis is mine)

There has been one case, to my knowledge, that specifically deals with section 307A – that of Police v Joseph [2013] DCR 482.

Other examples of communications offences may be found in the following statutes:

a) the Human Rights Act 1993;

b) the Summary Offences Act 1981;

c) the Harmful Digital Communications Act 2015;

d) the Broadcasting Act 1989;

e) the Films, Videos, and Publications Classification Act 1993; and

f) the Crimes Act 1961.

It should be conceded that not all of the offences created by these statutes deal with the problem of disinformation. I do not propose to discuss all of them; instead I refer the reader to my earlier post on “Dangerous Speech”.

Indeed, the law has been ambivalent towards what could be called communications offences. In 2019 the crime of blasphemous libel was removed from the statute book. Sedition and offences similar to it were removed in 2008. Criminal libel was removed as long ago as 1993.

At the same time the law has recognized that it must turn its face against those who would threaten to commit offences. Thus section 306 criminalises threatening to kill or do grievous bodily harm to any person, or sending or causing to be received a letter or writing threatening to kill or cause grievous bodily harm. The offence requires knowledge of the contents of the communication.

The offence prescribed in section 308 of the Crimes Act involves communication as well as active behaviour. It criminalises the breaking or damaging, or the threatening to break or damage, any dwelling with a specific intention – to intimidate or to annoy. Annoyance is a relatively low level reaction to the behaviour. A specific behaviour – the discharging of firearms that alarms or is intended to alarm a person in a dwelling house, again with the intention to intimidate or annoy – is provided for in section 308(2).

The Summary Offences Act contains the offence of intimidation in section 21. Intimidation may be by words or behaviour. The “communication” aspect of intimidation is provided in section 21(1) which states:

Every person commits an offence who, with intent to frighten or intimidate any other person, or knowing that his or her conduct is likely to cause that other person reasonably to be frightened or intimidated,—

(a)     threatens to injure that other person or any member of his or her family, or to damage any of that person’s property;

Thus, there must be a specific intention – to frighten or intimidate – together with a communicative element – the threat to injure the target or a member of his or her family, or damage property.

In some respects section 21 represents a conflation of elements of sections 307 and 308 of the Crimes Act, together with a lesser threatened harm – that of injury – than appears in section 306 of that Act.

However, there is an additional offence which cannot be overlooked in this discussion and it is that of offensive behavior or language provided in section 4 of the Summary Offences Act.

The language of the section is as follows:

(1)     Every person is liable to a fine not exceeding $1,000 who,—

(a)     in or within view of any public place, behaves in an offensive or disorderly manner; or

(b)     in any public place, addresses any words to any person intending to threaten, alarm, insult, or offend that person; or

(c)     in or within hearing of a public place,—

(i)  uses any threatening or insulting words and is reckless whether any person is alarmed or insulted by those words; or

(ii) addresses any indecent or obscene words to any person.

(2)     Every person is liable to a fine not exceeding $500 who, in or within hearing of any public place, uses any indecent or obscene words.

(3)     In determining for the purposes of a prosecution under this section whether any words were indecent or obscene, the court shall have regard to all the circumstances pertaining at the material time, including whether the defendant had reasonable grounds for believing that the person to whom the words were addressed, or any person by whom they might be overheard, would not be offended.

(4)     It is a defence in a prosecution under subsection (2) if the defendant proves that he had reasonable grounds for believing that his words would not be overheard.

In some respects the consequences of the speech suffered by the auditor (for the essence of the offence relies upon oral communication) resemble those provided in section 61 of the Human Rights Act.

Section 4 was considered by the Supreme Court in the case of Morse v Police [2011] NZSC 45.

In some respects these various offences occupy points on a spectrum. Interestingly, the offence of offensive behaviour has the greatest implications for freedom of expression or expressive behaviour, in that the test incorporates a subjective element on the part of the observer. But it also carries the lightest penalty, and as a summary offence can be seen to be the least serious on the spectrum. The section could be applied in the case of oral or behavioural expression against individuals or groups based on colour, race, national or ethnic origin, religion, gender, disability or sexual orientation as long as the tests in Morse are met.

At the other end of the spectrum is section 306 dealing with threats to kill or cause grievous bodily harm, which carries with it a maximum sentence of 7 years’ imprisonment. This section is applicable to all persons irrespective of colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as are sections 307 and 308, section 21 of the Summary Offences Act and section 22 of the Harmful Digital Communications Act, which could all occupy intermediate points on the spectrum based on the elements of the offence and the consequences that may attend upon a conviction.

There are some common themes to sections 306, 307, 308 of the Crimes Act and section 21 of the Summary Offences Act.

First, there is the element of fear that may be caused by the behaviour. Even though intimidation is not specifically an element of the offences under sections 306 and 307, there is a fear that the threat may be carried out.

Secondly there is a specific consequence prescribed – grievous bodily harm or damage to or destruction of property.

Thirdly there is the element of communication or communicative behavior that has the effect of “sending a message”.

These themes assist in the formulation of a speech-based offence that is a justifiable limitation on free speech, that recognizes that there should be some objectively measurable and identifiable harm that flows from the speech, but that does not stifle robust debate in a free and democratic society.

Democracy vs Cohesion

The concerns about the effects of disinformation, other than those effects which may cause harm, relate more to issues of what is described as social cohesiveness. This is a phrase that seems to have been gaining traction since the Royal Commission Report on the March 15 Christchurch tragedy. It is emphasised in both the “Murmuration” paper and in Fisher’s analysis. The problem with social cohesiveness is that, taken to its ultimate result, we have a society based on silent conformity without any room for dissent, opposition or contrary or contentious opinions.

These elements are essential to a functioning democracy, which is cacophonous by nature and which often involves strongly held and differing opinions. Much of the debate surrounding differing opinions can get quite heated and result in what the Disinformation Project claims are angry, anxious and antagonistic arguments. These have been with us for centuries. One need only look at the arguments that have taken place within the Christian faith over the centuries to understand the passion with which people often approach matters of belief. And, indeed, conflicting opinions within that context would, at the very least, be termed “misinformation” or, at worst, “disinformation”.

Although the printing press was responsible for the wide dissemination of the contentious arguments surrounding the Reformation and, later in England, the constitutional debates that led to the English Civil War, the dissemination of information afforded by social media platforms is exponentially greater. It is perhaps the delivery of the message, rather than the message itself, that seems to be the root of the problem.

Weaponising Fear

Coupled with this is the fact that the perceived disinformation problem is accompanied by a sense of threat to established institutions which in turn generates a sense of fear and foreboding if the problem is allowed to continue or at least to go unrecognised.

Fear seems to be a widely distributed currency these days. Perhaps older generations have had more experience of the reality of fear, having lived through various wars – Korea, Viet-Nam, Gulf 1 and 2 and Afghanistan, to name a few – along with the continuing threat of nuclear conflict, which seemed to dissipate in the 1990s but has now loomed once again, and the spectre of terrorism, which preceded 9/11 – its most egregious example – and which has been exemplified not only by jihadis but by extremists such as Timothy McVeigh, Anders Breivik and Brenton Tarrant.

But fear is used to market other products. The response to the Covid pandemic in New Zealand was underpinned by fear, with concerns about potentially high numbers of deaths from the disease if strong measures were not taken. That fear of death and of the consequences of the pandemic underpinned most of the steps taken by the Government and was probably responsible for the compliant response by the populace, at least in the first year or 18 months of the pandemic.

Fear can be a strong motivator and often drives extreme responses. Senator Joseph McCarthy played on the fear of a Communist conspiracy in post-World War II USA, the reverberations of which were still present in the early 1960s. The end of the Cold War meant that the fear of the Communist threat dissipated, but it was soon replaced in the US by fear of terrorism.

What concerns me is that the fears being expressed around misinformation and disinformation suggest that the phenomenon is a new one. It isn’t, but it has been exacerbated by the exponential dissemination quality of online platforms.

It is also suggested that there are no remedies to deal with disinformation in particular.

There are. In certain cases the provisions of s. 307A of the Crimes Act 1961 could be deployed, along with the other remedies discussed, if they fit the circumstances.

Remedies exist, then, alongside critical analysis of posts that may contain disinformation. To engender a climate of fear is unhelpful, especially when there are existing tools to deal with the issues.

The problem can be summed up by the remark by Franklin D. Roosevelt at his 1933 inauguration –  “the only thing we have to fear is…fear itself — nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.”

Misinformation occupies a different space and in my view poses no threat. The views expressed may be contentious or contrarian perspectives. Often the information contained in these views will be opinions based on certain facts which may or may not be valid. Statements of opinion appear regularly in mainstream media and are labelled as such. Often they are the subject of debate and discussion in online comments sections or in letters to the editor. This is part and parcel of life in a liberal democracy that places a high value upon the right to impart and receive information – no matter how wrongheaded it might be.

In fact the way to deal with misinformation was referred to in an article in the NZ Herald of 18 May 2022 entitled “’Tectonic shift’: How Parliament protest supercharged NZ’s misinfodemic” which contained commentary on the “Murmuration” paper. The Prime Minister’s Chief Science Advisor Dame Juliet Gerrard is quoted as saying:

“New Zealand needs to play its part in the global effort to foster social cohesion and to empower our children to learn skills which make the next generation strong critical thinkers who are as resilient as possible to an increasingly polluted online environment.”

Whilst I would take issue with the “social cohesion” comment I strongly endorse the suggestion that we need to engage in critical analysis and evaluation of the information that we receive. This is something that needs to be done not only by our children but by ourselves.

Social cohesion is a vague and ephemeral concept for defining acceptable behaviour in society. As I have said in an earlier post:

Without the Rule of Law what is being proposed is some form of “understood” code of behaviour based on the concept of a resilient society that has its foundation in social cohesiveness. I would have thought that a clearly communicated and understood Rule system would establish the metes and bounds of acceptable behaviour.

In my view a peaceable society is indeed the objective, but that is precisely the goal of the Rule of Law, which allows for a variety of behaviours but provides consequences for unacceptable behaviours – either by civil remedies or criminal sanctions. It is far better to have a clearly defined approach than a vague and ephemeral one.

Conclusion – Vested Interests.

Finally it is of interest to observe how vexed the mainstream news media get with the issue of mis/disinformation. Because of the warnings emanating from the Disinformation Project, the Chief Censor’s Office and the University of Auckland Centre for Informed Futures, the news media are quick to fan the flames of fear and perhaps overdramatise the significance of the message. But perhaps there is an unstated interest that the news media might have in campaigning against mis/disinformation. In the past they have been the organs of reliable information and their editing and checking systems ensure this.

The Disinformation Project study indicates that on 10 February 2022 misinformation (as they define it) overtook NZ Media for the first time. Perhaps mainstream media has some territory to protect in the contest for the information audience and in fact what they are doing is campaigning strongly against the purveyors of mis/disinformation not to alert the public or perform some altruistic public interest goal but to do whatever they can to protect their own turf, their position as the purveyors of “truth” (despite significant column inches dedicated to “opinion”) and, not least, their advertising revenues and income streams.


[1] It is important to note that the Disinformation Project referred to is based at Victoria University, Wellington and is separate and distinct from the Disinformation Project – an American organization based in Fairfax, Virginia. The website of the NZ organization is https://thedisinfoproject.org. That of the American group is https://thedisinformationproject.org

Suppressing Contentious Material – Misinformation – the new Seditious Libel

Abstract

This post is about the way in which contentious and contrarian views have been dealt with by the authorities. It argues that the terms “misinformation” and “disinformation” are convenient umbrella words to describe the expression of opinions that are other than “mainstream” and that are claimed to be harmful, false and injurious. In this respect the attitude and approach to contentious and contrarian views today echoes the attitude of the “Establishment” to similar contrarian views expressed in the late seventeenth and eighteenth centuries. In those days censorship of contrarian material was carried out primarily through the use of Licensing Acts which gave effective control of printed content to Government officials. After the Licensing Acts lapsed, and in the face of Jacobite conspiracies (both real and imagined) contrarian views were dealt with by charging the authors and printers with seditious libel. The penalties, as will be seen, could be very serious. It is my contention that “misinformation” and “disinformation” are the new seditious libel and the interest of the Government, government agencies, the Chief Censor and indeed the mainstream news media suggests that these views should be more than merely discouraged. The implications for freedom of expression are considerable.

In closing this abstract I wish to clearly state that I do not necessarily endorse the content of contrarian or contentious viewpoints but I do support the right of those who hold them to express them.

A Slice of History

After the Restoration of the Monarchy in England the later Stuarts were confronted with a problem: what was to be done about the business of printing? The reigns of Elizabeth I and Charles I had seen concerted efforts by Star Chamber to regulate not only the content of what was printed but, in the reign of Charles I, how the business of printing was carried on. This did not stop the printing of material critical of the regime, nor of contentious material in matters of politics and faith.

Attempts to regulate printing continued during Cromwell’s Commonwealth but it was not until 1662 that a new programme for regulation of the press, which remained in place until 1694, was put into effect. Star Chamber had gone and was not revived. On 10 June 1662 Parliament enacted the “Act for preventing the frequent Abuses in printing seditious, treasonable and unlicensed books and Pamphlets and for regulating of Printing and printing Presses”.

The printing trade was strictly regulated and limited to the Master Printers of the Stationers Company of London and the printers of the two universities of Oxford and Cambridge. Those who manufactured type were limited as were the number of Master Printers at any one time.

Anything that was printed had to be licensed by official licensers who had their own specialities – law, history and the affairs of state, divinity, philosophy, science and art. The process of obtaining a licence was detailed. Nevertheless, contentious material managed to get through the net despite the best efforts of Sir Roger L’Estrange, the Surveyor of the Press and the King’s chief enforcer of print licensing. In spite of L’Estrange’s efforts probably no more than half the pamphlet literature which appeared carried the official imprimatur. There continued to flow from the presses a stream of publications which in that day and age were considered seditious or offensive and frequently contained lively and vituperative political criticism.

In the seventeenth century both the government and the populace were inexperienced in either digesting the printed page or judging its effects. The reading public was not sufficiently aware that printed material is not necessarily authoritative, and the government had not become accustomed to wide public discussion of its acts, nor had it discovered that unjust criticism often carries its own antidote[1].

The context is important in understanding the seventeenth century state of mind. A monarch had been executed – a serious matter at that time. Several political coteries had been supplanted during the Interregnum. The Oates and Rye House plots confirmed popular suspicions of Catholic and other conspiracies during the reign of Charles II. The Monmouth Rebellion challenged the established rules of succession. James II was ultimately deposed and forced to flee. After the Glorious Revolution of 1688 the fear of a Jacobite retaliation – the treatment of the regicides in 1660 turned even Charles II’s stomach and it was thought that a restored Jacobite monarchy would do the same – continued to disturb English statesmen down to and even after the establishment of the Hanoverian dynasty. Thus it seemed necessary to control and suppress contentious material.

L’Estrange supervised the suppression of contentious religious material in 1666 and was given elaborate instructions to apprehend the parties responsible for The Whore’s Petition in 1669. He was an enthusiastic enforcer of the Statute as Surveyor of the Press, especially between 1662 and 1666, and his influence and activity continued when he was appointed Licensor of the Press, a post he held until 1679.

In 1671 orders went out to round up all the unauthorized printers in London and from time to time it was necessary for the authorities to remind the Stationers Company of their obligations. Indeed, L’Estrange’s approach was that printing should be limited to a few trustworthy and reliable printers who were controlled not by the Stationers Company but by an officer of the Crown (L’Estrange himself). L’Estrange was adamant that the Stationers could not be trusted, that their interests were not those of the authorities and that in the past they had failed to enforce the regulations.

Enforcement was also in the hands of the Secretary of the King’s Council and the various messengers that he appointed. The Secretary was responsible for issuing permissions for searches and seizures which, during the eighteenth century, were successfully disputed most notably in the case of Entick v Carrington [1765] EWHC KB J98; 19 St Tr 1030.

The Regulation of Printing Act had to be renewed every so often, although its renewal was opposed or resisted by the Stationers Company. Finally in 1698 the Act was not renewed. The new century saw a number of unsuccessful steps to revive the Act, but the main purpose of the Act, the suppression of objectionable printing, had produced unsatisfactory results. No test had been devised to determine which books could be classed as offensive. In addition, towards its demise the Act demonstrated that the officially appointed censors could not be trusted.

But that was not the end of the censorship or suppression of contentious material. The State had other means by which it could deal with the dissemination of unpopular or critical views. Contrarian opinion could be classified as seditious and not only the author but the printer could be pursued.

In the summer of 1705 a woman wearing what was described as a “vizard mask” delivered a package to David Edwards, a London printer, along with a coded means by which she could be contacted. The package contained an anonymous, illegal and highly contentious pamphlet entitled The Memorial of the Church of England. The argument in it proposed to topple the Government. But Edwards knew well that sedition sold well in the coffeehouses of the City and he set about printing and distributing the pamphlet.

Edwards had carried on his trade on the fringes of what could be considered the printing of contentious material. In the early 1690s he had printed a Catholic Manual of Prayers and Christian Devotions which contained a prayer for exiled Jacobites – supporters of the deposed King James II. In 1695 Edwards’ premises were raided and a run of Catholic prayer books was seized. He managed to escape prosecution, but after he printed a virulent piece of Jacobite propaganda called The Anti-Curse he was charged with seditious libel, fined and sentenced to stand in the pillory on three occasions.

Depending on the mood of the crowd this could be a serious punishment. On occasion it could be accompanied by having one’s ears nailed to the pillory or being pelted with everything from rotten vegetables to rocks. When Daniel Defoe was pilloried for seditious libel in 1703 he wrote a poem declaring his innocence which moved the crowd in his favour.

We do not know what happened to Edwards. He kept his ears and remained silent and compliant but by 1699 he was publishing controversial pamphlets with considerable vigour. Even after the death of James II he continued to print pamphlets in support of the Jacobite cause and was known for these activities.

The Memorial which Edwards printed contained suggestions about how one fixed a broken society. It argued that all faction, wickedness and conflict could be traced back to a split in religious loyalties. Society was divided because of the wildly contrary religious sects proclaiming their messages and “truths”. Toleration would make the problem worse. What was needed was the outlawing of “occasional conformity” – the practice by which non-conformists could occasionally take communion and thus become eligible for public office – and the casting out of dissenters from society. Among the “pretended” members of the Church were Lord Godolphin, Queen Anne’s treasurer, the Duke of Marlborough and Robert Harley, one of the Queen’s Ministers.

The document was an explosive one and Edwards went to considerable lengths to distance himself from association with it while still printing it. The title page of the work contained no name and provided no information, and the ornamentation that he normally used did not appear.

When the publication hit the streets and coffeehouses Harley himself launched an investigation determined to track down not only the printer but also the author of the work. After publication Edwards was nowhere to be found. His wife was imprisoned and all copies of the pamphlet that could be found were burnt. The mysterious lady “in the vizard mask” who delivered the manuscript was never located and to this day the identity of the author is a mystery.

Harley’s political fortunes rose and fell but in 1711 he was ennobled as the Earl of Oxford and Earl Mortimer and was Lord Treasurer until his downfall in 1714. Harley was a patron of the arts and left behind a collection of manuscripts containing Renaissance and Anglo-Saxon literature known as the Harleian Collection, which is in the British Library.

Harley’s pursuit of the printer and author of the Memorial was driven by the desire to stamp out what was an incendiary and seditious publication and although the tools of print licensing and associated controls were no longer available, nevertheless the desire on the part of the authorities to suppress contentious content remained.

Even after the end of the Stuarts and the installation of the Hanoverians on the throne, the need to control the message by the authorities continued with increasing fervour. James II may have died but he had heirs and there had been, and continued to be, moves by the supporters of the Jacobites to install a Stuart heir on the throne. There was an attempt by James II’s son, the Old Pretender, to regain the throne in 1715 following the death of Queen Anne, and the Young Pretender, Bonnie Prince Charlie, attempted a Jacobite uprising in 1745 which met its end on Drummossie (Culloden) Moor.

In November 1719 John Mathews, aged 18, a printer of Jacobite pamphlets, having been found guilty of treason, was dragged on a hurdle to Tyburn, hanged but cut down while still alive, disembowelled and quartered. He was the last printer to be executed in England but he was not the last victim of the censorship of contentious material. It has been suggested that Harley would not have approved of Mathews’ execution. Better that dissident printers be persuaded to turn on their allies and discover further information for the State. Mathews, however, took his secrets to the gallows.

The persecution of those who propagated contentious views continued. In 1763 John Wilkes published a satirical pamphlet called “The North Briton.” His attacks on the Government in that publication, particularly in the 45th edition, led to his arrest under a general warrant. In January 1764 Wilkes was expelled from the House of Commons but the concern aroused by the general warrants affair led to them being no longer used for the arrest of persons.

Wilkes’ publication also featured in the case of Entick v Carrington [1765] 19 St Tr 1030 which established the basis for the requirement of a search warrant to make lawful the entry of law enforcement officers upon private property.

On 11th November 1762 Carrington and three other named individuals entered a property in Grub Street, a well known area where printing took place, belonging to Entick, and spent four hours there searching all of the rooms, breaking open boxes and going through all of the claimant’s possessions.

They removed one hundred charts and one hundred pamphlets from the property. They were searching for copies of “The Monitor or The British Freeholder”, which was similar to Wilkes’ The North Briton, along with other seditious material believed to have been written by Entick. Entick sued Carrington and his associates in trespass.

Carrington and his associates were acting under the orders of Lord Halifax who was Secretary of State. Halifax’s orders were described as a warrant but the Court, speaking through Lord Camden, held that Halifax had no right under statute or the common law to issue any warrant in such wide terms.

Lord Camden said

The great end, for which men entered into society, was to secure their property. That right is preserved sacred and incommunicable in all instances, where it has not been taken away or abridged by some public law for the good of the whole. The cases where this right of property is set aside by private law, are various. Distresses, executions, forfeitures, taxes etc are all of this description; wherein every man by common consent gives up that right, for the sake of justice and the general good. By the laws of England, every invasion of private property, be it ever so minute, is a trespass. No man can set his foot upon my ground without my licence, but he is liable to an action, though the damage be nothing; which is proved by every declaration in trespass, where the defendant is called upon to answer for bruising the grass and even treading upon the soil. If he admits the fact, he is bound to show by way of justification, that some positive law has empowered or excused him. The justification is submitted to the judges, who are to look into the books; and if such a justification can be maintained by the text of the statute law, or by the principles of common law. If no excuse can be found or produced, the silence of the books is an authority against the defendant, and the plaintiff must have judgment.

This attempt by the authorities to stifle Entick from publishing contrarian views was unsuccessful.

What History Tells Us

What does this foray into the history of censorship and the various attacks on printers tell us?

First, there have always been contrarian views, and the printing press, like Internet platforms of today, enabled the dissemination of those views.

Secondly, the authorities recognized that not only should those who produced the content be brought to book, but those who enabled the wider distribution – the printers and booksellers – should be deterred and punished, in a similar vein to the cries that go up today about the regulation of Internet platforms.

Thirdly, the contrarian views that were expressed were usually minority opinions and were frequently aimed at the establishment. They were also associated with suggestions of “conspiracies”.

In the seventeenth and early eighteenth centuries the authorities went after the printers – the equivalent, although not precisely so, of today’s platforms, in that printers were necessarily aware of what emerged from their presses even though they may not have supported it.

As I have noted, a particular context was the Jacobite threat and the fear of the re-establishment of a Catholic monarch in a state that was largely Protestant with associated fears of retributive persecution. It was a dangerous business to be a Catholic in England in the sixteenth and seventeenth centuries and although there were claims of toleration, the Titus Oates allegations about a Catholic conspiracy and the Gordon anti-papist riots in 1780 showed that toleration was, if anything, skin deep.

Today’s contrarians do not focus upon religious differences but their views run against the flow of the thinking of the majority. The suppression, condemnation or censorship of those views may not be met with the physical sanctions visited upon Mathews for treason, or upon Defoe or Edwards for seditious libel, although some might say that social media may provide a digital analogue for the pillory.

In addition there are other strategies available today to deal with contrarian views. One is to label them with the umbrella word of “misinformation”. Others use official and unofficial channels to demonise contrarians and their opinions.

Contentious Material and its Control Today

The main censorship vehicle in New Zealand is the Films, Videos and Publications Classification Act 1993 (FVPC).

This set up a Classification Office to review and classify material submitted to it, and a Board of Review to which an appeal could be made. 

Prior to its enactment there were three separate regimes with their own criteria: a Chief Censor of Films under the Films Act, a Video Recordings Tribunal under the Video Recordings Act and the Indecent Publication Tribunal under the Indecent Publications Act.

The primary focus of the FVPC Act is upon objectionable publications. Its purpose is to restrict or ban publications which might cause harm to the New Zealand public. A number of agencies are involved in the censorship regime. The Act itself is administered by the Ministry of Justice; however, the Department of Internal Affairs, via its Censorship Compliance Unit, is responsible for enforcing its provisions.[2] Both the Police and the Customs Service have important roles to play in the Act’s enforcement.[3]

The classification system is central to the operation of the Act. For any material to be objectionable, it must first be classified as such.

The classification process is administered by the Office of Film and Literature Classification, the Classification Office, which is an independent Crown Entity[4] headed by a Chief Censor and a Deputy Chief Censor.

The Classification Office is not responsible for all media. Broadcasting, for example, is covered by other legislation.[5] However, if a broadcaster wishes to show a film that has been cut or banned by the Classification Office the broadcaster must obtain a waiver from the Chief Censor to do so.[6]

Special provisions were recently enacted to deal with the classification of streaming and on-demand media from providers such as Netflix, Neon and Disney.

The Annual Report from the Classification Office for 2019/20 observes that there was a rise in violent and potentially extremist material submitted to the Office, which is handled by a specialist Countering Violent Extremism team.

Following the Christchurch terrorist attacks, it was noted that there was a shift in the ideological subject matter of potentially extremist material submitted – with a greater volume of material relating to white supremacy, the far-right, and online hate speech, compared to material related to Islamic fundamentalism.[7]

The Classification Office Annual Report for 2020-2021[8] observes a continuation of this trend. It states:

“there has also been a noticeable increase in publications that deal with violence and violent extremism. The Classification Office also expects to see an increase in such material submitted by the Department of Internal Affairs as their Countering Violent Extremism team becomes fully operational.”

In addition the Classification Office released a report entitled The Edge of the Infodemic: Challenging Misinformation in Aotearoa. It considered that, given what it described as a wave of “misinformation” (which the Report unhelpfully does not define), the Office needed to better understand how New Zealanders felt about misinformation and what they think should be done. By doing so it hoped to start a conversation about what better, more inclusive solutions might look like.

The Office hastens to point out that addressing misinformation doesn’t mean telling people what to think, or stifling debate with more censorship – but it claims New Zealanders want to know they can trust the news and information they’re getting, and government can work together with communities to combat misinformation. It suggests that there must be better ways for government, community, and online platforms to come together to prevent harm.

It suggested steps that can be taken to stop the spread of misinformation such as looking at the source of an article before sharing it, questioning the perspectives represented in it and feeling comfortable about discussing the content with another trusted person.

But, having said that the issue of misinformation is not about telling people what to think or stifling debate, the Report suggests that research can support cross-government collaboration on potential policy and regulatory responses, including a broad media regulatory review, aid education initiatives, and help develop information and resources for the public.

Lately the Chief Censor, Mr David Shanks, has been calling for a widening of his brief. At an Otago University conference about ‘Social Media and Democracy’ in March 2021, Mr Shanks told the conference that the way we regulate media is not fit for the future.

“We can be better than this. I think there’s some very obvious moves that we can do here to make the current regulatory system and framework more coherent for a digital environment,”[9]

As part of an overall review of regulatory structures surrounding harmful information dissemination, the Government released a discussion paper on hate speech and at the same time the Chief Censor released the report referred to above – “The Edge of the Infodemic”[10] which in essence is a survey about how citizens are concerned about misinformation. The internet and social media are identified as key sources – while experts and government are trusted more than news media. 

The Chief Censor says it shows the need for urgent action. It is quite clear that some of the concerns that have been raised about the development of “misinformation”, coming as they have alongside moves by the government to address “hate speech”, suggest a shift in attitude towards the robust discussions that characterise a liberal democracy. Indeed there seems to be a general move towards the position that misinformation is in fact harmful content that should be the subject of some form of regulatory response.

Given that the meaning of objectionable is clearly set out in section 3 of the Films Videos and Publications Classification Act 1993 it seems that the Chief Censor is of the view that his powers should be extended beyond that meaning.

In an article by Kristin Hall of 1News dated 3 April 2022 entitled “Misinformation: How social media turned protest into a problem” Mr Shanks was asked to comment on the increase in “misinformation” since his Infodemic report. He commented that

“We’re seeing an increase in the number of bad actors who have learned how to use digital platforms to spread their distrust of public institutions and the media, that means they create followers who really only believe what they say.”

He claimed that the Government needed to push for tighter regulations of platforms that promote misinformation and then went on to cite the Christchurch terror attack.

Mr Shanks quite rightly deemed the live streamed video of that attack as objectionable. It clearly fell within the definition contained in section 3 of the Films Videos and Publications Classification Act 1993 in that it obviously promoted or encouraged criminal acts or acts of terrorism.

However, to equate that video with “misinformation” – a term that Mr Shanks has not defined and which clearly depends on its own circumstances – is in my view an overreach and it is to be hoped that Mr Shanks does not succeed in having objectionable content include the umbrella term “misinformation”.

In an article by Toby Manhire entitled “Inaction on NZ ‘Nuremberg’ site sparks calls for overhaul of system ‘not fit for purpose’” the author calls out the Domain Name Commissioner for failing to cancel the domain name Nuremberg NZ. The website lists, ranks and depicts New Zealand politicians, academics, scientists and journalists and promises “judgement day is here”. Manhire’s complaint is that the site has been left untroubled by New Zealand regulatory and enforcement agencies, a lack of action that, experts say, exposes shortcomings in the apparatus for responding to dangerous online activity.

In the article Manhire observes that the Chief Censor had not received any complaints about the site. Mr Shanks is quoted as saying

“We have been speaking with other agencies, who have been receiving complaints, and we are aware of the very serious concerns about it. The bar for an objectionable (banned) publication is necessarily very high, and our 1993 legislation is not well suited to responding to the kind of harms presented by websites of this kind.  We are committed to working with other regulatory authorities to determine what we can do with the tools available.”

It is difficult to discern what it is about the site that Manhire considers to be dangerous or why the Chief Censor should consider the website to be harmful. It is, at worst, a ridiculous and stupid form of publication that gives a voice to those who wish to express a contrary opinion about certain named individuals. The site seems to be more aligned with elements of the “sovereign citizen” philosophy (I wouldn’t call it a movement) with which I am familiar, having had to deal with such individuals in my Court.

Calls to limit the spread of misinformation are not restricted to the Chief Censor.

Stephen Judd of FACT Aotearoa – the Fight Against Conspiracy Theories – considers misinformation to be harmful and claims that some of those involved in misinformation are promoting “completely different media and information universes”. Mr Judd would like to see misinformation propagators’ accounts shut down and in Kristin Hall’s article is quoted as saying

“If people who are spreading misinformation are prevented from using mainstream platforms like Facebook, they may go elsewhere but the good thing about that is that they may be harder to find, which means they have to work harder to get a platform for their ideas. So even there that can have a real effect.”

The implications of such a statement for the freedom of expression are chilling. One wonders what Mr. Judd’s response would be if a similar suggestion were to be made about his platform.

Concerns about “misinformation” start at a Government level. At the end of the Parliament grounds occupation the Prime Minister commented “One day it will be our job to try to understand how a group of people could succumb to such wild and dangerous mis- and disinformation.”

When asked whether or not misinformation was a national security issue she did not respond to the question but went on to say “Government agencies are working together to look at how we can better combat the spread of mis- and disinformation and it’s very clear that it’s a whole-of-society approach that’s needed.”[11]

One hopes that in her deliberations the Prime Minister does not lose sight of the provisions of section 14 of the New Zealand Bill of Rights Act protecting the freedom of expression. Or perhaps her long tenure on the “podium of truth” means that she believes her own publicity. It is well known that politicians rarely answer “yes/no” questions with a yes or no and that “spin” is chapter one of the political playbook.

It is therefore encouraging that the head of the SIS Rebecca Kitteridge recognizes the importance of freedom of expression. She says the NZSIS is interested in disinformation when it engages violent extremism or is carried out by a foreign state. “Freedom of speech is a human right,” she says, so the security services step carefully.[12]

Rebecca Kitteridge said “indicators of violence” included someone with an unusual interest in a crowded place or symbolic location or who was seeking explosive material, firearms or knives without good reason[13]. One wonders if this will mean that a PhD student researching material for a thesis on the tools and methods of terrorism will come under the SIS microscope.

However, other Government agencies seem to have an interest in misinformation including the Government’s Combined Threat Assessment Group (CTAG)[14].

Although threats to life, limb and property are properly the concern of Government and law enforcement, and it is necessary that investigations of such threats are carried out, it must be a matter of concern that often hyperbolic chatter is being lumped under the heading of misinformation and that those who engage in such activity may come under suspicion.

Elements of mainstream media have joined in the calls to deal with what is perceived as a rising tide of misinformation. Both Stuff and the New Zealand Herald emphasise that they are trusted sources of information, thus setting themselves above alternative contrarian or contentious opinions which fall into the classification of misinformation.

New Zealand Herald Senior Reporter David Fisher appeared on the Herald Front Page podcast commenting on the dangerous online world of the man charged with threatening to kill the PM. After making some desultory comments about the accused, Fisher wisely steered the discussion away from a matter which was sub judice but went on to discuss at some length the extent of the mis/disinformation realm that was available on the platform Telegram.

Fisher gave an interesting and informative background to those who frequented some of the Telegram messaging streams and there is no doubt that there are some rather strongly expressed contentious and contrarian views expressed on the platform. However, as has been the case in the discourse about “misinformation” there is no attempt to either define the term or clearly state why it is that certain content amounts to misinformation.

Much of the material that is referred to by researchers in the field[15] who seem to be regular “go to” people for the mainstream media is lumped in under the generic term of misinformation but once again little effort is made to define the term or identify the content.

Fisher’s research is extensive and is documented in a couple of recent articles in the NZ Herald “Domestic Terrorism: NZ security agencies’ public guide as violent online talk increases” NZ Herald 9 April 2022 and “Violent talk and fake news: how extremism went mainstream” NZ Herald 9 April 2022. I am indebted to him for some of the material that I have used for this piece.

In his “Violent Talk” piece Fisher identifies some of those who are engaged in what he characterizes as “misinformation.” He refers to Voices of Freedom organisers Claire Deeks, Libby Johnson and Alia Bland and the venerable blogger Cameron Slater, once known as Whale Oil, with whom I once had dealings. He also refers to the somewhat hysterical Kelvyn Alps of the Counterspin website. In researching this piece I spent a couple of hours (time lost which I shall never recover) watching Mr Alps.

Anyone with a modicum of intelligence would recognize that Alps comes from a long line of angry anarchists (the stereotypes generated in many 19th century cartoons come to mind, although they had more hair atop than Mr Alps and their beards were more unkempt than his neat goatee); he has nothing good to say about any form of authority (other than his own) and spends his time propagating his rants.

To suggest that he might have any credibility in motivating the overthrow of the establishment is laughable. The fact that he seems to attract an audience (of which, statistically, I regret to say I am now one) is meaningless. He may have disciples and there may be a few poor souls who hang on his every word but I doubt they are going to blow up government buildings or take up a gun.[16]

I wonder if perhaps Mr Fisher overstates the case or has substituted the earlier fear that everyone had of COVID-19 for some other target – in this case those who propagate the undefined “misinformation”. This fear, which is a subtext of recent news media interest in this phenomenon, should not be allowed to grow into some form of New Zealand equivalent of the activities of Senator Joseph McCarthy and his communist witch hunt of 1950 – 54, which formed the inspiration for Arthur Miller’s play The Crucible.

The personalisation of attacks upon those who express contentious or contrarian views is further evidenced by an article appearing in Stuff for 10 April 2022 by Kirsty Johnson about Sue Grey, a well-known contrarian from Nelson who is also a lawyer. Johnson starts her article benignly enough in what may be characterized as a “profile piece” but then mounts an attack on her subject, observing that a complaint about professional misconduct to the New Zealand Law Society has been escalated to the Lawyers and Conveyancers Disciplinary Tribunal.

There can be no doubt from the article that Ms Grey espouses some contentious and debatable causes, which one is free to do in a liberal democracy, but many of her views are systematically demolished. I don’t have much time for Kirsty Johnson’s journalism but in this case she has at least avoided the umbrella term of “misinformation” and clearly rebuts, with evidence, the points on which she says Ms Grey falls into error. That at least is refreshing in the current mainstream media campaign against “misinformation”.

In making these observations I am not unmindful of the importance of the freedom of the press and a journalist’s freedom of expression. However, I cannot avoid the thought that there may be an agenda involving carving out the informational space so that there is but one authoritative source for information – the mainstream media – and that there is no room for contrarian or contentious views. There are, of course, economic imperatives which drive this. In addition there is the matter of the unfortunate optics involving the availability of some $55 million from the Public Interest Journalism Fund managed by NZ On Air but emanating from Government coffers for news media purposes, presumably to ensure the publication of truth.[17]

Misinformation – the new “seditious libel”

So what has changed since the seventeenth century in the way in which we deal with contrarians and their contentious views? It seems to me that the term “misinformation” is a modern equivalent of the charge of seditious libel that was employed by the later Stuarts to silence dissent or contrary or contentious viewpoints, and one wonders whether or not we have made very much progress as a society. Are the Chief Censor, FACT Aotearoa, the Disinformation Project and elements of mainstream media the twenty-first century equivalents of Sir Roger L’Estrange? Is the fear of “violent extremism” the parallel to the fear of the Catholic Jacobites of the eighteenth century?

Misinformation seems to be used as a veto word which, like other emotive terms such as racism or sexism, is a way of avoiding any confrontation with the argument, in much the same vein as the way in which the Prime Minister deflects an uncomfortable question by disagreeing with the premise of it.

I wonder too whether or not the concerns about misinformation are driven to a certain degree by an air of panic that in fact there are contrary and contentious opinions at large in the community and that they are being voiced and in some cases gaining traction.

The added difficulty is that there seems to be an assumption that citizens are unable to make up their own minds about the validity of certain content and that essentially the whole of society is gullible and needs to be protected from itself. This is no more than a form of, at best, patronizing paternalism driven by a high level of arrogance fostered by a strong belief that the few know what is best for the many.

What seems to be developing is a chain of association: a contrary or contentious opinion rapidly becomes associated with groups who are anti-vaccination. This in turn automatically translates to a suggestion that those who express contrary or contentious opinions are conspiracy theorists who have fallen into a rabbit hole. From there it is but a short leap of faith to the suggestion that those voicing contentious or contrary opinions are dangerous and may even be terrorists.

Some of the more extreme expressions of contentious or contrarian opinions may come close to committing an offence against existing laws and if that can be proven beyond a reasonable doubt then well and good. But to lump all those who express a contrary or contentious view together with the extremists is a gross generalisation and dead wrong.

What is of concern is that the current campaign – for that is what it is – against misinformation is directed against those who express a contrary view. It is almost as if a form of “group-think” is being encouraged and those who do not conform are seen as eroding the peace, order and good government of New Zealand. What such thinking ignores is the importance of freedom of expression in a democracy. Freedom of expression allows a cacophony of views – indeed it encourages it. Section 14 of the New Zealand Bill of Rights Act 1990 states that the freedom of expression is not only to impart information but to receive it. It says nothing about the quality of that information. To try and restrict or suppress contentious or contrarian views eliminates a vital element of our democracy.

Of course our government doesn’t want to be challenged. Of course they don’t like to be told they are wrong. Of course it is concerning if the facts are interpreted to arrive at a conclusion that differs from that of the mainstream. But that is the system that we have got and that we enjoy. Everyone has the same ability regardless of rank, office, position in society or background to freely express a point of view and the Internet – that democratiser of information – allows their voices to be heard. And for it to be suggested that the Government is the only arbiter of truth and that contesting premises may be dismissed is to start to travel an Orwellian path.

To conclude, the current drive against “misinformation” – today’s seditious libel – seems to me to be another attack on the freedom of expression and upon the ability to express views that may be contrary to those of the majority. A justification for this is often cited as the need for “social cohesion” – another term for blind conformity – but in reality it is yet another manifestation of well-meaning but misguided, paternalistic and patronising “liberals” who know better than everyone else what is good for them.


[1] This form of patronizing paternalism persists today in the assumption that people cannot apply critical processes to the assessment of material emanating from Internet based platforms.

[2] For more detail see https://www.dia.govt.nz/Censorship-Our-Role

[3] See Films, Videos, and Publications Classification Act 1993, s 103 making every constable an Inspector of Publications and s 118A (3) (powers given to Customs officer).

[4] Films, Videos, and Publications Classification Act 1993, s 76.

[5] Broadcasting Act 1989.

[6] Broadcasting Act 1989, s 4(2).

[7] Classification Office Annual Report 2019 – 2020 p. 10 https://www.classificationoffice.govt.nz/assets/PDFs/2020-Classification-Office-Annual-Report.pdf (Last Accessed 30 August 2021)

[8] https://www.classificationoffice.govt.nz/documents/127/Annual_Report_-_Classification_Office_2021.pdf (Last Accessed 5 April 2022)

[9] “Battle Against Online Harm beefs up censor’s power” Mediawatch 21 March 2021 https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018788055/battle-against-online-harm-beefs-up-censor-s-power

[10] https://www.classificationoffice.govt.nz/assets/PDFs/Classification-Office-Edge-of-the-Infodemic-Report.pdf (Last accessed 30 August 2021)

[11] David Fisher “Violent talk and fake news: how extremism went mainstream” NZ Herald 9 April 2022 (Last accessed 10 April 2022)

[12] Quoted in David Fisher “Violent talk and fake news: how extremism went mainstream” NZ Herald 9 April 2022 (Last accessed 10 April 2022).

[13] David Fisher “Domestic Terrorism: NZ security agencies’ public guide as violent online talk increases” NZ Herald 9 April 2022 (Last accessed 10 April 2022)

[14] It may be of interest that an earlier piece of mine about misinformation which appeared on my blog was accessed by no less a body than the Department of Prime Minister and Cabinet. I have not scanned my home for monitoring devices and have not seen people in overcoats and wearing slouch hats following me around the Westfield Mall in Newmarket or swimming in the lane next to me at the pool so I can only assume it was coincidence.

[15] Such as Stephen Judd’s FACT Aotearoa and Sanjana Hattotuwa of the Disinformation Project.

[16] As an example of the over-reaction of the authorities to a message posted on the Internet threatening to blow up buildings if a law relating to copyright was enacted see Police v Joseph [2013] DCR 482.

[17] In this regard the line by Pilate in “Jesus Christ – Superstar” “we all have truths – are mine the same as yours” comes to mind.

Talking About “Misinformation”

The following was written some weeks ago, before the occupation of Parliament grounds began. Since then the term “misinformation” has been bandied about, largely as a motivating basis for the protesters and occupiers.

But misinformation is not new, by any stretch. After the advent of the printing press the amount of printed material rocketed – 4,038 items were printed in 1642 alone, just before the beginning of the English Civil War. Clare Jackson in her book “Devil-land: England under Siege 1588 – 1688” states

“Moreover all information – whether audaciously printed or whispered rumour – could be denounced as erroneous, misleading or damaging. Royal courtiers and foreign ambassadors alike detected an unwelcome rise in “false news”, fearing that London’s Royal Exchange could rival the Rialto in Venice or the Piazza Navona in Rome as a notorious site for rumour-mongering”

Chris Keall in the NZ Herald (4 March) sets out ways to address “misinformation” while failing to define the term. In short, he subscribes to the suggestion by Don Christie that all that is needed is for existing laws to be enforced.

In the “bad old days” the information technology of the printing press was regulated quite severely, especially during the reigns of Charles II and James II with the enactment of the Licensing Act 1662. This set up a rigorous licensing regime that came to an end in 1696 when the Act was not renewed. An unintended consequence of that was the enactment of the Statute of Anne 1710 which formed the foundation for our modern law of copyright.

Plus ça change?

The term “misinformation” is a curious one. It is frequently used in commentary, especially in the context of the Covid pandemic. It has been used in a number of official publications (The Disinformation Pandemic; Sustaining Social Cohesion in Aotearoa New Zealand). In those publications it has not been defined. It seems to be assumed that its meaning is understood. Yet the way in which it is used seems to suggest that it is a veto word and that the subject matter to which it refers is to be discounted as misinformation without further explanation.

My training as a lawyer and as an academic has taught me to question assumptions. As a lawyer I have been concerned with establishing a proposition by supporting it with specialized information which lawyers call evidence. As a judge I was required to give reasons for decisions. As a PhD candidate I had to justify every assertion and assumption that I made. In all three examples intellectual rigour is required.

If a speaker or writer asserts that there is misinformation, the question that first springs to mind is “what is misinformation?” What does the critic mean when he or she asserts that the subject matter of the criticism is misinformation? What is the definition of misinformation? Or is it a term that has a number of meanings, the correct one depending on the context in which it is used?

Perhaps we should start by breaking the word down.

“To Inform” is a verb and means to give or impart knowledge of a fact or circumstances[1] – to tell someone about something.[2]

“To misinform” qualifies the word “inform” with the modifier “mis” and means to give someone false or misleading information (we will look at “information” in a moment) – to tell someone something that is not correct.[3]

“Information”, like the verb “to inform”, derives originally from the Latin informare – to inform. It is the nominal (noun) form of the word and has a variety of meanings. The most common synthesis of these meanings is material that informs.

Information need not be verbally transmitted. It may be by way of written material or material that is gathered by the senses and which may then be interpreted into something that provides meaning to or knowledge about what has been experienced. Information may be data which is then analysed and interpreted into a form that has meaning to the person being informed.

Some might say that “misinformation” is a contradiction in terms. If information is to apprise one of something that informs – that is knowledge of a fact or circumstances – something that misinforms and gives false or misleading information cannot be information because information must necessarily be factually true.

There are some who would say that this is the sort of argument that is pettifogging and to a certain extent I would agree. Yet it is probably typical of much of the intellectual laziness that characterizes discourse in these times.

I go back to the suggestion in the first paragraph of this post that “misinformation” is used as a veto word. A veto word effectively shuts down the debate. In a sense it prohibits the continued exchange of ideas. To say that one is “offended” is a form of veto word because it fails to address the argument and often does not explain why an argument gives offence. Accusations of “racism” and “racist” are veto words in that they do not progress the argument and are often tantamount to “ad hominem” attacks.

So it is with “misinformation”. The way it is used, without proper definition or understanding, is vague and imprecise. To characterize a position as “misinformation” generally means that there is disagreement with the position, or that the position is contrary to a view held by a majority, or that the evidence points to a different conclusion.

Yet it is considered that to characterize a position as misinformation is an answer to the argument whereas in proper discourse there should be some explanation of why it is that a position may be characterized as misinformation.

Frequently what is considered “misinformation” is an opinion or one person’s interpretation of the facts. It may be that the facts relied upon are selective and do not tell the whole story. Or the interpretation of the facts selected lacks context and nuance. Or there is a line of authority that refutes the basis for the opinion held.

But to dispose of a position as “misinformation” without more is intellectually lazy and seems to suggest disagreement rather than a reasoned and logical answer to a position.

The problem is that “misinformation” has become so misused that there seems to be a move afoot to either stamp it out, eliminate it or stem its spread. Many of the Internet based platforms are accused of spreading misinformation. And this, it is said, should be stopped. But how? How do you separate honest opinion from a wilful attempt to mislead? And is such a move rather insulting to listeners, viewers or consumers? Should they not be able to make up their own minds and exercise their own judgement?

Neil Young has decided to remove his music from Spotify because that platform hosts Joe Rogan, whose podcasts contain “misinformation”. In a general sense the criticism of Rogan is that he has spread vaccine misinformation but sadly the detail of the misinformation is missing. I suppose I could listen to Rogan’s podcasts to work out where the misinformation lies – if any – or whether what is characterized as misinformation is in fact a statement of opinion but frankly, I have better things to do with my time.

It would have been helpful for the argument to have been clearly stated but those who throw the word “misinformation” about are more concerned with expressing their disagreement than engaging with an issue in a meaningful way.

The current drive against “misinformation” seems to me to be another attack on the freedom of expression and upon the ability to express views that may be contrary to those of the majority. A justification for this is often cited as the need for “social cohesion” – another term for blind conformity – but in reality it is yet another manifestation of well-meaning but misguided “liberals” who know better than everyone else what is good for them.


[1] Collins English Dictionary

[2] Cambridge English Dictionary

[3] Cambridge English Dictionary

Social Cohesion or Social Conformity?

The recent paper “Sustaining Aotearoa New Zealand as a Cohesive Society”[1] addresses technology as an aspect of and threat to social cohesion. From a wider perspective it questions the assumptions about social cohesion as a supporter or an essential for a liberal democracy.

It puts forward matters that need to be considered in achieving social cohesion. It suggests that social cohesion is breaking down in the face of a fragmentation of values arising from disparate sources, the main one being “misinformation” or “disinformation” disseminated via social media platforms, which causes a questioning and distrust of the institutions that underpin society.

Social cohesion is seen as a vital element of a resilient liberal democracy. What amounts to a resilient liberal democracy, or indeed a liberal democracy itself, is not defined, but it is assumed that those terms mean a robust political and governmental system where the government governs with the consent of the people and that the system fulfils Lincoln’s Gettysburg definition of government as being of the people, by the people and for the people.

The consent of the governed was (and still is) an essential element of some of the Enlightenment thinking about the nature of government that was expressed by Jefferson in the Declaration of Independence – We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed.

It is probably within the concept of consent of the governed that social cohesion begins to grip as a supporter of a liberal democracy.

The paper discusses various approaches to defining social cohesion and settles on a wide definition.

The definition depends on

• Sufficient trust and respect between those who are governed and the institutions and individuals they empower to govern them;

• Sufficient trust and respect between all members of a society (which by inference reflects a diverse set of identities, worldviews, values, beliefs, and interests) to foster cooperation for the good of the society as a whole;

• Institutions and structures that promote trust and respect between all members of society; and allowing

• Belonging, inclusion, participation, recognition, and legitimacy to be universally possible.

Therefore the underlying themes of social cohesion are trust, inclusion and respect, which result in recognition and thus the legitimacy of the governmental system.

This definition is based upon two groups of criteria – the elements of socially cohesive behaviour and a high level of conditions necessary for a socially cohesive society.

The paper then develops matters that should be considered in halting the perceived erosion of social cohesion.

However in its analysis of the decline of social cohesion two elements become clear. The first is that implicit within social cohesion is an assumption that a single world view or set of values is the ideal. To challenge the established view is to undermine social cohesion and the ordered society. To offer an alternative or contrary view is seen not as dissent but as misinformation or disinformation.

To offer this approach to an argument is to employ a form of “veto statement”, but worse still it suggests that there is only one correct view which may be described as a “truth”. Indeed the paper focusses upon the nature of information in what it calls the “post-truth” world.

If by disinformation is meant the dissemination of views that are unsupported by evidence or fact but that are presented as factual material rather than opinion based commentary, then the best counter lies in the market-place of ideas rather than any form of censorship. In this respect there should be greater educational focus upon the ability to analyse and think critically. Sadly this is undergoing considerable deterioration in the current education system, which seems to focus upon revisionism and anecdote – people’s “stories” – rather than objective realities.

There must be cause for concern if the means of dissemination (social media platforms) are seen as the problem because, absent content shifting algorithms, the problem lies within those who post content.[2]

If one were looking for threats to social cohesion, perhaps the problem really lies in the way in which growing centralisation in both national and local government is depriving citizens of the opportunity to discover their own solutions.

A further element that undermines the nature of objective truth is the current tendency to focus upon anecdotal evidence rather than a proper empirical study. We reap what we sow when our analysis of factual information is based upon anecdotes and perceptions of reality rather than an empirical analysis of the evidence. We seem to be more concerned with how we “feel” about things rather than what we think of them – thinking being a rational process that involves a level of analysis.

Technology and Social Cohesion

If there is one clear theme that comes through from the paper it is the concern at the influence of technology, and especially social media, as a disruptive element – or threat – to social cohesion. This is described as “affective polarisation”: the decline in objective assessment in the liberal democratic system based on citizens choosing between parties that reflect different ideologies, values and worldviews. Objective assessment is replaced by emotion – anger, fear, and hatred of others have emerged in the public square. Of course, this decline in objective assessment can be laid at the feet of the education system, to which reference has been made.

“Affective polarization” is fuelled by the polarising effects of media, technology, and “misinformation”. The paper suggests that a major challenge to social cohesion is the rapid emergence of the relatively ungoverned virtual world. On one hand, the internet has empowered some groups by enhancing communication and knowledge access. On the other, it has provided opportunities to cultivate and disseminate misinformation and disinformation, and to increase polarisation. Freedom of expression has always been accompanied with a certain level of chaos and background noise.

The arrival of powerful and effective ways of anonymously transmitting ad hominem attacks, the paper suggests, has undermined the traditional institutions on which all societies rely to sustain cooperation and respect. The emergence of the Internet of Things, virtual reality, and the metaverse, along with the development of new economies and networks enabled by cryptocurrencies, is rapidly altering the constraints which helped glue societies together.

To blame the Internet – the backbone – is incorrect. This seems to demonstrate a misunderstanding of what “the Internet” is. The Internet is a transport system for data. The Internet is not the problem. Permissive innovation – the ability to bolt platforms on to the Internet without going through a series of red tape or bureaucratic approvals – has enabled the development of the various platforms that allow users to communicate. Thus it is not the Internet but the platforms that are bolted on which form the agency for a certain type of human behaviour.

This fundamental misstatement of the nature of the Internet is something I would not have expected from a paper of this pedigree. I imagine that the rather glib response by the authors would be that they are using the vernacular understanding of the “Internet” but in a paper that condemns “misinformation” with a high level of vehemence I would have thought that more care would have been applied to accuracy of definition.

A problem is seen with the emergence of virtual and manipulated realities in so-called metaverses. The internet has enormously increased access to information, and in that sense can be seen as democratising. However, the information is of variable reliability, and exposure to “misinformation” and “disinformation” is greatly enhanced by millions of users being exposed to both unintentional misunderstanding (often through ignorance) and deliberate misrepresentation by bad actors (including agents of foreign states). Internet based platforms are also empowering in that they allow people to engage in activities of social affirmation online, although I would characterize the role of these platforms as agencies of a certain level of communication.

The gathering into online groups was anticipated by Michael Froomkin who put forward the proposition of Regulatory Arbitrage – that users would migrate to elements that favoured their point of view or perspective. This theory was more related to the types of rule sets that might apply to Internet users and was a matter of jurisdiction although with the rise in social media it seems to be more a matter of congregating with likeminded users.

This should not be seen as unusual. People have long sought out those whose views or beliefs are similar. Gathering in clubs or other organisations has been a feature of human social existence for some centuries. The communicative properties of Internet based platforms enhance this desire and its fulfilment. The problem, therefore, is not one of technology but of human behaviour.

Concern is expressed at the way in which “disinformation” and “misinformation” are disseminated via Internet platforms. The phenomenon of mis/disinformation is recognized as one that has been present for some time. It is not new. But social media, the internet, and algorithm-targeted messaging have taken intentional disinformation to a new level. Thus technology and social media platforms fulfil an agency function rather than a causative one.

Throughout the paper disinformation and misinformation are used without being defined. The issue that I have is that dissent or the expression of a point of view that is contrary to that of the majority may be characterized or demonized as mis/disinformation. In this way dissent is sidelined or even worse deplatformed or “cancelled”.

Intentional disinformation is referred to and by that I gather that what is being propagated are lies or information that has no factual basis and that the originator disseminates with the intention of misleading. 

Once again this type of behaviour has been with us for some time but the scope of these lies spreads from fraudulent scams to challenges to objectively ascertainable facts.

There is a suggestion that the development of new technologies alters the constraints that glue societies together.  I think that there is once again a failure to recognize that new technologies and especially information and communication technologies may alter behaviours and attitudes – acting as agents for change in values. This is an example of the aphorism attributed to Marshall McLuhan “we shape our tools and thereafter our tools shape us”.

Furthermore the “problems” of Internet-based platforms as is so often the case focusses upon the content of communication rather than the means of communication – the medium is the message; another of McLuhan’s aphorisms. In some respects the horse bolted long ago and it is only now that we are beginning to understand that and come to terms with the new reality that besets us.

To condemn new communications technologies as the cause of the problem and to call for some form of restructuring or regulation is in some respects a mournful cry for a time that has been irretrievably lost and represents a form of conservatism that would anchor us in a societal position where any sort of change is decried. In some respects the calls for reversal of climate change are an attempt to preserve a way of living that may no longer be possible and ironically (because the Greens and those who favour positive steps to reverse climate change consider themselves Progressives) represents another manifestation of what could be called a “yearning conservatism”.

The two examples may be said to suggest a form of technological determinism and in some respects that is acknowledged. In the area of climate change, although the effects of human activity have seen an increase in the pace of climate change, the reality of climate change has been with us since before records were kept and is reflected in the geological record as well as the more modern written records.

Climate change is and always has been inevitable and in the past the way that humans have dealt with it is not to reverse reality but adapt to the new circumstances. This may mean that we are no longer able to sustain certain activities to which we have become accustomed. It may mean the abandonment of the ocean view for a form of shelter in higher places. These are the realities for which we should be planning rather than arguing about whether there should be cycle lanes over the harbour bridge or banning fossil fuelled motor vehicles. Such would be a token gesture.

I advance climate change as an example of certain inevitabilities that underlie some aspects of technological change coupled to a degree with aspects of technological determinism. Eisenstein described the printing press as an agent of change and by so doing avoided the deterministic label. But in some respects she was correct. The press was an agency of a change in attitudes. It enabled changes in communications associated behaviours and by so doing enabled changes in a number of areas of human activity. There can be no doubt that the disseminatory powers of print enabled the swift transmission of Luther’s arguments that formed the basis of the Reformation.

Were Luther’s theses a form of sixteenth century misinformation? Is “misinformation” the Twenty-first Century characterization of “heresy”? To the Catholic Church Luther’s theses certainly were heretical. And the new information technology enabled the spread of the ideas that underpinned the theses. The response in many cases was to break up the printing presses to stop the spread of this “heresy”. The Catholic Church professed concern for the souls of the believers but there was no doubt that its response to Luther was as much in the interests of maintaining its position of power.

Thus one wonders whether or not – despite the focus on the importance of “liberal democracy” – social cohesion is just another form of power play – a desire by those with a vested interest in established institutions to maintain those institutions in the interests of maintaining conformity with existing power structures and (im)balances. Thus liberal democracy – as a trope – occupies the position of the soul in a modern secular society – something intangible, lacking coherence and ephemeral that has its own particular value.

As if to support this argument the paper states (P. 3)

(G)overnments need to place the opportunities and challenges of the digital future more centrally and to consider them through the lens of sustaining or undermining social cohesion. Not doing so may threaten democracy itself, seeing it replaced by a more autocratic form of governance. Societies could fracture in ways that undermine their very essence and identity.

This suggests that the only alternative is autocracy, yet in many respects we are already living in an autocratic system – in what could be called “The Covid Autocracy” or “The Covid Despotism”.

To sum up this aspect of the discussion – technology in and of itself is not the problem and to propose to “regulate the technology” is not a solution. Nor does the answer lie in reining in the social media companies. The concerns seem to be that they are allowing the dissemination of contrarian content some of which can be dangerous. It seems to me that despite the difficulty of assessing the huge volumes of data that flow through their servers, some social media providers attempt to adopt a responsible attitude to truly harmful content. Much of the problem lies in the assessment of that content. For some “hate speech” is speech that they hate to hear. For others misinformation is a twisting or reinterpretation of existing facts. For others disinformation may be, and often is, downright lies. The responsibility lies with the individual to resolve the problem, and not for some patronising and paternalistic State to proclaim a single and all-embracing truth.

Social Cohesion and Conformity

Underlying the discussion of social cohesion is the theme of conformity. Citizens should conform to understood precepts of social order. Conformity is associated with an element of collectivism which seems to be gaining traction in the Twenty-first Century milieu. The problem with the underlying elements of social cohesion that are discussed in the paper is that individualism is subsumed and individual aspiration is sacrificed on the altar of social cohesion.

Belonging, participation, inclusion, recognition and legitimacy are all seen as elements of social cohesion. However, the focus upon social cohesion as an element supporting a liberal democracy seems to depend upon a collectivist approach, especially in regard to the communication of information and the spread of views, opinions and interpretations of facts that may be present within a community.

The word freedom has become somewhat devalued of late, sneered at and associated with contrarian or anti-vax sentiments. Yet it is an essential aspect of a liberal democracy. It is for that reason that I point to the importance of the freedoms guaranteed by the New Zealand Bill of Rights Act as well as the freedom to think as we please, the freedom to make our own decisions and to act on them. It is in this respect that I have concerns about social cohesion as it is developed in the paper.

The focus upon contrary points of view disseminated over social media strongly suggests a collectivist conformist approach that is inimical to concepts of individual liberty within a liberal democracy. It is that individualism that sustains innovation and diversity of points of view, that accepts differing manifestations of behaviour as long as there is compliance with the bottom line allowed by the law.

I suggest that the law sets the boundaries for social cohesion. Moral suasion or some ill-defined standard suggests some other way apart from law in which society modifies and monitors behaviour, and disapproves or condemns that which is outside what may be described as “norms accepted by the majority”.

This form of moral coercion masquerading as social cohesion has little to do with life in a free and liberal democracy, and indeed if this is the goal behind the paper – and I earnestly hope that it is not – then the conceptualisation of social cohesion as operating in this way is to be resisted.

Maintaining Social Cohesion

I suggested above that the law sets the boundaries for social cohesion. The paper ignores the fact that there is already in place a means of maintaining a level of social cohesion that is consistent with a liberal democracy and that is the Rule of Law.

The paper suggests that living in an organised society implies a contract of reciprocal behaviour, or a social contract, between citizens and the society’s institutions. We cannot operate outside those bounds and remain functioning and free members of that society. No one, it points out, has absolute free will.

There can be tension around what the bounds are, as we have seen in debates over constraints imposed during the Covid-19 pandemic, and as are more generally reflected in differing preferences across various ideologies and value sets.

In many respects this tension that develops is a good thing because it demonstrates that within the community there are a variety of different points of view about a proposed course of action. If social cohesion in the form of a collective point of view proposes that there should not be a variety of different points of view, then liberal democracy is in difficulty and social cohesion cannot be said to support it – rather it erodes a fundamental aspect of a liberal democracy which involves the right and the opportunity to disagree.

What the paper ignores, or perhaps sidesteps, is the importance of the Rule of Law as an element of the social contract. There seems to be little discussion about the effect of law in fixing the boundaries for acceptable social behaviour.

Without the Rule of Law what is being proposed is some form of “understood” code of behaviour based on the concept of a resilient society that has its foundation in social cohesiveness. I would have thought that a clearly communicated and understood Rule system would establish the metes and bounds of acceptable behaviour.

The New Zealand Bill of Rights Act 1990 clearly defines the rights of individuals vis-à-vis the State. If I were looking for a recipe for social cohesion NZBORA would be the prime ingredient, despite the various exceptions and riders that the legislation contains. What it does contain are clear statements about the freedom of expression, freedom of association, freedom of peaceful assembly, freedom of movement, freedom of thought, conscience, religion or belief, freedom from discrimination, the right not to be deprived of life, nor subjected to torture, nor subjected to medical experimentation and the right to refuse medical treatment. These, it seems to me, must be the essential ingredients of a liberal democracy.

Furthermore, there must be a clear understanding that everything is permitted unless it is prohibited, thus constraining the power of the State and allowing individual citizens a high level of liberty of conduct under the Rule of Law which focusses on the maintenance of internal stability. Otherwise the formula “everything is prohibited unless it is permitted” sows the seeds of an autocratic society based on a top down power structure.

The rather vague focus upon a collective social cohesion contains within it some serious difficulties, and the lack of certainty about the scope of social cohesion, absent a consideration of an underpinning in existing legal rule sets, suggests a possible moral or suasive approach to behaviour that is unclear and uncertain – factors that are inimical to living in a liberal democracy.

Conclusion

I suggest that the concerns that have been expressed in the paper are overrated. Disagreement and dissent are fundamental aspects of a liberal democracy. Without them essential elements of a liberal democracy cannot exist. To demonise an alternative view with terms like “misinformation” and “disinformation” without addressing the very nature or content of what is proposed is to engage in another form of veto statement or the cancel culture that is used to silence an opposing view. To justify these aspects of censorship as an aspect of social cohesion – although to be fair the report writers do allow for dissent as long as it resolves in an acceptable solution – is to do violence to the freedom of expression as a vital aspect of a liberal democracy.

Social cohesion in the end is another word for conformity – not the conformity with a recognised bottom line for human behaviour that justifies the intervention of the law, but some form of moral conformity that does not allow for a contending view. And that is a form of totalitarianism and thought control that has no place in a liberal democracy.


[1] Gluckman P, Bardsley A, Spoonley, P, Royal C, Simon-Kumar N and Chen A University of Auckland Centre for Informed Futures December 2021 https://informedfutures.org/social-cohesion/ (Last accessed 22 December 2021)

[2] See “The Fault, dear Brutus, lies not in social media, but in ourselves” https://theitcountreyjustice.wordpress.com/2021/08/27/the-fault-dear-brutus-is-not-in-social-media-but-in-ourselves/ (Last accessed 27 December 2021)

De-Platforming Dissent

History shows us that lawyers have often been at the forefront of dissent. And on some occasions they have been punished or de-platformed for their dissent. This may seem unusual, for the law is perceived as a conservative profession.

Three historical examples came to mind. There are plenty of others.

Marcus Tullius Cicero (106 – 43 BC), the Roman lawyer and Senator, spoke out against Gaius Julius Caesar and, although he did not join the assassins, was no great supporter of those who came after. His Philippics, aimed mainly at Marcus Antonius, earned him the hatred of their subject and his wife Fulvia. When, as the Second Triumvirate, Antonius, Gaius Octavius and Marcus Aemilius Lepidus sought to eliminate opposition by adopting the proscription – the tool of the former dictator Lucius Cornelius Sulla – which put a price on the heads (literally) of opponents, Cicero was one of those named. He was executed by soldiers operating on behalf of the Triumvirs in 43 BC after having been intercepted during an attempted flight from the Italian peninsula. His severed hands and head were then, as a final revenge of Marcus Antonius, displayed on the Rostra. A very final form of de-platforming, ending up with exposure on a platform.

John Stubbs (1544 – 1589) was educated at Trinity College, Cambridge and read law at Lincoln’s Inn. He was a committed Puritan and was opposed to the negotiations for a marriage between Queen Elizabeth I and the Duke of Alençon, who was a Catholic. In 1579 he expressed his opinions in a pamphlet entitled “The Discovery of a Gaping Gulf whereunto England is like to be swallowed by another French Marriage, if the Lord forbid not the banns, by letting her Majesty see the sin and punishment thereof.”

Stubbs graphically characterised the proposed wedding as a “contrary coupling,” “an immoral union, an uneven yoking of the clean ox to the unclean ass, a thing forbidden in the law” as laid down by St. Paul, a “more foul and more gross” union that would draw the wrath of God on England and leave the English “pressed down with the heavy loins of a worse people and beaten as with scorpions by a more vile nation.”

Elizabeth and her Court were outraged. The publication was denounced by a specific proclamation of 27 September 1579.  The proclamation claimed that the pamphlet stirred up rebellion “on the part of the Queen’s subjects, to fear for their own utter ruin and change of government”.

Circulation of the pamphlet was prohibited and copies were burned at the headquarters of the printing trade – Stationers Hall. Stubbs and the publisher were tried and found guilty of seditious writing.

Stubbs’ de-platforming was especially horrific. He and the publisher were sentenced to have their right hands cut off by means of a cleaver driven through the wrist by a mallet. Initially Queen Elizabeth had favoured the death penalty but was persuaded to opt for the lesser sentence. The sentence was carried out and Stubbs’ right hand was cut off on 3 November 1579. At the time he protested his loyalty to the Crown, and immediately before the public dismemberment delivered a shocking pun: “Pray for me now my calamity is at hand.” His right hand having been cut off, he removed his hat with his left hand and cried “God Save the Queen!” before fainting. His fellow conspirator, the publisher William Page, lifted up his bleeding hand and said: “I left there a true Englishman’s hand.”

Stubbs was subsequently imprisoned for eighteen months. On being released in 1581 he continued to write, publishing, among other pamphlets, a reply to Cardinal Allen’s “Defence of the English Catholics”. Despite his punishment, he remained a loyal subject of Queen Elizabeth and later served in the House of Commons as MP for Great Yarmouth in the English Parliament of 1589.

William Prynne (1600 – 1669) was a lawyer, author and polemicist and a Puritan opponent of the Anglican establishment. He was a graduate of Oriel College, Oxford and was called to the Bar at Lincoln’s Inn in 1628.

Prynne did not set out to be popular. Many of his views, acerbically expressed, were very unpopular. He opposed religious feast days and their associated revelries. He thought men should not wear their hair long and opposed the custom of drinking to one’s health. In 1632 he had printed his Histriomastix in which he argued that stage plays were incentives to immorality and were prohibited by scripture. Shortly after the book was released, Charles I’s Queen Henrietta Maria took part in a play at Court. A passage in Histriomastix critical of the character of female actresses was interpreted as an attack on the Queen. Passages attacking the audiences of plays and the judges who failed to suppress them were taken as an attack on the King.

Prynne was taken before Star Chamber. After a year’s imprisonment in the Tower of London, he was sentenced on 17 February 1634 to life imprisonment, a fine of £5,000, expulsion from Lincoln’s Inn, deprival of his Oxford University degree, and amputation of both his ears in the pillory where he was held on 7–10 May 1634.

This did not silence Prynne. From the Tower he continued to write polemics against Archbishop Laud, whom he saw as his main persecutor, the Attorney-General William Noy who had prosecuted him before Star Chamber as well as other attacks on leading Anglican clergymen.

Once again he was brought before Star Chamber and on 14 June 1637 Prynne was sentenced once more to a fine of £5,000, to imprisonment for life, and to lose the rest of his ears. At the proposal of Chief Justice John Finch he was also to be branded on the cheeks with the letters S. L., signifying ‘seditious libeller’. Prynne was pilloried on 30 June and was handled brutally by the executioner. As he returned to his prison he composed a couple of Latin verses explaining the ‘S. L.’ with which he was branded to mean ‘stigmata laudis’ (“sign of praise”, or “sign of Laud”). His subsequent imprisonment was harsh. He was allowed neither pens nor ink, had few books and was removed to remote prisons in Wales and Jersey.

Prynne was released by the Long Parliament in 1640. His sentences were declared illegal. His ears could not be restored to him, although his degree was, along with his membership of Lincoln’s Inn.

It was not long before Prynne found himself in trouble with the supporters of Parliament and in 1647 he wrote a number of pamphlets against the army, and championed the cause of the eleven presbyterian leaders whom the army impeached.

In November 1648 Prynne was elected a Member of Parliament. As soon as he took his seat, he showed his opposition to the army. He urged the Commons to declare the army rebels, and argued that concessions made by Charles in the recent treaty were a satisfactory basis for a peace.

Two days later Pride’s Purge took place. Prynne was arrested by Colonel Thomas Pride and Sir Hardress Waller, and kept prisoner first at an eating-house (called Hell), and then at the Swan and King’s Head inns in the Strand.

Released from custody some time in January 1649, Prynne retired to Swainswick, and began a paper war against the new government. He became a thorn in Cromwell’s side.

He wrote three pamphlets against the engagement to be faithful to the Commonwealth, and proved that neither in conscience, law, nor prudence was he bound to pay the taxes which it imposed.

The government retaliated by imprisoning him for nearly three years without a trial. On 30 June 1650 he was arrested and confined, and was finally offered his liberty on giving security to the amount of £1,000 that he would henceforward do nothing against the government; but, refusing to make any promise, he was released unconditionally on 18 February 1653.

Prynne continued his pamphleteering and supported the restoration of Charles II. He was rewarded with public office, once again elected to Parliament, and ended up as Keeper of Records in the Tower of London before his death in 1669.

Prynne was a gadfly who angered many who suffered at the tip of his pen. He is best known for his horrifying treatment in the 1630’s for publishing and expressing what were at the time unpopular views.

These three examples came to mind as I reflected on what appears to be a growing crackdown on dissent around the globe. De-platforming and cancelling seem to be common. Both are forms of silencing dissent.

Lawyers, as I said at the outset, have often headed dissenting movements. It was the dedication of lawyers who brought proceedings against the Rugby Union that halted an All Blacks tour of South Africa in 1985 – not a popular move in the temper of those times. It is commemorated in a 2018 article by Sam Bookman which may be found on the New Zealand Law Society website. And whilst I am on South Africa, Nelson Mandela was a lawyer and practised in Johannesburg before turning to anti-apartheid politics, for which he was seriously de-platformed and, like Stubbs, was charged with sedition.

I must say I find it interesting that the tide of history has not turned and that attempts at de-platforming lawyers who express dissenting views are still occurring.

The Content Regulatory System Review – An Overview

Lockdown has its benefits. For some time I have been asked whether or not I would contemplate a 5th edition of “internet.law.nz – selected issues.” After 4 editions, including a revised 4th edition, my inclination had been that I had written enough on the subject, but a review of the 4th edition together with a review of what I had written in other fora persuaded me that a 5th edition might be a possibility. Lockdown has given me the perfect opportunity to research and write in the comparative peace and solitude that accompanies Alert Level 4.

The approach that I propose will be different from what has gone before, although much of the material in earlier editions will be present. But the focus and the themes that I want to examine differ. I am interested in the regulatory structures that are being applied to the online environment and in particular I am interested in the area of content regulation. This involves a number of areas of law, not the least of which is media law and there is quite an overlap between the fields of media law and what could loosely be termed cyberlaw.

What I am trying to do is examine the law as it has developed, as it is presently applicable, and the shape it is likely to take in the future. In this last objective I am often assisted by proposals that governments have put forward for discussion, or proposed legislation that is before the House.

In this piece I consider a review of content regulation. The proposal, which was announced on 8 June 2021, is extremely broad in scope and is intended to cover content regulation proposals and mechanisms in ALL media – an ambitious objective. What follows are my initial thoughts. I welcome, as always, feedback or comments in the hope that the finished product will be a vast improvement on what is presently before you.

The Proposals

A comprehensive review of content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. 

The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

Content is described as any communicated material (for example video, audio, images and text) that is publicly available, regardless of how it is communicated.

The need for the review arises from a recognition of media convergence. The review outline states that the ongoing evolution of digital media has resulted in significant and growing potential for New Zealanders to be exposed to harmful content. This was made evident by the livestreaming and subsequent uploading of the Christchurch terror attack video.

Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.

The increase in the potential for New Zealanders to be exposed to harmful content is compounded by the complexity of the regulatory system. Different rules apply for content hosted across media channels. This increases difficulty for New Zealanders when deciding what content is appropriate for them and their children and creates confusion on where to report harmful content. 

There is also an uneven playing field for media providers as some types of media are subject to complicated regulatory requirements and some to no regulations at all.

The introduction to the review notes that New Zealand’s current content regulatory system is made up of the Films, Videos, and Publications Classification Act 1993, the Broadcasting Act 1989 and voluntary self-regulation (including the New Zealand Media Council and Advertising Standards Authority). The Office of Film and Literature Classification and the Broadcasting Standards Authority are statutory regulators under their respective regimes. 

New Zealand’s content regulatory system seeks to prevent harm from exposure to damaging or illegal content. It does this through a combination of classifications and ratings to provide consumer information, and standards to reflect community values. These tools are designed to prevent harm from people viewing unwanted or unsuitable content, while protecting freedom of expression.

What is proposed is a broad, harm minimisation-focused review of New Zealand’s media content regulatory system which will contribute to the Government’s priority of supporting a socially cohesive New Zealand, in which all people feel safe, have equal access to opportunities and have their human rights protected, including the rights to freedom from discrimination and freedom of expression. 

The objective of social cohesion was one of the strong points made by the Royal Commission on the 15 March 2019 tragedy in Christchurch.

The review recognises that a broad review of the media content regulatory system has been considered by Ministers since 2008 but has never been undertaken. Instead piecemeal amendments to different frameworks within the system have been made to address discrete problems and gaps.

The problems posed by the Digital Paradigm and media convergence, coupled with the democratisation of media access has, in the view expressed in the briefing paper resulted in significant and growing potential for New Zealanders to be exposed to harmful media content. Our existing regulatory frameworks are based around the media channel or format by which content is made available and do not cover many digital media channels. This model does not reflect a contemporary approach where the same content is disseminated across many channels simultaneously. As a result, it provides protection for a decreasing proportion of media content that New Zealanders experience. This means that New Zealanders are now more easily and frequently exposed to content they might otherwise choose to avoid, including content that may pose harm to themselves, others, and society at large.

What is proposed is a harm-minimisation focused review of content regulation. This review will aim to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered. The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press. The threshold for justifying limitations on freedom of expression will remain appropriately high.

Given the emphasis on social cohesion it is not unexpected that the Review is part of the Government’s response to the March 2019 Christchurch terrorist attack, including the Christchurch Call and responding to the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain.

It is noted that in addition to the formal structures under the Films, Videos, and Publications Classification Act and the Broadcasting Act, and the voluntary self-regulatory structures such as the Media Council and the Advertising Standards Authority, there are the provisions of the Harmful Digital Communications Act and the Unsolicited Electronic Messages Act. These structures, it is suggested, are unable to respond to the challenges coming from contemporary digital media content, for example social media. The internet has decentralised the production and dissemination of media content, and a significant proportion of that content is not captured by the existing regulatory system.

Examples of the harmful media content affecting New Zealanders are:

  • adult content that children can access, for example online pornography, explicit language, violent and sexually explicit content
  • violent extremist content, including material showing or promoting terrorism
  • child sexual exploitation material
  • disclosure of personal information that threatens someone’s privacy, promotion of self-harm
  • mis/disinformation
  • unwanted digital communication
  • racism and other discriminatory content
  • hate speech

What is proposed is a harm-minimisation focused review of content regulation, with the aim of creating a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of all media content. The regulatory framework will balance the need to reduce harm with protecting democratic freedoms, including freedom of expression and freedom of the press. The framework will allocate responsibilities between individuals, media content providers, and Government for reducing harm to individuals, society and institutions from interacting with media. The framework will be platform-neutral in its principles and objectives; however, it will need to enable different approaches to reaching these objectives, spanning Government, co-regulatory and self-regulatory approaches. It will also include a range of regulatory and non-regulatory responses.

The following principles are proposed to guide the review:

a. Responsibilities to ensure a safe and inclusive media content environment should be allocated between individuals, media content service providers (analogue, digital and online providers), and Government;

• Individuals should be empowered to keep themselves safe from harm when interacting with media content;

• Media content service providers should have responsibilities for minimising harms arising from their services;

• Government responses to protect individuals should be considered appropriate where the exercise of individual or corporate responsibility cannot be sufficient. For example:

• Where there is insufficient information available to consumers about the risk of harm;

• Where individuals are unable to control exposure to potentially harmful media content;

• Where there is an unacceptable risk of harm because of the nature of the media content and/or the circumstances of the interaction (e.g. children being harmed by media content interactions);

b. Interventions should be reasonable and able to be demonstrably justified in a free and democratic society. This includes:

  • Freedom of expression should be constrained only where, and to the extent, necessary to avoid greater harm to society
  • The freedom of the press should be protected
  • The impacts of regulations and compliance measures should be proportionate to the risk of harm;

c. Interventions should be adaptive and responsive to:

• Changes in technology and media;

• Emerging harms, and changes to the scale and severity of existing harms;

• Future changes in societal values and expectations;

d. Interventions should be appropriate to the social and cultural needs of all New Zealanders and, in particular, should be consistent with:

• Government obligations flowing from te Tiriti o Waitangi;

• Recognition of and respect for te ao Māori and tikanga; and

e. Interventions should be designed to maximise opportunities for international coordination and cooperation.

It will be noted that the proposed review and the principles guiding it are wide-ranging. It seems that the objective may be the establishment of a single content regulatory system that will allow for individual responsibility in accessing content and media responsibility for ensuring a minimisation of harm but with a level of State intervention where the steps by individuals or media providers may be insufficient. The guiding principle seems to be that of harm.

At the same time there is a recognition of the democratic values of freedom of expression and freedom of the press. The wording of section 5 of the New Zealand Bill of Rights Act is employed – that interventions should be reasonable and demonstrably justified in a free and democratic society and that responses should be proportionate to the level of harm.

It is interesting to note that the proposed interventions should be flexible and able to adapt to changes in technology and media, the nature of harm and any future changes in societal values and expectations.

Commentary

In many respects the proposals in this outline seem to be those of an overly protective State, developing broad concepts of harm and “safety” as criteria for interference with robust and often confronting expression. It is quite clear that the existing law is sufficient to address concerns about expression such as threats of physical harm. However, the concept of harm beyond that is rather elusive. The problem was addressed in the Harmful Digital Communications Act 2015, which defines harm as “serious emotional distress”. But a broader scope seems to be applied to harm in the context of this review, exemplified by the concept of social cohesion. In addition, some of the categories of content identified must give rise to concern and may well create a tension between freedom of expression on the one hand and elements of social cohesion on the other. One example is that of misinformation or disinformation, which seems to suggest that there is but one arbiter of the accuracy of content, leaving little room for balanced discussion or opposing views. That arbiter could describe any opposing view as misinformation and thereby demonise, criminalise and ban it on the basis that opposition to the “party line” has an impact upon social cohesion.

A matter of concern for media law specialists as this review progresses must be the cumulative impact that content regulation initiatives may have on freedom of expression. I cite as examples the proposals to address so-called “hate speech” and the Chief Censor’s report “The Edge of the Infodemic: Challenging Misinformation in Aotearoa.” These proposals, if enacted, will give legislative fiat to a biased form of expression without allowing for a contrary view, and demonstrate a concerning level of misunderstanding about the nature of freedom of expression (including the imparting and receiving of ideas) in a free and democratic society.

As matters stand, the content regulatory systems in New Zealand discussed above have some common features.

  • There is an established set of principles and guidelines that govern the assessment of content.
  • There is a complaints procedure that – as far as media organisations are concerned – involves an approach to the media organisation prior to making a complaint to the regulatory body
  • There is a clear recognition of the importance of the freedom of expression and the role of a free press in a democratic society
  • In respect of censorship, the concept of “objectionable” is appropriately limiting, given first that material may be banned or restricted and second that criminal liability may arise from possession or distribution of objectionable material.
  • Guiding principles are based primarily upon the public interest. The Content Review’s focus on social cohesion is more than a mere re-expression of the public interest concept.

One thing is abundantly clear. The difficulty that regulatory systems have at the moment surrounds continuing technological innovation. To some extent the New Zealand Media Council recognises that and has adapted accordingly. Otherwise there is little wrong with the processes that are in place – at least in principle. If complaints procedures are seen to be unwieldy they can be simplified. The public interest has served as a good yardstick up until now. It has been well-considered, defined and applied. It would be unfortunate to muddy the media standards and public discourse with a standard based on social cohesiveness, whatever that may be. Fundamentally the existing regulatory structures achieve the necessary balance between freedom of expression on the one hand and the protection of the public from objectionable content on the other. Any greater interference than there is at present would be a retrograde step.

What is Truth? Misinformation and the Edge of the Infodemic – A Commentary

“And what is ‘truth’? Is truth unchanging law? We both have truths. Are mine the same as yours?”

Jesus Christ Superstar

I have given some thought to releasing this paper. The reason for my hesitation is that it could be misinterpreted as a support piece for misinformation. That is not its purpose. My primary concern arises from what I perceive as a shift towards a State-based arbiter of truth and, as a corollary, towards treating any perspective or opinion that does not conform to that “truth” as misinformation. In my view the issue of misinformation is a more nuanced one, and this paper argues that the solution to dealing with misinformation should be in the hands of individuals, who can make their own evaluations of the validity or otherwise of pieces of information before acting upon them. Some of the suggestions that have been made in the misinformation paper under discussion are extremely reasonable and sensible. What I am concerned about is the intrusion of the State into the area of belief and points of view. Freedom of thought (or conscience) has long been a cornerstone of liberal democracy.

Introduction

Censorship is a controversial issue in a modern democratic and liberal society, although it has taken place in one form or another over the centuries. This has included art censorship from the strategically placed drapes on the magnificent Michelangelo frescoes on the ceiling of the Sistine Chapel to the controversy surrounding the Mapplethorpe photographic exhibition in New Zealand, film censorship from All Quiet on the Western Front[1] to Baise-Moi[2] and book censorship such as James Joyce’s masterpiece Ulysses[3] and more recently in New Zealand with Ted Dawe’s Into the River.[4]

Censorship challenges freedom of expression by imposing minimum standards of socially acceptable speech on the contemporary community. Under s 14 of the New Zealand Bill of Rights Act 1990 (Bill of Rights) everyone has the right to freedom of expression; a right as “wide as human thought and imagination”.[5] Censorship acts as an abrogation of that right, so how are freedom of expression under the Bill of Rights and censorship under the Films, Videos, and Publications Classification Act to be accommodated? Some guidance is available from the Court of Appeal in the case of Moonen v Film and Literature Board of Review[6].

Moonen held  that there is  a responsibility on the Classification Office and the Board of Review when carrying out their work to explain how a publication falls into the category of publications that Parliament has deemed objectionable. The Classification Office and the Board of Review must also demonstrate why classifying certain publications as objectionable is a demonstrably justified limitation on freedom of speech. Generally the Moonen approach is followed.

New Media Issues

But lately the Chief Censor, Mr. David Shanks, has been calling for a widening of his brief. At an Otago University conference about ‘Social Media and Democracy’ in March 2021, Mr. Shanks told the conference the way we regulate media is not fit for the future.

“We can be better than this. I think there’s some very obvious moves that we can do here to make the current regulatory system and framework more coherent for a digital environment,” he said.[7]

The “Misinformation” Study

As part of an overall review of regulatory structures surrounding harmful information dissemination, the Government released a discussion paper on hate speech and at the same time the Chief Censor released a paper entitled “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”, which is in essence a survey of how concerned citizens are about misinformation. The internet and social media are identified as key sources – while experts and government are trusted more than news media.

The Chief Censor says it shows the need for urgent action. But the question must be asked – why? Do we need the government or some government agency to be the arbiter of truth? Are we so uncritical that we cannot discern misinformation from empirically based conclusions?

The concerns about new media are not new. Many of the criticisms of the Internet and social media levelled by the Chief Censor have been articulated in the past. Speaking of newspapers, Thomas Jefferson expressed an acidic concern that editors “fill their newspapers with falsehoods, calumnies and audacities”.[8]

What is seen as a problem seems to be a difficulty in accepting that there are as many opinions as there are people. One wonders whether the survey questions properly addressed the issues. The findings of the report are nonetheless concerning. New Zealanders tend to distrust online sources of information.

Only 12 percent had high trust in news and information from internet and social media users – and 83 percent think this group frequently spreads misinformation on purpose.

But 79 percent also said they get news or information from social media and also use it to verify information.

The report found New Zealanders have a relatively high level of trust in news and information from scientists, researchers or experts (78 percent) and government agencies and officials (64 percent).

Six out of 10 respondents reported high trust in New Zealand’s news media – a more favourable result than the responses recorded for overseas news media.

But these findings raise the question I have already posed. Are we talking about facts or are we talking about opinions? Even facts can be “spun” to fulfil a particular purpose and can be interpreted in a number of ways. The facts remain the same. The interpretations may differ. And this is important in a vibrant and developing society. The “truth” for one may not be a “truth” for another.

The concerns that the report advances have been derived from an extensive survey. The findings of the survey lead inexorably to the conclusion that “something must be done” and I would suggest that the “something” involves the control or monitoring of information. And it must be of concern that the self-described, and statutorily designated,[9] censor is driving this.

So what does the report tell us? I state the findings, and my observations in italics follow each one.

First, it is common for New Zealanders to see news and information they think is false or misleading. Opinions differ as to what counts as misinformation, but one topic identified as a source of misinformation surrounds Covid 19. Another concern is that this misinformation is influencing people’s views about things like politics, public health and environmental issues, and many see misinformation as an urgent and serious threat.

What is apparent from this concern is that misinformation is recognised. This would seem to suggest that those who contributed to the survey are still in possession of their reasoning and critical faculties and can distinguish valid information from rubbish. The volume of misinformation may drive a concern, but what does it threaten? This question seems to be unanswered.

But arising from this is another more fundamental issue and one that I have already alluded to – what is misinformation? Is it a skewing of facts – something that politicians are skilled in, although for them it is called “spin” – or is it a statement of opinion? One wonders how many statements of opinion are taken as fact, especially if the reader or listener or viewer agrees with the opinion.

Secondly New Zealanders tend to distrust online sources of information generally, and this is especially true of social media. Many New Zealanders think social media users and corporations often spread false and misleading information intentionally. At the same time, the internet is the most popular source of news and information, while also being a reference point to verify, fact check or confirm this information.

The first point is a valid one. Do not implicitly trust everything that you see online. With a medium like the Internet – and social media platforms – everyone has a voice. Whereas mainstream media can be selective, have verification duties and are subject to rules about balance and to disciplinary processes such as the Broadcasting Standards Authority or the NZ Media Council, social media is not. Thus it follows that statements by individuals on social media platforms should at least be taken with a grain of salt and should be subject to critical scrutiny and verification.

Whether online or offline, most New Zealanders tend to trust information from more traditional sources like government officials, scientists and the New Zealand news media. However, the research shows that people with higher trust in online only sources of information – and who use these sources more often – are more likely to express belief in statements associated with misinformation.

This probably says more about the critical faculties of those who rely on online sources for their information. And this points to a lack of intellectual rigour that goes back to the education system, together with a level of naivete that would suggest that too many people accept anything without question or without careful analysis. It is not the source of the information that is to blame. It is the uncritical stance of the reader that is the problem.

The report then goes on to widen the problem with some rather sweeping generalisations.

Misinformation is widespread and affects everyone. This is true regardless of age, gender, ethnicity or other characteristics.

Subject to defining misinformation (which I discuss below) there is no doubt that all facets of information, true or false, are widespread. Does this affect everyone? If what is meant is “does everyone come into contact with misinformation” there is certainly that potential. But if the meaning of the word “affect” is to influence, I would have some quibble with the suggestion that misinformation influences everyone. Once again this has more to do with the critical analysis of information, but I consider this conclusion to be overly broad.

It’s relatively common for New Zealanders to express belief in at least some ideas that are linked to misinformation – ideas which are not backed by the best available evidence we have.

I would be very interested to see the evidence for this statement, and once again it speaks more to the naivete and lack of critical rigour on the part of the audience. And, of course, even a bad idea may be worth consideration if only to analyse it and discard it. The problem, I think, lies in the use of the word “belief”, which suggests something other than an evidence-based or empirical conclusion.

When people rely on misinformation to make important decisions it can have a harmful impact on the health and safety of our communities. It can also affect us on a personal level, contributing to anxiety, anger, and mistrust.

Agreed. But the issue is the reliance that is placed on misinformation and once again – at risk of repeating myself ad nauseam – much depends upon the critical faculties and analysis employed by the audience. If people choose to make important decisions without properly analysing the source of the evidence supporting those decisions then that is a matter for them.

People often take action themselves in response to misinformation – such as searching different sources to see if information is accurate, looking at more established news sources, or talking about it with people they trust.

New Zealanders also see this as a societal problem that requires more action. They have differing views on who should do this and how. Many think government, news media and experts have the biggest role in dealing with the spread of misinformation, but that individual internet users and social media corporations also have an important role.

Many New Zealanders see the Government as the solution to problems. Rather, I consider that responsibility for ascertaining whether content is information or misinformation should lie in the hands of the recipient. I agree that individual internet users and social media users have a role – but it is not for the social media corporations to vet content or carry out some moderating activity over content. I base this comment on the fact that Internet-based information, and indeed the communications paradigm it has introduced, requires us to recognise that paradigm shift and to consider regulatory solutions in light of it.

What is “Misinformation”?

The problem of “misinformation” and the concerns that are expressed in the report depend very much upon the definition of the term. The Report offers some brief definitions. There is a specific rider to the definitions offered which narrows the concept down to something that is potentially harmful. Other definitions are quite a bit wider.

The Report definitions are as follows:

Misinformation: false information that people didn’t create with the intention to hurt others.

Disinformation: false information created with the intention of harming a person, group, or organisation, or even a country.

Mal-information: true information used with ill intent.

The definitions set out are quite specific and share a similar characteristic, namely that the spread of the information (misinformation, disinformation or mal-information) is accompanied by a specific intention to harm or hurt others.[10]

The Report goes on to say

“Misinformation is nothing new, but there are increasing concerns worldwide about the prevalence of misinformation – especially online – and its potential to impact democracy, public health, violent extremism and other matters. We’ve seen how the spread of false and sometimes hostile misinformation and conspiracy theories continue to impact on our whānau and communities during the Covid-19 pandemic, and how extremist talking points and ideology can contribute to real-world violence such as the March 15 attacks in Christchurch.”

Misinformation is defined in the Oxford English dictionary as “false or erroneous information”, and as the report states, the existence of false or erroneous information is nothing new. Falsity implies that the communicator of the information is aware of the falsehood but perpetrates it nonetheless. Erroneous implies error or mistake which lacks the element of wilful deception.

Putting to one side the emotive reference to the March 15 attacks – and there is no evidence that the terrorist was influenced by misinformation – the concern that is expressed is that false, erroneous and sometimes hostile information and conspiracy theories have an impact. As it proceeds the Report seems to lose sight of the qualification that harm must be intended and seems to focus more upon the falsity or error of the information circulated.

Two issues arise from this. The first is that the recipient of information must be critical of the information received and subject it to analysis to determine whether it is “true” or “false”.

The second is that most information disseminated, especially across social media platforms, is opinion or “point of view”, which means that the disseminator is coming from a particular standpoint or is writing with a particular agenda. It would be incorrect for anyone to suggest that the opinion pieces in the New Zealand Herald by columnists such as Simon Wilson, Richard Prebble, Michael Cullen or Mike Hosking are anything else but that. They are interpretations of fact taken from a particular standpoint. It is up to the reader to determine first whether the facts are valid and secondly whether the opinion is therefore valid. Finally, if the answer to both questions is in the affirmative there is nothing to compel the reader to accept the opinion. The reader is free to disagree with it.

An associated issue arises and that is the guarantee of freedom of expression contained in section 14 of the New Zealand Bill of Rights Act 1990. The provisions of section 14 are wide. They refer to the imparting and receiving of information – thus widening the usual understanding of freedom of expression as merely the imparting of information. It is significant too that section 14 does not qualify the word “information”. There is no suggestion that the information must be true or that it cannot be “misinformation.”

Information is that which informs. To inform someone is to impart learning or instruction, to teach or to impart knowledge of some particular fact or occurrence. The traditional meaning of information suggests an element of factual truth and thus misinformation is erroneous or incorrect information. One interpretation of section 14 is to use the traditional meaning of information which suggests an element of fact based truth. A wider interpretation would include material based on mistaken facts. And then, of course, there is the question of opinion which is a view of one person about a certain set of circumstances.

But in the field of information, misinformation, fact and truth there will always be disputes. Some will be trivial. Others will be significant. Some may be wrong-headed. Others may be designed to mislead. Given these varieties of information, what is it proposed that we should do about what is referred to in the report as the “infodemic”?

An Internal Inconsistency?

The Infodemic paper contains the following critical acknowledgement.

Misinformation is not in and of itself illegal – and it would be impractical and counterproductive to make it so. It should not be unlawful to express a view or belief that is wrong, or that is contrary to prevailing evidence and opinion.

There are certain types of misinformation with which the law should be involved, such as information which promotes criminal or terrorist activity and may fall within the existing ambit of the Films, Videos, and Publications Classification Act, the Human Rights Act or the Crimes Act.

These legal restrictions are perfectly legitimate. They are very limited and are justifiable limitations on the right of freedom of expression guaranteed by section 14 of the New Zealand Bill of Rights Act. But misinformation does not fall within their ambit, nor should it as acknowledged by the Report.

This then raises the issue – what is the problem? Is the raison d’être for the paper to identify an issue and sound a warning? Or does it go further? The answer, in my opinion, lies in the latter. Realistically the paper recognises that misinformation will never be eliminated, nor should it be. But in keeping with Mr. Shanks’ concerns expressed in 2019, the real target for stemming the infodemic lies in dealing with the disseminators – and by that I mean not the individuals who spread misinformation but the digital platforms that enable wide dissemination.

Addressing the Problem

I shall outline the proposals advanced by the Infodemic paper but would offer a note of caution. Some of the proposed solutions are based on existing regulatory or content-assessing models. They ignore some of the essential properties of digital systems which make regulation in the Digital Paradigm a completely different exercise from existing regulatory models.

I have discussed the problems of regulation in the Digital Paradigm elsewhere and in some detail[11]. Suffice to say that to engage in any form of content control in the Digital Paradigm is difficult given that the dissemination of content is inextricably entwined with the medium of distribution.

Marshall McLuhan’s aphorism “The Medium is the Message” states the problem, albeit somewhat opaquely. To attempt to control the message one must first understand the medium. This is often overlooked in discussions about regulation in the Digital Paradigm. It is something of an exercise in futility to attempt to apply the models or standards that are applied for what is essentially mainstream media regulation. And to treat online platforms, irrespective of their size and market dominance, in the same way as “analog” or mainstream media platforms ignores the fact that online platforms occupy a paradigmatically different communications space from mainstream media platforms like newspapers, radio and television.

With that cautionary observation I shall consider the proposals in the Infodemic paper.

The report offers five possible avenues for dealing with what it refers to as the Infodemic.

  1. Informing and empowering New Zealanders – this solution is expressed in the report as a means by which misinformation about Covid 19 and vaccinations may be countered. Of course, from a general perspective this is a wider issue than just misinformation and conspiracy theories about the pandemic. Many New Zealanders are concerned about the impact of misinformation across a broad range of topics, including the environment and racial tolerance.

Some of this is based on mistrust of accurate sources of information and it is suggested that steps should be taken to help those who are affected by misinformation and conspiracy theories.

This, of course, is based on the assumption that there is an empirical basis which suggests that alternative views are wrong and should not be believed. And this harks back to the quotation at the beginning of this piece. Are my “truths” the same as yours?

The concern that I have about this proposal is the suggestion that there is but one truth, one “authorised version” to which adherence must be given. It may be easy to prove that a Covid vaccine is effective on the basis of scientific analysis and empirical proof. It may be less easy to prove matters which travel in the realms of faith and belief. And the problem with “authorised versions” is that they become the “approved version”, with the result that other “truths” may become sidelined and dismissed to the point where they become heretical.

  2. Education – this is a solution that I find appealing. Media literacy and critical thinking skills can help us sort fact from fiction and interpret information. These skills can also help build resilience in the community against misinformation.

A central government campaign could reach many people but is unlikely to influence people and communities who already have lower trust in government. And should it come from the government in any event – a government which may have its own political agenda?

Education in schools is also needed to empower and equip our young people to recognise and challenge misinformation. Our education system already aims to provide children and young people with the critical thinking skills necessary to navigate a complex world.

  3. Content Moderation and Industry Responsibility – Recent research suggests that misinformation travels through the internet much more rapidly than accurate information. This is one of the realities of internet based information. In the same way that the printing press enabled the increased dissemination of information so the Internet does this in an enhanced and exponential way.

The algorithms that select and promote posts and information on many social media and digital platforms often select information that is ‘high engagement’ – that is, information that attracts more comments, shares and likes. Misinformation can often be high engagement, as it can easily be more sensational, or generate stronger emotions. These algorithms, it should be observed, are also used by mainstream media who use online platforms and accounts for the “ranking” that reports may have on a news website.
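By way of illustration only, the following sketch shows the kind of engagement-weighted ranking described above. It is a deliberately simplified hypothetical, not the algorithm of any actual platform, and the weightings are assumptions made purely for the purposes of the example.

```python
# A simplified, hypothetical illustration of engagement-based ranking.
# It is not any platform's actual algorithm; real systems weigh many more signals.

from dataclasses import dataclass


@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int


def engagement_score(post: Post) -> float:
    # Shares and comments are weighted more heavily than likes (assumed weights).
    return post.likes + 3 * post.shares + 5 * post.comments


def rank_feed(posts: list[Post]) -> list[Post]:
    # The highest-engagement content is promoted to the top of the feed,
    # regardless of whether it is accurate.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("Measured, well-sourced analysis", likes=120, shares=10, comments=15),
        Post("Sensational, unverified claim", likes=90, shares=60, comments=80),
    ])
    for post in feed:
        print(f"{engagement_score(post):6.0f}  {post.title}")
```

The point of the sketch is simply that a ranking function of this kind is agnostic as to accuracy: sensational material that generates shares and comments will outrank sober material, which is the dynamic the report is concerned with.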

Online platforms, other than those used by mainstream media outlets which may be subject to the New Zealand Media Council, are not generally subject to the same standards around accuracy, fairness and balance that newspapers, broadcast or other news media are.

However, as I have suggested above, it is a mistake to attribute the responsibilities of mainstream media platforms to online platforms. They are paradigmatically different.

The first point is that content that is broadcast or published in mainstream media goes through an editorial process. Content that is posted on social media does not, nor should it be the duty of the provider of the platform to moderate another person’s content that has been posted.

The second point is that content moderation is a difficult process in the digital paradigm, given that social media platforms essentially handle large quantities of data that are only later rendered into some recognisable or comprehensible form. Of course, algorithms can and should be used to trap dangerous content that advocates violent harm or action.
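To illustrate what an automated “trap” of that kind might look like in its simplest form, the following sketch matches uploaded content against a list of digital fingerprints (hashes) of material already identified as objectionable. It is a hypothetical illustration only, not a description of any platform’s actual system; production systems typically use perceptual hashes and shared industry hash lists so that minor edits to an image or video still match.

```python
# Illustrative only: hash-matching uploads against known objectionable material.
import hashlib

# Hypothetical blocklist: SHA-256 hashes of files already classified as objectionable.
# The single entry is the hash of b"hello\n", used purely as a stand-in.
KNOWN_OBJECTIONABLE_HASHES = {
    "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03",
}


def sha256_of(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def is_known_objectionable(upload: bytes) -> bool:
    # Exact-match check against the blocklist; a match would see the upload
    # withheld for human review rather than published.
    return sha256_of(upload) in KNOWN_OBJECTIONABLE_HASHES


if __name__ == "__main__":
    print(is_known_objectionable(b"hello\n"))   # True - the stand-in hash matches
    print(is_known_objectionable(b"harmless"))  # False
```

Matching of this kind only works for content that has already been identified and classified. It says nothing about the much harder problem of assessing new content at scale, which is where the difficulties described above arise.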

It is suggested that there should be engagement with digital platforms in a co-ordinated way along with industry codes of practice which could result in a consistent set of expectations and approaches in New Zealand.

Once again this suggests a “one truth” solution which creates difficulties in a society with a plurality of opinions.

One suggestion is for users to “call out” and report misinformation, but much depends on how this is done. The development of the “cancel culture” regrettably is intolerant of different strands of opinion and I fear that “calling out” is not the way to go. Rather engagement in rational debate and proposing an alternative would allow for the marketplace of ideas to come into play and separate the wheat from the chaff.[12]

  4. Regulation and Policy

Once again the proposal seeks to compare mainstream media with a paradigmatically different information system that is the Internet.

The statement is made as follows:

“While most misinformation is not illegal, much of it would be in breach of industry standards concerning accuracy. Such standards apply to broadcast services (under the Broadcasting Act), print media (under the standards administered by the New Zealand Media Council) and advertising (under the Advertising Standards Authority). Most of the broadcast and industry self-regulatory models were not set up to address the challenges presented by the digital age such as misinformation shared on platforms like Facebook or YouTube.”

Then it is suggested that a consistent regulatory approach across non-digital and digital misinformation alike is needed.

If I understand it correctly what is being suggested is that the regulatory approach applicable to mainstream media, which developed in an entirely different paradigm from digital media, should be applied across the board.

This ignores the fact that most if not all of the content on digital media, and especially social media, is user generated. In fact social media allows everyone who has an internet connection to have a voice. Whether or not any attention is paid to that voice is another matter. But within a democratic society, this opportunity has never before been available. And if one looks, for example, at an autocratic state such as the People’s Republic of China, with its severe restraints on freedom of expression and its extreme regulation of Internet content, the question must be asked – is that the road that we wish to travel?

  5. Research and evaluation – The understanding of what needs to be researched and evaluated is becoming clearer, and this should be an ongoing process. The information environment will continue to rapidly evolve – often in ways no-one can predict. As new evidence emerges, interventions will change as well.

This solution seems to suggest that the reason for research and evaluation is to determine interventions and regulatory responses. This must be something of a concern in light of the comment earlier made that misinformation is not illegal and nor should it be.

Conclusion

There are two major issues that arise from the paper.

There is no doubt that misinformation can be problematical. It is, however, one of the attributes of a society that values diversity of opinions and points of view and that values and celebrates a plurality of beliefs.

Eroding Freedom of Expression?

In some respects it is difficult to discern the target in the misinformation paper. Clearly it has been inspired primarily by the conflicting information that has been swirling around about aspects of the Covid crisis. But there is more, including references to the 15 March 2019 terror attacks, the various issues surrounding the introduction of 5G, QAnon, the polarisation of society in the United States and conspiracy theories.

But there seems to be a deeper issue and that surrounds calls that have been made to regulate the Internet or at least impose some restraints on the activities of social media platforms. Part of the problem with social media platforms is that they allow for a proliferation of a variety of opinions or interpretations of facts which may be unacceptable to many and downright opposed to the beliefs of others.

Governments and politicians, although they are great users of social media platforms, cannot abide a contrary message to their own. In a democracy such as New Zealand it is something with which they must live although there is little hesitation at nibbling away at the edges of expressions of contrary opinions.

Characterising them as “misinformation” is a start down the road of demonisation of these points of view. At the same time, following the 15 March massacre, the Prime Minister of New Zealand instituted the “Christchurch Call” – an attempt to marshal international support for some form of Internet regulation. No laws have been passed as yet and social media organisations, seeing which way the wind is blowing, have made certain concessions. But it is, in the minds of many, still not enough.

In New Zealand a review of media regulatory structures lies behind the “misinformation” study along with the ill-considered and contradictory proposals about “hate speech”. The assault on freedom of expression or contrarianism is not a frontal one – it is subtle and gradual but it is there nonetheless. It is my opinion that the real target of the “misinformation” study is not “misinformation” but rather the expression of contrary points of view – however misguided they might be. And that is a form of censorship and it is therefore not surprising that this move should come from the Chief Censor.

A Democratic Solution

It would be to tread a dangerous path to place the determination of “good information” and “bad information” in the hands of the government or a government organisation. Only the most extreme examples of misinformation which may do demonstrable harm such as objectionable material or terrorist information should be subject to that level of moderation. To add “misinformation” as a general category without precise definition to the sort of material that is objectionable under the Films, Videos and Publications Classification Act would be a retrograde and dangerous step.

There is already a form of content moderation in place, run through the Department of Internal Affairs which makes a filter available to Internet Service Providers to block certain content.[13]

Of the proposals suggested above it will be apparent that I favour as little interference with online platforms as possible. I do not support anything more than minimal interference with content that is not demonstrably harmful, and I am of the view that what people wish to see as a “truth” should be left to the individual to judge for himself or herself.

The problem with “misinformation” has been heightened by the conflicting points of view surrounding the Covid crisis – indeed the paper itself picks up on this by describing the misinformation problem as an “infodemic” – the 2020 US Presidential election and some of the conspiracy theories that have been circulating courtesy of QAnon and the like.

But it is not a problem that warrants government or regulatory interference and indeed it should be noted that the Department of Internal Affairs review of media and online content regulation focusses upon content that is harmful.

Misinformation may misinform but much of it depends upon the reader or listener’s willingness to stand apart and subject the content to critical analysis. The problem, however, is that many people believe what they want to believe and their truths may not be those held by their neighbours.


[1] Chris Watson and Roy Shuker In the Public Good? Censorship in New Zealand (Dunmore Press, Palmerston North, 1998) at 35.

[2] Re Baise-Moi [2005] NZAR 214 (CA).

[3] United States v One Book Called “Ulysses” 5 F Supp 182 (SD NY 1933); United States v One Book Entitled Ulysses by James Joyce (Random House Inc, Claimant) 72 F 2d 705 (2d Cir 1934).

[4] Re Into the River Film and Literature Board of Review, 14 October 2015.

[5] Moonen v Film and Literature Board of Review [2000] 2 NZLR 9 (CA) at [15].

[6] [2002] 2 NZLR 754 (CA)

[7] “Battle Against Online Harm beefs up censor’s power” Media watch 21 March 2021 https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018788055/battle-against-online-harm-beefs-up-censor-s-power

[8] He also stated on another occasion “Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter.”

[9] Films Videos and Publications Classification Act 1993 section 80(1).

[10] In some respects this resembles the types of actionable digital communication under the Harmful Digital Communications Act 2015. In both the civil and criminal spheres under the act there must be harm which is defined as serious emotional distress. The report does not go into specifics about what is required to hurt or harm others.

[11] See David Harvey Collisions in the Digital Paradigm: Law and Rulemaking in the Internet Age (Hart Publishing, Oxford, 2017) esp. at Chapter 2.

[12] As at the time of writing it should be noted that a comprehensive review of media content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

It correctly observes “Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the media content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.” See https://www.dia.govt.nz/media-and-online-content-regulation (Last accessed 9 July 2021)

[13] https://www.dia.govt.nz/Censorship-DCEFS (Last Accessed 9 July 2021)