Media Safety? Responding to Tohatoha

On 25 July a new online safety code came into effect. It was drawn up and agreed between a number of online players: Netsafe, NZTech, Meta (owner of Facebook, Instagram and WhatsApp), Google (owner of YouTube), Amazon (owner of Twitch), Twitter and TikTok.

The Code obliges tech companies to actively reduce harmful content on relevant digital platforms and services in New Zealand as the country grapples with what Netsafe calls a 25 per cent increase in complaints about harmful content over the past year.

It has drawn criticism from InternetNZ and Tohatoha. One of the criticisms is that the Code is very much a work in progress. That, in itself, cannot be seen as a problem. Any attempt to address harmful content on digital platforms in a dynamic and ever-changing environment such as the Internet must be a continuing and developing task that organically morphs to deal with changes in the digital and content ecosystem.

However, there are other concerns surrounding the development of the Safety Code and the way in which it is to be funded and administered, the most concerning being what seems to be a conflict of interest.

As to the development of the Safety Code, the concern is that consultation and the process of development were limited. The process was conducted primarily through the agency of Netsafe, which co-ordinated its development. Accordingly there seems to have been little input from other agencies such as Tohatoha and InternetNZ, at least until the first draft was released in February 2022. Neither civil society organisations nor community representatives were engaged to the same extent. The view is that online safety must be developed with the community at the forefront. The perception is that there was a “coziness” between Netsafe (which will appoint the Administrator) and the corporates.

This criticism is directed primarily at the legitimacy of the Online Safety Code. It suggests, quite properly, that there should have been wider involvement of the online community from the outset rather than occasional consultation. The Code would have greater acceptance had it been developed from the ground up with deep involvement by the wider community. Doubtless there were consultations, and certainly a draft of the Code was released in February 2022, but that was a call for comment on a developed proposal rather than a request for detailed input on devising the proposal itself.

There should have been a greater level of engagement with the wider community in the development of the proposal, if only to ensure that there would be consensus on what was ultimately devised and a level of acceptance of the legitimacy of the Code. As matters stand, those who were not deeply involved will be able to stand on the side-lines and criticise, as organisations like Tohatoha and InternetNZ are already doing. Given that situation, the legitimacy of the Code, at least as far as the wider community is concerned, is questionable.

Another of the criticisms is associated with that of legitimacy and is directed to what is perceived as a conflict of interest.

The key conflict of interest is that Netsafe would be taking funding from the very organisations it is set up to regulate. In addition, the big platforms know that there is a government media regulation review underway. The Code is perceived as an attempt to undermine what should be the public process of the media regulation review, which is conducted by Government; any legislation emanating from such a review would go through the Select Committee process and the scrutiny of Parliament, the media and the general public. The perception is that in developing the Code as essentially a non-Government process, Netsafe is undermining democratic processes in collusion with the tech platforms.

This criticism has a number of difficulties. Taken to its logical conclusion, it suggests that any form of industry regulation must be government-led. This ignores the various industries and interests that have developed their own methodologies for regulating their own operations in the wider and more public sense. After all, who better to develop a regulatory system than those who have an intimate knowledge of what is to be regulated and who can devise something workable? Involving government would add layers of complexity and an absence of specialist knowledge.

But to be fair, this is not the first time that a review of media regulatory structures has been proposed. In 2011 the New Zealand Law Commission released an Issues Paper entitled “The News Media Meets ‘New Media’: Rights, Responsibilities and Regulation in the Digital Age”. This was in response to a Government request for a review of the legal and regulatory environment in which New Zealand’s news media and other communicators are operating in the digital era. After a lengthy consultation period which was punctuated by a further paper recommending the enactment of Harmful Digital Communications legislation, in 2013 the final report was released.

What had happened over the lengthy consultation period was that those active in the digital space, including mainstream media, looked at the regulatory structures that were discussed by the Law Commission in the Issues Paper. There were existing regulatory bodies like the Advertising Standards Authority and the Press Council (which were industry-funded and voluntary bodies) and the Broadcasting Standards Authority, which was a Government agency. There were no bodies that dealt specifically with the online space. It was clear to those involved in the dissemination of information online – mainstream media as well as bloggers and the alternative online media – that a regulatory model was on the way. To provide an alternative to a government-led initiative the Online Media Standards Authority was set up. This was a private organisation, funded by the media itself. Membership was voluntary. It had a complaints process and the Tribunal hearing complaints was chaired by a retired High Court Judge. It dealt with complaints about online media on the same basis as the Press Council dealt with mainstream news organisations.

When the Law Commission report finally came out in 2013 it recommended a new converged standards body, folding the functions of the Press Council, the Broadcasting Standards Authority and the newly formed Online Media Standards Authority (OMSA) into one standards body – the News Media Standards Authority or NMSA. This would be established to enforce standards across all publishers of news including linear and non-linear broadcasters, web publishers and the print media.

The NMSA and the regulatory model proposed by the Law Commission did not come to pass. As it happened OMSA recognised that in some respects its role was redundant, that there was a very low level of work for it and that it should merge with the Press Council, which is what happened. The name of the new regulatory body – still voluntary, still funded by the media – is the New Zealand Media Council or NZMC. The members of the Council are drawn from a wide array of backgrounds and the Chair is the Hon Raynor Asher QC, a former High Court and Court of Appeal Judge.

This example demonstrates that there is nothing sinister in organisations establishing and funding their own regulatory structures, even when there is Government interest in the background. As I have suggested before, it is often preferable for an industry to regulate itself rather than submit to some “one size fits all” model proposed by Government.

This, then, leads to some concerns that I have regarding the critique delivered by Tohatoha and endorsed by a number of other bodies including InternetNZ.

Tohatoha says:

“In our view, this is a weak attempt to pre-empt regulation – in New Zealand and overseas – by promoting an industry-led model that avoids the real change and real accountability needed to protect communities, individuals and the health of our democracy, which is being subjected to enormous amounts of disinformation designed to increase hate and destroy social cohesion.”

The statement goes on to say:

“We badly need regulation of online content developed through a government-led process. Only government has the legitimacy and resourcing needed to bring together the diverse voices needed to develop a regulatory framework that protects the rights of internet users, including freedom of expression and freedom from hate and harassment.”[1]

These statements must give cause for concern. The first concern is the suggestion that there should be regulation of content on the Internet. The second is that this should be through a government-led process. I have already commented on the problems that Government brings to the table in the field of regulation. For Government to be involved in the regulation of news media, or indeed any medium that involves the communication of ideas, is something that requires a great deal of care. Already Government is involved in a number of areas, such as the enactment of the Films, Videos and Publications Classification Act and the Harmful Digital Communications Act. In addition there is Government involvement in the broadcasting spectrum surrounding the licensing of frequencies under the Radiocommunications Act 1989 (and regulations made thereunder), the Telecommunications Act 2001 and the Broadcasting Act 1989.

It seems to me that Tohatoha has overemphasized its advocacy role and overlooked the implications of what it is suggesting. It is clear that by suggesting regulation of content it means a form of control of content. There is another word for this and it is censorship. That a government should lead such regulatory (censorship) process is of even more concern.

Censorship has always been on the side of authoritarianism, conformity, ignorance and the status quo. Advocates for free speech have always been on the side of making societies more democratic, more diverse, more tolerant, more educated and more open to progress.[2]

Finally there is a concern about a loss of social cohesion. By this term what is really meant is a form of coerced conformity. As John Stuart Mill recognized, the most dire threat to freedom comes from social conformity, which leads to a shortage of diversity – of inclination, interest, talent and opinion – and makes eccentricity a reproach.


[1] https://www.tohatoha.org.nz/2022/07/statement-on-the-release-of-the-aotearoa-code-of-practice-for-online-safety-and-harms/

[2] Erwin Chemerinsky and Howard Gillman Free Speech on Campus (Yale University Press 2017) p. 27.


Regulating Misinformation

Professor Uri Gal argues (Law News 17 June 2022; The Conversation 10 June 2022) that the time has come for legislative control of big high-tech companies. He observes that the policies of companies such as Meta (Facebook), Google and Twitter can affect the well-being of individuals and the country as a whole. He claims that concerns about the harm caused by misinformation on these platforms have been raised in relation to the Covid-19 pandemic, federal elections (in Australia) and climate change, among other issues. He argues that legislative standards will hold these companies to account for harmful content on their platforms.

Professor Gal writes from an Australian standpoint. As it happens, the yet-to-be-enacted Online Privacy Bill (Aust) proposes to impose higher levels of regulation on online platforms and social media networks. In New Zealand the provisions of the Harmful Digital Communications Act 2015 provide relief for individuals who are harmed by electronic communications and provide for criminal penalties for those posting content with the intention of causing harm or those who post intimate images without consent.

Professor Gal’s issue seems to be with misinformation. At one point in his piece he poses the question “What is misinformation?” but fails to provide any definition.

The term “misinformation” is a curious one. It is frequently used in commentary, especially in the context of the Covid pandemic. It has been used in a number of official publications (The Edge of the Infodemic: Challenging Misinformation in Aotearoa New Zealand; Sustaining Aotearoa as a Cohesive Society). In those publications it has not been defined. It seems to be assumed that its meaning is understood. Yet the way in which it is used seems to suggest that it is a veto word and that the subject matter to which it refers is to be discounted as “misinformation” without further explanation.

The Disinformation Project has provided definitions of misinformation and disinformation in the paper “The murmuration of information disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest”. Misinformation is defined as “false information that people didn’t create with the intent to hurt others”. The wording is clumsy. I think what is meant is “false information that people created without the intention to hurt others”. Interestingly nothing is said about dissemination but I assume that is a given.

Disinformation is defined as “false information created with the intention of harming a person, group, or organisation, or even a company”. The paper goes further and defines malinformation as “true information used with ill intent.” The source for these definitions is given as Jess Berentson-Shaw and Marianne Elliot, “Misinformation and Covid-19: A Briefing for Media,” (Wellington: The Workshop, 2020).

The definitions deployed by the Disinformation Project writers seem to focus upon the intention associated with the falseness of the information communicated. But then the waters are muddied with the addition of true information communicated with a particular intention. The law places high value on truth. For example, truth is a defence to defamation. I wonder therefore whether the concerns of the Disinformation Project are more focused on the consequences of “mis-dis-mal-information” rather than its quality.

As I have said elsewhere, the current drive against “misinformation” seems to me to be another attack on the freedom of expression and upon the ability to express views that may be contrary to those of the majority. A justification for this is often cited as the need for “social cohesion” – another term for blind conformity – but in reality it is yet another manifestation of well-meaning but misguided “liberals” who know better than everyone else what is good for them. Of more concern must be the way in which “misinformation” is being perceived as a national security issue, attracting the attention and scrutiny of the current Government.

What is more concerning is the apparent drive to restrict the freedom of expression by defining certain forms of expression as harmful.

I have earlier suggested that the term “hate speech” should be abandoned for the more precise label of dangerous speech – speech that incites or encourages physical harm against a group or individual. In that way and with precision in definition any assault on the freedom of expression is limited.

The Harmful Digital Communications Act 2015 addresses harmful speech by way of electronic communication. Harm in that legislation is defined as serious emotional distress.

But can the broad and ill-defined term “misinformation” be the subject of regulation or legislation, either directly or by attacking the platform upon which it appears?

Certainly, there have been frequent efforts by the State to control the medium of communication – the printing press and the trade associated with it were the subject of attack on frequent occasions. The State has interfered with other communications innovations such as radio and television, so it is not surprising that there should be efforts afoot to address Internet-based platforms.

Professor Gal, like many others, advocates legislating for informational standards focussing on misinformation or disinformation. This is an attack on freedom of expression. He and others who advocate similarly would do well to remember that there is a right to free expression, a presumption in favour of it, and that weighty considerations in terms of harm must be advanced by those who seek to curtail it. Stifling contentious debate in favour of a “party” or “government” line by labelling the contrary view as misinformation or disinformation is not, in my opinion, reason enough.

Fear Itself?

Introduction

This is another piece about misinformation and disinformation. I have already written about these issues here and here. In this piece I discuss a paper recently released by the Disinformation Project. I consider the definitions that are used and offer a slightly more nuanced approach to the meaning of the terms “misinformation” and “disinformation”. I then go on to discuss some of the available remedies for problems arising from the dissemination of disinformation, and close with a discussion of the way in which fear seems to be weaponised to achieve the goal of “social cohesion”, together with an observation about vested interests and the campaign against disinformation.

Definitional Issues

The working paper “The murmuration of information disorders: Aotearoa New Zealand’s mis- and disinformation ecologies and the Parliament Protest” from the Disinformation Project[1] captured media attention and is itself an interesting study.

I have previously been rather critical of the way in which the terms “misinformation” and “disinformation” have been bandied about, but the authors of the working paper have defined their terms.

Misinformation is false information that was not created with the intent to harm people.

Disinformation is false information that was created with the intent to harm a person, community, or organisation.

The material that is available from the Disinformation Project website does not offer any discussion of how these definitions were settled although it is fair to say that similar definitions have appeared in other publications.

Regrettably, the definitions both suffer from a lack of nuance. The nature of the information is not clarified. The definitions do not state whether or not the information conveyed is a statement of fact or opinion. Furthermore the definitions fail to recognise that often a fact may be determined by a process of inference or conclusion based on other existing facts. It may well be that upon further analysis an inferential conclusion may be erroneous. Whether or not it should be described as false gives rise to another issue. The use of the word “false” suggests a fraudulent, dishonest or morally questionable motive. Yet an inferential conclusion may be reached honestly and in good faith.

The definition of “misinformation” goes on to suggest that the information (which may be incorrect) was created, and in that sense the suggestion is that it derived from imagination rather than from a number of other pieces of evidence or sources. In my view the word “communicated” should be used rather than “created”; it more properly crystallises the nature of the problem.

A person may develop some information either from imagination or from other evidential sources but may do nothing with it. In that respect the information, irrespective of its correctness, is passive. Only when it is communicated and comprehended by an audience does the information become active.

The definition of misinformation also contains the element of motive. A person may analyse a number of facts and arrive at a conclusion. That conclusion may be communicated. The conclusion may be incorrect or misleading. But the communication may have been made in good faith as to the correctness or veracity of the conclusion. In such circumstances, the motive for the communication of the information does not matter.

If one is looking for a more nuanced definition of “misinformation” that incorporates the above matters it could read “misinformation is information that is communicated and that is erroneous.”

That definition avoids the issue of motive and the use of the rather loaded word “false”.

“Disinformation” as defined creates some issues. A simple word to describe disinformation is “lie”. However, in the definition the word “false” is used which, in the context of a lie, is the correct term. I have some difficulty with the element of intention: the intention must be to harm a person, community, or organisation.

A Matter of Harm

I wonder if harm is the correct term. In the context of the Harmful Digital Communications Act, harm is defined as “serious emotional distress” which would be a satisfactory, albeit limited, definition for a person or a community. However, it would not be applicable to an organisation.

Harm could also mean some form of adverse consequence which causes loss or damage. In this respect the communication of false information with the intention of causing loss or damage resembles a crime involving dishonesty. It could therefore be argued that section 240(1)(d) of the Crimes Act 1961 is applicable. This reads:

“Every one is guilty of obtaining by deception or causing loss by deception who, by any deception and without claim of right….. causes loss to any other person.”

Deception is defined as follows:

  •  a false representation, whether oral, documentary, or by conduct, where the person making the representation intends to deceive any other person and—
      •  knows that it is false in a material particular; or
      •  is reckless as to whether it is false in a material particular; or
  •  an omission to disclose a material particular, with intent to deceive any person, in circumstances where there is a duty to disclose it; or
  •  a fraudulent device, trick, or stratagem used with intent to deceive any person.

Thus it would seem that the communication of false information would fall within the ambit of deception. If it is accompanied by the necessary intention and causes loss or harm, then the offence would be available.

However, as I understand it from the material that is available on the Disinformation Project website and the various commentaries on the “Murmuration” paper, the harm that is contemplated is more inchoate and nebulous.

The paper states:

“Disinformation highlights differences and divisions that can be used to target and scapegoat, normalise prejudices, harden us-versus-them mentalities, and justify violence.

Disinformation and its focus on social division are at risk of cementing increasingly angry, anxious and antagonistic ways around how we interact with one another, eroding social cohesion and cooperation.

This has dangerous implications for our individual and collective safety.”

Thus, the harm that is perceived is that of divisiveness, antagonism, prejudice and possible physical danger resulting from the use of language that incites. There is concern at the erosion of social cohesion and co-operation.

This theme is picked up by David Fisher in his analysis of the paper. Fisher suggests that the trafficking of false and misleading information should be elevated to the level of national security. With respect I consider such a statement to be unnecessarily shrill and the proposal to be unwarranted. The underlying theme of Fisher’s analysis is that the dissemination of disinformation, some of which originates from overseas sources, poses a threat to established institutions and processes. He cites local body elections and the general election next year which could see a rise in disinformation.

Fisher states:

When it comes to next year’s general election – which attracts much higher public engagement – expect to experience friction as a growing faction with a discordant perception of reality bangs into those who retain faith in the way we live.

The concerns that are voiced by the Disinformation Project and by Fisher express a fear that society is under threat from the spread of disinformation, primarily from a cluster of 12 groups on Facebook and other social media platforms.

These concerns carry an implicit message that “something must be done”. For some of the disinformation concerns there are already remedies. I categorise these remedies available under existing law as “communications offences”. I have discussed them in an earlier post entitled “Dangerous Speech” but I shall summarise these remedies here.

Existing Remedies

Threats of violence or of harm are covered by sections 306 – 307A of the Crimes Act.

Section 307A would seem to be a possible answer to the consequences of disinformation although the language of the section is difficult.

The relevant portions of the section read as follows:

Every one is liable to imprisonment for a term not exceeding 7 years if, without lawful justification or reasonable excuse, and intending to achieve the effect stated in subsection (2), he or she:…..

communicates information—

  •  that purports to be about an act likely to have 1 or more of the results described in subsection (3); and
  •  that he or she believes to be false.

Subsection (2) which deals with the effects that are sought to be achieved reads as follows:

The effect is causing a significant disruption of 1 or more of the following things:

  •  the activities of the civilian population of New Zealand:
  •  something that is or forms part of an infrastructure facility in New Zealand:
  •  civil administration in New Zealand (whether administration undertaken by the Government of New Zealand or by institutions such as local authorities, District Health Boards, or boards of schools):
  •  commercial activity in New Zealand (whether commercial activity in general or commercial activity of a particular kind).

The results that are likely to occur are set out in subsection (3) which reads as follows:

The results are—

  •  creating a risk to the health of 1 or more people:
  •  causing major property damage:
  •  causing major economic loss to 1 or more persons:
  •  causing major damage to the national economy of New Zealand.

However, subsection (4) creates an exception and exempts certain activities from the effect of s. 307A. It reads:

“To avoid doubt, the fact that a person engages in any protest, advocacy, or dissent, or engages in any strike, lockout, or other industrial action, is not, by itself, a sufficient basis for inferring that a person has committed an offence against subsection (1).” (The emphasis is mine)

There has been one case, to my knowledge, that specifically deals with section 307A – that of Police v Joseph [2013] DCR 482.

Other examples of communications offences may be found in the following statutes:

a) the Human Rights Act 1993;

b) the Summary Offences Act 1981;

c) the Harmful Digital Communications Act 2015;

d) the Broadcasting Act 1989;

e) the Films, Videos, and Publications Classification Act 1993; and

f) the Crimes Act 1961.

It should be conceded that not all of the offences created by these statutes deal with the problem of disinformation. I do not propose to discuss all of them and refer the reader to my earlier post on “Dangerous Speech”.

Indeed, the law has been ambivalent towards what could be called communications offences. In 2019 the crime of blasphemous libel was removed from the statute book. Sedition and offences similar to it were removed in 2008. Criminal libel was removed as long ago as 1993.

At the same time the law has recognized that it must turn its face against those who would threaten to commit offences. Thus section 306 criminalises threatening to kill or do grievous bodily harm to any person, or sending or causing to be received a letter or writing threatening to kill or cause grievous bodily harm. The offence requires knowledge of the contents of the communication.

The offence prescribed in section 308 of the Crimes Act involves communication as well as active behaviour. It criminalises the breaking or damaging, or the threatening to break or damage, any dwelling with a specific intention – to intimidate or to annoy. Annoyance is a relatively low-level reaction to the behaviour. A specific behaviour – the discharging of firearms that alarms or is intended to alarm a person in a dwelling house, again with the intention to intimidate or annoy – is provided for in section 308(2).

The Summary Offences Act contains the offence of intimidation in section 21. Intimidation may be by words or behaviour. The “communication” aspect of intimidation is provided in section 21(1) which states:

Every person commits an offence who, with intent to frighten or intimidate any other person, or knowing that his or her conduct is likely to cause that other person reasonably to be frightened or intimidated,—

(a)     threatens to injure that other person or any member of his or her family, or to damage any of that person’s property;

Thus, there must be a specific intention – to frighten or intimidate – together with a communicative element – the threat to injure the target or a member of his or her family, or damage property.

In some respects section 21 represents a conflation of elements of sections 307 and 308 of the Crimes Act, together with a lesser threatened harm – that of injury – than appears in section 306 of that Act.

However, there is an additional offence which cannot be overlooked in this discussion and it is that of offensive behaviour or language provided in section 4 of the Summary Offences Act.

The language of the section is as follows:

(1)     Every person is liable to a fine not exceeding $1,000 who,—

(a)     in or within view of any public place, behaves in an offensive or disorderly manner; or

(b)     in any public place, addresses any words to any person intending to threaten, alarm, insult, or offend that person; or

(c)     in or within hearing of a public place,—

(i)  uses any threatening or insulting words and is reckless whether any person is alarmed or insulted by those words; or

(ii) addresses any indecent or obscene words to any person.

(2)     Every person is liable to a fine not exceeding $500 who, in or within hearing of any public place, uses any indecent or obscene words.

(3)     In determining for the purposes of a prosecution under this section whether any words were indecent or obscene, the court shall have regard to all the circumstances pertaining at the material time, including whether the defendant had reasonable grounds for believing that the person to whom the words were addressed, or any person by whom they might be overheard, would not be offended.

(4)     It is a defence in a prosecution under subsection (2) if the defendant proves that he had reasonable grounds for believing that his words would not be overheard.

In some respects the consequences of the speech suffered by the auditor (for the essence of the offence relies upon oral communication) resemble those provided in section 61 of the Human Rights Act.

Section 4 was considered by the Supreme Court in the case of Morse v Police [2011] NZSC 45.

In some respects these various offences occupy points on a spectrum. Interestingly, the offence of offensive behaviour has the greatest implications for freedom of expression or expressive behaviour, in that the test incorporates a subjective element on the part of the observer. But it also carries the lightest penalty, and as a summary offence can be seen to be the least serious on the spectrum. The section could be applied in the case of oral or behavioural expression against individuals or groups based on colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as long as the tests in Morse are met.

At the other end of the spectrum is section 306, dealing with threats to kill or cause grievous bodily harm, which carries with it a maximum sentence of 7 years imprisonment. This section is applicable to all persons irrespective of colour, race, national or ethnic origin, religion, gender, disability or sexual orientation, as are sections 307 and 308, section 21 of the Summary Offences Act and section 22 of the Harmful Digital Communications Act, which could all occupy intermediate points on the spectrum based on the elements of the offence and the consequences that may attend upon a conviction.

There are some common themes to sections 306, 307, 308 of the Crimes Act and section 21 of the Summary Offences Act.

First, there is the element of fear that may be caused by the behaviour. Even though intimidation is not specifically an element of the offences under sections 306 and 307, there is a fear that the threat may be carried out.

Secondly there is a specific consequence prescribed – grievous bodily harm or damage to or destruction of property.

Thirdly there is the element of communication or communicative behaviour that has the effect of “sending a message”.

These themes assist in the formulation of a speech-based offence that is a justifiable limitation on free speech, that recognizes that there should be some objectively measurable and identifiable harm that flows from the speech, but that does not stifle robust debate in a free and democratic society.

Democracy vs Cohesion

The concerns about the effects of disinformation, other than those effects which may cause harm, relate more to issues of what is described as social cohesiveness. This is a phrase that seems to have been gaining traction since the Royal Commission Report on the March 15 Christchurch tragedy. It is emphasised in both the “Murmuration” paper and in Fisher’s analysis. The problem with social cohesiveness is that, taken to its ultimate result, we have a society based on silent conformity without any room for dissent, opposition or contrary or contentious opinions.

These elements are essential to a functioning democracy, which is cacophonous by nature and which often involves strongly held and differing opinions. Much of the debate surrounding differing opinions can get quite heated and result in what the Disinformation Project claims are angry, anxious and antagonistic arguments. These have been with us for centuries. One need only look at the arguments that have taken place within the Christian faith over the centuries to understand the passion with which people often approach matters of belief. And, indeed, conflicting opinions within that context would, at the very least, be termed “misinformation” or, at worst, “disinformation”.

Although the printing press was responsible for the wide dissemination of the contentious arguments surrounding the Reformation and, later in England, the constitutional debates that led to the English Civil War, the dissemination of information afforded by social media platforms is exponentially greater. It is perhaps the delivery of the message, rather than the message itself, that seems to be the root of the problem.

Weaponising Fear

Coupled with this is the fact that the perceived disinformation problem is accompanied by a sense of threat to established institutions which in turn generates a sense of fear and foreboding if the problem is allowed to continue or at least to go unrecognised.

Fear seems to be a widely distributed currency these days. Perhaps older generations have had more experience of the reality of fear, having lived through various outbreaks of war – Korea, Vietnam, Gulf 1 and 2, Afghanistan, as a few examples – along with the continuing threat of nuclear conflict, which seemed to dissipate in the 1990s but has now loomed once again, and the spectre of terrorism which preceded 9/11 – its most egregious example – and which has been exemplified not only by jihadis but by extremists such as Timothy McVeigh, Anders Breivik and Brenton Tarrant.

But fear is used to market other products. The response to the Covid pandemic in New Zealand was underpinned by fear, with concerns about potentially high numbers of deaths from the disease if strong measures were not taken. That fear of death and of the consequences of the pandemic underpinned most of the steps taken by the Government and was probably responsible for the compliant response of the populace, at least in the first year or 18 months of the pandemic.

Fear can be a strong motivator and often drives extreme responses. Senator Joseph McCarthy played on the fear of a Communist conspiracy in post-World War II USA, the reverberations of which were still present in the early 1960s. The end of the Cold War meant that the fear of the Communist threat faded, but it was soon replaced by fear of terrorism in the US.

What concerns me is that the fears that are being expressed around misinformation and disinformation suggest that the phenomenon is a new one. It isn’t, but it has been exacerbated by the exponential dissemination quality of online platforms.

It is also suggested that there are no remedies to deal with disinformation in particular.

There are. In certain cases the provisions of s. 307A of the Crimes Act 1961 could be deployed, along with the other remedies discussed above where they fit the circumstances.

Beyond legal remedies there is critical analysis of posts that may contain disinformation. To engender a climate of fear is unhelpful, especially when there are existing tools to deal with the issues.

The problem can be summed up by the remark of Franklin D. Roosevelt at his 1933 inauguration – “the only thing we have to fear is…fear itself — nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.”

Misinformation occupies a different space and in my view poses no threat. The views expressed may be contentious or contrarian perspectives. Often the information contained in these views will be opinions based on certain facts which may or may not be valid. Statements of opinion appear regularly in mainstream media and are labelled as such. Often they are the subject of debate and discussion in online comments sections or in letters to the editor. This is part and parcel of life in a liberal democracy that places a high value upon the right to impart and receive information – no matter how wrongheaded it might be.

In fact the way to deal with misinformation was referred to in an article in the NZ Herald for 18 May 2022 entitled “’Tectonic shift’: How Parliament protest supercharged NZ’s misinfodemic” which contained commentary on the “Murmuration” paper. The Prime Minister’s Chief Science Adviser, Dame Juliet Gerrard, is quoted as saying:

“New Zealand needs to play its part in the global effort to foster social cohesion and to empower our children to learn skills which make the next generation strong critical thinkers who are as resilient as possible to an increasingly polluted online environment.”

Whilst I would take issue with the “social cohesion” comment I strongly endorse the suggestion that we need to engage in critical analysis and evaluation of the information that we receive. This is something that needs to be done not only by our children but by ourselves.

Social cohesion is a vague and ephemeral concept for defining acceptable behaviour in society. As I have said in an earlier post:

Without the Rule of Law what is being proposed is some form of “understood” code of behaviour based on the concept of a resilient society that has its foundation in social cohesiveness. I would have thought that a clearly communicated and understood Rule system would establish the metes and bounds of acceptable behaviour.

In my view, although a peaceable society is the objective, that is precisely the goal of the Rule of Law, which allows for a variety of behaviours but provides consequences for unacceptable ones – either by civil remedies or criminal sanctions. It is far better to have a clearly defined approach than a vague and ephemeral one.

Conclusion – Vested Interests

Finally it is of interest to observe how vexed the mainstream news media get over the issue of mis/disinformation. Because of the warnings emanating from the Disinformation Project, the Chief Censor’s Office and the University of Auckland Centre for Informed Futures, the news media are quick to fan the flames of fear and perhaps overdramatise the significance of the message. But perhaps there is an unstated interest that the news media might have in campaigning against mis/disinformation. In the past they have been the organs of reliable information, and their editing and checking systems have ensured this.

The Disinformation Project study indicates that on 10 February 2022 misinformation (as they define it) overtook NZ Media for the first time. Perhaps mainstream media has some territory to protect in the contest for the information audience and in fact what they are doing is campaigning strongly against the purveyors of mis/disinformation not to alert the public or perform some altruistic public interest goal but to do whatever they can to protect their own turf, their position as the purveyors of “truth” (despite significant column inches dedicated to “opinion”) and, not least, their advertising revenues and income streams.


[1] It is important to note that the Disinformation Project referred to is based at Victoria University, Wellington and is separate and distinct from the Disinformation Project – an American organization based in Fairfax, Virginia. The website of the NZ organization is https://thedisinfoproject.org. That of the American group is https://thedisinformationproject.org

The Content Regulatory System Review – An Overview

Lockdown has its benefits. For some time I have been asked whether or not I would contemplate a 5th edition of “internet.law.nz – selected issues”. After 4 editions, including a revised 4th edition, my inclination had been that I had written enough on the subject, but a review of the 4th edition together with a review of what I had written in other fora persuaded me that a 5th edition might be a possibility. Lockdown has given me the perfect opportunity to research and write in the comparative peace and solitude that accompanies Alert Level 4.

The approach that I propose will be different from what has gone before, although much of the material in earlier editions will be present. But the focus and the themes that I want to examine differ. I am interested in the regulatory structures that are being applied to the online environment and in particular I am interested in the area of content regulation. This involves a number of areas of law, not the least of which is media law and there is quite an overlap between the fields of media law and what could loosely be termed cyberlaw.

What I am trying to do is examine the law as it has developed, as it presently applies, and the shape it may take in the future. In this last objective I am often assisted by proposals that governments have put forward for discussion, or proposed legislation that is before the House.

In this piece I consider a review of content regulation. The proposal, which was announced on 8 June 2021, is extremely broad in scope and is intended to cover content regulation proposals and mechanisms in ALL media – an ambitious objective. What follows are my initial thoughts. I welcome, as always, feedback or comments in the hope that the finished product will be a vast improvement on what is presently before you.

The Proposals

A comprehensive review of content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. 

The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

Content is described as any communicated material (for example video, audio, images and text) that is publicly available, regardless of how it is communicated.

The need for the review arises from a recognition of media convergence. The review outline states that the ongoing evolution of digital media has resulted in significant and growing potential for New Zealanders to be exposed to harmful content. This was made evident by the livestreaming and subsequent uploading of the Christchurch terror attack video.

Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.

The increase in the potential for New Zealanders to be exposed to harmful content is compounded by the complexity of the regulatory system. Different rules apply for content hosted across media channels. This increases difficulty for New Zealanders when deciding what content is appropriate for them and their children and creates confusion on where to report harmful content. 

There is also an uneven playing field for media providers as some types of media are subject to complicated regulatory requirements and some to no regulations at all.

The introduction to the review notes that New Zealand’s current content regulatory system is made up of the Films, Videos, and Publications Classification Act 1993, the Broadcasting Act 1989 and voluntary self-regulation (including the New Zealand Media Council and Advertising Standards Authority). The Office of Film and Literature Classification and the Broadcasting Standards Authority are statutory regulators under their respective regimes. 

New Zealand’s content regulatory system seeks to prevent harm from exposure to damaging or illegal content. It does this through a combination of classifications and ratings to provide consumer information, and standards to reflect community values. These tools are designed to prevent harm from people viewing unwanted or unsuitable content, while protecting freedom of expression.

What is proposed is a broad, harm minimisation-focused review of New Zealand’s media content regulatory system which will contribute to the Government’s priority of supporting a socially cohesive New Zealand, in which all people feel safe, have equal access to opportunities and have their human rights protected, including the rights to freedom from discrimination and freedom of expression. 

The objective of social cohesion was one of the strong points made by the Royal Commission on the 15 March 2019 tragedy in Christchurch.

The review recognises that a broad review of the media content regulatory system has been considered by Ministers since 2008 but has never been undertaken. Instead piecemeal amendments to different frameworks within the system have been made to address discrete problems and gaps.

The problems posed by the Digital Paradigm and media convergence, coupled with the democratisation of media access, have, in the view expressed in the briefing paper, resulted in significant and growing potential for New Zealanders to be exposed to harmful media content. Our existing regulatory frameworks are based around the media channel or format by which content is made available and do not cover many digital media channels. This model does not reflect a contemporary approach where the same content is disseminated across many channels simultaneously. As a result, it provides protection for a decreasing proportion of media content that New Zealanders experience. This means that New Zealanders are now more easily and frequently exposed to content they might otherwise choose to avoid, including content that may pose harm to themselves, others, and society at large.

What is proposed is a harm-minimisation focused review of content regulation. This review will aim to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered. The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press. The threshold for justifying limitations on freedom of expression will remain appropriately high.

Given the emphasis on social cohesion it is not unexpected that the Review is part of the Government’s response to the March 2019 Christchurch terrorist attack, including the Christchurch Call and responding to the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain.

It is noted that in addition to the formal structures under the Films, Videos and Publications Classification Act and the Broadcasting Act there are voluntary self-regulatory structures such as the Media Council and the Advertising Standards Authority, together with the provisions of the Harmful Digital Communications Act and the Unsolicited Electronic Messages Act. These structures, it is suggested, are unable to respond to the challenges coming from contemporary digital media content, for example social media. The internet has decentralised the production and dissemination of media content, and a significant proportion of that content is not captured by the existing regulatory system.

Examples of the harmful media content affecting New Zealanders are:

  • adult content that children can access, for example online pornography, explicit language, violent and sexually explicit content
  • violent extremist content, including material showing or promoting terrorism
  • child sexual exploitation material
  • disclosure of personal information that threatens someone’s privacy
  • promotion of self-harm
  • mis/disinformation
  • unwanted digital communication
  • racism and other discriminatory content
  • hate speech

What is proposed is a harm-minimisation focused review of content regulation, with the aim of creating a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of all media content. The regulatory framework will balance the need to reduce harm with protecting democratic freedoms, including freedom of expression and freedom of the press. The framework will allocate responsibilities between individuals, media content providers, and Government for reducing harm to individuals, society and institutions from interacting with media. The framework will be platform-neutral in its principles and objectives, however, it will need to enable different approaches to reaching these objectives, spanning Government, co-regulatory and self-regulatory approaches. It will also include a range of regulatory and non-regulatory responses.

The following principles are proposed to guide the review:

a. Responsibilities to ensure a safe and inclusive media content environment should be allocated between individuals, media content service providers (analogue, digital and online providers), and Government;

• Individuals should be empowered to keep themselves safe from harm when interacting with media content;

• Media content service providers should have responsibilities for minimising harms arising from their services;

• Government responses to protect individuals should be considered appropriate where the exercise of individual or corporate responsibility cannot be sufficient. For example:

• Where there is insufficient information available to consumers about the risk of harm;

• Where individuals are unable to control exposure to potentially harmful media content;

• Where there is an unacceptable risk of harm because of the nature of the media content and/or the circumstances of the interaction (e.g. children being harmed by media content interactions);

b. Interventions should be reasonable and able to be demonstrably justified in a free and democratic society. This includes:

  • Freedom of expression should be constrained only where, and to the extent, necessary to avoid greater harm to society
  • The freedom of the press should be protected
  • The impacts of regulations and compliance measures should be proportionate to the risk of harm;

c. Interventions should be adaptive and responsive to:

• Changes in technology and media;

• Emerging harms, and changes to the scale and severity of existing harms;

• Future changes in societal values and expectations;

d. Interventions should be appropriate to the social and cultural needs of all New Zealanders and, in particular, should be consistent with:

• Government obligations flowing from te Tiriti o Waitangi;

• Recognition of and respect for te ao Māori and tikanga; and

e. Interventions should be designed to maximise opportunities for international coordination and cooperation.

It will be noted that the proposed review and the principles guiding it are wide-ranging. It seems that the objective may be the establishment of a single content regulatory system that will allow for individual responsibility in accessing content and media responsibility for ensuring a minimisation of harm but with a level of State intervention where the steps by individuals or media providers may be insufficient. The guiding principle seems to be that of harm.

At the same time there is a recognition of the democratic values of freedom of expression and freedom of the press. The wording of section 5 of the New Zealand Bill of Rights Act is employed – that interventions should be reasonable and demonstrably justified in a free and democratic society and that responses should be proportionate to the level of harm.

It is interesting to note that the proposed interventions should be flexible and able to adapt to changes in technology and media, the nature of harm and any future changes in societal values and expectations.

Commentary

In many respects the proposals in this outline seem to be those of an overly protective State, developing broad concepts of harm and “safety” as criteria for interference with robust and often confronting expression. It is quite clear that the existing law is sufficient to address concerns about expression such as threats of physical harm. However, the concept of harm beyond that is rather elusive. The problem was addressed in the Harmful Digital Communications Act 2015 which defines harm as “serious emotional distress”. But a broader scope seems to be applied to harm in the context of this review, exemplified by the concept of social cohesion. In addition, some of the categories of content must give rise to concern and may well create a tension between freedom of expression on the one hand and elements of social cohesion on the other. One example is that of misinformation or disinformation, which seems to suggest that there is but one arbiter of the accuracy of content, leaving little room for balanced discussion or opposing views. The arbiter of content could describe any opposing view as misinformation and thereby demonise, criminalise and ban the opposing view on the basis that opposition to the “party line” has an impact upon social cohesion.

A matter of concern for media law specialists as this review progresses must be the cumulative impact that content regulation initiatives may have on freedom of expression. I cite as examples proposals to address so-called “hate speech” and the Chief Censor’s report “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”. These proposals, if enacted, will give legislative fiat to a biased form of expression without allowing for a contrary view, and demonstrate a concerning level of misunderstanding about the nature of freedom of expression (including the imparting and receiving of ideas) in a free and democratic society.

As matters stand, the content regulatory systems in New Zealand discussed above have some common features.

  • There is an established set of principles and guidelines that govern the assessment of content.
  • There is a complaints procedure that – as far as media organisations are concerned – involves an approach to the media organisation prior to making a complaint to the regulatory body
  • There is a clear recognition of the importance of the freedom of expression and the role of a free press in a democratic society
  •  In respect of censorship, the concept of “objectionable” is appropriately limiting, given first that the material may be banned or restricted and second that there may be criminal liability arising from possession or distribution of objectionable material.
  •  Guiding principles are based primarily upon the public interest. The Content Review’s focus on social cohesion is more than a mere re-expression of the public interest concept.

One thing is abundantly clear. The difficulty that regulatory systems have at the moment surrounds continuing technological innovation. To some extent the New Zealand Media Council recognises that and has adapted accordingly. Otherwise there is little wrong with the processes that are in place – at least in principle. If complaints procedures are seen to be unwieldy they can be simplified. The public interest has served as a good yardstick up until now. It has been well-considered, defined and applied. It would be unfortunate to muddy the media standards and public discourse with a standard based on social cohesiveness, whatever that may be. Fundamentally the existing regulatory structures achieve the necessary balance between freedom of expression on the one hand and the protection of the public from objectionable content on the other. Any greater interference than there is at present would be a retrograde step.

“Harm” in the Harmful Digital Communications Act

The recent decision of Justice Matthew Downs about the Harmful Digital Communications Act seems to be misunderstood. Ben Hill in the NZ Herald went so far as to state that the decision said that posting intimate images on Facebook met the harm threshold detailed in the Act. Far from it.

Some clarification might be helpful. The case involved a prosecution under one of the sections of the Harmful Digital Communications Act. Because of that a couple of basic criminal law propositions applied. First, the burden of proof was on the prosecution. Secondly to obtain a conviction the prosecution had to prove each of the three elements of the charge beyond a reasonable doubt.

A criminal trial can be divided up into two main phases – the prosecution phase and the defence phase. If the prosecution fails to provide any or sufficient evidence of any one of the elements of a charge, the charge may be dismissed.

That is the point the case under appeal had reached. Was there evidence of harm – serious emotional distress – suffered by the complainant? The District Court Judge looked at what was available and concluded that although there was evidence of emotional distress, he was not satisfied that it had reached the “serious” threshold. Because he concluded that there was insufficient evidence to support one of the elements, the case was dismissed.

If that evidence had been available – that is evidence where a reasonable fact-finder, properly applying the law, could convict – then the Judge would have called upon the defendant to ascertain if he had evidence to add to the mix. It does not mean that the defendant was presumed guilty or that he had a burden to prove his innocence. But his evidence could help the Judge in the next step in the reasoning process which would be to assess whether or not the evidence satisfied him beyond a reasonable doubt that the complainant had suffered serious emotional distress. That is a different line of enquiry to that of ascertaining whether the evidence was present in the first place.

When the case went before Justice Downs on appeal by the Police there were two lines of argument. One was that the test applied by the District Court Judge was too high. That argument was rejected by Justice Downs. The second argument was that the Judge had not evaluated the available evidence properly when he concluded that there was no evidence of serious emotional distress.

This was the crux of the case. The District Court Judge had taken the complainant’s reactions to the posting of her intimate images on a Facebook page – loss of sleep, tears, possible time off work, embarrassment – and concluded that individually these did not amount to serious emotional distress. Justice Downs said that was not the proper approach. The Judge should have looked at the total effect of all of these elements and taken them collectively. In addition he should have had regard to the context – a relationship breakdown, an apparently controlling and jealous husband who had threatened to put the pictures online. Looking at these factors collectively, Justice Downs concluded that they did amount to sufficient evidence of serious emotional distress and therefore the prosecution had established a case that the defendant had to answer.

So the case is about how to evaluate whether or not a post has caused harm. In that sense it could be said that it is “lawyers’ law”. But does this mean that the defendant is automatically guilty? It does not. The case has been sent back to the District Court and the case for the defendant will be presented and argued. And then the Judge will have to consider whether the evidence takes him past the “beyond reasonable doubt” threshold to enter a conviction.

Prosecutions under the Harmful Digital Communications Act in the main have resulted in pleas of guilty. But each case must be looked at in the context of its own facts and circumstances. It may be morally wrong to post intimate images on a Facebook page. But there are a number of other elements that must be proven before that amounts to a criminal offence. One of those elements is that of actual harm – serious emotional distress as defined in the Act. If proof of that is lacking there is no offence. If the target of the communication dismisses the posting as of no consequence, no harm has been done.

And that is what the Act is about. It is not about the nature of the content. It is about whether or not the posting has caused harm as defined by the Act.

Justice Downs’ decision helps us in how the evaluation of harm should be approached.