The Content Regulatory System Review – An Overview

Lockdown has its benefits. For some time I have been asked whether or not I would contemplate a 5th edition of “ – selected issues.” After 4 editions, including a revised 4th edition, my inclination had been that I had written enough on the subject, but a review of the 4th edition, together with a review of what I had written in other fora, persuaded me that a 5th edition might be a possibility. Lockdown has given me the perfect opportunity to research and write in the comparative peace and solitude that accompanies Alert Level 4.

The approach that I propose will be different from what has gone before, although much of the material in earlier editions will be present. But the focus and the themes that I want to examine differ. I am interested in the regulatory structures that are being applied to the online environment and in particular I am interested in the area of content regulation. This involves a number of areas of law, not the least of which is media law and there is quite an overlap between the fields of media law and what could loosely be termed cyberlaw.

What I am trying to do is examine the law as it has developed, as it presently applies and the shape it may take in the future. In this last objective I am often assisted by proposals that governments have put forward for discussion, or proposed legislation that is before the House.

In this piece I consider a review of content regulation. The proposal, which was announced on 8 June 2021, is extremely broad in scope and is intended to cover content regulation proposals and mechanisms in ALL media – an ambitious objective. What follows are my initial thoughts. I welcome, as always, feedback or comments in the hope that the finished product will be a vast improvement on what is presently before you.

The Proposals

A comprehensive review of content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. 

The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

Content is described as any communicated material (for example video, audio, images and text) that is publicly available, regardless of how it is communicated.

The need for the review arises from a recognition of media convergence. The review outline states that the ongoing evolution of digital media has resulted in significant and growing potential for New Zealanders to be exposed to harmful content. This was made evident by the livestreaming and subsequent uploading of the Christchurch terror attack video.

Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.

The increase in the potential for New Zealanders to be exposed to harmful content is compounded by the complexity of the regulatory system. Different rules apply for content hosted across media channels. This increases difficulty for New Zealanders when deciding what content is appropriate for them and their children and creates confusion on where to report harmful content. 

There is also an uneven playing field for media providers as some types of media are subject to complicated regulatory requirements and some to no regulations at all.

The introduction to the review notes that New Zealand’s current content regulatory system is made up of the Films, Videos, and Publications Classification Act 1993, the Broadcasting Act 1989 and voluntary self-regulation (including the New Zealand Media Council and Advertising Standards Authority). The Office of Film and Literature Classification and the Broadcasting Standards Authority are statutory regulators under their respective regimes. 

New Zealand’s content regulatory system seeks to prevent harm from exposure to damaging or illegal content. It does this through a combination of classifications and ratings to provide consumer information, and standards to reflect community values. These tools are designed to prevent harm from people viewing unwanted or unsuitable content, while protecting freedom of expression.

What is proposed is a broad, harm minimisation-focused review of New Zealand’s media content regulatory system which will contribute to the Government’s priority of supporting a socially cohesive New Zealand, in which all people feel safe, have equal access to opportunities and have their human rights protected, including the rights to freedom from discrimination and freedom of expression. 

The objective of social cohesion was one of the strong points made by the Royal Commission on the 15 March 2019 tragedy in Christchurch.

The review recognises that a broad review of the media content regulatory system has been considered by Ministers since 2008 but has never been undertaken. Instead piecemeal amendments to different frameworks within the system have been made to address discrete problems and gaps.

The problems posed by the Digital Paradigm and media convergence, coupled with the democratisation of media access, have, in the view expressed in the briefing paper, resulted in significant and growing potential for New Zealanders to be exposed to harmful media content. Our existing regulatory frameworks are based around the media channel or format by which content is made available and do not cover many digital media channels. This model does not reflect a contemporary approach where the same content is disseminated across many channels simultaneously. As a result, it provides protection for a decreasing proportion of media content that New Zealanders experience. This means that New Zealanders are now more easily and frequently exposed to content they might otherwise choose to avoid, including content that may pose harm to themselves, others, and society at large.

What is proposed is a harm-minimisation focused review of content regulation. This review will aim to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered. The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press. The threshold for justifying limitations on freedom of expression will remain appropriately high.

Given the emphasis on social cohesion it is not unexpected that the Review is part of the Government’s response to the March 2019 Christchurch terrorist attack, including the Christchurch Call and responding to the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain.

It is noted that in addition to the formal structures under the Films, Videos, and Publications Classification Act and the Broadcasting Act, and voluntary self-regulatory structures such as the Media Council and the Advertising Standards Authority, there are the provisions of the Harmful Digital Communications Act and the Unsolicited Electronic Messages Act. These structures, it is suggested, are unable to respond to the challenges posed by contemporary digital media content, for example social media. The internet has decentralised the production and dissemination of media content, and a significant proportion of that content is not captured by the existing regulatory system.

Examples of the harmful media content affecting New Zealanders are:

  • adult content that children can access, for example online pornography, explicit language, violent and sexually explicit content
  • violent extremist content, including material showing or promoting terrorism
  • child sexual exploitation material
  • disclosure of personal information that threatens someone’s privacy, promotion of self-harm
  • mis/disinformation
  • unwanted digital communication
  • racism and other discriminatory content
  • hate speech

What is proposed is a harm-minimisation focused review of content regulation, with the aim of creating a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of all media content. The regulatory framework will balance the need to reduce harm with protecting democratic freedoms, including freedom of expression and freedom of the press. The framework will allocate responsibilities between individuals, media content providers, and Government for reducing harm to individuals, society and institutions from interacting with media. The framework will be platform-neutral in its principles and objectives, however, it will need to enable different approaches to reaching these objectives, spanning Government, co-regulatory and self-regulatory approaches. It will also include a range of regulatory and non-regulatory responses.

The following principles are proposed to guide the review:

a. Responsibilities to ensure a safe and inclusive media content environment should be allocated between individuals, media content service providers (analogue, digital and online providers), and Government;

• Individuals should be empowered to keep themselves safe from harm when interacting with media content;

• Media content service providers should have responsibilities for minimising harms arising from their services;

• Government responses to protect individuals should be considered appropriate where the exercise of individual or corporate responsibility is not sufficient. For example:

• Where there is insufficient information available to consumers about the risk of harm;

• Where individuals are unable to control exposure to potentially harmful media content;

• Where there is an unacceptable risk of harm because of the nature of the media content and/or the circumstances of the interaction (e.g. children being harmed by media content interactions);

b. Interventions should be reasonable and able to be demonstrably justified in a free and democratic society. This includes:

  • Freedom of expression should be constrained only where, and to the extent, necessary to avoid greater harm to society
  • The freedom of the press should be protected
  • The impacts of regulations and compliance measures should be proportionate to the risk of harm;

c. Interventions should be adaptive and responsive to:

• Changes in technology and media;

• Emerging harms, and changes to the scale and severity of existing harms;

• Future changes in societal values and expectations;

d. Interventions should be appropriate to the social and cultural needs of all New Zealanders and, in particular, should be consistent with:

• Government obligations flowing from te Tiriti o Waitangi;

• Recognition of and respect for te ao Māori and tikanga; and

e. Interventions should be designed to maximise opportunities for international coordination and cooperation.

It will be noted that the proposed review and the principles guiding it are wide-ranging. It seems that the objective may be the establishment of a single content regulatory system that will allow for individual responsibility in accessing content and media responsibility for ensuring a minimisation of harm but with a level of State intervention where the steps by individuals or media providers may be insufficient. The guiding principle seems to be that of harm.

At the same time there is a recognition of the democratic values of freedom of expression and freedom of the press. The wording of section 5 of the New Zealand Bill of Rights Act is employed – that interventions should be reasonable and demonstrably justified in a free and democratic society and that responses should be proportionate to the level of harm.

It is interesting to note that the proposed interventions should be flexible and able to adapt to changes in technology and media, the nature of harm and any future changes in societal values and expectations.


In many respects the proposals in this outline seem to be those of an overly protective State, developing broad concepts of harm and “safety” as criteria for interference with robust and often confronting expression. It is quite clear that the existing law is sufficient to address concerns about expressions such as threats of physical harm. However, the concept of harm beyond that is rather elusive. The problem was addressed in the Harmful Digital Communications Act 2015, which defines harm as “serious emotional distress”. But a broader scope seems to be applied to harm in the context of this review, exemplified by the concept of social cohesion. In addition, some of the categories of content must give rise to concern and may well create a tension between freedom of expression on one hand and elements of social cohesion on the other. One example is that of misinformation or disinformation, which seems to suggest that there is but one arbiter of accuracy of content, leaving little room for balanced discussion or opposing views. The arbiter of content could describe any opposing view as misinformation and thereby demonise, criminalise and ban the opposing view on the basis that opposition to the “party line” has an impact upon social cohesion.

A matter of concern for media law specialists as this review progresses must be the cumulative impact that content regulation initiatives may have on freedom of expression. I cite as examples proposals to address so-called “hate speech” and the Chief Censor’s report “The Edge of the Infodemic: Challenging Misinformation in Aotearoa.” These proposals, if enacted, will give legislative fiat to a biased form of expression without allowing for a contrary view, and demonstrate a concerning level of misunderstanding about the nature of freedom of expression (including the imparting and receiving of ideas) in a free and democratic society.

As matters stand content regulatory systems in New Zealand as discussed have some common features.

  • There is an established set of principles and guidelines that govern the assessment of content.
  • There is a complaints procedure that – as far as media organisations are concerned – involves an approach to the media organisation prior to making a complaint to the regulatory body.
  • There is a clear recognition of the importance of freedom of expression and the role of a free press in a democratic society.
  • In respect of censorship, the concept of “objectionable” is appropriately limiting, given first that the material may be banned or restricted and second that there may be criminal liability arising from possession or distribution of objectionable material.
  • Guiding principles are based primarily upon the public interest. The Content Review’s focus on social cohesion is more than a mere re-expression of the public interest concept.

One thing is abundantly clear. The difficulty that regulatory systems have at the moment surrounds continuing technological innovation. To some extent the New Zealand Media Council recognises that and has adapted accordingly. Otherwise there is little wrong with the processes that are in place – at least in principle. If complaints procedures are seen to be unwieldy they can be simplified. The public interest has served as a good yardstick up until now. It has been well-considered, defined and applied. It would be unfortunate to muddy the media standards and public discourse with a standard based on social cohesiveness, whatever that may be. Fundamentally the existing regulatory structures achieve the necessary balance between freedom of expression on the one hand and the protection of the public from objectionable content on the other. Any greater interference than there is at present would be a retrograde step.


What is Truth? Misinformation and the Edge of the Infodemic – A Commentary

“And what is ‘truth’? Is truth unchanging law? We both have truths. Are mine the same as yours?”

Jesus Christ Superstar

I have given some thought about releasing this paper. The reason for my hesitation is that it could be misinterpreted as a support piece for misinformation. That is not its purpose. My primary concern arises from what I perceive as a shift towards a State-based arbiter of truth and, as a corollary, the view that any perspective or opinion that does not conform to that “truth” is misinformation. In my view the issue of misinformation is a more nuanced one, and this paper argues that the solution to dealing with misinformation should be in the hands of individuals, who can make their own evaluations of the validity or otherwise of pieces of information before acting upon them. Some of the suggestions that have been made in the misinformation paper under discussion are extremely reasonable and sensible. What I am concerned about is the intrusion of the State into the area of belief and points of view. Freedom of thought (or conscience) has long been a cornerstone of liberal democracy.


Censorship is a controversial issue in a modern democratic and liberal society, although it has taken place in one form or another over the centuries. This has included art censorship from the strategically placed drapes on the magnificent Michelangelo frescoes on the ceiling of the Sistine Chapel to the controversy surrounding the Mapplethorpe photographic exhibition in New Zealand, film censorship from All Quiet on the Western Front[1] to Baise-Moi[2] and book censorship such as James Joyce’s masterpiece Ulysses[3] and more recently in New Zealand with Ted Dawe’s Into the River.[4]

Censorship challenges freedom of expression by imposing minimum standards of socially acceptable speech on the contemporary community. Under s 14 of the New Zealand Bill of Rights Act 1990 (Bill of Rights) everyone has the right to freedom of expression; a right as “wide as human thought and imagination”.[5] Censorship acts as an abrogation of that right, so how is freedom of expression under the Bill of Rights and censorship under the Act to be accommodated? Some guidance is available from the Court of Appeal in the case of Moonen v Film and Literature Board of Review[6].

Moonen held that there is a responsibility on the Classification Office and the Board of Review when carrying out their work to explain how a publication falls into the category of publications that Parliament has deemed objectionable. The Classification Office and the Board of Review must also demonstrate why classifying certain publications as objectionable is a demonstrably justified limitation on freedom of speech. Generally the Moonen approach is followed.

New Media Issues

But lately the Chief Censor, Mr. David Shanks, has been calling for a widening of his brief. At an Otago University conference on ‘Social Media and Democracy’ in March 2021, Mr. Shanks told the conference that the way we regulate media is not fit for the future.

“We can be better than this. I think there’s some very obvious moves that we can do here to make the current regulatory system and framework more coherent for a digital environment,” he said.[7]

The “Misinformation” Study

As part of an overall review of regulatory structures surrounding harmful information dissemination, the Government released a discussion paper on hate speech and at the same time the Chief Censor released a paper entitled “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”, which is in essence a survey of citizens’ concerns about misinformation. The internet and social media are identified as key sources – while experts and government are trusted more than news media.

The Chief Censor says it shows the need for urgent action. But the question must be asked – why? Do we need the government or some government agency to be the arbiter of truth? Are we so uncritical that we cannot discern misinformation from empirically based conclusions?

The concerns about new media are not new. Many of the criticisms of the Internet and social media levelled by the Chief Censor have been articulated in the past.  Speaking of newspapers Thomas Jefferson expressed an acidic concern that editors “fill their newspapers with falsehoods, calumnies and audacities”.[8]

What is seen as a problem seems to be a difficulty in accepting that there are as many opinions as there are people. One wonders whether the questions properly addressed the issues. The findings of the report must be concerning. New Zealanders tend to distrust online sources of information.

Only 12 percent had high trust in news and information from internet and social media users – and 83 percent think this group frequently spreads misinformation on purpose.

But 79 percent also said they get news or information from social media and also use it to verify information.

The report found New Zealanders have a relatively high level of trust in news and information from scientists, researchers or experts (78 percent) and government agencies and officials (64 percent).

Six out of 10 respondents reported high trust in New Zealand’s news media – a more favourable result than the responses recorded for overseas news media.

But these findings beg the question I have already raised. Are we talking about facts or are we talking about opinions? Even facts can be “spun” to fulfil a particular purpose and can be interpreted in a number of ways. The facts remain the same. The interpretations may differ. And this is important in a vibrant and developing society. The “truth” for one may not be a “truth” for another.

The concerns that the report advances have been derived from an extensive survey that has been conducted. The findings of the survey lead inexorably to the conclusion that “something must be done” and I would suggest that the “something” involves the control or monitoring of information. And it must be of concern that the self and statutorily described[9] censor is driving this.

So what does the report tell us? I state the findings; my observations, in italics, follow each one.

First, it is common for New Zealanders to see news and information they think is false or misleading. Opinions differ as to what counts as misinformation, but one topic identified as a source of misinformation surrounds Covid-19. Another concern is that this misinformation is influencing people’s views about things like politics, public health and environmental issues, and many see misinformation as an urgent and serious threat.

What is apparent from this concern is that misinformation is recognised. This would seem to suggest that those who contributed to the survey are still in possession of their reasoning and critical faculties and can distinguish valid information from rubbish. The volume of misinformation may drive a concern, but what does it threaten? This question seems to be unanswered.

But arising from this is another more fundamental issue, and one that I have already alluded to – what is misinformation? Is it a skewing of facts – something that politicians are skilled in, although for them it is called “spin” – or is it a statement of opinion? One wonders how many statements of opinion are taken as fact, especially if the reader or listener or viewer agrees with the opinion.

Secondly New Zealanders tend to distrust online sources of information generally, and this is especially true of social media. Many New Zealanders think social media users and corporations often spread false and misleading information intentionally. At the same time, the internet is the most popular source of news and information, while also being a reference point to verify, fact check or confirm this information.

The first point is a valid one. Do not implicitly trust everything that you see online. With a medium like the Internet – and social media platforms – everyone has a voice. Whereas mainstream media could be selective, had verification duties and are subject to rules about balance and a disciplinary process such as the Broadcasting Standards Authority or the NZ Media Council, social media does not. Thus it follows that statements by individuals on social media platforms should at least be taken with a grain of salt and should be subject to critical scrutiny and verification.

Whether online or offline, most New Zealanders tend to trust information from more traditional sources like government officials, scientists and the New Zealand news media. However, the research shows that people with higher trust in online only sources of information – and who use these sources more often – are more likely to express belief in statements associated with misinformation.

This probably says more about the critical faculties of those who rely on online sources for their information. And this goes to a lack of development of intellectual rigour that goes back to the education system, together with a level of naivete that would suggest that too many people accept anything without question or without careful analysis. It is not the source of the information that is to blame. It is the uncritical stance of the reader that is the problem.

The report then goes on to widen the problem with some rather sweeping generalisations.

Misinformation is widespread and affects everyone. This is true regardless of age, gender, ethnicity or other characteristics.

Subject to defining misinformation (which I discuss below) there is no doubt that all facets of information, true or false, are widespread. Does this affect everyone? If what is meant is “does everyone come into contact with misinformation” there is certainly that potential. But if the meaning of the word “affect” is to influence, I would have some quibble with the suggestion that misinformation influences everyone. Once again this has more to do with the critical analysis of information, but I consider this conclusion to be overly broad.

It’s relatively common for New Zealanders to express belief in at least some ideas that are linked to misinformation – ideas which are not backed by the best available evidence we have.

I would be very interested to see the evidence for this statement, and once again it speaks more to the naivete and lack of critical rigour on the part of the audience. And, of course, even a bad idea may be worth consideration, if only to analyse it and discard it. The problem I think lies in the use of the word “belief”, which suggests something other than an evidence-based or empirical conclusion.

When people rely on misinformation to make important decisions it can have a harmful impact on the health and safety of our communities. It can also affect us on a personal level, contributing to anxiety, anger, and mistrust.

Agreed. But the issue is the reliance that is placed on misinformation and once again – at risk of repeating myself ad nauseam – much depends upon the critical faculties and analysis employed by the audience. If people choose to make important decisions without properly analysing the source of the evidence supporting those decisions then that is a matter for them.

People often take action themselves in response to misinformation – such as searching different sources to see if information is accurate, looking at more established news sources, or talking about it with people they trust.

New Zealanders also see this as a societal problem that requires more action. They have differing views on who should do this and how. Many think government, news media and experts have the biggest role in dealing with the spread of misinformation, but that individual internet users and social media corporations also have an important role.

Many New Zealanders see the Government as the solution to problems. Rather, I consider that responsibility for ascertaining whether content is information or misinformation should lie in the hands of the recipient. I agree that individual internet users and social media users have a role – but it is not for the social media corporations to vet content or carry out some moderating activity over content. I base this comment on the fact that Internet-based information, and indeed the communications paradigm it has introduced, means that we must recognise that paradigm shift and consider regulatory solutions in light of it.

What is “Misinformation”?

The problem of “misinformation” and the concerns that are expressed in the report depend very much upon the definition of the term. The Report offers some brief definitions. There is a specific rider to the definitions offered which narrows the concept down to something that is potentially harmful. Other definitions are quite a bit wider.

The Report definitions are as follows:

Misinformation: false information that people didn’t create with the intention to hurt others.

Disinformation: false information created with the intention of harming a person, group, or organisation, or even a country.

Mal-information: true information used with ill intent.

The definitions set out are quite specific and share a similar characteristic and that is that the spread of the information (misinformation, disinformation or mal-information) is accompanied by a specific intention and that is to harm or hurt others[10].

The Report goes on to say

“Misinformation is nothing new, but there are increasing concerns worldwide about the prevalence of misinformation – especially online – and its potential to impact democracy, public health, violent extremism and other matters. We’ve seen how the spread of false and sometimes hostile misinformation and conspiracy theories continue to impact on our whānau and communities during the Covid-19 pandemic, and how extremist talking points and ideology can contribute to real-world violence such as the March 15 attacks in Christchurch.”

Misinformation is defined in the Oxford English dictionary as “false or erroneous information”, and as the report states, the existence of false or erroneous information is nothing new. Falsity implies that the communicator of the information is aware of the falsehood but perpetrates it nonetheless. Erroneous implies error or mistake which lacks the element of wilful deception.

Putting to one side the emotive reference to the March 15 attacks – and there is no evidence that the terrorist was influenced by misinformation – the concern that is expressed is that false, erroneous and sometimes hostile information and conspiracy theories have an impact. As it proceeds the Report seems to lose sight of the qualification that harm must be intended and seems to focus more upon the falsity or error of the information circulated.

Two issues arise from this. The first is that the recipient of information must be critical of the information received and subject it to analysis to determine whether it is “true” or “false”.

The second is that most information disseminated, especially across social media platforms, is opinion or “point of view”, which means that the disseminator is coming from a particular standpoint or is writing with a particular agenda. It would be incorrect for anyone to suggest that the opinion pieces in the New Zealand Herald by columnists such as Simon Wilson, Richard Prebble, Michael Cullen or Mike Hosking are anything else but that. They are interpretations of fact taken from a particular standpoint. It is up to the reader to determine first whether the facts are valid and secondly whether the opinion is therefore valid. Finally, if the answer to both questions is in the affirmative, there is nothing to compel the reader to accept the opinion. The reader is free to disagree with it.

An associated issue arises, and that is the guarantee of freedom of expression contained in section 14 of the New Zealand Bill of Rights Act 1990. The provisions of section 14 are wide. They refer to the imparting and receiving of information – thus widening the usual understanding of freedom of expression as merely the imparting of information. It is significant too that section 14 does not qualify the word “information”. There is no suggestion that the information must be true or that it cannot be “misinformation.”

Information is that which informs. To inform someone is to impart learning or instruction, to teach, or to impart knowledge of some particular fact or occurrence. The traditional meaning of information suggests an element of factual truth, and thus misinformation is erroneous or incorrect information. One interpretation of section 14 would use the traditional meaning of information, with its suggestion of fact-based truth. A wider interpretation would include material based on mistaken facts. And then, of course, there is the question of opinion, which is one person’s view of a certain set of circumstances.

But in the field of information, misinformation, fact and truth there will always be disputes. Some will be trivial. Others will be significant. Some may be wrong-headed. Others may be designed to mislead. Given these varieties of information, what is it proposed that we should do about what the report refers to as the “infodemic”?

An Internal Inconsistency?

The Infodemic paper contains the following critical acknowledgement.

Misinformation is not in and of itself illegal – and it would be impractical and counterproductive to make it so. It should not be unlawful to express a view or belief that is wrong, or that is contrary to prevailing evidence and opinion.

There are certain types of misinformation with which the law should be involved, such as information which promotes criminal or terrorist activity and which may fall within the existing ambit of the Films, Videos, and Publications Classification Act, the Human Rights Act or the Crimes Act.

These legal restrictions are perfectly legitimate. They are very limited and are justifiable limitations on the right of freedom of expression guaranteed by section 14 of the New Zealand Bill of Rights Act. But misinformation does not fall within their ambit, nor should it as acknowledged by the Report.

This then raises the issue – what is the problem? Is the raison d’etre for the paper to identify an issue and sound a warning? Or does it go further? The answer, in my opinion, lies in the latter. Realistically the paper recognises that misinformation will never be eliminated, nor should it be. But in keeping with Mr Shanks’ concerns expressed in 2019, the real target for stemming the infodemic lies in dealing with the disseminators – and by that I mean not the individuals who spread misinformation but the digital platforms that enable wide dissemination.

Addressing the Problem

I shall outline the proposals advanced by the Infodemic paper but would offer a note of caution. Some of the proposed solutions are based on existing regulatory or content-assessing models. They ignore some of the essential properties of digital systems, which make regulation in the Digital Paradigm a completely different exercise from existing regulatory models.

I have discussed the problems of regulation in the Digital Paradigm elsewhere and in some detail[11]. Suffice to say that to engage in any form of content control in the Digital Paradigm is difficult given that the dissemination of content is inextricably entwined with the medium of distribution.

Marshall McLuhan’s aphorism “The Medium is the Message” states the problem, albeit somewhat opaquely. To attempt to control the message one must first understand the medium. This is often overlooked in discussions about regulation in the Digital Paradigm. It is something of an exercise in futility to attempt to apply the models or standards that are applied for what is essentially mainstream media regulation. And to treat online platforms, irrespective of their size and market dominance, in the same way as “analog” or mainstream media platforms ignores the fact that online platforms occupy a paradigmatically different communications space from mainstream media platforms like newspapers, radio and television.

With that cautionary observation I shall consider the proposals in the Infodemic paper.

The report offers five possible avenues for dealing with what it refers to as the Infodemic.

  1. Informing and empowering New Zealanders – this solution is expressed in the report as a means by which misinformation about Covid-19 and vaccinations may be countered. Of course, from a general perspective this is a wider issue than just misinformation and conspiracy theories about the pandemic. Many New Zealanders are concerned about the impact of misinformation across a broad range of topics, including the environment and racial tolerance.

Some of this is based on mistrust of accurate sources of information and it is suggested that steps should be taken to help those who are affected by misinformation and conspiracy theories.

This, of course, is based on the assumption that there is an empirical basis which suggests that alternative views are wrong and should not be believed. And this harks back to the quotation at the beginning of this piece. Are my “truths” the same as yours?

The concern that I have about this proposal is the suggestion that there is but one truth, one “authorised version” to which adherence must be given. It may be easy to prove that a Covid vaccine is effective on the basis of scientific analysis and empirical proof. It may be less easy to prove matters which travel in the realms of faith and belief. And the problem with “authorised versions” is that they become the “approved version”, with the result that other “truths” may become sidelined and dismissed to the point where they become heretical.

  2. Education – this is a solution that I find appealing. Media literacy and critical thinking skills can help us sort fact from fiction and interpret information. These skills can also help build resilience in the community against misinformation.

A central government campaign could reach many people but is unlikely to influence people and communities who already have lower trust in government. And should it come from the government in any event – a government which may have its own political agenda?

Education in schools is also needed to empower and equip our young people to recognise and challenge misinformation. Our education system already aims to provide children and young people with the critical thinking skills necessary to navigate a complex world.

  3. Content Moderation and Industry Responsibility – Recent research suggests that misinformation travels through the internet much more rapidly than accurate information. This is one of the realities of internet-based information. In the same way that the printing press enabled the increased dissemination of information, so the Internet does this in an enhanced and exponential way.

The algorithms that select and promote posts and information on many social media and digital platforms often select information that is ‘high engagement’ – that is, information that attracts more comments, shares and likes. Misinformation can often be high engagement, as it can easily be more sensational, or generate stronger emotions. These algorithms, it should be observed, are also used by mainstream media who use online platforms and accounts for the “ranking” that reports may have on a news website.
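The ranking dynamic described above can be illustrated with a deliberately simplified sketch. This is a toy model, not any platform's actual algorithm: the field names and weights below are entirely hypothetical, chosen only to show how scoring posts by engagement alone surfaces sensational material ahead of sober reporting.

```python
# Toy illustration of engagement-based ranking. The weights (1, 2, 3) and
# post fields are hypothetical, for demonstration only.

def engagement_score(post):
    """Score a post purely by the reactions it attracts."""
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(posts):
    # Highest-engagement posts appear first, regardless of their accuracy.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "sober-report", "likes": 120, "comments": 10, "shares": 5},
    {"id": "sensational-claim", "likes": 80, "comments": 90, "shares": 60},
]

feed = rank_feed(posts)
print([p["id"] for p in feed])
```

Here the sensational post, with fewer likes but far more comments and shares, outranks the accurate report; no element of the score rewards truthfulness.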

Online platforms, other than those used by mainstream media outlets subject to the New Zealand Media Council, are not generally held to the same standards of accuracy, fairness and balance that newspapers, broadcasters or other news media are.

However, as I have suggested above, it is a mistake to attribute the responsibilities of mainstream media platforms to online platforms. They are paradigmatically different.

The first point is that content that is broadcast or published in mainstream media goes through an editorial process. Content that is posted on social media does not, nor should it be the duty of the provider of the platform to moderate another person’s content that has been posted.

The second point is that content moderation is a difficult process in the digital paradigm, given that social media platforms essentially handle large quantities of data that are only later rendered into some recognisable or comprehensible form. Of course, algorithms can and should be used to trap dangerous content that advocates violent harm or action.

It is suggested that there should be engagement with digital platforms in a co-ordinated way along with industry codes of practice which could result in a consistent set of expectations and approaches in New Zealand.

Once again this suggests a “one truth” solution, which creates difficulties in a society with a plurality of opinions.

One suggestion is for users to “call out” and report misinformation, but much depends on how this is done. The development of the “cancel culture” regrettably is intolerant of different strands of opinion and I fear that “calling out” is not the way to go. Rather engagement in rational debate and proposing an alternative would allow for the marketplace of ideas to come into play and separate the wheat from the chaff.[12]

  4. Regulation and Policy

Once again the proposal seeks to compare mainstream media with a paradigmatically different information system that is the Internet.

The statement is made as follows:

“While most misinformation is not illegal, much of it would be in breach of industry standards concerning accuracy. Such standards apply to broadcast services (under the Broadcasting Act), print media (under the standards administered by the New Zealand Media Council) and advertising (under the Advertising Standards Authority). Most of the broadcast and industry self-regulatory models were not set up to address the challenges presented by the digital age such as misinformation shared on platforms like Facebook or YouTube.”

Then it is suggested that a consistent regulatory approach across non-digital and digital misinformation alike is needed.

If I understand it correctly what is being suggested is that the regulatory approach applicable to mainstream media, which developed in an entirely different paradigm from digital media, should be applied across the board.

This ignores the fact that most if not all of the content on digital media, and especially social media, is user generated. In fact social media allows everyone who has an internet connection to have a voice. Whether or not any attention is paid to that voice is another matter. But within a democratic society, this opportunity has never before been available. And if one looks, for example, at an autocratic state such as the People’s Republic of China, with its severe restraints on freedom of expression and its extreme regulation of Internet content, the question must be asked – is that the road that we wish to travel?

  5. Research and Evaluation – The understanding of what needs to be researched and evaluated is becoming clearer, and this should be an ongoing process. The information environment will continue to evolve rapidly – often in ways no-one can predict. As new evidence emerges, interventions will change as well.

This solution seems to suggest that the reason for research and evaluation is to determine interventions and regulatory responses. This must be something of a concern in light of the comment earlier made that misinformation is not illegal and nor should it be.


There are two major issues that arise from the paper.

There is no doubt that misinformation can be problematical. It is, however, one of the attributes of a society that values diversity of opinion and points of view and that values and celebrates a plurality of beliefs.

Eroding Freedom of Expression?

In some respects it is difficult to discern the target in the misinformation paper. Clearly it has been inspired primarily by the conflicting information that has been swirling around about aspects of the Covid crisis. But there is more including references to the 15 March 2019 terror attacks and the various issues surrounding the introduction of 5G, QAnon and the United States polarised society and conspiracy theories.

But there seems to be a deeper issue and that surrounds calls that have been made to regulate the Internet or at least impose some restraints on the activities of social media platforms. Part of the problem with social media platforms is that they allow for a proliferation of a variety of opinions or interpretations of facts which may be unacceptable to many and downright opposed to the beliefs of others.

Governments and politicians, although they are great users of social media platforms, cannot abide a message contrary to their own. In a democracy such as New Zealand it is something with which they must live, although there is little hesitation at nibbling away at the edges of expressions of contrary opinion.

Characterising them as “misinformation” is a start down the road of demonisation of these points of view. At the same time, following the 15 March massacre, the Prime Minister of New Zealand instituted the “Christchurch Call” – an attempt to marshal international support for some form of Internet regulation. No laws have been passed as yet and social media organisations, seeing which way the wind is blowing, have made certain concessions. But it is, in the minds of many, still not enough.

In New Zealand a review of media regulatory structures lies behind the “misinformation” study along with the ill-considered and contradictory proposals about “hate speech”. The assault on freedom of expression or contrarianism is not a frontal one – it is subtle and gradual but it is there nonetheless. It is my opinion that the real target of the “misinformation” study is not “misinformation” but rather the expression of contrary points of view – however misguided they might be. And that is a form of censorship and it is therefore not surprising that this move should come from the Chief Censor.

A Democratic Solution

It would be to tread a dangerous path to place the determination of “good information” and “bad information” in the hands of the government or a government organisation. Only the most extreme examples of misinformation which may do demonstrable harm such as objectionable material or terrorist information should be subject to that level of moderation. To add “misinformation” as a general category without precise definition to the sort of material that is objectionable under the Films, Videos and Publications Classification Act would be a retrograde and dangerous step.

There is already a form of content moderation in place, run through the Department of Internal Affairs which makes a filter available to Internet Service Providers to block certain content.[13]
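As a purely illustrative sketch (the real system's mechanics are not public in this detail, and the hostname and matching rule below are hypothetical), a blocklist-style filter of this kind reduces to checking each requested host against a maintained list:

```python
# Toy sketch of blocklist-style filtering of the general kind a
# filtering service offers to participating ISPs. The blocklist
# contents and matching rule here are entirely hypothetical.

BLOCKLIST = {"blocked.example.org"}  # hypothetical blocked hostname

def is_blocked(hostname: str) -> bool:
    # A request to a listed host would be diverted rather than served.
    return hostname.lower() in BLOCKLIST

print(is_blocked("blocked.example.org"))  # True
print(is_blocked("news.example.com"))     # False
```

The significant policy question is not the mechanism, which is simple, but who decides what goes on the list.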

Of the proposals suggested above, it will be apparent that I favour as little interference with online platforms as possible. I do not support anything more than minimal interference with content that is not demonstrably harmful, and I am of the view that what people wish to accept as “truth” should be left to the individual to judge for himself or herself.

The problem of “misinformation” has been heightened by the conflicting points of view surrounding the Covid crisis (indeed, the paper itself picks up on this by describing the misinformation problem as an “infodemic”), the 2020 US Presidential election and some of the conspiracy theories that have been circulating courtesy of QAnon and the like.

But it is not a problem that warrants government or regulatory interference and indeed it should be noted that the Department of Internal Affairs review of media and online content regulation focusses upon content that is harmful.

Misinformation may misinform, but much depends upon the reader’s or listener’s willingness to stand apart and subject the content to critical analysis. The problem, however, is that many people believe what they want to believe, and their truths may not be those held by their neighbours.

[1] Chris Watson and Roy Shuker In the Public Good? Censorship in New Zealand (Dunmore Press, Palmerston North, 1998) at 35.

[2] Re Baise-Moi [2005] NZAR 214 (CA).

[3] United States v One Book Called “Ulysses” 5 F Supp 182 (SD NY 1933); United States v One Book Entitled Ulysses by James Joyce (Random House Inc, Claimant) 72 F 2d 705 (2d Cir 1934).

[4] Re Into the River Film and Literature Board of Review, 14 October 2015.

[5] Moonen v Film and Literature Board of Review [2000] 2 NZLR 9 (CA) at [15].

[6] [2002] 2 NZLR 754 (CA)

[7] “Battle Against Online Harm beefs up censor’s power” Mediawatch, 21 March 2021.

[8] He also stated on another occasion “Were it left to me to decide whether we should have a government without newspapers, or newspaper without government, I should not hesitate a moment to prefer the latter”

[9] Films, Videos, and Publications Classification Act 1993, section 80(1).

[10] In some respects this resembles the types of actionable digital communication under the Harmful Digital Communications Act 2015. In both the civil and criminal spheres under the Act there must be harm, which is defined as serious emotional distress. The report does not go into specifics about what is required to hurt or harm others.

[11] See David Harvey Collisions in the Digital Paradigm: Law and Rulemaking in the Internet Age” (Hart Publishing, Oxford 2017) esp. at Chapter 2.

[12] As at the time of writing it should be noted that a comprehensive review of media content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

It correctly observes “Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the media content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.” See (Last accessed 9 July 2021)

[13] (Last Accessed 9 July 2021)

Information Governance and E-Discovery

In May of 2015 I had the pleasure and honour of sharing the stage with Chief Magistrate Judge Elizabeth Laporte at the IQPC 10th Anniversary Information Governance and eDiscovery Summit held at the Waldorf Hilton in London. The session was chaired by Chris Dale of the excellent and continually informative eDisclosure Information Project and addressed the global impact of eDiscovery and Information Governance within the context of data collection for cross-border cases.

The session was allocated a generous ninety minutes of Conference time, starting shortly after 9:00 pm. This enabled the presenters to make a brief presentation on issues that appeared to be relevant to the general topic. My presentation addressed common themes present in the e-discovery regimes in the APAC region – Australia, Singapore, Hong Kong and New Zealand.

Following the presentations Chris led a discussion that covered a wide range of discovery and disclosure issues. Among the matters considered was the approach of the US Courts, recently exemplified by the case of In the Matter of a Warrant to Search a Certain E-mail Account Controlled and Maintained by Microsoft (District Court SDNY M9-150/13-MJ-2814, 29 August 2014, Judge Preska).

One of the common themes emerging from this discussion was that although a local court may purport to exercise “long arm” jurisdiction in the case of content located off-shore, compliance with local data disclosure requirements may come into play, rendering the disclosing party liable to possible sanctions if compliance is not forthcoming.

Another issue that we discussed was the need for lawyers to understand and appreciate the way in which technology is used in developing areas of law, and in the ediscovery field in particular. In the United States an understanding of technology is a prerequisite for competence to practise in some areas and, as we move further and deeper into the Digital Paradigm, I consider this to be an absolute necessity. And not only for lawyers. More and more cases involve aspects of technology, either as the subject matter of the dispute or as an aspect of the evidence that is placed before the Court. It is essential that Judges have a working knowledge of some of the more common information technologies. This is something of a contentious issue, for there is a school of thought that suggests that judicial understanding of technology can be reached per medium of expert evidence. That proposition may have a limited degree of validity in the case of the subtle aspects of the workings of the technology, but should not extend to constant and repetitious explanations of the general way in which a packet-switching network operates, or the nature of email metadata and how it operates.

The Conference was a valuable one. The sessions were extremely interesting and highly relevant, all of them presented by experts in the field. I am grateful to the organisers for inviting me and to Chris Dale for his excellent Chairmanship of our session and his insightful discussion management.

I prepared a paper for the Conference delegates and a copy is available on Scribd or may be perused here.