Participation, Knowledge and Publication

This post is about the defamation case of Fairfax Media Publications Pty Ltd v Voller [2021] HCA 27 (8 September 2021). The issue was whether Fairfax was a publisher of comments made by third parties on its Facebook page for the purposes of defamation proceedings.

In essence, the decision confirms that an organisation or person opening a site or post to comments by others may be liable for any defamation in the comments that others then make. Larger organisations may be able to track, vet and remove problematic posts quickly, but for individuals and organisations without continuous site monitoring, the risk from third-party posts may be more difficult to control or mitigate.

It should be noted that the case states the law as applicable in Australia. The New Zealand Courts have taken a different path. However, if the publication has taken place in Australia (under the holding in Dow Jones v Gutnick), an offshore organisation or person operating a site where third-party comments are posted could be subject to the jurisdiction of the Australian Courts.

Fairfax Publications, in addition to publishing newspapers, maintained a public Facebook page to which they posted news content and hyperlinks to other material on their site. They invited comment from members of the public who were Facebook users.

The respondent claimed that following the appellants posting about particular news stories referring to him, including posts concerning his incarceration in a juvenile justice detention centre in the Northern Territory, a number of third-party Facebook users responded with comments that were defamatory of him. He alleged that the appellants were liable as the publishers of those comments.

The question, tried as a separate question, was whether the respondent, the plaintiff in the proceedings, had established the publication element of the cause of action of defamation against the defendant[s] in respect of each of the Facebook comments by third-party users. The appellants took the view that a negative answer to that separate question would result in dismissal of the proceedings.

The Facebook page used by the appellant was managed by a Page administrator, the person or persons authorised by the appellant to administer it in accordance with Facebook’s terms of use.

There was evidence before the primary judge, which was largely uncontentious, that an administrator could prevent, or block, the posting of comments by third parties through various means, although the Facebook platform did not allow all posts on a public Facebook page to be blocked.

Individual comments could be deleted after they were posted but this would not prevent publication. It was possible to “hide” most comments, through the application of a filter, which would prevent publication to all except the administrator, the third-party user who posted the comment and their Facebook “friends”.

Hidden comments could then be individually assessed by an administrator. If sufficient staff were allocated to perform this task, comments could be monitored and “un-hidden” if approved by an administrator.
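
To make that mechanism concrete, the workflow described in the evidence can be sketched in a few lines of Python. This is a hypothetical illustration only, not Facebook's actual system or API; the function name, the word-filter behaviour and the visibility states are assumptions made solely to show how filter-based hiding followed by administrator review would operate.

```python
# Hypothetical sketch of the comment-moderation workflow described above.
# Illustrative only: this does not use Facebook's actual API, and all
# names and behaviour here are assumptions.

HIDDEN = "hidden"    # seen only by the administrator, the commenter
                     # and the commenter's Facebook "friends"
VISIBLE = "visible"  # seen by all visitors to the page

def apply_word_filter(comment: str, filtered_words: set[str]) -> str:
    """Hide any comment containing a filtered word, pending admin review."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return HIDDEN if words & filtered_words else VISIBLE

# Because very common words appear in almost every sentence, filtering on
# them would hold back substantially all new comments until an
# administrator assessed and "un-hid" those that were approved.
filter_list = {"a", "an", "the", "is", "it"}
print(apply_word_filter("The story gets it wrong", filter_list))  # hidden
print(apply_word_filter("Disgraceful!", filter_list))             # visible
```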

The Court observed that the number of comments is an important aspect of the use of a public Facebook page, because comments increase the profile and popularity of the page, which in turn increases the readership of the digital newspaper or broadcast, and thus, the revenue from advertising on both the page and the digital newspaper or broadcast.

The argument for the appellants was that to be publishers they had to be instrumental in, or a participant in, the communication of the alleged defamatory matter. They argued that they did not make the defamatory comments available to the public, did not participate in their publication and merely administered the Facebook page on which third parties published material. On this view they were more closely analogous to the supplier of paper to a newspaper owner or the supplier of a computer to an author.

They also argued that they were akin to the owner of premises upon which defamatory material had been posted or written – graffiti was given as an example – and that in cases of this kind there had to be knowledge, an awareness, of the statements, coupled with a decision to allow them to remain in place.

However, it was observed that in the Hong Kong case of Oriental Press Group Ltd v Fevaworks Solutions Ltd (2013) 16 HKCFAR 366 it had been held that internet platform providers which hosted a discussion forum were in a different position from the occupiers referred to in those cases. Unlike the occupiers, the providers had encouraged and facilitated postings by members of the forum and were therefore held to be participants in their publication from the outset.

Before the High Court it was argued for the appellants that the common law requires that the publication of defamatory matter be intentional. It is not sufficient that a defendant merely plays a passive instrumental role in the process of publication. To be a publisher a person must intend to communicate the matter complained of, which is to say the relevant words. This, it was argued, followed from the decisions in Webb v Bloch (1928) 41 CLR 331 at 363-4 and Trkulja v Google LLC (2018) 263 CLR 149 at 163 [68].

The High Court rejected the suggestion that publication requires knowledge of the defamatory matter or a deliberate act of allowing it to remain. Following the rationale in Dow Jones v Gutnick (2002) 210 CLR 575 at 600 [26], it held that publication may be understood as the process by which a defamatory statement or imputation is conveyed. A publisher’s liability does not depend upon their knowledge of the defamatory matter being communicated or their intention to communicate it. It depends on the mere communication of the defamatory matter to a third person. As was held in Lee v Wilson & McKinnon (1934) 51 CLR 276, no question as to the knowledge or intention of the publisher arises.

The High Court revisited Trkulja and confirmed that the correct meaning of publication is that any act of participation in the communication of defamatory matter to a third party is sufficient to make a defendant a publisher. A person who has been instrumental in, or contributes to any extent to, the publication of defamatory matter is a publisher. All that is required is a voluntary act of participation in its communication.

The time of participation in publication is critical. The High Court considered the line of cases commencing with Byrne v Deane [1937] 1 KB 818 which concerned the placing of an alleged defamatory verse on the wall of a golf club. The rules of the club required the consent of the Secretary to the posting of any notice in the club premises. The Court in that case noted that publication was a question of fact, depending on the circumstances of each case. Cases were referred to where persons who had taken no overt part in the publication of defamatory matter nevertheless adopted and promoted its reading so as to render themselves liable for its publication.

In Byrne v Deane there was evidence which tended to show that the actions of both defendants, as directors of the golf club, fell into this latter category. By electing to leave the alleged libel on the wall of the club, having had the power to remove it, they were taken to have consented to its continued publication to each member who saw it.

The High Court concluded that such cases do not establish a different rule for publication based on the intention of the occupiers. These cases involve the application of the general rule of publication to a particular set of circumstances where a person who has not participated in the primary act of publication may nevertheless become a publisher.

The time when the occupier becomes aware of the publication of the material marks the point from which the occupier’s conduct or inaction is assessed to determine whether they can be said to have participated in the continuing publication. Cases of this kind – such as Oriental Press Group Ltd v Fevaworks Solutions Ltd (2013) 16 HKCFAR 366 (internet platform providers) and Murray v Wishart [2014] 3 NZLR 722 (hosts of a Facebook page) – are not useful to explain the involvement of others in publications in very different circumstances and were held not to be of assistance in this case.

It should be observed that the High Court decision was a majority one. Three judges (Kiefel CJ, Keane and Gleeson JJ) concurred in one set of reasons.

Two judges (Gageler and Gordon JJ) wrote a separate opinion. That opinion extensively reviewed many of the overseas authorities on the issue of publication.

The Judges stated:

“Murray v Wishart was a decision in which the New Zealand Court of Appeal held that an individual Internet user who was the administrator of a private Facebook page and who had no “actual knowledge” of the contents of third-party comments posted on the page was not liable in defamation. The Court of Appeal proceeded without reference to Webb v Bloch, and indeed without analysis of what constitutes publication at common law. Rather, the starting point for its analysis was that the issue of publication was to be determined by “strained analogy” with previously decided cases. It appeared to assume that either actual or constructive knowledge of the defamatory content was necessary for publication. Its ultimate conclusion that “the actual knowledge test should be the only test to determine whether a Facebook page host is a publisher” was reached having regard to the guarantee of freedom of expression in the New Zealand Bill of Rights Act 1990 (NZ). The reasoning does not reflect the common law of Australia.”

They also noted the decision of Oriental Press Group Ltd v Fevaworks Solutions Ltd, where the issue was not as to publication but as to whether the common law defence of innocent dissemination was available to the respondents, who administered a website which hosted an Internet discussion forum on which users posted defamatory matter. Before turning to resolve that issue, Ribeiro PJ said of the respondents:

“They were certainly publishers of those postings (and do not seek to argue otherwise) since they provided the platform for their dissemination, but the respondents were not aware of their content and realistically, in a many-to-many context, did not have the ability or opportunity to prevent their dissemination, having learned of them only after they had already been published by their originators.”

Such an observation would have to be obiter, albeit strong, given that publication was not the issue for decision.

Two other members of the High Court (Edelman and Steward JJ) dissented from the finding, Steward J referring to the public meeting analogy in Murray v Wishart and holding that the actions of the appellants were insufficient to amount to being instrumental in the publication.

Edelman J did not hold that the appellants were not publishers. Rather, his view was that to establish publication in respect of each of the Facebook comments by third-party users, it was necessary to show that the comment had a connection to the subject matter posted by the defendant that was more than remote or tenuous.

Steward J considered that the respondent had to establish the publication element in relation to the third party comments which had been procured, provoked or conduced by posts made by the appellants on their respective Facebook pages. It should be noted that the dissenters were not prepared to hold that publication had not taken place. In some cases they felt that someone posting material online might do so in a way that is more culpable than a meeting-organiser and be justifiably considered a “publisher” – for example posting highly controversial material in the hope that algorithmic forces might increase readership and exposure.

In this respect the analogy of a public meeting may be of reduced relevance, given the “dark forces” of algorithms that drive content in certain directions, thus altering the “content neutral” image of the public meeting.

Comment

The effect of the majority decision is that responsibility for publication of third party comments does not depend upon actual knowledge of the presence of the defamatory material. The knowledge element only becomes relevant in determining participation in continued publication.

As observed, the law in Australia differs from that in New Zealand. The Court of Appeal in Murray v Wishart has made it clear that an actual knowledge test is necessary for publication. Common to both jurisdictions is that the publication enquiry is a fact-specific one.

Certainly the case has prompted change. Even before the High Court of Australia decision, Facebook changed its comments functionality in 2021 to allow users greater control.

Concern has been expressed that the decision may alter the defamation landscape. Australian Courts have adopted a much more “plaintiff friendly” approach to the definition of the elements of defamation – see Dow Jones v Gutnick, Duffy v Google [2015] SASC 170 and Trkulja v Google. Given the way in which the law has developed in New Zealand, it seems unlikely that the gap between the two legal positions will close in the near future. But there could well be a change in focus if a more granular examination of the operation of the technology of the platform is carried out and it is that, rather than questionable analogies, that drives the decision.


The Content Regulatory System Review – An Overview

Lockdown has its benefits. For some time I have been asked whether or not I would contemplate a 5th edition of “internet.law.nz – selected issues.” After 4 editions, including a revised 4th edition, my inclination had been that I had written enough on the subject, but a review of the 4th edition together with a review of what I had written in other fora persuaded me that a 5th edition might be a possibility. Lockdown has given me the perfect opportunity to research and write in the comparative peace and solitude that accompanies Alert Level 4.

The approach that I propose will be different from what has gone before, although much of the material in earlier editions will be present. But the focus and the themes that I want to examine differ. I am interested in the regulatory structures that are being applied to the online environment and in particular I am interested in the area of content regulation. This involves a number of areas of law, not the least of which is media law and there is quite an overlap between the fields of media law and what could loosely be termed cyberlaw.

What I am trying to do is examine the law as it has developed, as it presently applies, and the shape it may take in the future. In this last objective I am often assisted by proposals that governments have put forward for discussion, or proposed legislation that is before the House.

In this piece I consider a review of content regulation. The proposal, which was announced on 8 June 2021, is extremely broad in scope and is intended to cover content regulation proposals and mechanisms in ALL media – an ambitious objective. What follows are my initial thoughts. I welcome, as always, feedback or comments in the hope that the finished product will be a vast improvement on what is presently before you.

The Proposals

A comprehensive review of content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. 

The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of content, regardless of how it is delivered.

The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.

Content is described as any communicated material (for example video, audio, images and text) that is publicly available, regardless of how it is communicated.

The need for the review arises from a recognition of media convergence. The review outline states that the ongoing evolution of digital media has resulted in significant and growing potential for New Zealanders to be exposed to harmful content. This was made evident by the livestreaming and subsequent uploading of the Christchurch terror attack video.

Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.

The increase in the potential for New Zealanders to be exposed to harmful content is compounded by the complexity of the regulatory system. Different rules apply for content hosted across media channels. This increases difficulty for New Zealanders when deciding what content is appropriate for them and their children and creates confusion on where to report harmful content. 

There is also an uneven playing field for media providers as some types of media are subject to complicated regulatory requirements and some to no regulations at all.

The introduction to the review notes that New Zealand’s current content regulatory system is made up of the Films, Videos, and Publications Classification Act 1993, the Broadcasting Act 1989 and voluntary self-regulation (including the New Zealand Media Council and Advertising Standards Authority). The Office of Film and Literature Classification and the Broadcasting Standards Authority are statutory regulators under their respective regimes. 

New Zealand’s content regulatory system seeks to prevent harm from exposure to damaging or illegal content. It does this through a combination of classifications and ratings to provide consumer information, and standards to reflect community values. These tools are designed to prevent harm from people viewing unwanted or unsuitable content, while protecting freedom of expression.

What is proposed is a broad, harm minimisation-focused review of New Zealand’s media content regulatory system which will contribute to the Government’s priority of supporting a socially cohesive New Zealand, in which all people feel safe, have equal access to opportunities and have their human rights protected, including the rights to freedom from discrimination and freedom of expression. 

The objective of social cohesion was one of the strong points made by the Royal Commission on the 15 March 2019 tragedy in Christchurch.

The review recognises that a broad review of the media content regulatory system has been considered by Ministers since 2008 but has never been undertaken. Instead, piecemeal amendments to different frameworks within the system have been made to address discrete problems and gaps.

The problems posed by the Digital Paradigm and media convergence, coupled with the democratisation of media access, have, in the view expressed in the briefing paper, resulted in significant and growing potential for New Zealanders to be exposed to harmful media content. Our existing regulatory frameworks are based around the media channel or format by which content is made available and do not cover many digital media channels. This model does not reflect a contemporary approach where the same content is disseminated across many channels simultaneously. As a result, it provides protection for a decreasing proportion of media content that New Zealanders experience. This means that New Zealanders are now more easily and frequently exposed to content they might otherwise choose to avoid, including content that may pose harm to themselves, others, and society at large.

What is proposed is a harm-minimisation focused review of content regulation. This review will aim to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered. The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press. The threshold for justifying limitations on freedom of expression will remain appropriately high.

Given the emphasis on social cohesion it is not unexpected that the Review is part of the Government’s response to the March 2019 Christchurch terrorist attack, including the Christchurch Call and responding to the Royal Commission of Inquiry into the terrorist attack on Christchurch masjidain.

It is noted that in addition to the formal structures under the Films, Videos, and Publications Classification Act and the Broadcasting Act, and the voluntary self-regulatory structures such as the Media Council and the Advertising Standards Authority, there are the provisions of the Harmful Digital Communications Act and the Unsolicited Electronic Messages Act. These structures, it is suggested, are unable to respond to the harms arising from contemporary digital media content, for example social media. The internet has decentralised the production and dissemination of media content, and a significant proportion of that content is not captured by the existing regulatory system.

Examples of the harmful media content affecting New Zealanders are:

  • adult content that children can access, for example online pornography, explicit language, violent and sexually explicit content
  • violent extremist content, including material showing or promoting terrorism
  • child sexual exploitation material
  • disclosure of personal information that threatens someone’s privacy
  • promotion of self-harm
  • mis/disinformation
  • unwanted digital communication
  • racism and other discriminatory content
  • hate speech

What is proposed is a harm-minimisation focused review of content regulation, with the aim of creating a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of all media content. The regulatory framework will balance the need to reduce harm with protecting democratic freedoms, including freedom of expression and freedom of the press. The framework will allocate responsibilities between individuals, media content providers, and Government for reducing harm to individuals, society and institutions from interacting with media. The framework will be platform-neutral in its principles and objectives; however, it will need to enable different approaches to reaching these objectives, spanning Government, co-regulatory and self-regulatory approaches. It will also include a range of regulatory and non-regulatory responses.

The following principles are proposed to guide the review:

a. Responsibilities to ensure a safe and inclusive media content environment should be allocated between individuals, media content service providers (analogue, digital and online providers), and Government;

• Individuals should be empowered to keep themselves safe from harm when interacting with media content;

• Media content service providers should have responsibilities for minimising harms arising from their services;

• Government responses to protect individuals should be considered appropriate where the exercise of individual or corporate responsibility cannot be sufficient. For example:

• Where there is insufficient information available to consumers about the risk of harm;

• Where individuals are unable to control exposure to potentially harmful media content;

• Where there is an unacceptable risk of harm because of the nature of the media content and/or the circumstances of the interaction (e.g. children being harmed by media content interactions);

b. Interventions should be reasonable and able to be demonstrably justified in a free and democratic society. This includes:

  • Freedom of expression should be constrained only where, and to the extent, necessary to avoid greater harm to society
  • The freedom of the press should be protected
  • The impacts of regulations and compliance measures should be proportionate to the risk of harm;

c. Interventions should be adaptive and responsive to:

• Changes in technology and media;

• Emerging harms, and changes to the scale and severity of existing harms;

• Future changes in societal values and expectations;

d. Interventions should be appropriate to the social and cultural needs of all New Zealanders and, in particular, should be consistent with:

• Government obligations flowing from te Tiriti o Waitangi;

• Recognition of and respect for te ao Maori and tikanga; and

e. Interventions should be designed to maximise opportunities for international coordination and cooperation.

It will be noted that the proposed review and the principles guiding it are wide-ranging. It seems that the objective may be the establishment of a single content regulatory system that allows for individual responsibility in accessing content and media responsibility for minimising harm, but with a level of State intervention where the steps taken by individuals or media providers may be insufficient. The guiding principle seems to be that of harm.

At the same time there is a recognition of the democratic values of freedom of expression and freedom of the press. The wording of section 5 of the New Zealand Bill of Rights Act is employed – that interventions should be reasonable and demonstrably justified in a free and democratic society and that responses should be proportionate to the level of harm.

It is interesting to note that the proposed interventions should be flexible and able to adapt to changes in technology and media, the nature of harm and any future changes in societal values and expectations.

Commentary

In many respects the proposals in this outline seem to be those of an overly protective State, developing broad concepts of harm and “safety” as criteria for interference with robust and often confronting expression. It is quite clear that the existing law is sufficient to address concerns about expression such as threats of physical harm. However, the concept of harm beyond that is rather elusive. The problem was addressed in the Harmful Digital Communications Act 2015, which defines harm as “serious emotional distress”. But a broader scope seems to be applied to harm in the context of this review, exemplified by the concept of social cohesion. In addition, some of the categories of content must give rise to concern and may well create a tension between freedom of expression on the one hand and elements of social cohesion on the other. One example is that of misinformation or disinformation, which seems to suggest that there is but one arbiter of the accuracy of content, leaving little room for balanced discussion or opposing views. The arbiter of content could describe any opposing view as misinformation and thereby demonise, criminalise and ban the opposing view on the basis that opposition to the “party line” has an impact upon social cohesion.

A matter of concern for media law specialists as this review progresses must be the cumulative impact that content regulation initiatives may have on freedom of expression. I cite as examples proposals to address so-called “hate speech” and the Chief Censor’s report “The Edge of the Infodemic: Challenging Misinformation in Aotearoa.” These proposals, if enacted, would give legislative fiat to a biased form of expression without allowing for a contrary view, and demonstrate a concerning level of misunderstanding about the nature of freedom of expression (including the imparting and receiving of ideas) in a free and democratic society.

As matters stand, the content regulatory systems in New Zealand discussed above have some common features.

  • There is an established set of principles and guidelines that govern the assessment of content.
  • There is a complaints procedure that – as far as media organisations are concerned – involves an approach to the media organisation prior to making a complaint to the regulatory body.
  • There is a clear recognition of the importance of the freedom of expression and the role of a free press in a democratic society.
  • In respect of censorship, the concept of “objectionable” is appropriately limiting, given first that the material may be banned or restricted and second that there may be criminal liability arising from possession or distribution of objectionable material.
  • Guiding principles are based primarily upon the public interest. The Content Review’s focus on social cohesion is more than a mere re-expression of the public interest concept.

One thing is abundantly clear. The difficulty that regulatory systems face at the moment is continuing technological innovation. To some extent the New Zealand Media Council recognises that and has adapted accordingly. Otherwise there is little wrong with the processes that are in place – at least in principle. If complaints procedures are seen to be unwieldy they can be simplified. The public interest has served as a good yardstick up until now. It has been well considered, defined and applied. It would be unfortunate to muddy media standards and public discourse with a standard based on social cohesiveness, whatever that may be. Fundamentally the existing regulatory structures achieve the necessary balance between freedom of expression on the one hand and the protection of the public from objectionable content on the other. Any greater interference than there is at present would be a retrograde step.