“And what is ‘truth’? Is truth unchanging law? We both have truths. Are mine the same as yours?”
Jesus Christ Superstar
I have given some thought to releasing this paper. The reason for my hesitation is that it could be misinterpreted as a support piece for misinformation. That is not its purpose. My primary concern arises from what I perceive as a shift towards the State as the arbiter of truth and, as a corollary, the view that any perspective or opinion that does not conform to that “truth” is misinformation. In my view the issue of misinformation is a more nuanced one, and this paper argues that the solution to dealing with misinformation should be in the hands of individuals, who can make their own evaluations of the validity or otherwise of pieces of information before acting upon them. Some of the suggestions that have been made in the misinformation paper under discussion are extremely reasonable and sensible. What I am concerned about is the intrusion of the State into the area of belief and points of view. Freedom of thought (or conscience) has long been a cornerstone of liberal democracy.
Introduction
Censorship is a controversial issue in a modern democratic and liberal society, although it has taken place in one form or another over the centuries. This has included art censorship from the strategically placed drapes on the magnificent Michelangelo frescoes on the ceiling of the Sistine Chapel to the controversy surrounding the Mapplethorpe photographic exhibition in New Zealand, film censorship from All Quiet on the Western Front[1] to Baise-Moi[2] and book censorship such as James Joyce’s masterpiece Ulysses[3] and more recently in New Zealand with Ted Dawe’s Into the River.[4]
Censorship challenges freedom of expression by imposing minimum standards of socially acceptable speech on the contemporary community. Under s 14 of the New Zealand Bill of Rights Act 1990 (Bill of Rights) everyone has the right to freedom of expression; a right as “wide as human thought and imagination”.[5] Censorship acts as an abrogation of that right, so how are freedom of expression under the Bill of Rights and censorship under the Act to be reconciled? Some guidance is available from the Court of Appeal in the case of Moonen v Film and Literature Board of Review.[6]
Moonen held that there is a responsibility on the Classification Office and the Board of Review when carrying out their work to explain how a publication falls into the category of publications that Parliament has deemed objectionable. The Classification Office and the Board of Review must also demonstrate why classifying certain publications as objectionable is a demonstrably justified limitation on freedom of speech. Generally the Moonen approach is followed.
New Media Issues
But lately the Chief Censor, Mr. David Shanks, has been calling for a widening of his brief. At an Otago University conference about ‘Social Media and Democracy’ in March 2021, Mr. Shanks told the conference the way we regulate media is not fit for the future.
“We can be better than this. I think there’s some very obvious moves that we can do here to make the current regulatory system and framework more coherent for a digital environment,” he said.[7]
The “Misinformation” Study
As part of an overall review of regulatory structures surrounding harmful information dissemination, the Government released a discussion paper on hate speech and at the same time the Chief Censor released a paper entitled “The Edge of the Infodemic: Challenging Misinformation in Aotearoa”, which in essence is a survey of how concerned citizens are about misinformation. The internet and social media are identified as key sources of misinformation, while experts and government are trusted more than news media.
The Chief Censor says it shows the need for urgent action. But the question must be asked – why? Do we need the government or some government agency to be the arbiter of truth? Are we so uncritical that we cannot discern misinformation from empirically based conclusions?
The concerns about new media are not new. Many of the criticisms of the Internet and social media levelled by the Chief Censor have been articulated in the past. Speaking of newspapers Thomas Jefferson expressed an acidic concern that editors “fill their newspapers with falsehoods, calumnies and audacities”.[8]
What is seen as a problem seems to be a difficulty in accepting that there are as many opinions as there are people. One wonders whether the questions properly addressed the issues. The findings of the report are, on their face, concerning. New Zealanders tend to distrust online sources of information.
Only 12 percent had high trust in news and information from internet and social media users – and 83 percent think this group frequently spreads misinformation on purpose.
But 79 percent also said they get news or information from social media and also use it to verify information.
The report found New Zealanders have a relatively high level of trust in news and information from scientists, researchers or experts (78 percent) and government agencies and officials (64 percent).
Six out of 10 respondents reported high trust in New Zealand’s news media – a more favourable result than the responses recorded for overseas news media.
But these findings raise the question I have already posed. Are we talking about facts or are we talking about opinions? Even facts can be “spun” to fulfil a particular purpose and can be interpreted in a number of ways. The facts remain the same. The interpretations may differ. And this is important in a vibrant and developing society. The “truth” for one may not be a “truth” for another.
The concerns that the report advances have been derived from an extensive survey. The findings of the survey lead inexorably to the conclusion that “something must be done”, and I would suggest that the “something” involves the control or monitoring of information. And it must be of concern that the self-described and statutorily designated[9] censor is driving this.
So what does the report tell us? I state the findings; my observations, in italics, follow each one.
First, it is common for New Zealanders to see news and information they think is false or misleading. Opinions differ as to what counts as misinformation, but one topic identified as a source of misinformation surrounds Covid 19. Another concern is that this misinformation is influencing people’s views about things like politics, public health and environmental issues, and many see misinformation as an urgent and serious threat.
What is apparent from this concern is that misinformation is recognised. This would seem to suggest that those who contributed to the survey are still in possession of their reasoning and critical faculties and can distinguish valid information from rubbish. The volume of misinformation may drive a concern, but what does it threaten? This question seems to be unanswered.
But arising from this is another, more fundamental issue and one that I have already alluded to – what is misinformation? Is it a skewing of facts – something that politicians are skilled in, although for them it is called “spin” – or is it a statement of opinion? One wonders how many statements of opinion are taken as fact, especially if the reader or listener or viewer agrees with the opinion.
Secondly New Zealanders tend to distrust online sources of information generally, and this is especially true of social media. Many New Zealanders think social media users and corporations often spread false and misleading information intentionally. At the same time, the internet is the most popular source of news and information, while also being a reference point to verify, fact check or confirm this information.
The first point is a valid one. Do not implicitly trust everything that you see online. With a medium like the Internet – and social media platforms – everyone has a voice. Whereas mainstream media are selective, have verification duties and are subject to rules about balance and to disciplinary bodies such as the Broadcasting Standards Authority or the NZ Media Council, social media is not. Thus it follows that statements by individuals on social media platforms should at least be taken with a grain of salt and should be subject to critical scrutiny and verification.
Whether online or offline, most New Zealanders tend to trust information from more traditional sources like government officials, scientists and the New Zealand news media. However, the research shows that people with higher trust in online only sources of information – and who use these sources more often – are more likely to express belief in statements associated with misinformation.
This probably says more about the critical faculties of those who rely on online sources for their information. And this points to a lack of development of intellectual rigour, something that goes back to the education system, together with a level of naivete which suggests that too many people accept anything without question or without careful analysis. It is not the source of the information that is to blame. It is the uncritical stance of the reader that is the problem.
The report then goes on to widen the problem with some rather sweeping generalisations.
Misinformation is widespread and affects everyone. This is true regardless of age, gender, ethnicity or other characteristics.
Subject to defining misinformation (which I discuss below) there is no doubt that all facets of information, true or false, are widespread. Does this affect everyone? If what is meant is “does everyone come into contact with misinformation”, there is certainly that potential. But if the meaning of the word “affect” is to influence, I would have some quibble with the suggestion that misinformation influences everyone. Once again this has more to do with the critical analysis of information, but I consider this conclusion to be overly broad.
It’s relatively common for New Zealanders to express belief in at least some ideas that are linked to misinformation – ideas which are not backed by the best available evidence we have.
I would be very interested to see the evidence for this statement; once again it speaks more to the naivete and lack of critical rigour on the part of the audience. And, of course, even a bad idea may be worth consideration, if only to analyse it and discard it. The problem, I think, lies in the use of the word “belief”, which suggests something other than an evidence-based or empirical conclusion.
When people rely on misinformation to make important decisions it can have a harmful impact on the health and safety of our communities. It can also affect us on a personal level, contributing to anxiety, anger, and mistrust.
Agreed. But the issue is the reliance that is placed on misinformation and once again – at the risk of repeating myself ad nauseam – much depends upon the critical faculties and analysis employed by the audience. If people choose to make important decisions without properly analysing the source of the evidence supporting those decisions then that is a matter for them.
People often take action themselves in response to misinformation – such as searching different sources to see if information is accurate, looking at more established news sources, or talking about it with people they trust.
New Zealanders also see this as a societal problem that requires more action. They have differing views on who should do this and how. Many think government, news media and experts have the biggest role in dealing with the spread of misinformation, but that individual internet users and social media corporations also have an important role.
Many New Zealanders see the Government as the solution to problems. Rather, I consider that responsibility for ascertaining whether content is information or misinformation should be in the hands of the recipient. I agree that individual internet users and social media users have a role – but it is not for the social media corporations to vet content or carry out some moderating activity over content. I base this comment on the fact that Internet-based information, and indeed the communications paradigm it has introduced, represents a paradigm shift which must be recognised when regulatory solutions are considered.
What is “Misinformation”?
The problem of “misinformation” and the concerns that are expressed in the report depend very much upon the definition of the term. The Report offers some brief definitions. There is a specific rider to the definitions offered which narrows the concept down to something that is potentially harmful. Other definitions are quite a bit wider.
The Report definitions are as follows:
Misinformation: false information that people didn’t create with the intention to hurt others.
Disinformation: false information created with the intention of harming a person, group, or organisation, or even a country.
Mal-information: true information used with ill intent.
The definitions set out are quite specific and share a common characteristic: the spread of the information (misinformation, disinformation or mal-information) is accompanied by a specific intention to harm or hurt others.[10]
The Report goes on to say:
“Misinformation is nothing new, but there are increasing concerns worldwide about the prevalence of misinformation – especially online – and its potential to impact democracy, public health, violent extremism and other matters. We’ve seen how the spread of false and sometimes hostile misinformation and conspiracy theories continue to impact on our whānau and communities during the Covid-19 pandemic, and how extremist talking points and ideology can contribute to real-world violence such as the March 15 attacks in Christchurch.”
Misinformation is defined in the Oxford English dictionary as “false or erroneous information”, and as the report states, the existence of false or erroneous information is nothing new. Falsity implies that the communicator of the information is aware of the falsehood but perpetrates it nonetheless. Erroneous implies error or mistake which lacks the element of wilful deception.
Putting to one side the emotive reference to the March 15 attacks – and there is no evidence that the terrorist was influenced by misinformation – the concern that is expressed is that false, erroneous and sometimes hostile information and conspiracy theories have an impact. As it proceeds the Report seems to lose sight of the qualification that harm must be intended and seems to focus more upon the falsity or error of the information circulated.
Two issues arise from this. The first is that the recipient of information must be critical of the information received and subject it to analysis to determine whether it is “true” or “false”.
The second is that most information disseminated, especially across social media platforms, is opinion or “point of view”, which means that the disseminator is coming from a particular standpoint or is writing with a particular agenda. It would be incorrect for anyone to suggest that the opinion pieces in the New Zealand Herald by columnists such as Simon Wilson, Richard Prebble, Michael Cullen or Mike Hosking are anything other than that. They are interpretations of fact taken from a particular standpoint. It is up to the reader to determine first, whether the facts are valid and secondly, whether the opinion is therefore valid. Finally, if the answer to both questions is in the affirmative, there is nothing to compel the reader to accept the opinion. The reader is free to disagree with it.
An associated issue arises and that is the guarantee of freedom of expression contained in section 14 of the New Zealand Bill of Rights Act 1990. The provisions of section 14 are wide. They refer to both the imparting and the receiving of information – thus widening the usual understanding of freedom of expression as being solely the imparting of information. It is significant too that section 14 does not qualify the word “information”. There is no suggestion that the information must be true or that it cannot be “misinformation”.
Information is that which informs. To inform someone is to impart learning or instruction, to teach or to impart knowledge of some particular fact or occurrence. The traditional meaning of information suggests an element of factual truth and thus misinformation is erroneous or incorrect information. One interpretation of section 14 is to use the traditional meaning of information which suggests an element of fact based truth. A wider interpretation would include material based on mistaken facts. And then, of course, there is the question of opinion which is a view of one person about a certain set of circumstances.
But in the field of information, misinformation, fact and truth there will always be disputes. Some will be trivial. Others will be significant. Some may be wrong-headed. Others may be designed to mislead. Given these varieties of information, what is it proposed that we should do about what is referred to in the report as the “infodemic”?
An Internal Inconsistency?
The Infodemic paper contains the following critical acknowledgement.
Misinformation is not in and of itself illegal – and it would be impractical and counterproductive to make it so. It should not be unlawful to express a view or belief that is wrong, or that is contrary to prevailing evidence and opinion.
There are certain types of misinformation with which the law should be involved, such as information which promotes criminal or terrorist activity and which may fall within the existing ambit of the Films, Videos and Publications Classification Act, the Human Rights Act or the Crimes Act.
These legal restrictions are perfectly legitimate. They are very limited and are justifiable limitations on the right of freedom of expression guaranteed by section 14 of the New Zealand Bill of Rights Act. But misinformation does not fall within their ambit, nor should it as acknowledged by the Report.
This then raises the issue – what is the problem? Is the raison d’etre for the paper to identify an issue and sound a warning? Or does it go further? The answer, in my opinion, lies in the latter. Realistically the paper recognises that misinformation will never be eliminated, nor should it be. But in keeping with Mr. Shanks’ concerns expressed in 2019, the real target for stemming the infodemic lies in dealing with the disseminators – and by that I mean not the individuals who spread misinformation but the digital platforms that enable wide dissemination.
Addressing the Problem
I shall outline the proposals advanced by the Infodemic paper but would offer a note of caution. Some of the proposed solutions are based on existing regulatory or content-assessment models. They ignore some of the essential properties of digital systems which make regulation in the Digital Paradigm a completely different exercise from existing regulatory models.
I have discussed the problems of regulation in the Digital Paradigm elsewhere and in some detail[11]. Suffice to say that to engage in any form of content control in the Digital Paradigm is difficult given that the dissemination of content is inextricably entwined with the medium of distribution.
Marshall McLuhan’s aphorism “The Medium is the Message” states the problem, albeit somewhat opaquely. To attempt to control the message one must first understand the medium. This is often overlooked in discussions about regulation in the Digital Paradigm. It is something of an exercise in futility to attempt to apply the models or standards that are applied for what is essentially mainstream media regulation. And to treat online platforms, irrespective of their size and market dominance, in the same way as “analog” or mainstream media platforms ignores the fact that online platforms occupy a paradigmatically different communications space from mainstream media platforms like newspapers, radio and television.
With that cautionary observation I shall consider the proposals in the Infodemic paper.
The report offers five possible avenues for dealing with what it refers to as the Infodemic.
- Informing and empowering New Zealanders – this solution is expressed in the report as a means by which misinformation about Covid 19 and vaccinations may be countered. Of course, from a general perspective this is a wider issue than just misinformation and conspiracy theories about the pandemic. Many New Zealanders are concerned about the impact of misinformation across a broad range of topics, including the environment and racial tolerance.
Some of this is based on mistrust of accurate sources of information and it is suggested that steps should be taken to help those who are affected by misinformation and conspiracy theories.
This, of course, is based on the assumption that there is an empirical basis which suggests that alternative views are wrong and should not be believed. And this harks back to the quotation at the beginning of this piece. Are my “truths” the same as yours?
The concern that I have about this proposal is the suggestion that there is but one truth, one “authorised version” to which adherence must be given. It may be easy to prove that a Covid vaccine is effective on the basis of scientific analysis and empirical proof. It may be less easy to prove matters which travel in the realms of faith and belief. And the problem with “authorised versions” is that they become the “approved version”, with the result that other “truths” may become sidelined and dismissed to the point where they become heretical.
- Education – this is a solution that I find appealing. Media literacy and critical thinking skills can help us sort fact from fiction and interpret information. These skills can also help build resilience in the community against misinformation.
A central government campaign could reach many people but is unlikely to influence people and communities who already have lower trust in government. And should it come from the government in any event – a government which may have its own political agenda?
Education in schools is also needed to empower and equip our young people to recognise and challenge misinformation. Our education system already aims to provide children and young people with the critical thinking skills necessary to navigate a complex world.
- Content Moderation and Industry Responsibility – Recent research suggests that misinformation travels through the internet much more rapidly than accurate information. This is one of the realities of internet based information. In the same way that the printing press enabled the increased dissemination of information so the Internet does this in an enhanced and exponential way.
The algorithms that select and promote posts and information on many social media and digital platforms often select information that is ‘high engagement’ – that is, information that attracts more comments, shares and likes. Misinformation can often be high engagement, as it can easily be more sensational, or generate stronger emotions. These algorithms, it should be observed, are also used by the mainstream media who use online platforms, and account for the “ranking” that reports may have on a news website (see the illustrative sketch at the end of this item).
Online platforms, other than those operated by mainstream media outlets which may be subject to the New Zealand Media Council, are not generally subject to the same standards of accuracy, fairness and balance that newspapers, broadcasters or other news media are.
However, as I have suggested above, it is a mistake to attribute the responsibilities of mainstream media platforms to online platforms. They are paradigmatically different.
The first point is that content that is broadcast or published in mainstream media goes through an editorial process. Content that is posted on social media does not, nor should it be the duty of the provider of the platform to moderate another person’s content that has been posted.
The second point is that content moderation is a difficult process in the digital paradigm, given that social media platforms essentially handle large quantities of data that are later rendered into some recognisable or comprehensible form. Of course, algorithms can and should be used to trap dangerous content that advocates violent harm or action.
It is suggested that there should be engagement with digital platforms in a co-ordinated way along with industry codes of practice which could result in a consistent set of expectations and approaches in New Zealand.
Once again this suggests a “one truth” solution, which creates difficulties in a society with a plurality of opinions.
One suggestion is for users to “call out” and report misinformation, but much depends on how this is done. The development of the “cancel culture” regrettably is intolerant of different strands of opinion and I fear that “calling out” is not the way to go. Rather engagement in rational debate and proposing an alternative would allow for the marketplace of ideas to come into play and separate the wheat from the chaff.[12]
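By way of illustration of the point made earlier in this item about engagement-driven selection, the following is a minimal, purely hypothetical sketch in Python of how an engagement-weighted ranking might surface sensational content ahead of more measured content. The example posts, the weights and the engagement_score function are my own invention for the purpose of illustration; they do not represent any platform’s actual algorithm, which will be far more complex and is in any event proprietary.

# A minimal, hypothetical sketch of engagement-weighted ranking.
# The example posts and weights are invented for illustration only.

posts = [
    {"title": "Measured official update", "comments": 40, "shares": 10, "likes": 200},
    {"title": "Sensational unverified claim", "comments": 900, "shares": 450, "likes": 1200},
]

def engagement_score(post):
    # Assume comments and shares signal stronger engagement than likes,
    # so they carry greater weight in this toy model.
    return 3 * post["comments"] + 2 * post["shares"] + post["likes"]

# Ranking by engagement: the sensational item rises to the top,
# which is the dynamic described in the text above.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f'{engagement_score(post):>6}  {post["title"]}')

The point of the sketch is simply that a ranking rule which rewards raw engagement, without any assessment of accuracy, will tend to amplify whatever generates the strongest reaction.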
- Regulation and Policy
Once again the proposal seeks to compare mainstream media with a paradigmatically different information system that is the Internet.
The statement is made as follows:
“While most misinformation is not illegal, much of it would be in breach of industry standards concerning accuracy. Such standards apply to broadcast services (under the Broadcasting Act), print media (under the standards administered by the New Zealand Media Council) and advertising (under the Advertising Standards Authority). Most of the broadcast and industry self-regulatory models were not set up to address the challenges presented by the digital age such as misinformation shared on platforms like Facebook or YouTube.”
Then it is suggested that a consistent regulatory approach across non-digital and digital misinformation alike is needed.
If I understand it correctly what is being suggested is that the regulatory approach applicable to mainstream media, which developed in an entirely different paradigm from digital media, should be applied across the board.
This ignores the fact that most if not all of the content on digital media, and especially social media, is user generated. In fact social media allows everyone who has an internet connection to have a voice. Whether or not any attention is paid to that voice is another matter. But within a democratic society, this opportunity has never before been available. And if one looks, for example, at an autocratic state such as the People’s Republic of China, with its severe restraints on freedom of expression and its extreme regulation of Internet content, the question must be asked – is that the road that we wish to travel?
- Research and evaluation – The understanding of what needs to be researched and evaluated is becoming clearer, and this should be an ongoing process. The information environment will continue to rapidly evolve – often in ways no-one can predict. As new evidence emerges, interventions will change as well.
This solution seems to suggest that the reason for research and evaluation is to determine interventions and regulatory responses. This must be something of a concern in light of the comment earlier made that misinformation is not illegal and nor should it be.
Conclusion
There are two major issues that arise from the paper.
There is no doubt that misinformation can be problematic. It is, however, one of the attributes of a society that values diversity of opinion and point of view and that celebrates a plurality of beliefs.
Eroding Freedom of Expression?
In some respects it is difficult to discern the target of the misinformation paper. Clearly it has been inspired primarily by the conflicting information that has been swirling around about aspects of the Covid crisis. But there is more, including references to the 15 March 2019 terror attacks, the various issues surrounding the introduction of 5G, QAnon, and the polarisation and conspiracy theories of United States society.
But there seems to be a deeper issue and that surrounds calls that have been made to regulate the Internet or at least impose some restraints on the activities of social media platforms. Part of the problem with social media platforms is that they allow for a proliferation of a variety of opinions or interpretations of facts which may be unacceptable to many and downright opposed to the beliefs of others.
Governments and politicians, although they are great users of social media platforms, cannot abide a message contrary to their own. In a democracy such as New Zealand it is something with which they must live, although there is little hesitation at nibbling away at the edges of expressions of contrary opinions.
Characterising them as “misinformation” is a start down the road of demonisation of these points of view. At the same time, following the 15 March massacre, the Prime Minister of New Zealand instituted the “Christchurch Call” – an attempt to marshal international support for some form of Internet regulation. No laws have been passed as yet and social media organisations, seeing which way the wind is blowing, have made certain concessions. But it is, in the minds of many, still not enough.
In New Zealand a review of media regulatory structures lies behind the “misinformation” study along with the ill-considered and contradictory proposals about “hate speech”. The assault on freedom of expression or contrarianism is not a frontal one – it is subtle and gradual but it is there nonetheless. It is my opinion that the real target of the “misinformation” study is not “misinformation” but rather the expression of contrary points of view – however misguided they might be. And that is a form of censorship and it is therefore not surprising that this move should come from the Chief Censor.
A Democratic Solution
It would be to tread a dangerous path to place the determination of “good information” and “bad information” in the hands of the government or a government organisation. Only the most extreme examples of misinformation which may do demonstrable harm such as objectionable material or terrorist information should be subject to that level of moderation. To add “misinformation” as a general category without precise definition to the sort of material that is objectionable under the Films, Videos and Publications Classification Act would be a retrograde and dangerous step.
There is already a form of content moderation in place, run through the Department of Internal Affairs which makes a filter available to Internet Service Providers to block certain content.[13]
From the proposals discussed above it will be apparent that I favour as little interference with online platforms as possible. I do not support anything more than minimal interference with content that is not demonstrably harmful, and I am of the view that what people wish to accept as a “truth” should be left to the individual to make his or her own judgement.
The problem with “misinformation” has been heightened by the conflicting points of view surrounding the Covid crisis – indeed the paper itself picks up on this by describing the misinformation problem as an “infodemic” – as well as by the 2020 US Presidential election and some of the conspiracy theories that have been circulating courtesy of QAnon and the like.
But it is not a problem that warrants government or regulatory interference and indeed it should be noted that the Department of Internal Affairs review of media and online content regulation focusses upon content that is harmful.
Misinformation may misinform, but much depends upon the reader’s or listener’s willingness to stand apart and subject the content to critical analysis. The problem, however, is that many people believe what they want to believe, and their truths may not be those held by their neighbours.
[1] Chris Watson and Roy Shuker In the Public Good? Censorship in New Zealand (Dunmore Press, Palmerston North, 1998) at 35.
[2] Re Baise-Moi [2005] NZAR 214 (CA).
[3] United States v One Book Called “Ulysses” 5 F Supp 182 (SD NY 1933); United States v One Book Entitled Ulysses by James Joyce (Random House Inc, Claimant) 72 F 2d 705 (2d Cir 1934).
[4] Re Into the River (Film and Literature Board of Review, 14 October 2015).
[5] Moonen v Film and Literature Board of Review [2000] 2 NZLR 9 (CA) at [15].
[6] [2002] 2 NZLR 754 (CA).
[7] “Battle Against Online Harm beefs up censor’s power” Mediawatch, RNZ, 21 March 2021 https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018788055/battle-against-online-harm-beefs-up-censor-s-power
[8] He also stated on another occasion: “Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter.”
[9] Films, Videos and Publications Classification Act 1993, s 80(1).
[10] In some respects this resembles the types of actionable digital communication under the Harmful Digital Communications Act 2015. In both the civil and criminal spheres under that Act there must be harm, which is defined as serious emotional distress. The report does not go into specifics about what is required to hurt or harm others.
[11] See David Harvey Collisions in the Digital Paradigm: Law and Rulemaking in the Internet Age (Hart Publishing, Oxford, 2017) especially Chapter 2.
[12] As at the time of writing it should be noted that a comprehensive review of media content regulation in New Zealand was announced by Minister of Internal Affairs, Hon Jan Tinetti, on 8 June 2021. The review is managed by the Department of Internal Affairs, with support from the Ministry for Culture and Heritage. The review aims to create a new modern, flexible and coherent regulatory framework to mitigate the harmful impacts of media content, regardless of how it is delivered.
The framework will still need to protect and enhance important democratic freedoms, including freedom of expression and freedom of the press.
It correctly observes “Our existing regulatory system was designed around a traditional idea of ‘analogue publication’, such as books, magazines and free-to-air TV, and does not have the flexibility to respond to many digital media types. As a result, it addresses harm in a shrinking proportion of the media content consumed by New Zealanders and provides little protection at all for digital media types which pose the greatest risk for harmful content.” See https://www.dia.govt.nz/media-and-online-content-regulation (Last accessed 9 July 2021)
[13] https://www.dia.govt.nz/Censorship-DCEFS (Last accessed 9 July 2021)