Digital Property and Computer Crimes – Collisions in the Digital Paradigm IVA

The Court of Appeal decision in Watchorn v R [2014] NZCA 493 was another case involving property in digital data.  The accused had been convicted on three charges alleging breaches of s 249 of the Crimes Act in that he had accessed his employer’s computer system and dishonestly or by deception, and without claim of right, obtained property.

Mr Watchorn was an employee of TAG, an oil and gas exploration company engaged in both prospecting and the production of oil and gas.  There was no question that on 7 June 2012 Mr Watchorn downloaded extensive and sensitive geoscience data from TAG’s computer system onto a portable hard drive.  An executive of TAG described the geoscience folder as holding the “secret recipe” because it contained data relating to the discovery of sites of oil and gas.  The information had a very high value to TAG. Had it been disclosed to a competitor it would have been extremely damaging to the company and beneficial to that competitor.

On the day after the download took place Mr Watchorn and his family went to Canada for four weeks so that he could visit his mother, who was ill.  While he was in Canada he met a representative of New Zealand Energy Corporation Limited (NZEC), a company based in Canada which carries on business in New Zealand and is a competitor of TAG.  Following this meeting Mr Watchorn was offered a job with NZEC.

On 31 July 2012 Mr Watchorn downloaded TAG information similar to that which he had downloaded on 7 June, this time onto a USB memory stick.  On the same day he gave notice of his intention to resign from TAG and commence employment with NZEC.

TAG was very concerned about material being downloaded from its computer system and, the day after Mr Watchorn gave notice, TAG’s solicitors sent a letter to Mr Watchorn reminding him of his obligations of confidentiality and inviting him to return an apparently missing hard drive.

Mr Watchorn responded by stating that the only material he had on his personal hard drive related to some of the things that he had helped put in place, as well as technical data and work from previous employment.

On 28 August 2012 the police executed search warrants, including one at the premises of NZEC.  Mr Watchorn was initially interviewed by the police, then re-interviewed on 7 December 2012 and arrested.  There was some disparity between the various explanations that Mr Watchorn had given to the police as to whether or not he had taken the portable hard drive with him to Canada.  However there was no evidence indicating any disclosure of information to NZEC while Mr Watchorn was in Canada, when he later accessed the downloaded material at NZEC’s premises, or at any other time.  In fact the evidence was that the data downloaded on 7 June was not disclosed at any time to any person.

The Court of Appeal noted its decision in Dixon v R where it was held that digital CCTV footage stored on a computer was not “property” as defined in the Crimes Act and so the obtaining of such data by accessing a computer system could not amount to “obtaining property” within the meaning of s 249(1)(a) of the Crimes Act.  The Court accepted that that analysis must apply to the kind of data obtained by Mr Watchorn and observed that it was bound to follow Dixon.  However the issue was whether or not the Court would follow the approach adopted in Dixon and substitute convictions based upon an alternative charge of obtaining a benefit.

The first thing the Court did was to consider whether or not there had to be a “dishonest purpose” for obtaining a benefit.  Despite the fact that the heading to s 249 states “Accessing computer system for dishonest purpose”, the Court held that this was not an accurate summary of the offence itself.  It observed that the ingredients of s 249(1) do not include a dishonest purpose.  What the Crown must prove is that the accused “accessed a computer system and thereby dishonestly or by deception, and without claim of right, obtained a benefit.”  In light of the definition of “dishonestly” in s 217 of the Crimes Act, all the Crown had to prove was that Mr Watchorn did not have TAG’s authorisation to download the data that he downloaded to his hard drive on 7 June.

Section 217 defines “dishonestly”, in relation to an act or omission, as “done or omitted without a belief that there was express or implied consent to, or authority for, the act or omission from a person entitled to give such consent or authority”.

The Supreme Court in R v Hayes [2008] 2 NZLR 321; [2008] NZSC 3 stated that “dishonestly” requires an absence of belief that there was consent or authority, and that it is not necessary to prove that the belief was reasonable.

The Court of Appeal observed that if Mr Watchorn actually believed he was authorised to download the data then the element of “dishonestly obtaining” that data would not be proven.  Whether he downloaded the data for the purpose of taking it with him to Canada or alternatively to make a backup did not address the question whether he believed he was authorised to do it.  The evidence before the Court was that the TAG executives said that Mr Watchorn had no authority, implied or otherwise, to take TAG geoscience data or the material contained within the TAG drilling and TAG electronic site and well files.  Mr Watchorn’s claim that he thought he was authorised to download the files and take them to Canada was contrary to his version of events when interviewed by the police.

The Court then went on to consider the issue of “claim of right”, differentiating the concepts of “dishonesty” and “claim of right” by noting that dishonesty addresses whether Mr Watchorn believed he was authorised to download the data, while claim of right addresses whether he believed that, even if he was not authorised, downloading was nevertheless permissible.  Mr Watchorn argued that he had a defence of claim of right because he believed there was an industry-wide practice in the oil and gas field of employees transferring from one firm to another downloading data relevant to the employee’s work before leaving the employ of the owner of the data.  There was no evidence in Watchorn’s case that such an implied entitlement existed and no evidence that Mr Watchorn believed that it did.  The fact that he had downloaded data from previous employers did not provide a proper foundation for a finding that he was lawfully entitled to do so.

After considering some other issues the Court went on to consider whether or not it should substitute the convictions against Mr Watchorn for convictions based on obtaining a benefit.  In Dixon the benefit had been the opportunity to sell digital CCTV footage that had been obtained by accessing the employer’s computer.  In this case there was no evidence that Mr Watchorn had tried to sell the data, but the issue was whether the word “benefit” was limited to a financial advantage or extended to something wider.

The Court referred to the High Court decision of Police v Leroy (HC Wellington CRI-2006-485-58, 12 October 2006, Gendall J), where a District Court Judge had held that the term “benefit” meant a benefit that could result in the advancement of a person’s material situation and was limited to a benefit of a financial nature.

Gendall J held otherwise.  He said that a non-monetary advantage may nevertheless comprise a benefit.  The advantage might be the acquisition of knowledge or information to which one was not otherwise entitled.  An advantage might be an invasion of another’s privacy.  It might be knowledge or information that could be used to exploit another person.  He gave the example of wrongful access to the email communications of another for the advantage of disclosure, or for use for political purposes or the purposes of embarrassment.  He held that information obtained might also be used for the benefit or advantage of a wrongdoer acting in such a way as to harass another in breach of the Harassment Act 1997, or be used to assist in the breach of a protection order under the Domestic Violence Act 1995.  It was noted that the words “property”, “pecuniary advantage” and “valuable consideration” relate to matters financial, but the same is not necessarily true of “benefit”, “privilege” or “service”, and the Court concluded that it was not necessary to confine the concept of benefit to financial benefits.

However that conclusion did not necessarily resolve Mr Watchorn’s case.  The Court considered the legislative history of the computer and other provisions of Part 10 of the Crimes Act 1961 in considering whether the scope of the word “benefit” was limited to a financial advantage and concluded that it was not.  However it concluded that the issue of what constituted a benefit in Watchorn’s case was more nuanced than in Dixon.  The Court considered that it was arguable on the facts of Watchorn’s case that the advantage he gained was the ability to access the data outside his work environment and without the supervision of his colleagues, including after he had left the employment of TAG.  Indeed the Court said it could be argued that, although he did not in fact exploit the advantage given to him by selling the data or making it available to his new employer, that did not reduce the ability he had to do any of those things.

However the problem was that the Crown did not actually formulate the nature of the benefit that Mr Watchorn might have received.  The failure to articulate such a benefit meant that Mr Watchorn did not have notice of an allegation that he could properly contest.  The Court held that he was entitled to such notice.  The Court considered that the evidence that might have been adduced could include whether there was in fact any advantage to him in having possession or control of the data; because the prosecution had restricted its theory of the case to obtaining property, the prior notice of the alleged benefit to which Mr Watchorn was entitled was not given.

The Court distinguished Watchorn from Dixon: in the latter case the Court was able to identify, from the facts proven at trial, the benefit Mr Dixon hoped to obtain.  Accordingly the Court was not prepared to substitute new verdicts; indeed the grounds for substituting such verdicts were not met.

 Comment

This case is helpful because it demonstrates the importance of bringing a proper charge under the computer crimes sections of the Crimes Act.  The case of Police v Robb [2006] DCR 388 demonstrated the need for a prosecuting authority to exercise considerable care in drafting the charge that it brings.  In Robb the allegation arose pursuant to s 250(2)(a) of the Crimes Act in that it was contended that the accused deleted files that were the property of his employer without authorisation.  Part of the problem facing the Court was the mental element in the offence.  The Judge held that “deletion” in and of itself did not amount to damaging or interfering with the computer system contrary to s 250.  To establish a criminal offence of damaging or interfering with a computer system it was necessary to exclude innocent or accidental data deletion.  The Judge observed that wiping a file required an additional conscious decision over and above simple deletion.  Forensic evidence could not determine whether a file was deliberately deleted or not.
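The distinction the Judge drew in Robb can be illustrated with a minimal sketch (in Python, using a hypothetical file path). Simple deletion removes only the directory entry; wiping involves the additional, conscious step of overwriting the contents first. This is offered purely as an illustration of the technical point, not as a description of the evidence in Robb.

```python
import os

def simple_delete(path):
    # Removes the directory entry only. The underlying data blocks remain
    # on the storage medium until they happen to be overwritten, which is
    # why forensic recovery of "deleted" files is often possible.
    os.remove(path)

def wipe_then_delete(path):
    # The additional, deliberate step: overwrite the file's contents
    # (here with zero bytes) before removing the directory entry.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Hypothetical usage:
# simple_delete("report.doc")      # recoverable in principle
# wipe_then_delete("report.doc")   # contents overwritten first
```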

It is quite clear from the decisions in both Dixon and Watchorn that a charge alleging the obtaining of property, where what has in fact been obtained is digital material, cannot be sustained, and one of the alternatives in s 249 must be considered.  For this reason the Court’s exposition of the nature of a benefit is important, and the crystallisation of that benefit must be undertaken by the prosecuting authority.

However the final paragraph of the decision of the Court of Appeal in Watchorn is instructive.  The Court said “the decisions of this Court in Dixon and the present case have identified some drafting issues and inconsistencies in some Crimes Act provisions.  We respectfully suggest that consideration be given to remedial legislation.”

Obviously the question of the nature of any property in digital data must be considered, but at the same time it must be carefully thought out.  Although it might be attractive for the definition of property simply to include digital data, the problem that arises is that what amounts to copyright infringement within the digital space could well become a criminal offence, and the presently incorrect adage advanced by copyright owners that “copyright infringement is theft” could well become a reality.

Technology for Better Fact Finding

This is a paper that I presented to the 14th International Criminal Law Congress in Melbourne on 11 October 2014. In brief it argues that new information technologies should be employed more widely in the Court system to enhance fact finding by juries and judges. It suggests that traditional means of evidence presentation, whilst still relevant, may be enhanced by the use of technology. In particular the paper questions whether the “confrontation right” requires physical presence and suggests that technology can allow virtual presence. It also points to new developments in 3D rendering and 3D printing which may enhance evidential integrity and improve the presentation and consideration of evidence. The paper also questions whether some of the ritual aspects of the trial process enhance or impede proper and effective fact finding, or whether they have relevance to the primary function of the Court at all.

Facebook Friends on Appeal – Murray v Wishart

In an earlier post I discussed the decision of Courtney J in Wishart v Murray and dealt specifically with the issue of whether the “owner” of a Facebook page was the “publisher” of defamatory comments made on that page by third parties. The case was appealed to the Court of Appeal (Murray v Wishart [2014] NZCA 461). The judges unanimously held that the owner of a Facebook page containing comments by others was not, without more, liable as publisher of those comments. They rejected the suggestion that liability should attach because the owner of the page “ought to have known” that there was defamatory material, even if he or she was unaware of the actual content of the comments. The Court adopted a more restrictive approach, holding that the host of a Facebook page would only be liable as a publisher if there was actual knowledge of the comments and a failure to remove them within a reasonable time in circumstances which could give rise to an inference that responsibility was being taken for the comments.

However, the approach of the Court, and its apparent recognition of some of the problems posed by the new Digital Paradigm, is of particular interest. In addition the decision leaves open other aspects of publication on platforms other than Facebook such as blogs.

The Background to the Case

Mr Wishart was the author of a book called Breaking Silence, about a woman named Macsyna King. Ms King collaborated with him on the book. Ms King was the mother of Chris and Cru Kahui, who were twins. They died at the age of three months in 2006 from non-accidental injuries. Their father, Chris Kahui, was charged with their murder but acquitted. During his trial, he suggested that Ms King had inflicted the fatal injuries. A subsequent coroner’s report found that the twins had died while in Mr Kahui’s sole care. Nevertheless, suggestions implicating Ms King retained some currency in the public arena. The trial of Chris Kahui for the murder of the twins generated considerable public interest.

Mr Murray learned of the impending publication of Mr Wishart’s book in June 2011. He established a Facebook page called “Boycott the Macsyna King book.”  He used his Twitter account to publicise the Facebook page. He posted comments on Twitter and on the Facebook page criticising both Mr Wishart and Ms King. Mrs Murray posted comments on the Facebook page, as did numerous other people.

Mr Wishart commenced proceedings for defamation. He alleged a number of instances of defamation, but one cause of action related to a claim against Mr Murray in relation to third party statements made by persons posting comments on the Facebook page. This post will be restricted to the way in which the Court dealt with that cause of action.

In the High Court Mr. Murray applied to strike out this cause of action. He was unsuccessful for the extensive reasons and analysis given by Courtney J and discussed in an earlier post. Hence, he appealed.

The Approach of the Court

The Court started by considering the test applied by Courtney J, which she articulated as follows:

Those who host Facebook pages or similar are not passive instruments or mere conduits of content posted on their Facebook page. They will [be] regarded as publishers of postings made by anonymous users in two circumstances. The first is if they know of the defamatory statement and fail to remove it within a reasonable time in circumstances that give rise to an inference that they are taking responsibility for it. A request by the person affected is not necessary. The second is where they do not know of the defamatory posting but ought, in the circumstances, to know that postings are being made that are likely to be defamatory. (Para 117)

This holding identified two tests – the “actual knowledge” test and the “ought to know” test. It was argued for Mr Murray that the actual knowledge test should be the only test for publication. As a first step the Court considered how the Facebook page worked. This is an important and necessary first step in determining the proper application of existing rules. The Court said (at para 84):

An analysis of the positions taken by the parties requires a careful consideration of exactly what happened in relation to the Facebook page and on what basis it is pleaded that Mr Murray became the publisher of the statements made by third parties on the Facebook page. Although Courtney J described those posting messages on the Facebook page as “anonymous users”, that was not correct on the evidence. In fact, most of the users who posted allegedly defamatory statements identified themselves by name, are named in the statement of claim and could be traced by Mr Wishart if he wished to take action against them. So his action against Mr Murray is not the only potential avenue for redress available to him, though it was obviously more practical to sue Mr Murray for all the offending comments rather than sue many of those commenting for their respective comments.

The Court went on to discuss the way in which the page was set up and operated by Mr Murray. It noted Courtney J’s finding that Mr Murray not only could, but did, take frequent and active steps to remove postings that he considered defamatory or otherwise inappropriate, and also blocked particular individuals whose views he considered unacceptable. She found that he could not, therefore, be perceived as a “passive instrument”. Furthermore, Courtney J found that Mr Murray blocked Mr Wishart and his supporters from the Facebook page, which made it more difficult for Mr Wishart to identify and complain about potentially defamatory material. This impacted upon whether Mr Murray ought to have known of the defamatory postings.

The Use of Analogy

After considering the factual background to Courtney J’s finding, the Court went on to consider the legal path by which she reached her conclusion and her reliance upon the decision in Emmens v Pottle (1885) 16 QBD 354 (CA), and discussed at length the various decisions to which she referred. The Court then made the following significant comment (para 99):

The analysis of the cases requires the Court to apply reasoning by strained analogy, because the old cases do not, of course, deal with publication on the internet. There is a question of the extent to which these analogies are helpful. However, we will consider the existing case law, bearing in mind that the old cases are concerned with starkly different facts.

The Court then went on to consider the factual background to a number of cases that had been discussed by Courtney J (paras 100 – 123) and the decision in Oriental Press Group Ltd v Fevaworks Solutions Ltd [2013] HKCFA 47, which was decided after Courtney J’s decision. That case considered whether the host of an internet discussion forum is a publisher of defamatory statements posted by users of the forum. Although the main focus of the decision was on the availability of the innocent dissemination defence, the Court also considered whether the forum host was a publisher. It rejected the analogy with the notice board or graffiti cases, because in those cases the person posting or writing the defamatory comment was a trespasser. Since the forum host played an active role in encouraging and facilitating the postings on its forum, it was a participant in the publication of postings by forum users and thus a publisher.

The Court of Appeal then considered the various authorities that had been referred to by Courtney J and found that they provided limited guidance because the particular factual situation before the Court had to be the subject of focus. The reason for this was that the Court’s analysis of the authorities showed how sensitive the outcome may be to the particular circumstances of publication, and the fact that many of the authorities related to publication in one form or another on the internet did not provide any form of common theme, because of the different roles taken by the alleged publisher in each case.

The Court went on to examine the drawing of analogies, especially from authorities which did not involve the Internet. While noting that analogy is a helpful form of reasoning, it observed that analogies may not be useful in particular cases. The Court observed that it was being asked to consider third party Facebook comments as analogous with:

  1. the posting of a notice on a notice board (or a wall on which notices can be affixed) without the knowledge of the owner of the notice board/wall;
  2. the writing of a defamatory statement on a wall of a building without the knowledge of the building owner;
  3. a defamatory comment made at a public meeting without the prior knowledge or subsequent endorsement or adoption by the organiser of the meeting.

The Court then considered the circumstances in Emmens v Pottle which established that a party can be a publisher even if they did not know of the defamatory material. The holding in that case was that a news vendor who does not know of the defamatory statement in a paper he or she sells is a publisher, and must rely on the innocent dissemination defence to avoid liability.

The Court of Appeal considered that the news vendor in Emmens v Pottle did not provide an apposite analogy with a Facebook page host. It observed that a news vendor is a publisher only because of the role taken in distributing the primary vehicle of publication, the newspaper itself. This contrasts with the host of a Facebook page, which provides the actual medium of publication and whose role in the publication is completed before publication occurs: the Facebook page is set up before any third party comments are posted.

So was the Facebook page more like the “notice on the wall” situation described in Byrne v Deane [1937] 1 KB 818 (CA)? This analogy was not perfect either. In Oriental Press Group the Court found that posting a notice on a wall on the facts in Byrne v Deane was a breach of club rules and therefore amounted to a trespass. The Court of Appeal did not consider that the breach of the club rules was a factor affecting the outcome, but rather that the club and its owners had not posted the defamatory notice and, until they became aware of it, were in no position to prevent or bring to an end the publication of the defamatory message. If a case arose where the defamatory message was posted on a community notice board on which postings were welcomed from anyone, the same analysis would apply. Furthermore, in Byrne v Deane the post was truly anonymous. There was no way by which the person posting the notice could be identified. In the case of the Facebook host, posting messages in response to an invitation to do so is lawful and indeed solicited by the host. Similarly, the Facebook host is not the only potential defendant, whereas in Byrne v Deane, as has been observed, the poster of the notice could not be identified.

The Court also considered that drawing an analogy between a Facebook page and graffiti on a wall was also unhelpful. The owner of the wall on which the graffiti is written is not intending that the wall be used for posting messages. A Facebook host is.

One argument that had been advanced was that an analogy could be drawn with a public meeting – although there is a danger in equating the physical world with the virtual. It was argued that if Mr Murray had convened a public meeting on the subject of Mr Wishart’s book, Mr Murray would have been liable for his own statements at the meeting but not for those of others who spoke at the meeting, unless he adopted others’ statements himself. The Court felt the analogy was useful because it incorporated a factor that neither of the other two analogies did: the fact that Mr Murray solicited third party comments about Mr Wishart’s book. In addition, speakers at a public meeting could be identified (and sued) if they made defamatory statements, just as many contributors to the Facebook page could be. However, the public meeting analogy is not a perfect one, in that statements at a meeting would be oral and therefore ephemeral, unlike the written comments on the Facebook page. It did, however, illustrate a situation where even if a person incites defamation, he or she will not necessarily be liable for defamatory statements made by others. That is the case even if he or she ought to have known that defamatory comments could be made by those present at the meeting.

Problems with the “Ought to Know” Test

The Court then expressed its concerns about the “ought to know” test as applied to Facebook hosts. First, an “ought to know” test puts the host in a worse position than the “actual knowledge” test. In the “actual knowledge” situation the host has an opportunity to remove the content within a reasonable time and will not be a publisher if this is done. In the “ought to know” case publication commences the moment the comment is posted.

What happens when a Facebook page host who ought to know of a defamatory comment on the page actually becomes aware of the comment? On the “actual knowledge” test, he or she can avoid being a publisher by removing the comment in a reasonable time. But removal of the comment in a reasonable time after becoming aware of it will not avail him or her if, before becoming aware of the comment, he or she ought to have known about it, because on the “ought to know” test he or she is a publisher as soon as the comment is posted.

Another concern was that the “ought to know” test makes a Facebook page host liable on a strict liability basis, solely on the existence of the defamatory comment. Once the comment is posted the host cannot do anything to avoid being treated as a publisher.

A further concern involved the need to balance the right of freedom of expression affirmed in s 14 of the NZ Bill of Rights Act 1990 against the interests of a person whose reputation is damaged by another. The Court considered that the imposition of the “ought to know” test in relation to a Facebook page host gives undue preference to the latter over the former.

A fourth issue concerning the Court was the uncertainty of the test in its application. Given the widespread use of Facebook, it is desirable that the law defines the boundaries with clarity and in a manner that allows Facebook page hosts to regulate their activities so as to avoid unanticipated risk.

Finally, the innocent dissemination defence provided in s 21 of the Defamation Act 1992 would be difficult to apply to a Facebook page host, because the language of the section and the defined terms used in it are all aimed at old media and appear to be inapplicable to internet publishers.

Thus the Court concluded that the actual knowledge test should be the only test to determine whether a Facebook page host is a publisher.

Thus the decision clarifies the position for Facebook page hosts and the test that should be applied in determining whether such an individual will be a publisher of third party comments. But there are deeper aspects to the case that are important in approaching cases involving new technologies and new communications technologies in particular.

The Deeper Aspects of the Case

The first is the recognition by the Court of the importance of understanding how the technology actually works. It is necessary to go below the “content layer” and look at the medium itself and how it operates within the various taxonomies of communication methods. In this regard, it is not possible to make generalisations about all communications protocols or applications that utilise the backbone that is the Internet.

Similarly it would be incorrect to refer to defamation by Facebook, or by means of a blog or a Google snippet, as “Internet defamation”, because the only common factor that these applications have is that they bolt on to and utilise the transport layer provided by the Internet. An example in the intellectual property field where an understanding of the technology behind Google AdWords was critical to the case was Intercity Group (NZ) Limited v Nakedbus NZ Limited [2014] NZHC 124. Thus, when confronted with a potentially defamatory communication on a blog, the Court will have to consider the way in which a blog works and also consider the particular blogging platform, for there may well be differences between platforms and their operation.

The second major aspect of the case – and a very important one for lawyers – is the care that must be employed in drawing analogies, particularly with earlier communications paradigms. The Court did not entirely discount the use of analogy when dealing with communication applications utilising the Internet. However it is clear that the use of analogies must be approached with considerable care. The Digital Paradigm introduces new and different means of communication that often have no parallel with the earlier paradigm other than that a form of content is communicated. What needs to be considered is how that content is communicated, and the case demonstrates the danger of looking for parallels in earlier methods of communication. While a Facebook page may “look like” a noticeboard upon which “posts” are placed, or may have a “wall” which may be susceptible to scrawling graffiti, it is important not to be seduced by the language parallels of the earlier paradigm. A Facebook “page” or a “web page” is not a page at all. Neither has the physical properties of a “page”. Each is in fact a mixture of coded electronic impulses rendered on a screen using a software and hardware interface. The word “page” is used because in the transition between paradigms we tend to use language that encodes our conceptual understanding of the way in which information is presented. A “website” is a convenient linguistic encoding for the complex way in which information is dispersed across a storage medium which may be accessible to a user. A website is not in fact a discrete physical space like a “building site”. It has no separate identifiable physical existence.

The use of comfortable encoding for paradigmatically different concepts, and the frequent resort to a form of functional equivalence with an earlier paradigm, mean that we may be lured into considering other analogous equivalencies as we attempt to make rules which applied to an old paradigm fit into a new one.

The real deeper subtext to Murray v Wishart is that we must all be careful to avoid what appears to be the comfortable route and carefully examine and understand the reality of the technology before we start to determine the applicable rule.

Digital Data and Theft – Collisions in The Digital Paradigm IV

 

Under the law in New Zealand a digital file cannot be stolen. This follows from the Court of Appeal decision in Dixon v R [2014] NZCA 329 and depends upon the way in which the Court interpreted various definitions contained in the Crimes Act, coupled with the nature of the charge.

Mr. Dixon, the appellant, had been employed by a security firm in Queenstown. One of the clients of the firm was Base Ltd which operated the Altitude Bar in Queenstown. Base had installed a closed circuit TV system in the bar.

In September 2011 the English rugby team was touring New Zealand as part of the Rugby World Cup. The captain of the team was Mr Tindall. Mr Tindall had recently married the Queen’s granddaughter. On 11 September, Mr Tindall and several other team members visited Altitude Bar. During the evening there was an incident involving Mr Tindall and a female patron, which was recorded on Base’s CCTV.

Mr Dixon found out about the existence of the footage of Mr Tindall and asked one of Base’s receptionists to download it onto the computer she used at work. She agreed, being under the impression that Mr Dixon required it for legitimate work purposes. The receptionist located the footage and saved it onto her desktop computer in the reception area. Mr Dixon subsequently accessed that computer, located the relevant file and transferred it onto a USB stick belonging to him.

Mr Dixon attempted to sell the footage but when that proved unsuccessful he posted it on a video-sharing site, resulting in a storm of publicity both in New Zealand and in the United Kingdom. At his trial, Judge Phillips found that Mr Dixon had done this out of spite and to ensure that no one else would have the opportunity to make any money from the footage.

A complaint was laid with the Police and Mr Dixon was charged under s. 249(1)(a) of the Crimes Act 1961.

That section provides as follows:

249 Accessing computer system for dishonest purpose

(1) Every one is liable to imprisonment for a term not exceeding 7 years who, directly or indirectly, accesses any computer system and thereby, dishonestly or by deception, and without claim of right,—

(a) obtains any property, privilege, service, pecuniary advantage, benefit, or valuable consideration;

The indictment against Mr Dixon alleged that he had “accessed a computer system and thereby dishonestly and without claim of right obtained property.”

The issue before the Court was whether or not digital footage stored on a computer was “property” as defined in the Crimes Act.

“Property” is defined in section 2 of the Crimes Act in the following way:

property includes real and personal property, and any estate or interest in any real or personal property, money, electricity, and any debt, and any thing in action, and any other right or interest.

The Court considered the legislative history of the definition, noting that in the Bill that introduced the new computer crimes a separate definition of property specifically for those crimes had been provided. The definition was discarded by the Select Committee which rejected the suggestion that there should be different definitions of the word property for different offences.

The Court also noted that in the case of Davies v Police [2008] 1 NZLR 638 (HC) it was held  that internet usage (the consumption of megabytes in the transmission of electronic data) is “property” but in that case the Judge specifically distinguished internet usage from the information contained in the data. Thus, Dixon was the first case where the Court had to consider “property” as defined in the context of “electronically stored footage or images”.

In considering the decision of the trial Judge, the Court was of the view that he had been influenced by the very wide definition of property and the inclusion of intangible things, and by the fact that the footage in question seemed to have all the normal attributes of personal property. The Court also observed that Base Ltd, which operated the CCTV system, did not lose the file. What it lost was the right to exclusive possession and control of it. The Court considered that the trial judge’s holding that the files were within the scope of the definition of property reflected “an intuitive response that in the modern computer age digital data must be property.” (para 20)

The Court concluded otherwise and held that digital files are not property within section 2, and therefore Mr Dixon did not obtain property and was charged under the wrong part of section 249(1)(a). Rather, held the Court, he should have been charged with accessing a computer and dishonestly and without claim of right obtaining a benefit.

The Court referred to the English decision of Oxford v Moss (1979) 68 Cr App R 183 which involved a University student who unlawfully acquired an examination paper, read its contents and returned it. The Court held that was not theft. The student had obtained the information on the paper – confidential it may have been, but it was not property, unlike the medium upon which it was written.

The Court of Appeal noted that Oxford v Moss was not a closely reasoned decision but it remained good law in England and had been followed by the Supreme Court of Canada in Stewart v R [1988] 1 SCR 963. Oxford v Moss had also been followed in New Zealand. In Money Managers Ltd v Foxbridge Trading Ltd (HC Hamilton CP 67/93 15 December 1993) Hammond J noted that traditionally the common law had refused to treat information as property, and in Taxation Review Authority 25 [1997] TRNZ 129 Judge Barber had to consider whether computer programs and software constituted goods for the purpose of the Goods and Services Tax Act 1985. He drew a distinction between the medium upon which information or data was stored – such as computer disks – and the information itself.

The Court considered the nature of confidential information and a line of cases that held that it was not property. The traditional approach had been to rely on the equitable cause of action for breach of confidence.

The Court went on to consider whether or not the digital footage might be distinguishable from confidential information. Once again it noted the distinction between the information or data and the medium, observing that a computer disk containing the information was property whilst the information contained upon it was not. It observed that a digital file arguably does have a physical existence in a way that information (in non-physical form) does not, citing the decision in R v Cox (2004) 21 CRNZ 1 (CA) at [49]. Cox was a case about intercepted SMS messages. The relevant observation was directed to the issue of whether or not an electronic file could be the subject of a search. The Court in Cox noted:

“Nor do we see anything in the argument that the electronic data is not “a thing”. It has a physical existence even if ephemeral and that in any event the computer componentry on which it was stored was undoubtedly “a thing”.

Any doubt on this particular issue has been resolved by the Search and Surveillance Act 2012. However, as I will discuss below, although a digital file does have a physical existence, it is not in coherent form. One of the subtexts to the Court of Appeal’s observations about the “electronically stored footage” was that, when stored electronically, the footage has a continuity similar to that of film footage. For reasons that I will discuss later, this is not the case.

The Court then went on to discuss the nature of information in the electronic space. The Court stated at [31]:

It is problematic to treat computer data as being analogous to information recorded in physical form. A computer file is essentially just a stored sequence of bytes that is available to a computer program or operating system. Those bytes cannot meaningfully be distinguished from pure information. A Microsoft Word document, for example, may appear to us to be the same as a physical sheet of paper containing text, but in fact is simply a stored sequence of bytes used by the Microsoft Word software to present the image that appears on the monitor.
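The Court’s “stored sequence of bytes” point is easy to demonstrate. The following sketch (Python, with a hypothetical file name) reads the first few bytes of a Word document directly from disk: what the software renders as a page of text is, at the storage level, nothing more than a byte sequence (a modern .docx file in fact begins with the signature of a ZIP container).

```python
# A minimal sketch: inspect the raw bytes of a (hypothetical) Word document.
with open("example.docx", "rb") as f:
    header = f.read(8)

print(header)
# Typically prints something like b'PK\x03\x04...' -- the ZIP container
# signature, not the human-readable text that Word displays on screen.
```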

The Court then reviewed the background to the extension of the definition of “property” following the decision in R v Wilkinson [1999] 1 NZLR 403 (CA), where it was held that credit extended by a bank was not capable of being stolen because the definition of things capable of being stolen was limited to moveable, tangible things. It noted that although the definition of “document” extended to electronic files, the word “document” did not appear in the definition of “property”, which would have extended that definition to include electronic files, and that the Law Commission, in its hastily produced and somewhat flawed report Computer Misuse (NZLC R54, 1999), had referred to a possible redefinition of information as a property right. Against that background the Court took what it described as the orthodox approach. Parliament was taken to be aware of the large body of authority regarding the status of information and, had it intended to change the legal position, it would have expressly said so by including a specific reference to computer-stored data.

This holding did not make section 249(1) of the Crimes Act meaningless. The section would still extend to cases where, for example, a defendant accesses a computer and uses, for example, credit card details to unlawfully obtain goods. In this case, the Court observed, Mr. Dixon had been charged under the wrong part of the section.

It is clear that prosecuting authorities will have to move with care in future. Under the Dixon holding, someone who unlawfully obtains an e-book for a Kindle or other reader could not be charged with theft, because an e-book is information in digital form. If the same book in hard copy form were taken without payment and with the requisite intention from a bookstore, a charge of theft could follow.

Comment

There can be no doubt that the decision of the Court of Appeal is correct technologically and in law, although I do take a few minor points with the way in which the technological realities have been articulated.

The issue of where the property lies within the medium/information dichotomy has been with us for a considerable period of time. I can own the book, but I do not “own” the content, nor may I do with it as I wish, because it is the “property” of the author. The particular property right – the “copy right” – gives the author control over the use of the content of the book: the author may lose possession and control of the medium but he or she does not lose control of the message.

But the “copy right” has its own special statute, and those legislatively created special property rights do not extend to the provisions of the Crimes Act – even although copyright owners frequently mouth the mantra that copyright infringement is “theft”. Clearly the decision in Dixon, emphasising the principle that information is not property for the purposes of theft, must put that myth to rest.

Information or Data in the Digital Space

To clearly understand the import of the decision in Dixon it is necessary to understand the nature of information or data in the digital space. The Court of Appeal refers to “information” because that is the basis of the “orthodox” conclusion that it reached. Information implies a certain continuity and coherence that derives from the way in which it was communicated in the pre-digital paradigm. Lawyers are so used to obtaining information associated primarily with paper that the medium takes second place to the message. Lawyers focus upon the “content layer” – an approach that must be reconsidered in the Digital Paradigm. For reasons which I shall develop, the word “data” can (and perhaps should) be substituted.

The properties of electronic and digital technologies and their product require a review of one’s approach to information. The print and paper-based medium as a means of recording and storing information and its digital equivalent are radically different in nature. Apart from occasional incidents of forgery, with paper-based documents what you saw was what you got. There was no underlying information embedded or hidden in the document, as there is with metadata in the digital environment. The issue of the integrity of the information contained on the static medium was reasonably clear.

Electronic data is quite different to its pre-digital counterpart. Some of those differences may be helpful: electronic information may be easily copied and searched. But it must be remembered that electronic documents do pose some challenges.

Electronic data is dynamic and volatile. It is often difficult to ensure it has been captured and retained in a way that preserves its integrity. Unintentional modifications may be made simply by opening and reading data. Although the information that appears on the screen may not have been altered, some of the vital metadata which traces the history of the file (and which can often be incredibly helpful in determining its provenance, and which may be of assistance in determining the chronology of events and when parties knew what they knew) may have been changed.
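By way of illustration, the following sketch (Python, with a hypothetical file name; the precise behaviour depends on the operating system and filesystem mount options) shows how merely opening and reading a file can alter its access-time metadata even though the visible content is untouched.

```python
import os

path = "exhibit.docx"  # hypothetical file

before = os.stat(path)
with open(path, "rb") as f:
    f.read()           # merely reading the file; no edits are made
after = os.stat(path)

# On many systems the access time will have changed even though the content
# (and its modification time) is identical. Note that some filesystems are
# mounted with options (e.g. noatime) that suppress this behaviour.
print("access time before:", before.st_atime)
print("access time after: ", after.st_atime)
print("modified time unchanged:", before.st_mtime == after.st_mtime)
```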

To understand the difficulty that the electronic paradigm poses for our conception of data it is necessary to consider the technological implications of storing information in the digital space. It is factually and paradigmatically far removed from information recorded on a medium such as paper.

If we consider data as information written upon a piece of paper, it is quite easy for a reader to obtain access to that information long after it was created. The only things necessary are good eyesight and an understanding of the language in which the document is written. It is information in that it is comprehensible and the content informs. Electronic data in and of itself does not do that. It is incoherent and incomprehensible, scattered across the sectors of the medium on which it is contained. In that state it is not information, in that it does not inform.

Data in electronic format is dependent upon hardware and software. The data contained upon a medium such as a hard drive requires an interpreter to render it into human readable format. The interpreter is a combination of hardware and software. Unlike the paper document, the reader cannot create or manipulate electronic data into readable form without the proper hardware in the form of computers.[1]

There is a danger in thinking of electronic data as an object ‘somewhere there’ on a computer in the same way as a hard copy book is in a library.  Because of the way in which electronic storage media are constructed it is almost impossible for a complete file of electronic information to be stored in consecutive sectors of a medium. An electronic file is better understood as a process by which otherwise unintelligible pieces of data, distributed over a storage medium, are assembled, processed and rendered legible for a human user. In this respect the “information” or “file” as a single entity is in fact nowhere. It does not exist independently from the process that recreates it every time a user opens it on a screen.[2]

Computers are useless unless the associated software is loaded onto the hardware. Both hardware and software produce additional evidence that includes, but is not limited to, information such as metadata and computer logs that may be relevant to any given file or document in electronic format.

This involvement of technology and machinery makes electronic information paradigmatically different from traditional information where the message and the medium are one. It is this mediation of a set of technologies that enables data in electronic format – at its simplest, positive and negative electromagnetic impulses recorded upon a medium – to be rendered into human readable form. This gives rise to other differentiation issues such as whether or not there is a definitive representation of a particular source digital object. Much will depend, for example, upon the word processing program or internet browser used.

The necessity for this form of mediation for information acquisition and communication explains the apparent fascination that people have with devices such as smart phones and tablets. These devices are necessary to “decode” information and allow for its comprehension and communication.

Thus, the subtext to the description of the electronically stored footage, which seems to suggest a coherence of data similar to that contained on a strip of film, cannot be sustained. The “electronically stored footage” is meaningless as data without a form of technological mediation to assemble and present the data in coherent form. The Court made reference to the problem of trying to draw an analogy between computer data and non-digital information or data and referred to the example of the Word document. This is an example of the nature of “information as process” that I have described above. Nevertheless there is an inference of coherence of information in a computer file that is not present in the electronic medium – references to a “sequence of bytes” are probably correct once the assembly of data prior to presentation on a screen has taken place – but the reality is that throughout the process of information display on a screen there is constant interactivity between the disk or medium interpreter, the code of the word processing program and the interpreter that is necessary to display the image on the screen.

In the final analysis there are two approaches to the issue of whether or not digital data is property for the purposes of theft. The first is the orthodox legal position taken by the Court of Appeal. The second is the technological reality of data in the digital space. Even although the new definition of property extends to intangibles such as electricity, it cannot apply to data in the digital space because of the incoherence of the data. Even although a file may be copied from one medium to another, it remains in an incoherent state. Even although it may be inextricably associated with a medium of some sort or another, it maintains that incoherent state until it is subjected to the mediation of hardware and software that I have described above. The Court of Appeal’s “information”-based approach becomes even sharper when one substitutes the word “data” for “information”. Although there is a distinction between the medium and the data, the data requires a storage medium of some sort. And it is this medium that is capable of being stolen.

Although Marshall McLuhan intended an entirely different interpretation of the phrase, ‘the medium is the message,’[3] it is a truth of information in digital format.

 

[1] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ in Stephen Mason (gen ed) Electronic Evidence (3rd edn, LexisNexis Butterworths, London 2012) 2.05.

[2] Burkhard Schafer and Stephen Mason, chapter 2 ‘The Characteristics of Electronic Evidence in Digital Format’ 2.06.

[3] Marshall McLuhan, Understanding Media: The Extensions of Man (Massachusetts Institute of Technology, Cambridge, 1994) Ch 1.

 

Internet Governance Theory – Collisions in the Digital Paradigm III

 

The various theories on internet regulation can be placed within a taxonomy structure. In the centre is the Internet itself. On one side are the formal theories based on traditional “real world” governance models. These are grounded in traditional concepts of law and territorial authority. Some of these models could well become part of an “uber-model” described as the “polycentric model” – a theory designed to address specific issues in cyberspace. Towards the middle are less formal but nevertheless structured models, largely technical or “code-based” in nature, which exercise a form of control over Internet operation.

 

On the other side are informal theories that emphasise non-traditional or radical models. These models tend to be technically based, private and global in character.

[Figure: Internet Governance Models]

 

What I would like to do is briefly outline aspects of each of the models. This will be a very “once over lightly” approach and further detail may be found in Chapter 3 of my text internet.law.nz. This piece also contains some new material on Internet Governance together with some reflections on how traditional sovereign/territorial governance models just won’t work within the context of the Digital Paradigm and the communications medium that is the Internet.

The Formal Theories

The Digital Realists

The “Digital Realist” school has been made famous by Judge Easterbrook’s comment that “there [is] no more a law of cyberspace than there [is] a ‘Law of the Horse.’” Easterbrook summed the theory up in this way:

“When asked to talk about “Property in Cyberspace,” my immediate reaction was, “Isn’t this just the law of the horse?” I don’t know much about cyberspace; what I do know will be outdated in five years (if not five months!); and my predictions about the direction of change are worthless, making any effort to tailor the law to the subject futile. And if I did know something about computer networks, all I could do in discussing “Property in Cyberspace” would be to isolate the subject from the rest of the law of intellectual property, making the assessment weaker.

This leads directly to my principal conclusion: Develop a sound law of intellectual property, then apply it to computer networks.”

Easterbrook’s comment is a succinct summary of the general position of the digital realism school: that the internet presents no serious difficulties, so the “rule of law” can simply be extended into cyberspace, as it has been extended into every other field of human endeavour. Accordingly, there is no need to develop a “cyber-specific” code of law.

Another advocate for the digital realist position is Jack Goldsmith. In “Against Cyberanarchy” he argues strongly against those whom he calls “regulation sceptics” who suggest that the state cannot regulate cyberspace transactions. He challenges their opinions and conclusions, arguing that regulation of cyberspace is feasible and legitimate from the perspective of jurisdiction and choice of law — in other words he argues from a traditionalist, conflict of laws standpoint. However, Goldsmith and other digital realists recognise that new technologies will lead to changes in government regulation; but they believe that such regulation will take place within the context of traditional governmental activity.

Goldsmith draws no distinction between actions in the “real” world and actions in “cyberspace” — they both have territorial consequences. If internet users in one jurisdiction upload pornography, facilitate gambling, or take part in other activities that are illegal in another jurisdiction and have effects there then, Goldsmith argues, “The territorial effects rationale for regulating these harms is the same as the rationale for regulating similar harms in the non-internet cases”. The medium that transmitted the harmful effect, he concludes, is irrelevant.

The digital realist school is the most formal of all approaches because it argues that governance of the internet can be satisfactorily achieved by the application of existing “real space” governance structures, principally the law, to cyberspace. This model emphasises the role of law as a key governance device. Additional emphasis is placed on law being national rather than international in scope and deriving from public (legislation, regulation and so on) rather than private (contract, tort and so on) sources. Digital realist theorists admit that the internet will bring change to the law but argue that before the law is cast aside as a governance model it should be given a chance to respond to these changes. They argue that few can predict how legal governance might proceed. Given the law’s long history as society’s foremost governance model and the cost of developing new governance structures, a cautious, formal “wait and see” attitude is championed by digital realists.

The Transnational Model – Governance by International Law

The transnational school, although clearly still a formal governance system, demonstrates a perceptible shift away from the pure formality of digital realism. The two key proponents of the school, Burk and Perritt, suggest that governance of the internet can be best achieved not by a multitude of independent jurisdiction-based attempts but via the medium of public international law. They argue that international law represents the ideal forum for states to harmonise divergent legal trends and traditions into a single, unified theory that can be more effectively applied to the global entity of the internet.

The transnationalists suggest that the operation of the internet is likely to promote international legal harmonisation for two reasons.

First, the impact of regulatory arbitrage and the increased importance of the internet for business, especially the intellectual property industry, will lead to a transfer of sovereignty from individual states to international and supranational organisations. These organisations will be charged with ensuring broad harmonisation of information technology law regimes to protect the interests of developed states, lower trans-border costs to reflect the global internet environment, increase opportunities for transnational enforcement and resist the threat of regulatory arbitrage and pirate regimes in less developed states.

Secondly, the internet will help to promote international legal harmonisation through greater availability of legal knowledge and expertise to legal personnel around the world.

The transnational school represents a shift towards a less formal model than digital realism because it is a move away from national to international sources of authority. However, it still clearly belongs to the formalised end of the governance taxonomy on three grounds:

1.    its reliance on law as its principal governance methodology;

2.    the continuing public rather than private character of the authority on which governance rests; and

3.    the fact that although governance is by international law, in the final analysis, this amounts to delegated authority from national sovereign states.

 

National and UN Initiatives – Governance by Governments

This discussion will be a little lengthier because there is some history that serves to illustrate how governments may approach Internet governance.

In 2011 and 2012 there were renewed calls for greater regulation of the Internet. That these calls followed the events in the Middle East early in 2011 which became known as the “Arab Spring” seems more than coincidental. The “Arab Spring” is a term that refers to anti-government protests that spread across the Middle East. These followed a successful uprising in Tunisia against former leader Zine El Abidine Ben Ali, which emboldened similar anti-government protests in a number of Arab countries. The protests were characterised by the extensive use of social media to organise gatherings and spread awareness. There has, however, been some debate about the influence of social media on the political activism of the Arab Spring. Some commentators contend that digital technologies and other forms of communication (videos, cellular phones, blogs, photos and text messages) brought about a form of “digital democracy” in the parts of North Africa affected by the uprisings. Others have claimed that the role of social media during the uprisings must be understood in the context of high rates of unemployment and corrupt political regimes, which led to dissent movements within the region. There is certainly evidence of an increased uptake of Internet and social media usage over the period of the events. During the uprising in Egypt, then President Mubarak’s State Security Investigations Service blocked access to Twitter and Facebook, and on 27 January 2011 the Egyptian Government shut down the Internet in Egypt along with SMS messaging.

The G8 Meeting in Deauville May 2011

In May 2011, at the G8 meeting in France, President Sarkozy issued a provocative call for stronger Internet regulation. M. Sarkozy convened a special gathering of global “digerati” in Paris and called the rise of the Internet a “revolution” as significant as the age of exploration and the industrial revolution. This revolution did not have a flag, and M. Sarkozy acknowledged that the Internet belonged to everyone, citing the “Arab Spring” as a positive example. However, he warned the executives of Google, Facebook, Amazon and eBay who were present: “The universe you represent is not a parallel universe. Nobody should forget that governments are the only legitimate representatives of the will of the people in our democracies. To forget this is to risk democratic chaos and anarchy.”

M. Sarkozy was not alone in calling existing laws and regulations inadequate to deal with the challenges of a borderless digital world. Prime Minister David Cameron of Britain stated that he would ask Parliament to review British privacy laws after Twitter users circumvented court orders preventing newspapers from publishing the names of public figures suspected of having had extramarital affairs. He did not, however, go as far as M. Sarkozy, who was pushing for a “civilized Internet”, implying wide regulation.

However, the Deauville Communique did not go as far as M. Sarkozy may have liked. It affirmed the importance of intellectual property protection, the effective protection of personal data and individual privacy, the security of networks and a crackdown on trafficking in children for their sexual exploitation. It did not advocate state control of the Internet, but it did stake out a role for governments. The communique stated:

“We discussed new issues such as the Internet which are essential to our societies, economies and growth. For citizens, the Internet is a unique information and education tool, and thus helps to promote freedom, democracy and human rights. The Internet facilitates new forms of business and promotes efficiency, competitiveness, and economic growth. Governments, the private sector, users, and other stakeholders all have a role to play in creating an environment in which the Internet can flourish in a balanced manner. In Deauville in 2011, for the first time at Leaders’ level, we agreed, in the presence of some leaders of the Internet economy, on a number of key principles, including freedom, respect for privacy and intellectual property, multi-stakeholder governance, cyber-security, and protection from crime, that underpin a strong and flourishing Internet. The “e-G8″ event held in Paris on 24 and 25 May was a useful contribution to these debates….

The Internet and its future development, fostered by private sector initiatives and investments, require a favourable, transparent, stable and predictable environment, based on the framework and principles referred to above. In this respect, action from all governments is needed through national policies, but also through the promotion of international cooperation……

As we support the multi-stakeholder model of Internet governance, we call upon all stakeholders to contribute to enhanced cooperation within and between all international fora dealing with the governance of the Internet. In this regard, flexibility and transparency have to be maintained in order to adapt to the fast pace of technological and business developments and uses. Governments have a key role to play in this model.

We welcome the meeting of the e-G8 Forum which took place in Paris on 24 and 25 May, on the eve of our Summit and reaffirm our commitment to the kinds of multi-stakeholder efforts that have been essential to the evolution of the Internet economy to date. The innovative format of the e-G8 Forum allowed participation of a number of stakeholders of the Internet in a discussion on fundamental goals and issues for citizens, business, and governments. Its free and fruitful debate is a contribution for all relevant fora on current and future challenges.

We look forward to the forthcoming opportunities to strengthen international cooperation in all these areas, including the Internet Governance Forum scheduled next September in Nairobi and other relevant UN events, the OECD High Level Meeting on “The Internet Economy: Generating Innovation and Growth” scheduled next June in Paris, the London International Cyber Conference scheduled next November, and the Avignon Conference on Copyright scheduled next November, as positive steps in taking this important issue forward.”

 The ITU Meeting in Dubai December 2012

The meeting of the International Telecommunication Union (ITU) in Dubai provided the forum for further consideration of expanded Internet regulation. No less an authority than Vinton Cerf, who with Robert Kahn co-developed the TCP/IP protocol, one of the important technologies that made the Internet possible, sounded a warning when he said:

“But today, despite the significant positive impact of the Internet on the world’s economy, this amazing technology stands at a crossroads. The Internet’s success has generated a worrying desire by some countries’ governments to create new international rules that would jeopardize the network’s innovative evolution and its multi-faceted success.

This effort is manifesting itself in the UN General Assembly and at the International Telecommunication Union – the ITU – a United Nations organization that counts 193 countries as its members, each holding one vote. The ITU currently is conducting a review of the international agreements governing telecommunications and it aims to expand its regulatory authority to include the Internet at a treaty summit scheduled for December of this year in Dubai. ….

Today, the ITU focuses on telecommunication networks, radio frequency allocation, and infrastructure development. But some powerful member countries see an opportunity to create regulatory authority over the Internet. Last June, the Russian government stated its goal of establishing international control over the Internet through the ITU. Then, last September, the Shanghai Cooperation Organization – which counts China, Russia, Tajikistan, and Uzbekistan among its members – submitted a proposal to the UN General Assembly for an “international Code of Conduct for Information Security.” The organization’s stated goal was to establish government-led “international norms and rules standardizing the behavior of countries concerning information and cyberspace.” Other proposals of a similar character have emerged from India and Brazil. And in an October 2010 meeting in Guadalajara, Mexico, the ITU itself adopted a specific proposal to “increase the role of ITU in Internet governance.”

As a result of these efforts, there is a strong possibility that this December the ITU will significantly amend the International Telecommunication Regulations – a multilateral treaty last revised in 1988 – in a way that authorizes increased ITU and member state control over the Internet. These proposals, if implemented, would change the foundational structure of the Internet that has historically led to unprecedented worldwide innovation and economic growth.”

The ITU, originally the International Telegraph Union, is a specialised agency of the United Nations and is responsible for issues concerning information and communication technologies. It was founded in 1865 and has been concerned with technical communications issues such as the standardisation of communications protocols (one of its original purposes), the management of the international radio-frequency spectrum and satellite orbit resources, and the fostering of sustainable, affordable access to ICT. It took its present name in 1934 and in 1947 became a specialised agency of the United Nations.

The position of the ITU approaching the 2012 meeting in Dubai was that, given the vast changes that had taken place in the world of telecommunications and information technologies, the International Telecommunication Regulations (ITRs) that had last been revised in 1988 were no longer in keeping with modern developments. Thus, the objective of the 2012 meeting was to revise the ITRs to suit the new age. After a controversial meeting in Dubai in December 2012 the Final Acts of the Conference were published. The controversy arose from a proposal to redefine the Internet as a system of government-controlled, state-supervised networks. The proposal was contained in a leaked document from a group of members including Russia, China, Saudi Arabia, Algeria, Sudan, Egypt and the United Arab Emirates. Although the proposal was ultimately withdrawn, its governance model defined the Internet as an:

“international conglomeration of interconnected telecommunication networks,” and that “Internet governance shall be effected through the development and application by governments,” with member states having “the sovereign right to establish and implement public policy, including international policy, on matters of Internet governance.”

This wide-ranging proposal went well beyond the traditional role of the ITU, and other members such as the United States, European countries, Australia, New Zealand and Japan insisted that the ITU treaty should apply only to traditional telecommunications systems. The resolution that won majority support towards the end of the conference stated that the ITU’s leadership should “continue to take the necessary steps for ITU to play an active and constructive role in the multi-stakeholder model of the internet.” However, the Treaty did not receive universal acclaim. The United States Ambassador, Mr Kramer, announced that the US would not be signing the new treaty. He was followed by the United Kingdom. Sweden said that it would need to consult with its capital (code in UN-speak for “not signing”). Canada, Poland, the Netherlands, Denmark, Kenya, New Zealand, Costa Rica and the Czech Republic all made similar statements. In all, 89 countries signed while 55 did not.

Quite clearly there is a considerable amount of concern about the way in which national governments wish to regulate or in some way govern and control the Internet. Although at first glance this may seem to be directed at the content layer, and to amount to a rather superficial attempt to embark upon the censorship of content passing through a new communications technology, the attempt to regulate through a technological forum such as the ITU clearly demonstrates that governments wish to control not only content but also the various transmission and protocol layers of the Internet and possibly even the backbone itself. Continued attempts to interfere with aspects of the Internet, or to embark upon an incremental approach to regulation, have resulted in expressions of concern from another Internet pioneer, Sir Tim Berners-Lee, who, in addition to claiming that governments are suppressing online freedom, has issued a call for a Digital Magna Carta.

I have already written on the issue of a Digital Magna Carta or Bill of Rights here.

Clearly the efforts described indicate that some form of governance of the Internet by national governments, or by governments acting collectively, is on the agenda. Already the United Nations has become involved in the development of Internet Governance policy with the establishment of the Internet Governance Forum.

The Internet Governance Forum

The Internet Governance Forum describes itself as bringing

“people together from various stakeholder groups as equals, in discussions on public policy issues relating to the Internet. While there is no negotiated outcome, the IGF informs and inspires those with policy-making power in both the public and private sectors.  At their annual meeting delegates discuss, exchange information and share good practices with each other. The IGF facilitates a common understanding of how to maximize Internet opportunities and address risks and challenges that arise.

The IGF is also a space that gives developing countries the same opportunity as wealthier nations to engage in the debate on Internet governance and to facilitate their participation in existing institutions and arrangements. Ultimately, the involvement of all stakeholders, from developed as well as developing countries, is necessary for the future development of the Internet.”

The Internet Governance Forum is an open forum which has no members. It was established by the World Summit on the Information Society in 2006. Since then, it has become the leading global multi-stakeholder forum on public policy issues related to Internet governance.

Its UN mandate gives it convening power and the authority to serve as a neutral space for all actors on an equal footing. As a space for dialogue it can identify issues to be addressed by the international community and shape decisions that will be taken in other forums. The IGF can thereby be useful in shaping the international agenda and in preparing the ground for negotiations and decision-making in other institutions. The IGF has no power of redistribution, and yet it has the power of recognition – the power to identify key issues.

A small Secretariat was set up in Geneva to support the IGF, and the UN Secretary-General appointed a group of advisers, representing all stakeholder groups, to assist him in convening the IGF.  The United Nations General Assembly agreed in December 2010 to extend the IGF’s mandate for another five years. The IGF is financed through voluntary contributions.”

Zittrain describes the IGF as one of the “diplomatically styled talk-shop initiatives like the World Summit on the Information Society and its successor, the Internet Governance Forum, where “stakeholders” gather to express their views about Internet governance, which is now more fashionably known as “the creation of multi-stakeholder regimes.”

Less Formal Yet Structured

The Engineering and Technical Standards Community

The internet governance models under discussion have in common the involvement of law or legal structures in some shape or form or, in the case of the cyber anarchists, an absence thereof.

Essentially internet governance falls within two major strands:

1.    The narrow strand involving the regulation of technical infrastructure and what makes the internet work.

2.    The broad strand dealing with the regulation of content, transactions and communication systems that use the internet.

The narrow strand, the regulation of internet architecture, recognises that the operation of the internet and the superintendence of that operation involve governance structures that lack the institutionalisation that lies behind governance by law.

The history of the development of the internet, although it has its origin with the United States Government, shows little if any direct government involvement or oversight. The Defense Advanced Research Projects Agency (DARPA) was a funding agency providing money for development. It was not a governing agency, nor was it a regulator. Other agencies such as the Federal Networking Council and the National Science Foundation are not regulators; they are organisations that allow user agencies to communicate with one another. Although the United States Department of Commerce became involved with the internet once its commercial potential became clear, it too has maintained very much a hands-off approach, and its involvement has primarily been with ICANN, with whom the Department has maintained a steady stream of Memoranda of Understanding over the years.

Technical control and superintendence of the internet rests with the network engineers and computer scientists who work out problems and provide solutions for its operation. There is no organisational charter. The structures within which decisions are made are informal, involving a network of interrelated organisations with names which at least give the appearance of legitimacy and authority. These organisations include the Internet Society (ISOC), an independent international non-profit organisation founded in 1992 to provide leadership in internet-related standards, education and policy around the world. Several other organisations are associated with ISOC. The Internet Engineering Taskforce (IETF) is a separate legal entity whose mission is to make the internet work better by producing high-quality, relevant technical documents that influence the way people design, use and manage the internet.

The Internet Architecture Board (IAB) is an advisory body to ISOC and also a committee of the IETF, where it has an oversight role. Also housed within ISOC is the IETF Administrative Support Activity (IASA), which is responsible for the fiscal and administrative support of the IETF standards process. The IASA has a committee, the IETF Administrative Oversight Committee (IAOC), which carries out the responsibilities of the IASA, supporting the working groups of the Internet Engineering Steering Group (IESG), the Internet Architecture Board (IAB), the Internet Research Taskforce (IRTF) and the Internet Research Steering Group (IRSG). The IAOC oversees the work of the IETF Administrative Director (IAD), who has day-to-day operational responsibility for providing fiscal and administrative support through other activities, contractors and volunteers.

The central hub of these various organisations is the IETF. This organisation has no coercive power, but it is responsible for establishing internet standards, some of which, such as TCP/IP, are core standards and are non-optional. The compulsory nature of these standards does not come from any regulatory power but from the critical mass of network externalities among internet users. Standards become economically mandatory, and there is an overall acceptance of the IETF standards which maintain the core functionality of the internet.

A characteristic of the IETF, and indeed of all the technical organisations involved in internet functionality, is the open process that theoretically allows any person to participate. The other characteristic of internet network organisations is the rough consensus by which decisions are made. Proposals are circulated in the form of a Request for Comments (RFC) to members of the internet engineering and scientific communities, and from this collaborative, consensus-based approach a new standard is agreed.

Given that the operation of the internet involves a technical process, and that the maintenance of that process depends on the activities of scientific and engineering specialists, it is fair to conclude that a considerable amount of responsibility rests with the organisations that set and maintain standards. Many of these organisations have developed considerable power structures without any formal governmental or regulatory oversight – an issue that may well need to be addressed. Another issue is whether these organisations have a legitimate basis to do what they are doing with such an essential infrastructure as the internet. The objective of organisations such as the IETF is a purely technical one that has few if any public policy ramifications. Their ability to work outside government bureaucracy enables greater efficiency.

However, the internet’s continued operation depends on a number of interrelated organisations which, while operating in an open and transparent manner in a technical collaborative consensus-based model, have little understanding of the public interest ramifications of their decisions. This aspect of internet governance is often overlooked. The technical operation and maintenance of the internet is superintended by organisations that have little or no interactivity with any of the formalised power structures that underlie the various “governance by law” models of internet governance. The “technical model” of internet governance is an anomaly arising not necessarily from the technology, but from its operation.

ICANN

Of those involved in the technical sphere of Internet governance, ICANN is perhaps the best known. Its governance of the “root” or addressing systems makes it a vital player in the Internet governance taxonomy and for that reason requires some detailed consideration.

ICANN is the Internet Corporation for Assigned Names and Numbers. This organisation was formed in October 1998 at the direction of the Clinton Administration to take responsibility for the administration of the Internet’s Domain Name System (DNS). Since that time ICANN has been dogged by controversy and criticism from all sides. ICANN wields enormous power as the sole controlling authority of the DNS, which has a “chokehold” over the internet because it is the only aspect of the entire decentralised, global system of the internet that is administered from a single, central point. By selectively editing, issuing or deleting net identities ICANN is able to choose who is able to access cyberspace and what they will see when they are there. ICANN’s control effectively amounts, in the words of David Post, to “network life or death”. Further, if ICANN chooses to impose conditions on access to the internet, it can indirectly project its influence over every aspect of cyberspace and the activity that takes place there.
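The addressing function that gives ICANN this position can be seen, at a purely technical level, in the following minimal sketch. It uses only Python’s standard library, and the domain name queried is chosen purely for illustration; every use of a name on the Internet ultimately depends on a resolution of this kind through the DNS hierarchy whose root ICANN superintends.

```python
# A minimal sketch of DNS name resolution using Python's standard library.
# The domain queried is illustrative only; any public name would do.
import socket

def resolve(domain: str) -> str:
    # The operating system's resolver consults the DNS hierarchy, the root of
    # which ICANN superintends, to translate a name into a numeric address.
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    print(resolve("example.com"))  # prints the current IPv4 address of example.com
```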

The obvious implication for governance theorists is that the ICANN model is not a theory but a practical reality. ICANN is the first indigenous cyberspace governance institution to wield substantive power and demonstrate a real capacity for effective enforcement. Ironically, while other internet governance models have demonstrated a sense of purpose but an acute lack of power, ICANN has suffered from excess power and an acute lack of purpose. ICANN arrived at its present position almost, but not quite, by default and has been struggling to find a meaningful raison d’être since. In addition it is pulled by opposing forces, all anxious to ensure that their vision of the new frontier prevails.

ICANN’s “democratic” model of governance has been attacked as unaccountable, anti-democratic, subject to regulatory capture by commercial and governmental interests, unrepresentative, and excessively Byzantine in structure. ICANN has been largely unresponsive to these criticisms and it has only been after concerted publicity campaigns by opponents that the board has publicly agreed to change aspects of the process.

As a governance model, a number of key points have emerged:

1.    ICANN demonstrates the internet’s enormous capacity for marshalling global opposition to governance structures that are not favourable to the interests of the broader internet community.

2.    Following on from point one, high profile, centralised institutions such as ICANN make extremely good targets for criticism.

3.    Despite enormous power and support from similarly powerful backers, public opinion continues to prove a highly effective tool, at least in the short run, for stalling the development of unfavourable governance schemes.

4.    ICANN reveals the growing involvement of commercial and governmental interests in the governance of the internet and their reluctance to be directly associated with direct governance attempts.

5.    ICANN demonstrates an inability to project its influence beyond its core functions to matters of general policy or governance of the internet.

ICANN lies within the less formal area of the governance taxonomy in that, although it operates with a degree of autonomy, it retains a formal character. Its power is internationally based (and, although still derived from the United States government, there is a desire by the US to “de-couple” its involvement with ICANN). It has greater private rather than public sources of authority, in that its power derives from relationships with registries, ISPs and internet users rather than sovereign states. Finally, it is evolving towards a technical governance methodology, despite an emphasis on traditional decision-making structures and processes.

The Polycentric Model of Internet Governance

The Polycentric Model embraces, for certain purposes, all of the preceding models. It does not envelop them, but rather employs them for specific governance purposes.

This theory is one that has been developed by Professor Scott Shackelford. In his article “Toward Cyberpeace: Managing Cyberattacks Through Polycentric Governance” Shackelford locates Internet governance within the particular context of cybersecurity and the maintenance of cyberpeace. He contends that the international community must come together to craft a common vision for cybersecurity while the situation remains malleable. Given the difficulties of accomplishing this in the near term, bottom-up governance and dynamic, multilevel regulation should be undertaken, consistent with polycentric analysis.

While he sees a role for governments and commercial enterprises, he proposes a mixed model. Neither governments nor the private sector should be put in exclusive control of managing cyberspace, since this could sacrifice both liberty and innovation on the altar of security, potentially leading to neither.

The basic notion of polycentric governance is that a group facing a collective action problem should be able to address it in whatever way they see fit, which could include using existing or crafting new governance structures; in other words, the governance regime should facilitate the problem-solving process.

The model demonstrates the benefits of self-organization, networking regulations at multiple levels, and the extent to which national and private control can co-exist with communal management.  A polycentric approach recognizes that diverse organizations and governments working at multiple levels can create policies that increase levels of cooperation and compliance, enhancing flexibility across issues and adaptability over time.

Such an approach, a form of “bottom-up” governance, contrasts with what may be seen as an increasingly state-centric approach to Internet governance and cybersecurity which has become apparent in fora such as the G8 Conference in Deauville in 2011 and the ITU Conference in Dubai in 2012. The approach also recognises that cyberspace has its own qualities or affordances, among them its decentralised nature and the continuing dynamic change flowing from permissionless innovation. To put it bluntly, it is difficult to foresee the effects of regulatory efforts, which are generally sluggish in development and enactment, with the result that the particular mischief the regulation was trying to address has often changed by the time the regulatory system takes effect, leaving it no longer relevant. Polycentric regulation provides a multi-faceted response to cybersecurity issues in keeping with the complexity of crises that might arise in cyberspace.

So how should the polycentric model work? First, allies should work together to develop a common code of cyber conduct that includes baseline norms, with negotiations continuing on a harmonized global legal framework. Second, governments and CNI operators should establish proactive, comprehensive cybersecurity policies that meet baseline standards and require hardware and software developers to promote resiliency in their products without going too far and risking balkanization. Third, the recommendations of technical organizations such as the IETF should be made binding and enforceable when taken up as industry best practices. Fourth, governments and NGOs should continue to participate in U.N. efforts to promote global cybersecurity, but also form more limited forums to enable faster progress on core issues of common interest. And fifth, training campaigns should be undertaken to share information and educate stakeholders at all levels about the nature and extent of the cyber threat.

Code is Law

Located centrally within the taxonomy, and closely related to the Engineering and Technical Standards category of governance models, is the “code is law” model, developed by Harvard professor Lawrence Lessig and, to a lesser extent, Joel Reidenberg. The school encompasses in many ways the future of the internet governance debate. The system demonstrates a balance of opposing formal and informal forces and represents a paradigm shift in the way internet governance is conceived, because the school largely ignores the formal dialectic around which the governance debate is centred and has instead developed a new concept of “governance and the internet”. While Lessig’s work has been favourably received even by his detractors, it is still too early to see if it is indeed a correct description of the future of internet governance, or merely a dead end. Certainly, it is one of the most discussed concepts of cyberspace jurisprudence.

Lessig asserts that human behaviour is regulated by four “modalities of constraint”: law, social norms, markets and architecture. Each of these modalities influences behaviour in different ways:

1.    law operates via sanction;

2.    markets operate via supply and demand and price;

3.    social norms operate via human interaction; and

4.    architecture operates via the environment.

Governance of behaviour can be achieved by any one or any combination of these four modalities. Law is unique among the modalities in that it can directly influence the others.

Lessig argues that in cyberspace, architecture is the dominant and most effective modality to regulate behaviour. The architecture of cyberspace is “code” — the hardware and software — that creates the environment of the internet. Code is written by code writers; therefore it is code writers, especially those from the dominant software and hardware houses such as Microsoft and AOL, who are best placed to govern the internet. In cyberspace, code is law in the imperative sense of the word. Code determines what users can and cannot do in cyberspace.

“Code is law” does not mean lack of regulation or governmental involvement, although any regulation must be carefully applied. Neil Weinstock Netanel argues that “contrary to the libertarian impulse of first generation cyberspace scholarship, preserving a foundation for individual liberty, both online and off, requires resolute, albeit carefully tailored, government intervention”. On this view, internet architecture and code effectively regulate individual activities and choices in the same way law does, and market actors need to use these regulatory technologies in order to gain a competitive advantage. Thus, it is the role of government to set the limits on private control to facilitate this.

The crux of Lessig’s theory is that law can directly influence code. Governments can regulate code writers and ensure the development of certain forms of code. Effectively, law and those who control it, can determine the nature of the cyberspace environment and thus, indirectly what can be done there. This has already been done. Code is being used to rewrite Copyright Law. Technological Protection Measures (TPMs) allow content owners to regulate the access and/or use to which a consumer may put digital content. Opportunities to exercise fair uses or permitted uses can be limited beyond normal user expectations and beyond what the law previously allowed for analogue content. The provision of content in digital format, the use of TPMs and the added support that legislation gives to protect TPMs effectively allows content owners to determine what limitations they will place upon users’ utilisation of their material. It is possible that the future of copyright lies not in legislation (as it has in the past) but in contract.
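A crude sketch may make the mechanism concrete. The licence fields, the trivial scrambling and the function names below are invented for illustration and are not drawn from any real TPM system; the point is simply that the permitted uses of a work are whatever the code permits, whatever the statute might otherwise allow.

```python
# Illustrative only: a toy "technological protection measure". The licence
# fields and the trivial XOR scrambling are invented for this sketch; real
# TPMs are far more sophisticated. The principle is that access to the work
# is governed by the code's own conditions, not by copyright law.
from datetime import date

def licence_is_valid(licence: dict) -> bool:
    # The code writer, not the legislature, sets the conditions of use.
    return licence.get("region") == "NZ" and licence.get("expires", date.min) >= date.today()

def unlock(protected: bytes, licence: dict) -> bytes:
    if not licence_is_valid(licence):
        raise PermissionError("use not permitted by the content owner's code")
    key = licence["key"]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(protected))

# Usage: a licence for the wrong region simply cannot read the work,
# whatever a fair dealing provision might otherwise say.
key = b"secret"
work = bytes(b ^ key[i % len(key)] for i, b in enumerate(b"the protected work"))
print(unlock(work, {"region": "NZ", "expires": date(2099, 1, 1), "key": key}))
```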

 

Informal Models and Aspects of Digital Liberalism

Digital liberalism is not so much a model of internet governance as it is a school of theorists who approach the issue of governance from roughly the same point on the political compass: (neo)-liberalism. Of the models discussed, digital liberalism is the broadest. It encompasses a series of heterogeneous theories that range from the cyber-independence writings of John Perry Barlow at one extreme, to the more reasoned private legal ordering arguments of Froomkin, Post and Johnson at the other. The theorists are united by a common “hands off” approach to the internet and a tendency to respond to governance issues from a moral, rather than a political or legal perspective.

Regulatory Arbitrage – “Governance by whomever users wish to be governed by”

The regulatory arbitrage school represents a shift away from the formal schools, and towards digital liberalism. “Regulatory arbitrage” is a term coined by the school’s principal theorist, Michael Froomkin, to describe a situation in which internet users “migrate” to jurisdictions with regulatory regimes that give them the most favourable treatment. Users are able to engage in regulatory arbitrage by capitalising on the unique geographically neutral nature of the internet. For example, someone seeking pirated software might frequent websites geographically based in a jurisdiction that has a weak intellectual property regime. On the other side of the supply chain, the supplier of gambling services might, despite residing in the United States, deliberately host his or her website out of a jurisdiction that allows gambling and has no reciprocal enforcement arrangements with the United States.

Froomkin suggests that attempts to regulate the internet face immediate difficulties because of the very nature of the entity that is to be controlled. He draws upon the analogy of the mythological Hydra, but whereas the beast was a monster, the internet may be predominantly benign. Froomkin identifies the internet’s resistance to control as being caused by the following two technologies:

1.    The internet is a packet-switching network. This makes it difficult for anyone, including governments, to block or monitor information originating from large numbers of users.

2.    Powerful military-grade cryptography is available to internet users and can, if used properly, make messages unreadable to anyone but the intended recipient.

As a result of the above, internet users have access to powerful tools which can be used to enable anonymous communication, unless, of course, their governments impose strict access controls or an extensive monitoring programme, or persuade their citizens not to use these tools through liability rules or the criminal law.
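Froomkin’s second technology can be illustrated with a short sketch. It assumes the PyNaCl library, a widely used Python wrapper for the NaCl cryptographic primitives, is installed; the key names and the message are purely illustrative. Only the holder of the recipient’s private key can recover the plaintext, whichever networks or jurisdictions the ciphertext crosses.

```python
# A minimal sketch of public-key encryption, assuming the PyNaCl library is
# installed (pip install pynacl). Names and the message are illustrative only.
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()              # kept secret by the recipient
sender_box = SealedBox(recipient_key.public_key)   # anyone may encrypt to the public key

ciphertext = sender_box.encrypt(b"meet at the agora at noon")
# An intermediary (an ISP, an employer, a government) intercepting the
# ciphertext sees only random-looking bytes.

plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"meet at the agora at noon"
```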

Froomkin’s theory is principally informal in character. Private users, rather than public institutions, are responsible for choosing the governance regime they adhere to. The mechanism that allows this choice is technical and works in opposition to legally based models. Finally, the model is effectively global, as users choose from a world of possibilities to decide which particular regime or regimes to submit to, rather than being bound to a single national regime. While undeniably informal, however, the model stops short of full digital liberalism.

Unlike digital liberalists who advocate a separate internet jurisdiction encompassing a multitude of autonomous self-regulating regimes within that jurisdiction, Froomkin argues that the principal governance unit of the internet will remain the nation-state. He argues that users will be free to choose from the regimes of states rather than be bound to a single state, but does not yet advocate the electronic federalism model of digital liberalism.

Digital Libertarianism – Johnson and Post

Digital liberalism is the oldest of the internet governance models and represents the original response to the question: “How will the internet be governed?” Digital liberalism developed in the early 1990s as the internet began to show the first inklings of its future potential. The development of a Graphical User Interface together with web browsers such as Mosaic made the web accessible to the general public for the first time. Escalating global connectivity and a lack of understanding or reaction by world governments contributed to a sense of euphoria and digital freedom that was reflected in the development of digital liberalism.

In its early years digital liberalism evolved around the core belief that “the internet cannot be controlled” and that consequently “governance” was a dead issue. By the mid-1990s advances in technology and the first government attempts to control the internet saw this descriptive claim gradually give way to a competing normative claim that “the internet can be controlled but it should not be”. These claims are represented as the sub-schools of digital liberalism — cyberanarchism and digital libertarianism.

In “And How Shall the Net be Governed?” David Johnson and David Post posed the following questions:

Now that lots of people use (and plan to use) the internet, many — governments, businesses, techies, users and system operators (the “sysops” who control ID issuance and the servers that hold files) — are asking how we will be able to:

(1)   establish and enforce baseline rules of conduct that facilitate reliable communications and trustworthy commerce; and

(2)   define, punish and prevent wrongful actions that trash the electronic commons or impose harm on others.

In other words, how will cyberspace be governed, and by what right?

Post and Johnson point out that one of the advantages of the internet is its chaotic and ungoverned nature. As to the question of whether the net must be governed at all, they note that the three-judge Federal Court in Philadelphia that “threw out the Communications Decency Act on First Amendment grounds seemed thrilled by the ‘chaotic’ and seemingly ungovernable character of the net”. Post and Johnson argue that it is because of its decentralised architecture and lack of a centralised rule-making authority that the net has been able to prosper. They assert that the freedom the internet allows and encourages has meant that sysops have been free to impose their own rules on users. However, the ability of the user to choose which sites to visit, and which to avoid, has meant that the tyranny of system operators has been avoided and the adverse effect of any misconduct by individual users has been limited.

 Johnson and Post propose the following four competing models for net governance:

1.    Existing territorial sovereigns seek to extend their jurisdiction and amend their own laws as necessary to attempt to govern all actions on the net that have substantial impacts upon their own citizenry.

2.    Sovereigns enter into multilateral international agreements to establish new and uniform rules specifically applicable to conduct on the net.

3.    A new international organisation can attempt to establish new rules — a new means of enforcing those rules and of holding those who make the rules accountable to appropriate constituencies.

4.    De facto rules may emerge as the result of the interaction of individual decisions by domain name and IP registries (dealing with conditions imposed on possession of an on-line address), by system operators (local rules to be applied, filters to be installed, who can sign on, with which other systems connection will occur) and users (which personal filters will be installed, which systems will be patronised and the like).

The first three models are centralised or semi-centralised systems and the fourth is essentially a self-regulatory and evolving system. In their analysis, Johnson and Post consider all four and conclude that territorial laws applicable to online activities where there is no relevant geographical determinant are unlikely to work, and international treaties to regulate, say, ecommerce are unlikely to be drawn up.

Johnson and Post proposed a variation of the third option — a new international organisation that is similar to a federalist system, termed “net federalism”.

In net federalism, individual network systems rather than territorial sovereignty are the units of governance. Johnson and Post observe that the law of the net has emerged, and can continue to emerge, from the voluntary adherence of large numbers of network administrators to basic rules of law (and dispute resolution systems to adjudicate the inevitable inter-network disputes), with individual users voting with their electronic feet to join the particular systems they find most congenial. Within this model multiple network confederations could emerge. Each may have individual “constitutional” principles — some permitting and some prohibiting, say, anonymous communications, others imposing strict rules regarding redistribution of information and still others allowing freer movement — enforced by means of electronic fences prohibiting the movement of information across confederation boundaries.

Digital liberalism is clearly an informal governance model and for this reason has its attractions for those who enjoyed the free-wheeling approach to the internet in the early 1990s. It advocates almost pure private governance, with public institutions playing a role only in so much as they validate the existence and independence of cyber-based governance processes and institutions. Governance is principally to be achieved by technical solutions rather than legal process and occurs at a global rather than national level. Digital liberalism is very much the antithesis of the digital realist school and has been one of the two driving forces that has characterised the internet governance debate in the last decade.

Cyberanarchism – John Perry Barlow

In 1990, the FBI were involved in a number of actions against a perceived “computer security threat” posed by a Texas role-playing game developer named Steve Jackson. Following this, John Perry Barlow and Mitch Kapor formed the Electronic Frontier Foundation. Its mission statement says that it was “established to help civilize the electronic frontier; to make it truly useful and beneficial not just to a technical elite, but to everyone; and to do this in a way which is in keeping with our society’s highest traditions of the free and open flow of information and communication”.

One of Barlow’s significant contributions to thinking on internet regulation was the article “A Declaration of the Independence of Cyberspace” which, although idealistic in expression and content, eloquently expresses a point of view held by many regarding efforts to regulate cyberspace. The declaration followed the passage of the Communications Decency Act. In “The Economy of Ideas: Selling Wine without Bottles on the Global Net”, Barlow challenges assumptions about intellectual property in the digital online environment. He suggests that the nature of the internet environment means that different legal norms must apply. While the theory has its attractions, especially for the young and the idealistic, the fact of the matter is that “virtual” actions are grounded in the real world, are capable of being subject to regulation and, subject to jurisdiction, are capable of being subject to sanction. Indeed, we need only look at the Digital Millennium Copyright Act (US) and the Digital Agenda Act 2000 (Australia) to gain a glimpse of how, when confronted with reality, Barlow’s theory dissolves.

Regulatory Assumptions

In understanding how regulators approach the control of internet content, one must first understand some of the assumptions that appear to underlie any system of data network regulation.

First and foremost, sovereign states have the right to regulate activity that takes place within their own borders. This right to regulate is moderated by certain international obligations. Of course there are certain difficulties in identifying the exact location of certain actions, but the internet only functions at the direction of the persons who use it. These people live, work, and use the internet while physically located within the territory of a sovereign state and so it is unquestionable that states have the authority to regulate their activities.

A second assumption is that a data network infrastructure is critical to the continued development of national economies. Data networks are a regular business tool like the telephone. The key to the success of data networking infrastructure is its speed, widespread availability, and low cost. If this last point is in doubt, one need only consider that the basic technology of data networking has existed for more than 20 years. The current popularity of data networking, and of the internet generally, can be explained primarily by the radical lowering of costs related to the use of such technology. A slow or expensive internet is no internet at all.

The third assumption is that international trade requires some form of international communication. As more communication takes place in the context of data networking, then continued success in international trade will require sufficient international data network connections.

The fourth assumption is that there is a global market for information. While it is still possible to internalise the entire process of information gathering and synthesis within a single country, this is an extremely costly process. If such expensive systems represent the only source of information available it will place domestic businesses at a competitive disadvantage in the global marketplace.

The final assumption is that unpredictability in the application of the law or in the manner in which governments choose to enforce the law will discourage both domestic and international business activity. In fashioning regulations for the internet, it is important that the regulations are made clear and that enforcement policies are communicated in advance so that persons have adequate time to react to changes in the law.

Concluding Thoughts

Governance and the Properties of the Digital Paradigm

Regulating or governing cyberspace faces challenges that lie within the properties or affordances of the Digital Paradigm. To begin with, territorial sovereignty concepts, which have been the basis for most regulatory or governance activity, rely on physical and defined geographical realities. By its nature, a communications system like the Internet challenges that model. Although the Digital Realists assert that effectively nothing has changed, and that is true to a limited extent, the governance functions that can be exercised are applicable only to that part of cyberspace that sits within a particular geographical space. Because the Internet is a distributed system it is impossible for any one sovereign state to impose its will upon the entire network. It is for this reason that some nations are setting up their own networks, independent of the Internet. The perception may be that the Internet is controlled by the US, but the reality is that with nationally based “splinternets” sovereigns have a greater ability to assert control over the network, both in terms of the content layer and the various technical layers beneath that make up the medium. The distributed network presents the first challenge to national or territorially based regulatory models.

Of course aspects of sovereign power may be ceded by treaty or by membership of international bodies such as the United Nations. But does, say, the UN have the capacity to impose a worldwide governance system over the Internet? True, it created the IGF, but that organisation has no power and is a multi-stakeholder policy think tank. Any attempt at a global governance model requires international consensus and, as the ITU meeting in Dubai in December 2012 demonstrated, that is not forthcoming at present.

Two other affordances of the Digital Paradigm challenge the establishment of traditional regulatory or governance systems: continuing disruptive change and permissionless innovation. The very nature of the legislative process is measured. Often it involves cobbling together a consensus. All of this takes time, and by the time there is a crystallised proposition the mischief that the regulation is trying to address either no longer exists or has changed or taken another form. The now limited usefulness (and therefore effectiveness) of the provisions of s.122A – P of the New Zealand Copyright Act 1994 demonstrates this proposition. Furthermore, the nature of the legislative process, involving reference to Select Committees and the prioritisation of other legislation within the time available in a Parliamentary session, means that a “swift response” to a problem is very rarely possible.

Permissionless innovation adds to the problem. As long as it continues, and there is no sign that the inventiveness of the human mind is likely to slow down, developers and software writers will continue to change the digital landscape, meaning that the target of a regulatory system may be continually moving and that certainty of law, a necessity in any society that operates under the Rule of Law, may be compromised. The file sharing provisions of the New Zealand Copyright Act again provide an example. The definition of file sharing is restricted to a limited number of software applications – most obviously BitTorrent. Workarounds such as virtual private networks and magnet links, along with anonymisation proxies, fall outside the definition. In addition, the definition addresses sharing and does not catch a person who downloads but does not share by uploading infringing content.

Associated with disruptive change and permissionless innovation are some other challenges to traditional governance thinking. Participation and interactivity, along with exponential dissemination, emphasise the essentially bottom-up, participatory nature of the Internet ecosystem. Indeed this is reflected in the quality of permissionless innovation, where any coder may launch an app without any regulatory sign-off. The Internet is perhaps the greatest manifestation of democracy that there has been. It is the Agora of Athens on a global scale, a cacophony of comment, much of it trivial, but the fact is that everyone has the opportunity to speak and potentially to be heard. Spiro Agnew’s “silent majority” need be silent no longer. The events of the Arab Spring showed the way in which the Internet can be used in the face of oppressive regimes to motivate populaces. It seems unlikely that an “undemocratic” regulatory regime could be put in place absent the “consent of the governed”, and despite the usual level of apathy that occurs in political matters it seems unlikely that, given its participatory nature, netizens would tolerate such interference.

Perhaps the answer to the issue of Internet Governance is already apparent – a combination of Lessig’s Code is Law and the technical standards organisations that actually make the Internet work, such as ISOC, the IETF and ICANN. Much criticism has been levelled at ICANN’s lack of accountability, but in many respects similar issues arise with the IETF and the IAB, dominated as they are by groups of engineers. But in the final analysis, perhaps this is the governance model that is the most suitable. The objective of engineers is to make systems work at the most efficient level. Surely this is the sole objective of any regulatory regime. Furthermore, governance by technicians, if it can be called that, contains safeguards against political, national or regional capture. By all means, local governments may regulate content. But that is not the primary objective of Internet governance. Internet governance addresses the way in which the network operates. And surely that is an engineering issue rather than a political one.

 The Last Word

Perhaps the last word on the general topic of internet regulation should be left to Tsutomu Shimomura, a computational physicist and computer security expert who was responsible for tracking down the hacker Kevin Mitnick, an episode he recounted in the excellent book Takedown:

The network of computers known as the internet began as a unique experiment in building a community of people who shared a set of values about technology and the role computers could play in shaping the world. That community was based on a shared sense of trust. Today, the electronic walls going up everywhere on the Net are the clearest proof of the loss of that trust and community. It’s a great loss for all of us.

Back to the Future – Google Spain and the Restoration of Partial and Practical Obscurity

Arising from the pre-digital paradigm are two concepts that had important implications for privacy. Their continued validity as a foundation for privacy protection has been challenged by the digital paradigm. The terms are practical and partial obscurity which are both descriptive of information accessibility and recollection in the pre-digital paradigm and of a challenge imposed by the digital paradigm, especially for privacy.  The terms, as will become apparent, are interrelated.

Practical obscurity refers to the quality of availability of information which may be of a private or public nature[1].  Such information is usually in hard copy format, may be indexed, is in a central location or locations, is frequently location-dependent in that the information that is in a particular location will refer only to the particular area served by that location, requires interaction with officials or bureaucrats to locate the information and, finally, in terms of accessing the information, requires some knowledge of the particular file within which the information source lies. Practical obscurity means that information is not indexed on key words or key concepts but generally is indexed on the basis of individual files or in relation to a named individual or named location.  Thus, it is necessary to have some prior knowledge of information to enable a search for the appropriate file to be made.

Partial obscurity addresses information of a private nature which may earlier have been in the public arena, either in a newspaper, a television or radio broadcast or some other form of mass media communication. The information communicated is, at a later date, recalled in part but, as a result of the inability of memory to retain all the detail of all the information an individual has received, the detail has become subsumed. Thus, a broad sketch of the information remains while the details become obscure, leaving only the major heads of the information available in memory, hence the term partial obscurity. To recover particulars of the information will require resort to film, video, radio or newspaper archives, thus bringing into play the concept of practical obscurity. Partial obscurity may enable information which is subject to practical obscurity to be obtained more readily, because some of the informational references enabling the location of the practically obscure information can be provided.

The Digital Paradigm and Digital Information Technologies challenge these concepts. I have written elsewhere about the nature of the underlying properties or qualities of the digital medium that sits beneath the content or the “message”. Peter Winn has made the comment “When the same rules that have been worked out for the world of paper records are applied to electronic records, the result does not preserve the balance worked out between the competing policies in the world of paper records, but dramatically alters that balance.”[2]

A property present in digital technologies and very relevant to this discussion is that of searchability. Digital systems allow the retrieval of information with a search utility that operates “on the fly” and may produce results that are more comprehensive than a mere index. The level of analysis may also be deeper than information drawn from the text itself. Writing styles and the use of language or “stock phrases” may be examined, allowing a more penetrating and efficient analysis of the text than was possible in print.
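By way of illustration only – nothing in this discussion turns on implementation detail – a minimal Python sketch of the kind of “on the fly” keyword search that digital storage makes possible might look like the following. The folder name and search term are hypothetical; the point is simply that any word in any document becomes an entry point, without prior knowledge of which file to ask for.

# A minimal sketch of "on the fly" full-text search across digitised records.
# In the pre-digital paradigm the same query would require knowing which file
# or index card to consult; here the content itself is searchable.
from pathlib import Path

def search_records(folder: str, term: str) -> list[tuple[str, str]]:
    """Return (file name, matching line) pairs for every occurrence of term."""
    hits = []
    for record in Path(folder).glob("*.txt"):             # hypothetical archive of text records
        for line in record.read_text(encoding="utf-8").splitlines():
            if term.lower() in line.lower():               # match on content, not on a file name
                hits.append((record.name, line.strip()))
    return hits

# e.g. locate every archived record mentioning a particular name, in seconds
for name, line in search_records("archive", "Costeja"):
    print(name, ":", line)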

The most successful search engine is Google, which has been available since 1998. So pervasive and popular is Google’s presence that modern English has acquired the verb “to Google”, meaning “To search for information about (a person or thing) using the Google search engine” or “To use the Google search engine to find information on the Internet”.[3] The ability to locate information using search engines returns us to the print-based properties of fixity and preservation and also enhances the digital property of “the document that does not die”.

A further property presented by digital systems is that of accessibility. If one has the necessary equipment – a computer, a modem/router and an internet connection – information is accessible to an extent not possible in the pre-digital environment. In that earlier paradigm, information was located across a number of separate media. Some had the preservative quality of print. Some, such as television or radio, required personal attendance at a set time. In some cases information might be located in a central repository such as a library or archive. These are aspects of partial and practical obscurity.

The Internet and convergence reverse the pre-digital activity of information seeking, turning it into one of information obtaining. The inquirer need not leave his or her home or office and go to another location where the information may be; the information is delivered via the Internet. As a result, with the exception of the time spent locating the information via Google, more time can be spent considering, analysing or following up the information. Although this may be viewed as an aspect of information dissemination, the means of access is revolutionarily different.

Associated with this characteristic of informational activity is the way in which the Internet enhances the immediacy of information. Not only is the inquirer no longer required to leave his or her home or place of work, but the information can be delivered at a speed limited only by the download speed of an internet connection. Information which might once have involved a trip to a library, a search through index cards and a perusal of a number of books or articles may now, by means of the Internet, take a few keystrokes and mouse clicks and a few seconds to be presented on screen.

This enhances our expectations about the access to and availability of information. We expect the information to be available; if Google can’t locate it, it probably doesn’t exist online. If the information is available it should be presented to us in seconds. Although material sought from Wikipedia may be information-rich, one of the most common complaints about accessibility is the time that it takes to download onto a user’s computer. Yet in the pre-digital age a multi-contributor information resource (an encyclopedia) could only be located at a library, and the time taken to access that information could be measured in hours depending upon the location of the library and the efficiency of the transport system used.

Associated with accessibility of information is the fact that it can be preserved by the user. The video file can be downloaded. The image or the text can be copied. Although this has copyright implications, substantial quantities of content are copied and are preserved by users, and frequently may be employed for other purposes such as inclusion in projects or assignments or academic papers.  The “cut and paste” capabilities of digital systems are well known and frequently employed and are one of the significant consequences of information accessibility that the Internet allows.

The “Google Spain” Decision and the “Right to Be Forgotten”

The decision of the European Court of Justice in Google Spain SL, Google Inc. v Agencia Española de Protección de Datos (AEPD), Mario Costeja González has the potential to change significantly the informational landscape enabled by digital technologies. I do not intend to analyse the entire decision but rather focus on one aspect of it – the discussion about the so-called “right to be forgotten.” The restrictions placed on Google and other search engines, as opposed to the provider of the particular content, demonstrate a significant and concerning inconsistency of approach.

The complaint by Mr Costeja González was this. When an internet user entered his name in the Google search engine, he or she would obtain links to two pages of the La Vanguardia newspaper, of 19 January and 9 March 1998 respectively. Those publications contained an announcement mentioning Mr Costeja González’s name in connection with a real-estate auction arising from attachment proceedings for the recovery of social security debts.

Mr González requested, first, that La Vanguardia be required either to remove or alter those pages so that the personal data relating to him no longer appeared or to use certain tools made available by search engines in order to protect the data.

Second, he requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that they ceased to be included in the search results and no longer appeared in the links to La Vanguardia. Mr González stated in this context that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.

The effect of the decision is that the Court was prepared to allow the particular information – the La Vanguardia report – to remain. The Court specifically did not require that material to be removed, even though the argument advanced in respect of the claim against Google was essentially the same – the attachment proceedings had been fully resolved for a number of years and reference to them was now entirely irrelevant. What the Court did was make it very difficult, if not almost impossible, for a person to locate the information with ease.

The Court’s exploration of the “right to be forgotten” was collateral to its main analysis about privacy, yet the “right to be forgotten” was developed as an aspect of privacy – a form of gloss on fundamental privacy principles. The issue was framed in this way: should the various statutory and directive provisions be interpreted as enabling Mr González to require Google to remove, from the list of results displayed following a search made for his name, links to web pages published lawfully by third parties and containing true information relating to him, on the ground that that information may be prejudicial to him or that he wishes it to be “forgotten” after a certain time? It was argued that the “right to be forgotten” was an element of Mr González’s privacy rights which overrode the legitimate interests of the operator of the search engine and the general interest in freedom of information.

The Court observed that even initially lawful processing of accurate information may, in the course of time, become incompatible with the privacy directive where that information is no longer necessary in the light of the purposes for which it was originally collected or processed. That is so in particular where the data appear to be inadequate, irrelevant or no longer relevant, or excessive in relation to those purposes and in the light of the time that has elapsed.

What the Court is saying is that notwithstanding that information may be accurate or true, it may no longer be sufficiently relevant and as a result be transformed into information which is incompatible with European privacy principles. The original reasons for the collection of the data may, at a later date, no longer pertain. It follows from this that individual privacy requirements may override any public interest that may have been relevant at the time that the information was collected.

In considering requests to remove links it was important to consider whether a data subject like Mr González had a right that the information relating to him personally should, at a later point in time, no longer be linked to his name by a list of results displayed following a search based on his name. In this connection, it is not necessary that the information be prejudicial to the “data subject”. The information may be quite neutral in its effect. The criterion appears to be one of relevance at a later date.

Furthermore the privacy rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name.

One has to wonder about the use of language in this part of the decision. Certainly, the decision is couched in the very formalised and somewhat convoluted style that one would associate with a bureaucrat rather than a judge articulating reasons for a decision. But what does the Court mean when it says “as a rule”? Does it have the vernacular meaning of “usually”, or does it mean what it says – that the rule is that individual privacy rights override the economic interests of the search engine operator and the interest of the general public in being able to locate information? If the latter interpretation is correct, that is a very wide-ranging rule indeed.

However, the Court continued, that would not be the case if it appeared, for particular reasons, such as the role played by the data subject in public life, that the interference with his fundamental rights is justified by the preponderant interest of the general public in having, on account of inclusion in the list of results, access to the information in question.

Thus if a person has a public profile, for example in the field of politics, business or entertainment, there may be a higher public interest in having access to information.

Finally, the Court looked at the particular circumstances of Mr González. The information reflected upon his private life and its initial publication was some 16 years earlier. Presumably the fact of attachment proceedings and a real estate auction for the recovery of social security debts was no longer relevant within the context of Mr González’s life at the time of the complaint. Thus the Court held that Mr González had established a right that the information should no longer be linked to his name by means of such a list.

“Accordingly, since in the case in point there do not appear to be particular reasons substantiating a preponderant interest of the public in having, in the context of such a search, access to that information, a matter which is, however, for the referring court to establish, the data subject may require those links to be removed from the list of results.”

There is an interesting comment in this final passage. The ECJ decision is on matters of principle. It defines tests which the referring Court should apply. Thus the referring Court still has to consider on the facts whether there are particular reasons that may substantiate a preponderant public interest in the information, although the ECJ stated that it did not consider such facts to be present.

Matters Arising

There are a number of issues that arise from this decision. The reference to the “right to be forgotten” is made at an early stage in the discussion but the use of the phrase is not continued. It is developed as an aspect of privacy within the context of the continued use of data acquired for a relevant purpose at one point in time, but the relevance of which may not be so crucial at a later point in time. One of the fundamental themes underlying most privacy laws is that of collection and retention of data for a particular purpose. The ECJ has introduced an element of temporal relevance into that theme.

A second issue restates what I said before. The information about the attachment proceedings and real estate sale which Mr González faced in 1998 was still “at large” on the Internet. In the interests of a consistent approach, an order should have been made taking that information down. It was that information that was Mr González’s concern. Google was merely a data processor that made it easy to access that information. So the reference may no longer appear in a Google search, but the underlying and now “irrelevant” information still remains.

A third issue relates to access to historical information and to primary data. Historians value primary data. Letters, manuscripts, records and reports from times gone by allow us to reconstruct the social setting within which people carried out their daily lives and against which the great events of the powerful and the policy makers took place. One only has to attempt a research project covering a period, say, four hundred years ago to understand the huge problems that may be encountered as a result of gaps in information retained largely if not exclusively in manuscript form, most of which is unindexed. A search engine such as Google aids in the retrieval of relevant information. And it is a fact that social historians rely on the “stories” of individuals to illustrate a point or justify a hypothesis. The removal of references to these stories, or of the primary data itself, will be a sad loss to historians and social science researchers. What is concerning is that it is the “data subject” who is going to determine what the historical archive will contain – at least from an indexing perspective.

A fourth issue presents something of a conundrum. Imagine that A had information published about him 20 years ago regarding certain business activities that may have been controversial. Assume that 20 years later A has put all that behind him and is a respected member of the community, and that his activities in the past bear no relevance to his present circumstances. Conceivably, following the approach of the ECJ, he might require Google to remove results referring to those events from searches on his name. Now assume a year or so later that A once again becomes involved in a controversial business activity. Searches on his name would reveal the current controversy, but not the earlier one. His earlier activities would remain under a shroud – at least as far as Google searches are concerned. Yet it could validly be argued that his earlier activities are very relevant in light of his subsequent actions. How do we get that information restored to the Google search results? Does a news media organisation which has its own information resources, and thus may have some “institutional memory” of the earlier event, go to Google and request restoration of the earlier results?

The example I have given demonstrates how relevance may be a dynamic beast and may be a rather uncertain basis for something as elevated as a right and certainly as a basis for allowing a removal of results from a search engine as a collateral element of a privacy right.

Another interesting conundrum is presented for Mr González himself. By instituting proceedings he has highlighted the very problem that he wished to have removed from the search results. To make matters worse for Mr González and his desire for the information about his 1998 affairs to remain private, the decision of the ECJ has been the subject of wide-ranging international comment. The ECJ makes reference to his earlier difficulties, and given that the timing of those difficulties is a major consideration in the Court’s assessment of relevance, perhaps those difficulties have taken on a new and striking relevance in the context of the ECJ’s decision – a relevance which might well eliminate any privacy interest he would have had but for the case. If Mr González wanted his name and affairs to remain difficult to find, his efforts have had the opposite effect.

Conclusion

But there are other aspects of the decision that are more fundamental for the communication of information and for the rights to receive and impart information which are aspects of freedom of expression. What the decision does is restore the pre-digital concepts of partial and practical obscurity. The right to be forgotten will only be countered by the ability to be remembered, and no less a person than Sir Edward Coke in 1600 described memory as “slippery”. One’s recollection of a person or an event may change over a period of time. The particular details of an event congeal into a generalised recollection. Often the absence of detail will result in a misinterpretation of the event.

Perhaps the gloomiest observation about the decision is its potential to emasculate the promise of the Internet and one of its greatest strengths – the searchability of information – on the basis of privacy premises that were developed in the pre-Internet age, when privacy concerns involved the spectre of totalitarian state mass data collection on every citizen. In many respects the Internet presents a different scenario, involving the gathering and availability of data frequently provided by the “data subject”, and the properties and qualities of digital technologies have remoulded our approaches to information and our expectations of it. The values underpinning pre-digital privacy expectations have undergone something of a shift in the “Information Age”, although there are occasional outraged outbursts at incidents of state-sponsored mass data gathering. One wonders whether the ECJ is tenaciously hanging on to pre-digital paradigm data principles, taking us back to a pre-digital model of practical and partial obscurity in the hope that it will prevail for the future. Or perhaps in the new Information Age we need to think again about the nature of privacy in light of the underlying qualities and properties of the Digital Paradigm.

 

[1] The term “practical obscurity” was used in US Department of Justice v Reporters Committee for Freedom of the Press 489 US 749 (1989).

[2] Peter A. Winn, Online Court Records: Balancing Judicial Accountability and Privacy in an Age of Electronic Information (2004) 79 Wash. L. Rev. 307, 315.

[3] Oxford English Dictionary

Towards an Internet Bill of Rights

 

Tim Berners-Lee, in an article in the Guardian of 12 March 2014, building on his earlier comment (reported in the Guardian of 26 June 2013) that the Internet should be safeguarded from being controlled by governments or large corporations, claimed that an online “Magna Carta” is needed to protect and enshrine the independence of the internet. His argument is that the internet has come under increasing attack from governments and corporate influence. Although no examples were cited, this has been a developing trend. The comments by Nicolas Sarkozy at the G8 meetings in 2011, and the unsuccessful attempts by Russia, China and other nations via the ITU at the 2012 World Conference on International Telecommunications to establish wider governance and control of the internet from a national government point of view, provide examples. Sarkozy’s comments were rejected by British Prime Minister David Cameron and the then United States Secretary of State, Hillary Clinton. More recently, on 29 April 2014, Russia’s Parliament approved a package of sweeping restrictions on the Internet and blogging. Clearly there is an appetite for greater control of the internet by governments and, in the opinion of Berners-Lee, this must be resisted. He considers that what is needed is a global constitution or a Bill of Rights. He suggests that people generate a digital Bill of Rights for each country – a statement of principles that he hopes will be supported by public institutions, government officials and corporations. I should perhaps observe that what is probably intended is an Internet Bill of Rights rather than a Digital one. I say this because it could well be difficult to apply some concepts to all digital technologies, some of which have little to do with the Internet.

The important point that Berners-Lee makes is that there must be a neutral internet and that there must be certainty that it will remain so. Without an open or neutral internet there can be no open government, no good democracy, no good healthcare, no connected communities and no diversity of culture.  By the same token Berners-Lee is of the view that net neutrality is not just going to happen. It requires positive action.

But it is not only direct governmental control of the Internet that concerns Berners-Lee. An example of indirect government interference with the Internet, and of challenges to individuals’ use of the new communications technology, is provided by the activities of the NSA and GCHQ as revealed by the Snowden disclosures. There have been attempts to undermine encryption and to circumvent security tools, attempts which pose a challenge to the individual’s liberty to communicate frankly and openly and without State surveillance.

What Would An On-Line “Magna Carta” Address?

According to Berners-Lee, among the issues that would need to be addressed by an online “Magna Carta” would be those of privacy, free speech and responsible anonymity together with the impact of copyright laws and cultural-societal issues around the ethics of technology.  He freely acknowledges that regional regulation and cultural sensitivities would vary.  “Western democracy” after all is exactly that and its tenets, whilst laudable to its proponents, may not have universal appeal.

What is really required is a shared document of principle that could provide an international standard not so much for the values of Western democracy but for the values and importance that underlie an open Internet.

One of the things that Berners-Lee is keen to see changed is the connection between the US Department of Commerce and the internet addressing system – the IANA contract which controls the database of all domain names. Berners-Lee’s view was that the removal of this link, if one will forgive the pun, was long overdue and that the United States government could not have a place in running something which is non-national. He observed that there was momentum towards that uncoupling but that there should be a continued multi-stakeholder approach, one where governments and corporates are kept at arm’s length. As it happened, within a week or so of Berners-Lee’s expression of these views the United States government advised that it was going to decouple its involvement with the addressing system.

Another of Berners-Lee’s concerns was the “balkanisation” of the internet, whereby countries or organisations would carve up digital space to work under their own rules, be it for censorship, regulation or commerce. Following the Snowden revelations there were indeed discussions along this line, with various countries suggesting a separate national “internet” to avoid US intrusion into the communications of their citizens. This division of a global communications infrastructure into one based upon national boundaries is anathema to the concept of an open internet and quite contrary to the views expressed by Mr Berners-Lee.

Is This New?

The idea of some form of Charter or set of principles that limits or defines the extent of potential governmental interference in the Internet is not new. Perhaps what is remarkable is that Berners-Lee, who has been apolitical and concerned primarily with engineering issues surrounding the Internet and the World Wide Web, has, since 2013, spoken out on concerns regarding the future of the Internet and fundamental governance issues.

Governing the internet is a challenging undertaking. It is a decentralised, global environment, so governance mechanisms must account for many varied legal jurisdictions and national contexts. It is an environment which is evolving rapidly – legislation cannot keep pace with technological advances, and risks undermining future innovation. And it is shaped by the actions of many different stakeholders including governments, the private sector and civil society.

These qualities mean that the internet is not well suited to traditional forms of governance such as national and international law. Some charters and declarations have emerged as an alternative, providing the basis for self-regulation or co-regulation and helping to guide the actions of different stakeholders in a more flexible, bottom-up manner. In this sense, charters and principles operate as a form of soft law: standards that are not legally binding but which carry normative and moral weight.

Dixie Hawtin in her article “Internet Charters and Principles: Trends and Insights” summarises some of the steps that have been taken:

“Civil society charters and declarations

John Perry Barlow’s 1996 Declaration of Cyberspace Independence is one of the earliest and most famous examples. Barlow sought to articulate his vision of the internet as a space that is fundamentally different to the offline world, in which governments have no jurisdiction. Since then civil society has tended to focus on charters which apply human rights standards to the internet, and which define policy principles that are seen as essential to fulfilling human rights in the digital environment. Some take a holistic approach, such as the Association for Progressive Communications’ Internet Rights Charter (2006) and the Internet Rights and Principles Coalition’s (IRP) Charter of Human Rights and Principles for the Internet (2010). Others are aimed at distinct issues within the broader field, for instance, the Electronic Frontier Foundation’s Bill of Privacy Rights for Social Networks (2010), the Charter for Innovation, Creativity and Access to Knowledge (2009), and the Madrid Privacy Declaration (2009).

Initiatives targeted at the private sector

The private sector has a central role in the internet environment through providing hardware, software, applications and services. However, businesses are not bound by the same confines as governments (including international law and electorates), and governments are limited in their abilities to regulate businesses due to the reasons outlined above. A growing number of principles seek to influence private sector activities. The primary example is the Global Network Initiative, a multi-stakeholder group of businesses, civil society and academia which has negotiated principles that member businesses have committed themselves to follow to protect and promote freedom of expression and privacy. Some initiatives are developed predominantly by the private sector (such as the Aspen Institute International Digital Economy Accords which are currently being negotiated); others are a result of co-regulatory efforts with governments and intergovernmental organisations. The Council of Europe, for instance, has developed guidelines in partnership with the online search and social networking sectors. This is part of a much wider trend of initiatives seeking to hold companies to account to human rights standards in response to the challenges of a globalised world where the power of the largest companies can eclipse that of national governments. Examples of the wider trend include the United Nations Global Compact, and the Special Rapporteur on human rights and transnational corporations’ Protect, Respect and Remedy Framework.

 Intergovernmental organisation principles

There are many examples of principles and declarations issued by intergovernmental organisations, but in the past year a particularly noticeable trend has been the emergence of overarching sets of principles. The Organisation for Economic Co-operation and Development (OECD) released a Communiqué on Principles for Internet Policy Making in June 2011. The principles seek to provide a reference point for all stakeholders involved in internet policy formation. The Council of Europe has created a set of Internet Governance Principles which are due to be passed in September 2011. The document contains ten principles (including human rights, multi-stakeholder governance, network neutrality and cultural and linguistic diversity) which member states should uphold when developing national and international internet policies.

National level principles

At the national level too, some governments have turned to policy principles as an internet governance tool. Brazil has taken the lead in this area through its multi-stakeholder Internet Steering Committee, which has developed the Principles for the Governance and Use of the Internet – a set of ten principles including freedom of expression, privacy and respect for human rights. Another example is Norway’s Guidelines for Internet Neutrality (2009) which were developed by the Norwegian Post and Telecommunications Authority in collaboration with other actors such as internet service providers (ISPs) and consumer protection agencies.”

 

A Starting Point – Initial Thoughts.

So what would be a starting point for the development of an internet or digital bill of rights?

Traditionally the “Bill of Rights” concept has been to act as a buffer between overweening government power on the one hand and individual liberties on the other. The first attempt at a form of Bill of Rights occurred at the end of the English Revolution (1642–1689) and imposed limits upon the Sovereign’s power.

The Age of Enlightenment and much of the philosophical thinking that took place in the late 17th and early 18th centuries resulted in statements or declarations of rights by the American colonies – the Declaration of Independence, the first ten Amendments to the United States Constitution (referred to as the Bill of Rights) – and, following the French Revolution, the 1789 Declaration of the Rights of Man and the Citizen.

An essential characteristic of these statements was to define and restrict the interference of the State in the affairs of individuals and to guarantee certain freedoms and liberties. It seems to me that an Internet Bill of Rights would set out and define individual expectations of liberty and non-interference on the part of the State within the context of the communications media made available by the Internet.

But the function of charters has developed beyond the Age of Enlightenment approach, especially with the development of global and transnational institutions. Hawtin notes that:

“Civil society uses charters and principles to raise awareness about the importance of protecting freedom of expression and association online through policy and practice. The process of drafting these texts provides a valuable platform for dialogue and networking. For example, the IRP’s Charter of Human Rights and Principles for the Internet has been authored collaboratively by a wide range of individuals and organisations from different fields of expertise and regions of the world. The Charter acts as an important space, fostering dialogue about how human rights apply to the internet and forging new connections between people.

Building consensus around demands and articulating these in inspirational charters provide civil society with common positions and tools with which to push for change. This is demonstrated by the number of widely supported civil society statements which refer to existing charters issued over the past year. The Civil Society Statement to the e-G8 and G8, which was signed by 36 different civil society groups from across the world, emphasises both the IRP’s 10 Internet Rights and Principles (derived from its Charter of Human Rights and Principles for the Internet) and the Declaration of the Assembly on the Right to Communication. The Internet Rights are Human Rights statement submitted to the Human Rights Council was signed by more than 40 individuals and organisations and reiterates APC’s Internet Rights Charter and the IRP’s 10 Internet Rights and Principles.

As charters and principles are used and reiterated, so their standing as shared norms increases. When charters and statements are open to endorsement by different organisations and individuals from around the world, this helps to give them legitimacy and demonstrate to policy makers that there is a wide community of people who are demanding change.

While the continuance of practices which are detrimental to internet freedom indicates that these initiatives have not, so far, been entirely successful, there are signs of improvements. Groups like APC and the IRP have successfully pushed human rights up the agenda in the Internet Governance Forum. Other groups are hoping to emulate these efforts to increase awareness about human rights in other forums. The At-Large Advisory Committee, for instance, is in the beginning stages of creating a charter of rights for use within the Internet Corporation for Assigned Names and Numbers (ICANN).”

Part of the problem with the “Charter Approach” is that there may be a proliferation of such instruments or proposals, which may have the effect of diluting the moves for a universal approach. On the other hand, charters or statements of principle of a high quality, with an acceptance that lends legitimacy, may be more likely to attract adoption and advocacy by a growing majority of stakeholders. Some charters may be applicable to local circumstances. Those with a specific international orientation will attract a different audience and advocacy approach. As I understand it, Berners-Lee is suggesting a combination of the two – an international statement of principle incorporated into local law recognising differences in cultural and customary norms. In some respects his approach has an air of the EU model, whereby an EU requirement is adopted into local law – often with a shift in emphasis that takes into account local conditions.

However, what must be remembered is the difficulty with power imbalances, where economically and politically powerful groups may drive a local (or even international) process. What is required is a meaningful multi-stakeholder approach that recognises equality of arms and influence. Hawtin also observes that with the proliferation of charters and principles, governments and corporates may “cherry pick” those standards which accord with their own interests. Voluntary standards also present difficulties of engagement and enforcement.

A Starting Point – A Possible Framework

Because the Internet is primarily a means of communication of information – it is not referred to as ICT or Information and Communication Technology for nothing – what is being proposed is an extension or redefinition of the rights of freedom of expression guaranteed in national and international instruments such as the First Amendment to the United States Constitution, section 14 of the New Zealand Bill of Rights Act 1990, section 2 of the Canadian Charter of Rights and Freedoms and Article 19 of the Universal Declaration of Human Rights, to mention but a few. Thus an Internet Bill of Rights would have to be crafted as guaranteeing aspects or details of the freedom of expression, although the freedom of expression right also has attached to it other collateral rights such as the right to education, the right to freedom of association (in the sense of communicating with those with whom one is associated), the right to full participation in social, cultural and political life and the right to social and economic development. Perhaps a proper focus for attention should be upon the Internet as a means of facilitating the freedom of expression right.

This approach was the subject of the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank LaRue, to the General Assembly of the United Nations, in August 2011.

In that Report he made the following observations:

14. The Special Rapporteur reiterates that the framework of international human rights law, in particular the provisions relating to the right to freedom of expression, continues to remain relevant and applicable to the Internet. Indeed, by explicitly providing that everyone has the right to freedom of expression through any media of choice, regardless of frontiers, articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights were drafted with the foresight to include and accommodate future technological developments through which individuals may exercise this right.

 15. Hence, the types of information or expression that may be restricted under international human rights law in relation to offline content also apply to online content. Similarly, any restriction applied to the right to freedom of expression exercised through the Internet must also comply with international human rights law, including the following three-part, cumulative criteria:

(a) Any restriction must be provided by law, which must be formulated with sufficient precision to enable an individual to regulate his or her conduct accordingly and must be made accessible to the public;

(b) Any restriction must pursue one of the legitimate grounds for restriction set out in article 19, paragraph 3, of the International Covenant, namely (i) respect of the rights or reputation of others; or (ii) the protection of national security or of public order, or of public health or morals;

 (c) Any restriction must be proven as necessary and proportionate, or the least restrictive means to achieve one of the specified goals listed above.

The issue of the potential human right of access to the Internet was covered in this way:

61. Although access to the Internet is not yet a human right as such, the Special Rapporteur would like to reiterate that States have a positive obligation to promote or to facilitate the enjoyment of the right to freedom of expression and the means necessary to exercise this right, which includes the Internet. Moreover, access to the Internet is not only essential to enjoy the right to freedom of expression, but also other rights, such as the right to education, the right to freedom of association and assembly, the right to full participation in social, cultural and political life and the right to social and economic development.

 62. Recently, the Human Rights Committee, in its general comment No. 34 on the right to freedom of opinion and expression, also underscored that States parties should take all necessary steps to foster the independence of new media, such as the Internet, and to ensure access of all individuals thereto.

 63. Indeed, given that the Internet has become an indispensable tool for full participation in political, cultural, social and economic life, States should adopt effective and concrete policies and strategies, developed in consultation with individuals from all segments of society, including the private sector as well as relevant Government ministries, to make the Internet widely available, accessible and affordable to all.

In locating an Internet Bill of Rights within the concept of the freedom of expression, one must be careful to ensure that, by defining subsets of the freedom of expression right, one does not impose limitations that may impinge upon the collateral rights identified by Mr LaRue.

Having made that observation, it is important to recall that an Internet Bill of Rights could guarantee the independence and neutrality of the means of communication – the Internet – and prohibit heavy-handed, secretive surveillance and intrusive interference with that means of communication. Whilst it is acknowledged that there is a need for meaningful laws to protect the security of citizens both individually and as a group – and Mr LaRue recognises justified limitations on the freedom of expression in areas such as child pornography, direct and public incitement to commit genocide, advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, and incitement to terrorism – such laws cannot intrude into areas such as privacy or private activity and communication.

One of the problems about regulating the Internet, or indeed preventing the regulation of the internet, is to understand how it is used by end users. In the United States, Representative Issa (R-CA) and Senator Wyden (D-OR) developed an idea for a Digital Bill of Rights based upon ten principles:

  1. Freedom – The right to a free and uncensored Internet.
  2. Openness – The right to an open, unobstructed Internet.
  3. Equality – The right to equality on the Internet.
  4. Participation – The right to gather and participate in online activities.
  5. Creativity – The right to create and collaborate on the Internet.
  6. Sharing – The right to freely share their ideas.
  7. Access – The right to access the Internet equally, regardless of who they are or where they are.
  8. Association – The right to freely associate on the Internet.
  9. Privacy – The right to privacy on the Internet.
  10. Property – The right to benefit from what they create.

 

The Issa/Wyden categories are helpful in some respects, again as a starting point. One of the most significant things about their observations lies not so much in their categorisation as in the recognition that the way the Internet is used within the wider activity of communication and social activity must be understood.

Many of the Issa/Wyden principles are in fact subsets of the right to free expression. Within the right to free expression there is a right not only to the means of expressing an opinion – described in s 14 of the New Zealand Bill of Rights Act as the right to impart information – but also the right to receive it.

The wording of the concept of “participation” in the Issa/Wyden proposal is important and in some respects reflects the LaRue concept of association within the Internet space. One must be careful, as Issa and Wyden have been, to ensure that the concepts remain applicable to the Internet space as a means of communication.

Expressions in favour of an Internet Bill of Rights have been put forward on the basis that the digital economy requires a reliable set of laws and procedures whereby individuals and corporations may do business and promote innovation. It is suggested that an Internet Bill of Rights could well establish a nation that enacted and guaranteed such rights as an innovative place within the digital environment, one which would guarantee citizens’ privacy and promote a digital economy. It may support a vision for a country as a data haven where people and businesses can have confidence that they have sovereignty over, and unfettered ownership of, their data and that it will be protected.

Stability and certainty, particularly within the commercial environment, are necessary prerequisites for flourishing commercial activity.  I wonder, however, whether or not the concept of an Internet Bill of Rights fits comfortably within the “nation state” model of a secure, predictable and certain place where people can do business.

An Internet Bill of Rights ideally would guarantee certain national and minimum standards for Internet activity that could be mirrored worldwide. Examples of digital paradigm legislation which attempt to harmonise principles transnationally may be found in New Zealand in the Electronic Transactions Act, which has its genesis in international conventions, and the Unsolicited Electronic Messages Act, where the principles applied in similar legislation in Australia favour a particular opt-in model for the continued receipt of commercial electronic messages. Legislation in the United States (the CAN-SPAM Act) favours an opt-out approach based upon constitutional imperatives surrounding the First Amendment. Differing approaches to spam control based on local legal or cultural imperatives provide a good example of the difficulty in achieving international harmonisation of national laws.

It was suggested by Issa and Wyden that it was necessary for there to be an understanding of the Internet and how it is used. I suggest that in considering an Internet Bill of Rights the enquiry must go further. Not only must there be an understanding of how the Internet is used but also of how it works, and essentially this involves a recognition of the paradigmatic differences between models of communications media and styles that existed before the Digital Age, and an understanding of the way in which the qualities, properties or, as one writer has put it, the affordances of digital technologies work.

One of the present qualities of digital technologies, and particularly of the internet, is that of “permissionless innovation” – the ability to “bolt on” to the Internet backbone an application without seeking permission from any supervising or regulatory entity. This concept is reflected in items 2, 5, 6 and 10 of the Issa/Wyden list of rights. Permissionless innovation is inherent within digital technologies only because it is the existing default position, and one which could well change depending upon the level of government interference. Thus if one were to maintain net neutrality and integrity and the importance of innovation, the concept of permissionless innovation would have to be endorsed and protected.
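Purely by way of illustration – and not as anything drawn from the proposals discussed here – a minimal Python sketch shows what “bolting on” an application looks like in practice: a few lines of standard socket code are enough to stand up a new, unregistered service on top of TCP/IP without sign-off from any supervising entity. The host, port and greeting are hypothetical.

# A minimal sketch of "permissionless innovation": a new service added on top
# of the existing Internet transport layer with no approval from any authority.
import socket

HOST, PORT = "0.0.0.0", 8080   # hypothetical address and port for the new service

def run_toy_service() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        print(f"New application listening on {HOST}:{PORT} - no permission sought")
        conn, _addr = server.accept()          # serve a single connection, then exit
        with conn:
            conn.sendall(b"Hello from a brand new, unregistered protocol\n")

if __name__ == "__main__":
    run_toy_service()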

A further matter to be considered is the way in which these various characteristics, affordances, properties or qualities impact upon human behaviour and upon expectations of information. Our current expectations relating to information – its use, availability, dynamic quality, accessibility and searchability – all impact upon our behaviours and responses within the context of the act of communication. “Information now” – an expectation of an immediate reply, an expectation of immediate access 24/7 – has developed as the result of the inherent and underlying properties of digital communication systems enabled by the Internet, whether email, instant messaging, internet telephony, Skype, mobile phone technology or otherwise.

The problem with the Issa/Wyden proposal is that it is cast within the very wide framework of guarantees for individual liberties. In this respect it reflects traditional “rights” instruments as a definition of the boundaries between the individual and the State. In addressing the Internet – a medium of communication – there are some difficulties with this approach. Of the items that they identify, those of openness, freedom and access are the ones that might be the focus of attention of an Internet Bill of Rights. The other aspects deal with issues that inhabit the content layer, yet the technological layers are the ones that are really the subject of potential threat from the State. The objective is summed up by InternetNZ, which seeks an open and uncapturable Internet. This objective recognises the medium rather than the message that it conveys. But by the same token, the medium is critical as a means of fostering the guarantee of freedom of expression.

Moving Forward

It seems to me that the proper focus of an Internet Bill of Rights is the technology that is the Internet. Berners-Lee recognises this when he refers to “net neutrality”, a term that is capable of a number of meanings. What must be guaranteed and recognised by States is that the means of communication must be left alone and should not be the subject of interference by domestic legal processes. An open and uncapturable Internet cannot be compromised by local rules governing technical standards which have worldwide application. It is perhaps this global aspect that confounds a traditional approach to Internet regulation: although it is possible for there to be local rules that interfere with Internet functionality, there should not be, given that such rules may impact upon the wider use of the Internet. Local interference with engineering or technical standards may have downstream implications for overall Internet use by those who are not subject to those local rules.

Recent efforts by the ITU to establish some form of regulatory or governance structure allowing government restriction or blocking of information disseminated via the internet, and to create a global regime of monitoring internet communications – including the demand that those who send and receive information identify themselves – would have wide-ranging implications for Internet use. The proposal would also have allowed governments to shut down the internet if they believed that it might interfere in the internal affairs of other states or that information of a sensitive nature might be shared. Although some of the proposals suggested less US control over the Internet – which is forthcoming with the disengagement of the US Department of Commerce from involvement with ICANN – it is nevertheless of concern that wider interference with Internet traffic should be seriously proposed under the umbrella of an agency whose brief is essentially directed towards the efficient functioning of communications networks rather than obstructing them.

That there is such an appetite for regulation and control present at an international forum is a matter of concern and probably underscores an increased urgency for a rights-based solution to be put in place.

There are two main areas where the Bill of Rights for the Internet could be explored. One is through the Internet Society operating as an umbrella for those that make up the Internet Ecosystem including:

Technologists, engineers, architects, creatives, organizations such as the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) who help coordinate and implement open standards.

Global and local organizations that manage resources for global addressing capabilities, such as the Internet Corporation for Assigned Names and Numbers (ICANN), including its operation of the Internet Assigned Numbers Authority (IANA) function, Regional Internet Registries (RIRs), and Domain Name Registries and Registrars.

Operators, engineers, and vendors that provide network infrastructure services such as Domain Name System (DNS) providers, network operators, and Internet Exchange Points (IXPs).

The other is the Internet Governance Forum, whose mission to “identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations” ideally encompasses discussions and recommendations around an Internet Bill of Rights. It seems to me that the development of a means by which the technical infrastructure of the Internet and the standards that underlie it – which have been in the hands of the IETF and the W3C – remain open, free and uncapturable should have some priority.

These are organisations that could properly address the question of how to maintain the neutrality and integrity of the engineering and technical aspects of the Internet – to identify and articulate, from a principled position, the technical aspects of the Internet that require protection by a statement of rights (which would be a non-interference approach), coupled with a definition of the technological means that can be employed to ensure the protection of those rights.

The objection to such a proposal would be that all power would rest with the engineers, but given that the principal objective of an engineer is to make things work, that can hardly be a bad thing. Maintaining a system in good working order is preferable to arbitrary and capricious interference with the mechanics of communication by politicians or organs of the State.

This is a project that will have to be developed carefully and analytically, to ensure that what we have now continues and is not subverted or damaged, and that the potential it may have for humanity in the future as a means of relating to one another is not compromised. It seems to me that protection of the technology is the means by which Berners-Lee’s goal of net neutrality may be maintained.

 

David Harvey

12 May 2014