Zero Draft of Content Removal Best Practices White Paper
This draft paper has been created to frame the discussion and will be made available for public comment and feedback. The draft document and the views represented here do not represent the positions of the organisations involved in the drafting.
3 September 2014
Introduction
The purpose of this white paper is to frame the discussion at several meetings between groups of Internet-focused NGOs that will lead to the creation of a set of principles for intermediary liability.
The principles that develop from this white paper are intended as a civil society contribution to help guide companies, regulators and courts, as they continue to build out the legal landscape in which online intermediaries operate. One aim of these principles is to move towards greater consistency with regards to the laws that apply to intermediaries and their application in practice.
There are three general approaches to intermediary liability discussed in much of the recent work in this area, including CDT’s 2012 report, “Shielding the Messengers: Protecting Platforms for Expression and Innovation,” which divides approaches to intermediary liability into three models: 1. Expansive Protections Against Liability for Intermediaries, 2. Conditional Safe Harbor from Liability, 3. Blanket or Strict Liability for Intermediaries.[1]
This white paper argues in the alternative that (a) the “expansive protections against liability” model is preferable, but likely not possible given the current state of play in the legal and policy space; (b) the white paper therefore supports a “conditional safe harbor from liability” model operating via a ‘notice-to-notice’ regime if possible, and a ‘notice and action’ regime if ‘notice-to-notice’ is deemed impossible; and finally (c) all of the other principles discussed in this white paper should apply to whatever model for intermediary liability is adopted, unless those principles are facially incompatible with the model that is finally adopted.
As further general background, this white paper works from the position that there are three general types of online intermediaries: Internet Service Providers (ISPs), search engines, and social networks. As outlined in the recent draft UNESCO Report (from which this white paper draws extensively):
“With many kinds of companies operating many kinds of products and services, it is important to clarify what constitutes an intermediary. In a 2010 report, the Organization for Economic Co-operation and Development (OECD) explains that Internet intermediaries “bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.”
Most definitions of intermediaries explicitly exclude content producers. The freedom of expression advocacy group Article 19 distinguishes intermediaries from “those individuals or organizations who are responsible for producing information in the first place and posting it online.” Similarly, the Center for Democracy and Technology explains that “these entities facilitate access to content created by others.” The OECD emphasizes “their role as ‘pure’ intermediaries between third parties,” excluding “activities where service providers give access to, host, transmit or index content or services that they themselves originate.” These views are endorsed in some laws and court rulings. In other words, publishers and other media that create and disseminate original content are not intermediaries. Examples of such media entities include a news website that publishes articles written and edited by its staff, or a digital video subscription service that hires people to produce videos and disseminates them to subscribers.
For the purpose of this case study we will maintain that intermediaries offer services that host, index, or facilitate the transmission and sharing of content created by others. For example, Internet Service Providers (ISPs) connect a user’s device, whether it is a laptop, a mobile phone or something else, to the network of networks known as the Internet. Once a user is connected to the Internet, search engines make a portion of the World Wide Web accessible by allowing individuals to search their database. Search engines are often an essential go-between between websites and Internet users. Social networks connect individual Internet users by allowing them to exchange messages, photos, videos, as well as by allowing them to post content to their network of contacts, or the public at large. Web hosting providers, in turn, make it possible for websites to be published and to be accessed online.”[2]
General Principles for ISP Governance – Content Removals
The discussion that follows below outlines nine principles to guide companies, government, and civil society in the development of best practices related to the regulation of online content through intermediaries, as norms, policies, and laws develop in the coming years. The nine principles are: Transparency, Consistency, Clarity, Mindful Community Policy Making, Necessity and Proportionality in Content Restrictions, Privacy, Access to Remedy, Accountability, and Due Process in both Legal and Private Enforcement. Each principle contains subsections that expand upon the theme of the principle to cover more specific issues related to the rights and responsibilities of online intermediaries, government, civil society, and users.
Principle I: Transparency
“Transparency enables users’ right to privacy and right to freedom of expression. Transparency of laws, policies, practices, decisions, rationale, and outcomes related to privacy and restrictions allow users to make informed choices with respect to their actions and speech online. As such – both governments and companies have a responsibility in ensuring that the public is informed through transparency initiatives.” [3]
Government Transparency
- In general, governments should publish transparency reports:
As part of the democratic process, the citizens of each country have a right to know how their government is applying its laws, and a right to provide feedback about the government’s legal interpretations of those laws. Thus, all governments should be required to publish online transparency reports that provide information about all requests issued by any branch or agency of government for the removal or restriction of online content. Further, governments should allow for the submission of comments and suggestions through a webform hosted on the same webpage as that government’s transparency report. There should also be some legal mechanism that requires the government to review the feedback provided by its citizens, ensure that relevant feedback is passed along to legislative bodies, and provide for action to be taken on the citizen-provided feedback where appropriate. Finally, and where possible, the raw data that constitutes each government’s transparency report should be made available online, for free, in a common file format such as .csv, so that civil society may have easy access to it for research purposes.
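As a purely illustrative sketch of what such machine-readable raw data might look like, the example below publishes and reloads a transparency dataset as plain CSV. The column names and helper functions here are assumptions made for illustration, not a format drawn from any existing government report.

```python
import csv

# Hypothetical column layout for a government content-removal transparency
# dataset; these field names are illustrative assumptions, not a mandated format.
FIELDS = [
    "request_id",        # opaque identifier, never personal information
    "date_issued",       # ISO 8601 date of the request or court order
    "issuing_body",      # agency or court that made the request
    "legal_basis",       # statute or order cited for the removal
    "content_category",  # e.g. defamation, copyright, hate speech
    "items_requested",   # number of URLs or items covered by the request
    "outcome",           # removed, restricted, rejected, pending
]

def write_report(path, rows):
    """Publish the raw data as a plain CSV that researchers can reuse."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

def load_report(path):
    """Load a published transparency CSV back into dictionaries for analysis."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

Publishing in a simple, documented format of this kind is what allows civil society to aggregate, compare, and audit removal requests across years and across governments without bespoke tooling.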
The legislative process proceeds most effectively when the government knows how the laws that it creates are applied in practice and is able to receive feedback from the public about how those laws should change, or remain the same. Relatedly, regulation of the Internet is most effective when the legislative and judicial branches are each aware of what the other is doing. For all of these reasons, governments should publish information about all of the court orders and executive requests for content removals that they send to online intermediaries. Publishing all of this information in one place necessarily requires that a single entity within the government collect the information, which will have the benefits of giving the government a holistic view of how it is regulating the Internet, encouraging dialogue between different branches of government about how best to create and enforce Internet content regulation, and encouraging dialogue between the government and its citizens about the laws that govern Internet content and their application.
Each government should maintain a public website that publishes as complete a picture as possible of the content removal requests made by any branch of that government, including the judicial branch. The availability of a public website of this type will further many of the goals and objectives discussed elsewhere in this section. The website should be biased towards high levels of detail about each request and towards disclosure that requests were made, subject only to limited exceptions for compelling public policy reasons, where the disclosure bias conflicts directly with another law, or where disclosure would reveal a user’s PII. The information should be published periodically, ideally more than once a year. The general principle should be: the more information made available, the better. On the same website where a government publishes its ‘Transparency Report,’ that government should attempt to provide a plain-language description of its various laws related to online content, to provide users notice about what content is lawful vs. unlawful, as well as to show how the laws that it enacts in the Internet space fit together. Further, and as discussed in section “b,” infra, government should provide citizens with an online feedback mechanism so that they may participate in the legislative process as it applies to online content.
Government should meet users concerned about its content policies in the online domain. Internet users, as citizens of both the Internet and their country of origin, have a natural interest in defining and defending their civil liberties online; government should meet them there to extend the democratic process to the Internet. Denying Internet users a voice in the policymaking processes that determine their rights undermines government credibility and negatively influences users’ ability to freely share information online. As such, content policies should be posted in general terms online, and users should have the ability to provide input on those policies online.
ISP Transparency
“The transparency practices of a company impact users’ freedom of expression by providing insight into the scope of restriction taking place in a specific jurisdiction. Key areas of transparency for companies include: specific restrictions, aggregate numbers related to restrictions, company-imposed regulations on content, and transparency of applicable law and regulation that the service provider must abide by.”[4]
“Disclosure by service providers of notices received and actions taken can provide an important check against abuse. In addition to providing valuable data for assessing the value and effectiveness of a N&A system, creating the expectation that notices will be disclosed may help deter fraudulent or otherwise unjustified notices. In contrast, without transparency, Internet users may remain unaware that content they have posted or searched for has been removed pursuant to a notice of alleged illegality. Requiring notices to be submitted to a central publication site would provide the most benefit, enabling patterns of poor quality or abusive notices to be readily exposed.”[5] Therefore, ISPs at all levels should publish transparency reports that include:
- Government Requests
All requests from government agencies and courts should be published in a periodic transparency report, accessible on the intermediary’s website, that publishes information about the requests the intermediary received and what the intermediary did with them in the highest level of detail that is legally possible. The more information that is provided about each request, the better the understanding that the public will have about how laws that affect their rights online are being applied. That said, steps should be taken to prevent the disclosure of personal information in relation to the publication of transparency reports. Beyond redaction of personal information, however, the maximum amount of information about each request should be published, subject as well to the (ideally minimal) restrictions imposed by applicable law. A thorough Transparency Report published by an ISP or online intermediary should include information about the following categories of requests:
Court orders directed at third parties should be broken out separately from court orders directed at the applicable online intermediary in companies’ transparency reports, because merely providing aggregate numbers that do not distinguish between the two types gives users an inaccurate impression that a government is attempting to censor more content than it actually is. The rationale for including first-party court orders to remove content as a subcategory of ‘government requests’ is that a government’s judiciary speaks on behalf of the government, making determinations about what is permitted under the laws of that country. This analogy does not hold for court orders directed at third parties: when the court made its determination of legality regarding the content in question, it did not contemplate that the intermediary would remove the content. As such, the court likely did not weigh the relevant public interest and policy factors, including the importance of freedom of expression or the precedential value of its decision. Therefore, the determination does not fairly reflect an attempt by the government to censor content and should not be counted as such.
Instead, and especially considering that these third-party court orders may be the basis for a number of content removals, third-party court orders should be counted separately and presented in the company’s transparency report with a published explanation of what they are and why the company has decided to remove content pursuant to its receipt of one.
- Private-Party Requests
Private-party requests are requests to remove content that are not issued by a government agency or accompanied by a court order. Some examples of private-party requests include copyright complaints submitted pursuant to the Digital Millennium Copyright Act or complaints based on the laws of specific countries, such as laws banning Holocaust denial in Germany.
- Policy/TOS Enforcement
To give users a complete picture of the content that is being removed from the platforms that they use, corporate transparency reports should also provide information about the content that the intermediary removes pursuant to its own policies or terms of service, though there may not be a legal requirement to do so.
- User Data Requests
While this white paper is squarely focused on liability for content posted online and best practices for deciding when and how content should be removed from online services, corporate transparency reports should also provide information about requests for user data from executive agencies, courts, and others.
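To illustrate how the request categories described above might be kept distinct in an intermediary’s published data, the sketch below separates first-party court orders, third-party court orders, executive requests, private-party requests, terms-of-service enforcement, and user-data requests. The category names, record fields, and aggregation logic are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class RequestCategory(Enum):
    """Illustrative request categories for an intermediary's transparency report."""
    COURT_ORDER_FIRST_PARTY = "court order directed at the intermediary"
    COURT_ORDER_THIRD_PARTY = "court order directed at a third party"
    EXECUTIVE_REQUEST = "request from a government agency"
    PRIVATE_PARTY_REQUEST = "private-party request (e.g. a DMCA notice)"
    TOS_ENFORCEMENT = "removal under the intermediary's own policies"
    USER_DATA_REQUEST = "request for user data"

@dataclass
class ReportEntry:
    """One aggregated line in a periodic transparency report (no personal data)."""
    category: RequestCategory
    jurisdiction: str        # country whose law or court is invoked
    requests_received: int
    items_removed: int

def summarise(entries):
    """Aggregate counts per category, keeping the categories separate so that
    third-party court orders are not conflated with direct censorship demands."""
    totals = {}
    for e in entries:
        received, removed = totals.get(e.category, (0, 0))
        totals[e.category] = (received + e.requests_received,
                              removed + e.items_removed)
    return totals
```

Keeping the categories separate at the data level is what allows a published report to make the first-party/third-party distinction argued for above, rather than folding everything into a single “government requests” figure.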
Principle II: Consistency
“Uniform notice-and-action procedures should not apply horizontally to all types of illegal content. In particular, CDT believes notice-and-takedown is inappropriate for defamation and other areas of law requiring complex legal and factual questions that make private notices especially subject to abuse. Blocking or removing content on the basis of mere allegations of illegality raises serious concerns for free expression and access to information. Hosts are likely to err on the side of caution and comply with most if not all notices they receive, because evaluating notices is burdensome and declining to comply may jeopardize their protection from liability. The risk of legal content being taken down is especially high in cases where assessing the illegality of the content would require detailed factual analysis and careful legal judgments that balance competing fundamental rights and interests. Intermediaries will be extremely reluctant to exercise their own judgment when the legal issues are unclear, and it will be easy for any party submitting a notice to claim a good faith belief that the content in question is unlawful. In short, the murkier the legal analysis, the greater the potential for abuse.
To reduce this risk, removal of or disablement of access to content based on unadjudicated allegations of illegality (i.e., notices from private parties) should be limited to cases where the content at issue is manifestly illegal – and then only with necessary safeguards against abuse as described above.
CDT believes that online free expression is best served by narrowing what is considered manifestly illegal and subject to takedown upon private notice. With proper safeguards against abuse, for example, notice-and-action can be an appropriate policy for addressing online copyright infringement. Copyright is an area of law where there is reasonable international consensus regarding what is illegal and where much infringement is straightforward. There can be difficult questions at the margins – for example concerning the applicability of limitations and exceptions such as “fair use” – but much online infringement is not disputable.
Quite different considerations apply to the extension of notice-and-action procedures to allegations of defamation or other illegal content. Other areas of law, including defamation, routinely require far more difficult factual and legal determinations. There is greater potential for abuse of notice-and-action where illegality is less manifest and more disputable. If private notices are sufficient to have allegedly defamatory content removed, for example, any person unhappy about something that has been written about him or her would have the ability and incentive to make an allegation of defamation, creating a significant potential for unjustified notices that harm free expression. This and other areas where illegality is more disputable require different approaches to notice and action. In the case of defamation, CDT believes “notice” for purposes of removing or disabling access to content should come only from a competent court after full adjudication.
In cases where it would be inappropriate to remove or disable access to content based on untested allegations of illegality, service providers receiving allegations of illegal content may be able to take alternative actions in response to notices. Forwarding notices to the content provider or preserving data necessary to facilitate the initiation of legal proceedings, for example, can pose less risk to content providers’ free expression rights, provided there is sufficient process to allow the content provider to challenge the allegations and assert his or her rights, including the right to speak anonymously.”[7]
Principle III: Clarity
“Notices should include the following features:
- Specificity. Notices should be required to specify the exact location of the material – such as a specific URL – in order to be valid. This is perhaps the most important requirement, in that it allows hosts to take targeted action against identified illegal material without having to engage in burdensome search or monitoring. Notices that demand the removal of particular content wherever it appears on a site without specifying any location(s) are not sufficiently precise to enable targeted action.
- Description of alleged illegal content. Notices should be required to include a detailed description of the specific content alleged to be illegal and to make specific reference to the law allegedly being violated. In the case of copyright, the notice should identify the specific work or works claimed to be infringed.
- Contact details. Notices should be required to contain contact information for the sender. This facilitates assessment of notices’ validity, feedback to senders regarding invalid notices, sanctions for abusive notices, and communication or legal action between the sending party and the poster of the material in question.
- Standing: Notices should be issued only by or on behalf of the party harmed by the content. For copyright, this would be the rightsholder or an agent acting on the rightsholderʼs behalf. For child sexual abuse images, a suitable issuer of notice would be a law enforcement agency or a child abuse hotline with expertise in assessing such content. For terrorism content, only government agencies would have standing to submit notice.
- Certification: A sender of a notice should be required to attest under legal penalty to a good-faith belief that the content being complained of is in fact illegal; that the information contained in the notice is accurate; and, if applicable, that the sender either is the harmed party or is authorized to act on behalf of the harmed party. This kind of formal certification requirement signals to notice-senders that they should view misrepresentation or inaccuracies on notices as akin to making false or inaccurate statements to a court or administrative body.
- Consideration of limitations, exceptions, and defenses: Senders should be required to certify that they have considered in good faith whether any limitations, exceptions, or defenses apply to the material in question. This is particularly relevant for copyright and other areas of law in which exceptions are specifically described in law.
- An effective appeal and counter-notice mechanism. A notice-and-action regime should include counter-notice procedures so that content providers can contest mistaken and abusive notices and have their content reinstated if its removal was wrongful.
- Penalties for unjustified notices. Senders of erroneous or abusive notices should face possible sanctions. In the US, senders may face penalties for knowingly misrepresenting that content is infringing, but the standard for “knowingly misrepresenting” is quite high and the provision has rarely been invoked. A better approach might be to use a negligence standard, whereby a sender could be held liable for damages or attorneys’ fees for making negligent misrepresentations (or for repeatedly making negligent misrepresentations). In addition, the notice-and-action system should allow content hosts to ignore notices from senders with an established record of sending erroneous or abusive notices or allow them to demand more information or assurances in notices from those who have in the past submitted erroneous notices. (For example, hosts might be deemed within the safe harbor if they require repeat abusers to specifically certify that they have actually examined the alleged infringing content before sending a notice).”[9]
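The notice features quoted above amount to a minimum set of required fields. As a hedged illustration of how a host might screen incoming notices against those features, the sketch below defines a notice structure and a facial-validity check. The field names and validation rules are assumptions derived from the list above, not a statutory or industry-standard format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TakedownNotice:
    """Illustrative structure for a content-removal notice; fields mirror the
    features listed above and are not drawn from any particular law."""
    urls: List[str]                      # exact location(s) of the material
    description: str                     # the specific content alleged to be illegal
    law_violated: str                    # the law allegedly being violated
    sender_contact: str                  # contact details of the sender
    sender_is_harmed_party: bool         # standing: harmed party or authorized agent
    good_faith_certification: bool       # attestation under legal penalty
    considered_exceptions: bool          # limitations/exceptions (e.g. fair use) considered
    works_claimed: Optional[List[str]] = None  # copyright: specific works infringed

def validate_notice(n: TakedownNotice) -> List[str]:
    """Return a list of problems; an empty list means the notice is facially complete."""
    problems = []
    if not n.urls:
        problems.append("no specific URL identified")
    if not n.description or not n.law_violated:
        problems.append("missing description of the content or the law allegedly violated")
    if not n.sender_contact:
        problems.append("missing sender contact details")
    if not n.sender_is_harmed_party:
        problems.append("sender lacks standing (not the harmed party or its agent)")
    if not (n.good_faith_certification and n.considered_exceptions):
        problems.append("missing certification or consideration of exceptions and defenses")
    return problems
```

A check of this kind only screens for completeness; it does not and cannot decide the underlying legal question, which is why the safeguards, counter-notice procedures, and penalties described above remain necessary.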
There are legitimate reasons why an ISP may want to have policies that permit less content, and a narrower range of content, than is technically permitted under the law, such as maintaining a product that appeals to families. However, if a company is going to restrict more content than the law requires, the company should have clear policies that are published online and kept up to date, to give its users notice of what content is and is not permitted on the company’s platform. Notice to the user about the types of content that are permitted encourages her to speak freely and helps her to understand why content that she posted was taken down, if it is taken down for violating a company policy.