Student evaluations posted on the Internet described a professor at a San Francisco college as “mentally ill,” “incompetent” and a “homomaniac.”1
A commenter posing as a university professor insulted one of the professor’s students in a message posted on a student newspaper Web site.2
A printed student newspaper might be liable under traditional rules of defamation law for publishing such statements. But an online publication generally will not be liable if users leave similar messages on a comment board.
Why would an online publication be held to a lower standard of liability than a print version? A federal law passed in 1996 provides the answer.
Section 230 of the Communications Decency Act3 states that providers and users of interactive computer services are not liable for posting information provided by other sources.
For the student media, this could mean broad immunity for content that is created by non-staff members. But there are some pitfalls to avoid. If students add content to material provided by others or rewrite sentences as part of the editing process, for example, a court could conclude that the student newspaper helped to “create” the information. In that case, the publication could be liable. Student journalists and school administrators should be aware of the protection Section 230 may offer — as well as its limits — when they venture into cyberspace.
Before the CDA, courts applied traditional libel law to the Internet. Internet service providers could be held liable for what they disseminated as either “publishers” or “distributors,” depending on their level of editorial control.
Publisher liability applies to reporters, authors, editors and publishers, as well as the publications they work for. Publishers are fully liable for defamation because they are “creatively involved in the process of publication”4 and it is fair to assume they know about any libelous information.
Distributor liability applies to commercial printers, bookstores, libraries and news vendors. Distributors are liable only if they know or have reason to know that information is defamatory. They are not required to independently investigate material before they distribute it. The lower standard is justified by distributors’ lack of control over the information they disseminate.5
Courts initially treated ISPs that monitored their message boards as publishers, making them liable for defamatory content posted to their services.6 By contrast, ISPs that took no steps to screen messages were treated as distributors and were not liable.7
Given these rulings, members of Congress feared many ISPs would choose not to monitor their discussion forums. Section 230 was designed to shield ISPs who wanted to screen offensive content.8 Section 230(c) states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”9 In addition, the law states, “no provider or user of an interactive computer service shall be liable on account of . . . any action voluntarily taken in good faith to restrict access to . . . material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected. . .”10
Generally, courts have taken a broad view of Section 230’s protections. The Fourth Circuit set the tone in Zeran v. America Online, Inc., the first case to interpret the law.11 That court found Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”12
Over the next decade, this analysis was almost universally adopted by other courts.13 Some have since backed away from Zeran’s broadest language. But at a minimum, almost all courts agree Section 230 applies when a suit (1) against a provider or user of an interactive computer service (2) would treat the defendant as a publisher or speaker of information provided by another information content provider.14
Who is protected
Section 230 protection applies to any “provider or user of an interactive computer service,”15 specifically including services provided by libraries and educational institutions.16
The earliest cases applying Section 230 dealt with large ISPs, such as AOL, that directly connected customers to the Internet; those companies clearly are covered as “providers.” Courts have expanded their definition of “providers” to include Web sites and e-mail lists, whether they are operated by large corporations or by individuals.17
Moreover, the statute also provides full protection to any user of an interactive computer service who merely re-posts content from someone else.18 By definition, any Web site uses such services to connect with readers. Thus, any Web site or e-mail list — including online student publications — should be covered.
Scope of protection
Even for those covered by Section 230, the law’s protection has two main limits. It protects only against certain types of legal claims, and it applies only to content provided by someone other than the person claiming immunity.
Section 230 protects against any cause of action — however phrased — that treats service providers or users as publishers of content provided by someone else.19 This includes defamation claims,20 as well as claims such as fraud, negligence and false light.21 The immunity does not protect against claims involving federal criminal statutes or any intellectual property violations, including copyright.22
The statute shields service providers and users from both publisher and distributor liability.23 In Zeran, for instance, the plaintiff sued AOL after an anonymous user posted messages advertising “offensive” T-shirts regarding the 1995 Oklahoma City bombing. The phony messages urged readers to call the plaintiff to order the shirts, and the plaintiff sued for negligence after receiving many insulting calls and death threats. The plaintiff argued AOL should be held liable as a distributor because it failed to remove the defamatory messages quickly enough after the plaintiff complained.
But the court held that, under Section 230, “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”24 If AOL were held liable as a distributor, the company would have to investigate every complaint of a defamatory posting and decide whether to publish or remove the statement, thrusting it into a publisher’s role.25 Given the volume of messages, ISPs would face an “impossible burden,” likely prompting them to avoid screening messages and to remove messages upon complaint (whether legitimate or not) rather than risk liability.26 This is what Section 230 was designed to prevent. Therefore, the court found Section 230 bars lawsuits based on both distributor and publisher liability.
Section 230 does not apply, however, to claims challenging conduct beyond a publisher’s traditional editorial functions. For instance, although service providers cannot be liable for refusing to remove third-party content, they might be liable under contract law if they promise to remove content and then fail to do so.27 In Barnes v. Yahoo!, Barnes sued Yahoo for failing to remove a prank personal ad that caused Barnes to receive many unwelcome overtures.28 Barnes alleged a Yahoo official promised to “personally walk [Barnes’ complaints] over to the division responsible for stopping unauthorized profiles and they would take care of it.” When, two months later, Yahoo still had not acted, Barnes filed suit.
The Ninth Circuit said Section 230 did not affect Barnes’ breach-of-contract claim because the claim alleged conduct — Yahoo’s decision to make and then break a promise — that was separate from Yahoo’s role as a publisher.31 The court emphasized, though, that “a general monitoring policy, or even an attempt to help a particular person” would not create liability, and that Web sites can avoid liability by stating that they are not making any legally binding promises.32
Source of challenged content
The other limitation on Section 230 immunity is that it applies only when “another information content provider” wrote the relevant material. This distinction can become complicated in practice.
Web site owners will always be responsible for content they create. For instance, the court in Anthony v. Yahoo! said Yahoo had no immunity against a claim that the company itself created fake profiles on its dating service.33 The court also said Yahoo could be liable if it deliberately misrepresented real but expired profiles as profiles of active users.34 In effect, it would be Yahoo, not the original users, “creating” the misleading implication that users still were active.
At the other extreme, service providers are not liable merely for passing along someone else’s expression. In Zeran, the message writer was an anonymous user, who was clearly a distinct entity from AOL. A federal district court in Blumenthal v. Drudge ruled Section 230 immunity also protected AOL from claims arising out of Matt Drudge’s online gossip column, even though AOL promoted the column and paid Drudge to make it available to AOL customers. The court found Congress chose to provide immunity even when the ISP “has an active, even aggressive role in making available content provided by others.”35
However, a service provider might be liable if it is responsible for the “creation or development” of content “in whole or in part.”36
The precise point where a service provider or user becomes a partial content creator has yet to be determined. In Ben Ezra, Weinstein & Co. v. America Online, Inc.,37 the Tenth Circuit made clear a service provider will not lose immunity simply by deleting inaccurate information (in that case, erroneous stock data).
Section 230 also protects providers who select or highlight outside content. In Batzel v. Smith, for instance, Ton Cremers, the operator of an e-mail list, distributed a message he received from Bob Smith.38 The Ninth Circuit ruled Cremers could not be held liable merely for distributing Smith’s allegedly defamatory message, even though Cremers had selected Smith’s message for distribution and had made minor edits to it.
However, the court said Cremers could be liable if — as Smith claimed — Smith had not intended his e-mail for public distribution. In that case, the e-mail would not qualify under Section 230 as content “provided” by another party unless Cremers reasonably believed Smith intended the e-mail for online publication.40 Such ambiguities are unlikely to arise in the context of a student newspaper’s comment boards or other areas where it is clear users are submitting content for public display.
The Ninth Circuit provided the most detailed discussion so far of what might constitute “partial” content creation in Fair Housing Council v. Roommates.com, LLC.41 The court said an online roommate-matching service was not immune from claims that its Web site violated fair-housing laws. The site required users to answer questions about their sex, sexual orientation and whether they have children or are willing to live with children. The court said the site could not claim immunity from claims that merely posing those questions violated fair-housing laws (regardless of users’ answers) because the site was the “content provider” of the questions.44
But the court went further, holding the site also could be liable for displaying the answers to these questions on users’ profile pages and using the answers in the site’s search system to screen which listings users saw. The court said the site partially developed the content by posing the allegedly illegal questions and requiring users to answer them, thus “contribut[ing] materially to the alleged illegality of the conduct.”46
In contrast, the court said, merely editing user content for purposes such as shortening, correcting spelling or removing profanity generally would not jeopardize a service provider’s immunity. The only exception would be if the editors contributed to a statement’s illegality, such as by removing the word “not” from the statement “[Name] did not steal the artwork.”48
The court also said the site was not liable for discriminatory statements users wrote in the open-ended “Additional Comments” portion of their profiles. Speaking directly to Web site operators, the court wrote: “If you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune.”50
Although no published decisions have directly applied Section 230 to student media, it now is well established that the statute covers virtually all Web sites.
To qualify for immunity, student media must prove the content at issue was created by an entity distinct from the publication. Obviously, student media are “content providers” for material they create themselves. For example, student media will be liable for any defamatory content in stories written by student reporters, regardless of whether those stories appear in print or online editions.
On the other hand, publications will not be liable for user-posted comments or similar materials, so long as it is reasonably clear to users that they are submitting material for publication. Other content provided by non-staff members — such as letters to the editor or syndicated material — might also be protected, even though print publications have always been treated as publishers of all such content.51
However, no court actually has ruled that online publications are protected from liability based on such materials. Unlike message-board operators, who might face an overwhelming volume of user postings, publications routinely screen materials like letters to the editor. In addition, it is common practice for newspaper staff to require letter writers to sign their names and provide a phone number so the author’s identity can be verified before the letter is printed. If similar practices are used when publishing online letters to the editor, immunity might not apply.
Regardless, students must be careful not to cross the line between host and content provider. Students should avoid rewriting or adding content to material provided by others. A student outlet also should not ask readers to provide material the publication knows is illegal or defamatory.52 In both cases, a court could find the publication liable as a partial content creator or developer.
While legal protection is strongest when students avoid making substantive revisions to material provided by others, there are many good reasons why students may hesitate to give up editorial control. Editors are supposed to fix sloppy writing, correct errors and fill in gaps in reporting. If editors stop serving these functions in an effort to avoid liability, the quality of student media may suffer. Still, when it comes to content provided by others, editors who refrain from adding content or rewriting sentences will have stronger protection under the law. And student editors who are uncomfortable with a post — for whatever reason — will be on much safer ground if they simply remove the post entirely.
Georgetown University law student Michael Beder served as the SPLC’s summer law clerk
-SPLC Legal Consultant Mike Hiestand contributed to this analysis.