Section 230: What you need to know
Part Two: Why is Section 230 under attack?
The reasons why Section 230 is under attack all deal with Section 230’s legal permission to moderate – or not moderate – content on platforms without legal liability. Hence, they deal mainly with social media type companies and content posted by third parties (i.e. people who use the platform and aren’t part of that company).
Here’s a closer look at the complaints.
If you watch the news, you may have heard two main arguments that seem to fall along political party lines. I’ll describe them below according to the party they tend to be associated with, though I think the fundamental arguments could be made by either party. Lastly, I’m adding a third category of complaints that I believe is equally significant, arguably more so, and yet often remains under the radar.
Republicans: Republicans assert that conservative political views (whether expressed by a member of Congress or an ordinary person) are censored by these companies. Meaning, they are not as widely distributed across the platforms as more liberal views are. And that, especially during the Trump presidential era, politically conservative posts were more likely to be assessed [by the platform or a platform’s “fact checker”] to be false, inflammatory or to contain misleading statements/misinformation, and then taken down, blocked or marked as such. They believe that if you want to enjoy federal immunity for third-party online content, you shouldn’t be able to target and ban only certain types of viewpoints. Rather, moderation should be applied equally across the board.
Democrats: Democrats claim that these companies allow content to remain on their platforms that should be removed. They are upset, for example, that these companies’ content includes misinformation campaigns and/or frauds perpetuated on the public (think fake COVID tests and/or interference in government elections) and they have done little to identify this type of content and remove it. They believe that companies have a responsibility for finding and removing some types of particularly harmful content and, if they don’t, they shouldn’t have Section 230 immunity.
Third Category - Prohibited Content, Illegal Speech + New Harms: The third category is a collection of complaints that cover content prohibited by criminal law or civil law. It also includes content that has created new problems and doesn’t fit neatly into either criminal or civil law regulations. I’m dividing them into my own assigned categories to make it easier to understand. First, this category includes “Prohibited Content,” which I’d define as content that is prohibited under federal/state laws and carries criminal penalties. This type of content can range from illegally selling drugs over the internet to misrepresenting that you are a financial broker. Each is a highly regulated activity governed by federal (and sometimes state) law. If you violate the rules, you may be fined or even go to jail.
Section 230 does not grant immunity for criminal violations. It specifically exempts activity prohibited by federal criminal statutes and references preservation of similar state laws.
Next is Illegal Speech, which is speech prohibited by civil law. Unlike Prohibited Content, Illegal Speech has to do with the rights between two individuals and/or entities, such as slander and defamation. Company A’s competitor tries to hurt their business by making untrue statements about them online. Or Jane Barnes has someone post personal pictures of her online with damaging, untrue statements. Both Company A and Jane Barnes have a civil right to pursue the other party for speech prohibited and considered illegal under civil law.
Section 230 provides immunity for almost all forms of Illegal Speech, with one exception: intellectual property violations. Section 230 does not grant immunity for IP claims, such as copyright or trademark infringement.
Last is a group of content that I’m calling New Harms. These are harms unique to the internet. It’s not clear whether they should be a criminal offense, carry civil liability or both. There are few, if any, existing laws that address them. Or, if there are laws, it’s unclear how to apply them to this new set of facts. Most cases have come to light over the last handful of years. Take, for instance, fake conspiracy theories – by this I mean conspiracy theories where the perpetrator knew what they were saying was not true. An example of the harm this can cause is the case of Maatje Benassi, a US Army reservist who was wrongly accused of being “patient zero” for COVID. She was not patient zero. The couple’s personal information (such as their home address) was widely shared, the content was not taken down, and her life and her husband’s were repeatedly threatened and forever upended as a result. This situation is clearly wrong. But who is liable, and under what laws?
As you can see Section 230 doesn’t stand alone. It is part of a bigger system of regulations.
Let’s review how they intersect with Section 230.
First Amendment. It’s important to know that there is no First Amendment right of free speech that is enforceable against a private company. The First Amendment only protects you from having the government interfere with your free speech. The government can’t do it. But you can’t assert this right against a Twitter or Facebook.
Even without Section 230, companies have a right to determine what kind of content they want to promote on their site so long as they don’t violate other laws. In fact, arguably, they have a First Amendment right to curate their website with the content/views they want without government interference.
The short answer, then, is that the First Amendment does not apply to a complaint that a private party is blocking or censoring viewpoints online. Often, we assume it might apply in these situations, since freedom of speech is a deeply cherished American ideal. This doesn’t mean that the argument put forth by Republicans lacks merit. I’m going to address their argument in Part Three.
Important Aside: For the remainder of this post, I’m going to focus on content that is not moderated or taken down. I want to explain the breadth of what it includes and provide an understanding of the complexities that exist in the interplay of laws with Section 230.
I am also going to narrow the types of companies (also known as service providers) that I’m using for examples. The easy ones to think about are Facebook and Twitter-types, i.e. social media platforms. But there are other service providers who operate on the periphery of Section 230 and are part of the ecosystem of the internet. Some of these “other” providers are stuck trying to navigate the complex system I’m going to describe. This includes hosting providers who host websites that may be protected by Section 230 and who, because they provide hosting services, must regularly make decisions on whether they should (or must) require a hosting customer to take content off their website. I also argue that, by necessity, these “other” providers may also include (or we should at least consider) online ads (from ad networks), SMS/texts (services from mobile providers) and possibly email services, inasmuch as they are often part of the overall abuse that is perpetuated. This last group is significant because content on their services has increasingly become an integral part of the picture that must be considered when evaluating online content.
This post will primarily consider companies which allow third parties to post content and for whom Section 230 directly applies. And, it will lightly consider hosting companies, a party that is regularly in the crosshairs of having to make decisions regarding content posted on their customers’ websites.
Illegal Speech. Section 230 doesn’t change the legality of Illegal Speech. Illegal speech is still illegal. If someone is defaming or slandering you – your right to pursue a legal action against that person for defamation or slander still exists. Or, if you make a slanderous comment about your brother-in-law or ex-boss, you remain liable to him or her.
Here is what changes with Illegal Speech under Section 230. The distributor (or redistributor) of content (i.e. Twitter, Facebook, Instagram) is no longer liable for your slanderous statements or Illegal Speech. These are considered neutral forums or vehicles that we can use to post our speech because Section 230 gives them immunity. Since Section 230 provides immunity to these online companies, this means they don’t have to review or moderate everything that is posted or shared using their services. The person who remains liable is you.
As mentioned earlier, Section 230 does not give immunity for intellectual property (IP) violations. What this means in the online world, though, is not clear, because things like trademark violations often depend on many factors. There is one exception for governing IP violations on the internet, and that is copyright. Copyright is an intellectual property right that governs literary, artistic, educational, or musical forms of content. The Digital Millennium Copyright Act (DMCA), passed in 1998, was created specifically to provide a mechanism to protect this kind of content on the internet. It provides guidelines and specific procedures for submitting a “Takedown Request” (meaning a request to take down the copyrighted content) to an online content company, and spells out what that company must do to retain immunity from liability. You’d use the DMCA if someone took your images and put them on their website without your permission. Or you’d use it if you wrote something (whether online or in print) and someone used it online without your permission. It has a narrow application for copyright alone and only applies to the internet.
Prohibited Content. Federal criminal laws in existence apply to the internet. Section 230 specifically does not impact federal criminal law nor is it meant to impact state criminal laws. Prohibited Content is prohibited, and you do not have immunity just because someone else posted it.
Here are some of the laws that are often at play for Prohibited Content online.
What happens when Illegal Speech, Prohibited Content or content that represents New Harms is online?
Well, it depends.
Self-Monitoring. Some of the large online tech companies have some basic self-monitoring protocols in place for third party content. This is almost always for Prohibited Content where they could be directly held accountable. Often, these protocols are part of a larger process that includes the intake of external complaints, which is where the majority of questionable content is identified.
External Complaints. How these are handled depends on the type of complaint.
Some violations are easy to identify, and action can be taken swiftly, such as with images of child exploitation. Many, though, aren’t as obvious, because the violation may depend on facts that must be verified and companies lack the means to verify them, or the rules for what constitutes Prohibited Content are unclear. Thus, processes vary for handling complaints that claim content is Prohibited Content when the violation isn’t obvious. This results in confusion and frustration: an everyday person submits a complaint regarding content they believe is Prohibited, the company can’t verify it, the content remains, and both sides are frustrated. Companies receive well-meaning and respectful notices from law enforcement about questionable content, but law enforcement only asks them to investigate and remove the content if it violates laws or their Terms of Service. Companies are rarely directly instructed by law enforcement or governmental agencies that the content in question is Prohibited Content and must be taken down.
For Illegal Speech, the person or business who is legitimately slandered must pursue claims directly against the party who actually made and posted the statements. They must obtain a judgment ruling the statement slanderous. AND they also must obtain, with that judgment, a court order instructing the internet company to remove the content. Companies usually will not remove content based on a judgment alone. This is not nefarious behavior; rather, it’s usually because a judgment is based on very specific facts that aren’t identical from one situation to the next. So, to make sure they are doing the right thing, companies want a court order instructing them specifically to take it down, so that their actions are under a court’s protection and they remain neutral. This gets complicated if the person who slandered you posts a similar statement on another platform, because you usually need to get a revised court order instructing that company to remove it.
Herein we can begin to see the problem.
Let’s isolate the areas where harm results.
Prohibited Content: harm happens when: 1) Prohibited Content is unreported / undetected and remains online; 2) Prohibited Content is reported but there aren’t clear legal guidelines on how to verify that the content is indeed Prohibited Content and so the content is not removed promptly.
Illegal Speech: remember that these are harms to an individual or entity and are based in civil law. They include fraudulent impersonation of a company and/or false claims about people. Usually it is the individual or entity that suffers the harm.
The party who is claiming the content is Illegal Speech almost always must go to a court of law to determine their rights. Their rights are determined vis-à-vis the person who posted the content and, thereafter, they seek to enforce those rights against an online company to have the content removed.
Resolution is neither quick nor inexpensive. It’s a process built by default in the absence of clearer guidelines. From my direct experience, most companies feel ill-equipped to know what to do, so they attempt to create a responsible process for these kinds of complaints that defers to defined legislation and the courts, which are the appropriate triers of fact and adjudicators of liability. Nonetheless, it’s long and complicated. Moreover, the offended party often gets stuck in a never-ending legal battle across multiple online companies because the perpetrator continues to post the content in new places. All the while, the harm continues. We can do better.
New Harms: we know the content causes harm but see no clear regulation on how to deal with it. This includes hate speech, content inciting violence, false news and claims (think COVID cures), publication of personal information without permission (like a phone number or driver’s license), and manipulation of the American people in the governing of the US (for example, misrepresentation of identity: claiming to be a US citizen when the poster is in fact a bot or a group of terrorists).
Companies usually handle complaints for this type of content by: 1) evaluating whether they can find a legal basis on which to take down the content – if personal information has been posted without permission, is there a federal, state or international law that applies and prohibits it?; 2) evaluating the content against their Terms of Service (ToS) and/or Acceptable Use Policies (AUP). Interestingly, ToS/AUPs (though you may hate how long and detailed they are) are a tool that online companies use to “manage the grey area” where laws have not been enacted. For example, many include “incitement of violence” as prohibited content. Some include “hate speech”. Neither is prohibited by law. But if the company’s ToS/AUP says that content inciting violence is prohibited, then they can remove the content on that basis. When neither of these applies, as with a lot of New Harms, they go with 3) instructing the reporter to take the matter to a court of law for adjudication.
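The three-step flow above can be sketched as a simple decision function. This is only an illustrative model under my own assumptions – the function names, the report fields (`cited_law`, `tos_category`) and the dispositions are hypothetical and do not represent any company’s actual process.

```python
# Illustrative sketch of the three-step complaint-handling flow described
# above. All names and categories are hypothetical, not any real company's
# internal process.

def violates_known_law(report: dict) -> bool:
    # Step 1 check: does a cited federal/state/international law clearly
    # apply and prohibit the content? (Hypothetical field: "cited_law".)
    return report.get("cited_law") is not None

def violates_terms_of_service(report: dict) -> bool:
    # Step 2 check: does the content fall into a ToS/AUP category the
    # company prohibits, e.g. "incitement of violence" or "hate speech"?
    return report.get("tos_category") is not None

def handle_complaint(report: dict) -> str:
    """Return a disposition for a reported piece of content."""
    if violates_known_law(report):
        return "remove (legal basis)"
    if violates_terms_of_service(report):
        return "remove (ToS/AUP basis)"
    # Step 3: neither a law nor the ToS/AUP applies -- as with many
    # New Harms, the reporter is referred to a court of law.
    return "refer reporter to a court of law"
```

The ordering matters: a legal basis is checked first because it is the strongest ground for removal, with the ToS/AUP serving as the tool for “managing the grey area” where no law has been enacted.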
These are all serious issues to solve.
I deal with all of these issues firsthand. I serve as the Data Protection Officer of a large, global domain and hosting provider, and I also oversee their Abuse department. I work directly with federal agencies such as the FBI, FTC, FDA and DOJ, as well as state Attorneys General, to evaluate questionable content and verify whether it violates criminal laws. I am regularly in the trenches with companies to help them determine the right policies, procedures and/or outcomes that will comply with the law, protect ideals such as free speech and let them be good citizens of the internet. I’m proud to say that my client, and others like them, have done a darn good job of finding balance and answers where there is no clear guidance.
But we can’t lay the responsibility of solving the above issues at the feet of companies or law enforcement when laws are unclear or don’t exist. Besides being unfair, it will result in inconsistency from one company to another. In truth, it offers no true resolution at all.
As I’ll describe in Part Three, the way forward is not clear-cut or easy, but it is possible. Section 230 should be amended to catch up with the times and the evolution of the internet. In doing so, as a nation, we must consider other fundamental rights that are the bedrock of our democracy, such as due process and free speech. Indeed, Section 230 is considered the bedrock of free speech on the internet, aptly dubbed by the EFF “the most important law protecting internet speech.”
As citizens and residents of this great country, we must take the time to be informed and to be diligent in what we consider. We must ensure no one resorts to fast decisions that make good sound bites for the press. Amending Section 230 must have the same deliberation and care required by any action that could curb (or destroy) our cherished liberty of free speech. If we do it right, we have the opportunity to create a bedrock of fair, balanced and integrated guidelines for the internet’s next leap forward.
I look forward to discussing how in Part Three.
Section 230: What you need to know
No doubt you’ve heard complaints about Section 230. In this three-part series, my goal is to explain the mission behind Section 230 and the current complaints from our Republican and Democratic representatives as well as everyday citizens, and, lastly, to make sure we collectively understand the reasons to tread carefully with Section 230 reform and the questions that should be answered to guide its amendment.
Part One: What is Section 230? Why was it created and what does it do?
In Part One, I want to share the “why” behind the creation of Section 230. I also want to share how it evolved to include some additional protections and what it means today.
Section 230 (officially 47 U.S. Code § 230) is an amendment to the Communications Decency Act (CDA) that was passed into law in 1996. The CDA was originally viewed as restricting free speech on the internet. And, indeed, many of its original provisions were struck down on that basis. The exception is Section 230, which remains intact today.
What led to Section 230’s enactment?
At the time Section 230 was under consideration, the internet was in its infancy. The first version of what we know as today’s internet was created in 1989. It had yet to be adopted broadly, though it took a couple of major steps in 1995, when Amazon, Yahoo, eBay and Internet Explorer all launched. Around that time, Congress conducted research that recognized the potential of the internet and based Section 230’s enactment on the following findings [original language, emphasis added]:
1. The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources.
2. These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
3. The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
4. The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
5. Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
In other words, Congress proactively recognized the potential of the internet, and of the services based on it, to benefit Americans. That potential included broad-reaching options for educational and informational purposes. Also, significantly, a primary purpose of Section 230 was to give Americans a forum for a diversity of political discourse and minority voices. It’s meant to empower the sharing of information and opinions with limited government regulation.
Section 230 has specific goals [original language, emphasis added]:
1. to promote the continued development of the Internet and other interactive computer services and other interactive media;
2. to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
3. to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
4. to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
5. to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
To understand the policy goals, you need to put them in context of the times.
Before 1996, none of the big tech companies were big. In fact, almost none of them existed. The tech leaders who are subject to Section 230 and under fire today were only created after the passage of Section 230. Check it out (company + year created):
In 1996, the internet had not stepped into its potential. Frankly, if, like me, you were in the professional sphere at that time, you’ll remember that no one – companies, entrepreneurs or everyday people – knew how to use it.
It’s obvious from the list of successful companies created after Section 230 (and these are just the brand names you’d know) why Section 230 is credited as the basis for the growth and expansion of the internet. Let’s clarify – of the internet. And, as you may recognize from the companies listed, it created the opportunity for US-based entrepreneurial growth on the internet.
What Does Section 230 Do and How Did It Support Internet Expansion?
Prior to Section 230, there weren’t a lot of websites. Amazon and eBay were fledglings focused on online sales. Yahoo and Internet Explorer were innovations whose promise was to help us find stuff on the internet.
None of these were content companies in the sense of having news, information, and/or the sharing of opinions/thoughts by third parties directly onto their website. Companies who wanted to host this type of content, by necessity, generally fell into two camps: 1) they created websites to share content directly about their own company; or 2) they were news organizations that were subject to the standard review process that governs news. Why? Because each company was directly liable for the content on their sites – regardless of who posted it.
Here’s how it would work. Let’s say that I wanted to create a website where consumers could share reviews of their experiences at restaurants. Pre-Section 230, as the owner of the website, I would be personally liable for any reviews posted. Liability could be for statements viewed as slanderous, defaming, illegal use of images… and on and on. I could be liable for damages awarded for any successful claim, and I’d have to pay my own legal costs. Moreover, even if a claim wasn’t successful, it was highly probable that I’d spend unknown attorney fees fending off potential claims. In sum, I faced potential liability costs whether 1) a claim was legally valid, or 2) the reviews were accurate but the complainant aggressively used a lawsuit to pressure a response. Either way, I’d incur legal fees to defend my website and might also have to pay damages.
In everyday language – You Get Sued.
You were sued because someone (not you) said something objectionable. You may have had no opportunity to consider or review it. But, because you created a place for voices and posts, you were legally on the hook – regardless of whether you had the tools or information to evaluate the legality of the post. Moreover, there was zero law on the topic that could provide a respite while you reasonably figured it out.
I don’t know about you – but, under those terms, I’d never have created something that allowed third-party content and opened me up to unknown liability. The result, unsurprisingly, was that, like me, no one wanted to allow third-party posts, because they weren’t sure if they were legally exposed: 1) they didn’t know the veracity of the content first-hand; 2) they didn’t know if the content had legal liability implications; and 3) they had no way of knowing whether the content would be controversial and simply stoke litigation (regardless of whether the content was legally OK).
So, while the way someone exercised their rights against potentially inappropriate content remained straightforward, this also created a disincentive for free speech and discourse on the internet. There was no space for minority and/or unheard voices, even my own, because the internet is so public and the ability to sue so easy.
Section 230 changed everything.
With the passage of Section 230, freedom of expression and innovation on the internet was given (almost) free rein.
The key language of Section 230 states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." See 47 U.S.C. § 230(c)(1). In a seminal 1997 case, Zeran v. America Online, Inc., a federal appeals court interpreted this language:
“By its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Specifically, § 230 precludes courts from entertaining claims that would place a computer service provider in a publisher's role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”
The court also noted that Section 230 was enacted in response to an earlier, pre-Section 230 case that held a service provider liable for third-party postings as if it were the “publisher” (i.e., as if the provider itself had made the posts). See Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). In Stratton, Prodigy had advertised that it would moderate content posted on its bulletin boards, and Prodigy had a history of actively screening and editing posts. On this basis, the court held it to the standard of an original publisher. The decision created a disincentive for companies to moderate content posted on their sites. Section 230’s purposes included the goal of removing any disincentive to self-regulate, which the court recognized in Zeran.
Zeran ensured that service providers can permit or deny the posting of third-party content and be immune from liability. This didn’t change existing laws. Nor was Section 230 intended to do so. Service providers are not immune from the consequences of their own posts. If they directly post defamatory statements, they are liable for their own acts. Equally, third parties are still fully liable for posts that they make.
What’s notable and significant about the Zeran decision is two-fold. First, it seeks to ensure Section 230’s purpose of encouraging self-regulation without unintended consequences. Second, it tightly links liability solely to the person who committed the overt, violative act – i.e. the third party who made the post.
Post-Zeran in 1997, here is how the Section 230 / internet landscape shaped up for service providers.
Internet Service Providers are:
This looks pretty great.
Jenn Suarez, CEO