SECTION 230: WHAT YOU NEED TO KNOW
Part Three, Conclusion: 230 Reform –
What You Could Lose, What You Could Gain: Solutions
We can do this.
We have a framework to use for building solutions. Now, let’s roll up our sleeves and go through the problems identified.
Content Removal. Here, we are looking at content that should be evaluated for removal by a company.
-Prohibited Content. Remember, this is content that is prohibited by federal criminal law. The main problem here is that content goes undetected and/or takes too long to identify and remove. A major cause of this problem is poorly written laws and/or laws that do not account for how a company would verify that the content is prohibited. In other words, the problem lies in the language of the laws and in implementation considerations.
To prevent online harms effectively, it’s imperative that lawmakers define violations in terms as concrete as possible – this is essential for easy identification and enforcement. Consider what an online company would need in order to self-screen. Make sure the language used is unambiguous. Consider whether the Prohibited Content is a type that, with appropriate language, could be identified via a technological solution. Where algorithmic screening/blocking is not possible, provide enough detail for an online company to self-assess manually. Empower frontline federal, state and local law enforcement agents to state clearly to online companies when identified content violates criminal law, and give those agents the authority to direct online companies on what they must do.
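To make concrete what “algorithmic screening” can look like when a law defines a violation precisely, here is a minimal sketch of exact-hash matching against a hypothetical list of known prohibited files. The hash list, function name, and logic are illustrative assumptions, not any real company’s or clearinghouse’s system; production systems typically use perceptual hashes (such as PhotoDNA) rather than exact digests, so that minor edits to a file don’t defeat the match.

```python
import hashlib

# Hypothetical list of digests of known prohibited files. In this sketch
# the single entry is the SHA-256 digest of the ASCII string "test".
KNOWN_PROHIBITED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_upload(data: bytes) -> bool:
    """Return True if an uploaded file matches a known prohibited item."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_PROHIBITED_HASHES

print(screen_upload(b"test"))      # matches the known digest -> True
print(screen_upload(b"harmless"))  # no match -> False
```

The point of the sketch is the dependency the text describes: a company can only self-screen like this when the legal definition of the violation maps to something a machine can check.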
-Illegal Speech. This is content governed by civil law; it does not include intellectual property. Section 230’s primary goal is to grant an online company immunity for third-party content that creates civil liability. A victim’s rights remain intact against the person who posted the content; however, the online company is treated as a neutral party.
The main problem with Illegal Speech is the complicated and, sometimes, herculean process a victim must go through to get online content removed. The cause of the problem is that civil laws have not been updated to provide a process by which a victim can efficiently and effectively protect themselves when the violation is online. It’s the lack of process that fails them (i.e. an implementation breakdown).
I know this firsthand. Let me describe how it works and what needs to happen. Civil law violations are fact-based determinations best made by a court of law. Indeed, a victim must go to court to obtain a legal adjudication on whether online content creates civil liability. The potential victim would sue the party who made the statement (for our purposes, the third party who posted it) and seek a ruling that the content is illegal. At the same time, the third party who posted the content seeks to defend themselves. Adjudication through a court affords each party due process before a neutral expert on the law related to the civil claim (the court). This is the way it worked prior to online growth, and the way it still works. But the adjudication process has not evolved to account for the complexities that can occur with online Illegal Speech, which is the underlying cause of the problem.
Let me illustrate what can happen using an example. Assume that someone posted online a lie about you stating that you are a cheater, pedophile or embezzler and they include your picture/home address in their post on a very popular website (like Facebook, Twitter, etc.). You likely have civil law rights that you can enforce against this person who posted this content as it may constitute Illegal Speech. But, your first goal will almost certainly be to get the false content down before it creates havoc in your life.
Today, at a basic level, this requires you to obtain a judgment against a person determining that the specific content is Illegal Speech. Then, you’d need a court order accompanying the judgment that directs the online company to remove it.
Sound straightforward? Actually, here is where it all falls apart.
Jurisdiction Challenges. Navigating this process with online content requires a strategic understanding of the online world and of what companies will need to ensure they are removing the right content. First, you need to understand where the parties are located. Parties to consider include both the defendant (who posted the content) and the online company where the content resides. Many of these civil laws, such as defamation and slander, are state-based laws. So, you also need to determine whether the state has jurisdiction over the person who posted the content. If the person resides outside the state, will the court allow you to name them as a defendant? Some state laws have a “long arm” clause written into them that grants a court jurisdiction over someone who doesn’t live or do business there. These clauses have been increasingly applied to online posts that constitute Illegal Speech.
Jurisdiction questions help you determine whether you can obtain a judgment against the person who posted the content, which you must have if you seek to force the removal of content. But the judgment is only the first step. You also need to determine whether, if you win the case, the online company will comply with a court order to remove the online content. One factor is whether the online company is US-based and, thus, will comply with a US court order.
Naming the right defendant. Let’s assume that the company is subject to US laws and there are no state issues. When you file your lawsuit, you need to name a defendant, and that needs to be the person who posted the content. If a name accompanies the post, you might choose to use it. When the name is unknown, some plaintiffs name who they think it is, or they name “John Doe” and reference the posted content. The risk at this stage is naming the wrong defendant. For instance, say the name with the post is Trey Smith (or you think it’s Trey Smith). If you get a judgment / court order against a person named Trey Smith specifically, that name needs to match the name in the online company’s records. A court order requiring the removal of content posted specifically by Trey Smith will not be executed if the poster is not named Trey Smith.
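The brittleness described above comes down to a literal comparison between the name in the order and the name in the company’s account records. This tiny sketch (the function and the matching rule are invented for illustration; no real company’s process is implied) shows how an order naming “Trey Smith” simply fails to attach when the account record holds a different string:

```python
def order_matches_records(order_name: str, account_name: str) -> bool:
    """Can an order naming a specific poster be matched to an account?
    Illustrative rule: case-insensitive exact match on the stored name."""
    return order_name.strip().lower() == account_name.strip().lower()

print(order_matches_records("Trey Smith", "trey smith"))    # order attaches
print(order_matches_records("Trey Smith", "T. Smith III"))  # order fails
```

A mismatch here is exactly the scenario in the text: the judgment is valid, but the removal cannot be executed against the wrong name.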
Defects in the judgment / court order. Let’s assume that you’ve named the defendant sufficiently. The judgment and related court order need to describe the content to be removed with sufficient specificity. Let’s assume they contain examples you provided of the specific content as it’s posted online, and that you’ve named the website where it’s located. Let’s also assume that the online company will comply with any US-based court order. If the specific content noted in the court order remains online in the same form, you are going to succeed in getting the company to remove the false statements about you. In reality, though, it’s usually not this easy.
What happens if the content was changed and no longer clearly matches what you have in your judgment? What if the content is similar but now under someone else’s name (i.e., the defendant had another party post it and removed their own post)? What if the content has now been put on other websites and you need to get it removed from those as well? Any of these scenarios creates the risk that you may be required to have the court amend your judgment or go back to court anew.
These current processes exist by default. By necessity, they were cobbled together as online companies tried to create a fair way to deal with Illegal Speech. The laws are valid. The right party is being held accountable. However, jurisdiction issues and the enforcement processes have not changed with the evolution of how online harms now occur.
For Illegal Speech, and the problems of protection and enforcement in the online world, I propose the following as components to a possible solution.
State level. As mentioned earlier, all laws must be on the table. At the state level, states could create jurisdictional exceptions for online harms caused by content posted by third parties on websites protected by Section 230. These exceptions could provide extra-territorial reach for the sole purpose of adjudication and removal of the third-party content. They could, but would not have to, include damages and monetary awards, thereby significantly reducing liability risk in exchange for jurisdiction.
Federal level. Here is a perfect place for a Section 230 amendment. An amendment could provide that any company that enjoys Section 230 protection is required to accept a state judgment regarding civil harms for content posted on its site and to remove the content. In addition, a set of guidelines could be published to help victims with the language to use in judgments and court orders. Thus, as part of their immunity, online companies would provide guidance and reduce roadblocks for those seeking to enforce a civil judgment against a third party posting on their site.
Another potential Section 230 amendment could require content arbitration clauses for anyone who wants to post on the online company’s website. In other words, if you want to post on ABC’s website, you agree to submit to US-based arbitration over your content if a third party claims their rights under civil law have been violated. The arbitration panel could determine whether the content in question constitutes Illegal Speech under the applicable state law, and its authority could be limited to ordering that the specified content be removed from the website of the online company in question. The panel could not award monetary damages, and the losing party could appeal to a federal court.
Here’s Our Work + What Matters.
Content Removal + Censor/Removal Restrictions. The next group is the collection of New Harms. It includes situations where people believe there is a responsibility to remove content, and situations where people believe a Section 230-protected company may be restricted in what/how it removes, or subject to guidelines for removal.
-New Harms. This is online content where there is general agreement that it creates harm, but where no laws clearly apply. Consider again: hate speech, content inciting violence, publication of your personal information without your permission (like your phone number or driver’s license), false news and claims (think COVID cures), and manipulation of the American people in the governing of the US (for example: election manipulation by foreign entities, or known lies that are perpetuated and that impact the US broadly). Of course, this list is not exhaustive.
In all cases, legislative guidance is required. Companies protected by Section 230 should not be held responsible for situations where our elected leaders have not legislated. Nor should their protection under Section 230 be threatened.
Below are the questions you and elected leaders should be asking, and things that elected leaders should consider.
Situations Focused On Content Removal.
Questions to ask for content sought to be removed.
Do any of the New Harms fit under existing federal or state law? For example, in Part Two of this blog, I shared the example of Maatje Benassi, a US Army reservist who was wrongly accused of being “patient zero” for COVID by someone who knew it was not true but continued to push this theory across the internet, including her personal information. Clearly, she was being harassed. Clearly, the behavior (especially the sharing of her home address) was meant to encourage, and did encourage, stalking.
Section 230(b)(5) states that it is meant to ensure “vigorous enforcement” of federal criminal laws to deter and punish this type of online behavior. Are the federal laws written in a way that covers this obvious situation? If so, are they communicated to the public effectively? Are there guidelines that could be created and distributed?
Can any of these laws be amended to cover publication of personal information that is intended to result in, or actually results in, stalking or harassment (crimes specifically noted in Section 230)? Do state privacy laws prohibit the publication of personal information without permission, and can they be invoked? If so, how do we educate consumers and companies protected by Section 230?
Content that’s being asked to be removed which may bump up against the First Amendment protections of Free Speech.
Remember that the First Amendment and free speech protections do not apply between you and another private party, like a third-party individual making online posts or an online company. However, when you ask the government to create laws that put limits on speech, the First Amendment DOES apply. Hate speech, content that incites violence, content that organizes violence – all bump up against, and must be considered in relation to, First Amendment rights.
Though the First Amendment right to free speech is not absolute, Supreme Court decisions on First Amendment protection grant broad rights regarding what someone can say. In Brandenburg v. Ohio (1969), the Supreme Court replaced the earlier “clear and present danger” rule with the “imminent lawless action” test, holding that the government (or states, in that particular case) could only restrict speech that “is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action.” This case was decided in 1969, well before the internet. Does it apply to government regulation of online content? Can online content that incites violence, or content that organizes violence, find a home in this language? What standards would the online content have to meet in order to rise to the level of being prohibited and proactively removed (vs. serving as evidence of the crime after the fact)?
Misinformation – where does it fit? During COVID, troublesome online posts included misinformation touting “cures” that were lethal. The Task Force I led monitored and reviewed this type of content, but there were no clear laws prohibiting these posts nor did federally authorized websites exist that provided timely information on what was legitimate vs. harmful.
Does it matter if someone really believes the information vs. knows it’s false and is intentionally posting it online? If someone believes the information, does it then become free speech, where the government is limited in how it can prohibit it? If it is a health risk to the public, are there other laws that may apply, under which health misinformation is prohibited when it carries a risk to life? What would that content need to look like and/or do to meet this standard? Does it make a difference if the harm is great vs. small? Physical vs. monetary? Where the harm can be calculated vs. non-economic harm?
Political misinformation. Take, for example, the 2020 presidential election, where the incumbent president lost the election and then claimed it was stolen and that there was massive fraud. This misinformation was widely distributed on social media platforms owned by companies that have the protection of Section 230. Most of these companies seemed unsure of their responsibility for this content. It was not until the January 6th riot on Capitol Hill that many social media companies decided they should remove the online content due to the significant risk to public safety, a decision that included the determination that the content posted by the former president amounted to incitement of violence.
No laws exist that can guide these companies on what they should or shouldn’t do. The action they took was based on their own Terms of Service. Was this too little, too late? What about others who continued to post the misinformation that led to the president’s ban? Should similar content posted by others be removed?
Political misinformation can be created and disseminated by anyone. It doesn’t always lead to harm. However, when the creators and disseminators are people in a position of power and widespread influence, the risk, level and likelihood of harm are exponentially greater. I want to briefly address it here through a series of questions for you to consider, using the misinformation campaign claiming that the 2020 presidential election was fraudulent. Is it OK for elected officials to tell the American public that the election was stolen? What if over 60 courts dismissed such claims because there was no evidence? What if the President’s own Attorney General issued a statement that the investigation into election fraud/theft found no evidence? Can public figures still repeat these repeatedly debunked and unfounded statements? Do federal representatives have a duty to base statements on facts? Or, at least, should they be required to have credible evidence supporting statements that have been repeatedly debunked and dismissed?
Elected officials, at all levels, have claimed that they too have a blanket right to free speech. But do they? Should they? What about when it is clear, even if they won’t say so publicly, that the statements aren’t true and they know it? What if the motivation to repeat the false statements is purely their own political gain? What about when those statements impact our democracy? What about when they divide the country and/or mislead a large portion of our population?
Shouldn’t elected officials, due to their platform, significant ability to influence, and duty of office, be held to a higher standard when they are speaking from their professional role and using the news platform given because of that role? Where an allegation or statement has been repeatedly dismissed by neutral authorities (such as courts of law), shouldn’t elected officials be required to have some basis of credible fact for the things they say? Don’t Americans have a right to this? Don’t their constituents deserve this? Don’t you believe it’s their responsibility to you?
Online company responsibility for misinformation. There has been a large outcry that an online company protected by Section 230 must have at least some responsibility to monitor and deal with harmful content such as misinformation. Compounding this concern, as the world has learned over the past year, some social media platforms, such as Facebook, may have algorithms designed to identify controversial/contentious content and distribute it more widely, as it leads to more “clicks” and to users staying on the site longer. In other words, they may be built to foster and promulgate contentious misinformation, with integrated revenue (such as advertising) to profit from it.
Is it fair to require a company to put content governance mechanisms in place when elected officials haven’t even passed laws to regulate the content? On one hand, the answer must be “no,” and therefore Section 230 immunity shouldn’t be at risk. On the other hand, New Harms will originate in online places such as social media. These companies are often the first to learn about New Harms. So, shouldn’t that create a responsibility to put governance mechanisms in place – at minimum, a responsibility to identify New Harms? It would seem only fair to say “yes.” But, then, does this require that their Section 230 immunity be at risk, especially given what Americans stand to lose if companies don’t have Section 230 protections? Or is there another effective deterrent or motivator to achieve the level of responsibility that is fair to expect from these companies?
Situations Focused On Claims of Inappropriate Censorship / Content Removal.
Last, and significant, are claims that large social media companies that enjoy Section 230 protections have censored online third-party content inappropriately. For example, some Republicans have stated that they believe Republican political views are unfairly censored by large tech companies like Facebook. Their position is the following: online companies that enjoy the liability protections of Section 230 should not be able to suppress or censor targeted political online content when that company is a major distributor and a well-known source where people get their news. They argue that companies of the size and influence of Facebook, Twitter, etc., hold enormous power to impact and sway public opinion, and that their censorship of published content is not a neutral act. Therefore, these online companies are different from run-of-the-mill Section 230 companies. Their size and influence should be factored into their obligations for Section 230 immunity, without which they could not exist. The focus here seems to be on companies that reach a certain size and influence, and arguably got there because of Section 230.
Next, let’s understand some of the laws that can come into play. As always with government regulation of content, let’s start with the First Amendment and how it applies. Under the First Amendment, the government cannot restrict your right to speak freely. However, these critics’ goal is not to restrict your speech. Their claim is that online companies that benefit from Section 230 (and that have reached a significant size and level of influence) shouldn’t be able to restrict your speech. Yet there are no First Amendment rights between you and a private party. You cannot claim a free speech right that obligates these online companies to post your speech on their platform. And Section 230, on its face, appears intended to provide protection for when online companies self-regulate (i.e., censor) content. In addition, there is a First Amendment doctrine called the “compelled speech” doctrine, which prohibits the government from requiring or compelling speech in certain circumstances, and which arguably applies. We are in new territory.
Here are a few questions to consider. If a company, by federal law, is granted immunity from civil liabilities, could/should that immunity be conditioned on meeting reasonable responsibilities – even when content/speech is involved? Does the First Amendment limit what kind of conditions the government can impose, if such conditions directed Section 230 companies not to publish certain third-party content or, alternatively, required them to publish certain third-party content? Shouldn’t the law that built their success speak to responsibilities for earning and/or keeping it? Is there some point where Section 230 companies reach a size and level of influence at which they should have public responsibilities, such as when they reach the size of a Facebook? Once they have reached a level of financial success, public engagement and influence, should they have additional responsibilities in order to remain protected?
The Path Forward.
Section 230 cannot be repealed. At the same time, it cannot remain as it is today. Equally, not everything should be laid at the feet of Section 230, nor can all the problems be blamed on it.
The path forward requires looking at the issues you want to solve and the network of laws that relate to them, which may include Section 230. Addressing issues raised with respect to Prohibited Content and Illegal Speech requires thoughtful, informed deliberation regarding the body of laws that surround an issue and the breakdowns that cause the problem. It requires identifying which laws require amendments and careful crafting of such change. Lastly, it may require some additional regulations that, for example, help civil harms from online content be dealt with efficiently.
The path forward also includes responsible consideration of the body of New Harms. Indeed, the harms in this category are where the most noise is made, the most soundbites created. The significance of these harms deserves more than media sound bites. Moreover, you should not want these important decisions determined by the online companies protected by Section 230. Just because these New Harms happen on their websites doesn’t mean that they are the appropriate party to make such decisions for our nation. They are not. They are the conduit.
Resolving the important issues raised by New Harms requires acknowledging that there are no laws that address these situations, and that there should be. Successful resolution requires careful research, public engagement and a cautious balance of competing rights. It also requires that this, in turn, results in responsible, bipartisan legislation specific to the harms. Then, as with legislation governing Prohibited Content and Illegal Speech, the content itself is governed. Section 230 companies play a role; but their role is to follow legislation established by parties elected to represent the breadth of America.
Jenn Suarez, CEO