Part Two: Why is Section 230 under attack?
The reasons Section 230 is under attack all center on the legal permission it gives platforms to moderate – or not moderate – content without legal liability. Hence, they deal mainly with social media-type companies and content posted by third parties (i.e., people who use the platform and aren't part of that company). Here's a closer look at the complaints. If you watch the news, you may have heard of two main arguments that seem to fall along political party lines. I'll describe them below according to the party they tend to be associated with; however, I think the fundamental arguments could be made by either party. Lastly, I'm adding a group of additional complaints that I'm putting into a third category, one that I believe is equally, arguably even more, significant and yet often remains under the radar.

Republicans: Republicans assert that conservative political views (whether expressed by a member of Congress or an ordinary person) are censored by these companies. Meaning, they are not as widely distributed across the platforms as more liberal views are. And that, especially during the Trump presidency, politically conservative posts were more likely to be assessed [by the platform or a platform's "fact checker"] as false, inflammatory or misleading/misinformation and then taken down, blocked or marked as such. They believe that if you want to enjoy federal immunity for third-party online content, you shouldn't be able to target and ban only certain types of viewpoints. Rather, moderation should be applied equally across the board.

Democrats: Democrats claim that these companies allow content to remain on their platforms that should be removed. They are upset, for example, that these companies' content includes misinformation campaigns and/or frauds perpetrated on the public (think fake COVID tests or interference in government elections) and that the companies have done little to identify this type of content and remove it. They believe companies have a responsibility to find and remove some types of particularly harmful content and, if they don't, they shouldn't have Section 230 immunity.

Third Category - Prohibited Content, Illegal Speech + New Harms: The third category is a collection of complaints covering content prohibited by criminal law or civil law. It also includes content that has created new problems and doesn't fit neatly into either criminal or civil law. I'm dividing them into my own assigned categories to make them easier to understand.

First, this category includes "Prohibited Content," which I'd define as content that is prohibited under federal/state laws and carries criminal penalties. This type of content can range from illegally selling drugs over the internet to misrepresenting that you are a financial broker. Each is a highly regulated activity governed by federal (and sometimes state) law. If you violate the rules, you may be fined or even go to jail. Section 230 does not grant immunity for criminal violations: it specifically exempts activity prohibited by federal criminal statutes and references the preservation of similar state laws.

Next is Illegal Speech, which is speech prohibited by civil law. Unlike Prohibited Content, Illegal Speech has to do with the rights between two individuals and/or entities, such as slander and defamation. Company A's competitor tries to hurt its business by making untrue statements about it online.
Or Jane Barnes has someone post personal pictures of her online with damaging, untrue statements. Both Company A and Jane Barnes have a civil right to pursue the other party for speech prohibited and considered illegal under civil law. Section 230 provides immunity for almost all forms of Illegal Speech, with the exception of intellectual property violations: it does not grant immunity for claims such as copyright or trademark infringement.

Last is a group of content that I'm calling New Harms. These are harms unique to the internet. It's not clear whether they should be a criminal offense, carry civil liability, or both. There are few, if any, existing laws that address them; or, if there are laws, it's unclear how to apply them to this new set of facts. Most cases have come to light over the last handful of years. Take, for instance, fake conspiracy theories – by this I mean conspiracy theories where the perpetrator knew what they were saying was not true. An example of the harm this can cause is the case of Maatje Benassi, a US Army reservist who was wrongly accused of being "patient zero" for COVID. She was not patient zero. She and her husband had their personal information (such as their home address) widely shared, the content was not taken down, and their lives were repeatedly threatened and forever upended as a result. In this situation, it's clearly wrong. But who is liable, and under what laws?

As you can see, Section 230 doesn't stand alone. It is part of a bigger system of regulations. Let's review how they intersect with Section 230.

First Amendment. It's important to know that there is no First Amendment right of free speech that is enforceable against a private company. The First Amendment only protects you from having the government interfere with your free speech. The government can't do it, but you can't assert this right against a Twitter or Facebook. Even without Section 230, companies have a right to determine what kind of content they want to promote on their sites so long as they don't violate other laws. In fact, arguably, they have a First Amendment right to curate their websites with the content and views they want, without government interference. So when the complaint is that a private party blocked or censored a viewpoint online, the First Amendment matters mainly in the sense that it doesn't apply. Often, we assume it might apply in these situations, since freedom of speech is a deeply cherished American ideal. This doesn't mean that the argument put forth by Republicans doesn't have merit. I'm going to address their argument in Part Three.

Important Aside: For the remainder of this post, I'm going to focus on content that is not moderated or taken down. I want to explain the breadth of what it includes and provide an understanding of the complexities that exist in the interplay of laws with Section 230. I am also going to narrow the types of companies (also known as service providers) that I'm using for examples. The easy ones to think about are Facebook and Twitter types, i.e. social media platforms. But there are other service providers who operate on the periphery of Section 230 and are part of the ecosystem of the internet. Some of these "other" providers are stuck trying to navigate the complex system I'm going to describe.
This includes hosting providers who host websites that may be protected by Section 230 and who, because they provide hosting services, must regularly make decisions on whether they should (or must) require a hosting customer to take content off their website. I also argue that, by necessity, these "other" providers may also include (or we should at least consider) online ads (from ad networks), SMS/texts (services from mobile providers) and possibly email services, inasmuch as they are often part of the overall abuse being perpetrated. This last group is significant because content on their services has increasingly become an integral part of the picture that must be considered when evaluating online content. This post will primarily consider companies that allow third parties to post content and to which Section 230 directly applies. And it will lightly consider hosting companies, a party that is regularly in the crosshairs of having to make decisions regarding content posted on their customers' websites.

Illegal Speech. Section 230 doesn't change the legality of Illegal Speech. Illegal speech is still illegal. If someone is defaming or slandering you, your right to pursue a legal action against that person for defamation or slander still exists. Or, if you make a slanderous comment about your brother-in-law or ex-boss, you remain liable to him or her. Here is what changes with Illegal Speech under Section 230: the distributor (or redistributor) of the content (i.e. Twitter, Facebook, Instagram) is no longer liable for your slanderous statements or Illegal Speech. These are considered neutral forums or vehicles that we can use to post our speech, because Section 230 gives them immunity. Since Section 230 provides immunity to these online companies, they don't have to review or moderate everything that is posted or shared using their services. The person who remains liable is you.

As mentioned earlier, Section 230 does not give immunity for intellectual property (IP) violations. What this means in the online world, though, is not clear, because things like trademark violations often depend heavily on the facts. There is one exception for governing IP violations on the internet, and that is copyright. Copyright is an intellectual property right that governs literary, artistic, educational, or musical forms of content. The Digital Millennium Copyright Act (DMCA), passed in 1998, was created specifically to provide a mechanism to protect this kind of content on the internet. It provides guidelines and specific procedures for submitting a "Takedown Request" (meaning a request to take down the copyrighted content) to an online content company, and it spells out what that company must do to retain immunity from liability. You'd use the DMCA if someone took your images and put them on their website without your permission, or if you wrote something (whether online or in print) and someone used it online without your permission. It has a narrow application: copyright alone, and only on the internet.

Prohibited Content. Existing federal criminal laws apply to the internet. Section 230 specifically does not impact federal criminal law, nor is it meant to impact state criminal laws. Prohibited Content is prohibited, and you do not have immunity just because someone else posted it. The laws often at play for Prohibited Content online range from federal drug statutes to financial and securities regulations to laws against child exploitation.
What happens when Illegal Speech, Prohibited Content or content that represents New Harms is online? Well, it depends.

Self-Monitoring. Some of the large online tech companies have basic self-monitoring protocols in place for third-party content. This is almost always for Prohibited Content, where they could be held directly accountable. Often, these protocols are part of a larger process that includes the intake of external complaints, which is where the majority of questionable content is identified.

External Complaints. How these are handled depends on the type of complaint.
Some violations are easy to identify, and action can be taken swiftly, such as with images of child exploitation. Many, though, aren't as obvious, because the violation may depend on things that have to be verified, and companies either lack the means to verify them or the rules for what constitutes Prohibited Content are not clear. Thus, when a complaint claims that content is Prohibited Content but the content isn't an obvious violation, what happens next can vary. This results in confusion and frustration. An everyday person submits a complaint about content they believe is Prohibited Content, the company can't verify the claim, the content remains online, and both sides are frustrated. Companies receive well-meaning and respectful notices from law enforcement about questionable content, but law enforcement only asks them to investigate and remove the content if it violates laws or their Terms of Service. Companies are rarely directly instructed by law enforcement or governmental agencies that the content in question is Prohibited Content and must be taken down.
For Illegal Speech, companies generally will not decide on their own whether a statement is, say, slanderous. Instead, the person or business who is legitimately slandered must pursue claims directly against the party who actually made and posted the statements. They must obtain a judgment that rules the statement slanderous. AND, along with that judgment, they must obtain a court order instructing the internet company to remove the content. Companies usually will not remove the content based on a judgment alone. This is not nefarious behavior; rather, it's usually because a judgment is based on very specific facts that aren't identical from one situation to the next. So, to make sure they are doing the right thing, companies want a court order instructing them specifically to take the content down, so that their actions are under a court's protection and they remain neutral. This gets complicated if the person who slandered you posts a similar statement on another platform, because you usually need to get a revised court order instructing that company to remove it.
Herein we can begin to see the problem. Let's isolate the areas where harm results.

Prohibited Content: harm happens when 1) Prohibited Content goes unreported or undetected and remains online; or 2) Prohibited Content is reported but there aren't clear legal guidelines on how to verify that the content is indeed Prohibited Content, and so the content is not removed promptly.

Illegal Speech: remember that these are harms to an individual or entity and are based in civil law. They include fraudulent impersonation of a company and/or false claims about people. Usually it is the individual or entity that suffers the harm. The party claiming the content is Illegal Speech almost always must go to a court of law to determine their rights. Their rights are determined vis-à-vis the person who posted the content and, thereafter, they seek to enforce those rights against an online company to have the content removed. Resolution is neither quick nor inexpensive. It's a process built by default in the absence of clearer guidelines. From my direct experience, most companies feel ill-equipped to know what to do, so they attempt to create a responsible process for these kinds of complaints that defers to existing legislation and courts of law, which are the appropriate triers of fact and adjudicators of liability. Nonetheless, it's long and complicated. Moreover, the party who was wronged often gets stuck in a never-ending legal battle across multiple online companies because the perpetrator continues to post the content in new places. All the while, the harm continues. We can do better.

New Harms: we know the content causes harm but see no clear regulation on how to deal with it. This includes hate speech, content inciting violence, false news and claims (think COVID cures), publication of my personal information without my permission (like my phone number or driver's license), and manipulation of the American public in the governance of the US (for example, misrepresenting oneself as a US citizen when the account is in fact a bot or a group of terrorists). Companies usually handle complaints about this type of content by: 1) evaluating whether they can find a legal basis on which to take down the content. If personal information has been posted without permission, is there a federal, state or international law that applies and prohibits it? 2) evaluating the content against their Terms of Service (ToS) and/or Acceptable Use Policies (AUP). Interestingly, ToS/AUPs (though you may hate how long and detailed they are) are a tool that online companies use to "manage the grey area" where laws have not been enacted. For example, many include "incitement of violence" as prohibited content. Some include "hate speech." Neither is prohibited by law. But if the company's ToS/AUP says that content inciting violence is prohibited, then they can remove the content on that basis. When neither of these applies, as with a lot of New Harms, they go with 3) instructing the reporter to take the matter to a court of law for adjudication.

These are all serious issues to solve, and I deal with all of them firsthand. I serve as the Data Protection Officer of a large, global domain and hosting provider, and I also oversee their Abuse department. I work directly with federal agencies such as the FBI, FTC, FDA and DOJ, plus state Attorneys General, to evaluate questionable content and verify whether it violates criminal laws.
I am regularly in the trenches with companies to help them determine the right policies, procedures and/or outcomes that will comply with the law, protect ideals such as free speech, and allow them to be good citizens of the internet. I'm proud to say that my client, and others like them, have done a darn good job of finding balance and answers where there is no clear guidance. But we can't lay the responsibility of solving the above issues at the feet of companies or law enforcement when laws are unclear or don't exist. Besides being unfair, it will result in inconsistency from one company to another. In truth, it offers no true resolution at all.

As I'll describe in Part Three, the way forward is not clear-cut or easy, but it is possible. Section 230 should be amended to catch up with the times and the evolution of the internet. In doing so, as a nation, we must consider other fundamental human rights that are the bedrock of our democracy, such as due process and free speech. Indeed, Section 230 is considered the bedrock of free speech on the internet; the EFF has aptly dubbed it "the most important law protecting internet speech." As citizens and residents of this great country, we must take the time to be informed and to be diligent in what we consider. We must ensure no one resorts to fast decisions that make good sound bites for the press. Amending Section 230 must have the same deliberation and care required by any action that could curb (or destroy) our cherished liberty of free speech. If we do it right, we have the opportunity to create a bedrock of fair, balanced and integrated guidelines for the internet's next leap forward. I look forward to discussing how in Part Three.