SECTION 230: WHAT YOU NEED TO KNOW
Part Three, Conclusion: 230 Reform –
What You Could Lose, What You Could Gain: Solutions
We can do this.
We have a framework to use for building solutions. Now, let’s roll up our sleeves and go through the problems identified.
Content Removal. Here, we are looking at content that should be evaluated for removal by a company.
-Prohibited Content. Remember, this is content prohibited by federal criminal law. The main problem here is that content goes undetected and/or takes too long to identify and remove. A major cause is poorly written laws and/or laws that do not account for how a company would verify that the content is prohibited. In other words, the problem lies in the language and in implementation considerations.
To prevent online harms effectively, it’s imperative that lawmakers define violations in terms as concrete as possible; this is essential for easy identification and enforcement. Consider what an online company would need in order to self-screen. Make sure the language used is unambiguous. Consider whether the Prohibited Content is a type that, with appropriate language, could be identified via a technological solution. Where algorithmic screening/blocking is not possible, provide enough detail for an online company to self-assess manually. Finally, empower frontline federal, state and local law enforcement agents to state clearly to online companies when identified content violates criminal law, and give them the authority to direct online companies on what they must do.
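To make concrete why unambiguous statutory language matters for algorithmic screening, here is a minimal, hypothetical sketch in Python. The rule name, the pattern, and "controlled item X" are all invented for illustration; real screening systems are far more sophisticated.

```python
import re

# Hypothetical rules derived from (imagined) statute text. A rule can only be
# written down like this if the law defines the violation in concrete,
# unambiguous terms -- the drafting point made above.
PROHIBITED_PATTERNS = {
    "sale_of_controlled_item": re.compile(
        r"\b(for sale|selling)\b.*\bcontrolled item X\b", re.IGNORECASE
    ),
}

def screen_post(text: str) -> list[str]:
    """Return the names of any prohibited-content rules the post appears to match.

    Matches are flags for human review, not final legal determinations.
    """
    return [name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(text)]

print(screen_post("Now selling controlled item X, message me"))  # flagged
print(screen_post("A perfectly ordinary post"))                  # not flagged
```

A statute worded only as "harmful content" could not be turned into such a rule at all, which is exactly the implementation breakdown described above.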
-Illegal Speech. This is content governed by civil law; it does not include intellectual property. Section 230’s primary goal is to grant an online company immunity from civil liability for content posted by a third party. A victim’s rights remain intact against the person who posted the content; the online company, however, is treated as a neutral party.
The main problem with Illegal Speech is the complicated and, sometimes, herculean process a victim must go through to get online content removed. The cause of the problem is that civil laws have not been updated to provide a process by which a victim can efficiently and effectively protect themselves when the violation is online. It’s the lack of process that fails them (i.e. an implementation breakdown).
I know this firsthand. Let me describe how it works and what needs to happen. Civil law violations are fact-based determinations best made by a court of law. Indeed, a victim must go to court to obtain a legal adjudication of whether online content creates civil liability. The potential victim would sue the party who made the statement (for our purposes, that’s the third party who posted it) and seek to have the court rule that the content is illegal. At the same time, the third party who posted the content seeks to defend themselves. Adjudication through a court affords each party due process before a neutral expert (the court) on the law related to the civil claim. This is the way it worked prior to online growth, and the way it still works. But the adjudication process has not evolved to account for the complexities that can occur with online Illegal Speech, which is the underlying cause of the problem.
Let me illustrate what can happen using an example. Assume that someone posted online a lie about you stating that you are a cheater, pedophile or embezzler and they include your picture/home address in their post on a very popular website (like Facebook, Twitter, etc.). You likely have civil law rights that you can enforce against this person who posted this content as it may constitute Illegal Speech. But, your first goal will almost certainly be to get the false content down before it creates havoc in your life.
Today, at a basic level, this requires you to obtain a judgment against a person that determines the specific content to be Illegal Speech. Then, you’d need a court order, issued with the judgment, directing the online company to remove it.
Sound straightforward? Actually, here is where it all falls apart.
Jurisdiction Challenges. To navigate this process with online content requires a strategic understanding of the online world and of what companies will need to ensure they are removing the right content. First, you need to understand where the parties are located. Parties to consider include both the defendant (who posted the content) and the online company where the content resides. Many of these civil laws, such as defamation and slander, are state-based laws. So, you also need to determine whether the state has jurisdiction over the person who posted the content. If the person resides outside of the state, will the court allow you to name them as a defendant? Some state laws have a “long arm” clause written into them that will grant a court jurisdiction over someone who doesn’t live there or do business there. These clauses have been increasingly applied to online posts that constitute Illegal Speech.
Jurisdiction questions help you determine whether you can obtain a judgment against the person who posted the content, which you must have if you seek to force the removal of content. But the judgment is only the first step. You also need to determine whether, if you win the case, the online company will comply with a court order to remove the online content. One factor is whether the online company is US-based and, thus, will comply with a US court order.
Naming the right defendant. Let’s assume that the company is subject to US laws and there are no state issues. When you file your lawsuit, you need to name a defendant, and that needs to be the person who posted the content. If a name accompanies the post, you might choose to use it. When it’s unknown, some plaintiffs name who they think it is, or they name “John Doe” and reference the posted content. The risk at this stage is naming the wrong defendant. For instance, say the name with the post is Trey Smith (or you think it’s Trey Smith). If you get a judgment / court order against a person named Trey Smith specifically, that name needs to match the name in the online company’s records. A court order requiring the removal of content posted specifically by Trey Smith will not be executed if the poster is not named Trey Smith.
Defects in the judgment / court order. Let’s assume that you’ve named the defendant sufficiently. The judgment and related court order need to sufficiently describe the content to be removed. Let’s assume that the order contains examples you provided of the specific content as it’s posted online and that you’ve named the website where it’s located. Let’s also assume that the online company will comply with any US-based court order. If the specific content noted in the court order remains online in the same form, you are going to be successful in getting the company to remove the false statements about you. In reality, though, it’s usually not this easy.
What happens if the content was changed and no longer clearly matches what you have in your judgment? What if the content is similar but now under someone else’s name (i.e. the defendant had another party post it and removed their own post)? What if the content has now been put on other websites and you need to get it removed from those as well? Any of these scenarios creates the risk that you may be required to have the court amend your judgment, or to go back to court anew.
These current processes exist by default. By necessity, they were cobbled together as online companies tried to create a fair way to deal with Illegal Speech. The laws are valid. The right party is being held accountable. However, jurisdiction issues and the enforcement processes have not changed with the evolution of how online harms now occur.
For Illegal Speech, and the problems of protection and enforcement in the online world, I propose the following as components to a possible solution.
State level. As mentioned earlier, all laws must be on the table. At the state level, states could create jurisdictional exceptions for online harms caused by content posted by third parties on websites protected by Section 230. These exceptions could provide extra-territorial reach for the sole purpose of adjudicating and removing the third-party content from the internet. While they could include damages and monetary awards, they wouldn’t have to, thereby significantly reducing liability risk in exchange for jurisdiction.
Federal level. Here is a perfect place for a Section 230 amendment. An amendment could provide that any company enjoying Section 230 protection would be required to accept a state judgment regarding civil harms for content posted on its site and to remove the content. In addition, a set of guidelines could be published to help victims with the language to use in judgments and court orders. Thus, as part of their immunity, online companies would provide guidance and reduce roadblocks for those seeking to enforce a civil judgment against a third party posting on their site.
Another potential Section 230 amendment could require content arbitration clauses for anyone who wants to post on the online company’s website. In other words, if you want to post on ABC’s website, you agree to submit to US-based arbitration over your content if a third party claims their rights under civil laws have been violated. The arbitration panel could determine whether the content in question constitutes Illegal Speech under the applicable state law, and its authority could be limited to ordering that the specified content be removed from the website of the online company in question. The panel could not award monetary damages, and the losing party could appeal to a federal court.
Here’s Our Work + What Matters.
Content Removal + Censor/Removal Restrictions. The next group is a collection of New Harms. It includes situations where people believe there is a responsibility to remove content, and situations where people believe a Section 230-protected company may be restricted in what or how it removes, or may be subject to guidelines for removal.
-New Harms. This is online content where there is general agreement that it creates harm, but where no laws clearly apply. Consider again: hate speech, content inciting violence, publication of your personal information without your permission (like your phone number or driver’s license), false news and claims (think COVID cures), and manipulation of the American people in the governance of the US (for example, election manipulation by foreign entities, or known lies that are perpetuated and that impact the US broadly). Of course, this list is not exhaustive.
In all cases, legislative guidance is required. Companies protected by Section 230 should not be held responsible for situations where our elected leaders have not legislated. Nor should their protection under Section 230 be threatened.
Below are the questions you and our elected leaders should be asking, and the things that elected leaders should consider.
Situations Focused On Content Removal.
Questions to ask for content sought to be removed.
Do any of the New Harms fit under existing federal or state law? For example, in Part Two of this blog, I shared the example of Maatje Benassi, a US Army reservist who was wrongly accused of being “patient zero” for COVID by someone who knew it was not true but continued to push the theory across the internet, including her personal information. Clearly, she was being harassed. Clearly, the behavior (especially the sharing of her home address) was meant to encourage stalking, and did.
Section 230(b)(3) states that it is meant to ensure “vigorous enforcement” of federal laws to deter and punish this type of online behavior. Are the federal laws written in a way to cover this obvious situation? If so, are they communicated to the public effectively? Are there guidelines that could be created and distributed?
Can any of these laws be amended to include publication of personal information that is intended to or where the posting of it actually results in stalking or harassment (crimes specifically noted in Section 230)? Do state privacy laws prohibit the publication of personal information without permission and can they be invoked? If so, how do we educate consumers and companies protected by Section 230?
Content whose removal is sought, but which may bump up against the First Amendment’s protection of free speech.
Remember that the First Amendment and free speech protections do not apply between you and another private party, like a third-party individual making online posts or an online company. However, when you ask the government to create laws that put limits on speech, the First Amendment DOES apply. Hate speech, content that incites violence, content that organizes violence: all bump up against, and must be considered in relation to, First Amendment rights.
Though the First Amendment right to free speech is not absolute, Supreme Court decisions grant broad protection to what someone can say. In Brandenburg v. Ohio (1969), the Supreme Court replaced the earlier “clear and present danger” test with the stricter “imminent lawless action” standard, holding that the government (or states, in that particular case) could only restrict speech that "is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action." This case was decided in 1969, well before the internet. Does it apply to government regulation of online content? Can online content that incites violence, or content that organizes violence, find a home in this language? What standards would the online content have to meet in order to rise to the level of being prohibited and proactively removed (vs. serving as evidence of the crime after the fact)?
Misinformation – where does it fit? During COVID, troublesome online posts included misinformation touting “cures” that were lethal. The Task Force I led monitored and reviewed this type of content, but there were no clear laws prohibiting these posts nor did federally authorized websites exist that provided timely information on what was legitimate vs. harmful.
Does it matter if someone really believes the information vs. where they know it’s false and are intentionally posting online? If someone believes the information, does it then become free speech where the government is limited in how it can prohibit it? If it is a health risk to the public, are there other laws that may apply where health misinformation is prohibited where it carries a risk to life? What would that content need to look like and/or do to meet this standard? Does it make a difference if the harm is great vs small? Physical vs. monetary? Where the harm can be calculated vs. non-economic harm?
Political misinformation. Take, for example, the 2020 presidential election where the incumbent president lost the election and then claimed it was stolen and that there was massive fraud. This misinformation was widely distributed on social media platforms owned by companies who have the protection of Section 230. Most of these companies seemed unsure of their responsibility as to this content. It was not until the January 6th riot on Capitol Hill that many social media companies decided that they should remove the online content due to the significant risk to public safety, and it included the determination that the content posted by the former president amounted to incitement of violence.
No laws exist that can guide these companies on what they should or shouldn’t do. The action they took was based on their own Terms of Service. Was this too little, too late? What about others who continued to post the misinformation that led to the president’s ban? Should similar content posted by others be removed?
Political misinformation can be created and disseminated by anyone. It doesn’t always lead to harm. However, when the creators and disseminators are people in positions of power and widespread influence, the risk, level and likelihood of harm is exponentially greater. I want to briefly address this here through a series of questions for you to consider, using the misinformation campaign that claimed fraud in the 2020 presidential election. Is it OK for elected officials to tell the American public that the election was stolen? What if over 60 courts dismissed such claims because there was no evidence? What if the President’s own Attorney General issued a statement that the investigation into election fraud/theft found no evidence? Can public figures still repeat these repeatedly debunked and unfounded statements? Do federal representatives have a duty to base statements on facts? Or, at least, should they be required to have credible evidence supporting statements that have been repeatedly debunked and dismissed?
Elected officials, at all levels, have claimed that they too have a blanket right to free speech. But do they? Should they? What about when it is clear, even if they won’t say so publicly, that the statements aren’t true and they know it? What if the motivation to repeat the false statements is purely their own political gain? What about when those statements impact our democracy? What about when they divide the country and/or mislead a large portion of our population?
Shouldn’t elected officials, due to their platform, significant ability to influence, and duty of office, be held to a higher standard when they are speaking from their professional role and using the news platform given because of that role? Where an allegation or statement has been repeatedly dismissed by neutral authorities (such as courts of law), shouldn’t elected officials be required to have some basis of credible fact for the things they say? Don’t Americans have a right to this? Don’t their constituents deserve this? Don’t you believe it’s their responsibility to you?
Online company responsibility with misinformation. There has been a large outcry that an online company protected by Section 230 must have at least some responsibility to monitor and deal with harmful content such as misinformation. Compounding this concern, as the world has learned over the past year, some social media platforms, such as Facebook, may have algorithms designed to identify controversial/contentious content and distribute it more widely, as it leads to more “clicks” and keeps a user on the site longer. In other words, they may be built to foster and promulgate contentious misinformation, with integrated revenue (such as advertising) to profit from it.
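The amplification dynamic described here can be sketched abstractly. The scoring formula, field names and weights below are invented for illustration; they are not any platform’s actual ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    controversy: float  # 0.0-1.0; an invented measure of how contentious a post is

def engagement_score(post: Post, controversy_weight: float = 2.0) -> float:
    """Hypothetical feed-ranking score: raw engagement boosted by controversy.

    If contentious posts reliably earn more engagement, any ranking that
    optimizes for engagement will tend to surface them more widely.
    """
    return (post.clicks + post.comments) * (1.0 + controversy_weight * post.controversy)

feed = [
    Post("calm local news", clicks=100, comments=10, controversy=0.1),
    Post("contentious misinformation", clicks=100, comments=10, controversy=0.9),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print(ranked[0].text)  # the contentious post ranks first despite identical raw engagement
```

The point of the sketch is that no one needs to intend to promote misinformation for this outcome; it falls out of optimizing any metric that contentious content happens to maximize.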
Is it fair to require a company to put content governance mechanisms in place when elected officials haven’t even passed laws to regulate the content? On one hand, the answer must be “no,” and therefore Section 230 immunity shouldn’t be at risk. On the other hand, New Harms will originate in online places such as social media, and these companies are often the first to learn about them. So, shouldn’t that create at least a responsibility to identify New Harms and put governance mechanisms in place? It would seem only fair to say “yes.” But then, does this require that their Section 230 immunity be at risk, especially given what Americans stand to lose if companies don’t have Section 230 protections? Or is there another effective deterrent or motivator to achieve the level of responsibility that is fair to expect from these companies?
Situations Focused On Claims of Inappropriate Censorship / Content Removal.
Last, and significant, are claims that large social media companies enjoying Section 230 protections have censored third-party online content inappropriately. For example, some Republicans have stated that they believe Republican political views are unfairly censored by large tech companies like Facebook. Their position is the following: online companies that enjoy the liability protections of Section 230 should not be able to suppress or censor targeted political online content when the company is a major distributor and a well-known source where people get their news. They argue that companies with the size and influence of Facebook, Twitter, etc., hold enormous power to impact and sway public opinion, and that their censorship of published content is not a neutral act. Therefore, these online companies are different from run-of-the-mill Section 230 companies. Their size and influence should be factored into their obligations for Section 230 immunity, without which they could not exist. The focus here seems to be on companies that reach a certain size and influence, and arguably got there because of Section 230.
Next, let’s understand some of the laws that can come into play. As always with government regulation of content, let’s start with the First Amendment and how it applies. Under the First Amendment, the government cannot restrict your right to speak freely. However, the goal of those making these claims is not to restrict your speech; their claim is that online companies who benefit from Section 230 (and which have reached a significant size and level of influence) shouldn’t be able to restrict your speech. Yet, there are no First Amendment rights between you and a private party. You cannot claim a free speech right that obligates these online companies to post your speech on their platform. And Section 230, on its face, appears intended to protect online companies when they self-regulate (i.e. censor) content. In addition, there is a First Amendment doctrine called the “compelled speech” doctrine, which prohibits the government from requiring or compelling speech in certain circumstances, and which arguably applies here. We are in new territory.
Here are a few questions to consider. If a company, by federal law, is granted immunity from civil liabilities, could / should that immunity be conditioned on meeting reasonable responsibilities, even when content/speech is involved? Does the First Amendment limit what kind of conditions the government can impose, if such conditions directed Section 230 companies not to publish certain third-party content or, alternatively, required them to publish certain third-party content? Shouldn’t the law that built their success speak to responsibilities for earning and/or keeping it? Is there some point at which Section 230 companies reach a size and level of influence where they should have public responsibilities, such as when they reach the size of a Facebook? Once they have reached a level of financial success, public engagement and influence, should they have additional responsibilities in order to remain protected?
The Path Forward.
Section 230 cannot be repealed. At the same time, it cannot remain as it is today. Equally, not everything should be laid at the feet of Section 230; not all the problems can be blamed on it.
The path forward requires looking at the issues you want to solve and the network of laws that relate to them, which may include Section 230. Addressing issues raised with respect to Prohibited Content and Illegal Speech requires thoughtful, informed deliberation regarding the body of laws that surround an issue and the breakdowns that cause the problem. It requires identifying which laws require amendments and careful crafting of such change. Lastly, it may require some additional regulations that, for example, help civil harms from online content be dealt with efficiently.
The path forward also includes responsible consideration of the body of New Harms. Indeed, the harms in this category are where the most noise is made and the most sound bites are created. The significance of these harms deserves more than media sound bites. Moreover, you should not want these important decisions made by the online companies protected by Section 230. Just because these New Harms happen on their websites doesn’t mean they are the appropriate party to make such decisions for our nation. They are not. They are the conduit.
Resolving the important issues raised by New Harms requires acknowledging that there are no laws that address these situations, and that there should be. Successful resolution requires careful research, public engagement and a cautious balance of competing rights. It also requires that this, in turn, result in responsible, bipartisan legislation specific to the harms. Then, as with legislation guiding Prohibited Content and Illegal Speech, the content itself is governed. Section 230 companies play a role; but their role is to follow legislation established by the parties elected to represent the breadth of America.
SECTION 230: WHAT YOU NEED TO KNOW
Part Three, Introduction: 230 Reform –
What You Could Lose, What You Could Gain: The Solution Framework
Let’s start with a recap.
Part One shared that Section 230 is meant to (emphasis added):
Problem areas identified in Part Two are:
How Do We Fix This?
Repeal Section 230?
At the outset, I want to put on (and then remove from) the table the idea that Section 230 should be repealed. It’s simply not possible. Well, ok sure, it’s possible but completely disastrous and unworkable. It would stink for all of us.
Here’s a glimpse.
Without protection from civil liability (which is what Section 230 provides), Facebook, Twitter and the like would still exist. There would still be social media companies. Removing Section 230 wouldn’t ensure a change in behavior, but it likely would ensure that they’d be forced to leave the US. En masse. They’d have no choice.
Online websites that are designed to enable people to engage and express themselves on the internet are here to stay. Consumer demand for websites that provide engagement is increasing. Moreover, no one cares or bases their use of such a website on whether the company is based in the US. (For a great example – think TikTok.) In other words, when an online company matches consumer demand, company location is not a deciding factor, at least today.
Repealing Section 230 would have immediate effects. Without it, existing US companies that allow third-party content would have to move outside of the US to continue to exist. And it would hamper the innovation of new online companies like these in the US; their liability would be too great. An exit of entrepreneurism, economic growth and internet-related jobs would follow, because most of what’s online will certainly continue to have some interactive component.
The exit of these companies would create larger problems for those of us in the United States. Let’s start with civil liabilities. You would have practically zero ability to enforce your rights with respect to content that violates US civil law in such a changed online world. Why? Because, even assuming you obtain a judgment determining that online content about you is Illegal Speech, you still need a company to remove the content based on that judgment. Now, these companies would be outside the US and not subject to US law. Your alternative would be to sue these non-US companies to force them to remove the content. Even if you have the money and ability to sue a company outside the US (let’s say, in Ireland), you would have to find and hire local counsel to file your lawsuit in Ireland, the right you are seeking to enforce would have to exist under Ireland’s law, and Ireland’s law would have to allow someone like you (who doesn’t live there) to take advantage of it. Your ability to enforce your US-based civil law rights in another country simply doesn’t exist.
It’s even less promising for the enforcement of US criminal laws against companies that exist abroad. While criminal laws remain intact under Section 230, they are intact to enforce against companies that exist in the US and/or are subject to US jurisdiction. These laws don’t apply outside of the US. A majority of third-party content and social media companies are based in the US and subject to US law, which provides you protection under US criminal law. They are in the US for a lot of reasons that absolutely include Section 230. But if they had to leave the US, know that a company without a US legal presence could allow a third party to post content about you that would otherwise be criminal under US law. And there is likely nothing you could do about it. Thus, a repeal would not only torch our civil remedies, it would also undermine our ability to prevent the very criminal harms we are trying to stop, because we would have driven the largest existing tech companies, and future ones, out of the US.
If these reasons don’t have you convinced, there is another one that I believe may eclipse them all. Section 230 truly is the reason you have free speech online. Without it, minority voices and opinions would have limited online avenues for expression. For clarity, “minority” in this context is not about the color of your skin, your religion or your ethnic background. It is much broader than that. It is any belief or point of view about anything happening in your life, community, country or the world that may run counter to that of a larger group. Your ability to share these things and, as a bonus, find and discuss them online with likeminded people would be virtually non-existent.
What happens when you want to call out injustices to you, your community or other injustices that you believe are happening? No one would allow it on their website. You could have a website of your own; but single websites lack the collective distribution power of larger, established ones. Some very established Section 230 online companies may even serve to bring free speech and liberation to other countries. But, leaving aside those outside the United States, I propose that, without Section 230, neither you, nor I, nor our country would have learned about George Floyd and other victims who put faces, names and unforgettable details to the ongoing abuse against Black Americans. You probably would not find online forums that would have allowed you to express your belief that the 2020 election was stolen.
Whatever your beliefs, a single defining and uniting principle is at stake: in the United States, you enjoy a constitutionally protected human right, guaranteed by our government, to speak freely. This core freedom is also embraced (with the help of Section 230) by online companies that want to protect your ability to speak your views and beliefs. You and I will likely never convince a newspaper to publish our opinions or beliefs, however newsworthy, if they can’t be fact-checked and proven. And we probably couldn’t build a powerful community of like-minded people by word of mouth. The equalizing channel that can carry your opinions, and hold the opportunity of building a like-minded community, is the internet. The viability of that avenue is tied to Section 230.
In sum, the losses you stack up if Section 230 is repealed are: the loss of the ability to enforce your rights under US civil and criminal law, the loss of the ability to truly speak your mind, and the loss of the economic growth and jobs related to online companies.
Everything you want to protect requires that companies have to be able to exist in the US which, in turn, requires Section 230.
For the US to retain the vast advantages of economic growth on the internet, interactive companies have to be able to exist here. To be able to protect yourself against US-legislated civil harms regarding online content, the company where the harm occurs must be able to exist here. To ensure US criminal laws count, the companies you are looking to regulate must be able to exist here. And to empower you to exercise your US constitutional right to free speech (your ability to express your views, no matter how unpopular or how different from what others believe, and to connect with others to discuss them), the companies most able to provide this are the companies that are able to exist here. If you value any of these four, you have to agree that we cannot repeal Section 230. Instead, the smarter, easier and more effective thing to do is to fix it.
Components of the Solution.
We’ve identified the major issues creating the problems attributed to Section 230. To address them comprehensively, solutions must include three things that are at the heart of success. They are:
1) All appropriate laws must be on the table. Section 230 doesn’t stand alone. It is part of a broader system of legislation. So, which law is the appropriate one to amend? Not everything can or should be addressed by amending Section 230. For example, if the harm you want to fix is the unauthorized online publication of your personal information, like your driver’s license or home address, a federal or state privacy law may be the more appropriate place, because privacy laws govern the use of personal information. Similarly, where any other federal law interacts with Section 230, it may be more appropriate to amend that law to address how it interacts with Section 230 and any other laws.
2) Language, language, language. Care must be taken with the language of any amendment, Section 230 included, to make clear what is changing and how it interacts with other laws. The language should also be written to protect the values you seek to protect and to encourage the behavior you seek to encourage.
3) Implementation components, such as verification and enforcement guidelines, are where the law becomes real. When drafting the language of any legislation, it is imperative to account for how compliance will be verified and enforced, because this is where legislation is prone to fail.
Let me share an example. In March 2020, when COVID struck the US, I created and led a global COVID Task Force for an online, US-based client. Our goal was to preemptively identify and remove abusive COVID-related content. In an integrated, global effort, I worked directly with the heads of US federal law enforcement and global government agencies.
On March 23, 2020, the president signed Executive Order 13910, banning hoarding and price gouging. The Task Force moved immediately to add the terms of the ban to the monitoring mechanisms we had put in place. For our purposes, the content most likely to violate this EO would be listings selling items like personal protective equipment (PPE) at prices that amounted to “gouging.” Implementing and complying with the EO therefore required being able to determine whether an online price qualified as “gouging.” But the EO didn’t say what would qualify. The most it said about price gouging was “prices in excess of prevailing market prices.” No one, at that time, knew what prevailing market prices were for basic things, like toilet paper, because you could no longer get it in your grocery store or in most places on the internet. PPE was worse.
The Task Force worked directly with a leading manufacturer of PPE to combat abuse where they were being impersonated. Even though this company shared its retail prices with us, those prices could no longer be used to assess whether an online price for its products was “gouging” because of all the new intermediaries in the distribution process. These new parties absolutely increased final retail prices, but that didn’t mean any party in the distribution chain was actually gouging.
For the record, I’m not picking on this EO; it was the right thing to do, and everyone did their best to work with it. It is, though, a perfect example of the significance of language, and it highlights the detail needed to enable companies to self-verify and enforce a legal prohibition. At a minimum, law enforcement needs to know, and be empowered to communicate, what it is looking for.
Implementation: An Under-Utilized Enforcement Resource.
This example also illustrates a significant enforcement mechanism that is often overlooked in the legislative drafting process but that should be considered and accounted for.
The majority of US-based companies (and companies with a US legal presence) that host third-party content intend to comply with criminal laws and will put mechanisms in place to self-monitor and comply, provided they have the information needed to do so. Collectively, they can be a powerful enforcement mechanism in their own right.
To engage this force, legislative language and accompanying guidelines must account for what businesses need in order to self-verify and comply. The earlier price-gouging example demonstrates a law that a company can’t self-enforce. In contrast, a law that clearly states, for example, that you cannot sell opioids, defined as X, can be executed by companies. A website can self-monitor for it. A hosting company (which provides hosting services for websites) can self-monitor across the websites that use its services. Consumers and agency watchdogs can identify it and report it through established company abuse-reporting channels. Enforcement capability multiplies because there are now adequately informed parties at every level.
Not every law is amenable to this kind of crisp evaluation. But that’s no excuse. Laws must be written with consideration for how companies can self-monitor to comply.
Section 230 is a piece of legislation that has fueled online growth and enabled free speech on the internet. For twenty-five years it has remained as originally written. There are issues, sure. But they are all issues that can and must be addressed, because repealing Section 230 cannot be an option. Using what we’ve learned so far, let’s look at how to address the issues associated with Section 230.
Jenn Suarez, CEO