Check Out Our YouTube Channel!
For privacy tips, information about the HOTTEST legislative and technological trends and other great tips, check out our Privacy Practice YouTube channel.
Hi Everyone -
Privacy has EXPLODED!! This now covers cyber-security, US national security and everything in between. We understand that this complex, ever-changing landscape is tough to keep up with.
To help, we've launched a privacy-dedicated YouTube channel.
Here's our first video. https://www.youtube.com/watch?v=1BBxT4NR4TU
More to come. AND - drop us a line with questions/topics you'd like us to address at firstname.lastname@example.org.
SECTION 230: WHAT YOU NEED TO KNOW
Part Three, Conclusion: 230 Reform –
What You Could Lose, What You Could Gain: Solutions
We can do this.
We have a framework to use for building solutions. Now, let’s roll up our sleeves and go through the problems identified.
Content Removal. Here, we are looking at content that a company must evaluate to decide whether it should be removed.
-Prohibited Content. Remember this is content that is prohibited by federal criminal laws. The main problem here is that content goes undetected and/or takes too long to identify and remove. A major cause of this problem is poorly written laws and/or laws that do not account for how a company would verify that the content is prohibited. In other words, the problem is with the language and implementation considerations.
In order to prevent online harms effectively, it's imperative that lawmakers define violations in terms as concrete as possible – this is essential to ensure easy identification and enforcement. Consider what an online company would need to allow it to self-screen. Make sure the language used is unambiguous. Consider whether the Prohibited Content is a type that, with appropriate language, could be identified via a technological solution. Where algorithmic screening/blocking is not possible, provide enough detail for an online company to self-assess manually. Empower frontline federal, state and local law enforcement agents to tell online companies clearly when identified content violates criminal laws, and give them the authority to direct online companies on what they must do.
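To make the "technological solution" point concrete, here is a minimal, purely hypothetical sketch of how concrete statutory language enables automated self-screening while vague language forces manual review. The rule name, the pattern and the review logic are all invented for illustration; nothing here reflects an actual statute or any platform's real system.

```python
import re

# Hypothetical machine-checkable criteria. Well-drafted statutory language
# lets a company translate a violation into concrete patterns like this one;
# both the rule name and the pattern are invented for illustration.
CONCRETE_RULES = {
    "posts_home_phone_number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pre_screen(post_text: str) -> dict:
    """Flag posts that trip a concrete rule; anything a machine cannot
    decide falls through to manual (human) review."""
    hits = [name for name, pattern in CONCRETE_RULES.items()
            if pattern.search(post_text)]
    return {
        "auto_flagged": hits,              # identifiable algorithmically
        "needs_manual_review": not hits,   # ambiguous cases need humans
    }
```

The point is not the pattern itself but the division of labor: the more concretely a law defines a violation, the more of the screening burden can shift from human reviewers to code.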
-Illegal Speech. This is content governed by civil law and does not include intellectual property. Section 230's primary goal is to grant an online company immunity from civil liability for content posted by a third party. A victim's rights remain intact against the person who posted the content; the online company, however, is treated as a neutral party.
The main problem with Illegal Speech is the complicated and, sometimes, herculean process a victim must go through to get online content removed. The cause of the problem is that civil laws have not been updated to provide a process by which a victim can efficiently and effectively protect themselves when the violation is online. It’s the lack of process that fails them (i.e. an implementation breakdown).
I know this firsthand. Let me describe how it works and what needs to happen. Civil law violations are fact-based determinations best made by a court of law. Indeed, a victim must go to court to obtain a legal adjudication on whether online content creates civil liability. The potential victim would sue the party who made the statement (for our purposes, that's the third party that posted it) and seek a ruling that the content is illegal. At the same time, the third party who posted the content seeks to defend themselves. Adjudication through a court affords each party due process before a neutral expert (the court) on the law related to the civil claim. This is the way it worked prior to online growth and the way it still works. But the adjudication process has not evolved to account for the complexities that can occur with online Illegal Speech, which is the underlying cause of the problem.
Let me illustrate what can happen using an example. Assume that someone posted online a lie about you stating that you are a cheater, pedophile or embezzler and they include your picture/home address in their post on a very popular website (like Facebook, Twitter, etc.). You likely have civil law rights that you can enforce against this person who posted this content as it may constitute Illegal Speech. But, your first goal will almost certainly be to get the false content down before it creates havoc in your life.
Today, at a basic level, this requires you to get a judgement against a person determining that the specific content is Illegal Speech. Then, you'd need a court order accompanying the judgement that directs the online company to remove it.
Sound straightforward? Actually, here is where it all falls apart.
Jurisdiction Challenges. Navigating this process with online content requires a strategic understanding of the online world and of what companies will need to ensure they are removing the right content. First, you need to understand where the parties are located. Parties to consider include both the defendant (who posted the content) and the online company where the content resides. Many of these civil laws, such as defamation and slander, are state-based laws. So, you also need to determine whether the state has jurisdiction over the person who posted the content. If the person resides outside of the state, will the court allow you to name them as a defendant? Some state laws have a "long arm" clause written into them that will grant a court jurisdiction over someone who doesn't live there or do business there. These clauses have been increasingly applied to online posts that constitute Illegal Speech.
Jurisdiction questions help you determine whether you can obtain a judgement against the person who posted the content, which you must have if you seek to force the removal of content. But the judgement is only the first step. You also need to determine whether, if you win the case, the online company will comply with a court order to remove the online content. One factor is whether the online company is US based and, thus, will comply with a US court order.
Naming the right defendant. Let’s assume that the company is subject to US laws and there are no state issues. When you file your lawsuit, you need to name a defendant in your lawsuit and that needs to be the person who posted. If a name accompanies the post, you might choose to use it. When it’s unknown, some plaintiffs name who they think it is or they name “John Doe” and reference the posted content. The risk at this stage is naming the wrong defendant. For instance, the name with the post is Trey Smith (or maybe you think it’s Trey Smith). If you get a judgement / court order against a person named Trey Smith specifically, this needs to match the name in the online company’s records. A court order requiring the removal of content posted specifically by Trey Smith will not be executed if the poster is not named Trey Smith.
Defects in judgement / court order. Let’s assume that you’ve named the defendant sufficiently. The judgement and related court order need to sufficiently describe the content to be removed. Let’s assume that it contains examples you provided of the specific content as it’s posted online and that you’ve named the website where it’s located. Let’s also assume that the online company will comply with any US based court order. If the specific content noted in the court order remains online in the same way, you are going to be successful getting the company to remove the false statements about you. In reality, though, it’s usually not this easy.
What happens if the content was changed and doesn’t clearly match what you have in your judgement? What if the content is similar but now under someone else’s name (i.e. the defendant had another party post it and they removed their post)? What if the content has now been put on other websites and you need to get it removed from those as well? Any of these scenarios create the risk that you may be required to have the court amend your judgement or go back to court anew.
These current processes exist by default. By necessity, they were cobbled together as online companies tried to create a fair way to deal with Illegal Speech. The laws are valid. The right party is being held accountable. However, jurisdiction issues and the enforcement processes have not changed with the evolution of how online harms now occur.
For Illegal Speech, and the problems of protection and enforcement in the online world, I propose the following as components to a possible solution.
State level. As mentioned earlier, all laws must be on the table. At the state level, states could create jurisdictional exceptions for online harms caused by content posted by third parties on websites protected by Section 230. These exceptions could provide extra-territorial reach for the sole purpose of adjudicating and removing the third-party content from online. While they could include damages and monetary awards, they wouldn't have to, thereby significantly reducing liability risk in exchange for jurisdiction.
Federal level. Here is a perfect place for a Section 230 amendment. An amendment could provide that any company who enjoys Section 230 protection would be required to accept a state judgement regarding civil harms for content posted on their site and to remove the content. In addition, a set of guidelines can be published that helps guide victims with language to use in judgements and court orders. Thus, as part of their immunity, online companies provide guidance and reduce roadblocks for those who are seeking to enforce a civil judgement against a third party posting on their site.
Another potential Section 230 amendment could require content arbitration clauses for anyone who wants to post on the online company's website. In other words, if you want to post on ABC's website, you agree to submit to US-based arbitration over your content if a third party claims their rights under civil laws have been violated. The arbitration panel could determine whether the content in question constitutes Illegal Speech under the applicable state law, and its authority could be limited to ordering that the specified content be removed from the online company's website. It could not award monetary damages, and the losing party could appeal to a federal court.
Content Removal + Censor/Removal Restrictions. The next group is a collection of New Harms. It includes situations where people believe there is a responsibility to remove content, as well as situations where people believe a Section 230-protected company may be restricted in what and how they remove, or subject to guidelines for removal.
-New Harms. This is online content where there is general agreement that it creates harm but where there are no laws that clearly apply to them. Consider again: hate speech, content inciting violence, publication of your personal information without your permission (like your phone number, driver’s license), false news and claims (think COVID cures), manipulation of American people in governing of the US (for example: election manipulation by foreign entities or known lies that are perpetuated and which impact the US broadly). Of course, these are not exhaustive.
In all cases, legislative guidance is required. Companies protected by Section 230 should not be held responsible for situations where our elected leaders have not legislated. Nor should their protection under Section 230 be threatened.
Below are the questions you and elected leaders should be asking, and things elected leaders should consider.
Situations Focused On Content Removal.
Questions to ask for content sought to be removed.
Do any of the New Harms fit under existing federal or state law? For example, in Part Two of this blog, I shared the example of Maatje Benassi, a US Army reservist who was wrongly accused of being "patient zero" for COVID by someone who knew it was not true but continued to push this theory across the internet, including her personal information. Clearly, she was being harassed. Clearly, the behavior (especially the sharing of her home address) was meant to encourage, and did encourage, stalking.
Section 230(b)(3) states that it is meant to ensure “vigorous enforcement” of federal laws to deter and punish this type of online behavior. Are the federal laws written in a way to cover this obvious situation? If so, are they communicated to the public effectively? Are there guidelines that could be created and distributed?
Can any of these laws be amended to include publication of personal information that is intended to or where the posting of it actually results in stalking or harassment (crimes specifically noted in Section 230)? Do state privacy laws prohibit the publication of personal information without permission and can they be invoked? If so, how do we educate consumers and companies protected by Section 230?
Content sought to be removed that may bump up against the First Amendment's protection of free speech.
Remember that the First Amendment's free speech protections do not apply between you and another private party, like a third-party individual making online posts or an online company. However, when you ask the government to create laws that put limits on speech, the First Amendment DOES apply. Hate speech, content that incites violence, content that organizes violence – all bump up against and must be considered in relation to First Amendment rights.
Though the First Amendment right to free speech is not absolute, Supreme Court decisions on First Amendment protection grant broad rights regarding what someone can say. In Brandenburg v. Ohio (1969), the Supreme Court replaced the older "clear and present danger" rule with the "imminent lawless action" test, holding that the government (or states, in this particular case) could only restrict speech that "is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action." This case was decided in 1969, well before the internet. Does it apply to government regulation of online content? Can online content that incites violence or content that organizes violence find a home in this language? What standards would the online content have to meet in order to rise to the level of being prohibited and proactively removed (vs. evidence of the crime after the fact)?
Misinformation – where does it fit? During COVID, troublesome online posts included misinformation touting “cures” that were lethal. The Task Force I led monitored and reviewed this type of content, but there were no clear laws prohibiting these posts nor did federally authorized websites exist that provided timely information on what was legitimate vs. harmful.
Does it matter if someone really believes the information vs. where they know it’s false and are intentionally posting online? If someone believes the information, does it then become free speech where the government is limited in how it can prohibit it? If it is a health risk to the public, are there other laws that may apply where health misinformation is prohibited where it carries a risk to life? What would that content need to look like and/or do to meet this standard? Does it make a difference if the harm is great vs small? Physical vs. monetary? Where the harm can be calculated vs. non-economic harm?
Political misinformation. Take, for example, the 2020 presidential election where the incumbent president lost the election and then claimed it was stolen and that there was massive fraud. This misinformation was widely distributed on social media platforms owned by companies who have the protection of Section 230. Most of these companies seemed unsure of their responsibility as to this content. It was not until the January 6th riot on Capitol Hill that many social media companies decided that they should remove the online content due to the significant risk to public safety, and it included the determination that the content posted by the former president amounted to incitement of violence.
No laws exist that can guide these companies on what they should or shouldn’t do. The action they took was based on their own Terms of Service. Was this too little, too late? What about others who continued to post the misinformation that led to the president’s ban? Should similar content posted by others be removed?
Political misinformation can be created and disseminated by anyone. It doesn't always lead to harm. However, when the creators and disseminators are people in a position of power and widespread influence, the risk, level and likelihood of harm are exponentially greater. I want to briefly address it here through a series of questions for you to consider, using the misinformation campaign claiming 2020 presidential election fraud. Is it OK for elected officials to tell the American public that the election was stolen? What if over 60 courts dismissed such claims because there was no evidence? What if the President's own Attorney General issued a statement that the investigation into election fraud/theft found no evidence? Can public figures still repeat these repeatedly debunked and unfounded statements? Do federal representatives have a duty to base statements on facts? Or, at least, be required to have credible evidence supporting statements that have been repeatedly debunked and dismissed?
Elected officials, at all levels, have claimed that they too have a blanket right to free speech. But do they? Should they? What about when it is clear, even if they won't say so publicly, that the statements aren't true and they know it? What if the motivation to repeat the false statements is purely their own political gain? What about when those statements impact our democracy? What about when they divide the country and/or mislead a large portion of our population?
Shouldn’t elected officials, due to their platform, significant ability to influence, and duty of office, be held to a higher standard when they are speaking from their professional role and using the news platform given because of that role? Where an allegation or statement has been repeatedly dismissed by neutral authorities (such as courts of law), shouldn’t elected officials be required to have some basis of credible fact for the things they say? Don’t Americans have a right to this? Don’t their constituents deserve this? Don’t you believe it’s their responsibility to you?
Online company responsibility with misinformation. There has been a large outcry that an online company protected by Section 230 must have at least some responsibility to monitor and deal with harmful content such as misinformation. Compounding this concern is, as the world has learned over the past year, that some social media platforms, such as Facebook, may have algorithms that are designed to identify controversial/contentious content and distribute it more widely, as it leads to more “clicks” and to a user staying on their site longer. In other words, they may be built to foster and promulgate contentious misinformation and have integrated revenue (such as advertising) to profit from it.
Is it fair to require a company to put content governance mechanisms in place when elected officials haven't even passed laws to regulate it? On one hand, the answer must be "no," and therefore Section 230 immunity shouldn't be at risk. On the other hand, New Harms will originate in online places such as social media. These companies are often the first to learn about New Harms. So, shouldn't that at least create a responsibility to identify New Harms and put governance mechanisms in place? It would seem only fair to say "yes." But, then, does this require putting their Section 230 immunity at risk, especially given what Americans stand to lose if companies don't have Section 230 protections? Or, is there another effective deterrent or motivator to achieve the level of responsibility that is fair to expect from these companies?
Situations Focused On Claims of Inappropriate Censorship / Content Removal.
Last, and significant, are claims that large social media companies who enjoy Section 230 protections have censored online third-party content inappropriately. For example, some Republicans have stated that they believe Republican political views are unfairly censored by large tech companies like Facebook. Their position is the following: online companies that enjoy the liability protections of Section 230 should not be able to suppress or censor targeted political online content when that company is a major distributor and a well-known source where people get their news. They argue that companies of the size and influence of Facebook, Twitter, etc., hold enormous power to impact and sway public opinion, and that their censorship of published content is not a neutral act. Therefore, these online companies are different from run-of-the-mill Section 230 companies. Their size and influence should be factored into their obligations for Section 230 immunity, without which they could not exist. The focus here seems to be on companies that reach a certain size and influence and arguably got there because of Section 230.
Next, let's understand some of the laws that can come into play. As always with government regulation of content, let's start with the First Amendment and how it applies. Under the First Amendment, the government cannot restrict your right to speak freely. However, these critics' goal is not to restrict your speech. Their claim is that online companies who benefit from Section 230 (and which have reached a significant size and level of influence) can't restrict your speech. Yet, there are no First Amendment rights between you and a private party. You cannot claim a free speech right that obligates these online companies to post your speech on their platform. And Section 230, on its face, appears intended to provide protection for online companies that self-regulate (i.e. censor) content. In addition, there is a First Amendment doctrine called the "compelled speech" doctrine that prohibits the government from requiring or compelling speech in certain circumstances, and it arguably applies here. We are in new territory.
Here are a few questions to consider. If a company, by federal law, is granted immunity from civil liabilities, could/should that immunity be conditioned on meeting reasonable responsibilities, even when content/speech is involved? Does the First Amendment limit what kind of conditions the government can require, if such conditions directed Section 230 companies not to publish certain third-party content or, alternatively, required them to publish certain third-party content? Shouldn't the law that built their success speak to responsibilities for earning and/or keeping it? Is there some point where Section 230 companies reach a size and level of influence where they should have public responsibilities, such as when they reach the size of a Facebook? Once they have reached a level of financial success, public engagement and influence, should they have additional responsibilities in order to remain protected?
The Path Forward.
Section 230 cannot be repealed. At the same time, it cannot remain as it is today. Equally, not everything should be laid at the feet of Section 230. Nor can all the problems be blamed on Section 230.
The path forward requires looking at the issues you want to solve and the network of laws that relate to them, which may include Section 230. Addressing issues raised with respect to Prohibited Content and Illegal Speech requires thoughtful, informed deliberation regarding the body of laws that surround an issue and the breakdowns that cause the problem. It requires identifying which laws require amendments and careful crafting of such change. Lastly, it may require some additional regulations that, for example, help civil harms from online content be dealt with efficiently.
The path forward also includes responsible consideration of a body of New Harms. Indeed, the harms in this category are where the most noise is made, the most soundbites created. The significance of these harms deserves more than media soundbites. Moreover, you should not want these important decisions determined by online companies who are protected by Section 230. Just because these New Harms happen on their websites doesn't mean that they are the appropriate party to make such decisions for our nation. They are not. They are the conduit.
Resolving the important issues raised by New Harms requires acknowledging that there are no laws that address these situations and that there should be. Successful resolution requires careful research, public engagement and a cautious balance of competing rights. It also requires that this, in turn, results in responsible, bipartisan legislation specific to the harms. Then, as with legislation guiding Prohibited Content and Illegal Speech, the content itself is governed. Section 230 companies play a role; but their role is to follow legislation established by parties elected to represent the breadth of America.
Section 230: what you need to know
Part Three, Introduction: 230 Reform –
What You Could Lose, What You Could Gain: The Solution Framework
Let’s start with a recap.
Part One shared that Section 230 is meant to (emphasis added):
Problem areas identified in Part Two are:
How Do We Fix This?
Repeal Section 230?
At the outset, I want to put on (and then remove from) the table the idea that Section 230 should be repealed. It’s simply not possible. Well, ok sure, it’s possible but completely disastrous and unworkable. It would stink for all of us.
Here’s a glimpse.
Without protection from civil liability (which is what Section 230 provides), Facebook, Twitter and the like would still exist. There would still be social media companies. Removal of Section 230 won't ensure a change in behavior, but it likely would ensure that they'd be forced to leave the US. En masse. They'd have no choice.
Online websites that are designed to enable people to engage and express themselves on the internet are here to stay. Consumer demand for websites that provide engagement is increasing. Moreover, no one cares or bases their use of such a website on whether the company is based in the US. (For a great example – think TikTok.) In other words, when an online company matches consumer demand, company location is not a deciding factor, at least today.
Repealing Section 230 would have immediate effects. Without it, existing US companies that allow third-party content would have to move outside of the US to continue to exist. And it would hamper the innovation of new online companies like these in the US. Their liability would be too great. The exit of entrepreneurism, economic growth and jobs related to the internet would follow, because most of what's online will certainly continue to have some interactive component.
The exit of these companies would create larger problems for those of us in the United States. Let’s start with civil liabilities. You would have practically zero ability to enforce your rights with respect to content that violates US civil law in such a changed online world. Why? Because, assuming you have a successful judgement that determines content online about you is Illegal Speech, you still need a company to agree to remove content based on this judgement. Now, these companies would be outside the US and not subject to US law. Your other alternative would be to sue these non-US companies in order to force them to remove content. Even if you have the money and ability to sue a company outside the US, let’s say, in Ireland – you would have to find and hire local counsel to file your lawsuit in Ireland, the right you are seeking to enforce would have to exist under Ireland’s law and Ireland’s law would have to allow someone like you (who didn’t live there) to take advantage of it. Your ability to enforce your US based civil law rights in another country doesn’t exist.
It's even less promising for the enforcement of US criminal laws against companies that exist abroad. While criminal laws remain intact under Section 230, they are enforceable only against companies that exist in the US and/or are subject to US jurisdiction. These laws don't apply outside of the US. A majority of third-party content and social media companies are based in the US and subject to US law, which provides you protection under US criminal law. They are in the US for a lot of reasons that absolutely include Section 230. But if they had to leave the US, know that a company without a US legal presence could allow a third party to post content about you that is otherwise criminal under US law. And there is likely nothing you could do about it. Thus, a repeal would not only torch our civil rights, it would also gut the criminal protections we rely on, because we would have driven the largest existing tech companies, and future ones, out of the US.
If these reasons don’t have you convinced, there is another one that I believe may eclipse them all. Section 230 truly is the reason why you have free speech online. Without it, minority voices and opinions have limited online avenues for expression. For clarity, “minority” in this context is not the color of your skin, your religion or your ethnic background. It is much broader than that. It is your belief, point of view about anything happening in your life, community, country or the world that may run counter to that of a larger group. Your ability to share these things and, as a bonus, find and discuss online with likeminded people would be virtually non-existent.
What happens when you want to call out injustices to you, your community or other injustices that you believe are happening? No one would allow it on their website. You could have a website of your own; but single websites lack the collective distribution power of larger, established ones. Some very established Section 230 online companies may even serve to bring free speech, and liberation, to other countries. But, leaving those outside of the United States aside, I propose that, without Section 230, neither you, nor I, nor our country would have learned about George Floyd and other victims who put faces, names and unforgettable details to the ongoing abuse against Black Americans. You probably would not find online forums that would have allowed you to express your belief that the 2020 election was stolen.
Whatever your beliefs, a single defining and uniting principle is at stake. It is that, in the United States, you enjoy the constitutionally protected human right, guaranteed by our government, to speak freely. This core freedom is also embraced (with the help of Section 230) by online companies that want to protect your ability to speak your views and beliefs. You and I will likely never convince a newspaper to publish our opinions/beliefs, however newsworthy we think they are, if they can't be fact-checked and proven. And we probably couldn't build a powerful community of like-minded people with the same views by word of mouth. The internet is the equalizing channel: an avenue for your opinions and an opportunity to build a like-minded community. The viability of this avenue is tied to Section 230.
In sum, the losses you stack up if Section 230 is repealed are: the loss of the ability to enforce your US civil and criminal law rights, the loss of the ability to truly speak your mind, and the loss of the economic growth and jobs related to online companies.
Everything you want to protect requires that companies have to be able to exist in the US which, in turn, requires Section 230.
For the US to retain the vast advantages of economic growth on the internet, interactive companies have to be able to exist here. To be able to protect yourself against US-legislated civil harms regarding online content, the company where the harm occurs must be able to exist here. To ensure US criminal laws count, the companies that you are looking to regulate must be able to exist here. To empower you to exercise your constitutionally protected right to free speech – your ability to express your views (no matter how unpopular or how different from what others believe) and to connect with others to discuss them – the companies most able to provide this to you are the companies able to exist here. If you value any of these four, you have to agree that we cannot repeal Section 230. Instead, the smarter, easier and most effective thing to do is to fix it.
Components of the Solution.
We’ve identified the majority of the issues creating the problems attributed to Section 230. To address them comprehensively, solutions must include three things that are at the heart of success. They are:
1) All appropriate laws must be on the table. Section 230 doesn’t stand alone. It is part of a broader system of legislation. So, which law is the appropriate one to amend? Not everything can or should be addressed by amending Section 230. For example, if the harm you want to fix is the unauthorized online publication of your personal information, like your driver’s license or home address, a federal or state privacy law is possibly the more appropriate place because privacy laws address the use of personal information. Similarly, where any other federal law interacts with Section 230, it may be more appropriate to amend the other federal law to address how it interacts with Section 230 plus any other laws.
2) Language, language, language. Care must be taken with the language used for any amendments, Section 230 included, to make sure there is clarity on the change and how it interacts with other laws. It also should be written to ensure it protects the values you seek to protect and it encourages the behavior you seek to encourage.
3) Implementation components, such as verification and enforcement guidelines, are where the law becomes real. It is imperative that, when drafting the language of any legislation, you account for how compliance will be verified and enforced, because this is where legislation is prone to fail.
Let me share an example. In March 2020, when COVID struck the US, I created and led a global COVID Task Force for an online, US-based client. Our goal was to preemptively identify and remove COVID content that was abusive. In an integrated, global effort, I worked directly with the heads of US federal law enforcement and global government agencies.
On March 23, 2020, Executive Order 13910 was signed by the president, banning hoarding and price gouging. The Task Force moved immediately to add the terms of the ban to the monitoring mechanisms we had put in place. For our purposes, the content likely to show up online in violation of this EO would be listings selling items like personal protective equipment (PPE) at prices that amounted to “gouging.” Implementing and complying with the EO therefore required being able to determine whether an online price qualified as “gouging.” But the EO didn’t say what would qualify as “price gouging.” The most it said was “prices in excess of prevailing market prices.” No one, at that time, knew what prevailing market prices were for basic things, like toilet paper, because you could no longer get it in your grocery store or in most places on the internet. PPE was worse.
The Task Force worked directly with a leading manufacturer of PPE to combat abuse where they were being impersonated. Even though this company shared their retail prices with us, those prices could no longer be used to assess whether an online price for their products was “gouging” because of all the new intermediaries who were part of the distribution process. These new parties in the distribution process absolutely increased the final retail prices, but this didn’t mean that any of the parties in the distribution chain was actually “gouging” prices.
For the record, I’m not picking on this EO – it was the right thing to do and everyone did their best to work with it. It is, though, a perfect example to illustrate the significance of language, and it highlights the details needed to enable companies to self-verify and enforce a legal prohibition. At a minimum, law enforcement needs to know and be empowered to communicate what they are looking for.
Implementation: An Under-Utilized Enforcement Resource.
This example also illustrates a significant enforcement mechanism that is often overlooked in the legislative drafting process but that should be considered and accounted for.
The majority of US based companies (and companies with a US legal presence) that host third-party content intend to comply with criminal laws and will put in place ways to self-monitor and comply, provided they have the information needed to do so. Collectively, they can be a powerful mechanism of enforcement in their own right.
To engage this force, legislative language and accompanying guidelines must understand what businesses need in order to self-verify and comply. The earlier example on price gouging demonstrates a law that a company can’t self-enforce. In contrast, a law that is clear in stating, for example, that you cannot sell opioids which are defined as X, can be executed by companies. A website can self-monitor for it. A hosting company (who provides hosting services for a website) can self-monitor for it across websites that use their services. Consumers and agency watchdogs can identify it and report it using established company abuse reporting channels. Enforcement capability multiplies because there are now adequately informed parties at every level.
Not every law is amenable to this kind of crisp evaluation. But, that’s not an excuse. Laws must be written to consider how companies can self-monitor to comply.
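To make the contrast concrete, here is a minimal sketch (hypothetical terms and listings, not any real company's screening system) of why a concretely defined prohibition can be self-screened algorithmically, while a vague standard like "prices in excess of prevailing market prices" cannot:

```python
# Hypothetical illustration: a law that enumerates prohibited substances can be
# screened with a simple term match; a "gouging" standard cannot, because it
# depends on market data the company doesn't have.

PROHIBITED_TERMS = {"oxycodone", "fentanyl"}  # assumed: statute names the substances

def flag_listing(text: str) -> bool:
    """Flag a listing if it mentions a concretely prohibited term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & PROHIBITED_TERMS)

print(flag_listing("Buy oxycodone online, no prescription"))  # flagged: term is enumerable
print(flag_listing("N95 masks, $45 each"))                    # not flagged: whether $45 is
                                                              # "gouging" isn't decidable here
```

Real screening is far more sophisticated (and still involves human review), but the point stands: the clearer the statutory definition, the more of the enforcement burden companies can shoulder themselves.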
Section 230 is a piece of legislation that has fueled online growth and enabled free speech on the internet. For twenty-five years it has remained as originally written. There are issues, sure. But all of them can and must be addressed, because repealing Section 230 cannot be an option. Using what we’ve learned so far, let’s look at how to address the issues associated with Section 230.
Section 230: What you need to know
Part Two: Why is Section 230 under attack?
The reasons why Section 230 is under attack all deal with Section 230’s legal permission to moderate – or not moderate – content on platforms without legal liability. Hence, they deal mainly with social media type companies and content posted by third parties (i.e. people who use the platform and aren’t part of that company).
Here’s a closer look at the complaints.
If you watch the news, you may have heard two main arguments that seem to fall along political party lines. I’ll describe them below according to the party each tends to be associated with, though I think the fundamental arguments could be made by either party. Lastly, I’m adding a group of additional complaints that I’m putting into a third category, one I believe is equally, arguably even more, significant and yet often remains under the radar.
Republicans: Republicans assert that conservative political views (whether expressed by a member of Congress or an ordinary person) are censored by these companies. Meaning, they are not as widely distributed across the platforms as more liberal views are. And that, especially during the Trump presidency, politically conservative posts were more likely to be assessed [by the platform or a platform’s “fact checker”] as false, inflammatory or containing misleading statements/misinformation, and then taken down, blocked or labeled as such. They believe that if you want to enjoy federal immunity for third-party online content, you shouldn’t be able to target and ban only certain viewpoints. Rather, moderation should be applied equally across the board.
Democrats: Democrats claim that these companies allow content to remain on their platforms that should be removed. They are upset, for example, that these platforms host misinformation campaigns and/or frauds perpetrated on the public (think fake COVID tests or interference in government elections), and that the companies have done little to identify and remove this type of content. They believe that companies have a responsibility to find and remove some types of particularly harmful content and, if they don’t, they shouldn’t have Section 230 immunity.
Third Category - Prohibited Content, Illegal Speech + New Harms: The third category is a collection of complaints that cover content prohibited by criminal law or civil law. It also includes content that has created new problems and doesn’t fit neatly into either criminal or civil law regulations. I’m dividing them into my own assigned categories to make it easier to understand. First, this category includes “Prohibited Content,” which I’d define as content that is prohibited under federal/state laws and carries criminal penalties. This type of content can range from illegally selling drugs over the internet to misrepresenting that you are a financial broker. Each is a highly regulated activity governed by federal (and sometimes state) law. If you violate the rules, you may be fined or even go to jail.
Section 230 does not grant immunity for criminal violations. It specifically exempts activity prohibited by federal criminal statutes and references preservation of similar state laws.
Next is Illegal Speech, which is speech prohibited by civil law. Unlike Prohibited Content, Illegal Speech has to do with the rights between two individuals and/or entities, such as slander and defamation. Company A’s competitor tries to hurt their business by making untrue statements about them online. Or Jane Barnes has someone post personal pictures of her online with damaging, untrue statements. Both Company A and Jane Barnes have a civil right to pursue the other party for speech prohibited and considered illegal under civil law.
Section 230 provides immunity for almost all forms of Illegal Speech, with the exception of intellectual property violations: it does not grant immunity for claims such as copyright or trademark infringement.
Last is a group of content that I’m calling New Harms. These are harms unique to the internet. It’s not clear whether they should be a criminal offense, carry civil liability or both. There are few to no existing laws that address them. Or, if there are laws, it’s unclear how to apply them to this new set of facts. Most cases have come to light over the last handful of years. Take, for instance, fake conspiracy theories, by which I mean conspiracy theories where the perpetrator knew what they were saying was not true. An example of the harm this can cause is the case of Maatje Benassi, a US Army reservist who was wrongly accused of being “patient zero” for COVID. She was not patient zero. Her and her husband’s personal information (such as their home address) was widely shared, the content was not taken down, and their lives were repeatedly threatened and forever upended as a result. This is clearly wrong. But who is liable, and under what laws?
As you can see, Section 230 doesn’t stand alone. It is part of a bigger system of regulations.
Let’s review how they intersect with Section 230.
First Amendment. It’s important to know that there is no First Amendment right of free speech that is enforceable against a private company. The First Amendment only protects you from having the government interfere with your free speech. The government can’t do it. But you can’t assert this right against a Twitter or Facebook.
Even without Section 230, companies have a right to determine what kind of content they want to promote on their site so long as they don’t violate other laws. In fact, arguably, they have a First Amendment right to curate their website with the content/views they want without government interference.
When the complaint is that a private party blocked or censored viewpoints online, the First Amendment is relevant only in the sense that it doesn’t apply. We often assume it might apply in these situations because freedom of speech is a deeply cherished American ideal. This doesn’t mean the argument put forth by Republicans lacks merit; I’m going to address it in Part Three.
Important Aside: For the remainder of this post, I’m going to focus on content that is not moderated or taken down. I want to explain the breadth of what it includes and provide an understanding of the complexities that exist in the interplay of laws with Section 230.
I am also going to narrow the types of companies (also known as service providers) that I’m using for examples. The easy ones to think about are Facebook and Twitter-types, i.e. social media platforms. But there are other service providers who operate on the periphery of Section 230 and are part of the ecosystem of the internet. Some of these “other” providers are stuck trying to navigate the complex system I’m going to describe. This includes hosting providers, who host websites that may be protected by Section 230 and who, because they provide hosting services, must regularly make decisions about whether they should (or must) require a hosting customer to take content off their website. I also argue that, by necessity, these “other” providers may include (or we should at least consider) online ads (from ad networks), SMS/texts (services from mobile providers) and possibly email services, inasmuch as they are often part of the overall abuse being perpetrated. This last group is significant because content on their services has increasingly become an integral part of the picture that must be considered when evaluating online content.
This post will primarily consider companies which allow third parties to post content and for whom Section 230 directly applies. And, it will lightly consider hosting companies, a party that is regularly in the crosshairs of having to make decisions regarding content posted on their customers’ websites.
Illegal Speech. Section 230 doesn’t change the legality of Illegal Speech. Illegal speech is still illegal. If someone is defaming or slandering you – your right to pursue a legal action against that person for defamation or slander still exists. Or, if you make a slanderous comment about your brother-in-law or ex-boss, you remain liable to him or her.
Here is what changes with Illegal Speech under Section 230. The distributor (or redistributor) of content (i.e. Twitter, Facebook, Instagram) is no longer liable for your slanderous statements or Illegal Speech. These are considered neutral forums or vehicles that we can use to post our speech because Section 230 gives them immunity. Since Section 230 provides immunity to these online companies, this means they don’t have to review or moderate everything that is posted or shared using their services. The person who remains liable is you.
As mentioned earlier, Section 230 does not give immunity for intellectual property (IP) violations. What this means in the online world, though, is not always clear, because things like trademark violations often depend on many factors. There is one body of law, though, that specifically governs IP violations on the internet, and that is for copyright. Copyright is an intellectual property right that covers literary, artistic, educational, or musical forms of content. The Digital Millennium Copyright Act (DMCA), passed in 1998, was created specifically to provide a mechanism to protect this kind of content on the internet. It lays out guidelines and specific procedures for submitting a “Takedown Request” (a request to take down the copyrighted content) to an online content company, and what that company must do to retain immunity from liability. You’d use the DMCA if someone took your images and put them on their website without your permission, or if you wrote something (whether online or in print) and someone used it online without your permission. It applies narrowly, to copyright alone and only on the internet.
Prohibited Content. Federal criminal laws in existence apply to the internet. Section 230 specifically does not impact federal criminal law nor is it meant to impact state criminal laws. Prohibited Content is prohibited, and you do not have immunity just because someone else posted it.
Here are some of the laws that are often at play for Prohibited Content online.
What happens when Illegal Speech, Prohibited Content or content that represents New Harms is online?
Well, it depends.
Self-Monitoring. Some of the large online tech companies have some basic self-monitoring protocols in place for third party content. This is almost always for Prohibited Content where they could be directly held accountable. Often, these protocols are part of a larger process that includes the intake of external complaints, which is where the majority of questionable content is identified.
External Complaints. How these are handled depends on the type of complaint.
Some violations are easy to identify, and action can be taken swiftly, as with images of child exploitation. Many, though, aren’t as obvious, because the violation may depend on facts that have to be verified, and companies either lack the means to verify them or the rules for what constitutes Prohibited Content are not clear. Thus, processes vary for handling complaints that claim content is Prohibited Content when the violation isn’t obvious. This results in confusion and frustration. An everyday person submits a complaint regarding content they believe is Prohibited, the company can’t verify it, the content remains, and both are frustrated. Companies receive well-meaning and respectful notices from law enforcement about questionable content, but law enforcement only asks them to investigate and remove the content if it violates laws or their Terms of Service. Companies are rarely directly instructed by law enforcement or governmental agencies that the content in question is Prohibited Content and must therefore be taken down.
Instead, the person/business who is legitimately slandered must pursue claims directly against the party who actually made and posted the statements. They must obtain a judgment ruling the statement slanderous. AND they must also obtain, with that judgment, a court order instructing the internet company to remove the content. Companies usually will not remove content based on a judgment alone. This is not nefarious behavior; rather, it’s usually because a judgment is based on very specific facts that aren’t identical from one situation to the next. So, to make sure they are doing the right thing, companies want a court order specifically instructing them to take it down, so that their actions are under a court’s protection and they remain neutral. This gets complicated if the person who slandered you posts a similar statement on another platform, because you usually need to get a revised court order instructing that company to remove it.
Herein we can begin to see the problem.
Let’s isolate the areas where harm results.
Prohibited Content: harm happens when: 1) Prohibited Content is unreported / undetected and remains online; 2) Prohibited Content is reported but there aren’t clear legal guidelines on how to verify that the content is indeed Prohibited Content and so the content is not removed promptly.
Illegal Speech: remember that these are harms to an individual or entity and are based in civil law. They include fraudulent impersonation of a company and/or false claims about people. Usually it is the individual/entity that suffers the harm.
The party who is claiming the content is Illegal Speech almost always must go to a court of law to determine their rights. Their rights are determined vis-à-vis the person who posted the content and, thereafter, they seek to enforce those rights against an online company to have the content removed.
Resolution is not short or inexpensive. It’s a process that is built by default in the absence of clearer guidelines. From my direct experience, most companies feel ill-equipped to know what to do, so they attempt to create a responsible process to deal with these kinds of complaints that defer to defined legislation and courts of law, who are the appropriate triers of fact and adjudicators of liability. Nonetheless, it’s long and complicated. Moreover, the party who was offended often gets stuck in a never-ending legal battle across multiple online companies because the perpetrator continues to post the content in new places. All the while, the harm continues. We can do better.
New Harms: we know the content causes harm but see no clear regulation on how to deal with it. This includes hate speech, content inciting violence, false news and claims (think COVID cures), publication of my personal information without my permission (like my phone number or driver’s license), and manipulation of the American people in the governance of the US (for example, misrepresentation of identity: claiming to be a US citizen when the poster is in fact a bot or a group of terrorists).
Companies usually handle complaints about this type of content by: 1) evaluating whether they can find a legal basis on which to take down the content. If personal information has been posted without permission, is there a federal, state or international law that applies and prohibits it? 2) evaluating the content against their Terms of Service (ToS) and/or Acceptable Use Policies (AUP). Interestingly, ToS/AUPs (though you may hate how long and detailed they are) are a tool online companies use to “manage the grey area” where laws have not been enacted. For example, many include “incitement of violence” as prohibited content. Some include “hate speech.” Neither is prohibited by law. But if the company’s ToS/AUP says that content inciting violence is prohibited, then they can remove the content on that basis. When neither of these applies, as with a lot of New Harms, they go with 3) instructing the reporter to take the matter to a court of law for adjudication.
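The three steps above amount to a simple escalation ladder. Here is a minimal sketch of that triage flow (the function name and labels are hypothetical; real processes involve human reviewers and far more nuance):

```python
# Hypothetical sketch of the three-step complaint triage described above:
# 1) legal basis, 2) ToS/AUP basis, 3) refer the reporter to a court.

def triage_complaint(violates_law: bool, violates_tos: bool) -> str:
    if violates_law:
        return "remove: legal basis"        # step 1: a statute applies
    if violates_tos:
        return "remove: ToS/AUP basis"      # step 2: the company's own terms apply
    return "refer reporter to a court"      # step 3: no basis to act unilaterally

print(triage_complaint(False, True))   # e.g. incitement of violence, prohibited by ToS
print(triage_complaint(False, False))  # e.g. a New Harm with no applicable law or term
```

The sketch makes the gap visible: New Harms fall through to step 3 precisely because neither a statute nor a contract term reaches them.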
These are all serious issues to solve.
I deal with all of these issues firsthand. I serve as the Data Protection Officer of a large, global domain and hosting provider. I also oversee their Abuse department. I work directly with federal agencies such as the FBI, FTC, FDA and DOJ, plus state Attorneys General, to verify whether questionable content violates criminal laws. I am regularly in the trenches with companies to help them determine the right policies, procedures and/or outcomes needed to comply with the law, protect ideals such as free speech and let them be good citizens of the internet. I’m proud to say that my client, and others like them, have done a darn good job of finding balance and answers where there is no clear guidance.
But we can’t lay the responsibility of solving the above issues at the feet of companies or law enforcement when laws are unclear or don’t exist. Besides being unfair, it will result in inconsistency from one company to another. In truth, it offers no true resolution at all.
As I’ll describe in Part Three, the solution for the way forward is not clear cut or easy, but it is possible. Section 230 should be amended to catch up with the times and evolution of the internet. In doing so, as a nation, we must consider other fundamental human rights that are the bedrock of our democracy, such as due process and free speech. Indeed, Section 230 is considered the “bedrock” of free speech on the internet as aptly dubbed by EFF, “the most important law protecting internet speech.”
As citizens and residents of this great country, we must take the time to be informed and to be diligent in what we consider. We must ensure no one is resorting to fast decisions that make good sound bites for the press. Amending Section 230 must have the same deliberation and care required by any action that could curb (or destroy) our cherished liberty of free speech. If we do it right, we have the opportunity of creating a bedrock of fair, balanced and integrated guidelines for the internet’s next leap forward.
I look forward to discussing how in Part Three.
Section 230: what you need to know
No doubt you’ve heard complaints about Section 230. In this three-part series, my goal is to explain the mission behind Section 230 and the current complaints from our Republican and Democrat representatives as well as everyday citizens, and, lastly, to ensure that we collectively understand the reasons to tread carefully with Section 230 reform and the questions that should be answered to guide its amendment.
Part One: What is Section 230? Why was it created and what does it do?
In Part One, I want to share the “why” behind the creation of Section 230. I also want to share how it evolved to include some additional protections and what it means today.
Section 230 (officially 47 U.S. Code § 230) is an amendment to the Communications Decency Act (CDA) that was passed into law in 1996. The CDA was originally viewed as restricting free speech on the internet. And, indeed, many of its original provisions were struck down on that basis. The exception is Section 230, which remains intact today.
What led to Section 230’s enactment?
At the time Section 230 was under consideration, the internet was in its infancy. The first version of what we know as today’s internet was created in 1989. It had yet to be broadly adopted when it took a couple of major steps in 1995, the year Amazon, Yahoo, eBay and Internet Explorer all launched. Around that time, Congress conducted research that recognized the potential of the internet and based Section 230’s enactment on the following findings [original language, emphasis added]:
1. The rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources.
2. These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
3. The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
4. The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.
5. Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
In other words, Congress proactively recognized the potential of the internet, and the potential of services that are based on it, to benefit Americans. The potential recognized included broad reaching options for educational and informational purposes. Also, significantly, a primary purpose of Section 230 was to provide the means for Americans to have a diversity of political discourse and minority voices. It’s meant to empower the sharing of information and opinions with limited government regulation.
Section 230 has specific goals [original language, emphasis added]:
1. to promote the continued development of the Internet and other interactive computer services and other interactive media;
2. to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
3. to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
4. to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material; and
5. to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
To understand the policy goals, you need to put them in context of the times.
Before 1996, none of the big tech companies were big. In fact, almost none of them existed. The tech leaders who are subject to Section 230 and under fire today were only created after the passage of Section 230. Check it out (company + year created):
In 1996, the internet had not yet stepped into its potential. Frankly, if, like me, you were in the professional sphere at that time, you’ll remember that no one (companies, entrepreneurs or everyday people) knew how to use it.
It’s obvious from the list of successful companies created after Section 230 (and these are just the brand names you’d know) why Section 230 is credited as the basis for the growth and expansion of the internet. Let’s clarify: of the internet. And, as you may recognize from the companies listed, it created the opportunity for US-based entrepreneurial growth on the internet.
What Does Section 230 Do and How Did It Support Internet Expansion?
Prior to Section 230, there weren’t a lot of websites. Amazon and eBay were fledglings focused on online sales. Yahoo and Internet Explorer were innovations whose promise was to help us find stuff on the internet.
None of these were content companies in the sense of having news, information, and/or the sharing of opinions/thoughts by third parties directly onto their website. Companies who wanted to host this type of content, by necessity, generally fell into two camps: 1) they created websites to share content directly about their own company; or 2) they were news organizations that were subject to the standard review process that governs news. Why? Because each company was directly liable for the content on their sites – regardless of who posted it.
Here’s how it would work. Let’s say that I wanted to create a website where consumers could share reviews of their experiences at restaurants. Pre-Section 230, as the owner of the website, I would be personally liable for any reviews posted. Liability could be for statements viewed as slanderous, defaming, illegal use of images… and on and on. I could be liable for damages awarded on any successful claim, and I’d have to pay my own legal costs. Moreover, even if a claim wasn’t successful, it was highly probable that I’d spend unknown attorney fees fending it off. In sum, the costs of potential liability existed whether: 1) the claim was valid and the content legally actionable, or 2) the reviews were accurate but the complainant aggressively used litigation to pressure a response. Either way, I’d incur legal fees to defend my website and might also have to pay damages.
In everyday language – You Get Sued.
You were sued because someone (that’s not you) said something objectionable. You may have had no opportunity to consider or review it. But, because you created a place for voices and posts, you were legally on the hook, regardless of whether you had the tools or information to evaluate the legality of the post. Moreover, there was zero law on the topic that could provide a respite while you reasonably figured it out.
I don’t know about you, but, under those terms, I’d never have created something that allowed third-party content and opened me up to unknown liability. The result, unsurprisingly, was that, like me, no one wanted to allow third-party posts, because they weren’t sure if they were legally exposed: 1) they didn’t know the veracity of the content first-hand; 2) they didn’t know if the content had legal liability implications; 3) they had no way of knowing whether the content would be controversial and simply stoke litigation (regardless of whether the content was legally ok).
So, while the way someone exercised their rights against potentially inappropriate content remained straightforward, this also created a disincentive for free speech and discourse on the internet. There was no space for minority and/or unheard voices, even my own, because the internet is so public and the ability to sue so easy.
Section 230 changed everything.
With the passage of Section 230, freedom of expression and innovation on the internet was given (almost) free rein.
The key language of Section 230 states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." See 47 U.S.C. § 230(c)(1). In a seminal 1997 case, Zeran v. America Online, Inc., a federal appeals court interpreted this language:
“By its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service. Specifically, § 230 precludes courts from entertaining claims that would place a computer service provider in a publisher's role. Thus, lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content — are barred.”
The court also noted that Section 230 was enacted in response to an earlier, pre-Section 230 case that held a service provider liable for third-party postings as if it were the original “publisher” (a.k.a. the party who actually posted). See Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). In Stratton, Prodigy had advertised that it would moderate content posted on its bulletin boards, and Prodigy had a history of actively screening and editing posts. On this basis, the court held Prodigy to the standard of an original publisher. The decision created a disincentive for companies to moderate content posted on their sites. Section 230’s purposes included the goal of removing any disincentive to self-regulate, which the court recognized in Zeran.
Zeran ensured that service providers can permit or deny the posting of third-party content and remain immune from liability. This didn’t change existing laws, nor was Section 230 intended to do so. Service providers are not immune from the consequences of their own posts: if they directly post defamatory statements, they are liable for their own acts. Equally, third parties remain fully liable for the posts that they make.
What’s notable and significant about the Zeran decision is two-fold. First, it seeks to ensure Section 230’s purpose of encouraging self-regulation without unintended consequences. Second, it tightly links liability solely to the person who committed the overt, violative act – i.e. the third party who made the post.
Post-Zeran in 1997, here is how the Section 230 / internet landscape shaped up for service providers.
Internet Service Providers are:
This looks pretty great.
On July 9th, the Court of Justice of the European Union (CJEU) invalidated Privacy Shield, the EU-US agreement that allowed unrestricted transfers of personal data from the EU to the US. Now, companies that used the Privacy Shield as a valid transfer mechanism must rapidly respond to find a new compliance mechanism that fits their business.
Impact: Companies that relied on the Privacy Shield for lawful GDPR data transfers can no longer legally transfer or process such data. To do so is a GDPR violation.
Questions you'll need to answer include:
1. Must I immediately stop the flow of data from EU to my US business/operations?
2. Can I keep and still process the data transferred to my company under the Privacy Shield?
3. Do I still have obligations under the Privacy Shield?
4. What other mechanism for transfer can I use? And, related, what are the implications of putting it in place and are there any operational changes I need to make?
Of course, these questions are just the starting point for aligning your business to comply with the GDPR post-Privacy Shield. Where you go next depends on the answers and your current business structure.
Expedite your post-Privacy Shield solution to GDPR compliance.
We know that lost time can mean lost customers and revenue for any business. We also understand that the urgency of addressing this change varies for each business. Whether the decision has put your business into crisis-management mode or your need is less urgent, we can help you answer these questions and bring your organization into compliance quickly.
GDPR Threat - Are you ready?
Imagine: You are a privacy-focused company. You are trying to do everything right. You receive a threat from a person who claims you violated their GDPR rights and demands you pay them $x. If you don't agree to pay, they threaten to report you to a Data Protection Authority and/or file a personal lawsuit against you in a court based in the EEA.
Do you know how to navigate this? We Do.
Evaluating Complex Factors Is One Of Our Superpowers. We faced this recently with one of our clients, a global, privacy-infused company. They wanted to do the right thing but also wanted to protect themselves from extortion-like behavior and from setting a precedent.
Providing potential solutions requires a unique combination of expertise: knowledge of breach and notification standards under the GDPR, in the relevant Member State, and in all US states where our client does business. It also requires an understanding of the local Data Protection Authority (DPA) in the EEA that would oversee a complaint and its general approach to investigating such reports. It further requires an understanding of the relevant local, non-US litigation requirements and related treaties that affect litigation for US parties should the complainant file a claim for personal damages under the GDPR outside of the US. Lastly, the client needs someone to help them understand not only all these factors but also the risks involved, and to advise them on how to negotiate a just solution.
We know how to deliver the best outcome for everyone. We analyzed all of these factors, educated our client on the available solutions and the associated risks, and helped them navigate their chosen solution with the customer. We also helped them ensure that they lived up to their privacy and customer-support standards.
Are you prepared?
Jenn Suarez, CEO