
President's Promise to Limit 230 Immunity is Underway

Client Alert | 4 min read | 10.23.20

The Administration continues its quest to unilaterally implement dramatic changes to Section 230 of the Communications Decency Act (CDA). We first reported in May 2020 that President Trump issued a sweeping Executive Order that attempted to set the wheels in motion for those changes. We described the Executive Order as “plainly an attempt to require the government’s input in a private decision-making process.” Specifically, the Executive Order asked the Federal Communications Commission (FCC) to engage in rulemaking, an unprecedented move in connection with the CDA, to define what “good faith” means for Section 230 purposes. Well, it seems that time has come.

Last Thursday, FCC Chairman Ajit Pai announced that the FCC will craft rules defining when a website’s efforts to moderate content fall outside of Section 230’s scope. The timing and process of this change are unclear, though Chairman Pai signaled that he will shepherd through new regulations that substantially align with President Trump’s Executive Order, and likely without regard to the concerns of platform providers of any size. Indeed, when announcing his plans to move forward with the rulemaking, Chairman Pai emphasized that some members of the federal government – including, it appears, Supreme Court Justice Clarence Thomas – have “serious concerns about the prevailing interpretation of the immunity set forth in section 230.”

Chairman Pai’s decision to begin the rulemaking process at all is a frustrating step. As suggested above, the FCC’s authority to set rules defining legal terms in Section 230 like “good faith” is shaky at best. Agencies generally may enforce and interpret only those laws that Congress has expressly authorized them to administer, and Congress has given the FCC no such authority over the CDA. The likely consequence of any rulemaking action will therefore be years of uncertainty over the actual meaning of Section 230 as the FCC’s conduct is challenged in federal court, with litigants on either side of the issue attempting to spin FCC action, or inaction, as legally significant. The terms of the FCC’s proposed changes – assuming they include, to some degree, due process requirements for good faith determinations (which the Executive Order overtly recommends) – are also concerning. Such changes would increase the costs of doing business, narrow the reach of Section 230 immunity, spur increased litigation against platforms, and produce results contrary to the purposes of the CDA – a devastating combination of blows to smaller platforms.

Such changes would increase the costs of doing business because fending off a steady stream of litigation over content moderation decisions will be expensive. At their most basic, due process requirements would not only mean notifying the party under scrutiny and providing that party an opportunity to respond (which happens today on most platforms), but could also require more complex processes like publication of the rationale for each decision, permissive intervention, and appellate rights. Indeed, such decisions might be made reviewable by courts, which would vastly increase the expense of content moderation and slow its timeline.

It is also easy, when more steps are involved, for a litigant to find a mistake in the process. If a lack of “good faith” can cause a platform to lose its Section 230 protection, and a third party is in charge of adjudging what “good faith” means, then this legal change alone would have the practical effect of narrowing the immunity’s reach. And that could alter content on the internet. Because complying with the newly delineated Section 230 will be so expensive, it will entrench the position of the largest platforms, which will have the resources to set up and maintain these new processes. We have written before about Facebook’s complex content adjudication board; not many platforms could afford the effort that Facebook has announced. This is yet another barrier to entry that will disadvantage new startups, which, in order to avoid the potential liability of non-compliance with the newly defined Section 230, might adopt a strategy of severely restricting third-party content, or alternatively, of avoiding any moderation at all. This result would be especially perverse because an agency’s interpretation of the law would contradict the law’s core purpose – a change to the Communications Decency Act’s meaning that would likely make communications online less decent.

There is circumstantial evidence to suggest that the Executive Order envisages this very outcome. The timing is at least convenient: the E.O. was issued after Twitter labelled two of President Trump’s tweets with “fact checking” notices, and Chairman Pai went public with his rulemaking plan immediately after Twitter blocked posts linking to unconfirmed reports about President Trump’s political opponent. The Administration has not shied away from its ultimate goals in effectuating change. To the extent that the E.O., and any subsequent rulemaking, is a governmental attempt to chill good faith platform moderation, we should all be worried and uniformly opposed.

Crowell & Moring has been discussing the E.O. with our clients and continues to do so. Together with our clients, we are considering legal options, including the possibility of forming coalitions to mount a legal challenge. We will continue to monitor developments and provide updates to help businesses adequately protect their practices.
