Sen. Rubio introduces Section 230 legislation to crack down on Big Tech algorithms and protect free speech

U.S. Senator Marco Rubio (R-Fla.) has introduced legislation to halt Big Tech’s censorship of Americans, defend free speech on the internet, and level the playing field by removing unfair protections that shield massive Silicon Valley firms from accountability. The Disincentivizing Internet Service Censorship of Online Users and Restrictions on Speech and Expression (DISCOURSE) Act would hold Big Tech responsible for complying with pre-existing obligations under Section 230 of the Communications Decency Act (CDA) of 1996 and clarify ambiguous terms that allow Big Tech to engage in censorship.

Specifically, the DISCOURSE Act updates the statute so that when a market-dominant firm actively promotes or censors certain material or viewpoints — including through the manipulative use of algorithms — it no longer receives protections. The bill also limits Section 230 immunities for large corporations that fail to live up to the statute’s obligations. 

“Big Tech has destroyed countless Americans’ reputations, openly interfered in our elections by banning news stories, and baselessly censored important topics like the origins of the coronavirus,” Rubio said. “It is absurd that these massive companies receive special protections through Federal law, even as they tear our country apart. No more free passes — it is time to hold Big Tech accountable for their actions.” 

When it was first passed in 1996, Section 230 was intended to enable internet companies to host third-party content and engage in targeted moderation of the worst content without being treated as “publishers,” which are generally held accountable for the content that appears in their publications. But in the 25 years since the CDA’s passage, internet companies have developed from tiny start-ups that needed the protections afforded by Section 230 into some of the largest corporations on Earth. 

In addition to their growth, these internet companies also changed their missions. Today’s tech giants use opaque algorithms and unaccountable teams of moderators to manipulate online discourse to their worldview. The result is a highly distorted public square in which Americans are censored on a daily basis.

A section-by-section overview of the bill is available here, and a one-pager is here.

Key provisions of the DISCOURSE Act are also listed below.

  • Holds Big Tech responsible for complying with Section 230’s existing obligations: 
    • Amends Section 230(c)(1) so that immunity guaranteed under the provision is only granted to Big Tech firms that comply with Section 230’s existing customer protection and information requirements.  
  • Amends Section 230(f)(3) to include the following activities for which an interactive computer service is defined as an “information content provider” and is thus responsible for the information on its platform: 
      1. Algorithmic amplification: The use of algorithmic amplification by a market-dominant firm to target third-party content to users on the platform when the user has not requested or searched for that content. 
      2. Moderation activity: Engaging in content moderation activity that reasonably appears to express, promote, or suppress a discernible viewpoint, including reducing or eliminating the ability of an information content provider to earn revenue. 
      3. Information creation and development: Soliciting, commenting on, funding, contributing to, and modifying information provided by another person.  
    • For each of these categories, an interactive computer service is responsible for specific information if it has engaged in any of the actions with respect to any user content. However, if the company engages in a pattern or practice of such behavior, it is liable for all of the content on its site.  
  • Amends Section 230(c)(2) to replace vague and subjective language with defined and legal terms: 
    • Conditions the content moderation liability shield on an objective reasonableness standard. In order to be protected from liability, a tech company may only restrict access to content on its platform where it has “an objectively reasonable belief” that the content falls within a specified category.  
    • Removes “otherwise objectionable” and replaces it with concrete terms, including content “promoting terrorism,” content that is determined to be “unlawful,” and content that promotes “self-harm.”  
    • Includes a religious liberty clause, which states explicitly that (c)(2) does not extend liability protections to decisions that restrict content based on its religious nature. 
  • Requires disclosures to inform and protect consumers: 
    • Requires interactive computer services to issue public disclosures related to content moderation, promotion, and curation so that consumers can make informed choices when it comes to the use of such services.  
  • Clarifies that Section 230 immunity is an affirmative defense in a criminal or civil action.