
Tech Regulation Digest: Sunsetting Section 230—The Future of Content Moderation, Ads, and AI

Overview

Section 230 of the Communications Decency Act (1996)1 has shaped US internet governance for nearly three decades, fueling a $2.6 trillion digital economy that now represents roughly 10 percent of US gross domestic product.2 Initially designed for nascent online forums, its protections now extend to today’s massive, globally interconnected platforms.

These protections, which shield platforms and users from liability for third-party content while preserving good-faith moderation,3 face mounting scrutiny as tech companies’ market influence grows. Major platforms dominate online interactions and markets through sophisticated data-driven monetization strategies and content distribution algorithms, prompting questions about Section 230’s applicability to the modern digital landscape.

Recent legislative proposals, including a House bill to sunset Section 230 by 2026, signal a potential inflection point. The challenge for policy reform lies in balancing Section 230’s role in fostering innovation with evolving concerns about platform accountability, consumer protection, and market efficiency in a data-driven economy.

Background

Section 230 has become essential to the business models of today’s globally interconnected platforms, particularly in digital advertising. Its broad liability protections have enabled platforms to scale rapidly, monetize user data, and manage vast amounts of user-generated content without direct responsibility, a point highlighted by the US Department of Justice. These protections extend beyond social media companies, safeguarding any entity that shares or republishes online content, from consumer review websites to users who repost content or forward emails.

Early rulings, such as Blumenthal v. Drudge (1998), reinforced Section 230’s broad immunity, allowing platforms to transition from passive hosting to active content curation without incurring liability. The court ruled that America Online was not liable for allegedly defamatory content it promoted, even when it paid for distribution. Legal observers recognized that without such protections, interactive services would be reluctant to host third-party content, potentially stifling online communication and development.

The economic impact of these protections has been profound. In 2024, platforms shielded by Section 230 controlled 65 percent of total US digital ad spend, according to the St. Louis Fed. This legal framework helped digital platforms develop the content-driven business models that now dominate the advertising landscape.4

However, the scope of Section 230 is narrowing with the advent of algorithmic content amplification. Recent rulings, such as the Third Circuit’s Anderson decision, established that Section 230 does not shield platforms when their algorithmic recommendations constitute “expressive activity,” specifically addressing TikTok’s promotion of content allegedly violating legal standards related to minors. Forrest v. Meta expands this inquiry by examining whether algorithmic systems that shape ad content and distribution can be deemed “material distribution” of illegal activity.

In Zuckerman v. Meta (2024), the court upheld platforms’ control over their algorithms, rejecting third-party content moderation tools.5 This ruling calls into question whether platforms that actively generate or amplify content using algorithms should still be considered “neutral intermediaries.”6 These evolving interpretations reflect how courts are increasingly reevaluating platform protections in an era of algorithmic content distribution.

In Fair Housing Council v. Roommates.com (2008), platforms forfeited Section 230 protection by actively soliciting user preferences.7 That same year, Doe v. MySpace upheld platform immunity despite harm resulting from user interactions. While Doe involved more traditional platform operations, it raised important questions about platform responsibility—questions that recent cases now extend into the realm of algorithmic systems and targeted content, signaling a shift in how courts view platform roles.

Examples of platforms adjusting their practices to comply with stricter data protection laws (e.g., Meta’s ad targeting systems, Google’s ongoing development of the Privacy Sandbox, and Amazon’s privacy measures) demonstrate the increasing pressure on platforms to rethink how they manage user data and ads.

Artificial intelligence (AI)-driven content, such as misleading product recommendations or false information generated by algorithms, further complicates the landscape. Meta reports that AI now recommends 30 percent of Facebook feed content and 50 percent of Instagram content, raising questions about whether platforms should receive the same Section 230 protections for AI-generated content as they do for user-generated content.

Why Is This Important?

Lawmakers are increasingly questioning whether Section 230 still serves its original purpose in today's algorithm-driven digital economy. Federal Communications Commission Chairman Brendan Carr has stated that Section 230 was never meant to grant platforms unchecked power, calling for a more balanced and transparent regulatory approach. Katie Tummarello, executive director of Engine, cautions that reforms could disproportionately harm smaller startups. The risk of over-censorship and the erosion of certain viewpoints in response to regulatory pressures and public perception is also a concern.8

Public trust in platform governance has eroded significantly.9 A 2021 Cato Institute survey shows that 75 percent of Americans lack confidence in social media companies’ content moderation practices, while 65 percent believe these platforms exert too much influence over what political news people read. This tension between algorithmic amplification, user data monetization, and corporate power extends beyond Section 230—when Apple’s App Tracking Transparency gave users a choice about data collection, 96 percent of US users opted out of tracking, resulting in a $10 billion revenue impact for Meta in 2022. These outcomes highlight how both content liability and data privacy regulations are increasingly focused on the same business practices that drive platform monetization strategies.

The largely unchanged nature of Section 230 since 1996 leaves platforms to navigate a growing patchwork of international, state, and federal governance. While Congress has left the statute largely intact, judicial rulings and congressional proposals continue to shape its application, as recent decisions illustrate. Federal reform efforts remain in early stages, suggesting that any significant changes will likely depend on congressional action or agency regulation.

Twenty states have enacted data privacy regulations as of February 2025, with the California Consumer Privacy Act leading the charge. However, the scope of state authority has been shaped by constitutional challenges. Notably, the NetChoice rulings blocked Florida and Texas from regulating viewpoint moderation on social media platforms, reinforcing platforms’ First Amendment rights. This regulatory fragmentation has prompted bipartisan efforts toward more uniform federal standards like the American Privacy Rights Act.

The Supreme Court has yet to definitively rule on Section 230’s relationship to state regulations, though case law is helping to clarify its scope. The absence of a cohesive federal framework has significant implications for consumers and businesses alike. The patchwork of state laws creates uncertainty for consumers, while businesses grapple with increased compliance costs.

Past legislative efforts, including the bipartisan Platform Accountability and Transparency Act (2022), and executive actions on AI safety and governance signal ongoing attempts to establish regulatory frameworks in this space. The SAFE TECH Act, which would narrow the scope of Section 230 immunity, is another example of this trend. Together, these judicial and legislative developments indicate a growing trend toward reevaluating the scope of Section 230 immunity. While there is still broad recognition that Section 230 has fostered innovation, reforming or sunsetting the law could create significant uncertainties, potentially consolidating industry power and stifling innovation.

What Happens Next

A complex interplay of legislative efforts, corporate risk management strategies, and evolving regulations will likely shape the future of Section 230. As the Associated Press has reported, ongoing legal and political challenges to these protections could fundamentally reshape the structure and accountability of tech companies.

While comprehensive federal reform remains stalled, state-level policies, voluntary corporate initiatives, and adaptations to EU standards are driving incremental shifts in platform operations. As courts and states refine liability standards, tech companies must navigate an increasingly fragmented regulatory environment. This decentralized approach leaves the US without a unified framework, even as platforms exercise growing influence over digital markets and the broader information ecosystem.

Through continued judicial refinements and clarified legal obligations, companies will likely adapt their practices accordingly, leading to more predictable standards for both platforms and users. The future of Section 230 will likely hinge on balancing innovation with accountability, requiring clarity from both courts and lawmakers in the coming years.

______

1 The Communications Decency Act of 1996, enacted as part of the Telecommunications Act of 1996, introduced new regulations related to indecent material online, including Section 230, which provides legal immunity for online platforms regarding user-generated content.
2 The current-dollar value added includes contributions from hardware, software, e-commerce margins, and priced digital services, which comprise the broader digital economy (2022).
3 Section 230 allows platforms the discretion to moderate in good faith and remove third-party content in certain circumstances. Exemptions include illicit activity such as federal criminal violations (including intellectual property infringement, terrorism, child exploitation, sex trafficking, and interference with state and federal law enforcement).
4 This dominance extends globally. According to ad buyer GroupM, digital platforms now account for 71 percent of the $1 trillion global advertising market, with five major companies (Google, Meta, ByteDance, Amazon, and Alibaba) capturing more than half of that revenue. This concentration persists even as the digital ad market expands—Meta recently reported quarterly sales of $48.39 billion (up 21 percent), Google’s ad revenue reached $72.46 billion (up 11 percent), and Amazon’s ad business grew to $17.29 billion (up 18 percent).
5 The court rejected a claim seeking to mandate Meta’s support for a third-party tool that allowed users to unfollow all connections on Facebook, thereby modifying platform functionality.
6 AI-driven systems that recommend fictitious products, such as “blinker fluid,” or conflate satire with fact, challenge the traditional framework of Section 230, suggesting that platforms may be breaching their role as neutral conduits.
7 The court ruled that requiring users to disclose preferences like family status and sexual orientation constituted “material contribution” to content creation, placing it outside Section 230’s protection.
8 As platforms face increasing pressure to remove content due to legal concerns, there’s an underlying risk to the long-term accessibility of digital records. Lila Bailey, policy counsel for the Internet Archive, a neutral content host, highlights the challenge of preserving politically or socially sensitive content that could be deleted in the name of compliance.
9 This trend continues today. In January 2025, Google searches for deleting Facebook and Instagram accounts increased by over 5000 percent after Meta announced it would end third-party fact-checking and adjust its content moderation policies.