Ben Sperry is Associate Director of Legal Research at the International Center for Law & Economics. Ben was a summer fellow at the Washington Legal Foundation in 2010 while pursuing his law degree at George Mason University Law School. He is a frequent contributor to the Truth on the Market blog.

Conventional wisdom has developed in the U.S. that “something” has to be done about Big Tech. From the barrage of congressional hearings to the litany of media exposés, it seems everyone has some complaint about social-media companies.

Progressives believe the platforms haven’t done enough to remove misinformation and hate speech, while conservatives believe they remove too much conservative speech. The solutions offered by each side run into serious First Amendment problems. For example, while some blame Section 230 for perceived anti-conservative bias, courts have made clear that it is ultimately the First Amendment, not Section 230, that protects platforms’ rights to set and enforce their own rules on speech.

But the fight over speech on social-media platforms distracts from more important issues online. Section 230, for instance, does immunize social-media platforms from the normal rules of intermediary liability. There are some clear benefits to this regime, in that it promotes third-party speech and encourages platforms to create a user-friendly environment. But we should be honest that there are also costs to society in failing to hold online platforms accountable in cases where their offline equivalents would be. Getting Section 230 reform right means preserving the benefits that the current immunity regime has made possible, while finding ways to hold online platforms accountable when they are the least-cost avoider of harms.

Big Tech Platforms Are Not the Government

For many conservatives, it is an article of faith that Big Tech platforms’ moderation decisions show obvious political bias and that they violate their users’ free-speech rights. But as courts have consistently found, social-media platforms’ moderation choices do not violate the First Amendment, because they are not state actors.

The state action doctrine is fundamental to the First Amendment, as originalists have long understood. In Manhattan Community Access Corp. v. Halleck, Justice Brett Kavanaugh wrote on behalf of the Court:

Ratified in 1791, the First Amendment provides in relevant part that ‘Congress shall make no law . . . abridging the freedom of speech.’ Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: ‘No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .’ §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty.

While there are times when private actors can be considered state actors for First Amendment purposes, no court to date has found social-media companies to be state actors in a First Amendment lawsuit. For example, the U.S. Court of Appeals for the Ninth Circuit, citing Halleck, rejected claims that Google was a state actor that infringed Prager University’s First Amendment rights by placing some of its videos in “Restricted Mode” on YouTube.

Despite longstanding First Amendment jurisprudence on the state action doctrine, some conservatives continue to test theories that would transform social-media companies into state actors. One such theory is that, if Big Tech companies restrict protected speech in response to threats of regulation from state actors, they are themselves acting as state actors.

John Doe v. Google: Court Rejects the Government Compulsion Theory

In John Doe v. Google LLC, the plaintiffs made this exact argument. The plaintiffs—people who had their YouTube channels removed—alleged that Google was acting pursuant to compulsion from government officials. To make the case that YouTube felt government pressure to remove disfavored speech, they pointed to two letters from U.S. Rep. Adam Schiff; a Twitter interaction between the congressman and YouTube’s CEO, which referenced a “partnership”; public comments made by House Speaker Nancy Pelosi on holding Big Tech accountable; a non-binding congressional resolution condemning QAnon and conspiracy theories more generally; congressional hearings involving Google; threats to repeal Section 230 immunity; and antitrust suits against Google. This, in the plaintiffs’ view, meant that Google was acting as a state actor when it removed their accounts.

The U.S. District Court for the Northern District of California rejected this line of argument for two primary reasons.

First, the court found the alleged threats were insufficient to amount to coercion. The court noted that the comments from government officials did not state that any particular accounts needed to be removed from YouTube, nor did those comments “threaten” any specific outcomes for the company if it failed to take stronger action against hate speech or misinformation. The court found the alleged “threats” of regulation to be speculative, particularly in light of cases that found no state action even where private actors relied on government funding or were subject to government regulation.

Second, the court found that the plaintiffs failed to connect YouTube’s decision to remove their accounts to state action. The plaintiffs relied on a Twitter response from YouTube’s CEO to Rep. Schiff to allege there was joint action or a nexus (i.e., a “partnership”) between state actors and Google. The court didn’t find the tweet sufficiently substantial to support these claims, noting that “there are no allegations that Defendants invoked state or federal procedure to bring about the suspension of Plaintiffs’ accounts. Defendants merely suspended Plaintiffs from Defendant’s own private platform.”

The District Court was just the latest in a long line of courts to find that platforms’ moderation decisions are not subject to First Amendment scrutiny. To the contrary, it is the First Amendment that protects those moderation decisions.

A Better Way Forward

It is clear that, in light of the First Amendment, forcing social-media platforms to carry specific content against their will is not a winning argument. But this does not end the analysis. Section 230, a federal law that immunizes online platforms from certain types of lawsuits based on third-party speech, does deserve a second look.

Some conservatives have argued that Section 230’s liability shield should be conditioned on social-media platforms agreeing to carry all legal speech. Craig Parshall, for instance, has proposed applying First Amendment doctrine to private online platforms as a condition of receiving immunity. Ironically, conditioning a government benefit on giving up protected editorial discretion is more likely to be seen as unconstitutional by courts than the “threats” in Doe v. Google.

Section 230 has been important for the Internet’s growth, especially for online platforms like Google, Facebook, Twitter, and many smaller competitors. The greatest benefit has been for platforms hosting user-generated content, because they are not held liable for content posted by those users. But there have also been costs to society in cases where the normal rules of intermediary liability would otherwise have applied to these platforms. Reform efforts should focus on preserving the benefits of targeted immunity, while better aligning incentives by requiring that platforms adopt reasonable moderation practices.

A new working paper from the International Center for Law & Economics examines this question in depth. In it, the authors consider the economics of online intermediary liability, create a framework to evaluate Section 230 reform, and offer a proposal that would allow online platforms to be held accountable when they are the least-cost avoider.

Conservatives and others interested in Section 230 reform should abandon unconstitutional efforts like mandating the carriage of speech (or its removal), and instead focus on getting the incentives right.