9 People Hold the Internet’s Fate in Their Hands

The Supreme Court should continue to safeguard online speech—in the Section 230 case and beyond.

Free speech advocates focused on the Supreme Court this week, as nine justices spent nearly three hours hashing out the meaning of Section 230 of the Communications Decency Act. Tuesday’s argument in Gonzalez v. Google marked the first time the Supreme Court has considered how to interpret the 26 words that protect online platforms from liability for user content.

But a potentially greater threat to free speech was taking place more than 800 miles to the south in Tallahassee, where a Florida state legislator proposed a bill to make it easier for plaintiffs to bring defamation lawsuits. To the north, a federal judge recently struck down a New York law that regulates online hate speech. To the west, a judge nixed a California Covid misinformation law. And in DC, the justices are also considering whether to rule on the constitutionality of Texas and Florida laws that restrict the ability of social media platforms to moderate user content.  

For the past century, the Supreme Court has taken an expansive view of the First Amendment’s free speech protections, narrowly defining the categories of unprotected speech and fiercely guarding everything else within the First Amendment’s scope. Since 1997, when it struck down most of the Communications Decency Act, the court has held that the full force of the First Amendment applies online. Hailing the “dramatic expansion of this new marketplace of ideas” on the internet, the court wrote in that decision that “governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it.”

We are at a potential turning point for the Supreme Court’s strong protections for free speech and the internet. Only one justice who decided the 1997 case remains on the court. And online speech is now far more controversial than it was in the internet’s nascent years, with some arguing that too much harmful speech remains online while others contend that platforms are too heavy-handed in their content moderation. Internal and external forces could pressure the Supreme Court to allow the government to take a more hands-on role with free speech. 

Indeed, the internet age may prompt the court to reconsider one of its landmark free speech rulings, New York Times v. Sullivan. The 1964 opinion requires public officials to demonstrate actual malice—knowledge of falsity or reckless disregard of the truth—in order to sue for defamation. (The court later extended this requirement to public figures.) In setting this high bar, the court recognized “a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open.”

But some justices are unconvinced that Sullivan remains necessary for that commitment. Justice Clarence Thomas has written three times that he wants the Supreme Court to revisit Sullivan, pointing to “real-world effects” such as the proliferation of PizzaGate and other online falsehoods. Justice Neil Gorsuch has joined his call, in part due to the changes brought by social media. “Now, private citizens can become ‘public figures’ on social media overnight,” Gorsuch wrote. “Individuals can be deemed ‘famous’ because of their notoriety in certain channels of our now-highly segmented media even as they remain unknown in most.”

The Florida bill attempts to weaken defendants’ protections in defamation lawsuits, including by making it easier to sue over allegations that the plaintiff discriminated based on race, sex, sexual orientation, or gender identity. The bill also would help plaintiffs more easily establish actual malice.

I question whether parts of the Florida bill, if passed, would withstand a constitutional challenge, because the actual malice requirement is rooted in the First Amendment and cannot be overridden by a state legislature. But if Justices Thomas and Gorsuch have their way, the court could reconsider the constitutional protections in defamation cases, leaving the door open for Florida and other states to make it far easier to sue not only news organizations but individual critics on social media. Although the debate about Sullivan often focuses on large news organizations like The New York Times and Fox News, it protects all speakers and is essential to open online discourse.

Also looming over the Supreme Court are requests to consider the constitutionality of Texas and Florida laws that restrict the ability of social media companies to moderate user content. Last May, the Eleventh Circuit blocked a Florida law that limits the ability of platforms to moderate political candidates’ content or stories from news organizations. “Put simply, with minor exceptions, the government can't tell a private person or entity what to say or how to say it,” Judge Kevin Newsom wrote. But in September, the Fifth Circuit upheld a Texas law that prohibits social media platforms from “censoring” user content based on viewpoint. “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say,” Judge Andrew Oldham wrote. Although the court has not yet agreed to hear the cases, it probably will do so in the next year.

A Supreme Court ruling on those laws has the potential to overhaul how online platforms have operated since the dawn of the internet. If the court agrees that platforms do not have a First Amendment right to moderate as they see fit, the platforms could soon face a state-by-state patchwork of restrictions and edicts to carry user content even if it violates their internal policies. Platforms have made some bad content-moderation decisions, but even this imperfect system is better than allowing courts and legislators to decide when platforms can block content.

And states are not only passing social media laws that require platforms to carry content, but also attempting to limit harmful but constitutionally protected speech. For instance, after last year’s Buffalo supermarket shooting, New York enacted a law that requires platforms to provide “a clear and easily accessible mechanism for individual users to report incidents of hateful conduct,” and to have policies on their response to complaints about hateful conduct. This month, a New York federal district judge struck down the law, concluding that it “both compels social media networks to speak about the contours of hate speech and chills the constitutionally protected speech of social media users.” And last month, a California federal district judge blocked a California law that prohibited physicians and surgeons from disseminating “misinformation or disinformation” about Covid-19 to patients. The New York and California judges reached the correct decisions under current Supreme Court First Amendment precedent, but this is unlikely to be the last time that a state tries to limit constitutionally protected online speech. Eventually those cases may well end up in the Supreme Court, giving it another chance to reevaluate the scope of its free speech protections.

And the courts probably will face other tough online speech questions. For instance, although the First Amendment has long protected anonymous speech, a Texas lawmaker recently introduced a bill that would not only ban children under 18 from using social media, but would require platforms to obtain copies of driver’s licenses from all users, along with photos of the users with the licenses.

Although Gonzalez v. Google involves Section 230 and not the First Amendment, Tuesday’s argument was our best glimpse at how the current Supreme Court views online speech. The case requires the justices to decide whether Section 230 protects Google from liability in a lawsuit brought by an ISIS victim’s family over YouTube’s algorithmic presentation of ISIS content. It is impossible to predict with certainty how the justices will rule, but we can fairly assume that they recognize the stakes of their decision. The argument was scheduled for 70 minutes, and it lasted more than two and a half hours.

The justices acknowledged arguments that radical changes to the interpretation of Section 230 could have a significant impact not only on online speech but also on platforms’ business models. “Are we really the right body to draw back from what had been the text and consistent understanding in courts of appeals?” Justice Brett Kavanaugh asked.

Also striking a cautionary tone was Justice Elena Kagan, who suggested that Congress—and not her court—is best suited to determine whether Section 230 is a fair policy. “We really don’t know about these things,” Kagan said. “You know, these are not like the nine greatest experts on the internet.”

Kagan’s restraint is admirable, but it doesn’t tell the whole story. Although the court is, in fact, not the greatest body of technical experts, it is where the buck stops for protections of free speech, both online and offline. And for the past century, the Supreme Court has developed an impressive body of exceptionalist free speech protections that it eventually extended to the internet. Despite the increasing complexity of the internet and the growing push across the political spectrum for more government involvement in online speech, I hope that the Supreme Court stands firm in its commitment to robust speech safeguards.

Although they might not have the most technological expertise, the Supreme Court justices could continue to be the internet’s nine most effective guardians.