Third Circuit’s Section 230 TikTok Ruling Deliberately Ignores Precedent, Defies Logic

Step aside Fifth Circuit Court of Appeals, there’s a new contender in town for who will give us the most batshit crazy opinions regarding the internet. This week, a panel on the Third Circuit ruled that a lower court was mistaken in dismissing a case against TikTok on Section 230 grounds.

But, in order to do so, the court had to intentionally reject a very long list of prior caselaw on Section 230, misread some Supreme Court precedent, and (trifecta!) misread Section 230 itself. This may be one of the worst Circuit Court opinions I’ve read in a long time. It’s definitely way up the list.

The implications are staggering if this ruling stands. We just talked about some cases in the Ninth Circuit that poke some annoying and worrisome holes in Section 230, but this ruling takes a wrecking ball to 230. It basically upends the entire law.

At issue are the recommendations TikTok offers on its “For You Page” (FYP), which is the algorithmically recommended feed that a user sees. According to the plaintiff, the FYP recommended a “Blackout Challenge” video to a ten-year-old child, who mimicked what was shown and died. This is, of course, horrifying. But who is to blame?

We have some caselaw on this kind of thing even outside the internet context. In Winter v. G.P. Putnam’s Sons, the Ninth Circuit held that the publisher of an encyclopedia of mushrooms was not liable to “mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information” in the book. The information turned out to be wrong, but the court held that the publisher could not be held liable for those harms because it had no duty to carefully investigate each entry.

In many ways, Section 230 was designed to speed up this analysis in the internet era, by making it explicit that a website publisher has no liability for harms that come from content posted by others, even if the publisher engaged in traditional publishing functions. Indeed, the point of Section 230 was to encourage platforms to engage in traditional publishing functions.

There is a long list of cases that say that Section 230 should apply here. But the panel on the Third Circuit says it can ignore all of those. There’s a very long footnote (footnote 13) that literally stretches across three pages of the ruling listing out all of the cases that say this is wrong:

We recognize that this holding may be in tension with Green v. America Online (AOL), where we held that § 230 immunized an ICS from any liability for the platform’s failure to prevent certain users from “transmit[ing] harmful online messages” to other users. 318 F.3d 465, 468 (3d Cir. 2003). We reached this conclusion on the grounds that § 230 “bar[red] ‘lawsuits seeking to hold a service provider liable for . . . deciding whether to publish, withdraw, postpone, or alter content.’” Id. at 471 (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)). Green, however, did not involve an ICS’s content recommendations via an algorithm and pre-dated NetChoice. Similarly, our holding may depart from the pre-NetChoice views of other circuits. See, e.g., Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (“[R]ecommendations and notifications . . . are not content in and of themselves.”); Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“Merely arranging and displaying others’ content to users . . . through [] algorithms—even if the content is not actively sought by those users—is not enough to hold [a defendant platform] responsible as the developer or creator of that content.” (internal quotation marks and citation omitted)); Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 21 (1st Cir. 2016) (concluding that § 230 immunity applied because the structure and operation of the website, notwithstanding that it effectively aided sex traffickers, reflected editorial choices related to traditional publisher functions); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014) (adopting Zeran by noting that “traditional editorial functions” are immunized by § 230); Klayman v. Zuckerburg, 753 F.3d 1354, 1359 (D.C. Cir. 2014) (immunizing a platform’s “decision whether to print or retract a given piece of content”); Johnson v. Arden, 614 F.3d 785, 791-92 (8th Cir. 2010) (adopting Zeran); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (rejecting an argument that § 230 immunity was defeated where the allegations went to the platform’s traditional editorial functions).

I may not be a judge (or even a lawyer), but even I might think that if you’re ruling on something and you have to spend a footnote that stretches across three pages listing all the rulings that disagree with you, at some point you’d take a step back and ask whether it’s your own reading that’s wrong.

As you might be able to tell from that awful footnote, the Court here seems to think that the ruling in Moody v. NetChoice has basically overturned those rulings and opened up a clean slate. This is… wrong. I mean, there’s no two ways about it. Nothing in Moody says this. But the panel here is somehow convinced that it does?

The reasoning here is absolutely stupid. It’s taking the obviously correct point that the First Amendment protects editorial decision-making, and saying that means that editorial decision-making is “first-party speech.” And then it’s making that argument even dumber. Remember, Section 230 protects an interactive computer service or user from being treated as the publisher (for liability purposes) of third party information. But, according to this very, very, very wrong analysis, algorithmic recommendations are magically “first-party speech” because they’re protected by the First Amendment:

Anderson asserts that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them[.]” ECF No. 50 at 5. The Supreme Court’s recent discussion about algorithms, albeit in the First Amendment context, supports this view. In Moody v. NetChoice, LLC, the Court considered whether state laws that “restrict the ability of social media platforms to control whether and how third-party posts are presented to other users” run afoul of the First Amendment. 144 S. Ct. 2383, 2393 (2024). The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment….

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too….

This is just flat out wrong. It is based on the false belief that any “expressive product” makes it “first-party speech.” That’s wrong on the law and it’s wrong on the precedent.

It’s a bastardization of an already wrong argument put forth by MAGA fools that Section 230 conflicts with the holding in Moody. The argument, as hinted at by Justices Thomas and Gorsuch, is that because NetChoice argued (correctly) that its editorial decision-making is protected by the First Amendment, that position is somehow in conflict with the idea that platforms have no legal liability for third-party speech.

But that’s only a conflict if you can’t read and/or don’t understand the First Amendment, Section 230, and how they interact. The First Amendment still protects any editorial actions taken by a platform. All Section 230 does is say that the platform can’t face liability for third-party speech, even if it engaged in publishing that speech. The two things are in perfect harmony. Except to these judges on the Third Circuit.

Contrary to what the panel says here, the Supreme Court at no point says that editorial actions become first-party speech because they are protected by the First Amendment. That has never been true, as even the mushroom encyclopedia example above shows.

Indeed, reading Section 230 in this manner wipes out Section 230. It makes the law the opposite of what it was intended to do. Remember, the law was written in response to the ruling in Stratton Oakmont v. Prodigy, where a New York state trial court judge found Prodigy liable for content it didn’t moderate, because it did moderate some other content. As then-Reps. Chris Cox and Ron Wyden recognized, that rule would encourage no moderation at all, which made no sense. So they passed Section 230 to overturn that decision and make it so that internet services could feel free to engage in all sorts of publishing activity without facing liability for the underlying content when that content was provided by a third party.

But here, the Third Circuit has flipped that on its head and said that the second you engage in First Amendment-protected publishing activity around content (such as recommending it), you lose Section 230 protections because the content becomes first-party content.

That’s… the same thing that the court ruled in Stratton Oakmont, and which 230 overturned. It’s beyond ridiculous for the Court to say that Section 230 basically enshrined Stratton Oakmont, and it’s only now realizing that 28 years after the law passed.

And yet, that seems to be the conclusion of the panel.

Incredibly, Judge Paul Matey (a FedSoc favorite Trump appointee) has a concurrence/dissent in which he would go even further in destroying Section 230. He falsely claims that 230 only applies to “hosting” content, not recommending it. This is literally wrong. He also falsely claims that Section 230 is a form of “common carriage” regulation, which it is not.

So he argues that the first Section 230 case, the Fourth Circuit’s important Zeran ruling, was decided incorrectly. The Zeran ruling established that Section 230 protected internet services from all kinds of liability for third-party content. Zeran has been adopted by most other circuits (as noted in that footnote of “all the cases we’re going to ignore” above). So in Judge Matey’s world, he would roll back Section 230 to only protect hosting of content and that’s it.

But that’s not what the authors of the law meant (they’ve told us, repeatedly, that the Zeran ruling was correct).

Either way, every part of this ruling is bad. It basically overturns Section 230 for an awful lot of publisher activity. I would imagine (hope?) that TikTok will request a rehearing en banc and that the full Third Circuit agrees to rehear the case. At the very least, that would provide a chance for amici to explain how utterly backwards and confused this ruling is.

If not, then you have to think the Supreme Court might take it up, given that (1) they still seem to be itching for direct Section 230 cases and (2) this ruling basically calls out in that one footnote that it’s going to disagree with most other Circuits.
