The Kids Online Safety Act And The Tyranny Of Laziness

There is some confusion about whether the Kids Online Safety Act (KOSA) regulates content or design on digital platforms like Instagram or TikTok. It’s easy to see why: the bill’s authors claim they are trying to make the bill about design. That is a good move on their part, because regulations on design can let us stop bad behavior from tech companies without endangering speech.

Unfortunately, KOSA didn’t nail it.

That’s because KOSA is trying to regulate the design of content recommendation systems, i.e., the digital chutes that all our online speech filters through, which are unique to each online platform. In KOSA’s case, it has proven impossible so far to separate the design of content recommendation systems from the speech itself. The bill’s duty of care, and its insistence that the duty covers “personal recommendation systems,” mean the bill will inevitably impact that speech.

The reason is pretty simple: tech companies are inherently lazy. Those with decision-making authority will want to comply with regulations in the cheapest and easiest way possible. This means they will take shortcuts wherever they can, including building censorship systems that simply make difficult-to-manage content go away. That will almost certainly include politically targeted content, like speech related to LGBTQ+ communities, abortion, and guns. And they will conduct this censorship with a lazily broad brush, likely sweeping up more nuanced content that would help minors with problems like eating disorders or suicide.

The difference between KOSA’s aspirations and its inevitable impacts works like this: KOSA wants systems engineers to design algorithms that put safety first rather than user engagement. While some companies are already pivoting away from purely engagement-focused algorithms, doing so can be really hard and expensive because algorithms aren’t that smart. A purely engagement-focused algorithm only needs to answer one question: did the user engage? By asking that one question, and testing different inferences, the algorithm can get very good at delivering content a user will engage with.
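To make this concrete, here is a minimal sketch of the single-objective loop described above. The Post class and predict_engagement function are hypothetical stand-ins for illustration, not anything specified by KOSA or used by a real platform:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def predict_engagement(user_id: str, post: Post) -> float:
    """Hypothetical model: estimate how likely this user is to click,
    like, or reply. A real platform would learn this from logged
    behavior; here it is only a placeholder."""
    return 0.5

def rank_feed(user_id: str, candidates: list[Post]) -> list[Post]:
    # A purely engagement-focused ranker asks one question per post
    # ("will the user engage?") and sorts by the answer.
    return sorted(candidates,
                  key=lambda post: predict_engagement(user_id, post),
                  reverse=True)
```

Nothing in that loop knows what the content means; it optimizes a single signal, which is why layering safety objectives on top of it is so much harder.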

But when it comes to multi-purpose algorithms, like those meant to serve only positive content and avoid harmful content, the task is much harder and the algorithms are unreliable. An algorithm doesn’t understand what the content it is ranking or excluding actually is, or how it will affect the mental health and well-being of the user. Even human beings can struggle to predict what content will cause the kinds of harm described by KOSA.

To comply with KOSA, tech companies will have to show that they are taking reasonable steps to make sure their personalized recommendation systems aren’t causing harm to minors’ mental health and well-being. The only real way to do that is to test the algorithms to see if they are serving “harmful” content. But what is “harmful” content? KOSA leans on the FTC and a government-created Kids Online Safety Council to signal what that content might be. This means that Congress will have significant influence over categorizing harmful speech, and platforms will use those categories to implement keywords, user tags, and algorithmically estimated tags to flag this “harmful” content when it appears in personal recommendation feeds and results. This opens the door to government censorship.

But it gets even worse. The easiest and cheapest way to make sure a personal recommendation system doesn’t return “harmful” content is to simply exclude any content that resembles it. This means adding an additional content moderation layer that deranks or delists content carrying certain keywords or tags, a practice popularly known online as “shadowbanning.”
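As a rough illustration of the kind of compliance layer the last two paragraphs describe, the sketch below flags posts by keyword or tag and then deranks or delists them. The term list, the derank factor, and the function names are all hypothetical; KOSA specifies none of this:

```python
# Hypothetical term list a platform might derive from FTC or Kids
# Online Safety Council guidance; purely illustrative.
FLAGGED_TERMS = {"example_term_a", "example_term_b"}

DERANK_FACTOR = 0.01   # push flagged posts to the bottom of the feed
DELIST = False         # or drop them from recommendations entirely

def is_flagged(text: str, tags: set[str]) -> bool:
    # Lazy matching on surface features: keywords and tags, not meaning.
    words = set(text.lower().split())
    return bool(words & FLAGGED_TERMS) or bool(tags & FLAGGED_TERMS)

def apply_compliance_layer(scored_posts):
    """scored_posts: list of (text, tags, engagement_score) tuples.
    Returns the same posts with flagged items deranked or delisted."""
    results = []
    for text, tags, score in scored_posts:
        if is_flagged(text, tags):
            if DELIST:
                continue                 # silently drop the post
            score *= DERANK_FACTOR       # "shadowban" by deranking
        results.append((text, tags, score))
    return sorted(results, key=lambda item: item[2], reverse=True)
```

Because a filter like this matches surface features rather than meaning, nuanced or supportive content that merely mentions a flagged topic gets swept up along with everything else.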

There are three major problems with this. The first, obviously, is that the covered platforms will have created a censorship machine that accepts inputs from the government. A rogue FTC could use KOSA explicitly for censorship by claiming that any politically targeted content leads to the harms described in the bill. We cannot depend on Big Tech to fight back against this, because being targeted by an administration comes with a cost, and making an administration happy might come with some benefits.

Big Tech may even eventually benefit from this relationship, because content moderation is impossible to do well: too often there are nuanced decisions where content moderators simply have to make the choice they estimate to be the least harmful. In some ways, KOSA allows tech companies to push the responsibility for these decisions onto the FTC and the Kids Online Safety Council. Additionally, tech companies are likely to over-correct and over-censor anything they think the government may take action against, and take zero responsibility for their laziness, just as they did after SESTA-FOSTA.

The second problem is that these systems will leak across the Internet. While they are targeted at minors, the only way to tell whether a user is a minor is to use expensive and intrusive age verification systems. Covered platforms will want to err on the side of compliance unless they have explicit safe harbors, which aren’t exactly in the bill. So users may accidentally get flagged as minors when they aren’t. Worse, even the accounts that users proactively choose to “follow” aren’t safe from censorship under KOSA, because the definition of “personal recommendation system” includes those that “suggest, promote, or rank content” based on personal data. Almost all feeds of content a user is following are ranked based on the algorithm’s estimation of how engaging that content will be to them. A user is less likely to read a post that says “I ate a cookie today” than one that says “I got married today,” because users don’t like scrolling through boring content. And so even much of what we want to see online will more than likely be swept into the KOSA censorship system that tech companies build to comply.

The third problem is that these sorting systems will not be perfect and will lead to mistakes on both sides: “harmful” content will get through, and “harmless” content will be shadowbanned. In some ways, this could make the very problems KOSA is explicitly attempting to solve worse. For example, imagine a scenario in which a young user posts a cry for help. That content could easily get flagged as suicide-related or otherwise harmful content, and therefore get deranked across the feeds of everyone who follows the person and cares for them. No one may see it.

This example shows how challenging content moderation is. But KOSA creates incentives to avoid solving these tricky issues and instead do whatever the government says is legally safe. Again, KOSA will remove some degree of culpability from the tech companies by giving them someone else to blame, so long as they are “complying” when bad things happen.

Content moderation is hard, and KOSA delivers the worst of all worlds where it touches content moderation: it neither tries to understand what it is regulating nor gives tech companies clear guidance that would do no harm. Congress would be better off stripping out the content moderation parts of the bill and leaving them for the future, after it has engaged in a good deal of fact-finding and discussion with experts.

There are problems here that Congress assumes can be solved simply by telling tech companies to do better, yet I’ve never met a content moderation professional who could tell me how to actually solve them. We shouldn’t allow Big Tech to pass the buck to government boards at the expense of online speech. The next generation deserves a better internet than the one KOSA creates.

Matt Lane is Senior Policy Counsel at Fight for the Future.
