Getting more confused about regulating social media
This past September, a British coroner concluded that a 14-year-old “died from an act of self-harm while suffering from depression and the negative effects of online content.” As The New York Times reported, the investigation called witnesses, including the head of Health and Well-Being Policy for Meta. After Meta finally agreed to hand over 16,300 items from the teen’s Instagram account, it was found that 2,100 of the posts were “related to self-harm and depression”—about 12 per day.
The coroner’s conclusion spurred the introduction of a draft Online Safety Bill, in part to protect other children from such harm. In the U.S., this has added yet more fuel to the drive to hold social networking platforms responsible for their effects on our children.
I don’t know how to think about this, and I want to know why, since it is as clear as can be that the death of a child is tragic, and the death of a child by suicide only more so. So why am I confused, beyond the very important, and widely discussed, free speech issues? (Warning: I’ve come out of this column more confused than when I started.)
One core reason it’s hard to figure out what to do is that policies are often designed to work at scale, and now we want to apply them to media that are themselves distinguished by their scale. To make it yet more complex, these media scale personal relationships. Personal relationships do not scale well, and neither do attempts to regulate them.
The good news is that we’re used to scaled policies. For example, we understand that scale requires the introduction of arbitrarily precise limits: You might not get a loan because you reported on your tax form a hundred dollars less than the minimum annual income. We grumble when the person behind the desk doesn’t want to hear about the particularities of our case. “I appreciate your point,” says the tax agent—or airline rep, or traffic cop, etc.—“but I can’t make exceptions.” We grumble but we understand the demands of scale.
We also have some sense of how scales affect risks. We generally don’t insist on shutting down an entire product line just because it failed on us, for scale implies there will be failures. Otherwise, we’d shut down auto manufacturers after the first death in one of their vehicles, and we wouldn’t have any cars. In fact, if we made that same demand of child car seats, we wouldn’t have any of those either. So, how many car deaths are too many?
Scale makes us ask impossible questions like that. If we were told that in the U.S., we could have an automobile system that produced zero fatalities but only if every year we were willing to sacrifice 43,000 Americans chosen at random, we wouldn’t do it. But we think about the benefits of cars, including the lives they save, and we accept some significant number of deaths per year.