The algorithms are not your friends

Are online platforms responsible for the harm their algorithms cause? Yes, says the Third Circuit.

Who’s holding the strings? Source: Midjourney.

For a while now, I’ve been saying that the most under-reported story in technology (and, by extension, the modern world) is how algorithms have come to dominate our lives. 

But has anyone listened? No. 

This may be because I have mostly been saying these things to my cat, who is frankly uninterested in anything other than why her bowl is not completely full 100 percent of the time, and why I am not scratching her belly right now. 

Source: Cheezeburger.com (where else?)

Very simply, algorithms are formulas that drive automated decisions based on criteria that are largely invisible to the rest of us. That’s not a good thing, by the way, but it’s where we are. And those algorithms can have life-changing implications. 

Algorithms can help determine whether the online employment application you just filled out ever makes it in front of the people making the hiring decision; whether a bank decides to offer you a credit card at a marginally less usurious interest rate; whether you get approved for a life or home insurance policy; the potential mates presented to you on a dating app; the advertisements you see on your favorite streaming service or website; the music or videos that automatically populate your feed on your favorite digital services (or just start playing, whether you want them to or not); the ‘news’ stories that appear on your Facebook account (whether they’re complete bullshit or not); the ‘influencers’ you’ve never heard of before who make your visits to Instagram such a delightfully vapid experience, and so on. 
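To make that concrete, here’s a deliberately toy sketch of what an automated decision formula can look like. Everything in it – the signals, the weights, the cutoff – is invented for illustration; no real lender publishes its model, which is rather the point.

```python
# A deliberately toy credit-card screener. The signals, weights, and
# cutoff below are all invented for illustration; real models are far
# more complicated, and (the point of this piece) far less visible.

def credit_decision(applicant: dict) -> str:
    """Score an applicant against criteria the applicant never sees."""
    score = (
        0.4 * applicant["credit_score"] / 850         # normalized credit score
        - 0.3 * applicant["debt_to_income"]           # penalize heavy debt loads
        + 0.2 * min(applicant["years_employed"], 10) / 10
        + 0.1 * (1 if applicant["owns_home"] else 0)
    )
    # The threshold is the invisible part: nobody outside the company
    # knows where it sits, or why.
    return "approved" if score > 0.35 else "declined"

print(credit_decision({
    "credit_score": 700,
    "debt_to_income": 0.25,
    "years_employed": 4,
    "owns_home": False,
}))  # -> 'declined', courtesy of a formula the applicant will never see
```

Nudge one weight or move the threshold and the same applicant gets a different answer – and nobody outside the company would ever know why.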

And sometimes those algorithms end up causing children to harm themselves. 

Take, for example, the lawsuit Pennsylvania mom Tawainna Anderson filed against TikTok and its parent company, ByteDance. Per the case summary:

TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” [FYP] One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. Nylah’s mother, Tawainna Anderson, sued TikTok and its corporate relative ByteDance.

It’s a horrific story. Yet TikTok was initially cleared of having any role in young Nylah’s death because it didn’t actually make those videos; the TikTok algorithm just decided to show them to her. That decision was based on the very generous protections of Section 230 of the Communications Decency Act of 1996, which absolve platforms like TikTok (and Facebook, Xitter, et al.) of any liability for the content their users upload and share. 

Except for one thing. The US Court of Appeals for the Third Circuit is no longer buying that argument. It just ruled that the protections of Section 230 do not apply in this case. And, as Blogger/Researcher/Overall Smart Guy Matt Stoller has pointed out, the implications of that decision could be enormous. [1]

Code zero

But before I get to that, let’s get back to algorithms. 

Algorithmic decision-making has gotten a lot more attention with the sudden advent of AI Everything Everywhere All at Once. [2] But before that, algorithms mostly just sort of operated in the background, doing their thing. Pay no attention to the code behind the curtain; please enjoy this video of an adorable puppy playing with a baby chick. 

The thing to remember about algorithms is that they don’t write themselves. (Well, with AI models, they do – but only after a lot of care and feeding by humans paid to train those models.) Squadrons of highly compensated engineers at Google, Meta, TikTok, Xitter, and the like cobble these things together for the sole purpose of enriching your life (read: driving you to spend more time on their services, click on more stupid-ass links or videos when you really should be getting some work done, be subjected to more advertisements, and make these services even more metric shit tons of money in the process). [3]
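If you boiled that business model down to a few lines of code, it might look something like this. To be clear: this is a made-up sketch, not any platform’s actual ranking system, and every signal name and weight here is an assumption. But the shape of the objective is the real story – every term rewards engagement, and nothing penalizes harm.

```python
# A made-up sketch of an engagement-driven feed ranker. The signals and
# weights are invented; real systems use thousands of features, but the
# objective is the same: maximize time on site and ad impressions.

def engagement_score(video: dict) -> float:
    """Rank by predicted engagement, not by quality or safety."""
    return (
        3.0 * video["predicted_watch_seconds"]
        + 2.0 * video["predicted_shares"]
        + 1.0 * video["predicted_comments"]
        # Note what's missing: no term asks whether the content is safe,
        # true, or remotely appropriate for a ten-year-old.
    )

def build_for_you_page(candidates: list[dict], n: int = 3) -> list[str]:
    """Return the n most 'engaging' videos, whatever they depict."""
    ranked = sorted(candidates, key=engagement_score, reverse=True)
    return [v["title"] for v in ranked[:n]]

print(build_for_you_page([
    {"title": "puppy meets baby chick", "predicted_watch_seconds": 20,
     "predicted_shares": 5, "predicted_comments": 2},
    {"title": "dangerous viral 'challenge'", "predicted_watch_seconds": 45,
     "predicted_shares": 30, "predicted_comments": 25},
]))  # the riskier video ranks first, because it 'engages' more
```

Run it and the riskier video wins every time, because the formula only knows how to count clicks.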

Are algorithms an expression of a corporation’s free speech (and thus, considered sacrosanct by First Amendment absolutists)? Or are they a product, not unlike toothpaste or cat food, designed by their owners solely for the purpose of generating profits?

The corrupt bought-and-paid-for-by-the-Federalist-Society Supreme Court, which has consistently ruled that corporations have all the rights of living/breathing humans with almost none of the responsibilities, has decided that ‘curated feeds’ (i.e., algorithms that determine what you see next) are protected by the First Amendment. 

The Third Circuit said, Fine. By choosing to show those curated feeds to their customers – fully exercising their human-like free speech rights in the process, as God and the Chief Justice intended – the platforms are now responsible for the material contained within those feeds, just like a regular old person would be. 

What’s all this then?

Section 230 is often credited with enabling the meteoric growth of the Internet, particularly platforms that rely almost entirely on user-generated content, such as social media. Whether you think that’s a good thing or not is another question (I’m leaning more towards not, these days). 

Up until now, if I posted on Facebook that “Your mother was a hamster, and your father smelt of elderberries,” you could conceivably sue me for defamation, but you wouldn’t have been able to sue Mark Zuckerberg for allowing me to defame you. [4] Now, under the new Third Circuit ruling, you could, at least theoretically.

There are many, many, many other examples where social media algorithms have caused real harm in the physical world – not the least of which is influencing the results of elections. Perhaps these mega-platforms might take moderation and enforcement more seriously if there were billions in potential penalties attached to getting it wrong.

I am confident that SCOTUS will eventually find a way to protect the interests of deep-pocketed corporate megaliths – because what else are all those private flights, exclusive vacations, shady real estate transactions, and $300K land yachts paying for? – but in the meantime, it’s going to be pretty interesting to see what happens next.

Have you been harmed by algorithms? Share your pain in the comments or email me: [email protected].

[1] A tip of the cranky chapeau to faithful reader Natalie HB, for once again steering me toward news I could use.

[2] The ‘black box’ nature of AI is drawing more attention to how these algorithms work – in part, because the data scientists who built these AI models can’t explain how they work. They know these models are producing accurate results, most of the time, but how they arrived at those results is often a mystery. Isn’t that special? Look for more regulations demanding that corporations use Explainable AI (XAI), assuming the algorithms allow us to have a semi-functioning government in the future.

[3] Or, in the case of Xitter, lose more metric shit tons of money while spreading Nazi propaganda and fluffing Elon’s ego.

[4] And I in turn might get sued by Monty Python for copyright infringement. It’s a fair cop.
