The lead-up to mass litigation in the United States has a predictable sequence:

  1. Commercial enterprises unleash a new technology that delights consumers and garners handsome profits for investors.
  2. Scientific investigation raises the possibility that the new technology is harmful to people or property.
  3. Regulators sound the alarm.
  4. Internal documents and whistleblower accounts suggest key companies are aware of the possibility that their products cause harm.
  5. The plaintiffs’ bar recruits plaintiffs and files lawsuits in multiple state and federal courts.

The events leading up to the recently formed federal multidistrict litigation (MDL) In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation in the Northern District of California follow this sequence to a tee. As of late October, Meta Platforms, Snap, TikTok, Google and other social media companies face nearly 100 lawsuits alleging their products are addictive and that addiction leads to a range of mental harms; many of these lawsuits allege the plaintiff ultimately took his or her own life as a result of that addiction.

Executive Summary

Suits are mounting against social media companies alleged to have created addictions in teens that lead to mental and physical harm. Could this be the lead-up to mass litigation? Here, Praedicat SVP David Loughran reminds insurers that what starts as an isolated event can quickly spill over to adjoining fields, drawing in new sets of plaintiffs and defendants armed with novel legal theories and evidence. In this case, he observes that social media liability fits within the broader sphere of algorithmic liability.

Also of interest is how this litigation fits within the broader context of assigning liability for algorithms whose influence over day-to-day life is growing much faster than our understanding of their consequences. The direct impact of the MDL in Northern California on most casualty books could be rather limited since the number of implicated insureds (thus far at least) is small. But the number of companies that rely on algorithms more generally to increase profits is huge.

What does this newly formed MDL tell us about the willingness of society to hold algorithms and their designers to account for social ills?

The Facebook Papers

Mark Zuckerberg launched the app that would become Facebook out of a Harvard dorm room in 2004. The advent of the smartphone in 2007, though, is when Facebook and other social media applications really took off. Facebook active users climbed from 50 million in 2007 to nearly three billion today. According to a survey conducted by Common Sense Media, screen use in 2021 averaged 5.5 hours per day among children ages 8-12 and 8.5 hours per day among teens ages 13-18; 77 percent of teens watch online videos daily, and 63 percent engage with social media, spending an average of more than two hours per day on social media platforms.

Social media platforms depend on advertising, and the value of advertising increases with the size of the audience. Thus, like all content providers, social media companies seek to make their applications as engaging as possible. They do this with the help of features such as infinite scrolling, autoplay, notifications, likes, sharing and recommending content based upon detailed data collected about each user. In a 2017 Axios interview, Facebook’s first president Sean Parker recollects:

“The thought process that went into building these applications, Facebook being the first of them…was all about: ‘How do we consume as much of your time and conscious attention as possible?’ And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you…more likes and comments. It’s a social-validation feedback loop…exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors, creators—it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.”

By the time Parker spoke these words—now referenced in some social media complaints—numerous scientific studies had been published describing problematic social media use as an addiction and documenting a correlation between social media addiction and mental health problems.
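
The design pattern Parker describes, ranking content by predicted engagement and feeding each new reaction back into future rankings, can be illustrated with a deliberately minimal sketch. The features, weights and function names below are hypothetical and invented for exposition; they are not drawn from any platform's actual code.

```python
# Hypothetical illustration of an engagement-ranked feed. Features and
# weights are invented for exposition, not taken from any real platform.
from dataclasses import dataclass


@dataclass
class Post:
    author_is_friend: bool   # social-graph proximity to the viewer
    likes: int               # accumulated reactions ("social validation")
    topic_affinity: float    # 0-1 match with the viewer's inferred interests


def predicted_engagement(post: Post) -> float:
    """Score a post by how likely the viewer is to react to it."""
    return (
        2.0 * float(post.author_is_friend)
        + 0.1 * post.likes           # already-popular content is boosted...
        + 1.5 * post.topic_affinity  # ...as is content matching inferred interests
    )


def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Rank candidates purely by predicted engagement; nothing in the
    objective accounts for user well-being, which is the design choice
    the lawsuits put at issue."""
    return sorted(posts, key=predicted_engagement, reverse=True)[:limit]
```

Every reaction a ranked post attracts raises its score for the next viewer, so content that triggers responses is shown more often, which in turn generates more responses: the social-validation feedback loop Parker describes.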

Frances Haugen, a product manager at Facebook, added fuel to the fire when in the summer of 2021 she sent the U.S. Securities and Exchange Commission a trove of internal documents revealing that Facebook was well aware of growing evidence linking its social media apps with a range of social ills. Among the documents were slide decks summarizing research the company commissioned that showed its Instagram app worsened mental health problems, and body image issues in particular, among some teen girls. The release of the “Facebook Papers,” along with congressional hearings in October 2021, was enough to trigger a series of shareholder lawsuits against Meta Platforms and then the first bodily injury lawsuits in January 2022 alleging the company was responsible for the suicide of a teenage girl (Rodriguez v. Meta Platforms, Inc. et al) and the ongoing mental health problems of another (Doffing v. Meta Platforms, Inc. et al).

The Lawsuits

A smattering of lawsuits alleging that gambling and social media applications caused bodily and financial injury were filed before the release of the Facebook Papers. After the release, however, social media lawsuits became far more explicit in their claims that social media companies subject underage users to a negligently designed product that undermines mental and physical well-being.

Among the allegations contained in these lawsuits are that social media applications and their algorithms:

  • Are designed in a manner that prioritizes engagement and profits over user safety.
  • Exploit the diminished decision-making capacity, impulse control and psychological resiliency of teenage users.
  • Fail to verify the age of users and are designed in a manner that frustrates parental control.
  • Fail to warn users of their addictive design and downplay their negative effects in public statements.
  • Could be designed in a manner that substantially decreases harm to minors.

While the lawsuits often detail horrific content viewed by underage plaintiffs, the focus importantly is on the negligent design of a product that, like cigarettes, creates a harmful addiction. Craving social validation and addicted to a relentless stream of often abusive content, users become anxious, depressed, dysmorphic, sleep deprived, malnourished and, as some lawsuits allege, suicidal.

The Science

In 2021, the American Academy of Pediatrics declared a “national emergency in child and adolescent health” in response to soaring rates of mental health issues and suicide among teenagers. Pediatricians are now more likely to treat teens for a mental health issue than they are for a broken bone, infectious disease or other physical ailment. The declaration places some blame on the COVID-19 pandemic but notes that teen mental health was in decline well before.

According to data from the U.S. National Survey on Drug Use and Health, the percentage of teens ages 12-17 reporting a major depressive episode increased from 8.1 percent to 15.8 percent between 2009 and 2019 (the percentage increased from 11.4 percent to 23.4 percent among girls). Suicide rates among U.S. teens ages 15-19 doubled between 2007 and 2015 and have continued to increase in recent years.

Now, we all know “correlation does not imply causation,” but it is hard to ignore that the inflection in worsening teen mental health at the beginning of the 2010s coincides with the acceleration in social media use. The correlation is also documented in many studies at the individual level: the incidence of mental health problems is higher among teens who spend more hours per day engaged with social media applications. By Praedicat’s count, more than 80 peer-reviewed studies now report a positive correlation between social media use or online gaming and a wide array of adverse mental health conditions, cognitive behaviors and neurological impairments. Only nine studies report no correlation.

When Mark Zuckerberg was asked during a congressional hearing in March 2021 about the connection between social media and the declining mental health of children, though, he responded: “I don’t think the research is conclusive on that.” And he’s right on that point. The typical social media study design does not readily support causal inference. The observed correlation could be spurious, or the causation could even run in the opposite direction, with poor mental health causing greater social media use.

A study forthcoming in the American Economic Review, however, reports that student mental health declined on college campuses that gained access to Facebook in the platform’s early years relative to campuses that did not. Under the assumption that Facebook’s choice of campuses for initial access was unrelated to the underlying mental health of the student body, the correlation between Facebook’s roll-out and mental health can be interpreted causally. A 2018 experiment in which a sample of Facebook users was randomly assigned to have their accounts deactivated during the run-up to the November 2018 mid-term elections also demonstrated a causal effect: participants whose accounts were deactivated reported significantly higher subjective well-being than those whose accounts remained active.
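
The logic behind reading the deactivation experiment causally is random assignment: because a lottery, not user behavior, determined who kept using Facebook, the difference in average well-being between the two groups is not contaminated by reverse causation. The sketch below illustrates that comparison on simulated data; the sample sizes, scale and assumed 0.4-point effect are invented for illustration, not the study's actual figures.

```python
# Illustrative difference-in-means estimate for a randomized deactivation
# experiment. All numbers are simulated; this is not the study's data.
import random
import statistics

random.seed(0)

# Simulate self-reported well-being (say, on a 1-10 scale), assuming for
# illustration that deactivation raises it by 0.4 points on average.
control = [random.gauss(6.0, 1.5) for _ in range(1000)]    # accounts stayed active
treatment = [random.gauss(6.4, 1.5) for _ in range(1000)]  # accounts deactivated

effect = statistics.mean(treatment) - statistics.mean(control)
std_error = (statistics.variance(treatment) / len(treatment)
             + statistics.variance(control) / len(control)) ** 0.5

# Because assignment was random, this difference can be read as a causal effect.
print(f"estimated effect of deactivation: {effect:.2f} +/- {1.96 * std_error:.2f}")
```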

The accumulated science to date was enough to convince a British coroner in late September to rule that social media use contributed to the suicide of Molly Russell at age 14. But it is far from clear that this science will be enough to convince Judge Gonzalez Rogers, presiding over MDL No. 3047, that a U.S. jury should be allowed to hear expert testimony and decide whether social media companies are liable for plaintiffs’ injuries. Even if she is inclined to accept plaintiffs’ arguments that, in principle, social media use can lead to poor mental health outcomes, she may rule that it is a bridge too far to conclude that a specific plaintiff’s mental health problems were caused by a social media application.

Algorithmic Liability

Nylah Anderson died by strangulation at age 10 after responding to a “blackout challenge” video shown on TikTok that encourages children to choke themselves until they pass out. Her parents filed suit in May alleging that TikTok’s algorithms determined the video was likely to be of interest to Nylah and that she died as a result. In October, however, a district court judge dismissed the lawsuit, reasoning that Section 230 of the federal Communications Decency Act shields TikTok from liability. Section 230 bars claims that treat “interactive computer services” like social media platforms as the publisher of third-party content, meaning these companies cannot be held liable for the ill effects of that content.

The Anderson plaintiffs were well aware of the Section 230 defense, arguing that their claims targeted TikTok’s own algorithms rather than its third-party content, but the judge was not convinced: TikTok’s algorithms are a means of screening and promoting third-party content, which is what a publisher does, and so the plaintiffs’ claims were still treating TikTok as a publisher.

A similar case before the Ninth Circuit, though, was decided in the plaintiffs’ favor. In Lemmon v. Snap, plaintiffs claimed that Snapchat’s “Speed Filter” encouraged the dangerous high-speed driving that killed three young men in 2017. The court ruled that the plaintiffs’ claim did not hinge on Snapchat as a publisher of the Snap messages that preceded the crash but rather on Snapchat’s negligence in designing and promoting a feature of its own creation.

The Supreme Court agreed in October to weigh in on the scope of Section 230 protections in Gonzalez v. Google. In that case, plaintiffs claim YouTube’s algorithms promoted incendiary videos created by ISIS and designed to enlist support and recruits for terrorist activities, including the November 2015 terrorist attack in Paris that resulted in the death of the plaintiffs’ family member Nohemi Gonzalez. The justices will consider whether the intent of Section 230 was to bar claims that implicate the technology of “Internet publishing” in addition to the published content itself.

The Broader Playing Field

Mass litigation events often unfold over decades. What starts as an isolated event spills over to adjoining fields of play, drawing in new sets of plaintiffs and defendants armed with novel legal theories and evidence. Given current science and an assumption that plaintiffs overcome the Section 230 defense, Praedicat’s liability catastrophe model estimates that economy-wide loss (indemnity plus defense) for a mass litigation event centered on addictive software design has an expected value of $15 billion and a 5 percent chance of exceeding $70 billion. This litigation would encompass social media and Internet gaming companies and potentially spread to smartphone and other device makers as well. By the way, addiction to Internet gaming, or “Internet gaming disorder,” has been proposed for inclusion in the Diagnostic and Statistical Manual of Mental Disorders.
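
To make those two figures concrete, the sketch below checks a purely hypothetical scenario set against them. It is not Praedicat's catastrophe model; the scenario probabilities, losses and descriptions are invented solely to show one shape of loss distribution consistent with a $15 billion expected value and a 5 percent chance of exceeding $70 billion.

```python
# Hypothetical scenario set, invented for illustration only; it is chosen to be
# consistent with the published figures (expected loss ~$15B, 5% chance > $70B).
scenarios = [
    # (probability, loss in $B, description)
    (0.60,   2.0, "litigation largely fails or settles early"),
    (0.35,  25.1, "social media and gaming defendants pay materially"),
    (0.05, 100.0, "litigation spreads to device makers and beyond"),
]

expected_loss = sum(p * loss for p, loss, _ in scenarios)
prob_over_70 = sum(p for p, loss, _ in scenarios if loss > 70)

print(f"expected economy-wide loss: ${expected_loss:.1f}B")  # ~$15B
print(f"probability loss exceeds $70B: {prob_over_70:.0%}")  # 5%
```

Many other distributions would match the same two constraints; the point is only that the published figures imply a highly skewed loss curve in which a small chance of a very severe outcome contributes materially to the expected value.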

But why stop there? The use of algorithms to advantage one party at the expense of another is commonplace throughout our economy. Individuals on the short end of the algorithmic stick will rightly question whether they are being treated fairly and, when the perceived harm is serious enough, take the fight not just to big tech but to wherever algorithms are deployed.

Congress wrote Section 230 in 1996 with the belief that the information age would never reach its potential if companies could be held liable for each and every bit of information transmitted via their technology. This reasoning provides little solace to the individuals today who believe they or their loved ones have been harmed by what investigative journalist Max Fisher has labeled the “Chaos Machine.” Aggrieved individuals will push back until society at large can strike a new balance between the benefits of low-cost access to information and the risk that an algorithmic approach to collecting and disseminating that information does tangible harm to people, property and institutions.