For over a decade, social media platforms have been largely untouchable in court, shielded by laws designed to protect free speech and by the sheer volume of user-generated content they host. But that may be about to change. Two landmark lawsuits in California target not the content on platforms like Meta (Facebook, Instagram), Google (YouTube), Snap (Snapchat), TikTok (ByteDance), and Discord, but the design of the platforms themselves. The core argument? That these platforms' features are deliberately engineered to be addictive, and that the companies should be held responsible for the psychological harm they inflict, particularly on young people.

The Rise of Addiction-Focused Lawsuits

The lawsuits, brought by school districts, parents, and individuals, allege that endless scrolling, auto-playing videos, constant notifications, and algorithmic feeds exploit users by hijacking their attention. This isn’t about policing what people post; it’s about the platforms intentionally creating an environment that keeps people hooked. The plaintiffs argue that these “defects” turn social media into addictive products, akin to slot machines, designed to maximize engagement at all costs.

This is a critical shift in strategy. Traditionally, legal battles focused on content moderation (bullying, harmful videos, and the like). The current cases bypass those debates by targeting the underlying mechanics that drive addiction, an approach that sidesteps the First Amendment protections tech companies routinely invoke.

Section 230 and the Free Speech Shield

For years, social media giants have benefited from Section 230 of the Communications Decency Act, which largely shields them from liability for user-posted content. The law, enacted in 1996, made sense when the internet was a nascent space. Today, however, it allows companies to avoid accountability even as their platforms demonstrably harm users.

Several states have tried to regulate social media by focusing on content, passing laws to limit minors' access or to ban features such as "like" counts. These efforts have largely failed, as companies successfully argued that the laws violate free speech rights. The California lawsuits avoid this trap by arguing that the design of the platforms is the problem, not the speech itself.

A Tobacco-Style Reckoning?

The legal strategy echoes the cases brought against tobacco companies in the 1990s. Then, state governments argued that the companies knew their products were harmful but concealed the truth. Now, plaintiffs allege that social media companies likewise knew their platforms were addictive and exploitative, yet continued to prioritize engagement over user well-being.

Leaked internal documents from Meta already suggest the company was aware of the addictive nature of its products. One internal communication allegedly described Instagram as a “drug,” with employees acknowledging they were “basically pushers.” These documents, along with others from YouTube, are being used to paint a picture of negligence and intentional harm.

The Potential Impact

If successful, these lawsuits could force social media companies to fundamentally change how their products are designed. They might be required to remove features that encourage addiction or to warn users about the harmful effects of excessive use, and they could face financial penalties for the damage already caused.

The trials are ongoing, but the implications are clear: US law may finally catch up to the reality that social media isn’t just a tool for connection; it’s a product designed to exploit human psychology. This could trigger a wave of regulation and force tech companies to take responsibility for the negative consequences of their platforms.