Section 230 Has a Design Problem
The internet was built on a promise of connectivity, and the rise of user-generated content and social media has helped it overdeliver on that promise. But as Big Tech’s influence has grown, so too have the unintended consequences of its platforms, which are now ushering in new legal and regulatory precedents for tech companies of all sizes.
Section 230 of the Communications Decency Act of 1996 was enacted to protect websites and online platforms from liability for content posted by third parties. Once heralded as the law that made the modern internet possible, Section 230 now sits at the center of a fierce debate about accountability, safety, and online speech. With courts and policymakers signaling a shift in how platforms are held responsible, the question is no longer whether the rules will change, but how prepared platforms are for what comes next.
Last month, juries in California and New Mexico handed down landmark verdicts against two of the biggest players in tech, holding them responsible not for the content users posted on their respective platforms, but for how their products were designed. These disputes aren’t happening in a vacuum: conversations around addictive design features, such as infinite scroll and autoplay, have gained significant traction in the U.S., Europe, Australia, and other markets.
For nearly three decades, Section 230 has given online platforms broad legal cover for user-generated content. But the internet of 1996 looks nothing like what we have today. Platforms aren’t passive hosts anymore. They are massive, consumer-driven systems that shape what billions of people see, think, and feel every day. Big Tech critics argue that when a company designs a product that it knows can cause harm, and profits from the engagement that harm generates, shielding it from accountability doesn’t protect innovation; it protects negligence.
The March jury verdicts represent a shift that’s been building for years. In 2023, the U.S. Surgeon General warned that social media poses a profound risk to youth mental health, pointing to platform design and algorithmic recommendations. Pew Research Center found that 90% of U.S. teens ages 13 to 17 reported using at least one social media platform per day. In the U.S., we don’t allow automakers to sell cars without seat belts or pharmaceutical companies to market drugs without disclosing side effects. These aren’t radical ideas. They are basic expectations we’ve built into nearly every industry that touches people’s daily lives, and the latest verdicts suggest tech should be no different.
Cracks in the Foundation
The debate over Section 230 is no longer limited to content moderation. Product design and the harms those designs can create are facing real scrutiny. Pressure is building in three areas that stand to reshape what accountability means for platforms. While the recent verdicts targeted Big Tech, the legal and regulatory standards emerging from these cases will inevitably trickle down to small- and mid-sized platforms that rely on many of the same design patterns and engagement strategies.
- Algorithmic transparency. Algorithms decide what you see online, but many platforms say little about how those decisions get made. That is a problem when those same systems are driving users toward harmful content in the name of engagement. The EU’s Digital Services Act now requires the largest platforms to conduct risk assessments and submit to independent audits, and the U.S. is heading in a similar direction. Yet, according to New America, most platforms remain reluctant to open their systems to outside scrutiny. If platforms want public trust, the public needs to understand how their systems work.
- Product design and user safety. The March verdicts frame features like autoplay, infinite scroll, and algorithmic feeds not as neutral tools but as deliberate design choices with measurable effects. In 2023, Gallup reported that U.S. teens spend nearly five hours a day on social media, much of it shaped by algorithmically curated content. When a product is engineered to maximize time spent on it, and the people most affected are minors, treating its design as beyond reproach doesn’t hold up, no matter the size of the company behind it.
- Free speech and content moderation. As platforms face growing pressure to act on harmful content, the risk of overcorrection is real. Too little moderation invites legal consequences; too much erodes trust and chills open dialogue. For smaller platforms with fewer resources, navigating this tension can feel especially daunting. But the difficulty of striking that balance isn’t a reason to avoid it; it’s a reason to build better frameworks that protect both safety and expression.
These three areas don’t exist in isolation. They feed into each other, and together they define what platform accountability will look like going forward.
What Tech Platforms Should Do Now
Companies that wait for final rules before acting will find themselves behind. The precedents being set today will shape expectations for platforms across the board, not just the giants. Here are five steps worth taking now:
- Run product impact assessments regularly. Evaluate features such as recommendation algorithms and engagement loops for potential harms. Document mitigation efforts to show good faith and due diligence.
- Open up about how your algorithms work. Provide clear explanations of how content is prioritized and how data is used. Embrace independent audits and oversight teams to build trust.
- Put real protections in place for users, especially minors. Implement age-appropriate design standards, enhance parental controls, and build safer defaults into the product experience.
- Get involved in policy conversations. Stay ahead of regulatory trends by engaging with lawmakers, regulators, and third-party advocates early and often.
- Prepare for litigation and reputational risks. Review legal exposure, update crisis response plans, and develop communications strategies before you need them.
The March verdicts are a wake-up call, and not just for Big Tech. The days of broad immunity under Section 230 are numbered, and the future will demand greater transparency, safety, and accountability from platforms of all sizes.
Trilligent can help future-proof your brand by crafting responsible design approaches, stakeholder engagement strategies, and honest communications plans.