Policing a marginalized community won’t resolve Meta’s safety issues, argues activist Lennon Torres.
In April, Meta quietly reversed its removal of an Instagram post honoring older lesbian relationships in Brazil. The post contained no sexual content and posed no harm to minors; it highlighted an era when lesbians had to hide their relationships behind labels like “roommates” or “gal pals.” Meta removed it under its hate speech rules, but after the LGBTQ+ community advocated for the content’s restoration, the Oversight Board acknowledged the takedown as over-enforcement by automated systems unable to understand context or reclaimed language.
Policymakers must recognize the implications of content moderation mistakes. Restricting social media access to “protect kids online” overlooks the human cost of those mistakes, as the Brazilian case shows. When platforms rush to remove speech, they miss nuance and silence the very people whose stories demand empathy and context. Instead of policing content, lawmakers should regulate the harmful design choices that actually drive risk, such as endless scroll and engagement-based recommendations.
LGBTQ+ youth rely on online spaces for community and support more than their peers do, yet they also face disproportionately unsafe interactions there. Australia’s social media ban for minors shows how blanket restrictions can isolate the very groups they claim to protect. Meanwhile, recommendation systems built to maximize engagement expose young users to risky content and connections. Addressing these harms requires platforms to protect users by design, not to censor content.
Design codes can reduce that risk without turning companies into cultural censors: they call for product refinement rather than infringement of constitutional rights. By regulating how platforms are built, we can protect young people from harmful systems and create a safer online environment. The Brazilian case makes the lesson plain: regulate platform design, not content.
This article reflects the opinion of Lennon Torres, Movement Director at the Heat Initiative and founding partner of The Attention Studio.
