Ethically Designing Social Media

Last week, Ravi Iyer, a former Data Science Manager, Research Manager, and Product Manager at Meta, shared insights into the challenges and potential solutions for creating a more ethical and socially responsible social media environment. Based on his experiences with platforms like Facebook, Iyer highlighted how design can mitigate negative societal impacts, emphasizing the importance of thoughtful content moderation, incentivizing positive engagement, and addressing the influence of algorithms on user behavior.

Key insights from Meta’s approach:
Understanding dangerous speech

  • Beyond hate speech: Media coverage often focuses on hate speech but misses other forms of dangerous content, such as “fear speech” — messages that provoke fear without direct aggression yet can still incite violence (e.g., rumors leading to mob actions).
  • More on fear speech: Fear speech is content whose message centers on a warning or alarm, sowing panic and heightened suspicion about particular groups. Effective moderators need to consider not just the content but the speaker, the audience, and the medium. Iyer noted that unchecked fear speech has led to real-world violence, as in the child abduction rumors that led to mob lynchings in India.

Design interventions that work

  • “3% Solutions” and engagement incentives: Minor design tweaks — what Iyer termed “3% solutions” — can influence behavior significantly by “tilting the playing field.” For instance, reducing incentives for engagement-driven content lowers the prominence of inflammatory posts.
  • Quality of content over engagement: Rather than simply promoting highly engaged posts, platforms benefit from user feedback on content quality, helping reduce the spread of low-value or harmful information. Meta experimented with surveys asking users to rate content quality, shifting focus from quantity to substance.
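The re-weighting described above can be illustrated with a short sketch. This is not Meta's actual ranking code; the weights, field names, and scoring function are all hypothetical, meant only to show how blending a survey-based quality signal with raw engagement "tilts the playing field" away from inflammatory posts.

```python
# Illustrative sketch (hypothetical model, not Meta's ranking system):
# blend engagement with survey-rated quality so that highly engaging
# but low-quality posts lose prominence.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement: float      # normalized engagement signal, 0..1
    survey_quality: float  # mean user-rated quality, 0..1

def rank_score(post: Post, engagement_weight: float = 0.3) -> float:
    """Down-weight raw engagement and reward survey-rated quality.

    A low engagement_weight is a small "3% solution"-style tweak:
    the exact value here is an assumption for illustration.
    """
    return (engagement_weight * post.engagement
            + (1.0 - engagement_weight) * post.survey_quality)

posts = [
    Post("inflammatory", engagement=0.95, survey_quality=0.20),
    Post("informative", engagement=0.55, survey_quality=0.90),
]
ranked = sorted(posts, key=rank_score, reverse=True)
# With quality weighted more heavily, the informative post ranks first.
```

The design point is that the intervention is a weighting change, not censorship: the inflammatory post still appears, it just stops being rewarded for engagement alone.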

Improving user experience with interface design

  • Positive vs. negative engagement: Interface elements should prompt users for feedback on content they appreciate, rather than relying solely on negative reports, which skew algorithmic responses. This shift can create a healthier ecosystem that discourages divisive content.
  • Functionality limits and user protection: Addressing issues like sextortion rings, bullying, and unwanted advances, Meta introduced options like “profile lock” in India to help vulnerable users feel safer on the platform.
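The positive-versus-negative feedback idea above can be sketched in a few lines. Everything here is an assumption for illustration (the function name, the per-view normalization, the example counts); the point is simply that weighing appreciation signals alongside reports avoids letting complaints alone drive the algorithm's response.

```python
# Illustrative sketch (hypothetical signal model): combine positive
# "this was worth my time" feedback with negative reports, rather than
# letting reports alone skew the algorithmic response.

def feedback_score(appreciations: int, reports: int, views: int) -> float:
    """Net per-view feedback in [-1, 1].

    Relying only on `reports` punishes anything controversial;
    counting appreciations as well rewards content users actively value.
    """
    if views == 0:
        return 0.0
    return (appreciations - reports) / views

# A divisive post: many reports, few appreciations.
divisive = feedback_score(appreciations=5, reports=40, views=1000)
# A valued post: fewer total signals, but mostly positive.
valued = feedback_score(appreciations=30, reports=2, views=1000)
# The valued post scores higher even though the divisive one
# generated more total feedback activity.
```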

Balancing aspirations and realities of technology

  • Concerns with manipulative algorithms: Many users, particularly younger ones, feel manipulated by social media algorithms that encourage prolonged screen time. Design choices like endless scrolling or autoplaying video suggestions exploit users’ attention, which is concerning given the long-term implications for mental health.
  • Mental health and regulatory measures: Studies cited by Iyer show a strong connection between social media usage and increased mental health issues, especially among teenagers.

Challenges with influencer culture

  • Pressure on influencers to amplify content: The influencer model can lead to increasingly extreme behaviors as influencers compete for attention. This dynamic risks normalizing unethical behaviors, often due to the pressure to grow engagement and maintain influence.

Broader questions and proposed solutions

  • Radicalization and content moderation: Platforms face a dilemma in preventing radicalization, as extreme views often dominate due to high engagement. Limiting engagement incentives could reduce the visibility of radical influencers, creating a more balanced ecosystem.
  • Role of government in moderation: While Iyer expressed skepticism about government regulation of speech, he argued that guidelines should target platform design rather than content. Regulating the “time and place” aspects of online speech, rather than the speech itself, could help create a digital environment that is both safe and free.
  • Teaching digital literacy: Iyer urged educators to teach students to critically assess content, encouraging awareness of recycled or manipulative content tactics. Such education could empower users to navigate social media more responsibly.

Through thoughtful design and ethical considerations, Iyer suggested, social media platforms could evolve into tools that prioritize users’ well-being over profit, promoting meaningful connections and long-term platform health over sensationalism and engagement.
