Whistleblowers Reveal Social Media Giants Prioritize Engagement Over User Safety

In a revealing exposé, former employees of Meta and TikTok describe how internal pressures led the platforms to amplify harmful content in pursuit of engagement, putting user safety at risk.

A group of whistleblowers from Meta and TikTok has exposed significant risks to user safety on the platforms, alleging that executives prioritized engagement metrics over restricting harmful content. Engineers and insiders described decisions that led to the amplification of 'borderline' harmful material, including violence, misogyny, and conspiracy theories. Their accounts highlight the dangerous dynamics of the tech giants' recommendation algorithms as the companies compete for user attention amid rising concern about the harm to society.

Social media giants made decisions that allowed more harmful content into people's feeds after internal research into their algorithms showed how outrage fueled engagement, whistleblowers told the BBC. More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta described being instructed by senior management to permit more 'borderline' harmful content in order to compete with TikTok, citing financial pressure and the company's falling stock price as motivations. A TikTok employee said internal complaints about dangerous posts, particularly those involving children, were often sidelined in favor of cases involving politicians, suggesting that political relationships were prioritized over user safety.

Whistleblowers involved in the development of Instagram Reels reported significant safety lapses at its launch, which they said produced a spike in bullying and harmful interactions compared with other parts of the platform. The pattern points to a troubling trend: the race to innovate and retain market share frequently comes at the expense of user safety.

The role of recommendation algorithms in amplifying outrage-driven content has drawn particular scrutiny, with whistleblowers arguing that the platforms inadvertently foster negativity and division among users. Insiders acknowledged that these systems reward controversial content most strongly, creating an echo chamber for harmful ideologies.

In response to the allegations, both companies have defended their practices, saying measures are in place to combat harmful content. Employee testimony, however, paints a different picture: one of safety protocols that fall short against aggressive growth targets.