Social media giants made decisions that allowed more harmful content into people's feeds, after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers have told the BBC. More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users' attention.
An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more borderline harmful content - which includes misogyny and conspiracy theories - in users' feeds to compete with TikTok. "They sort of told us that it's because the stock price is down," the engineer said.
A TikTok employee gave the BBC rare access to the company’s internal dashboards of user complaints, revealing how staff were instructed to prioritize certain cases involving politicians over reports of harmful posts featuring children. Decisions were being made to maintain a strong relationship with political figures to avoid threats of regulation or bans.
The whistleblowers who spoke to the BBC documentary, Inside the Rage Machine, provide insight into how the industry responded to TikTok's growth, as its engaging algorithm left rivals scrambling. A senior Meta researcher said Instagram Reels was launched without adequate safeguards, resulting in a higher prevalence of bullying, harassment, and hate speech than on the main Instagram platform.
Whistleblowers also described a pervasive normalization of harmful content across social media. TikTok's internal safety team said escalating cases of terrorism and sexual violence were overshadowed by the need to cater to public figures, raising serious concerns about user safety.
In the race to maximize engagement, algorithms at both platforms not only discouraged the removal of harmful content but actively promoted it, leaving a trail of safety concerns, particularly for vulnerable users like teenagers. The tragedy is compounded by social media’s growing role in shaping young minds and opinions, as highlighted by cases of radicalization and increased tolerance for violence observed by counter-terror specialists.
As the algorithms gain more autonomy and less scrutiny amidst competitive pressures, the future of social media safety and its ethical implications remain precariously balanced.