https://www.theguardian.com/technology/2025/feb/27/instagram-reels-violent-videos
Instagram Reels flooded with violent videos before Meta says it fixed error
Users complained of ‘not safe for work’ videos in feeds despite some having enabled a setting to filter such content
Meta Platforms said on Thursday it had resolved an error that flooded the personal Reels feeds of Instagram users with violent and graphic videos worldwide.
It was not immediately clear how many people were affected by the glitch. Meta’s comments followed a wave of complaints on social media about violent and “not safe for work” content in Reels feeds, despite some users having enabled the “sensitive content control” setting meant to filter such material.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a spokesperson for Meta said. The company did not disclose the reason behind the error.
Meta’s moderation policies have come under scrutiny after it decided last month to scrap its US factchecking program on Facebook, Instagram and Threads, three of the world’s biggest social media platforms with more than 3 billion users globally.
Violent and graphic videos are prohibited under Meta’s policy, and the company usually removes such content to protect users, with exceptions for videos that raise awareness of issues including human rights abuses and conflict.
The company has in recent years been leaning more on its automated moderation tools, a tactic that is expected to accelerate with the shift away from factchecking in the United States.
Meta has faced criticism for failing to effectively balance content recommendations and user safety, as seen in incidents such as the spread of violent content during the Myanmar genocide, Instagram promoting eating disorder content to teens and misinformation during the Covid-19 pandemic.