Published on February 27th, 2025
Instagram users worldwide have recently experienced an alarming surge in disturbing, violent, and NSFW (Not Safe for Work) content in their Reels feed.
Many users took to social media to express their concerns, questioning whether this was a glitch, an algorithm update, or a moderation failure.
Meta, the parent company of Instagram, has since addressed the issue, confirming that an “error” led to the widespread exposure of such content.
But what exactly happened, and what does this mean for Instagram’s content moderation policies? Here’s everything you need to know.
What’s Happening with Instagram Reels?
Over the past few days, Instagram users have been reporting an influx of violent and graphic content appearing on their Reels feed. This includes videos featuring:
- Graphic violence
- Severe injuries
- Dead bodies
- Violent attacks
Many of these videos were marked as “Sensitive Content,” yet they remained easily accessible, raising serious concerns about Instagram’s ability to moderate content effectively.
A user on X (formerly Twitter) posted:
“In the past few hours, my IG Reels feed has suddenly started showing violent or disturbing videos out of nowhere. Feels random.”
Others have shared similar experiences, wondering whether this surge in sensitive content was a mistake or a deliberate change in Instagram’s recommendation algorithm.
Why Are You Seeing More Sensitive Content on Instagram Reels?
On Thursday, Meta officially acknowledged the problem and apologized, attributing the spike in disturbing content to an unintended error in Instagram’s recommendation system.
A Meta spokesperson told CNBC:
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.”
Meta’s Content Moderation Policies
Instagram’s official policy aims to shield users from distressing content by removing or restricting posts that display excessive violence or explicit material. According to Meta, content that includes the following is strictly prohibited:
- Dismemberment
- Visible innards or charred bodies
- Sadistic remarks towards suffering humans or animals
However, certain graphic posts may be allowed if they serve to raise awareness about human rights violations, armed conflicts, or terrorism. Such content is typically restricted with warning labels.
How Are Users Reacting to This Issue?
Despite Meta’s claim that the issue has been fixed, many users report that they are still seeing sensitive content on their Instagram feeds.
One user wrote on X:
“Opened Instagram this morning, and my entire Reels feed was a horror show – murder clips, brutal fights, rape cases, deadly accidents. And it wasn’t just me; people worldwide saw the same. Was this an algorithm glitch or intentional? Either way, it’s disturbing.”
Others have shared videos of their feeds flooded with sensitive content warnings, while some speculate that Instagram’s content moderation system may not be as effective as Meta claims.
Another frustrated user shared:
“Is it just me, or did Instagram Reels go crazy today? My feed is full of violent and sensitive stuff!”
A Worrying Trend?
The sudden rise in disturbing content has raised concerns about Instagram’s content moderation efforts. Is this an isolated incident, or part of a larger trend?
Has Something Like This Happened Before?
This isn’t the first time Instagram has faced criticism over content moderation.
Past Concerns Over Explicit Content
A Wall Street Journal investigation previously revealed that Instagram’s algorithm had been promoting sexually explicit Reels to users as young as 13. Researchers found that as soon as a new Instagram account was created with a teenage user profile, the platform started recommending suggestive videos.
Between January and April 2024, researchers set up test accounts to analyze Instagram’s recommendation system. Within minutes, these accounts were flooded with suggestive content, some of which included nudity and explicit acts.
By contrast, similar tests conducted on TikTok and Snapchat showed that those platforms did not push such content to new teenage users, even when they actively searched for related material.
Meta’s Internal Study on Content Moderation
An internal study conducted by Meta in 2022, reviewed by the Wall Street Journal, found that younger users were more likely than adults to see inappropriate content. The study revealed that underage users were:
- 3x more likely to encounter nudity-related posts
- 1.7x more likely to see violent content
- 4.1x more likely to experience cyberbullying
Despite Meta’s claims of strict content moderation, the company’s automated tools have struggled to prevent inappropriate content from reaching young audiences.
What’s Next for Instagram Users?
While Meta has apologized and stated that the issue has been resolved, some users remain skeptical. If you are still experiencing a surge of disturbing content on your Instagram feed, here’s what you can do:
Steps to Take:
- Report Content – Use Instagram’s built-in reporting feature to flag violent or sensitive content.
- Adjust Sensitive Content Settings – Open Instagram’s settings and use the Sensitive Content Control (its exact menu location varies by app version) to limit how much sensitive material is recommended to you.
- Clear Cache & Refresh Recommendations – Sometimes, clearing the app’s cache or refreshing your feed helps reset recommendations.
- Contact Instagram Support – If the issue persists, reaching out to Instagram’s support team may help resolve it.
Final Thoughts
The recent flood of disturbing content on Instagram Reels highlights potential flaws in Meta’s content moderation system. While the company claims the issue was an error, past investigations show that Instagram’s recommendation algorithm has long faced scrutiny for promoting inappropriate material.
For now, users should remain vigilant and take necessary steps to filter their content.
As social media platforms continue to evolve, it remains to be seen whether Meta can truly create a safer digital environment for its global user base.
Have you experienced disturbing content on your Instagram feed? Share your thoughts in the comments.