

To combat the spread of ‘AI slop’ on Facebook, Meta announced updates to the platform’s content guidelines on Friday and introduced new impersonation detection tools. The company said creators will now be able to use a centralized dashboard to flag and act on content republished by impersonators, with all reports submitted in one place to simplify the reporting process.
The revised guidelines also clarify the definition of 'original content.' It now encompasses material that is "filmed or produced directly by a creator," as well as reels that remix content or add overlays to offer new elements such as analysis, discussion, or additional information.
Although the new content protection tools can identify duplicate content, they are unable to detect or act on AI-generated deepfakes that replicate a creator’s likeness.
Nonetheless, content duplicated with only slight modifications, such as re-uploads or minimal changes like added borders or captions, will still be treated as unoriginal and deprioritized in distribution. Meta's recent actions follow numerous user complaints that Facebook has allegedly become an 'AI slop hellscape'; the company has responded by cracking down on spammy and unoriginal content while promoting original creator content in users' feeds.
The platform has also strengthened content protections: 20 million accounts were removed last year, contributing to a 33% drop in impersonation reports for major creators. Meanwhile, views and watch time for original content roughly doubled in the second half of 2025. Other platforms, such as YouTube, are likewise expanding AI-based deepfake detection for public figures.