
Deepfake harassment is a form of digital abuse that uses artificial intelligence to superimpose a person's face onto another body, often in explicit or compromising contexts. It is leaving behind invisible scars that many victims struggle to heal.
The Growing Threat of Deepfake Exploitation
Originally developed for entertainment and creative purposes, deepfake technology has been co-opted into a dangerous tool for online harassment. Victims, predominantly women, are waking up to find their likenesses used in pornographic content, political propaganda, or fake social media videos, all without consent.
Mental Health Repercussions: From Shame to PTSD
Victims of deepfake harassment often suffer in silence, fearing shame or disbelief. The psychological impact can mirror that of physical abuse—leading to anxiety, depression, insomnia, and in some cases, symptoms of PTSD. The lingering trauma isn't just from the exposure, but from the violation of one’s identity, agency, and control.
Many victims report:
Panic attacks or fear of being seen in public
Social withdrawal and isolation
Obsessive checking of online platforms for new abuses
A constant sense of being watched or judged
Even the process of seeking help can be retraumatizing, especially in systems where digital harassment is still misunderstood or trivialized.
Why Gen Z Is Especially Vulnerable
The very platforms Gen Z thrives on—Instagram, Snapchat, TikTok—are the same ones where deepfake content circulates rapidly. With advanced AI making it harder to distinguish between real and fake, young users are left more exposed than ever.
What makes the impact deeper is the culture of instant judgment and viral content. A deepfake clip, once shared, can spiral into a meme, a joke, or worse—permanent digital humiliation. And while the internet may move on, the victim remains frozen in that moment of violation.
Legal Gaps & Lack of Support
India's legal framework around AI-generated harassment is still evolving. While Sections 66E and 67A of the IT Act offer some protection against privacy breaches and obscene content, they don't explicitly address AI-powered threats like deepfakes.
Victims often struggle to get FIRs registered, let alone track down anonymous perpetrators. Meanwhile, platforms lack robust reporting tools for AI-based harassment, leaving users with little recourse.
Deepfake harassment is not just a technological issue—it’s a human one. As the digital world expands, so must our empathy, regulation, and support systems. We need to start seeing this form of abuse not as a glitch in the system, but as a threat to the emotional fabric of our society.