Amid growing scrutiny over online safety, Google on Tuesday, February 10, announced a simplified process for users to request the removal of non-consensual explicit images from its search results. Under the updated system, users can click the three dots above an image in Google Search, select “Remove result,” and then choose “It shows a sexual image of me.”
Users can now also flag multiple images in one go instead of filing separate complaints for each result. “You no longer have to report each image individually. This new tool lets you select and submit requests for multiple images from search results in a single, simple form,” the search giant said. Google added that the streamlined reporting feature will roll out to most countries in the coming days.
Under India's amended IT Rules, online platforms must remove non-consensual intimate imagery within two hours, down from 24 hours earlier, while other unlawful content must be taken down within three hours instead of 36. The rules also mandate prominent labelling of AI-generated content on platforms such as YouTube, Instagram, and Snapchat.
Non-consensual explicit imagery has surged with the rise of generative AI, and tools like xAI's Grok have faced regulatory backlash for enabling the creation of such content. Google said its updated process will also let users opt in to proactive filters that block similar explicit results from appearing. Users can track their requests under the 'Results about you' tab, receive email updates, and will be directed to "expert organizations that provide emotional and legal support."
Snapchat announced an expansion of its ‘Home Safe’ feature, which previously allowed users to notify friends and family upon reaching home. The company said users can now send alerts after arriving at other locations as well. “Arrival Notifications now work for everyday moments — like letting someone know you’re back for the night while traveling, or automatically sharing when you arrive at a weekly class, practice, or meeting — without needing to remember to send a message,” Snapchat wrote in a blog post.
Separately, OpenAI said it is stepping up protections for Indian teenagers by introducing age prediction tools, age-appropriate policies, and parental controls, in collaboration with policymakers, regulators, educators, and child safety experts. The company also pointed to existing safeguards within ChatGPT, such as in-app reminders to take breaks, directing users to real-world support if they express suicidal intent, and blocking the generation of child sexual abuse material (CSAM) and child sexual exploitation material (CSEM).
Emphasising its safety-first approach, OpenAI said, “The way ChatGPT responds to a 15-year-old should differ from the way it responds to an adult […] This approach is especially important in India: as AI adoption accelerates, AI literacy needs to be taught alongside other areas of education.”