As AI trends continue to dominate social media, more and more people are uploading their personal photos to AI-powered apps. Whether it's the recent retro saree trend powered by Google Gemini, the 3D figurine filter from Google's Nano Banana, or ChatGPT’s Studio Ghibli-style avatars, users are jumping on these viral trends and generating massive amounts of AI-edited content, often without thinking twice about the implications of sharing their personal images.
In the midst of this AI craze, one crucial question arises: How safe is it to share your images with AI apps, and what really happens to them afterward?
What AI Apps Do with Your Images
Most AI applications, including Google Gemini, use the uploaded images to process and deliver enhanced results such as artistic filters, facial swaps, or style changes. Some may also use the data to train their machine learning models and to develop or improve features.
However, the core concerns revolve around data retention and user consent. Do these apps store your images? Can they reuse your photos to train future AI models? Are your images shared with third-party companies?
There are significant risks associated with uploading personal photos to AI apps. These include the misuse of facial recognition data, potential use of images for unauthorized surveillance or deepfake creation, data breaches or leaks due to vulnerabilities in AI infrastructure, and identity theft.
There’s also the problem of unclear or hidden consent, where users may unknowingly permit the app to use their images for AI training or commercial purposes.
According to Google’s official privacy policy, when using Gemini’s image tools:
Your data may be used to improve Google's AI models only if you opt in.
Uploaded images are temporarily stored for processing and may be retained longer if used for feedback or feature development.
Users can manage or delete their data via the “My Activity” section of their Google Account.
Crucially, Google currently allows users to disable data sharing in their privacy settings.
The popularity of these tools has led to a surge in engagement. The Nano Banana tool alone has been used to create or modify over 200 million images so far.
The Google Gemini app surpassed 10 million downloads shortly after launching the Nano Banana 3D figurine feature.
A 2023 study by the Mozilla Foundation revealed that 80% of the most widely used AI applications either lacked transparent data policies or made it difficult for users to opt out of data collection.
In 2022, the AI-powered app Lensa faced backlash for allegedly using user-uploaded photos to train its AI models without proper disclosure.
Meanwhile, Norton’s 2024 Cyber Safety Insights report noted that while 68% of people are concerned about the misuse of personal data by AI apps, 42% still use these platforms without reading the terms and conditions.
To protect your privacy, here are a few expert-recommended precautions:
Read the privacy policy, especially on data use and storage.
Avoid uploading sensitive images or anything with personal details.
Opt out of data training when available.
Stick to apps from established companies such as Google or Adobe; avoid unknown ones.
Clear app data regularly and review your Google My Activity settings.
AI image tools can be fun, creative, and visually stunning. But in our race to generate the perfect avatar or jump onto the latest viral trend, we may be exposing far more than just our faces. Before uploading your next photo to an AI app, take a moment to ask yourself: Do I really know where this image is going and how it might be used?