

As artificial intelligence rapidly reshapes how information is created, interpreted, and manipulated, Italian Prime Minister Giorgia Meloni has raised concerns about a growing online trend: AI-generated deepfakes. This time, however, the issue became personal.
A fabricated image of Meloni in lingerie began spreading online and rapidly drew attention. Convincing enough to mislead people at first glance, the picture was entirely AI-generated, yet it was shared across the internet as though it were real.
Meloni responded directly to the viral images, confirming they were fake and suggesting they were meant as a political attack. While she joked lightly that the creator had “improved” her, she quickly shifted focus to the bigger issue: how dangerous such content can be.
She warned that deepfakes are becoming easier to create and harder to detect, making it simpler to mislead people, distort narratives, and target individuals. She also pointed out an important imbalance: she can respond publicly, but most people cannot.
Calling for caution, she urged users to verify information before believing or sharing it, noting how quickly fake content spreads online and how difficult it is to contain once it does.
In her tweet, Meloni said several fake AI-generated images of her were circulating online and being wrongly presented as real, stressing that the issue goes far beyond her personal case. She called deepfakes a “dangerous tool” capable of deceiving and manipulating people, adding that while she can defend herself, many others cannot: today it may be her, but tomorrow it could be anyone.
Reactions online were mixed. Some users dismissed AI-generated images as so widespread that they have lost credibility altogether, while others likened them to something devalued and no longer trustworthy.
Others questioned why the creator of the fake image wasn’t identified, calling out the lack of accountability. Some echoed the joke that AI had “improved” her, but still warned that fake news in any form is unacceptable.
More critical voices raised concerns about AI being used for scams and misinformation, especially against people without strong public platforms.