Photo Courtesy: Taylor Swift Instagram page
AI-generated pornographic images of internationally acclaimed singer Taylor Swift are going viral on social media, stoking fears about the harm the technology can cause when misused.
The images are circulating predominantly on the microblogging platform X, formerly known as Twitter before billionaire Elon Musk bought it.
One of the most prominent posts on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy, The Verge reported.
According to reports, the post was live on the platform for over 17 hours before it was removed.
“This is a prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square,” Ben Decker, who runs Memetica, a digital investigations agency, told CNN.
Decker said the exploitation of generative AI tools to create potentially harmful content targeting public figures of all kinds is growing quickly and spreading faster than ever across social media.
“The social media companies don’t really have effective plans in place to necessarily monitor the content,” he said.
Addressing its policies on synthetic and manipulated media and nonconsensual nudity, X's Safety account (@Safety) posted on January 26, 2024: "Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We're committed to maintaining a safe and respectful environment for all users."
The platform shared the statement as the fake images of Swift were going viral.
X did not, however, mention Swift or the images directly in its post.