Searching for Taylor Swift on X, formerly known as Twitter, showed an error message Saturday after pornographic, AI-generated images of the singer were circulated across social media last week.
X’s search function only displays results for Swift under its “Media” and “List” tabs. However, Swift is still searchable using certain Boolean operators and query syntax. Inputting “Taylor Swift” with quotation marks, as well as “Taylor AND Swift,” yields normal search results under all of X’s search tabs.
The search function error message does not appear on either Instagram or Reddit.
The fake images of Swift — which show the singer in sexually suggestive and explicit positions — were predominantly circulating on X, and were viewed tens of millions of times before being removed from social platforms.
Like those of most major social media platforms, X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” the company told CNN in a statement Saturday.
Digitally manipulated pornographic images of celebrities are nothing new on the internet, and they have been circulating online since the advent of software like Photoshop. But the rise of mainstream artificial intelligence tools has heightened concerns because of their ability to create convincingly real and damaging images.
The incident comes as the United States heads into a presidential election year, prompting fears misleading AI-generated images and videos could be used in disinformation efforts.
And it’s not just public figures with massive online presences who fall victim to this type of harassment.
In November, a 14-year-old New Jersey high school student called on school and government officials to take action after she said photos of her and more than 30 female classmates were manipulated and possibly shared publicly.
At the time, the school provided CNN with a statement from Superintendent Dr. Raymond González, who said, “All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere.”
Nine US states currently have laws against the creation or sharing of this kind of nonconsensual deepfake imagery, meaning synthetic images created to mimic a person’s likeness.