The group stated, “The sexually explicit, A.I.-generated images depicting Taylor Swift are upsetting, harmful, and deeply concerning. The development and dissemination of fake images — especially those of a lewd nature — without someone’s consent must be made illegal. As a society, we have it in our power to control these technologies, but we must act now before it is too late. SAG-AFTRA continues to support legislation by Congressman Joe Morelle, the Preventing Deepfakes of Intimate Images Act, to make sure we stop exploitation of this nature from happening again. We support Taylor, and women everywhere who are the victims of this kind of theft of their privacy and right to autonomy.”
SAG-AFTRA is not the only organization to speak out on the issue. Microsoft CEO Satya Nadella said Friday that the company has to “move fast” on combating nonconsensual sexually explicit deepfake images, after AI-generated fake nude pictures of Taylor Swift went viral this week.
In an exclusive interview with NBC News’ Lester Holt, Nadella commented on the “alarming and terrible” deepfake images of Swift posted on X that by Thursday had been viewed more than 27 million times.
“Yes, we have to act,” Nadella said in response to a question about the deepfakes of Swift. “I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.”
According to NBC, Microsoft has invested in and developed AI technology of its own, most notably as a primary investor in OpenAI. The company has also integrated artificial intelligence into its products, including the Copilot AI chatbot and the Bing search engine.
“I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced,” Nadella said. “And there’s a lot to be done and a lot being done there.”
404 Media reported that the deepfake images of Swift that went viral on X were traced back to a Telegram group chat, where members said they used Microsoft’s generative-AI tool, Designer, to make such material.
Nadella did not comment directly on the latest report.
In a statement to 404 Media, Microsoft said it was investigating the reports and would take appropriate action to address them.
“Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service,” Microsoft said in its statement to 404 Media. “We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.”