FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Google News, 27 Jan. About 71,100 results for “Taylor Swift deepfakes”.

Taylor Swift’s AI pictures go viral

BBC: “Taylor Swift deepfakes spark calls in Congress for new legislation”. US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were… (14 hours ago)

The Telegraph: “White House voices alarm over pornographic Taylor Swift deepfakes”. The White House has said explicit doctored images of singer Taylor Swift circulating on social media are “very alarming” and urged Congress… (10 hours ago)

Taylor Swift deepfakes spark calls in Congress for new legislation – BBC News

The images were posted on social media sites, including X and Telegram.

US Representative Joe Morelle called the spread of the pictures “appalling”.

In a statement, X said it was “actively removing” the images and taking “appropriate actions” against the accounts involved in spreading them.

It added: “We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”

While many of the images appeared to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.

Deepfakes use artificial intelligence (AI) to make a video of someone by manipulating their face or body. A study in 2023 found that there has been a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI.

There are currently no federal laws against the sharing or creation of deepfake images, though there have been moves at state level to tackle the issue.

In the UK, the sharing of deepfake pornography became illegal as part of its Online Safety Act in 2023.

Democratic Rep Morelle, who last year unveiled the proposed Preventing Deepfakes of Intimate Images Act – which would have made it illegal to share deepfake pornography without consent – called for urgent action on the issue.

He said the images and videos “can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted”.

Pornography accounts for the overwhelming majority of deepfakes posted online, and women make up 99% of those targeted in such content, according to the State of Deepfakes report published last year.

“What’s happened to Taylor Swift is nothing new,” Democratic Rep Yvette D Clarke posted on X. She noted that women had been targeted by the technology “for years”, adding that with “advancements in AI, creating deepfakes is easier & cheaper”.

Republican Congressman Tom Kean Jr agreed, saying that it is “clear that AI technology is advancing faster than the necessary guardrails”.

“Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend,” he added.

Swift has not spoken publicly about the images, but the Daily Mail reported that her team is “considering legal action” against the site which published the AI-generated images.

Worries about AI-generated content have increased as billions of people vote in elections this year across the globe.

This week, a fake robocall claiming to be from US President Joe Biden sparked an investigation. It is thought to have been made by AI.

Taylor Swift deepfake pornography sparks renewed calls for US legislation – The Guardian

Fake but convincing explicit images of pop singer were viewed tens of millions of times on X and Telegram, prompting outcry from US politicians


By Ben Beaumont-Thomas @ben_bt

Fri 26 Jan 2024 14.44 CET

The rapid online spread of deepfake pornographic images of Taylor Swift has renewed calls, including from US politicians, to criminalise the practice, in which artificial intelligence is used to synthesise fake but convincing explicit imagery.

The images of the US popstar have been distributed across social media and seen by millions this week. Previously distributed on the app Telegram, one of the images of Swift hosted on X was seen 47m times before it was removed.

X said in a statement: “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”

Yvette D Clarke, a Democratic congresswoman for New York, wrote on X: “What’s happened to Taylor Swift is nothing new. For yrs, women have been targets of deepfakes [without] their consent. And [with] advancements in AI, creating deepfakes is easier & cheaper. This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”

Some individual US states have their own legislation against deepfakes, but there is a growing push for a change to federal law.

In May 2023, Democratic congressman Joseph Morelle unveiled the proposed Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornography without consent. Morelle said the images and videos “can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted.”

In a tweet condemning the Swift images, he described them as “sexual exploitation”. His proposed legislation has not yet become law.

Republican congressman Tom Kean Jr said: “It is clear that AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend.” He has co-sponsored Morelle’s bill, and introduced his own AI Labeling Act that would require all AI-generated content (including more innocuous chatbots used in customer service settings, for example) to be labelled as such.
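How such labelling might work in practice is not spelled out in the article, so the following is purely an illustrative sketch: one lightweight approach is to embed a machine-readable flag in an image file’s metadata. The Python snippet below uses the Pillow imaging library; the field names (“ai_generated”, “generator”) are hypothetical rather than taken from the AI Labeling Act, and real provenance schemes such as C2PA go further by attaching cryptographically signed manifests that are far harder to strip.

# Illustrative only: embed and read back an "AI-generated" label in PNG
# metadata using Pillow. The field names are hypothetical, not from the bill.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path, dst_path, generator):
    # Copy an image, adding tEXt metadata marking it as AI-generated.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical field name
    meta.add_text("generator", generator)   # hypothetical field name
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)

def is_labelled_ai_generated(path):
    # Plain metadata is trivially removable, which is why signed
    # provenance standards such as C2PA exist.
    with Image.open(path) as img:
        return img.text.get("ai_generated") == "true"

label_as_ai_generated("picture.png", "labelled.png", "example-model-v1")
print(is_labelled_ai_generated("labelled.png"))  # True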

Swift has not spoken publicly about the images. Her US publicist had not replied to a request for comment as of publication time.

Convincing deepfake video or audio has been used to imitate some high-profile men, particularly politicians such as Donald Trump and Joe Biden, and artists such as Drake and the Weeknd. In October 2023, Tom Hanks told his Instagram followers not to be lured in by a fake dentistry advert featuring his likeness.

But the technology is overwhelmingly targeted at women, and in a sexually exploitative way: a 2019 study by DeepTrace Labs, cited in the proposed US legislation, found that 96% of deepfake video content was non-consenting pornographic material.

The issue has considerably worsened since 2019. Fake pornography, where photo editing software is used to place a non-consenting person’s face into an existing pornographic image, is a longstanding problem. But a new frontier has opened up thanks to the sophistication of artificial intelligence, which can be used to generate entirely new and highly convincing images, including by using simple text commands.
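To make the “simple text commands” point concrete, here is a minimal sketch of how such a text-to-image pipeline is typically invoked, using the open-source Hugging Face diffusers library as a stand-in (the article names no specific tool; the model ID and prompt below are illustrative assumptions):

# Illustrative sketch: synthesising an entirely new image from one sentence.
# The library, model ID and prompt are assumptions; the article names no tool.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a single consumer GPU

# The prompt is the entire "program": the model generates a wholly new
# picture rather than editing an existing photograph.
image = pipe("a watercolour painting of a lighthouse at dawn").images[0]
image.save("generated.png")

Nothing in the output existed beforehand, which is what distinguishes this from the older face-swap photo editing described above.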

High profile women are particularly at risk. In 2018, Scarlett Johansson spoke about widespread fake pornography featuring her likeness: “I have sadly been down this road many, many times. The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part.”

The UK government made nonconsensual deepfake pornography illegal in December 2022, in an amendment to the Online Safety Bill that also outlawed any explicit imagery taken without someone’s consent, including so-called “downblouse” photos.

Dominic Raab, then deputy prime minister, said: “We must do more to protect women and girls from people who take or manipulate intimate photos in order to hound or humiliate them. Our changes will give police and prosecutors the powers they need to bring these cowards to justice and safeguard women and girls from such vile abuse.”

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.