June 17, 2024

Taylor Swift deepfakes and AI CSAM highlight the need for new laws


A series of explicit Taylor Swift deepfakes has led lawmakers to propose new legislation tackling the problem in a similar way to existing legislation covering so-called revenge porn …

Relatedly, governments are growing increasingly concerned about the use of AI to generate child sexual abuse material (CSAM).

Taylor Swift deepfakes

Social media network X was recently flooded with sexually explicit imagery purporting to show Taylor Swift, but in reality generated using AI technology. 404 Media reports that a Microsoft AI tool was used to create the images.

The report notes that just one example received 45 million views and 24,000 retweets before it was removed. X acknowledged the issue, saying that it was removing the images and taking action against the accounts behind them.

New law proposed: The DEFIANCE Act

The Verge reports that a bipartisan group of senators has proposed a new law to tackle the problem. Effectively, it would ban fake explicit photos in the same way an existing law bans real images posted without consent.

US lawmakers have proposed letting people sue over faked pornographic images of themselves, following the spread of AI-generated explicit photographs of Taylor Swift. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act would add a civil right of action for intimate “digital forgeries” depicting an identifiable person without their consent, letting victims collect financial damages from anyone who “knowingly produced or possessed” the image with the intent to spread it […]

It builds on a provision in the Violence Against Women Act Reauthorization Act of 2022, which added a similar right of action for non-faked explicit images. 

Senate Majority Whip Dick Durbin (D-IL), joined by Sens. Lindsey Graham (R-SC), Amy Klobuchar (D-MN), and Josh Hawley (R-MO), specifically cited the Taylor Swift imagery as an example of how AI fakes can be used to exploit and harass women.

While prompted by concerns around the abuse of AI image tools, the law would also cover manually created fakes, such as those made with conventional image editing apps like Photoshop.

Law enforcement fears a flood of AI-generated CSAM

The New York Times reports on similar concerns being raised about AI-generated CSAM.

Law enforcement officials are bracing for an explosion of material generated by artificial intelligence that realistically depicts children being sexually exploited, deepening the challenge of identifying victims and combating such abuse […]

Simply entering a prompt spits out realistic images, videos and text in minutes, yielding new images of actual children as well as explicit ones of children who do not actually exist. These may include […] routine class photos, adapted so all of the children are naked.

Predictably, however, politicians are using this problem as an excuse to call for a ban on end-to-end encrypted messaging.

Tom Tugendhat, Britain’s security minister, said the move would empower child predators around the world.

“Meta’s decision to implement end-to-end encryption without robust safety features makes these images available to millions without fear of getting caught,” Mr. Tugendhat said in a statement.

Photo: Ronald Woan/CC2.0 (Cropped)
