
Law cracking down on AI images of child sexual abuse

WASHINGTON — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A US Army soldier accused of creating images of children he knew being sexually abused. A software engineer charged with generating sexually explicit hyperrealistic images of children.

US law enforcement agencies are cracking down on a troubling spread of child sex abuse images created through artificial intelligence technology – from doctored photos of real children to computer-generated graphic depictions of children. Justice Department officials say they are aggressively pursuing criminals who exploit AI tools, as states race to ensure that people who generate “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We need to signal early and often that this is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who heads the Justice Department’s Child Exploitation and Obscenity Division, said in an interview with The Associated Press. “And if you’re sitting there thinking otherwise, you’re dead wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content and recently brought what is believed to be the first federal case involving purely AI-generated images — meaning the children depicted are not real, but virtual. In another case, federal authorities in August arrested a US soldier stationed in Alaska who is accused of running innocent images of real children he knew through an AI chatbot to make them sexually explicit.

Trying to catch up with technology

The prosecutions come as child advocates work urgently to curb misuse of the technology and prevent a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who do not actually exist.

Meanwhile, lawmakers are passing legislation to ensure local prosecutors can bring charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to an analysis by the National Center for Missing and Exploited Children.

“We’re playing catch-up as law enforcement with a technology that, frankly, is moving much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko promoted legislation signed last month by Gov. Gavin Newsom that makes it clear that AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not pursue eight cases involving AI-generated content between last December and mid-September because California law required prosecutors to prove the images depicted a real child.

AI-generated images of child sex abuse can be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be deeply affected when their image is made to appear sexually explicit.

“I felt like a part of me was taken away, even though I wasn’t physically raped,” said Kaylin Hayman, 17, who starred in the Disney Channel series “Just Roll with It” and helped promote the California bill after becoming a victim of “deepfake” images.

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download to their computers are favored by criminals, who can train or modify the tools to produce explicit depictions of children, experts say. Offenders exchange tips in dark web communities on how to manipulate AI tools to create such content, officials say.

A report last year by the Stanford Internet Observatory found that a research dataset used to train major AI image generators such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools have been able to produce harmful images.