Federal Officials Weigh Whether Existing Laws Can Address the Rise in AI-Generated Child Exploitation Images

### Children Vulnerable to AI-Created Sexual Images as Authorities Intensify Action

Recent advances in artificial intelligence (AI) have driven considerable progress across sectors from healthcare to entertainment. These same developments, however, have opened new and troubling avenues for harm, particularly in the area of child exploitation. Law enforcement agencies and child protection advocates are struggling to keep pace with the rapid proliferation of AI-generated child sexual abuse material (CSAM), which complicates the task of safeguarding children.

#### The Escalating Danger of AI-Created Child Abuse Images

Generative AI, which can produce realistic images from text prompts or alter existing photographs, is being used by offenders to create fabricated child sexual abuse images. Even though these images do not depict actual children, they are still harmful and contribute to the normalization of child exploitation. The National Center for Missing and Exploited Children (NCMEC) says it receives about 450 reports per month involving AI-generated child sexual abuse material, a small but growing share of the roughly 3 million monthly reports of genuine child abuse.

The FBI has also warned that “innocuous photos” of children shared online can be altered by AI tools, turning harmless images into exploitative material. This has heightened concerns about how easily AI can be weaponized against children, even when they were never directly involved in creating the images.

#### Challenges in Law and Federal Action

Despite the rising threat, the legal landscape for AI-generated CSAM remains unsettled. In 2024, U.S. prosecutors brought just two criminal cases that rely on existing child pornography and obscenity laws to address the problem. These early cases will test whether current statutes can meet the new challenges posed by AI-generated content.

A significant challenge in prosecuting these cases is identifying the victim. Some AI-generated images are based on photographs of real children, while others are entirely fabricated, making it unclear whether existing child pornography statutes apply. James Silver, head of the U.S. Department of Justice’s computer crime and intellectual property section, has voiced concern about the normalization of AI-produced child abuse imagery, warning that the more widely these images circulate, the more accepted they may become in certain communities.

Silver has also noted that obscenity laws may still apply in cases where the images do not depict identifiable children. Legal experts, however, anticipate substantial hurdles in applying these laws to AI-generated content, particularly as the technology advances.

#### The Impact of “Nudify” Applications

One of the easiest ways offenders create harmful images is with “nudify” applications, which use AI to digitally strip clothing from photos. These apps have gained traction on platforms like Telegram, where millions of users are reportedly producing fake nude images, including images of minors. The trend has alarmed child protection advocates, who are calling for stricter regulation of such applications.

In response, some state and local officials have taken legal action against the developers of nudify apps. In August 2024, San Francisco’s city attorney, David Chiu, filed a lawsuit against 16 widely used nudify applications in an effort to shut them down. So far, only one app has responded to the lawsuit, and it remains uncertain how effective these legal actions will be in limiting the spread of harmful AI-generated material.

#### State Legislation vs. Federal Law

While several states have enacted legislation to safeguard minors from AI-generated CSAM, there is currently no federal law specifically aimed at this concern. Public Citizen, a consumer advocacy organization, has been monitoring state laws on intimate deepfakes and found that over 20 states have implemented regulations to protect individuals from non-consensual deepfake images. However, only five states have passed laws specifically designed to shield minors from AI-generated content.

Child safety advocates are pressing for broader federal legislation to address the issue, but such efforts have encountered pushback. The Kids Online Safety Act, for instance, has been criticized as overly broad and potentially infringing on free speech rights. Some experts suggest that a narrower federal statute targeting AI-generated CSAM could attract bipartisan backing and provide the legal tools needed to combat this escalating threat.

#### The DOJ’s Strategy and Initial Cases

The U.S. Department of Justice (DOJ) has adopted a measured strategy in prosecuting cases involving AI-generated CSAM, concentrating on situations where actual children are involved or where the images are particularly severe. In May 2024, the DOJ asserted that “CSAM generated by AI is still CSAM,” indicating its commitment to pursuing cases related to AI-generated content.

Two preliminary cases have emerged as tests of the DOJ’s capacity to apply current laws to AI-generated CSAM. One involves a U.S. Army soldier accused of using AI bots to create child pornography; the other concerns a Wisconsin man accused of using the AI tool Stable Diffusion to produce thousands of realistic images of prepubescent minors. Both defendants have pleaded not guilty, and their cases remain pending.