![](https://nflbulletin.com/wp-content/uploads/2025/02/file-20250210-15-s4a6ij-Ohzpa6.jpeg)
Child pornography laws may be clear, but AI makes enforcement more difficult. AP Photo/J. Scott Applewhite
The city of Lancaster, Pennsylvania, was shaken by revelations in December 2023 that two local teenage boys had shared hundreds of nude images of girls in their community in a private chat on the social platform Discord. Witnesses said the photos could easily have been mistaken for real ones, but they were fake. The boys had used an artificial intelligence tool to superimpose real photos of girls’ faces onto sexually explicit images.
With troves of real photos available on social media platforms, and AI tools becoming more accessible across the web, similar incidents have played out across the country, from California to Texas and Wisconsin. A recent survey by the Center for Democracy and Technology, a Washington, D.C.-based nonprofit, found that 15% of students and 11% of teachers knew of at least one deepfake that depicted someone associated with their school in a sexually explicit or intimate manner.
The Supreme Court has implicitly concluded that computer-generated pornographic images that are based on images of real children are illegal. The use of generative AI technologies to make deepfake pornographic images of minors almost certainly falls under the scope of that ruling. As a legal scholar who studies the intersection of constitutional law and emerging technologies, I see an emerging challenge to the status quo: AI-generated images that are fully fake but indistinguishable from real photos.
Policing child sexual abuse material
While the internet’s architecture has always made it difficult to control what is shared online, there are a few kinds of content that most regulatory authorities across the globe agree should be censored. Child pornography is at the top of that list.
For decades, law enforcement agencies have worked with major tech companies to identify and remove this kind of material from the web, and to prosecute those who create or circulate it. But the advent of generative artificial intelligence and easy-to-access tools like the ones used in the Pennsylvania case present a vexing new challenge for such efforts.
In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved. In 1982, the Supreme Court ruled that child pornography is not protected under the First Amendment because safeguarding the physical and psychological well-being of a minor is a compelling government interest that justifies laws that prohibit child sexual abuse material.
That case, New York v. Ferber, effectively allowed the federal government and all 50 states to criminalize traditional child sexual abuse material. But a subsequent case, the 2002 decision Ashcroft v. Free Speech Coalition, might complicate efforts to criminalize AI-generated child sexual abuse material. In that case, the court struck down a law that prohibited computer-generated child pornography, effectively rendering it legal.
The government’s interest in protecting the physical and psychological well-being of children, the court found, was not implicated when such obscene material is computer generated. “Virtual child pornography is not ‘intrinsically related’ to the sexual abuse of children,” the court wrote.
States move to criminalize AI-generated CSAM
According to the child advocacy organization Enough Abuse, 37 states have criminalized AI-generated or AI-modified CSAM, either by amending existing child sexual abuse material laws or enacting new ones. More than half of those 37 states enacted new laws or amended their existing ones within the past year.
California, for example, enacted Assembly Bill 1831 on Sept. 29, 2024, which amended its penal code to prohibit the creation, sale, possession and distribution of any “digitally altered or artificial-intelligence-generated matter” that depicts a person under 18 engaging in or simulating sexual conduct.
Deepfake child pornography is a growing problem.
While some of these state laws target the use of photos of real people to generate these deepfakes, others go further, defining child sexual abuse material as “any image of a person who appears to be a minor under 18 involved in sexual activity,” according to Enough Abuse. Laws like these that encompass images produced without depictions of real minors might run counter to the Supreme Court’s Ashcroft v. Free Speech Coalition ruling.
Real vs. fake, and telling the difference
Perhaps the most important part of the Ashcroft decision for emerging issues around AI-generated child sexual abuse material was part of the statute that the Supreme Court did not strike down. That provision of the law prohibited “more common and lower tech means of creating virtual (child sexual abuse material), known as computer morphing,” which involves taking pictures of real minors and morphing them into sexually explicit depictions.
The court’s decision stated that these digitally altered sexually explicit depictions of minors “implicate the interests of real children and are in that sense closer to the images in Ferber.” The decision referenced the 1982 case, New York v. Ferber, in which the Supreme Court upheld a New York criminal statute that prohibited persons from knowingly promoting sexual performances by children under the age of 16.
The court’s decisions in Ferber and Ashcroft could be used to argue that any AI-generated sexually explicit image of real minors should not be protected as free speech given the psychological harms inflicted on the real minors. But that argument has yet to be made before the court. The court’s ruling in Ashcroft may permit AI-generated sexually explicit images of fake minors.
But Justice Clarence Thomas, who concurred in Ashcroft, cautioned that “if technological advances thwart prosecution of ‘unlawful speech,’ the Government may well have a compelling interest in barring or otherwise regulating some narrow category of ‘lawful speech’ in order to enforce effectively laws against pornography made through the abuse of real children.”
With the recent significant advances in AI, it can be difficult if not impossible for law enforcement officials to distinguish between images of real and fake children. It’s possible that we’ve reached the point where computer-generated child sexual abuse material will need to be banned so that federal and state governments can effectively enforce laws aimed at protecting real children – the point that Thomas warned about over 20 years ago.
If so, easy access to generative AI tools is likely to force the courts to grapple with the issue.
Wayne Unger does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.