Deepfake Danger: Expanding Federal Child Pornography Protections In the Age of Artificial Intelligence

By: Nina Feder

Executive Summary

Federal law must cover deepfake pornography that portrays minors: the statutory definition of child pornography should be amended to include AI-generated or altered imagery, regardless of whether the image is real or synthetic.

A deepfake is an image or recording that has been convincingly altered or manipulated to misrepresent someone as doing or saying something that was not actually done or said [1]. The term was coined in 2017 by a Reddit user known as ‘deepfakes’, who posted AI-generated sexual imagery of female celebrities without their consent. From their inception, deepfakes have served to sexually violate women.

While case law largely supports criminalizing deepfake child pornography, federal statutory definitions of child pornography do not explicitly criminalize it [2]. Additionally, there is no avenue for students to autonomously pursue civil recourse under current Title IX provisions [3].

The United States is a global leader in artificial intelligence. It must also set the standard for the legislative protections made necessary by AI’s susceptibility to abuse. The Trump administration aims to create a single federal framework to regulate artificial intelligence [4]. It is imperative that this framework explicitly criminalize deepfake pornography involving underage girls. This must include incorporating AI-generated media into the statutory language of child pornography as well as into Title IX provisions.

Introduction

Deepfakes are incredibly accessible: a multitude of apps exist for the sole purpose of transposing an individual’s face onto a different body, a process that can be completed in a few hours. The point of deepfakes is to simulate reality as closely as possible; they are functionally identical to real images.

The scale of harm done to women has grown commensurate with the exponential growth of artificial intelligence. Approximately 8 million deepfakes currently circulate on the internet [6]. 98% of those are nonconsensual and pornographic in nature, and 99% depict women [7]. These staggering numbers communicate the presence of an incredibly dystopian and widespread form of sexual violence: “In this sense, women’s bodies are not only a physical or virtual prosthetics, but can become something more through modern technologies, limited only by the perpetrator’s imagination” [8].

This sexual violence affects women not just in the private, psychological aspects of their lives but also in the public sphere, as these images are often published online for popular consumption. Artificial intelligence models typically employ algorithms that compile images already present on the internet. Many of these images depict minors and are then used to create media that is indistinguishable from a real photograph [9].

The U.S. hosts the largest amount of child sexual abuse content in the world [10]. However, the constitutionality of virtual child pornography is a grey area [11]. Child pornography was first criminalized in 1977 by the Protection of Children Against Sexual Exploitation Act [12]. In 1982, New York v. Ferber furthered this protection by establishing that the right to free speech does not categorically protect child pornography [13]. However, it centered criminality around the involvement of real children. The Child Pornography Prevention Act of 1996 was the first major law to address the intersection of internet access and child pornography, aiming to prevent all forms of virtual child pornography regardless of the actual involvement of a child [14].

However, Ashcroft v. Free Speech Coalition partially struck down the CPPA in 2002, citing its overly expansive definition of virtual child pornography [15]. In essence, Ashcroft found that virtual child pornography is not “intrinsically related” to the sexual abuse of children [16]; the harm that could follow from it was deemed contingent and indirect [17]. While Ashcroft does not categorically prevent the government from criminally prosecuting this conduct, it massively limits the judiciary’s ability to criminalize it.

The PROTECT Act of 2003 addressed Ashcroft by establishing that if a work involves only a computer-generated imitation of a child rather than a real one, an individual can still be charged if the image is ‘virtually indistinguishable’ from an ‘identifiable minor’. While this seemingly addresses the issue of child deepfake pornography, it also provides a loophole for defendants who argue that no ‘real’ children were involved. Both Ashcroft and the PROTECT Act predate the ability of current artificial intelligence to simulate child pornography and cannot truly address the ambiguity of whether real children appear in deepfakes.

The 2024 Title IX provisions did incorporate ‘the nonconsensual distribution of intimate images’ into their definition of sexual harassment, including both authentic and AI-generated images [18]. This federal protection was crucial because schools represent the intersection of deepfakes and minors. However, the 2024 provisions were vacated by the U.S. District Court for the Eastern District of Kentucky, reverting the nation to the 2020 provisions, which provide no safeguards against deepfake child pornography [19]. This issue is not theoretical; there are already instances of peer-on-peer sexual harassment involving deepfake pornography. In 2025, a 13-year-old girl in Louisiana was relentlessly bullied at school over a deepfake that depicted her face on a nude body [20]. She was later expelled after hitting the boy who created the image. This harassment was potentially actionable under the 2024 Title IX provisions due to Davis v. Monroe County Board of Education [21]. However, direct legal recourse for deepfake child pornography is nonexistent under current Title IX provisions, let alone outside federally funded educational programs.

Policy Alternatives 

While federal law criminalizes computer-generated images, the statute’s language does not encompass AI-generated media or deepfakes. Aside from this linguistic gap, criminality can only be established if a reasonable person would believe that the child in the image is real [22].

Though seemingly minute, this statutory loophole is crucial to address because deepfakes involve an inherent distortion of reality. The standard that virtual child pornography must involve a ‘real child’ in order to be criminalized fails to adequately address the nature of deepfakes. With AI, the ‘real’ likeness of an underage girl can easily be superimposed upon a nude body with ‘fake’ genitalia. That body is itself generated by compiling thousands of images that are also real. By virtue of how they are created, no aspect of a deepfake can be completely divided from reality.

This logic also applies to why they are created: they are made to be indistinguishable from authentic images. However, what constitutes ‘indistinguishable’ imagery is open to question. While the face in a deepfake may be identical to a real child’s, the body is often highly sexualized and potentially different from a real child’s. Regardless, the deepfake will be perceived as depicting a legitimate child. This blurs the PROTECT Act’s standard of knowingly consuming, distributing, or producing virtual child pornography: an individual can knowingly consume deepfake pornography that is indistinguishable from a child yet evade criminality because the work is not entirely of the child themselves. This legislative gap must be closed in order to protect young women from sexual abuse online and to create crucial avenues for underage girls to seek recourse when victimized by deepfakes.

Policy Recommendations

  1. Amend the language of federal statutory definitions of child pornography to include deepfakes: This would eliminate the grey area created by conflicting legal precedent and provide concrete ground for legislation criminalizing this form of child sexual abuse despite the ambiguous nature of deepfake pornography. It would also model for the federal AI framework the appropriate language to employ in order to truly protect underage victims. Unlike Title IX, which is limited to federally funded educational programs, federal criminal statutes would apply universally across contexts.

  2. Amend Title IX definitions of sex-based harassment to include deepfake pornography: As previously established, the majority of deepfake pornography depicts female individuals in an explicit manner. While child deepfake pornography undeniably affects underage boys as well, it is inherently a gendered issue. Title IX must recognize it as such in order to truly ensure equal protection and access to educational environments free of harassment.

    1. Additionally, civil protections must exist through Title IX. Criminal charges are typically harsher and a stronger deterrent than civil ones. However, involving police as intermediaries in criminal cases often deters victims from seeking recourse; the humiliation and fear that accompany testifying in court are among the reasons many victims of sexual violence never pursue legal action, and that phenomenon is relevant here [23]. Civil actions, by contrast, are often less intimidating and restrict freedom of speech less than criminalization does. They provide a clear framework to claim damages and seek injunctive relief [24]. However, civil protections are potentially less effective than more severe criminal penalties, and free speech protections could limit their reach in this context. Regardless, underage victims and their parents must have the option of both forms of recourse. There is no universal form of justice; the choice is a matter of individual discretion, and it is the government’s obligation to provide both criminal and civil avenues.

  3. Focus on holding individual creators accountable: Due to the convoluted nature of synthetic content, the target of legislation must be the individual who created the deepfake. New York v. Ferber is crucial in considering this policy alternative because it establishes criminality when child pornography is treated as conduct, not speech. Free speech protections have overridden minors’ safety in the past. However, Ferber demonstrates that the government’s interest in protecting minors from sexual abuse outweighs free speech rights when the focus is on an individual’s conduct. That is entirely applicable here: AI software does not autonomously create pornographic deepfakes. There is someone behind the phone, prompting the AI to place a girl’s likeness on an objectified body.

    1. Regulating AI companies themselves is a difficult task. This is partly due to their economic prowess and the resources they can devote to defending against legal liability. It is also because their algorithms cannot simply unlearn how to create sexualized deepfakes; the technology will persist and continue to be exploited. The time required to adequately remove algorithms’ ability to create pornographic deepfakes is an additional roadblock given that this issue must be addressed immediately. Social media platforms themselves are largely covered by the Take It Down Act of 2025, which requires the removal of pornographic deepfakes from such platforms within 48 hours [25]. Even so, deepfakes are often circulated outside social media, through private communication that the FTC currently holds no power to regulate. Additionally, the power of legislation is minimal relative to the astonishing capability of artificial intelligence. This is evident in the platform X’s chatbot ‘Grok’, which Elon Musk removed in January 2026 due to the immense amount of deepfake pornography circulating on the platform. Even with the Take It Down Act in effect, the law lacks the oversight to adequately manage social media-based AI. Not only will AI outpace legislation, but regulatory power predominantly lies in the hands of the owners of social media platforms. Federal law must center on what it can actually impact: the conduct of individuals themselves. Criminal penalties must be imposed on those who create and distribute deepfakes: “After all, it is the direct actions of such a person that led to the damage to the victim” [26].

  4. Research the scale of deepfake child pornography: It is imperative to quantify the extent to which deepfake pornography draws from images of actual children, as well as how many deepfakes depict underage girls. Further inquiry is also required into the various contexts in which these images are used to violate girls.

  5. Extend protections to Title VII: The majority of legislative history centers the perpetrator’s right to free speech, which is discordant with the victim’s rights to equal protection and privacy. This issue almost solely affects women and girls and will continue to do so: any girl whose likeness is innocuously posted on the internet is vulnerable to this virtual sexual violence. While girls must be protected in schools, women must also be safeguarded at work. The Equal Employment Opportunity Commission’s definition of sexual harassment fails to include AI-generated content [27]. The understanding of sexual harassment in the 2024 Title IX provisions must be applied to the definition of sex-based discrimination under Title VII. Legislative consistency is imperative because women of all ages deserve equal protection in every aspect of their lives. The number and upward mobility of women in the workforce are already in decline; yet another factor that deters or excludes women from work must be expressly addressed.

  6. Address how AI is weaponized for revenge porn: Revenge porn, or pornographic content that is nonconsensually publicized to exert control over an individual, overlaps with this issue. It is already widely criminalized, which can serve as a model for the criminalization of deepfake child pornography.

Conclusion

In the effort to federalize AI regulation, the statutory definition of child pornography must be amended to reflect the reality of artificial intelligence and its intersection with sexual violence. The definition should explicitly include AI-generated or altered media in order to address the widespread issue of deepfake imagery. Artificial intelligence often appears to be a force that grows at a rate far exceeding human control. However, legislation is a critical tool to address the present state of deepfake child pornography and to anticipate the future of virtual sexual violence. More expansively, the federal government has a responsibility to protect girls not just in schools but in their lives; they have a right to privacy, protection, and a life free of pernicious sexualization.


References

[1] “Deepfake,” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/deepfake (accessed December 12, 2025).

[2] 18 U.S.C. § 2256, “Definitions for chapter,” Legal Information Institute, Cornell Law School, accessed February 9, 2026

[3] 34 C.F.R. Part 106 — Nondiscrimination on the Basis of Sex in Education Programs or Activities Receiving Federal Financial Assistance (2020 Title IX regulations), PDF

[4] Kang, Cecilia. 2025. “Trump Moves to Stop States from Regulating AI with a New Executive Order.” The New York Times, December 11, 2025.

[5] Ibid., 386.

[6] Keepnet Labs, “Deepfake Statistics & Trends 2025: Growth, Risks, and Future Insights,” Keepnet Labs (blog), September 24, 2025

[7] Jensen, Peter, “REPORT. 2023 STATE of DEEPFAKES Realities, Threats, and Impact.” Blog.biocomm.ai

[8] Matthew Hall, Andreas Pester, and Alex Atanasov, “AI Threats to Women’s Rights: Implications and Legislations,” Journal of Law and Emerging Technologies 2, no. 2 (2022): 59

[9] Claudia Ratner, “When ‘Sweetie’ Is Not so Sweet: Artificial Intelligence and Its Implications for Child Pornography,” Family Court Review 59, no. 2 (2021): 389.

[10]  Rhiannon Williams, “The US Now Hosts More Child Sexual Abuse Material Online Than Any Other Country,” MIT Technology Review, April 26, 2022

[11] Ratner, Claudia,“When ‘Sweetie’ Is Not so Sweet”, 390. 

[12] Protection of Children Against Sexual Exploitation Act of 1977, S. 1585, 95th Cong., 1st sess., 1978.

[13] "New York v. Ferber." Oyez. Accessed December 13, 2025. https://www.oyez.org/cases/1981/81-55, 458.

[14] H.R. 4123, 104th Cong. (1995–1996): Child Pornography Prevention Act of 1996, October 4, 1996

[15] Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002), 236, accessed December 13, 2025, 258

[16] Ibid, 256

[17] Ibid, 256

[18] Chloe Altieri, “New Title IX Rule Defines Deepfakes as Sexual Harassment,” Student Privacy Compass, August 14, 2024

[19] “Federal District Court Vacates 2024 Title IX Regs,” Husch Blackwell, January 16, 2025.

[20] Chris Nakamoto and Kevin Foster, “I-TEAM: Louisiana Lawmaker Promises Changes after WAFB Report on Deepfake Porn Case,” WAFB, November 13, 2025

[21] Davis v. Monroe County Board of Education, 526 U.S. 629 (1999), accessed December 13, 2025

[22] 18 U.S.C. § 2256 (Definitions for chapter) (2011)

[23] Michelle Evans, “Regulating the Non-Consensual Sharing of Intimate Images (Revenge Pornography) via a Civil Penalty Regime: A Sex Equality Analysis,” Monash University Law Review 44, no. 3 (2018): 607.

[24] Ibid., 606.

[25] S. 146, TAKE IT DOWN Act, 119th Cong. (2025).

[26] Matthew Hall, Jeff Hearn, and Ruth Lewis, “Image-Based Sexual Abuse: Online Gender-Sexual Violations,” Encyclopedia 3, no. 1 (2023): 327–39

[27] United States Equal Employment Opportunity Commission, “Regulations and Guidelines,” EEOC.

Author Bio

Nina Feder

Nina Feder is a current freshman at Georgetown University studying Government and Women’s & Gender Studies with a minor in Medical Humanities. She is deeply passionate about reproductive justice, which she views as the paragon of how race, class, geography, and gender uniquely intersect to oppress women across dimensions of identity and levels of social stratification. She is an intersectional feminist who has always been deeply passionate about women’s liberation and addressing the interrelation of various forms of oppression in the fight to achieve freedom. Nina aspires to pursue a career in law in order to ensure safe, equitable access to reproductive care. She is especially interested in the interaction of medical racism with reproductive rights particularly in the contexts of maternal morbidity rates and abortion.

At Georgetown University, she serves as a coordinator for D.C. Reads. D.C. Reads is a program that mentors children in Southeast D.C. in order to build literacy skills and pursue educational equality across the District. Nina is also involved with H*yas for Choice, the university’s unaffiliated pro-choice club. She is the Freshman Representative on the steering committee of Georgetown Coalition for Worker’s Rights. Nina views autonomy as a fundamental human right and understands that it must be ensured in all forms, including workers’ rights to self-determination. She also provides literature tutoring at the Alexandria Detention Center through her involvement in Georgetown Students for Prison Justice. In high school she proudly served as Co-President of Intersectional Feminism Club, where she agitated for a just dress code, menstrual equity, and solidarity among female students. Nina is from Princeton, New Jersey and enjoys yoga, music, and reading in her free time.
