Law Enforcement Asks Congress for Help to Prosecute Cases of AI-Generated Child Pornography
April 9, 2024 Don Pumphrey, Jr. Criminal Defense, News & Announcements, Sex Crimes
Research from the Stanford Internet Observatory found that generative AI is being used to create growing amounts of child sexual abuse material (CSAM). However, prosecuting these cases can be difficult when the illegal content has been digitally created, because current laws against CSAM require an image of a real child to prosecute the person responsible.
“Bad actors are taking photographs of minors, using AI to modify into sexually compromising positions, and then escaping the letter of the law, not the purpose of the law but the letter of the law,” said Carl Szabo, vice president of nonprofit NetChoice.
Attorneys General from across the nation recently submitted a letter to Congress addressing their concerns over AI-generated content depicting children in a sexual manner. This page covers the information from the bipartisan letter, along with a recent example case of a person arrested in Florida for computer-generated child pornography.
Bipartisan Letter to Congress Regarding AI
The National Association of Attorneys General sent a bipartisan letter to Congress in September 2023 regarding concerns over artificial intelligence (AI).
The letter acknowledged the potential advances AI could bring to society, along with its potential to inflict serious harm. Its main concern was Congress's alleged lack of effort in reporting or publishing studies specifically on the exploitation of children through AI technology.
The collective group of Attorneys General argued that while internet crimes against children are already actively prosecuted, AI creates a “new frontier for abuse” that can make prosecuting those responsible increasingly difficult.
The main concerns regarding the exploitation of children through AI are identified in the letter as follows:
- AI’s ability to source and track images of anyone (including children) across the internet to identify or anticipate an alleged victim’s location;
- AI’s ability to study recordings of a person’s voice and mimic that voice to say things a person never said; and
- AI’s ability to generate child sexual abuse material (CSAM) by studying real images of people and creating deepfakes to depict the children in sexual positions or activity.
When discussing situations where AI-generated CSAM is not a deepfake but instead depicts children who do not actually exist, the Attorneys General explained why this material is still concerning:
- AI-generated CSAM is often based on source images of real children who have been abused;
- Even if the children depicted have never been abused, AI-generated CSAM often still resembles real children, which could endanger those unvictimized children and their parents;
- AI-generated CSAM (even when it does not resemble real children) supports the growth of the child exploitation market by normalizing child abuse; and
- Like deepfakes, AI-generated CSAM is quick and easy to produce using widely available AI tools.
In its conclusion, the letter provides the following advice to Congress on how to protect children from the dangerous side of AI:
“First, Congress should establish an expert commission to study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation…while we are aware that several governmental offices and committees have been established to evaluate AI generally, a working group devoted specifically to the protection of children from AI is necessary to ensure the vulnerable among us are not forgotten…Congress should act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM.”
Example Case
The Pasco Sheriff’s Office charged an elementary school teacher with the unlawful possession of child pornography. According to the press release, investigators received a tip about third grade science teacher Steven Houser, 67. The investigation found that Houser was in possession of two photos and three videos that featured child pornography.
The investigation also found that Houser possessed child erotica that had been generated by an AI computer system. The defendant admitted to police that he had used the yearbook from Beacon Christian Academy to generate sexual depictions based on the yearbook photos of three students. The defendant has since been apprehended and charged with possession of child pornography.
Florida Statute Section 827.071(5)(a) prohibits any person from knowingly possessing, controlling, or intentionally viewing any photograph, motion picture, exhibition, show, representation, image, data, computer depiction, or other representation that they know to include child pornography. A violation of this law results in a third-degree felony.
Important: Each individual depiction of child pornography is considered a separate offense for possession of child pornography. If the alleged child pornography depicts more than one child, then each child in each depiction that is possessed and intentionally viewed is considered a separate offense.
Contact Pumphrey Law Firm
The increasing popularity of AI in everyday life can be beneficial, until it implicates you in an alleged offense. If you or someone you know is being accused of a crime involving AI-generated content, you should prioritize finding legal representation. These cases are extremely complex and will likely continue to evolve over the next few years as the technology changes. You will want a defense attorney in your corner with the knowledge and experience to fight for your charges to be dismissed completely or reduced to a lesser offense.
The defense attorneys at Pumphrey Law Firm can provide you with a free consultation to go over the surrounding facts of your case. Fill out our form or give us a call at (850) 681-7777.