Students’ mental health is facing a new and terrifying threat: sexually explicit deepfake images, created with artificial intelligence and deployed for bullying, harassment, and sometimes even extortion.
Legal definitions of deepfakes vary by jurisdiction. Generally, they are synthetic media created by artificial intelligence to portray a fictitious rendering of a person or event. Recently passed federal legislation addressing the issue calls them “digital forgeries.” Although photo-editing has been around for decades, only the recent advent of modern generative AI has made it possible for almost anybody to produce a photorealistic image with little more than an ever-present mobile device and a few simple AI prompts. Given the prevalence of images and videos of teens across social media to draw from, this technology has created a perfect storm of cyberbullying and sexual harassment for students, schools, and parents. Teens can cause significant and lasting emotional damage within minutes. Teenagers are still developing the parts of the brain responsible for judgment and impulse control; as I often saw play out in juvenile criminal cases, they are frequently unable to fully grasp the long-term consequences of their impulsive actions, and that is exactly what we are seeing here.
States and school governing bodies are moving quickly to put their own policies and procedures in place; however, once the images are created, the damage to the victims is often already done, regardless of the consequences imposed on the offenders. The United States criminal justice system was designed to enforce laws and safeguard due process, but it has historically placed little emphasis on what victims need to recover or be made whole. Although many laws include civil liability for the offender, that remedy comes too late, particularly for young people. When sexually explicit deepfake images or videos go viral, even if only within their social orbit, students feel isolated and deeply ashamed, even though they have done nothing wrong. Because of the nature of the content, they can be hesitant to report the issue to their parents or school officials. That was the case only last year in Kentucky, when strangers found photos of a teenager online, used them to create sexually explicit images, and threatened to circulate the images to his family and friends if he didn’t pay them $3,000. The teen ended his life within hours of receiving the threats, never even waking his parents, who were asleep in the home. He was only one of an estimated twenty or more young people who have taken their lives as a result of sextortion.
These cases present a number of hurdles for prosecutors and law enforcement as well. The images can be posted and go viral within hours, sometimes less. Once they are circulating, obtaining search warrants for cell phones and performing the necessary digital forensic analysis takes time, particularly in already understaffed law enforcement agencies. In that period, the trauma to the victims has already occurred, and the images have often already reached the far outer edges of cyberspace. From a practical standpoint, the system is simply not built to move at the speed of digital harm. The new federal legislation attempts to address this by requiring platforms to remove material within 48 hours of receiving a valid request; however, platforms have until May of 2026 to establish a procedure for victims to request removal, and not all platforms are covered under the legislation. From my perspective, these measures are a necessary and important start, but they highlight how much responsibility still falls on schools, families, and victims in the meantime.
What parents, students, and educators can realistically do while the law catches up
The normalization of generative AI tools among young people has quickly outpaced the digital literacy of the decision makers and of those entrusted to keep our young people safe. But that doesn’t mean we are helpless against this threat. As recently as five years ago, the message to young people was “don’t share nude photos of yourself.” Those bright-line rules are gone. Now we need to take the lessons we have already learned from survivors of sexual trauma and apply them to this new AI reality. Simple but important principles, like believing the victim, removing the shame of reporting, and focusing on a trauma-informed response, apply here just as much as they do to any other sex crime. Creating an environment of non-judgmental support in homes and schools, where young people feel they can report these incidents, is the key to quick intervention for victims. Having these conversations, and teaching the harm that comes from sharing and reposting false images or narratives, is a lesson we are already trying to instill in everyone through digital literacy education, but the stakes are so much higher when young people’s mental health is on the line.
