
Whispers followed her offline. Online, the abuse exploded, unchecked: comments, ridicule, shares, screenshots. She had never consented to any of it. That hadn’t stopped anyone.
Within minutes, thousands had seen the content. Within hours, millions.
The nightmare had only begun.
Days passed before platforms responded. By then, the images had been seen, saved, and replicated. She was left asking: Who do I report this to? Will anyone believe me? Will the people who did this ever face consequences? Or will the blame land on me?
This is the reality for thousands of women and girls every single day. AI deepfakes are destroying real lives, and justice remains out of reach for most survivors.
Her story could be yours.
Deepfake abuse is the sharp edge of a much broader pattern of digital violence targeting women and girls. It’s gendered and it’s escalating. Right now, the systems designed to protect people are failing, while the tools to cause harm become cheaper, faster and easier to use every day.
Here’s what you need to know:
What is deepfake abuse and how common is it?
Deepfakes are images, audio or videos manipulated by artificial intelligence (AI) that make it appear someone said or did something they never did.
The technology itself isn’t new, but its weaponisation against women and girls is a newer phenomenon, and it’s accelerating fast.
- deepfake pornography made up 98 per cent of all deepfake videos online, and 99 per cent depicted women, according to a 2023 report.
- deepfake videos were an estimated 550 per cent more prevalent in 2023 than in 2019
- the tools to create them are widely available, usually free, and require very little technical expertise
- once posted, AI-generated content can be replicated endlessly, saved to private devices, and shared across platforms, making it nearly impossible to fully remove
Why survivors don’t report and what happens when they do
Underreporting is one of the biggest barriers to accountability. For survivors who do come forward, the justice system often becomes another source of trauma.
- Survivors are asked repeatedly to view and describe abusive content with police, lawyers and platform moderators, often while facing questions like, “Are you sure it’s not real?” or “Did you share intimate images before?”
- If a case reaches court, their clothing, relationships and past behaviour go under the microscope, not the perpetrator’s
- Harm doesn’t stay online: a UN Women survey found that 41 per cent of women in public life who experienced digital violence also reported facing offline attacks or harassment linked to it
Why deepfake creators rarely face justice
Despite the scale of harm, prosecutions are rare, platforms routinely fail to act and survivors are often re-traumatised when they try to seek help. Here’s why:
The law hasn’t caught up: fewer than half of countries have laws that address online abuse, and even fewer have legislation that specifically covers AI-generated deepfake content.
- most “revenge porn” or image-based abuse laws were written before deepfakes existed, leaving gaping loopholes
- in many countries, deepfake porn or AI-generated nude images fall into legal grey areas
- survivors are unsure whether the abuse is even illegal and whether perpetrators can be prosecuted
Enforcement is lagging: even when laws exist, investigators need digital forensics expertise, cross-border coordination and platform cooperation to build a case, and most justice systems lack adequate resources for any of these.
- evidence disappears fast as content spreads and copies multiply while perpetrators hide behind anonymity or operate across jurisdictions
- platforms are slow or unwilling to share data with law enforcement, especially in cross-border cases
- digital forensics backlogs mean cases stall before they even get started
Tech platforms are failing survivors: they have long hidden behind “intermediary” status to avoid responsibility for user-generated content.
What must happen now
While there are a number of nations and regions taking action (see text box below), stopping deepfake abuse requires urgent, coordinated action from governments, institutions and tech platforms.
Here are five things that need to happen:
1. Laws that actually cover deepfake abuse
Governments must pass legislation with clear definitions of AI-generated abuse, a focus on consent, strict liability for perpetrators, fast-track removal obligations for platforms and cross-border enforcement protocols.
2. Justice systems that can investigate and prosecute
Law enforcement needs training, resources and dedicated capacity to collect and preserve digital evidence. Digital forensics backlogs must be cleared, and international cooperation frameworks must become fast, functional and fit for purpose.
3. Platforms held accountable
Tech companies must be legally required to proactively monitor for and remove abusive content within mandatory timelines, cooperate with law enforcement and face real financial consequences when they fail to act.
4. Real support for survivors
Survivors should have access to trained, trauma-informed law enforcement and legal professionals, as well as free legal aid.
5. Education that prevents abuse
Digital literacy, including consent education, online safety, and what to do when experiencing abuse, needs to start young and reach everyone. Prevention is as important as prosecution.
UN Women warns this is not a niche internet problem: “It is a global crisis.”
- in a recent high-profile case, UK journalist Daisy Dixon discovered AI-generated, sexualised images of herself on X in December 2025, created using the platform’s own Grok AI tool; it took days for the platform to geoblock the function, while the abuse kept spreading
- deepfake abuse can serve as an online catalyst for so-called “honour-based crimes” in certain cultural contexts, where a perceived breach of honour norms on digital platforms can result in extreme physical violence against women, or even death
- more than half of deepfake victims in the United States of America contemplated suicide, according to recent research
Meanwhile, a handful of jurisdictions are starting to act:
- Brazil amended its criminal code in 2025, increasing penalties for psychological violence against women committed using AI or other technology to alter their image or voice
- the European Union’s Artificial Intelligence Act imposes transparency obligations around deepfakes
- The United Kingdom’s Online Safety Act prohibits sharing digitally manipulated explicit images, but does not address the creation of deepfakes and may not apply where intent to cause distress cannot be proven
- the United States Take It Down Act explicitly covers AI-generated intimate imagery and requires platform removal within 48 hours

