AI and the New Face of Sextortion

There was a time when sextortion followed a predictable pattern: stolen private images, hacked accounts, or moments of misplaced trust. The victims were often people who had shared intimate content with the wrong person.

Over the past year, cybersecurity researchers uncovered multiple publicly accessible databases tied to AI image-generation tools. Large volumes of the generated images were explicitly sexual. Many appeared to depict women in revealing or nude states. Some suggested attempts to manipulate images to make adults appear younger. Others went a step further, using ordinary photos of real people, likely taken from social media, to create explicit content that never originally existed.

Sextortion is no longer dependent on real images. With generative AI, it can now be manufactured.

What Sextortion Really Means Today

Sextortion, a combination of “sexual” and “extortion”, is a form of blackmail in which someone threatens to release intimate or compromising content unless the victim complies with demands. These demands are often financial, but not always. Traditionally, this relied on:

  • Stolen private images
  • Leaked videos
  • Coercion during personal interactions

But generative AI has changed the mechanics completely. Today, a person does not need to have ever taken a compromising photo to become a target. A single publicly available image, such as a selfie, a holiday picture, or a professional headshot, can be enough.

AI tools can now:

  • Remove clothing from images with convincing realism
  • Generate entirely fake nude bodies and attach real faces
  • Produce manipulated videos in minutes

What once required advanced editing skills and hours of effort can now be done by typing a few lines of text. That accessibility is what makes this shift so significant. The barrier to entry has effectively disappeared.

How the AI Database Exposure Revealed the Scale of Abuse

The risks of this technology became even clearer following an investigation linked to the South Korean platform GenNomis.

Security researcher Jeremiah Fowler discovered an unsecured database connected to the platform, one that required no password or authentication to access. What he found inside offered a rare, unfiltered look into how generative AI tools are actually being used.

The database contained more than 95,000 records and over 45GB of data, the majority of which were AI-generated images. A significant portion of this content was explicit.

Some of the most concerning findings included:

  • Large volumes of AI-generated sexual images involving adults
  • Evidence of face-swapping, where real individuals’ faces were placed onto explicit bodies
  • Images suggesting the use of real photographs as source material for generating non-consensual content
  • Prompts used to generate these images, revealing explicit and disturbing instructions

More troubling still were indications that some content involved the depiction of individuals made to appear as minors, including manipulated representations of public figures.

The exposure did not appear to include usernames or login data, but that does little to reduce the seriousness of what was found. The database showed, in practical terms, how easily these tools can be used to create harmful, non-consensual imagery at scale.

Shortly after the discovery was reported, access to the database was restricted. Not long after media enquiries were made, the platform’s websites became unavailable.

Whether or not the platform intended for this type of content to be created is almost beside the point. The design of the system, including features like image generation, editing, and face manipulation, made it possible. And that is the real issue.

Why This Changes the Sextortion Landscape

What cases like this demonstrate is simple but uncomfortable. The threat is no longer limited to what exists; it now includes what can be created. This changes how sextortion works in three important ways:

Anyone can be targeted: A publicly available photo is enough. No prior interaction is required.

The content can look convincing: AI-generated images are now realistic enough to be believed, especially under pressure.

The attacker’s job is easier: There is no need to hack accounts or gain trust. The content can be generated on demand.

This significantly lowers the effort required for attackers while increasing the psychological impact on victims. Even if the content is fake, the damage it can cause is real.

Protecting Yourself in an AI-Driven Threat Environment

The uncomfortable truth is that there is no single action that eliminates this risk completely. However, there are practical steps that reduce exposure significantly.

Treat public images as reusable data: Any image posted publicly can be downloaded, modified, and reused. This includes profile pictures. Limit what you share openly, especially high-resolution or personal images.

Lock down your social media visibility: Review privacy settings across all platforms:

  • Restrict who can view your photos
  • Limit profile visibility to trusted contacts
  • Remove old or unnecessary images

This reduces the pool of material available for misuse.

Be cautious with unknown contacts: Many sextortion attempts still begin with direct interaction. Avoid engaging with accounts that:

  • Seem recently created
  • Have little to no real activity
  • Quickly steer conversations toward personal topics

Understand that “evidence” can be fabricated: If you are ever threatened with compromising images, remember:

  • The content may be entirely AI-generated
  • Panic is part of the attacker’s strategy

Do not respond immediately or comply with demands.

Take early action if targeted: If you suspect an attempt at sextortion:

  • Stop communication
  • Preserve evidence (messages, usernames, timestamps)
  • Report the account on the platform
  • Inform someone you trust

Silence and isolation are what attackers rely on.

Conclusion

The conversation around sextortion used to focus on poor judgement or risky behaviour. That framing no longer holds. We are now in a phase where technology allows harm to be created without consent, interaction, or even awareness.

It is no longer about what you’ve done but about what someone else can generate. And that means the responsibility for safety is no longer just personal. It is systemic, technological, and increasingly urgent.
