#49 Deepfake Abuse and AI Law: Spain Leads the Fight Against Non-Consensual AI Images

The past few weeks have brought a case into the spotlight that is as disturbing as it is revealing. Allegations by Colleen Fernández against her ex-husband, Christian Ulmen, have triggered a broader debate about the misuse of artificial intelligence in deeply personal contexts. Like many who have followed the story, I found the details profoundly shocking. At the center of the case lies a growing and urgent issue: AI-generated deepfake pornography and non-consensual synthetic images.

What Happened in the Ulmen Case?

For readers unfamiliar with the case: Fernández accuses her ex-husband of creating and distributing AI-generated nude and pornographic images depicting her without consent. These deepfakes were allegedly shared with colleagues and acquaintances over an extended period.
Although the images were entirely synthetic, they appeared highly realistic. This highlights a crucial point: when it comes to deepfakes, the harm does not depend on whether the content is “real.” The reputational damage, emotional distress, and loss of control are very real.

Deepfakes and Digital Violence

Deepfakes are AI-generated or manipulated images, videos, or audio that convincingly imitate real people. What makes them particularly dangerous is their accessibility – tools that once required advanced technical knowledge are now widely available.
In cases like this, deepfakes become a form of digital violence. They allow individuals to fabricate intimate scenarios, undermine personal dignity, and distribute harmful content at scale. The technology transforms private abuse into a public and often irreversible violation.

Spain’s Legal Response: From Gaps to Action

The case is being addressed in Spain, where the legal system is currently undergoing significant transformation in response to AI-related harms.
Traditionally, Spanish law has relied on strong protections of honor, privacy, and personal image under constitutional and civil law. These rights allow victims to seek injunctions and damages when their dignity or reputation is violated.
What is changing – and this is crucial – is how the law treats synthetic content. Recent legal developments in Spain explicitly recognize that AI-generated images and voices can constitute unlawful interference with a person’s fundamental rights, even if the content is entirely fabricated (see ECIJA, 2026).
At the same time, Spain is actively introducing new rules to regulate deepfakes more directly. Proposed legislation strengthens consent requirements for the use of a person’s image, voice, or likeness in AI-generated content. It clarifies that:

  • explicit consent is required,
  • minors require heightened protection, and
  • publicly available images – such as those on social media – cannot simply be reused for AI generation.

While existing criminal law provisions can still apply – especially in cases involving harassment or the dissemination of intimate content – they were not designed with AI-generated material in mind. Spain’s current reforms aim to close exactly this gap.

Germany: Similar Principles, Slower Adaptation

Germany offers comparable protection through personality rights, privacy laws, and restrictions on the use of images. Victims can pursue both civil claims and criminal remedies.
However, the German framework still largely relies on traditional concepts such as “real” images or recordings. This creates uncertainty when dealing with purely AI-generated content. Courts are only beginning to address whether synthetic images fall within existing legal definitions.
Compared to Spain, Germany’s approach appears more cautious and less explicitly adapted to the realities of generative AI.

Why Deepfakes Challenge the Law

Deepfakes blur fundamental legal distinctions: between reality and fabrication, identity and imitation, consent and simulation. This creates a structural challenge for legal systems that were not designed for synthetic media.
The core issue is no longer authenticity but impact. If a fabricated image can damage reputation, violate dignity, and cause psychological harm, the law must respond accordingly.

Final Thoughts

The Ulmen case is not just a personal dispute – it is a warning signal. It shows how easily AI can be misused to create deeply harmful content, often with devastating consequences. Spain’s evolving legal approach suggests a clear direction: stronger consent rules, broader protection of identity, and a recognition that synthetic content can be just as harmful as real material. The question now is whether other jurisdictions will follow quickly enough. Because one thing is certain: AI is moving fast, and the law can no longer afford to lag behind.

Stay curious, stay informed, and let's keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Check out previous posts for more exciting insights!