#10 Law & Code: Tales from the AI Courtroom
The Real Black Widow

Scarlett Johansson vs. AI and the Fight for Digital Identity

Welcome to the AI Wild West! In the ever-unfolding drama between Hollywood and Silicon Valley, a new kind of conflict has emerged: not over scripts or sequels, but over faces, voices, and digital clones. At the heart of this legal and ethical rodeo? None other than Scarlett Johansson. Yes, that Scarlett. Marvel megastar, feminist icon, and—most recently—the unwilling poster child of the AI ethics debate.

In a world where AI can now generate eerily lifelike images and voices, Johansson has become one of the first major celebrities to draw a very public line in the digital sand. This isn’t just a catfight between tech and Tinseltown—this is a legal showdown that may define the future of digital identity, intellectual property, and the wild new world of synthetic media.

So, buckle up. We’re diving deep into the who, what, why, and WTF of the Scarlett Johansson vs. AI saga.

Why 'Law & Code'? A Note from the Author

Welcome to Law & Code: Tales from the AI Courtroom, a new series where we dive headfirst into the legal drama unfolding at the intersection of artificial intelligence and human identity. You might be thinking: “Wait—are there even enough cases to fill a whole series?” And you wouldn’t be wrong. The truth is, we’re still in the early days. The number of court cases involving AI-generated content, deepfakes, and synthetic identity is small—but growing fast.

This series exists because we can’t afford to wait until the casebooks are full. By highlighting landmark disputes, unexpected showdowns, and the legal gray zones popping up around AI, I hope to spark conversation, curiosity, and maybe even a little outrage. While the laws are limited and often outdated, the stakes are enormous—from the future of personal rights to the credibility of media, creativity, and consent.

So consider this your legal expedition into an emerging frontier. We’ll explore the legal gaps, the ethical landmines, and the cases that just might shape the rules for generations to come.

Laying the Legal Foundation: The Right of Publicity—And Why It Matters

Before we dive any deeper into courtrooms and cease-and-desists, let’s unpack the star of our legal show: the right of publicity. It may sound like a backstage pass for A-listers, but it’s actually a cornerstone of personal identity law—especially in the United States.

The right of publicity refers to an individual’s legal right to control and profit from the commercial use of their name, likeness, voice, signature, and other distinctive personal attributes. This means if someone wants to slap your face on a T-shirt, mimic your voice for a radio ad, or deepfake you into a toothpaste commercial, they need your explicit permission—or they might get a friendly letter from your lawyer (and then a less friendly one).

Originally designed to protect celebrities and public figures from unauthorized endorsements or misleading associations, this right has evolved into a broader tool for protecting personal identity in the digital age. It’s recognized in over 25 U.S. states, with California and New York being the most notable, though the exact scope and strength of protections vary significantly. Some states even extend this right posthumously—meaning your digital ghost could theoretically hire a lawyer from beyond the grave.

Importantly, while it’s especially relevant to actors, athletes, and influencers, the right of publicity isn’t exclusive to them. If your identity has commercial value—even locally or within a niche industry—you can invoke this right. Think: podcast hosts, TikTok creators, YouTubers, or even that teacher who went viral for dancing in class.

The catch? There’s no federal law standardizing it. So depending on where you are, enforcement can range from a tightly worded court order to a polite shrug. And with the rise of AI-generated likenesses, voices, and content, many legal experts argue that we need to rethink and federalize the right of publicity to cover the emerging “synthetic self.”

In short: this right is the legal equivalent of “hands off my face”—and as AI gets more powerful, it might become your best digital defense.

Zooming Out: The European Approach to Personality and Data Protection

Across the pond, things look a little different—but no less protective. While Europe doesn’t use the term “right of publicity,” it achieves similar outcomes through a blend of personality rights and data protection laws.

In countries like Germany, the legal concept of the Allgemeines Persönlichkeitsrecht (general right of personality) plays a key role. Rooted in constitutional law, this right protects an individual’s dignity, identity, and control over how they are portrayed. German courts have used it to address unauthorized image use, false representation, and the exploitation of personal identity for commercial gain—often with even stricter standards than in the U.S.

On an EU-wide level, we also have the General Data Protection Regulation (GDPR). It doesn’t explicitly mention deepfakes or avatars (yet), but it does consider biometric data, voiceprints, and facial recognition data as sensitive personal data. That means if an AI tool is trained on your face or voice without your consent, it could very well be a GDPR violation—especially if the use is commercial or public-facing.

When combined, the GDPR and national personality rights offer a robust framework. Unlike the fragmented U.S. system, EU protections are broader and often don’t require proof of commercial harm—only that your rights as an individual were infringed. The bar for consent is also higher, making it harder for AI developers to operate in a legal gray zone.

In essence: the U.S. talks about celebrity rights, while Europe talks about human dignity. Both roads lead to Rome—but the European route wears a more philosophical robe (and probably files a complaint with the Datenschutzbeauftragter, the data protection officer).

So whether you’re a pop star in L.A. or a philosophy professor in Heidelberg, the message is clear: your digital self deserves legal protection. And as AI keeps evolving, the intersection between American publicity rights and European personality rights might just be where the future of digital identity law is born.

And now, the case: where legal theory meets digital reality—starring Scarlett, code, and a whole lot of cease-and-desists.

Part I: The Spark – A Voice, A Video, and a Viral Ad

The controversy began in October 2023, when an AI-generated ad promoting an obscure app called “Lisa AI: 90s Yearbook & Avatar” began circulating online. The video featured a woman who looked and sounded exactly like Scarlett Johansson. We’re talking Marvel-movie-level mimicry. Except there was one tiny detail missing: Scarlett had nothing to do with it.

Cue the legal sirens.

The ad began with what appeared to be actual footage of Johansson behind the scenes on a Marvel set. Then, the deepfake magic kicked in: Johansson’s voice (generated by AI), promoting the app, seamlessly took over. It was slick. It was creepy. And it was completely unauthorized.

But how did it come to this? In the months leading up to the ad, a boom in “AI avatar” apps had taken over app stores and social media, allowing users to create stylized portraits, retro avatars, and even talking clones of themselves and others. These apps scraped vast amounts of image and video data—often from public sources and social media—to train their models. And while most users delighted in turning themselves into anime characters or 90s prom queens, developers realized that celebrity likenesses drew the most engagement. The temptation to include AI versions of famous faces—without asking for permission—became too great for some companies to resist.

In Scarlett’s case, developers used existing clips from her past interviews and movie scenes to train a model that could reproduce her voice with shocking accuracy. Combined with a model trained on her visual likeness, the result was a seamless digital facsimile. What was marketed as a quirky nostalgia app suddenly became a legal and ethical minefield. Even more troubling, the app never disclosed the use of her likeness in any terms of service or public materials, making the impersonation not just shady—but arguably deceptive.

Johansson’s team responded swiftly with a cease-and-desist letter. Within hours, the ad was pulled. But the damage was already done. The video had been viewed millions of times. And for the legal world, it opened Pandora’s box.

Part II: The Legal Battlefield – Right of Publicity and Deepfake Dilemmas

To understand why this is such a legal mess, let’s break down the key legal issue here: the right of publicity.

This right gives individuals—especially celebrities—the ability to control how their name, image, likeness, and voice are used commercially. In many U.S. states, it’s protected under both statutory and common law. Violate it, and you could be facing a lawsuit faster than Iron Man’s repulsor blast.

Here’s the rub: AI-generated content blurs the lines. If an AI creates a new voice or image that merely resembles a celebrity, is it really their likeness? And if no actual footage or audio was used, does it still count as a violation?

Johansson’s case pushes us into the gray zone. The app creators didn’t use her actual voice—they used AI to simulate it. But if the result is indistinguishable to the public, does it matter?

Legal experts (myself included, thank you very much) agree: this could set a precedent.

The case also highlights how ill-equipped existing laws are to handle the nuanced ethical issues of synthetic media. Legal scholars are now seriously debating whether new rights—like a “digital likeness right”—should be enshrined into law. This right would go beyond traditional publicity laws and offer broader protection over one’s biometric and digital attributes, especially in light of generative AI models that can synthesize entire personas from public data.

Part III: Tech’s Defense – Innovation or Invasion?

From the tech side, the arguments are equal parts ambitious and slightly tone-deaf. Developers argue that generative AI is transformative—it creates new content, not copies. Some even claim it’s a form of free expression, protected under the First Amendment.

But when that “expression” involves mimicking someone’s face or voice for profit, courts tend to raise a skeptical eyebrow.

Companies like OpenAI have since tiptoed around the issue. While their tools can generate hyperrealistic images and voices, they’ve implemented internal guardrails to avoid creating content that resembles real people—especially Scarlett Johansson.

Why the special treatment? Because she’s already proven she’s not afraid to lawyer up. Other celebs may shrug off a lookalike AI image. Johansson will send you a cease-and-desist and follow up with an actual lawsuit. That’s how legends are born.

More importantly, this legal friction is forcing tech companies to think harder about the ethical underpinnings of their work. It’s not enough to hide behind disclaimers or terms of service. Ethical AI design means embedding safeguards into the architecture—like consent-based training data, opt-out registries for public figures, and clearer disclosures for synthetic content.

Part IV: Beyond Scarlett – A Celebrity Minefield

Johansson may be the most high-profile example, but she’s not alone. Tom Hanks, Keanu Reeves, and even rapper Drake have spoken out against AI-generated versions of themselves popping up in everything from ads to fake music tracks.

The legal term you’ll be hearing a lot more of in the next few years? Synthetic media. It’s a catch-all for AI-generated content that mimics real humans. And right now, the law is playing catch-up.

In the EU, the Digital Services Act and AI Act are slowly addressing these issues, focusing on transparency and accountability. In the U.S., legislation like the NO FAKES Act (yeah, that’s the real name) is in the works to regulate AI-generated impersonations.

But as of now? It’s still the Wild West.

And celebrities aren’t just defending themselves—they’re going on the offensive. Some stars are proactively licensing digital versions of themselves, creating “official” avatars for brand deals, film appearances, and fan experiences. This isn’t just defensive lawyering—it’s a whole new business model. Think of it as Hollywood 2.0: where your agent negotiates for your physical and your digital self.

Part V: A European Perspective – Striking the Balance Between Innovation and Integrity

Across the Atlantic, the European Union is taking a more preemptive and structured approach. The AI Act, one of the world’s first comprehensive AI regulations, classifies AI applications into risk categories: remote biometric identification counts as high-risk (and some uses are banned outright), while synthetic media such as deepfakes triggers transparency obligations, meaning AI-generated or manipulated content must be clearly labeled as such. Developers therefore face strict requirements around transparency, data sourcing, and user consent.

Add to that the GDPR, which already grants individuals extensive rights over their personal data—including biometric data such as voice and likeness—and you’ve got a potent legal cocktail that puts individual privacy at the forefront. Moreover, the Digital Services Act (DSA) mandates platform accountability, including obligations to act against deceptive or non-consensual deepfakes. While enforcement details are still being fine-tuned, the EU is sending a clear message: consent and authenticity aren’t optional—they’re non-negotiable.

These frameworks may become blueprints for other jurisdictions, especially as AI companies operate globally. For Johansson and others concerned with their international image, European law may offer a more robust fallback—even if U.S. protections are still lagging behind.

Part VI: What This Means for You (Yes, You Too!)

You might be thinking, “I’m not a celebrity, why should I care?” Well, buckle up, because this isn’t just about red-carpet dwellers.

The tools used to generate Scarlett-level deepfakes are publicly accessible. Anyone with a decent GPU and some free time can create eerily accurate voice clones or face swaps. If you’ve ever posted a TikTok, a podcast, or even a spicy selfie—your data could become training fodder.

So, this isn’t just a celebrity issue. It’s a human rights issue. It’s a privacy issue. It’s about control over your own identity in a world where your digital twin might outlive you—and take up a side hustle selling skincare products you never approved.

Schools, workplaces, and even online dating platforms may soon have to grapple with these realities. Could you be catfished by an AI? Could your résumé be auto-generated by a bot trained on your public profiles? The lines are blurring, and fast.

Part VII: Consequences and the Road Ahead

Scarlett’s legal stand is already influencing the tech world. Several AI platforms now explicitly prohibit generating content that resembles real people without their consent. Watermarking, transparency tools, and metadata tagging are being rolled out to help distinguish synthetic from authentic content.

But these are just band-aids. The real question is whether we need new legal frameworks entirely—ones that recognize digital identity as a fundamental right.

Will AI creators need to license a person’s likeness like music producers license a song? Will we see agencies representing “digital twins” for hire? Will lawyers need to specialize in synthetic impersonation law? (Spoiler: yes. Start updating your CVs.)

Johansson’s case is a turning point. It’s forcing lawmakers, technologists, and the public to confront uncomfortable questions about identity, consent, and the price of innovation.

The ripple effects are already visible: universities are adding AI ethics to their law curriculums, legal tech startups are offering deepfake detection tools, and advocacy groups are pushing for a global “digital bill of rights.” What began as one celebrity’s fight might just shape an entire legal era.

Final Thoughts

In many ways, Scarlett Johansson has become the Black Widow of the digital rights world—poised, lethal, and not afraid to burn bridges if it means protecting her turf.

Her battle isn’t just about one rogue app—it’s about setting boundaries in a world that increasingly ignores them. It’s about reminding Silicon Valley that consent matters. That creativity doesn’t mean carte blanche. That your face, your voice, your self—aren’t just data points.

This isn’t the last case we’ll see. But it’s the first of many that will shape how we define reality, reputation, and rights in an AI-driven world. And for that, Scarlett, we salute you.

Because sometimes, to save humanity from the robots, you don’t need Iron Man. You just need a good lawyer.

Stay curious, stay informed, and let’s keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Check out previous posts for more exciting insights!