Breaking Down the Intersection of Right-of-Publicity Law, AI

Law360

An artificial intelligence chatbot creates a deepfake video of celebrities sending a controversial political message. It's contentious. It's gripping. And it's a legal minefield. In a world where a voice, face and even persona can be spun up from scraps of data, the question isn't just can we — it's may we?

AI now makes it easy to synthesize a person's image, voice and likeness — raising core legal and ethical questions about when AI-assisted creation becomes unlawful appropriation.

This article examines how existing right-of-publicity law governs AI-generated voice-overs, deepfakes and deadbots; distills lessons from a recent ruling in Lehrman v. Lovo Inc., a case involving AI-generated voice clones; surveys pending legislative proposals; and offers practical guardrails for using AI without violating the right of publicity.

What is the right of publicity?

The right of publicity is a bundle of state law rights that protect a person's identity — name, image, voice and likeness — from unauthorized commercial use. This right is rooted in the idea that individuals have a commercial interest in their own persona and should be able to control how it is used or monetized.

While the right of publicity is most commonly invoked by public figures, performers and celebrities, it can apply to anyone whose identity is used for commercial gain.

There is no single federal statute. Rather, a patchwork of state statutes and common-law rules governs whether — and how long — a person controls the commercial use of their identity. Some states, like California and New York, have more robust statutory protections, while others rely on common law or provide only limited rights.

The scope of protection sometimes expands from name and likeness to distinctive voices, catchphrases, and even signature styles or mannerisms. In some states, these rights can even survive death, allowing heirs to control and profit from a deceased person's name, image and likeness.

How is it enforced?

The right of publicity is typically enforced through civil lawsuits brought by individuals — or their estates — whose identities were used without authorization.

Those aggrieved might allege a slew of federal and state law claims: copyright and trademark infringement; unfair competition, false designation of origin and false association under the Lanham Act; violations of state right-of-publicity statutes; violations of state consumer protection statutes; common-law misappropriation; fraud; conversion; unjust enrichment; and even breach of contract (e.g., violations of a platform's terms and conditions).

For example, in Lehrman v. Lovo Inc., filed in May 2024 in the U.S. District Court for the Southern District of New York, voice actors Paul Lehrman and Linnea Sage alleged that Lovo used recordings of their voices to create and sell AI-generated voice clones without authorization or compensation.

Lovo is an AI company that offers AI text-to-speech software services. It created AI-generated clones of the actors' voices from recordings provided through Fiverr.com, an online marketplace for freelance services.

Although the recordings were initially obtained for internal research, they were later used commercially. The actors sued, raising a litany of claims against Lovo, including breach of contract; trademark infringement, unfair competition, false affiliation and false advertising under the Lanham Act; copyright infringement; violations of New York's Civil Rights Law; violations of New York's consumer protection laws; fraud; unjust enrichment; and conversion.

Considering many of these issues in the first instance on Lovo's motion to dismiss, on July 10, 2025, the court permitted the actors to proceed on the following claims.

Breach of Contract Claims

The court found that the actors plausibly alleged the existence of valid contracts. The actors and Lovo's agents exchanged clear offers, acceptances and consideration (payment for recordings), with explicit restrictions on use.

The court held that online chat messages on Fiverr, supplemented by the platform's terms of service, satisfied New York's statute of frauds. The communications included all material terms and were authenticated by the parties' conduct.

New York Civil Rights Law, Sections 50 and 51: Right of Publicity/Voice

The court held that New York's Civil Rights Law covers unauthorized use of a person's voice, including digital replicas created by AI, even though the statute was amended to explicitly cover only digital replicas of deceased persons. The court reasoned that the law's purpose is to protect identity, and that its language is broad enough to encompass new technologies.

The court found the claims timely because each new use or sale of the AI-generated voice clone constituted a new violation, refreshing the statute of limitations.

New York General Business Law, Sections 349 and 350: Consumer Protection/False Advertising

The court allowed claims under New York's consumer protection statutes to proceed because Lovo allegedly misrepresented to its subscribers that they had full commercial rights to use the AI-generated voices, when in fact such use could violate the actors' rights under New York law.

Lovo's conduct was directed at the general public, not just a private dispute, and the misleading statements could affect a broad class of consumers. The actors plausibly alleged injury in the form of lost sales and diverted customers due to Lovo's misrepresentations.

Copyright Claims (Limited Scope)

The court allowed direct copyright infringement claims to proceed where Lovo allegedly used actual portions of Sage's copyrighted voice recordings in marketing and investor presentations.

The court dismissed, but granted leave to amend, the actors' claims based on Lovo's use of the recordings to train its AI software, noting that more factual detail was needed about how the training process constituted copyright infringement.

The court dismissed the remaining claims, i.e., trademark, fraud, unjust enrichment and conversion. Importantly, it reasoned that although there was no basis for categorically excluding voices from trademark protection, here, the actors' voices were not protectable because they did not function as source identifiers.

The court also found that essential elements of the remaining claims (fraud, unjust enrichment and conversion) were simply not present.

Cases like Lehrman v. Lovo highlight the limitations of federal intellectual property law, as well as the importance of state-level rights and clear contractual agreements in the AI era.

General Rules So Far

Federal IP laws provide a weak fit for AI outputs and identity cloning.

Trademarks protect source identifiers, not the sale of someone's identity. For example, in Lehrman v. Lovo, the Southern District of New York found that the actors' voices, while unique, were not used in a way that identified the source of goods or services, but rather were the product itself.

Copyright may not protect a voice per se, as voices are not generally considered original works of authorship under copyright law.

State publicity rights fill the gap.

New York, for example, allows claims for unauthorized use of voice under its Civil Rights Law, and many states recognize similar protections. These state laws can provide a remedy where federal law does not, especially as AI makes it easier to replicate and commercialize someone's identity at scale.

Postmortem rights are jurisdiction-specific.

Some states cut off rights at death. Others, like California and New York, extend them. A deadbot could be lawful in one state and actionable in another, creating a patchwork of risk for companies operating nationally.

Contracts count.

Platform direct messages, model releases and terms of service can be enforceable if they clearly limit use. Courts have found that even informal agreements, such as those made through direct messages, can constitute binding contracts if the terms are clear and agreed upon by both parties.

This means that companies and creators must pay close attention to the language in their agreements and the scope of any permissions granted.

Proposed Federal Legislation: Toward a Federal Right of Publicity

Congress is considering federalizing core aspects of the right of publicity to address AI-generated voice and likeness cloning. Two bills would create nationwide protections and remedies that supplement today's patchwork of state laws.

The No AI FRAUD Act, introduced in the U.S. House of Representatives in January 2024, though it has not advanced to date, would create a federal property right in an individual's voice and likeness, applicable to all persons. It would also authorize civil claims for unauthorized AI-generated replicas, closing gaps where state law may be limited or inconsistent. The goal is to establish uniform, nationwide standards alongside existing state publicity and consumer protection laws.

The NO FAKES Act, reintroduced in the Senate in April 2025, would establish a federal private right of action against the unauthorized use of a person's voice or likeness in highly realistic digital replicas. It would also incorporate a Digital Millennium Copyright Act-style takedown mechanism for online services, enabling the notice and takedown of infringing digital replicas.

If enacted, these bills would create federal causes of action for violations of a person's right of publicity from AI outputs, reducing forum shopping and uncertainty across states. AI companies, advertisers and users could face federal claims, and would need to protect themselves through clear consents and narrowly tailored licenses.

Both bills remain proposals and may change during the legislative process. Those interested should monitor changes to the scope (e.g., definitions of digital replica, exemptions, remedies and preemption) and be prepared to update their compliance playbooks to align with a federal framework for these AI outputs.

Where's the line for AI companies, advertisers and users?

This is the million-dollar question as AI-generated content becomes more sophisticated and accessible. Although legal and ethical boundaries are still being drawn, several key principles are emerging.

Consent is king.

The safest and most reliable way to avoid liability is to obtain clear, written consent from the individual whose identity (name, image, voice, likeness or persona) will be used. This is especially critical for commercial uses, such as advertising, endorsements or advocacy campaigns. Consent should be specific about the scope, duration and nature of the use. It should specify whether the AI will be trained on the data, whether the likeness or voice will be cloned, and how the output will be distributed.

Context matters.

The legal risk is highest when AI-generated content is used for advertising, marketing or other commercial purposes. Courts are more likely to find a violation of the right of publicity if the use suggests endorsement or is designed to sell a product or service. By contrast, expressive works (such as satire, parody or news reporting) may receive greater protection under the First Amendment and similar state law defenses, though this protection is not absolute.

Don't imply endorsement.

Even if you use a soundalike or a digital clone, you can be liable if a reasonable person would believe the real individual is endorsing the product, service or message. Avoiding explicit references is not enough if the overall impression is misleading.

Mind the map.

State law is a patchwork. Some states have strong right of publicity statutes that cover both living and deceased individuals, while others offer only limited or no protection. This means that a campaign that is legal in one state could be actionable in another. Companies operating nationally or globally must be especially careful to comply with the strictest applicable laws.

Be transparent and use disclosures.

While not always legally required, disclosing that a voice or likeness is AI-generated can help mitigate not only legal but also reputational risk. This is particularly important as consumers become more aware of deepfakes and digital manipulation. Some industry groups and regulators are considering rules that would require these types of disclosures in certain contexts.

Ensure contractual clarity.

If you are sourcing images, voices or likenesses from freelancers, platforms or agencies, make sure your contracts are explicit about how the material can be used. Contracts should state whether it can be used for AI training or cloning, and what rights (if any) are being transferred.

Courts have shown a willingness to enforce even informal agreements, such as those made via platform direct messages, if the terms are clear.

Keep ethical considerations in mind.

Beyond the law, there are reputational and ethical risks to using AI-generated images, voices and likenesses without consent. Consumers care about authenticity and respect for personal identity. Brands and creators should weigh not just what is legal, but what may be considered ethically right.

Conclusion

In short, the legal lines are still being drawn, but the best practice is to err on the side of consent, clarity and transparency. If you wouldn't use a person's real voice, image or likeness without their permission, you shouldn't use the AI-generated output either.

In the era of voice-overs, deepfakes and deadbots, legality and ethics converge on the same rule — respect the person behind the pixels.

"Breaking Down The Intersection Of Right-Of-Publicity Law, AI," by Jillian M. Taylor was published in Law360 on October 9, 2025.