Publications
Article

Revenge of the Synth(etic Voice)

The Legal Intelligencer

With the creation of AI-powered digital deepfakes that mimicked the voices of famous musicians, presidential candidates, and even everyday people, vocal rights were at the forefront of negotiations and debate in 2024. This motivated leaders in the private sector to contemplate how to protect vocal rights without chilling the innovative promise of artificial intelligence (AI). At the same time, public sector stakeholders (like the U.S. Copyright Office) issued reports on how AI was being used for various digital replication techniques.

For its part, Congress began debating an early draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (the NO FAKES Act) of 2024, which would have provided uniform protections for vocal rights throughout the United States. With debate ongoing and the changing of administrations, the bill failed to garner final congressional support. And while some industry watchers may have thought the NO FAKES Act of 2024 was the last opportunity to pass this law, certain lawmakers have signaled “No, there is another …”

DeepSeek to Regulating Deepfakes—China’s Legal Landscape Takes Shape

China has emerged as a leader in the AI sphere with the advent of “DeepSeek,” an AI-driven high-performance language model. Just as the Lunar New Year began, international headlines were abuzz with DeepSeek’s staggering capabilities, which rivaled ChatGPT’s at a fraction of the cost.

Not surprisingly, DeepSeek’s debut in a country of more than a billion people prompted questions of ownership, copyright, consent, and more. From artists and voice actors wanting to protect their likeness and intellectual property, to thousands falling victim to deepfake scams, Chinese citizens have begun mounting pressure on courts and lawmakers to answer these questions in a regulatory framework.

Chinese Courts Uphold the Right to One’s Voice

In April 2024, the Beijing Internet Court ruled a defendant could not use an AI-generated version of a voice actor’s voice without her consent.

The case involved a plaintiff who had provided voice recordings to a company; that company sold the recordings to a second company, which generated an AI version of the plaintiff’s voice and sold it, in turn, to yet a third company.

In directing the defendant to pay damages to the plaintiff, the court underscored the need for clarity and consent in the era of AI-synthesized audio. In particular, the court ruled that the consent the plaintiff originally gave in her contract with the first company did not extend to the third company’s use. The court also remarked that a voice can remain recognizable to the public even after AI synthesis, so long as it evokes thoughts or emotions associated with the person’s identity. The decision was an important step not just in protecting the right to one’s voice, but in clarifying that consent or authorization for traditional recordings does not automatically extend to AI processing and commercialization.

These Are the Droids You’re Looking For: Lawmakers Clamp Down on Differentiating AI Content

Motivated in part by these concerns, the Cyberspace Administration of China recently released its “Measures for the Identification of Artificial Intelligence Generated and Synthetic Content” to ensure all AI-generated content is clearly labeled. This regulation, which takes effect on Sept. 1, 2025, mandates that AI service providers explicitly label AI-generated and synthesized content. The restriction spans all content media forms, from text and images to audio and video, and directs app stores to verify whether such providers are in compliance. The law, which also outlaws tampering with any such labels, aims to curb the spread of disinformation from AI-generated content.

A New Hope: The NO FAKES Act of 2025

Although similar efforts to protect vocal rights in America have not been successful to date, hope is not lost.

In early April 2025, a new version of the NO FAKES Act was reintroduced with renewed bipartisan support and industry backing (the 2025 Act). Similar to the original bill, it seeks to “protect intellectual property rights in the voice and visual likeness of individuals.” But the revitalized bill also adds several new protections for individuals, safeguards for online service providers, and clear enforcement mechanisms.

Some of the new key provisions include:

  • “Digital Fingerprinting”—Because vocal deepfakes are difficult to distinguish from authentic content, the 2025 Act requires that unique metadata identifiers, such as hash values, be used to identify and remove digital content. This will provide a dependable and accurate method of tracing digital deepfake content.
  • “Safe Harbors”—The 2025 Act also includes safe harbors for online service providers. If a provider “has adopted and reasonably implemented” policies to comply with notice-and-takedown provisions, then the provider falls within a safe harbor. This approach is similar to the Digital Millennium Copyright Act (DMCA), which allows copyright owners to have unauthorized content removed from websites. The 2025 Act also covers a broad range of online services, including websites, mobile applications, search engines, and digital music providers.
  • “No Duty to Monitor”—In response to concerns from the private sector regarding the difficulty of monitoring deepfake content, the 2025 Act contains a provision explicitly stating that online service providers have no duty to monitor for digital replicas. A provider need only take affirmative steps to remove deepfake content once a digital replica has been identified, which likewise parallels the notice-and-takedown provisions of the DMCA.
  • “Subpoenas”—As a new addition, the 2025 Act allows the holder of any intellectual property rights to obtain a subpoena directing an online service provider to disclose information that identifies the party or person that published the deepfake. Under these provisions, a court may issue the subpoena after a notice of the digital replica has been sent to the online service provider. This gives individuals who are the victims of deepfake content a strong tool to identify who generated it.

As shown above, the 2025 Act incorporates some of the most successful and effective tactics from other laws while addressing concerns from the private sector. Although the law continues to evolve while under consideration by Congress, online service providers should begin consulting with experienced counsel to plan for compliance with the NO FAKES Act of 2025 should it be enacted. It remains to be seen if, or how, the evolving Chinese regulatory framework will affect domestic law. But whether at home or abroad, the legal environment is starting to catch up to the tech, and that means things are moving faster than a pod racer on the desert planet of Tatooine.

"Revenge of the Synth(etic Voice)," by Jeffrey N. Rosenthal, Timothy J. Miller, and Deniz Tunceli was published in The Legal Intelligencer on May 2, 2025.

Reprinted with permission from the May 2, 2025, edition of The Legal Intelligencer © 2025 ALM Properties, Inc. All rights reserved. Further duplication without permission is prohibited.