Tennessee Law Addresses Proliferation of Deepfakes

Skadden Publication / AI Insights

Stuart D. Levi, Anthony J. Dreyer, David A. Simon, Ken D. Kumayama, William E. Ridgway, Anita Oh

Tennessee has enacted the Ensuring Likeness, Voice and Image Security (ELVIS) Act, which aims to protect individuals from the use of their persona in connection with “deepfakes” (i.e., fake content generated by artificial intelligence (AI) that a user is likely to mistakenly believe is legitimate). 

The act, which was signed into law on March 21, 2024, is notable in that it targets not only those who create deepfakes without authorization, but also the providers of the systems and tools used to create them.

The law takes effect on July 1, 2024, and is the first state law to directly address the commercial use of AI to generate deepfakes.

Given the state’s prominent role in the music industry, deepfakes that replicate a singer’s voice are a driving force behind the law. The name of the act, ELVIS, is a nod to this issue, which has been of growing concern to the industry. In 2023, for example, an artist using the pseudonym Ghostwriter created a deepfake song that simulated the voices of the artists Drake and The Weeknd without their approval, and many believed the song was legitimate when first released. 

The ELVIS Act specifically addresses the deepfake issue in three ways. 

First, it expands the state’s existing personal rights law (the Personal Rights Protection Act of 1984, or PRPA), which previously protected only a person’s name, image and likeness, to explicitly include protections for an individual’s “voice” (defined as an individual’s actual voice as well as simulations of that voice). The act also expands the PRPA by prohibiting any unauthorized “commercial use” of a person’s personal rights; the PRPA had previously been limited to uses “in advertising.”

Second, the act creates a private right of action against anyone who unlawfully publishes, performs, distributes, transmits or otherwise makes available to the public an individual’s voice or likeness, with knowledge that use of the voice or likeness was not authorized by the individual. 

Third, it creates a private right of action against anyone who “distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual.” Where the individual is a minor, authorization is required from a parent or guardian, and where the individual is deceased, authorization is required from an executor or heir. While this provision does not mention AI explicitly, the focus of the prohibition is clear. 

Other states have started to address the use of deepfakes in connection with political advertising. For example, on March 21, 2024, Wisconsin enacted a law requiring that any political communication must disclose whether it includes any “synthetic media” (defined as audio or video content substantially produced by means of generative AI).


  • The technology prohibition outlined in the ELVIS Act creates a number of ambiguities that those subject to the act will need to assess when conducting a risk analysis of the tools they are distributing or making available.
    • It is not clear who might be covered by the clause “distributes, transmits, or otherwise makes available.” This could be read to encompass not only tool developers but also sites that aggregate and make available tools from third-party AI providers, as well as any site or platform that provides access to a third-party tool. 
    • The act covers software and tools whose “primary purpose or function” is to replicate an individual’s persona without authorization. However, many developers of tools capable of creating deepfakes do not control, or take any position on, whether users obtain authorization from the person who is the subject of the deepfake. It is therefore unclear from the wording of the law what would be required to establish that a developer created a tool whose primary purpose is unauthorized replication.
    • The act does not address cases where a tool was distributed for general purpose use but then becomes primarily used to create deepfakes without authorization.
    • It is not clear how the law would apply to a platform that makes available a tool to create parodies and where the outputs are clearly labeled as AI-generated.
    • The act does not provide guidance on the level of “authorization” that is required to avoid violating the act, nor who must seek such authorization. 
  • Given these potential ambiguities, companies distributing or making available any technology that is primarily used to create deepfakes of individuals — even if that was not the original intent of the tool — should determine whether appropriate authorization from the subject individuals was obtained, and closely monitor developments in this space.
  • Efforts to address deepfakes at the federal level have, to date, not gained much traction. Two proposals are currently under consideration in Congress: the No AI FRAUD Act (introduced in the House in December 2023) and the NO FAKES Act (a Senate bill that has not yet been formally introduced). In that void, we expect to see additional state legislation, similar to the ELVIS Act, as states look to protect individuals against deepfakes, particularly in states that already have a right of publicity law.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.