Utah Becomes First State To Enact AI-Centric Consumer Protection Law

Skadden Publication / AI Insights

Stuart D. Levi, William E. Ridgway, David A. Simon, Meredith C. Slawe, Anita Oh

On March 13, 2024, Utah enacted the Utah Artificial Intelligence Policy Act (UAIP), which imposes certain disclosure requirements on entities using generative AI tools with their customers, and limits an entity’s ability to “blame” generative AI for statements or acts that constitute consumer protection violations.

Companies subject to the UAIP will need to ensure they have the appropriate disclosure regime in place, and other companies should consider whether the UAIP approach is a good business practice they should adopt. The UAIP goes into effect on May 1, 2024.

Defining Generative AI

The UAIP requirements only concern generative AI, which the act defines as “an artificial system that (a) is trained on data; (b) interacts with a person using text, audio or visual communication; and (c) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight” — in effect, the use of AI to generate content, such as chatbot responses. Non-generative AI tools, such as ones that might list product recommendations based on customer interests, are not subject to the UAIP.

Disclosure Obligations

Under the UAIP, those in “regulated occupations” (i.e., occupations that require a person to obtain a license or state certification), such as most health care professionals, must “prominently” disclose, at the beginning of any communication, that a consumer is interacting with generative AI or with materials created by generative AI. This disclosure must be made verbally before an oral exchange and through electronic messaging before a written exchange.

Although the UAIP does not specify what “prominently” entails, entities or persons in regulated occupations should assume that merely disclosing the use of generative AI in a privacy policy or terms of use likely will not be sufficient to satisfy this obligation.

Those outside “regulated occupations” but subject to Utah consumer protection laws must “clearly and conspicuously” disclose the use of generative AI if asked or prompted by a consumer. The law specifies neither how a consumer may pose this question nor how the disclosure should be made. Given the case law to date on what constitutes “clear and conspicuous” notice to consumers (such as cases analyzing what is required to form a binding agreement), however, businesses should assume that merely directing an inquiring consumer to a website’s terms of use or privacy policy may not be sufficient.

Companies Are Responsible for Generative AI Output

Under the UAIP, a company that has violated a Utah consumer protection law cannot defend itself by arguing that it was the generative AI tool that made the violative statement, took the violative act or was used in furtherance of the violation. In effect, companies subject to the UAIP should view statements “made” by a generative AI tool no differently than statements made by its own employees.

Fines and Penalties

While the UAIP does not provide for a private right of action, the Utah Division of Consumer Protection (UDCP) may impose an administrative fine of up to $2,500 per violation, and courts are empowered, in actions brought by the UDCP, to impose such fines, enjoin the unlawful activity and order disgorgement of any money received in violation of the UAIP. The Utah Attorney General may also seek $5,000 per violation from any person who violates such an administrative or court order.

Other Provisions of the UAIP

While the UAIP imposes the foregoing obligations on AI usage, it also seeks to encourage AI innovation. To that end, the UAIP creates an Office of Artificial Intelligence Policy, which is tasked with creating and administering an “Artificial Intelligence Learning Laboratory Program” (AI Lab) and with consulting businesses and other stakeholders about AI regulatory proposals.

The AI Lab provides a mechanism for companies to apply for 12 months of “regulatory mitigation” (with a single 12-month extension) while they develop AI systems. Such mitigation can include reduced fines for violations and cure periods before fines are assessed. The program is effectively a regulatory sandbox for AI development in Utah.

Key Points

  • Companies that are subject to the UAIP need to put a compliant disclosure regime in place by May 1, 2024. For those in regulated occupations, this means providing a prominent disclosure before the user engages with any generative AI content, such as a prominent text statement before an AI-enabled chatbot launches. Companies in non-regulated occupations will need a means of detecting whether a user has asked if they are engaging with a generative AI tool (an inquiry that could take many forms) and of responding to that question. Companies should keep in mind that such inquiries might be posed to the generative AI tool itself; such tools will therefore need to be programmed to respond “clearly and conspicuously” to that inquiry (see the illustrative sketch after this list). Employees who interact with consumers will also need to be trained on how the company is using generative AI and how to respond to inquiries they may receive.

  • Companies that are not subject to the UAIP may want to consider whether a disclosure regime for responding to user inquiries about whether they are interacting with a generative AI tool is a good business practice to foster transparency with their users.
  • The provision of the UAIP that prohibits companies from “blaming” generative AI for a statement made or action taken serves as an important guidepost for companies developing AI policies. In general, companies should not assume that they will be able to treat statements generated by AI as if they were made by an unaffiliated third party that is responsible for its own actions. For example, in February 2024, the British Columbia Civil Resolution Tribunal found that Air Canada negligently misrepresented its bereavement airfare policy because of a statement made by the company’s customer service chatbot. Companies should seek out AI tools that are developed and trained in a manner that minimizes the risk of erroneous information being presented to customers, and should also consider adding disclaimers that content generated by AI is for general information purposes only and that the company’s (human-generated) official terms and policies are what govern.

  • We expect that, in the absence of federal legislation, individual states will continue to enact laws regulating the use of AI, including requiring disclosures as to how AI is being used and making companies responsible for statements made by generative AI tools. This could lead to a patchwork of AI laws with which companies must comply, increasing costs and requiring companies to establish and maintain robust AI compliance programs.
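
For companies considering how a chatbot might operationalize the inquiry-and-disclosure obligation described in the first bullet above, the following is a minimal illustrative sketch in Python, not a compliance recommendation. The inquiry patterns, the disclosure wording and the generate_reply callable are all assumptions introduced for illustration; a production system would need far broader inquiry detection, and the disclosure language itself should be reviewed by counsel against the UAIP’s standards.

```python
import re

# Hypothetical phrasings a consumer might use to ask whether they are
# interacting with generative AI. Illustrative only; real-world coverage
# would need to be far broader (and possibly model-based).
AI_INQUIRY_PATTERNS = [
    r"are you (an? )?(ai|bot|chatbot|robot|human|real person)",
    r"am i (talking|chatting|speaking) (to|with) (an? )?(ai|bot|chatbot|human)",
    r"is this (an? )?(ai|bot|chatbot|automated)",
]

# Hypothetical disclosure wording; actual language should be reviewed by
# counsel against the UAIP's "clearly and conspicuously" standard.
DISCLOSURE = (
    "You are interacting with a generative artificial intelligence tool, "
    "not a human representative."
)

def is_ai_inquiry(message: str) -> bool:
    """Return True if the message appears to ask whether the user is
    interacting with a generative AI tool."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in AI_INQUIRY_PATTERNS)

def respond(message: str, generate_reply) -> str:
    """Produce the chatbot's reply, prepending the disclosure whenever the
    user asks whether they are talking to an AI."""
    # generate_reply is any callable returning the underlying model's answer.
    reply = generate_reply(message)
    if is_ai_inquiry(message):
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

# Example: respond("Am I talking to a bot?", lambda m: "How can I help?")
# returns the disclosure followed by the model's answer.
```

A pattern-based detector like this is only a starting point; many deployments would instead instruct the model itself to disclose when asked, but the obligation to answer clearly and conspicuously remains the same either way.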

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
