Key Points
- While the Trump administration has set out to promote AI, most recently by targeting regulatory obstacles through a December 2025 executive order, Congress and the states have been exploring their own potential controls on the technology.
- A number of states have enacted laws to protect against perceived risks associated with the use of AI, and others are debating proposed regulations.
- The potential harm stemming from interactions with chatbots — particularly for minors — has drawn scrutiny from congressional committees as well as state legislatures and attorneys general.
- With so many investigations and proposals in the works, along with the ongoing federal response, AI developers and companies employing the technology will need to closely monitor developments on many fronts.
__________
In a few short years, artificial intelligence (AI) has become central to innovations in industries as diverse as medical research and entertainment, and it has become a defining force behind many of the policies driving geopolitical competition.
Against that backdrop, the Trump administration chose to pivot the U.S. government’s messaging and strategy on AI from the “safety first” approach of the Biden administration to one of American competitiveness and AI dominance. This change was on display in July 2025, when the White House released its “Winning the Race: America’s AI Action Plan,” heralding AI’s potential for economic growth. (See our July 30, 2025, client alert “White House Releases AI Action Plan: Key Legal and Strategic Takeaways for Industry.”)
The plan also included recommendations advocating the reconsideration of existing regulations and the suspension of investigations viewed as disproportionately stifling AI innovation. The report was widely seen by policymakers, business leaders and pundits as a complete victory for those advocating against AI-focused regulations and enforcement investigations.
Then, in December 2025, President Donald Trump issued an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” seeking to redefine the landscape of AI regulation in the United States. The order aims to establish a single, national regulatory framework for AI, pushing to streamline AI oversight, reduce regulatory fragmentation and bolster American competitiveness. (See our December 15, 2025, client alert “White House Launches National Framework Seeking to Preempt State AI Regulation.”)
While the order seeks to preempt most state-level AI laws, it notably carves out otherwise lawful child safety protections, leaving them within state authority and outside the order’s preemption priorities.
As federal agencies prepare to implement the new framework and Congress continues to debate comprehensive AI legislation, this regulatory environment remains dynamic and complex.
Regardless of the administration’s ambitions, the reality is far more complicated. Even after the White House released its July 2025 action plan, different branches of government at both the federal and state levels continued to actively advance their own AI agendas with intensifying investigations, enforcement actions and legislative proposals targeting AI.
Even now, it is similarly unlikely that the new executive order will prevent various state and federal bodies from issuing AI regulations or pursuing AI-related investigations. If anything, the expected wave of legislation is only just beginning. Businesses that ignore this reality by focusing solely on the administration’s policies do so at great legal peril.
Chatbot Traction Leads to Government Action
One area of particular concern for policymakers is the deployment of AI-powered user interfaces, particularly generative AI chatbots. As customer-facing chatbots are used across industries — including in financial services, health care and consumer products — policymakers are raising alarms about risks for both companies and their customers, especially for minors.
Congressional officials, for example, regularly highlight alleged incidents of chatbots encouraging self-harm or transmitting sexually explicit content to children. As a result, across all levels of government, the use and deployment of AI chatbots are now ripe for increased scrutiny from legislators, regulators and enforcement agencies.
Congressional and Federal Agency Action
Many federal agencies have begun evaluating the implications of AI in their respective domains. The Food and Drug Administration’s (FDA’s) Digital Health Advisory Committee, for example, initiated discussions on the potential regulation of AI therapy chatbots, reflecting a broader trend toward sector-specific oversight.
Congress has been responding to AI chatbot concerns with high-profile hearings and legislative proposals. Momentum toward further action built throughout the second half of 2025:
- In September 2025, the Senate Judiciary Subcommittee on Crime and Counterterrorism held its “Examining the Harm of AI Chatbots” hearing, featuring testimony from, among others, parents of affected children. Following the hearing, the subcommittee targeted major AI companies with requests for information regarding their chatbot policies and practices.
- In October 2025, the Senate Committee on Health, Education, Labor and Pensions heard testimony about the integration of AI chatbots in health care. The testimony highlighted the need for rigorous vetting of these technologies in the health care space and discussed the consequences of improper implementation.
- On November 18, 2025, the House Committee on Energy and Commerce Subcommittee on Oversight and Investigations held a hearing addressing “Innovation With Integrity: Examining the Risks and Benefits of AI Chatbots.” Chairman Brett Guthrie, R-Ky., expressed an increasingly common refrain from lawmakers that “additional oversight is needed to better understand risks to users when interacting with these technologies.” Lawmakers raised concerns about documented cases in which vulnerable users, including minors, alleged severe harm, misinformation and emotional manipulation resulting from chatbot interactions. Some witnesses testified that current chatbot designs often prioritize engagement over safety, lack confidentiality protections and can inadvertently increase the risk of self-harm.
- On December 9, 2025, the Senate Judiciary Committee held a hearing titled “Protecting Our Children Online Against the Evolving Offender.” During the hearing, Sen. Josh Hawley, R-Mo., emphasized the importance of passing legislation to prevent AI companies from targeting minors with chatbots that may provide inappropriate content.
Other congressional committees, including those focused on science, technology, commerce and financial services, have also held hearings on AI topics such as exploitation and communications, which may portend additional attention to chatbots:
- In April 2025, the House Committee on Science, Space and Technology Subcommittee on Research and Technology held a hearing on DeepSeek, a Chinese AI startup.
- In June 2025, the House Committee on Energy and Commerce Subcommittee on Communications and Technology conducted a hearing on “AI in the Everyday: Current Applications and Future Frontiers in Communications and Technology.”
- In July 2025, the House Judiciary Subcommittee on Crime and Federal Government Surveillance took evidence about the growing threat of AI-enabled crime.
- In September 2025, the House Financial Services Subcommittee on Digital Assets, Financial Technology and Artificial Intelligence hosted a hearing on the use of AI in the U.S. financial system.
On the legislative front, the GUARD Act was introduced in October 2025. It seeks to ban AI companions for minors and impose civil and criminal liability on companies that enable harmful chatbot interactions with children. The measure has bipartisan support and would create penalties of up to $100,000 for violations.
In other AI efforts, the Algorithmic Accountability Act of 2025 was introduced in the Senate in June 2025. The bill seeks to provide greater transparency and accountability in companies’ use of AI by requiring the Federal Trade Commission (FTC) to mandate impact assessments for algorithms used in consequential decisions in areas such as housing, employment and education.
Because Congress’ focus on AI has often centered on the safety of minors, and child safety is expressly exempted from the president’s executive order, businesses should not anticipate reduced regulatory activity in this area.
State Action
While legislators are driving scrutiny of AI chatbots at the federal level, state attorneys general (AGs) are emerging as pivotal actors in the AI regulatory landscape, launching chatbot investigations and inquiries aimed at leading AI companies, often driven by concerns about the safety of minors:
- In August 2025, the Texas attorney general opened an investigation into deceptive mental health-related chatbot exchanges targeting children.
- The Missouri attorney general has initiated inquiries into potential political bias and commercial violations by AI chatbots.
- More broadly, in December 2025, a bipartisan coalition of 42 state AGs wrote to major AI companies outlining safeguards the companies should implement and stressing the potential risks that AI chatbots pose to children. The letter states: “Our support for innovation and America’s leadership in A.I. does not extend to using our residents, especially children, as guinea pigs while A.I. companies experiment with new applications.” This letter follows an August 2025 letter from a bipartisan coalition of 44 state AGs that included a warning for AI companies: “We wish you success in the race for AI dominance. But if you knowingly harm kids, you will answer for it.”
State legislatures have likewise been enacting new laws to address AI chatbot risks. In October 2025, California passed a bill requiring AI chatbots to disclose their artificial nature and chatbot developers to implement safeguards against harmful content and submit annual reports. (See our October 2, 2025, client alert “Landmark California AI Safety Legislation May Serve as a Model for Other States in the Absence of Federal Standards.”)
New York recently enacted similar requirements for in-state AI companies, specifically targeting AI chatbots and companion tools. Maine also implemented similar disclosure requirements earlier in 2025.
In total, at least six states have passed laws targeting AI chatbot risks, with penalties in some states of up to $15,000 per violation. Effective January 2026, a similar Texas law will impose fines of up to $200,000.
For more on state enforcement actions, see “Corporate Compliance Remains Critical as State Enforcement Initiatives Gain Momentum Following Governors’ Races.”
Final Thoughts
The trajectory of AI enforcement and regulation in 2025 underscores the need for industry stakeholders to be agile, informed and engaged. While the administration continues to champion AI innovation with minimal regulatory intervention and has taken a material step forward with the December 2025 executive order, Congress and state authorities are nevertheless intensifying their focus on AI enforcement, particularly in areas of child safety and content moderation.
As companies’ use of AI chatbots to interface with customers becomes more commonplace, we anticipate that oversight and enforcement actions will increase in frequency and breadth. Companies operating in the AI sector, and those employing AI interface tools for customer engagement, should closely monitor legislative and enforcement developments at both the federal and state levels and be prepared to adjust their compliance and business strategies accordingly.
Read more about AI in the full 2026 Insights publication.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.