Report From FDLI Annual Meeting: FDA’s Expanding Use of AI – What Regulated Industry Should Know

Skadden Publication / The Nucleus: Life Sciences Enforcement and Regulatory Updates

Rachel Turow

Executive Summary

  • What’s new: FDA is rapidly integrating AI tools — including its Elsa 4.0 platform and the HALO data system — into application review, inspectional planning and FOIA responses, among other things, as detailed by agency leadership at the 2026 FDLI Annual Conference.
  • Why it matters: Companies can anticipate that FDA will apply AI to their submissions, promotional materials and inspection targeting, which raises questions about accuracy, proprietary data protection and inter-sponsor firewalls.
  • What to do next: Regulated companies can run submissions through AI before filing, scrutinize agency communications for AI-generated reasoning, invest in prompt engineering capabilities and build internal AI governance frameworks.

__________ 

The U.S. Food and Drug Administration (FDA) is rapidly incorporating artificial intelligence (AI) into nearly every dimension of its operations: from application review and inspectional planning to enforcement communications and Freedom of Information Act (FOIA) processing. Regulators discussed these developments in detail at the 2026 Food and Drug Law Institute (FDLI) Annual Conference, where senior agency officials described a vision in which AI becomes embedded in the fabric of FDA’s regulatory activities.

FDA’s Current AI Initiatives

FDA reported that the latest iteration of its internal AI tool, Elsa 4.0, is built within a FedRAMP High secure platform environment and does not train on input data or any data submitted by regulated industry. Alongside Elsa 4.0, the agency has begun consolidating more than 40 disparate data sources into a new platform called HALO (Harmonized AI & Lifecycle Operations for Data) and integrating HALO with Elsa so that FDA staff can query data and build workflows without manually uploading documents. FDA’s main goal for AI use is to reduce the time staffers spend on administrative and organizational tasks by surfacing insights and analyses in minutes that previously would have taken weeks or months, or never happened at all.

Speaking at the FDLI conference, Sridar (Sri) Mantha, the acting chief information officer at the Office of Digital Transformation; Tiffany Branch, the director of the Office of Management and Enterprise Services (within the Office of the Commissioner); and Steven Musser, the associate commissioner for Human Food Research in the Human Foods Program, detailed how AI is being deployed across the agency.

Consolidating, Categorizing, Summarizing and Redacting Information

Branch described how FDA is using AI for FOIA processing, including the automated redaction of confidential commercial information. Officials reported that AI-assisted redaction has achieved accuracy rates of 98%. By relieving attorneys of massive document volumes, these tools are expected to enhance accuracy while allowing staff to keep pace with production obligations and court-ordered deadlines. Elsa is also being used in FOIA decision-making; for example, to help determine whether a sponsor has acknowledged the existence of an application or whether a request is from a first party or a third party. Disclosure experts previously performed these time-intensive tasks manually. More broadly, the agency is using Elsa and related tools for information consolidation, literature review, drafting and summarization. Agency officials noted that FDA staff previously spent approximately 50% of their time simply aggregating the information needed to perform review work across guidance documents, regulatory history and submissions; AI is dramatically reducing that burden.
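
To make the redaction concept concrete, the following is a purely illustrative sketch of pattern-based redaction, not FDA’s actual tooling (which is not public). The patterns and example text are hypothetical stand-ins for what a trained model would detect as confidential commercial information withheld under FOIA Exemption 4; "(b)(4)" is the conventional marker for that exemption.

```python
import re

# Hypothetical patterns standing in for a model's detections of
# confidential commercial information (FOIA Exemption 4).
PATTERNS = [
    r"\$[\d,]+(?:\.\d{2})?",       # dollar figures
    r"\b\d+(?:\.\d+)?\s*mg/mL\b",  # formulation concentrations
]

def redact(text: str, marker: str = "(b)(4)") -> str:
    """Replace each matched span with a FOIA exemption marker."""
    for pattern in PATTERNS:
        text = re.sub(pattern, marker, text)
    return text

redacted = redact("The batch cost $1,250,000 at 5 mg/mL.")
```

Even at a reported 98% accuracy rate, a sketch like this underscores why a human reviewer would still verify each proposed redaction before release.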

Mantha described the agency’s use of Retrieval Augmented Generation (RAG) and Model Context Protocol (MCP) to synthesize information from specific datasets with traceability, without fine-tuning models on proprietary data. The use of MCP is notable because it connects AI models to structured data sources while maintaining an audit trail, thereby reducing the risk that AI outputs will be untraceable or unverifiable. As Mantha framed it, MCP allows FDA to retrieve and amalgamate information from specific contexts with full traceability, without embedding proprietary data into the model itself. The agency has also piloted technology-assisted review for litigation document review, which Branch noted is a first for a government agency. The Office of Medical Policy also reported that its staff are using AI to organize and categorize public comments received on guidance documents.
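
The core idea of retrieval with traceability can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical corpus with made-up document IDs and naive word-overlap scoring; a production system would use embeddings, an LLM and MCP connectors, but the point it demonstrates is the same one Mantha emphasized: every answer carries the IDs of the documents it drew from, so outputs remain auditable.

```python
# Hypothetical corpus: document IDs are invented for illustration.
CORPUS = {
    "guid-2021-04": "Guidance on stability testing for biologics.",
    "subm-NDA-123": "Submission describing stability data for product X.",
    "fr-notice-88": "Federal Register notice on labeling requirements.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap, keeping their IDs."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str) -> dict:
    """Assemble context for a model while preserving an audit trail."""
    hits = retrieve(query)
    return {
        "context": " ".join(text for _, text in hits),
        "sources": [doc_id for doc_id, _ in hits],  # traceability
    }

result = answer_with_citations("stability testing guidance")
```

The design choice worth noting is that the source IDs travel with the generated context rather than being discarded, which is what makes an AI-assisted conclusion verifiable after the fact.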

Recognizing Patterns for Inspections

In the inspections context, Musser emphasized that an inspection driven entirely by AI remains theoretical at this stage. However, FDA is employing AI for pattern recognition, for example, in a shrimp pilot program designed to detect violative patterns indicative of adulteration and fraud. He also outlined use cases for pre- and post-inspection data analysis to draw conclusions and find patterns that human reviewers may not otherwise detect. The agency’s new one-day inspectional assessment pilot, announced on May 6, 2026, uses AI to identify low-risk facilities suitable for abbreviated screening inspections.

Toxicological Decision-Making

In the scientific review context, Musser presented how FDA is using AI in toxicological decision-making, including to help assess whether substances are more or less toxic based on published literature. The agency has also implemented an internal governance framework that includes tenant-and-persona tools to control data access across centers (such as CDER and CBER) and REN Bench, an automated framework designed to validate large language model outputs against FDA’s own datasets.

Limitations and Challenges

One critical limitation of current AI tools, noted by Musser, deserves emphasis: AI does not interpret the quality or nuance of scientific evidence. An AI tool may report, for example, that 100 published papers conclude that a substance is dangerous and three papers conclude that it is not, without any assessment of the relative methodological rigor, study design or evidentiary weight of those studies. Moreover, AI outputs can vary from one query to the next, raising questions about the reproducibility and reliability of AI-assisted analyses. These limitations are particularly consequential in the regulatory context, where the quality of evidence, not merely its quantity, is often dispositive.

Branch recognized that AI adoption involves significant change-management challenges. Staff skepticism is natural, and concerns about job displacement are real, but the workforce’s subject matter expertise lends itself to quick adoption of AI tools.

FDA’s Use of AI in Enforcement Communications

While FDA officials have stated publicly that the agency is not using AI for enforcement or external communications, officials at the FDLI conference acknowledged that they are using AI in connection with untitled letters and warning letters relating to promotional materials. This is helpful for industry in planning responses to such letters and in anticipating what the agency’s AI-based tools may find in advertisements submitted on Form 2253. When companies receive enforcement communications from FDA, particularly those involving promotional claims, it is now even more important to scrutinize those communications carefully for accuracy, consistency and any indicia that AI-generated analysis may have contributed to the agency’s conclusions without human review. Branch confirmed that AI introduces its own forms of bias in the context of enforcement communications, just as a human reviewer brings inherent bias. While having a human in the lead can mitigate some of that bias, a degree of AI-driven bias will be an ongoing challenge that industry will want to prepare to identify and address.

Proprietary Data and Firewall Concerns

One of the most significant legal concerns arising from FDA’s AI deployment involves the protection of proprietary data and the maintenance of inter-sponsor firewalls. FDA is not permitted to look at one sponsor’s application to inform its review of another sponsor’s submission. If AI models are trained on or used to query across all submissions, this firewall could erode in ways that are difficult to detect. Sponsors could inadvertently benefit from other companies’ trade secrets or confidential regulatory strategies, and affected companies might never know.

FDA officials at the FDLI conference indicated that the agency is not fine-tuning models on proprietary data, that Elsa does not train on data submitted by regulated industry, and that the use of MCP provides traceability guardrails. Nevertheless, the consolidation of more than 40 data sources into HALO and the integration of that platform with Elsa raise structural questions about the adequacy of existing safeguards. There is also the possibility that human users may inadvertently combine data sources in ways that erode separations that previously existed merely because of the clunky nature of older systems.

Importantly, external data sources such as electronic health records are not currently part of HALO or FDA’s AI data platform. FDA is presently applying AI only to data that has already been submitted to the agency, such as adverse event reports; in the context of Sentinel studies, the agency is just beginning to explore the use of external data. However, as FDA moves in this direction, the scope of data accessible to AI tools will expand, and proprietary data concerns will become more acute.

Practical Implications

Given these developments, industry engagement with FDA may change in the following ways:

  • Running submissions through AI before filing. Manufacturers are now building their own AI-based tools to analyze submissions before filing. As one general counsel noted at another FDLI session, industry practitioners are already using AI for promotional review. Agency officials were direct: Companies should expect that AI will be applied to their submissions and should prepare accordingly. Identifying the issues that an AI tool is likely to flag can allow companies to address them proactively.
  • Scrutinizing AI-generated communications from FDA. Reviewing all agency communications for accuracy, internal consistency and indicia that AI may have contributed to the analysis will be increasingly important. Where a communication appears to reflect AI-generated reasoning, options exist to challenge conclusions that are unsupported by high-quality scientific evidence.
  • Investing in prompt engineering capabilities. Prompt engineering (the skill of crafting precise queries to elicit accurate and useful AI outputs) is an emerging competency that will be increasingly relevant in the legal and regulatory context. FDA officials explained that the process of validating prompts is itself part of quality assurance, and that both the agency and industry are still in the early stages of understanding what constitutes a good prompt.
  • Monitoring international developments. FDA is not the only regulatory authority investing in AI. The European Medicines Agency has developed its own AI initiative, known as Regulus, and companies operating in multiple jurisdictions are tracking how different regulators are deploying AI and what that means for global regulatory strategy.
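
The prompt-validation point in the third bullet can be sketched concretely. The following is an illustrative harness only: the model call is a stub (a real system would call an LLM API), and the prompt template, gold cases and flagging rule are all hypothetical. It shows the quality-assurance step FDA officials described: checking a prompt against cases with known answers before relying on it.

```python
def stub_model(prompt: str) -> str:
    """Stand-in for an LLM: flags any text containing 'cure'."""
    return "FLAG" if "cure" in prompt.lower() else "OK"

# Hypothetical prompt and gold-standard cases for illustration.
PROMPT_TEMPLATE = "Does this ad make an unapproved claim? Ad: {ad}"

GOLD_CASES = [
    ("Product X cures arthritis overnight.", "FLAG"),
    ("Product X is indicated for mild arthritis pain.", "OK"),
]

def validate_prompt(template: str) -> float:
    """Return the fraction of gold cases the prompt handles correctly."""
    correct = sum(
        stub_model(template.format(ad=ad)) == expected
        for ad, expected in GOLD_CASES
    )
    return correct / len(GOLD_CASES)

accuracy = validate_prompt(PROMPT_TEMPLATE)
```

A harness along these lines also addresses the reproducibility concern noted above: rerunning the same gold cases over time surfaces drift in how a prompt performs.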

Conclusion

FDA’s rapid implementation of AI represents a fundamental shift in how the agency operates. For regulated industry, the imperative is clear: Companies will want to understand how FDA is using AI, adapt their own processes accordingly and remain vigilant about the risks that AI-driven regulatory action presents. Running submissions through AI before filing, scrutinizing agency communications, investing in data quality and building internal AI governance frameworks are actions that will best position industry to navigate the regulatory landscape ahead.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
