Key Points
- Divergence across major jurisdictions on key issues — such as the protectability of AI tools and outputs, the use of copyrighted data for AI training, and the requirements for inventorship and authorship — requires companies to adopt different IP strategies in different countries.
- In the U.K. and EU, the scope for patent protection of AI tools has broadened, which may be prompting some companies to shift from relying on trade secret and copyright protections toward implementing formal patent strategies.
- The use of copyrighted works as AI training data is a major area of contention.
- The proliferation of AI-generated voices and images has intensified concerns over digital identity rights. Celebrities and notable figures are exploring a patchwork of different protections.
__________
The relationship between intellectual property (IP) and artificial intelligence (AI) is an unsettled one. Questions remain as to whether AI-generated output by “machines” can obtain IP protection in the absence of a human author or inventor, and courts internationally have yet to resolve the issue of whether IP-protected content is fair game for training AI large language models (LLMs).
In some areas of IP law involving AI, there is a degree of jurisdictional convergence. The U.K. recently aligned its approach to the patentability of computer programs with the European Patent Office’s (EPO’s), for example. But in other areas, jurisdictions are pulling in markedly different directions.
While the European Union’s 2019 Digital Single Market Directive created an exception to copyright protection, allowing AI models to train on otherwise protected works, the Australian government recently rejected a proposal for a comparable measure. Meanwhile, in the U.S., AI developers are defending a wave of class actions over their use of IP-protected material for training LLMs.
The question proved sufficiently contentious in the U.K. to prompt a public consultation on copyright and AI, which resulted in a much-anticipated report released on March 18, 2026, the “Report on Copyright and Artificial Intelligence” (U.K. Copyright & AI Report).
The U.K. government did not introduce a text and data mining (TDM) exception to copyright protection, but it proposed abolishing the copyright protection for computer-generated works without a human author, a protection that has been in force since 1988.
In this article, we examine the current state of play across the principal areas of IP law most affected by AI and identify the questions that remain outstanding.
Patentability, Trade Secrets and Inventorship
Patentability and Trade Secrets
In the U.S., in principle, computer software and AI-implemented inventions may be patentable. However, under two U.S. Supreme Court decisions, Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014), and Mayo Collaborative Services v. Prometheus Laboratories, Inc., 566 U.S. 66 (2012), eligibility is subject to a two-part test.
First, a court asks whether the claims are directed to a patent-ineligible concept, such as an abstract idea, law of nature or natural phenomenon. If so, the court then asks whether the claim elements, considered individually and as an ordered combination, transform the nature of the claim into a patent-eligible application.
In practice, applicants must further account for current U.S. Patent and Trademark Office (USPTO) AI-specific subject matter eligibility guidance.
Under English law, a “program for a computer as such” is excluded from patent protection (Section 1(2)(c), Patents Act 1977). Until this year, the four-part so-called Aerotel test governed whether the exclusion applied, but the U.K. Supreme Court’s (UKSC’s) February 2026 decision in Emotional Perception AI Ltd v Comptroller General of Patents, Designs and Trade Marks, [2026] UKSC 3, abandoned the test and opened the door more widely to patent protection in the U.K. for AI tools characterized as computer programs.
In the case, which concerned the patentability of an artificial neural network (ANN), the UKSC instead adopted the EPO’s position that, if the claims of the patent incorporated any hardware (including, in this case, the physical form of the ANN and the computer running the software), they were not a “program for a computer as such” and therefore not automatically beyond the potential scope of patent protection.
As the pathway for securing patents for AI tools becomes clearer in the U.K., we may see a strategic shift in IP protection across the U.K., EU and beyond, with companies increasingly adopting formalized patent protection and moving away from reliance on trade secrets and copyright, which can be difficult to establish, maintain and enforce.
Inventorship
A further and arguably more consequential question is whether an AI tool itself can be an inventor of its outputs.
In the U.S., only natural persons may be named as inventors. For AI-assisted inventions, the governing question is whether the human inventor conceived the claimed invention by forming a definite and permanent idea of the complete and operative invention.
The USPTO’s revised November 2025 guidance rescinded its earlier AI-specific inventorship framework that applied a “significant contribution” test and clarified that while that test remains relevant in joint-inventorship analysis among multiple human contributors, AI systems themselves are treated as tools, not inventors.
By contrast, in the EU and U.K., a formal test has not yet been established, but the direction of the law was indicated when an American inventor, Stephen Thaler, attempted to press the issue in patent applications he filed with the USPTO, the EPO and the U.K. Intellectual Property Office on behalf of an AI machine Thaler called DABUS, which he asserted was the sole inventor of two devices.
All three jurisdictions denied his applications. The EU and U.K. outcomes indicate that in those jurisdictions the question of whether a human can be the inventor of an AI-assisted invention will depend on the level and nature of input of that individual. It is not enough for a human simply to own, oversee or direct the use of an AI system.
Until there is further case law, records of human creative and technical contribution should be carefully documented throughout the development process.
Copyright in Training Data and Outputs
Training Data
The central question of whether the use of copyright-protected works to train AI models constitutes infringement and, if so, what exceptions apply is being answered differently across jurisdictions.
In the U.S., while there is no explicit exemption for TDM, the fair use defense to copyright infringement provides a flexible “transformative use” justification for AI training.
But recent decisions demonstrate that this defense may be significantly limited where models ingest unauthorized “shadow libraries” of pirated content. Some 17 class actions have been filed in the U.S. on behalf of authors and other copyright holders against AI developers.
- In one of those, Bartz v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal.), the court found that fair use applied to legally acquired training data but did not extend to works downloaded from pirated databases, and the defendant has agreed to pay $1.5 billion to settle claims by copyright holders whose works it used to train its LLMs.
- By contrast, in Kadrey v. Meta Platforms, Inc., No. 3:23-cv-03417 (N.D. Cal.), the court granted Meta summary judgment on fair use grounds, finding the plaintiffs had failed to demonstrate market harm.
- The ongoing case The New York Times Company v. Microsoft Corporation, et al., No. 1:23-cv-11195 (S.D.N.Y.), is testing fair use in the distinct context of whether AI-generated outputs can substitute for the original copyrighted works.
In the U.K., the debate had been focused on whether to implement a TDM exception to copyright, with or without the opt-out available under EU law, under which rights holders who expressly reserve their rights continue to benefit from copyright protection of their works.
However, the U.K. Copyright & AI Report discussed above has seen the U.K. government row back on its initial preference to mirror the EU’s approach. The report recommended gathering “evidence on how copyright laws are impacting the development and deployment of AI across the economy” before any changes are made to U.K. law.
Outputs
In Europe, copyright subsists only in “original” works, for which a “natural person” must be the author. Purely AI-generated outputs, devoid of sufficient human creative intervention, do not receive copyright protection.
Meanwhile, under English law, there is also a specific category for works with no human author, where the “author” is deemed to be the person who made the “arrangements necessary” for the creation. However, the scope of this provision in the age of generative AI remains a point of significant debate, and its application to AI-generated content is likely to be highly fact-specific.
In its U.K. Copyright & AI Report, the U.K. government proposed abolishing this specific type of protection on the basis that it departs from the core rationale for copyright: “to encourage and reward human creativity.”
In the United States, the U.S. Copyright Office has consistently maintained that copyright protection requires human authorship. Works generated entirely by AI without meaningful human creative control are not registrable. Where a human author has exercised sufficient creative choices in selecting, arranging or modifying AI-generated material, registration may be possible for those human-authored elements, but the AI-generated portions themselves remain unprotectable.
Digital Identity
The rise of generative AI has significantly intensified concerns about the unauthorized replication of personal identity — voice, likeness and other indicia of persona — for commercial purposes.
In the U.S., rights against unauthorized digital replicas and other identity-based misappropriation are substantial but fragmented. There is no federal statute directed to digital replicas or other uses of name, image, likeness or voice by generative AI models.
Protection comes primarily from state statutory and common law rights of publicity and privacy, along with trademark, unfair competition and other claims.
Such protection is currently lacking in the U.K. and EU. Although a March 6, 2026, House of Lords report calls on the U.K. government to introduce “protections against unauthorized digital replicas and ‘in the style of’ uses,” there is no dedicated “image” or “publicity” right in the U.K. and EU. Instead, rights holders must rely on a combination of other rights (such as trademarks, passing off and privacy) to prevent unauthorized use of their images.
It is worth noting, however, that certain EU member states, such as Germany, afford broader personality rights protections.
In response to the perceived “threat” of generative AI, certain celebrities and athletes have registered trademarks protecting their digital identities and image. But trademarks offer no direct protection against noncommercial misuse.
In the absence of a dedicated publicity right, trademark registration represents only a partial, imperfect substitute. Controlling the exploitation of one’s identity in those jurisdictions may therefore require a combination of trademark registration, copyright and moral rights, common law rights such as passing off, carefully drafted contractual controls and available technical measures.
Coping With the Uncertainty and Divergence in Protections
The debate over TDM exceptions illustrates how there is no global consensus on the right balance between protection of IP rights and AI development. For businesses operating across multiple jurisdictions, the practical implication of the continued divergence on many important issues is clear: A one-size-fits-all IP strategy for AI assets is not viable.
Several immediate priorities follow:
- Companies developing or deploying AI systems should consider auditing their existing IP portfolios in light of the expanded patentability landscape, following Emotional Perception AI.
- Rights holders and AI developers alike may want to closely monitor developments following the U.K. Copyright & AI Report, including any forthcoming legislative proposals, together with the outcomes of significant cases in this space across regions.
- Those exposed to digital identity risk should not wait for legislative clarity before taking protective steps. A combination of established legal rights, contractual controls and technical measures can provide meaningful interim protection.
It is also important to remember that AI is itself frequently protectable by IP rights, and that the strength and composition of an AI-related IP portfolio is an increasingly significant competitive differentiator. How those technologies are protected — and how portfolio strategies evolve as the jurisprudential landscape clarifies — will be a defining feature of the AI industry’s commercial development in the years ahead.
Uncertainty, in this context, is not an excuse for inaction. It is an argument for a more sophisticated, jurisdiction-specific and proactive approach to IP strategy.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.