As generative AI transforms the creative industries, legal battles over intellectual property are heating up. In this episode of "SkadBytes," host Deborah Kirk is joined by European counsel Jonathan Stephenson and associate Alistair Ho to unpack the landmark Getty Images v. Stability AI case. They explore the core legal issues — copyright infringement, secondary liability and trademark misuse — at stake in the U.K.’s first major AI/IP lawsuit. The team also discusses the global wave of litigation against AI developers, the challenges facing creators and how this case could shape the future of copyright law, licensing and AI innovation.
Episode Summary
The Getty Images v. Stability AI case represents a pivotal moment at the intersection of intellectual property and generative AI. To unpack this landmark litigation, partner Deborah Kirk talks with Alistair Ho and Jonathan Stephenson, her London-based colleagues on Skadden's IP and Technology team. The trio lays out Getty's allegations against Stability AI, including claims that Stability AI used millions of Getty's copyrighted images without authorization to train its AI, and allowed users to create similar images — sometimes even replicating Getty's watermark. They note that Getty has shifted its legal strategy from broad copyright claims to more targeted trademark and secondary infringement arguments.
Beyond the immediate legal dispute, the case brings into focus the evolving relationship between creators and tech companies. It also raises important questions about the adequacy of existing institutional frameworks to address the challenges posed by AI and intellectual property.
The discussion highlights the broader implications: a win for Stability AI could make it easier for companies to use publicly available data for AI training, while a win for Getty could set stricter standards for licensing and due diligence in the industry.
Voiceover (00:01):
Welcome to SkadBytes, a podcast from Skadden, exploring the latest developments shaping today's rapidly evolving tech landscape. Join host Deborah Kirk and colleagues as they deliver concise insights on the pressing regulatory issues that matter to tech businesses, investors, and industry leaders worldwide.
Deborah Kirk (00:24):
Hello and welcome back to SkadBytes, the podcast from Skadden's IP and technology team here in London. I'm Deborah Kirk and I'm joined by associate Alistair Ho and newly promoted European counsel Jonathan Stephenson. We're here to unpack one of the most high-profile legal developments in the AI and IP space this year: the Getty Images and Stability AI case. This case really does have the potential to shape the direction, at least of English law, on the subject of how intellectual property interacts with generative AI. And today we're going to break it down for you in terms of the legal issues at stake, where the case stands, and what might come next. So, Jonathan, it's a moment, right?
Jonathan Stephenson (01:06):
Definitely. It is a huge moment, and this case really brings into focus the future relationship between creators and tech companies.
Alistair Ho (01:14):
It's just one piece of the global AI puzzle, right? So it's happening as part of a worldwide uptick in AI litigation. The New York Times, Disney, Universal, they're all pursuing AI developers over how these models are built, including what they're trained on, the inputs, and the outputs that they produce.
Deborah Kirk (01:29):
Yeah. So there's a lot going on here globally, but the Getty case is certainly the first of its kind in the UK, and it gives us a window, or at least it will do, we hope, as to how the courts might approach these issues before, potentially, Parliament steps in. So let's get to the legal fundamentals. Getty's core allegation is that Stability AI used millions of Getty's copyrighted images without permission to train Stability AI's model. Stability AI then allegedly imported that trained model into the UK, which is a fairly critical point in the case, and then enabled users to generate lookalike content, in some cases, as is at least alleged, even reproducing Getty's watermark. So, Jonathan, what were the actual bases of those claims?
Jonathan Stephenson (02:13):
Yeah, taking a walk down memory lane, they originally brought a very broad set of claims: copyright infringement, database right infringement, trademark misuse, passing off and secondary infringement.
Deborah Kirk (02:25):
Yeah, that's how they started: a multipronged attack. But they've since changed tack, right?
Jonathan Stephenson (02:30):
Exactly that. And in June, Getty dropped its primary copyright and database right claims. That shift was strategic. Those claims faced some very real evidential hurdles, especially when it came to proving how and where the training took place. Stability says it used resources and personnel outside the UK, and without clear evidence to the contrary, the claims would've struggled to meet the jurisdictional threshold under English copyright law.
Deborah Kirk (02:59):
And let's not forget that Getty also acknowledged, in the context of dropping those claims, that Stability has since introduced mitigation measures, like blocking certain prompts, such that certain of the injunctive relief Getty was at least originally seeking has arguably already been achieved.
Alistair Ho (03:12):
I think then, just to summarize, we're saying that they're still pursuing trademark infringement, still pursuing passing off, still pursuing secondary copyright infringement. So whilst it's narrower, it's still quite serious.
Deborah Kirk (03:22):
And secondary infringement being where someone enables or facilitates infringement by others even if they're not the direct infringer themselves.
Jonathan Stephenson (03:30):
Right. And that's typically where the defendant knowingly provides tools, instructions, or services that result in others infringing copyright. What helps when we have these discussions is to think of platform liability, or software tools knowingly used to reproduce protected content. And that's where AI tools, especially text-to-image or text-to-text, come under real pressure.
Alistair Ho (03:52):
Exactly. And in terms of lawsuits, that's something we're seeing come up quite often now, and again, it connects to this wider AI puzzle. So it connects to what we're seeing in the US. For example, in the context of newspapers, they're suing GPT developers for reproducing articles, especially in relation to content that's actually behind paywalls, right?
Deborah Kirk (04:09):
And those claims of copyright infringement are centered around what the claims express as memorization by the model. So it's not just an academic discussion about technical copying in the way that, I guess, we originally think about copyright. We should remember that there's a potential business impact here too. And the newspapers are also pointing to breaches of their websites' terms, so a contractual angle, and undermining of their subscription model.
Jonathan Stephenson (04:38):
Exactly. We should mention that Getty is also pursuing a separate lawsuit against Stability in the US, filed in federal court. And though it's proceeding on slightly different legal grounds, it's really worth remembering in the wider context. Meanwhile, Disney and Universal have initiated proceedings against Midjourney, the AI image generator, raising concerns, in short, that Midjourney was trained on copyrighted material and that users can now generate near-identical depictions of characters that we know and love, like Shrek and even Homer Simpson.
Deborah Kirk (05:10):
Yeah. And I guess this all highlights how creators across media types or forms are pushing back. So it's not just photographers and journalists; it's studios, musicians, publishers who are all, as they see it at least, fighting for their right to protect their creative output in the AI world and era.
Alistair Ho (05:30):
And it's not easy for creatives right now in the AI world, right? Big tech aren't making it very easy for them. They're being very creative in how they're applying existing law to AI issues. So in the UK, for example, there's a pastiche exception under the Copyright, Designs and Patents Act 1988. That's essentially an exception that allows companies and individuals to use copyright works for parody, caricature, even pastiche. So looking at phrasing it as stylized imitation rather than just direct copying.
Jonathan Stephenson (05:56):
Precisely, Ali. That's really interesting because it's barely been tested in court, and definitely never within the AI context. So whether AI-generated content can fall within that exception or not is really an open question. And the case that we're discussing today might touch on it, or at least really set the groundwork for how future claims might be assessed. But for now, it's just one of those many grey areas within this topic.
Deborah Kirk (06:20):
And I guess what we're seeing is companies on either side of this debate trying to use the current laws to fit into the AI picture. And that is because there is an absence of clear, or at least of targeted, AI-specific legislation in the UK. So the government hasn't passed specific laws on how copyright law applies to AI training or outputs.
Alistair Ho (06:45):
Again, that's something creators find difficult to deal with, something they aren't happy about. They're pushing for greater transparency, particularly in the training process: what sort of content is used in these models? It came to the forefront, for example, in the recent parliamentary debates around text and data mining exceptions.
Deborah Kirk (06:59):
Yeah, we've seen major artists like Elton John, Dua Lipa and Paul McCartney strongly objecting to that proposal and backing a push to require AI companies to disclose what copyrighted materials have been used to train their systems. Elton John in particular was super vocal about the use of copyrighted material in generative AI. He said a machine is incapable of writing anything with any soul in it; there's, I guess, a creative debate as to whether that's true or not. And he also said in an interview with Laura Kuenssberg that it was George Orwell times a thousand. So there's a discussion too about what the remuneration format should be in the event that copyright wins out, and whether these kinds of arrangements should be subject to licenses and royalties. It'd be interesting to see the way the US would see that. Would it be seen as tariffs by the US government, in the same way other tech reg penalties and fines are? But anyway, back on track as to the UK position. So, Ali, where is UK policy at now?
Alistair Ho (07:58):
So the next step is really this government-promised report on AI and copyright policy in the UK. That report is expected around December or March. So until that lands, until we really see what's in it, we're kind of in a holding pattern. For now, it's up to cases like this one to decide how the law applies.
Deborah Kirk (08:14):
And all of this, I guess, raises real questions around institutional frameworks. Should the judges be deciding how AI interacts with IP, or should it really be Parliament leading the way? And I guess with that backdrop, this case doesn't just test the law. It asks how our legal systems should be dealing with these frontier technology questions.
Jonathan Stephenson (08:36):
That's exactly what makes Getty and Stability such a big deal. If Stability wins, it might lower the bar for relying on publicly available data in training. But on the other hand, if Getty succeeds, even partially, it could reshape how developers approach licensing and rights clearance, and really raise the bar for due diligence.
Deborah Kirk (08:55):
And it could affect everything from business models to how and where AI models are trained, especially if these jurisdictional preferences, which were part of this case, become part of the litigation strategy. We're already seeing a strategic shift towards so-called jurisdictional engineering: developers locating model training offshore, or separating the commercial deployment from where the training actually happens, to avoid, arguably, local IP and other legislation. And that tension between legal reach and technical structure is an interesting one, and it's likely only going to grow, I think, as these questions are worked through.
Alistair Ho (09:34):
Yeah, I think that tension is super important to remember, because much as we find the technical details fascinating, and we could talk about them for hours, much longer than this podcast, it's not just a copyright issue. Like every discussion around AI, it's potentially an economic and policy issue as well.
Deborah Kirk (09:47):
Absolutely. That is dead right, which is why we'll be following it closely here at Skadden. The questions raised in Getty's case won't be answered overnight, that's for sure, but they have opened the door to a much wider rethink of how we handle copyright and other IP laws in the AI era. And that links beautifully into a plug for our next episode, where we'll do a deep dive into the UK's upcoming copyright reform proposals and what they could mean for developers, publishers, and policymakers. Thank you for listening to SkadBytes. We'll see you next time.
Voiceover (10:21):
Thank you for joining us for today's episode of SkadBytes. If you like what you're hearing, be sure to subscribe in your favorite podcast app so you don't miss any future conversations. Additional information about Skadden can be found at skadden.com.