The focus in AI regulation has shifted from writing the rules to applying them, and the EU and UK are taking very different approaches. In this episode of “SkadBytes,” Deborah Kirk, Jonathan Stephenson and Alistair Ho discuss where the EU’s Digital Omnibus stands now that trilogue negotiations between the European Parliament, Council of the European Union and European Commission are underway. They also explore the UK’s overhauled approach to automated decision-making under the DUAA, new deepfake prohibitions on both sides of the English Channel, and the UK’s retreat from a broad text and data mining exception for AI. Drawing on Skadden’s recent AI forum, they discuss why governance — not just compliance — has become the defining challenge for organizations deploying AI at scale.
Episode Summary
Skadden attorneys Jonathan Stephenson and Alistair Ho join host Deborah Kirk to unpack the latest developments in EU and UK AI regulation and what they mean in practice. Jonathan explains that the EU’s AI Omnibus is heading toward finalization before its August 2026 deadline, with key changes to high-risk AI timelines, SME exemptions and AI literacy obligations. The panel then turns to the UK’s Data Use and Access Act, with Jonathan detailing how it overhauls automated decision-making, Alistair discussing the introduction of recognized legitimate interests and new criminal offenses targeting non-consensual deepfakes, and all three examining the government’s decision to step back from a broad text and data mining exception. Drawing on insights from Skadden’s recent AI forum, the discussion underscores one consistent message: Businesses operating across borders should plan for regulatory fragmentation, not convergence.
Voiceover (00:00):
Welcome to “SkadBytes,” a podcast from Skadden, exploring the latest developments shaping today’s rapidly evolving tech landscape. Join host Deborah Kirk and colleagues as they deliver concise insights on the pressing regulatory issues that matter to tech businesses, investors and industry leaders worldwide.
Deborah Kirk (00:23):
Hello there and welcome back to “SkadBytes.” I’m Debs Kirk, and I’m joined today by Jonathan Stephenson and Alistair Ho from our IP and technology team here at Skadden in London. Today we are having a much-needed update on AI regulation in the EU and the UK and what it means in practice. And if we needed any reminder, 2025 was a significant year for AI regulation. October 2025 saw the launch of the EU’s Apply AI policy, which shifted the focus from the legislative framework of the AI Act, which had entered into force just over a year earlier, to the practicalities of AI implementation. And in November 2025, the EU introduced the Digital Omnibus proposal, signaling a shift toward a more coherent and manageable regulatory framework. Meanwhile, over here in the UK, 2025 also witnessed a cascade of developments at the intersection of intellectual property, artificial intelligence and digital infrastructure, with a growing recognition of the need to align IP frameworks with technological advancements.
(01:30):
And now in April 2026, what we are seeing is the next phase, not a new wave of legislation but a recalibration of what is already in place, driven by the realities of implementation, political pressure to simplify, and the pace at which AI is actually being developed and deployed. Timelines are being pushed out and certain obligations softened, but the core framework remains intact and, in some areas, is being reinforced. The focus is shifting from writing the rules to making them work in practice. And for most organizations, these changes are now landing in real decisions: how products are designed, how contracts are negotiated and how risk is managed day to day. So today we are going to unpack a few key EU and UK examples of this and discuss where that leaves things. Ali, over to you.
Alistair Ho (02:21):
Yep, exactly. So we’re going to start in Brussels with a status update on the EU’s Digital Omnibus, the reform package, which is now in trilogue between the European Parliament, the Council and the Commission. We’ll then cross the channel and look at the UK’s Data Use and Access Act 2025, the DUAA, specifically in the context of its changes affecting the development and deployment of AI tools. And those are the provisions on automated decision-making, recognized legitimate interests, the government’s report on AI and copyright, and the new criminal offenses targeting non-consensual deepfakes. We’ll focus particularly on automated decision-making, or ADM. That’s filtered down into draft guidance that the UK ICO has just opened for consultation, and that tells us a bit about where the regulator’s heading. So we’ll look in more detail at that. We’re then going to close with practical themes that came out of our recent AI forum, titled “AI on the Horizon: Aligning Regulation and Innovation,” and our key takeaways from some great discussions at that forum. Jonathan, why don’t you set the scene for us on the Digital Omnibus to start off with?
Jonathan Stephenson (03:17):
Yeah. So where do we start? As a refresher for you, listeners, the European Commission published the omnibus package in late 2025, a bundle of reforms streamlining EU digital rules, and it has two parts: first, an AI Omnibus amending the EU AI Act, and second, a Data Omnibus amending the GDPR and the Data Act. We are, of course, focusing in today’s episode on the AI Omnibus. Now, the commission’s original package was proposed in November 2025. The council adopted its position in mid-March 2026, and parliament adopted its position later that same month. A lot of jargon and terms, which we’ll come on to explain, but in a nutshell, the trilogue process is now underway, with a timetable for finalization of this process before the 2nd of August 2026.
Alistair Ho (04:07):
And just to level set for listeners not familiar with the EU legislative process: that’s the usual process in action. The commission proposes legislation, parliament and council each adopt their own positions, and then the trilogue negotiations are the final phase where the institutions work towards a final agreed text.
Deborah Kirk (04:21):
So then picking up the AI thread, the European Commission’s AI Omnibus package comprised the following key reforms to the EU AI Act. First, it delayed certain high-risk AI obligations, which were due to come into force on the 2nd of August of this year. So for Annex I systems, broadly AI built into products already covered by EU sectoral safety and market surveillance rules, meaning extra safeguards against harms from AI systems used in contexts like biometrics, critical infrastructure and law enforcement, the delay was proposed as 12 months after the availability of harmonized standards or other tools supporting compliance with the AI Act, with a long stop date of the 2nd of August 2028. Secondly, it extended the simplified SME regime to small mid-cap enterprises, so those larger entities could also benefit from more favorable treatment, for example, when fines are assessed under the EU AI Act.
(05:21):
And then thirdly, it removed the AI literacy obligation for providers and deployers of AI systems. So in practice, that means organizations deploying systems in areas like hiring, access to services or infrastructure management may see more time to build out compliance frameworks before those obligations fully apply, which for those sectors is good news. Ali, do you want to touch on how that has moved in the council proposals?
Alistair Ho (05:49):
Yes, of course. So there’s a few key developments, both that the council introduced and that they amended from the commission’s position. The first one that they introduced is a prohibition on the generation of non-consensual sexual and intimate content or child sexual abuse material, so-called “nudification apps.” The second is that they reinstated the obligation on AI providers to register systems in the EU high-risk database, even where they consider those systems exempt from high-risk classification. That obligation had been relaxed in the commission’s proposals, and the council decided it should be reinstated in full. And thirdly, on the timelines that Debs just mentioned, they simplified this and pinned it directly to the 2nd of December 2027 and the 2nd of August 2028, removing that subjective element around when the tools and other compliance standards would be implemented.
Deborah Kirk (06:35):
Right. And one point I would note here is that those high-risk timelines are a real shift from the commission’s approach, which linked timing to when supporting guidance becomes available. So businesses now get fixed dates rather than a moving target. And that matters in practice because it allows legal and product teams to plan against a defined compliance horizon rather than waiting for regulatory guidance to land and for the dates to be pinned to it. So, Jonathan, tell us, how did the parliament’s position propose changing all this?
Jonathan Stephenson (07:06):
Yeah. The parliament adopted parts of the commission’s original proposals and also parts of the council’s amendments but departs from them in a few important respects. On high-risk AI, starting there, the parliament has adopted the council’s extended deadlines but notably also proposed stripping the commission of its power to bring those deadlines forward. And that really gives businesses more certainty when they’re planning their compliance timelines as to what things will look like in the coming months and years. On the AI literacy obligation, the commission has proposed scrapping entirely the duty on providers and deployers to make sure their staff and any third parties acting on their behalf are AI literate. So in simple terms, that’s a requirement for organizations to ensure that all their people using or deploying AI systems understand how those systems work and the risks they carry. The parliament has taken a really different approach on this.
(08:03):
Rather than removing the obligation altogether, it wants to keep it for providers and deployers, but in a softer form, essentially a duty to support the improvement of AI literacy, with the commission to follow up with practical implementation guidance. The council, by contrast, sided with the commission’s idea of replacing the obligation with a model where member states are simply encouraged to promote AI literacy. It’s also worth touching briefly on the commission’s proposal to simplify the legislative framework for mid-cap enterprises by extending the framework, including certain exemptions previously only available to SMEs. Both the council and the parliament have supported this proposal, so it seems likely at this point that it will feature in the finalized text. Taken together, all these changes really affect how organizations plan compliance timelines, train staff and assess their exposure under the AI Act, so we’ll be keeping an eye out for where this all lands.
Alistair Ho (09:04):
Yep. And just to touch on deepfakes, the parliament essentially backed the council’s position. Any AI system creating or manipulating realistic sexually explicit imagery of an identifiable person without the consent of that individual would be banned, unless the developer has built effective safeguards against misuse into that tool. So for developers of generative AI tools, what does that mean? Whether image generators, face-swap tools or similar systems, this translates directly into questions of product design, permitted use cases and monitoring for wrongdoing. So in practice, that’s driving closer scrutiny of how these tools are designed; how they’re governed, including what safeguards are built in at the product level; and how misuse is detected, monitored and addressed. So pulling our discussion of the AI Omnibus together, none of this is final. We’ve gone through each of the positions, and that’s part of those trilogue negotiations, right?
(09:52):
So that may change, that may be amended slightly further still, and that process is hopefully going to conclude in the coming months. The expedited timeline reflects the need to finalize the law before the remaining AI Act provisions come into force on the 2nd of August 2026. Those delays really need some certainty to them, for businesses and for the AI industry as a whole, I think.
Deborah Kirk (10:13):
So watch this space, I guess, and hopefully, an answer before the summer.
Alistair Ho (10:16):
Yeah, exactly. Hopefully an answer for the summer.
Deborah Kirk (10:18):
Great. Thank you. And so turning to the UK and the DUAA, which is focused on data protection. It pursues the same broad goal as the EU Digital Omnibus: modernizing and simplifying data regulation. The DUAA’s core data protection and e-privacy provisions came into force on the 5th of February this year, and the reforms span a wide range: cookie and e-privacy rules, how subject access requests are handled, a revised test for international data transfers, broader ICO enforcement powers and, from the 19th of June this year, a mandatory complaints handling regime.
Alistair Ho (11:00):
Yeah, exactly. The DUAA is far-reaching. It’s got several parts covering everything from smart data schemes to digital verification and a few things you just mentioned, but let’s focus here on the changes that the DUAA is bringing in relation to AI tools. And from our perspective, those are essentially automated decision-making, which applies to significant decisions, meaning those producing an adverse legal effect or similarly significant adverse effect on the data subject; recognized legitimate interests as a basis for using personal information in the context of AI tools; the outcomes of the UK’s report on copyright and AI; and also non-consensual deepfakes, which we’ve touched on with the Digital Omnibus as well.
Jonathan Stephenson (11:37):
Yeah. Just taking a little bit of a dive into ADM, or automated decision-making, which is in essence shorthand for decisions that are made about a person by a computer system with no human in the loop. By this, we can think of systems screening job applications, approving loans or setting insurance pricing as some commonly held examples. So before the DUAA, or the Data Use and Access Act, the framework sat under Article 22 of the UK GDPR, which was really quite restrictive. Organizations making solely automated decisions producing legal or similarly significant adverse effects, referred to here as significant ADM, had to satisfy one of three gateway conditions: explicit consent, contractual necessity, or specific authorization under domestic law. And the ICO’s examples include entitlement to child or housing benefit, an automated decision to offer somebody a job, and a decision to grant or decline a mortgage application, which all fell within the scope of what we’re talking about here.
(12:44):
And in practice, this is already prompting organizations to revisit existing decision-making tools, particularly in areas like recruitment and lending, to reassess the lawful basis and the safeguards around how those systems were built. And it’s really worth pausing on that, as you noted earlier: given just how narrow that gateway was, the bar is even higher where such significant ADM involves special category personal data, by which we mean health, race or biometric-related data. In those circumstances, the data subject’s, the individual’s, consent or a substantial public interest is required. However, the DUAA has completely overhauled the model just described. Significant decisions involving special category data remain subject to the stricter regime, but otherwise the blanket prohibition is completely gone. Organizations now have access to the full range of lawful bases for processing personal data, including legitimate interests. So this is a really key shift in how they go about these everyday tasks.
Deborah Kirk (13:48):
Yes, that really is a key shift. And even with that new freedom, significant ADM still requires mandatory safeguards: transparency that ADM is being used, the right for individuals to put forward their views, access to meaningful human review and the ability to challenge the outcome. And in practice, that means organizations cannot treat this as a pure automation play. There still needs to be a meaningful human layer built into decision-making processes.
Alistair Ho (14:17):
Yes. And ADM is the first piece that the ICO has started to build out into its guidance. It opened a consultation on some draft guidance on automated decision-making on the 31st of March 2026. That’s currently live on its website, and the draft refreshes its existing material on ADM and profiling, the use of personal data to evaluate traits or predict things about a person. It’ll be interesting to see whether the consultation shapes the ICO’s interpretation of the DUAA’s ADM changes and also any updates to its good practice recommendations, in other words, what the ICO expects organizations should or could do to comply effectively with the changes in law rather than just what they must do to be compliant. And this is what we’re already seeing clients engage with. The consultation closes on the 29th of May 2026, so there’s still time to respond if you want to, or of course you can share your views with us. We’d be happy to hear them.
Jonathan Stephenson (15:04):
That’s a great plug, Ali. And the ICO is already putting this thinking into practice in its enforcement actions. Its recruitment report, which was published at the end of March after engagement with over 30 employers, found that many employers do not even recognize they are using ADM in their hiring processes. And we shared some examples of how it is commonly used earlier in this discussion. As a result, they’re really failing to put the required safeguards in place. So this is not theoretical. The ICO has written to 16 organizations already and signaled clearly that enforcement action will follow where practices fall short.
Deborah Kirk (15:43):
So one where I think our clients really need to take notice. And as of course you know, the EU is taking a different route on ADM. The omnibus retains the existing EU GDPR position of allowing fully automated decision-making specifically where it is needed to enter into or perform a contract, even where a human could make the same call. It also introduces separate measures for handling special category data in an AI context. The general principle is that this type of data should not be used for developing or operating AI systems, unless the organization in charge, the controller in data protection terms, can effectively prevent it from being used to generate outputs or from being shared with third parties. So already we are seeing divergence in how these regimes, the EU and the UK, approach the same underlying technology and themes around them.
Alistair Ho (16:35):
Moving on to recognized legitimate interests, another change that the DUAA introduces. It’s essentially a defined list, including national security, crime prevention, emergency response, disclosures to public bodies and protecting vulnerable people, where organizations can rely on legitimate interests without carrying out the usual balancing test, documenting it in an LIA, et cetera. By contrast, the EU omnibus expressly confirms that organizations may rely on legitimate interests when developing and operating AI systems, but it remains necessary to do that standard balancing test and put the associated safeguards in place as well. You can’t just point to a recognized legitimate interest as you might be able to under the DUAA. And that divergence matters in practice; of course, businesses operating across both the UK and the EU will need to navigate different thresholds for relying on legitimate interests, even where the underlying AI use case is the same. So you might have to do an LIA in the EU where you might not have had to in the UK, for example.
Deborah Kirk (17:27):
And that may not help most developers whose AI tools are used across a range of purposes, but I suppose it could be significant for more targeted use cases. Crime prevention being an obvious example, I suppose.
Jonathan Stephenson (17:40):
Yeah, exactly that. And just to continue ticking through the list. On copyright, the DUAA imposed a statutory duty on the government to publish a report on copyright and AI, drawing on a public consultation that ran from late 2024 to early 2025. And that report, called the Report on Copyright and Artificial Intelligence, really in the name, was published on the 18th of March this year. And the government has notably pulled back from where it started. Its preferred option had previously been a broad text and data mining exception, analogous to what we currently see in the EU, and this would’ve allowed AI developers to use copyrighted works to train models, with the option for rights holders to opt out. And in practice, that would’ve significantly reduced friction for AI developers building and training models in the UK. But the creative industries pushed back strongly, and the report formally states that this is no longer the preferred way forward. Before any changes to UK law, it recommends gathering evidence on how copyright laws are impacting the development and deployment of AI across the economy. So those cries have really been heard.
Deborah Kirk (18:52):
So Elton John might get his way after all.
Jonathan Stephenson (18:54):
Yeah.
Deborah Kirk (18:56):
Ali, over to you on deepfakes.
Alistair Ho (18:58):
Yep. So moving on to deepfakes, the UK has taken a fairly different, and arguably more forceful, approach to the EU, introducing a new criminal offense targeting the creation and sharing of sexually explicit deepfake images of identifiable adults without their consent. So for developers of generative tools such as image generators and face-swap tools, as mentioned, this creates a clear compliance question about the safeguards built into those tools, the use cases those tools allow and how misuse is identified and addressed, with criminal sanctions rather than implications for the model itself.
Deborah Kirk (19:27):
So pulling those DUAA strands together, a more permissive ADM regime backed by mandatory safeguards; a defined list of recognized legitimate interests; the government’s pulling back from a broad text and data mining exception; and a new criminal regime for non-consensual sexual deepfakes, all with practical implications for developers and deployers.
Alistair Ho (19:52):
Yeah. And as we’ve discussed with divergence from the EU regime, ironically, businesses operating across borders face potentially more complexity with divergence, not less.
Deborah Kirk (20:01):
Right. And that is the theme we explored in real detail at our AI forum, which we hosted earlier this year. For listeners who could not join us, we hosted “AI on the Horizon: Aligning Regulation and Innovation” back in February. We were privileged to host Matt Clifford CBE alongside Adam Dawson of Blackstone, Andy Gandhi of Keystone AI, Lucy Tyrrell of Wordsmith AI and Alex Haskell of ElevenLabs, and it was such a great afternoon. Everyone was super engaged. What made the afternoon really land was the range of conversations. We had a fireside chat, which covered geopolitical tension and policy trade-offs. We had breakout sessions focused on deployment realities, governance friction, and IP and contractual exposure. So as key takeaways, the forum covered several themes: first, the quickening pace of AI capability compared to regulation; secondly, regulatory fragmentation across the EU, UK and US; thirdly, governance and operational accountability for AI systems.
(21:05):
And then finally, the unresolved relationship between AI and IP, including in the context of training data and digital identities. And I think what really came through in all of those discussions is that these are no longer abstract issues. They really are now showing up in live deployments, in business problems, in contract negotiations and board-level discussions.
Jonathan Stephenson (21:29):
You’re exactly right. And it was really great. Those themes connect directly with what we’ve covered here, and what I really take from that list, Debs, is that AI capability is accelerating faster than regulation can keep pace. And we’ve seen that-
Deborah Kirk (21:44):
Still.
Jonathan Stephenson (21:45):
... still, and increasingly so. And AI agent capabilities are, we understand, doubling roughly every seven months, which in the near future will be pivotal for scaled agent deployment and AI agents communicating directly with each other. And so governments’ frameworks are trying, but honestly finding it very difficult to keep up.
Alistair Ho (22:04):
Yeah. And part of that fragmentation is because, in many areas, AI is already regulated, and regulators are building on top of existing regulations and frameworks. So employment discrimination, for example, is just as unlawful when carried out by an AI system as when it’s carried out by a human. Regulators are showing flexibility around experimentation within existing frameworks and the developments built on them, but failures in core risk areas are unlikely to be treated leniently, of course.
Jonathan Stephenson (22:27):
Yeah, exactly. And looking at the macro level, for those who weren’t in the room with us, sovereignty and concentration are becoming central policy questions. Governments are weighing open innovation against supply chain risk and against dependence on foreign technology. And we’ve seen this not just at our own events here, but also at events we’ve attended across the sector and industry. The EU, UK and US approaches to AI regulation, yes, they differ structurally, and businesses operating internationally should expect fragmentation rather than convergence; that is the reality we live in. And yes, that creates real operational friction for organizations trying to deploy a single product across multiple jurisdictions. So I guess the question is, how do we do it?
Deborah Kirk (23:11):
Absolutely. And on governance, the models built for traditional IT systems are struggling. Companies face pressure to deploy AI quickly, but the approval mechanisms and governance structures designed for slower-moving technologies weren’t built for AI. And knowing who internally owns AI-system behavior and accountability is different for every organization, and it’s become an immediate operational priority.
Alistair Ho (23:36):
Yes. And talking about ownership, that links directly to the IP and contractual issues, which we also discussed. The relationship between AI and IP still leaves a lot of unanswered questions, some of which we’ve discussed in previous podcasts or articles, which are likely to be resolved by courts and legislators, jurisdiction by jurisdiction, perhaps in different ways.
Deborah Kirk (23:51):
Although when, we don’t know.
Alistair Ho (23:52):
Yeah, exactly.
Jonathan Stephenson (23:54):
And that’s a really hard one, as you say, Ali, for clients operating globally.
Alistair Ho (23:57):
Exactly. And then on the practical side, there’s often the issue of transparency around the training data. The outputs can vary even with the same input. Market standards are still evolving. Training data providers may not provide transparency on where they get their data from. That makes allocation of liability and ownership in contracts even more complex, particularly where businesses are acting as both suppliers and procurers of AI systems, for example.
Deborah Kirk (24:18):
Yeah. It’s so difficult to know what good looks like in the absence of knowing what perfect looks like. And I guess training data is perhaps one of the clearest examples of jurisdictional divergence. So in the EU, the Digital Single Market Directive provides, as we mentioned earlier, a text and data mining exception that allows models to be trained on copyrighted works subject to a rights holder opt-out. In the UK, the government has stepped back from that approach, as we touched on earlier. So developers are operating in a more cautious environment while the policy debate continues.
Alistair Ho (24:52):
On IP and AI, digital identity is another fast-moving area. There’s no dedicated publicity or image right in the UK or EU. So rights holders are currently relying on a combination of trademarks, passing off, that is, misrepresenting your goods or services as someone else’s, and privacy laws to fill that gap. The House of Lords report from March 2026 has called for protections in the UK against unauthorized digital replicas and in-the-style-of uses. But for now in the UK, contractual and technical controls really remain the primary tools to protect against that sort of use.
Jonathan Stephenson (25:20):
For sure. And the question is, where does this leave us? Things are difficult, is the short answer. But what ties it all together is that a one-size-fits-all strategy is not necessarily viable. And what we’ve often been advising, the discussions that we’ve had, what we’ve seen businesses implement, is that businesses can have common themes but ultimately need jurisdiction-specific tailored approaches on regulation, IP and governance to really get this over the line.
Deborah Kirk (25:47):
Okay. So we have covered a lot today. Before we wrap up, let’s perhaps pull out the key takeaways, of which I think there are five. Number one: the Digital Omnibus is in trilogue, and the coming months will determine whether the EU AI Act amendments are finalized ahead of the August 2026 deadline. Number two: the Data Use and Access Act is now in force, with significant changes to ADM, legitimate interests, copyright and deepfakes. Number three: the UK and the EU are diverging in meaningful ways, and businesses should plan for a fragmented regulatory landscape in this space. Number four: the consistent theme from both regulation and market practice is that governance is not optional. It is central to how AI systems are built, deployed and managed. And then finally, the direction of travel here is really clear. Regulation is diverging, but expectations around governance and accountability are converging just as quickly.
(26:51):
So that is the lay of the land as we see it: a busy moment for AI and data regulation in both the UK and the EU, with structural divergence baked in and with regulators already beginning to apply these frameworks in practice. If any of the issues we’ve discussed align with challenges you’re working through, we would love to continue the conversation, so please feel free to reach out to us. Thanks for listening to “SkadBytes.” We will see you next time.
Voiceover (27:20):
Thank you for joining us for today’s episode of SkadBytes. If you like what you’re hearing, be sure to subscribe in your favorite podcast app so you don’t miss any future conversations. Additional information about Skadden can be found at skadden.com.


