Episode Summary
As the EU pivots from drafting AI rules to deploying AI at scale, businesses must face a swiftly advancing regulatory landscape. In this episode of “SkadBytes,” partner Deborah Kirk and associate Alistair Ho from Skadden's IP and Technology team discuss the EU AI Act's phased rollout, the European Commission's €1 billion Apply AI Strategy and the just-published Digital Omnibus. They explore how these developments are driving sector-by-sector AI adoption, reshaping infrastructure and sovereignty considerations, and elevating the role of governance and early engagement for organizations across the region.
Voiceover (00:00):
Welcome to “SkadBytes,” a podcast from Skadden, exploring the latest developments shaping today’s rapidly evolving tech landscape. Join host Deborah Kirk and colleagues as they deliver concise insights on the pressing regulatory issues that matter to tech businesses, investors, and industry leaders worldwide.
Deborah Kirk (00:22):
Welcome back to “SkadBytes,” the podcast from Skadden’s IP and technology team here in London. This is episode four. I’m Debs Kirk. I’m joined by Alistair Ho. Today we’re talking about what might be the most significant shift in Europe’s digital strategy since the GDPR. We’re seeing the EU pivot from writing the AI rule book to deploying AI at scale while simultaneously refining the rules mid-flight.
Alistair Ho (00:48):
Yeah, that’s right. Just yesterday, the Commission published its long-anticipated Digital Omnibus, a set of amendments that’s already dominating legal conversations.
Deborah Kirk (00:55):
We will be reporting on that in due course. At the same time, the Commission is rolling out a €1 billion Apply AI Strategy, aiming to push AI into real-world use with funding, infrastructure and support mechanisms. So this complements, but also really reshapes, the compliance journey under the EU AI Act.
Alistair Ho (01:15):
Yeah, that’s right. It’s not just startups we’re speaking about here. These changes affect a wide range of businesses, from financial sponsors all the way to infrastructure operators, from insurers and gaming companies to pharma and platforms. Across the firm, we are already seeing deals and product launches where AI is no longer peripheral. It’s central to the deal itself.
Deborah Kirk (01:33):
Absolutely. We are really seeing a change in the landscape in that regard. This all sits within the EU’s broader digital decade agenda, which is an initiative aimed at transforming Europe’s digital capacity by 2030 with targets across skills, infrastructure and business tech adoption, and AI is one of its cornerstones. Let’s unpack what’s changing. Maybe Ali, you can set the scene for us on the AI Act timeline.
Alistair Ho (01:56):
Of course. The AI Act came into force in August 2024, but the obligations are being phased in. So transparency duties for general-purpose AI, or GPAI, started from August 2025. The high-risk obligations apply from August 2026. And then there are some transition arrangements that run all the way into 2027.
Deborah Kirk (02:14):
That’s right. The new Digital Omnibus proposes delaying those high-risk obligations by at least a year, easing SME burdens and potentially eliminating some obligations altogether, like AI literacy requirements or excessive technical documentation, which in many ways is good news. This isn’t just about process; it is politically driven. We’ve seen former Italian Prime Minister Mario Draghi urging the Commission to simplify its digital framework so that it doesn’t hold back innovation. There’s also been sensitivity around how these rules will land internationally, particularly with the U.S. What we’re seeing is Brussels really trying to strike a balance between maintaining leadership on trust while making the system work for businesses. Right?
Alistair Ho (02:59):
Yeah, exactly. Just to put that in a bit more context, the UK is taking a very different approach. Its Data (Use and Access) Act and its broader AI strategy focus on pro-innovation, sector-led regulation, all those buzzwords, with key guidance and sandboxes rather than hard statutory obligations. There’s no direct UK equivalent to the EU AI Act, and of course, there’s no centralized enforcement framework. That divergence matters for any business that’s planning to operate, or does operate, across borders with respect to AI. So you can’t assume that what’s compliant in one market will work in the other, or that regulators will take the same approach to the same product.
Deborah Kirk (03:31):
That’s just part of being in the tech business across the board, I suppose. And that really is a key point. Let’s walk through the five big takeaways from the Apply AI Strategy and show how they connect to this ever-evolving regulatory picture. Takeaway one. First up, sector-by-sector deployment. AI is now being treated almost like critical national infrastructure. The EU Commission is targeting adoption in key areas: energy, manufacturing, healthcare, mobility and public services, and tying funding and infrastructure support to them.
Alistair Ho (04:06):
We’ve already seen how this arises in practice. A business in the infrastructure space building predictive maintenance tools was invited into a national pilot, but the catch was that compliance alignment with AI principles was a precondition. So you get that sort of deal. Another great example of where this is going is the Commission’s AI drug discovery challenge, targeting diseases like Alzheimer’s. The winner gets access to compute power, expert guidance and access to what are being called AI factories. So that’s Apply AI in action.
Deborah Kirk (04:33):
Great. Takeaway two. The second takeaway is the targeted support for SMEs and scale-ups. Now, the EU knows that for this to work, AI adoption needs to go beyond the top tier players.
Alistair Ho (04:46):
And that’s why they’re prioritizing support for the parts of the market that often lack in-house AI or compliance resources but are crucial to deploying AI at scale. We’re seeing funding for the AI Skills Academy, expansion of the digital innovation hubs, and even the launch of AI experience centers.
Deborah Kirk (05:02):
Under the omnibus, we’re seeing proposals for streamlined registration, reduced documentation burdens, and more clarity around exemption thresholds for these smaller players. We’ve talked to companies who are already considering accessing these centers for things like technical due diligence, classification help and early-stage audits. But this applies to any business building with AI without an in-house regulatory function. As you say, it’s not just for startups; it could be a manufacturing mid-cap or a SaaS platform in the insurance sector. So the support is real. Even though it’s nascent, it is being used.
Alistair Ho (05:37):
Third takeaway: AI infrastructure and digital sovereignty. This is definitely a big one. The EU is increasingly tying its funding and trust frameworks to where your AI is hosted and trained, and also where it’s deployed.
Deborah Kirk (05:48):
So the message essentially is align your infrastructure with EU preferences?
Alistair Ho (05:52):
Precisely. It’s clear how this could come up in a deal or deployment strategy. For instance, a business in the gaming sector may need to shift its inference hosting from a US-based cloud to an EU zone to justify, or qualify for, a regional funding scheme. It’s not theoretical: location and vendor strategy are now risk and eligibility variables for key funding and key infrastructure.
Deborah Kirk (06:10):
But I guess there’s a tension there, isn’t there? The EU is pushing hard on sovereignty, but most of the major infrastructure and foundational model players remain outside the EU.
Alistair Ho (06:21):
Yeah, and that’s the policy gap. That’s still very much in progress and still very much in development.
Deborah Kirk (06:25):
Takeaway four, using compliance as a strategic differentiator. We’re seeing clients who proactively classify their systems and document risk controls, gaining real traction with partners, investors, and customers. One financial sponsor who we’ve spoken to has built AI risk assessment into their playbook for all of their new deals. They’re not waiting for enforcement. They’re using good governance as a proxy for being ready to scale.
Alistair Ho (06:52):
That’s where tools like the AI Act service desk, Commission guidance and even the AI observatory come in. It’s not just about staying out of trouble; it’s about demonstrating capability. The challenge, especially with general-purpose AI models, is that many providers haven’t disclosed the compute power or training data needed to even assess their systemic risk status. And the definition of what that is, and what’s in scope for it, is constantly evolving, constantly changing.
Deborah Kirk (07:12):
Yeah, absolutely. Finally, our fifth takeaway, and this is about joined-up governance and engagement. With the Apply AI Alliance (that is a lot of A’s), the AI observatory and the AI board now live, companies are being invited to help shape how the rules are interpreted and enforced. Although public visibility of uptake is limited, businesses can join sector-specific alliances in insurance, mobility and digital health to preempt how enforcement might land.
Alistair Ho (07:43):
For example, a platform may be able to push for clearer guidance on whether a model-based ranking tool is actually considered high risk or not. That level of influence on policy and law is rare, but theoretically it’s available now. So for companies seeking early warning, interpretive insight and influence over those kinds of decisions, this may be the moment to get involved.
Deborah Kirk (08:03):
Absolutely. So what’s the key takeaway from all of this? We think it is that Europe is not standing still. The AI Act is now being phased in and partially recalibrated, but the Apply AI Strategy is already shifting how AI is funded, adopted and assessed. If you’re building, deploying or acquiring AI-driven systems across gaming, insurance, infrastructure, pharma or platforms, now is the moment to do the following five things. Number one, map your use case. Number two, classify your role. Number three, understand the phasing and guidance gaps. Number four, monitor infrastructure risk and funding eligibility. Number five, join the governance conversation if you can. The next 12 months are going to, we think, really define AI compliance posture for years to come, not just for regulators, but for counterparties, investors and users. Thanks for listening to “SkadBytes.” If you found this helpful, subscribe and explore our earlier episodes, from the AI Act and early design to Getty v. Stability AI and the future of copyright. Thanks. See you next time.
Voiceover (09:15):
Thank you for joining us for today’s episode of “SkadBytes.” If you like what you’re hearing, be sure to subscribe in your favorite podcast app so you don’t miss any future conversations. Additional information about Skadden can be found at skadden.com.
Listen here or subscribe via Apple Podcasts, Spotify, YouTube or anywhere else you listen to podcasts.
See all episodes of SkadBytes: Tech Innovation Meets Regulation