AI Act State of Play – Key Obligations Postponed and Amended, Alongside New Guidance

Skadden Publication / Cybersecurity and Data Privacy Update

Nicola Kerr-Shaw, David A. Simon, William E. Ridgway, Susanne Werry, Aleksander J. Aleksiev

Executive Summary

  • What’s new: The European Parliament and the European Council announced an agreement to postpone the entry into force of the AI Act’s high-risk AI obligations, alongside other amendments. In parallel, the European Commission published guidance on the AI Act’s transparency obligations, which enter into force starting in August 2026.
  • Why it matters: The delay gives companies additional time to assess and prepare for AI Act compliance, while the European Commission’s guidelines will likely drive local regulators’ enforcement priorities and approach to interpreting the AI Act’s transparency obligations.
  • What to do next: Companies may want to reprioritize their AI Act compliance efforts to focus on other obligations (such as the transparency obligations) that will come into force this year. Companies should also consider benchmarking their existing transparency measures against the commission’s new guidance. We provide a table of key regulatory expectations.

__________

The AI Act Delay and Other Amendments

On 7 May 2026, the European Parliament (EP) and the European Council (Council) announced that they had reached an agreement to amend the EU’s AI Act. The agreement, which is broadly similar to the amendments initially proposed by the European Commission through its Digital Omnibus regulation (see our previous client alert), will:

  • Postpone AI Act obligations:
    • For high-risk AI uses, such as biometric identification and recruitment screening, to December 2027 (previously, compliance was scheduled to begin August 2026).
    • For high-risk AI used as components of certain safety-critical products, such as lifts, to August 2028 (previously, compliance was scheduled to begin August 2027).
    • For obligations to watermark AI-generated content, to December 2026 (previously, compliance was scheduled to begin August 2026).
  • Ban “nudifier” applications that can “depict intimate parts” or “sexually explicit activities” of an identifiable person, starting December 2026.
  • Exempt certain machinery from the AI Act, if the machinery is already covered by the EU Machinery Regulation (e.g., construction and manufacturing equipment).

Transparency Guidelines

Although the agreement between the Council and the EP postpones the AI Act’s watermarking obligations, the remaining AI Act transparency obligations will come into effect on 2 August 2026. In parallel with the EP’s and Council’s announcement, on 7 May 2026, the European Commission published draft guidance on these transparency obligations. The table below sets out a few key points from the draft guidance:

AI ACT TRANSPARENCY OBLIGATION | WHAT THE GUIDANCE SAYS

Providers1 of AI systems that interact with individuals (e.g., customer service chatbots) must design the system so that individuals are “informed” they are interacting with an AI system.

This obligation comes into force 2 August 2026.

  • No specific format of disclosure is required, but the disclosure must be clear to users.
  • Companies are expected to tailor the level of disclosure to users.
  • For example, if an AI system is intended to be used by children or the elderly, then more elaborate disclosures may be required, whereas AI systems intended to be used by a professional audience (e.g., an AI coding assistant intended for use by computer programmers, or a medical AI system intended for use only by doctors) may not require any specific disclosure at all.
  • Companies will need to make disclosures within the user interface itself (rather than, for example, in terms and conditions) and should provide disclosure on “first interaction or exposure” rather than after an interaction has begun.

Providers of AI systems that can produce AI-generated or AI-manipulated content must ensure that such content is digitally marked as, and detectable as, artificially generated.

The Council and EP agreement delays this obligation until December 2026.

  • The watermarking obligations require both:
    (i) A machine-readable watermark.
    (ii) A means to enable humans to “detect” AI-generated content — for example, by providing a software tool that users can apply to detect whether a given image is AI-generated.

  • These solutions must be effective, reliable, interoperable and robust, even though international standards for such markings remain relatively immature.

Deployers of AI systems that generate “deep fake” content must disclose that those “deep fakes” are AI-generated.

This obligation comes into force 2 August 2026.

  • The focus of this obligation is to prevent “deep fakes” of real people, meaning that fanciful images such as a “video of mice arguing in human language over the best type of cheese” do not require disclosure.

Deployers of AI systems that generate text used for “the purpose of informing the public” must disclose the text as AI-generated.

This obligation comes into force 2 August 2026.

  • The “purpose of informing the public” is broad and includes, for example, newspaper reporting, a company’s public corporate reports, academic papers and social media posts from government departments.

The commission’s guidance is not strictly binding, and the AI Act’s transparency obligations will primarily be enforced by national regulators rather than the European Commission. Still, the guidance is likely to influence national regulators’ interpretation and enforcement of the AI Act, and companies should therefore consider assessing their transparency measures against the guidance to identify gaps.

What Companies Should Consider Now

  • Following the agreement to delay the AI Act’s high-risk AI obligations, companies may want to reprioritize their AI Act compliance programs to reflect the additional time before high-risk AI obligations become enforceable.
  • Companies should consider benchmarking their compliance with the AI Act’s transparency obligations, which come into force beginning in August 2026 and have not been delayed, against the commission’s guidance.

_______________

1 The AI Act’s transparency obligations apply to “providers” (the company that develops an AI system) and “deployers” (the company that uses/deploys the AI system).

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
