Executive Summary
- What’s new: New AI-enabled vulnerability discovery tools, including Anthropic’s Claude Mythos Preview, have reportedly demonstrated the potential to identify previously unknown software vulnerabilities faster and on a scale previously unimaginable.
- Why it matters: Traditional vulnerability management practices, such as weekly or monthly patching schedules, prioritizing avoidance of business downtime over security remediation, and triaging remediation based in part on the likelihood of exploitation, are likely to prove insufficient to manage the dramatic increase in vulnerabilities identified by these new AI-enabled tools and the anticipated growth in the volume and efficacy of cyberattacks leveraging those vulnerabilities.
- What to do next: Companies should consider revisiting their vulnerability management and incident response processes to ensure that they can identify and address vulnerabilities at an increased pace.
__________
New AI Vulnerability Discovery Capabilities
Recent developments in generative artificial intelligence (AI), exemplified by the release of Anthropic’s Claude Mythos Preview, have enhanced the automated detection and exploitation of software vulnerabilities. These tools enable the exploitation of systems at a scale and with a level of sophistication previously unattainable by human actors alone. Anthropic has stated, for example, that its latest AI model has, when directed to do so, identified thousands of previously unknown high- and critical-severity vulnerabilities, including vulnerabilities in every major operating system and every major web browser. Many of these vulnerabilities had been present for years, undetected even by highly skilled human security researchers.
Public reporting also indicates that next-generation GenAI models are better at exploiting vulnerabilities. Anthropic reports, for example, that its newest model has demonstrated the ability, in controlled testing environments, to autonomously exploit certain known vulnerabilities at materially higher rates than prior models. Taken together, these changes suggest a potential step-change in the ability of malicious cyber actors to identify and exploit software vulnerabilities.
In response to this threat to system defenders globally, Anthropic has created Project Glasswing. This multiparty initiative is providing industry partners and critical infrastructure providers with exclusive early access to Claude Mythos so that they can pre-emptively identify and patch critical vulnerabilities. The coalition consists of over 40 major technology firms. However, other similarly powerful models with fewer safeguards will eventually become publicly available, and all businesses should begin preparing now for a world in which these adversarial capabilities are widespread.
European and US Legal Requirements for Vulnerability Management
Tracking this shifting threat landscape is especially important for companies operating in Europe and the U.S., where multiple regulatory frameworks impose cyber governance, vulnerability management and vulnerability reporting obligations.
European Legal Requirements
In Europe, companies are subject to strict obligations under a range of regulations. For example:
| Law | Topic | Requirement |
| --- | --- | --- |
| The EU Cyber Resilience Act (CRA), which applies to producers of “products with digital elements,” including network-connected hardware and software. | Governance | For regulated products, cybersecurity risks must be assessed across the product development lifecycle, from initial planning and design through to post-release maintenance. |
| | Vulnerability/patch management | Regulated products must comply with the “essential cybersecurity requirements,” including requirements to ensure products have no known exploitable vulnerabilities, that any vulnerabilities discovered are remediated without delay by providing security updates and that cybersecurity risk assessments are kept up to date. |
| | Exploited vulnerabilities reporting | Manufacturers of regulated products must notify regulators within 24 hours of any actively exploited vulnerabilities or incidents that compromise or could compromise data on that product. |
| The European Network and Information Systems Directive (NIS2), which applies to “essential” and “important” entities in various critical infrastructure and public administration sectors. | Governance | Management bodies (generally, boards) must approve cybersecurity risk management, oversee its implementation and be personally liable for failures of cybersecurity risk management. |
| | Vulnerability/patch management | Regulated entities must put in place a range of security policies, including in relation to vulnerability handling and disclosure. |
| | Incident reporting | Regulated entities must notify regulators within 24 hours of any incident that has a “significant impact” on the provision of their services. |
| The U.K. Network and Information Systems Regulations (NIS), which applies to “essential” and “digital service provider” entities across critical infrastructure sectors. | Vulnerability/patch management | Regulated entities must put in place a range of security policies, including in relation to vulnerability handling and reporting. |
| | Incident reporting | Regulated entities must notify the regulators of any incident that has a “significant impact” on the provision of their services. |
| The Digital Operational Resilience Act (DORA), which applies to financial entities, including alternative investment fund managers. | Governance | Management bodies (generally, boards) must define, approve, oversee and be personally responsible for the implementation of cybersecurity risk management policies. |
| | Vulnerability/patch management | Regulated entities must put in place a range of cybersecurity risk management policies, including policies in relation to vulnerability and patch management. |
| | Incident reporting | Regulated entities must notify regulators within 24 hours of any “major ICT-related incident,” including incidents that result in unauthorized access or loss of confidentiality. |
| The General Data Protection Regulation (GDPR), which applies to companies that process personal data. | Governance | Regulated entities are responsible for and must be able to demonstrate compliance with the GDPR. |
| | Vulnerability/patch management | Regulated entities must implement “technical and organizational measures” to ensure an appropriate level of cybersecurity, including vulnerability and patch management measures. |
| | Incident reporting | Regulated entities must report any security breach affecting personal data to regulators within 72 hours. |
The emergence of the next generation of AI models will require companies to rethink how they comply with the laws described above. Once automated or AI-assisted vulnerability discovery becomes more common, regulators, customers and counterparties are likely to ask whether a company’s testing and vulnerability management processes remain commensurate with known risks, and whether the company has updated its testing, prioritization and remediation practices, including by using next-generation AI tools to identify and address vulnerabilities in its own software. For example, GDPR regulators are likely to ask what changes companies have made to their vulnerability management processes to ensure those processes still provide an “appropriate level of cybersecurity” in light of the increased threat.
Companies will also need to rethink their approaches to patch management for third-party software, including open source software, as patching cycles measured in weeks, rather than minutes, may no longer be a sufficient response to AI-assisted attackers.
US Legal Requirements
In addition to European requirements, companies with U.S. operations face an array of proactive obligations under state and federal laws to maintain reasonable data security practices.
At the federal level, the Federal Trade Commission (FTC) has long treated the failure to implement reasonable security measures as an unfair or deceptive practice under Section 5 of the FTC Act, and its enforcement actions have increasingly scrutinized whether companies’ security programs keep pace with evolving threats. The Federal Communications Commission (FCC) imposes data security obligations on telecommunications carriers and has brought enforcement actions for failures to safeguard customer proprietary network information. In the defense contracting space, the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC) program and related Defense Federal Acquisition Regulation Supplement (DFARS) requirements impose specific security controls on contractors handling controlled unclassified information, with compliance failures potentially jeopardizing contract eligibility.
At the state level, attorneys general across the country have leveraged both general consumer protection statutes and sector-specific data security laws to pursue enforcement actions against companies whose security practices were found wanting. The emergence of next-generation AI vulnerability discovery tools may influence how regulators and plaintiffs assess what constitutes “reasonable security”: Companies that fail to account for AI-enabled threats in their security programs may risk falling short of these regulatory expectations.
These obligations carry particular litigation risk under the California Consumer Privacy Act (CCPA). The CCPA effectively codifies the reasonable security standard as a private right of action — meaning individual consumers, not just regulators, can hold businesses accountable for failing to keep pace with the evolving threat landscape. Where AI-powered tools are making vulnerability exploitation faster and more effective, the failure to adapt security practices to these emerging threats could be cited by plaintiffs as direct evidence that a business did not implement and maintain “reasonable security procedures and practices” as required by law. This standard-of-care argument is at the heart of CCPA exposure: A company’s security posture will be measured not against a static benchmark, but against the current threat environment, which increasingly includes AI-enabled attack vectors.
The potential financial consequences compound this risk. Under California Civil Code § 1798.150, where nonencrypted and nonredacted consumer personal information is subject to unauthorized access and exfiltration, theft, or disclosure as a result of a business’s failure to implement and maintain reasonable security procedures and practices, the business faces statutory damages ranging from $100 to $750 per consumer per incident. Given the scale of modern data breaches, which can affect millions of consumers, these damages can aggregate to significant exposure in larger-scale incidents.
Taken together, the combination of a plaintiff-friendly reasonable security standard and potentially massive statutory damages makes the CCPA one of the most significant litigation risk vectors for companies that do not proactively account for AI-driven threats in their security programs.
What Companies Should Consider Now
- Review existing cybersecurity compliance documentation, particularly vulnerability management, patching and testing policies, as well as incident response policies and procedures, to assess whether they remain appropriate in light of the anticipated step-change in both the volume of identified vulnerabilities and the ability of malicious cyber actors to exploit them.
- Map existing security testing, vulnerability management, open-source diligence and product documentation processes against European and U.S. legal requirements.
- Brief boards and senior management on the augmented risks posed by recently announced AI-enabled vulnerability discovery tools, and facilitate development of robust roadmaps for advancing defensive measures.
- Identify opportunities to use next-generation AI models to assist with cybersecurity defense and compliance. Use cases could include penetration testing, internal code review, automated endpoint detection and response, and patch development.
- Consider expanding existing vulnerability management capabilities, processes and staffing to prepare for the potential flood of incoming patches as Project Glasswing and other similar initiatives identify critical vulnerabilities across widely used software.
- Prepare to respond to incidents at a higher volume and intensity by running tabletop exercises for multithreat incidents, automating remediation actions where possible and enabling security controls, including segmentation, phishing-resistant multifactor authentication (MFA) and regular rotation of secrets.
- Engage in collective defense strategies and consider integrating advanced tooling and automation into vulnerability identification and remediation workflows. Threat actors are increasingly collaborating to optimize for each stage of an attack, and in response companies should coordinate with information sharing and analysis centers (ISACs) and computer emergency response teams (CERTs) to develop standardized sector guidance and build a bank of threat intelligence.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.