In this month's edition of our Privacy & Cybersecurity Update, we examine the European Commission's second annual review of the Privacy Shield and the Department of Commerce's guidance on how to comply with the Privacy Shield post-Brexit, as well as the Federal Trade Commission's request for public comment on whether to amend identity theft detection rules. We also look at the developments that came out of the European Data Protection Congress recently organized by the International Association of Privacy Professionals and the European Central Bank's recently published Cyber Resilience Oversight Expectations to enhance cyber awareness and resiliency throughout the financial sector.
European Commission Publishes Second Annual Review of EU-US Privacy Shield
The European Commission’s second annual report on the EU-U.S. Privacy Shield (Privacy Shield) featured a generally positive review of the transborder data flow program.
As 2018 drew to a close, the European Commission issued its second annual review of the performance of the 2016 EU-U.S. Privacy Shield, the negotiated framework that allows U.S. companies that have self-certified to import data from the European Economic Area (EEA) in compliance with European data protection laws.1
The report was generally very positive, with the commission noting that the Privacy Shield continues to ensure an adequate level of protection for personal data transferred from the EEA to Privacy Shield-compliant companies in the U.S. The commission also noted that the U.S. had taken steps in 2018 to address recommendations made by the commission in its 2017 report, such as having the Department of Commerce conduct compliance “spot checks” and look for false claims of compliance.
The commission also praised the Federal Trade Commission for taking a more proactive approach to enforcement, including by issuing administrative subpoenas to request information from self-certifying companies.
The commission also noted that it expects the U.S. to nominate a permanent ombudsperson by the end of February 2019 to replace the current acting ombudsperson. Under the Privacy Shield, the ombudsperson is responsible for ensuring that complaints regarding access to personal data by U.S. authorities are addressed.
Although many privacy advocates have criticized the Privacy Shield as inadequate, arguing that U.S. enforcement is weak, the commission’s second annual review provides an important endorsement of the Privacy Shield. The commission encouraged the U.S. to adopt a comprehensive data protection law, an issue that is likely to garner significant attention in 2019.
Department of Commerce Addresses Impact of Brexit on the Privacy Shield
The Department of Commerce has added a new Frequently Asked Questions section to its Privacy Shield site to address the impact of Brexit.
The Department of Commerce has provided companies that self-certify to the Privacy Shield with some guidance on how to handle United Kingdom-based data in light of the U.K.’s intended withdrawal from the European Union on March 29, 2019, commonly referred to as Brexit. Given the general uncertainty surrounding Brexit, the agency outlined two potential scenarios:
- Scenario (1) “Transition Period”: The U.K. and EU preliminarily have agreed that there will be a transition period from March 30, 2019, to December 31, 2020, during which EU law, including EU data protection law, will continue to apply in the U.K. During this period, the Privacy Shield will continue to apply to data transfers from the U.K. to U.S. Privacy Shield participants. No additional action will be required of Privacy Shield participants during this period, although companies should begin to implement plans for the post-transition period outlined below.
- Scenario (2) “No Transition Period”: If there is no transition period, then the steps outlined below need to be in place by March 29, 2019 (assuming no delay in the Brexit date).
Steps that will be required to import U.K. data under the Privacy Shield:
- All public commitments regarding the Privacy Shield must explicitly state that personal data received from the U.K. is handled in reliance on the Privacy Shield. Human resources policies also must be updated if HR data is imported from the U.K. in reliance on the Privacy Shield. Through this commitment, a company participating in the Privacy Shield will be deemed to have committed to cooperate and comply with the U.K. Information Commissioner’s Office (ICO) with regard to personal data received from the U.K. in reliance on the Privacy Shield. The Department of Commerce has provided the following model language:
- Unchanged from current requirements, organizations must continue to maintain a current Privacy Shield certification, recertifying annually.
Conference Adopts Declaration on Ethics and Data Protection in Artificial Intelligence
A group of data protection commissioners issued a declaration outlining measures to preserve human rights as the development of artificial intelligence (AI) technology becomes more prevalent.
On October 23, 2018, the 40th International Conference of Data Protection and Privacy Commissioners (the conference) adopted the Declaration on Ethics and Data Protection in Artificial Intelligence (the declaration).2 The declaration endorses six guiding principles to ensure the protection of human rights in conjunction with the development of AI. The conference also established a permanent working group on Ethics and Data Protection in AI, which will be in charge of setting common governance principles on AI at an international level.
The declaration was written by the European Union's independent data protection authority (the European Data Protection Supervisor, or EDPS), the French and Italian national supervisory authorities (the French Commission Nationale de l’Informatique et des Libertés and the Italian Garante per la protezione dei dati personali) and was sponsored by another 14 privacy regulators worldwide.
The announcement at the conference triggered an official public discussion on digital ethics and its place in regulating the use of AI.
Human Rights and Artificial Intelligence
The declaration recognizes the significant benefits that AI may have for society while also highlighting the corresponding risks. It acknowledges that the rights to privacy and data protection are increasingly challenged by the rapid development of AI and that the collection of personal information has the potential to impact human rights more broadly, most notably involving the right to not be discriminated against and the right to freedom of expression and information.
One example of this direct impact is that AI systems have been found to contain “inherent bias.” This can occur as a result of the initial configuration of AI models, which may not include an exhaustive or representative list of parameters or use cases, and may accordingly generate prejudiced results. As such, this may lead to unfair discrimination against certain individuals or groups in areas such as credit scoring by potentially restricting the availability of certain services or content.
The Six Guiding Principles Endorsed by the Conference
The principles outlined by the declaration are:
- AI and machine learning technologies should be designed, developed and used with respect for fundamental human rights and in accordance with the fairness principle. The conference suggested this might be achieved by considering the collective impact that the use of AI may have, and ensuring that systems adhere to their original purposes, while making certain that the data they generate is used in a way that is compatible with such original purposes.
- There should be continued attention and vigilance through actions such as promoting accountability of all relevant stakeholders. The principle encourages documented governance structures and processes to ensure collective and joint responsibility involving the whole chain of stakeholders at the outset of any AI project.
- Improvement in the transparency and intelligibility of AI systems remains necessary. For example, by providing adequate information on purpose and effects of such systems, individual users of the technology can manage their expectations and increase their level of control over AI systems.
- An "ethics by design" approach should be adopted, focusing on responsible design and fair use of AI systems, thereby implementing the newly codified principles of "data protection by design and by default" set out in the GDPR.
- Echoing the greater transparency requirements set out in the GDPR, empowerment of every individual should be promoted, which can be achieved through communicating information and ensuring that individuals are aware of their rights. In the context of AI solutions, which may rely on solely automated decision-making processes, individuals can exercise their right to challenge any such decision in line with GDPR requirements.
- Unlawful biases or discriminations that may result from the use of data in AI should be reduced or mitigated, including by issuing specific guidance to acknowledge and address any such bias or discrimination-related issues.
Overall, the six principles aim to promote a use of AI that is fair and transparent, and to ensure greater accountability for failure to meet this standard, in compliance with the principles applicable to the processing of personal data under the GDPR.
The conference also established a permanent working group on Ethics and Data Protection in Artificial Intelligence, which will promote understanding and respect for the six guiding principles and encourage the establishment of international principles on AI. In this vein, the conference calls for common governance principles on AI that, due to the breadth of issues raised by the widespread use of AI worldwide, only can be achieved on the basis of concerted cross-sectoral and multi-disciplinary efforts.
The declaration, which was authored by two EU supervisory authorities and the EDPS and featured the United Kingdom’s Information Commissioner’s Office as a co-sponsor, is likely to be an ongoing focus in the EU. The declaration also has been endorsed by 42 organizations and 185 individuals, many of whom are non-European, indicating that the intersection of ethics and AI is a topic of increasing concern worldwide.
Other parties also have signaled their interest in this area, including the European Commission, which in April 2018 revealed plans to draft its own ethical guidelines on AI. In addition, a group of German data protection commissioners recently called for public bodies to ensure protections on algorithms and AI transparency.
Ethical considerations and data protection in AI is an area in which we are likely to see considerable development. The declaration also suggests a renewed focus by regulators on ethics. In light of recent scandals regarding certain uses of personal data, companies may now be expected to concentrate on ethics in addition to complying with applicable laws and regulations, including the GDPR. Areas such as AI, where statutory regulation may not be able to keep up with rapid technological development, may be particularly suited to regulation through ethical principles. Companies that use machine learning and AI should monitor the effects that the declaration (and other AI-focused regional or worldwide standards and practices) might have on their operations and compliance mechanisms.
FTC Seeks Public Comment on Identity Theft Detection Rules
In early December 2018, the FTC, as part of its periodic review of its regulations, announced that it is seeking public comment on whether to amend rules requiring financial institutions and creditors to take steps to detect and prevent identity theft.
The FTC's announcement involves two specific measures. The first, the FTC’s Red Flags Rule, requires financial institutions and creditors to implement a written identity theft prevention program designed to identify “red flags,” and to prevent and mitigate the damage of identity theft. The second rule at issue, the Card Issuers Rule, requires that debit and credit card issuers implement policies and procedures to assess the validity of requests for change of address and for additional or replacement cards.
Among other questions, the FTC specifically asked about the continuing need for the rules, their benefits and costs, and how the rules should be updated to account for changes in technology and economic conditions.
The FTC published the two rules more than 10 years ago, in November 2007, and they were implemented in 2011 under the Fair and Accurate Credit Transactions Act of 2003 (FACTA). FACTA requires the FTC to establish guidelines and regulations for financial institutions and creditors to identify patterns and practices that might reveal identity theft.
In addition to asking about the necessity, benefits and costs of the rules, the FTC seeks public comment regarding the evidence of industry compliance with the rules, the impact the rules have on small businesses specifically and whether there are any types of creditors that are not currently covered by the Red Flags Rule that should be because they offer or maintain accounts that could be at risk of identity theft.
IAPP Holds Europe Data Protection Congress in Brussels
A recent European Data Protection Congress organized by the International Association of Privacy Professionals (IAPP) brought together a wide range of privacy and data protection professionals and regulators to discuss current developments in data protection throughout the EU.
The IAPP held a Europe Data Protection Congress in Brussels at the end of November 2018, bringing together EU and national government regulatory officials, private sector professionals, academics and nonprofit representatives to discuss new developments in the field of data protection, particularly in the GDPR era.
The general consensus at the Congress was that the six-month transition period following the May 25, 2018, rollout of the GDPR had ended. This was the clear message communicated by the representatives of the various supervisory authorities, the EU Commission and the European Data Protection Board (EDPB). It also was apparent that the state of data protection had moved from the theory of the GDPR to the practice of the GDPR, with a specific focus on how to ensure sustainable compliance over time, in which accountability is critical.
The GDPR supervisory authorities appear ready to make use of their increased investigation and sanction powers. A few investigations already appear to be pending following the early activity of France's Commission Nationale de l'Informatique et des Libertés (CNIL), and the U.K.'s Information Commissioner's Office also has begun to issue warnings, remediation actions and fines. A presentation slide from CNIL revealed that in the most recent period, 8,275 complaints had led to 300 inspections, which resulted in 47 formal notices and, ultimately, 11 sanctions. At the same time, the Comissão Nacional de Protecção de Dados (CNPD), the Portuguese regulator, recently imposed the first monetary fine for noncompliance with the GDPR, determining that a hospital failed to adequately control access to patient data or maintain sufficient policies and procedures. This adds to the sense that GDPR enforcement is now becoming a reality for companies.
The GDPR introduced the option of collective action complaints, including by not-for-profit bodies. These complaints can arise from individuals’ rights to compensation for material and non-material damages. Aside from the pecuniary damages that may follow from such complaints, the potential for negative external reputational effects is a key concern for companies.
The institutional mindset regarding the GDPR should not be merely becoming "GDPR-compliant," where a one-off compliance exercise is perceived as a sufficient response to the new regime. Rather, becoming "GDPR-ready" on an ongoing and evolving basis is the necessary approach. It is not sufficient for companies to rely solely on a relevant suite of data protection policies and processes. The focus now is for companies to embed GDPR-compliant policies and procedures within their organizations, put them into practice, and cultivate and monitor their ongoing effectiveness. This likely represents a challenge for many companies that initially viewed GDPR compliance through the prism of gap analysis, focusing on key risk areas. As a result, such companies have been forced to continue managing outdated legacy systems, creating additional institutional complexities in bringing their internal procedures and processes up to date with GDPR requirements.
The Congress also revealed that close to 98 percent of companies are not willing to outsource their data protection officer (DPO) role and prefer to keep that position in-house, signaling the importance and sensitivity of the role. At the same time, the IAPP-EY 2018 Privacy Governance Report revealed that 75 percent of respondent firms reported appointing a DPO, with an almost even split between respondents reporting legal compliance and business functionality as the motivation for creating the role.
A report from The Future of Privacy Forum and Nymity also provided insight into the "legitimate interests" justification for data processing. The report outlines a selection of practical scenarios and established cases illustrating when the legal ground of “legitimate interest” may be relied upon. It also reviews the core elements of a Legitimate Interest Assessment: identifying the “interest,” applying a necessity test and conducting a balancing analysis to ensure that the controller’s legitimate interest does not override the rights and freedoms of individuals. The EDPB reiterated that there is no hierarchy among the legal bases for data processing, and that the appropriate basis must be determined on a case-by-case basis.
One panel at the Congress addressed the U.S. Clarifying Lawful Overseas Use of Data (CLOUD) Act and “e-evidence,” focusing on current proposals from the EU Commission before the European Parliament that would facilitate the cross-border transfer of e-evidence in the context of criminal proceedings. The EU Commission expressed its preference that an executive agreement (or treaty) under the CLOUD Act be entered into between the EU member states as a whole and the U.S., though Jason Biros, the U.S.-based panelist,3 suggested that two to three EU member states would need internal legal reforms in place in order to satisfy the stringency requirements set by the U.S. under the CLOUD Act. Negotiations between the U.K. and the U.S. are still pending, although the U.K. already has made several domestic law changes to satisfy those requirements.
Panelists at the Congress also expressed concern regarding increased commercial AI use, noting that automated AI functions could produce effects that, as unintended consequences of data processing, would violate GDPR rules (e.g., emerging home monitoring and caretaking systems for elderly individuals that involve ongoing data monitoring and processing). They stressed that the principles of data minimization and purpose limitation should inform AI's use of data. Particularly in a world of increasingly competitive technological development, it remains unclear whether the GDPR offers the potential for eventual global convergence or simply a differentiator for Europe.
European Central Bank Publishes Cyber Resilience Oversight Expectations
The European Central Bank (ECB) recently published its final Cyber Resilience Oversight Expectations (CROE) as part of its efforts to enhance cyber awareness and resiliency throughout the financial sector.
On December 3, 2018, the ECB published its final CROE for systems that process, settle and record financial transactions or financial market infrastructures (FMIs). As a part of the ECB's push to enhance the financial sector's cyber resilience, the CROE builds on the 2016 global guidance on cyber resilience for FMIs (the Guidance) and the 2012 principles for FMIs in order to provide detailed steps to help FMIs operationalize the Guidance, outline clear expectations with which overseers might assess FMI compliance and offer a basis for meaningful discussion between FMIs and their overseers. The 62-page CROE is a comprehensive component of the ECB's strategy to mitigate the escalating cyber risks facing FMIs and the domestic and international financial markets in which they operate.
The CROE presents a framework for compliance with five risk management categories (governance, identification, protection, detection, and response and recovery) and three components of risk management programs (testing, situational awareness, and learning and evolving).
Within each category, FMIs are required to reach a certain expectation level, known as a cyber maturity level. The cyber maturity levels build on one another like building blocks and are set out incrementally as follows:
- evolving, the basic standard in which essential capabilities to identify, manage and mitigate cyber risk are established, evolved and sustained across the FMI;
- advancing, the intermediate standard, requiring compliance with both the advancing and evolving levels, in which more advanced tools to proactively manage cyber risks are implemented and integrated across business lines; and
- innovating, the highest standard, requiring compliance with all three maturity levels, in which the FMI is expected to drive innovation in people, processes and technology in order to enhance cyber resilience. Innovating also involves possible information sharing and collaboration with external stakeholders.
Once a maturity level is reached, the CROE emphasizes that FMIs should aim to improve and reach higher levels in line with the specifics of their businesses.
The CROE explicitly designates the minimum cyber maturity levels that different FMIs are expected to reach. Under the Eurosystem oversight function, all prominently important retail payment systems (PIRPS) and other retail payment systems (ORPS) are expected to reach the evolving level; systemically important payment systems (SIPS) and the TARGET2-Securities system (T2S) are expected to reach the advancing level.
Despite specific guidance provided in the CROE, the ECB acknowledges that FMIs may meet expectations in different ways. To account for departures, the ECB put forward the “meet or explain” principle, allowing each FMI to either meet prescribed cyber resilience expectations or explain to the appropriate national regulators how alternative implementation "meets the objective of the underlying expectation."
The CROE also outlines expectations for the appointment of a senior executive in charge of cyber resilience issues within each FMI, normally the chief information security officer (CISO). FMIs should guarantee the CISO's independence through measures prescribed in the CROE.
The ECB intends for the CROE to be applied by the Eurosystem in its oversight of all payment systems, including PIRPS, ORPS and SIPS, and T2S. Furthermore, the CROE may be used by national regulators (in line with national law) overseeing clearing and settlement systems, such as securities settlement systems or central securities depositories and central counterparties.
Cyber risk and resilience oversight will be an area to watch in the new year, and the new expectations set out in the CROE may set a precedent for future frameworks. While some consider the new guidance to be onerous, the “meet or explain” approach affords FMIs more flexibility in their cyber resilience strategies. But that flexibility also means that FMIs may want to evaluate the gaps between their current cyber resilience strategies and their designated expectation levels and, in preparation for conversations with regulators, detail the reasoning for any departures from the CROE requirements.
3 Biros currently is a researcher at Vrije Universiteit, Brussels. He previously served as a legal adviser at the U.S. Mission to the European Union.
This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.