
Content Assessment: Serving Mankind? Automated Decision Making Under the GDPR

Information - 96%
Insight - 98%
Relevance - 92%
Objectivity - 91%
Authority - 93%

Overall Score: 94% (Excellent)

A short percentage-based assessment of the qualitative benefit of the recent report examining Automated Decision Making (ADM) through the lens of the GDPR.

Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.

To submit recommendations for consideration and inclusion in ComplexDiscovery’s cyber, data, and legal discovery-centric service, product, or research announcements, contact us today.


Background Note: On May 17, 2022, the Future of Privacy Forum launched a comprehensive Report analyzing case law under the General Data Protection Regulation (GDPR) as applied to real-life cases involving Automated Decision Making (ADM). The Report is informed by extensive research covering more than 70 court judgments, decisions from Data Protection Authorities (DPAs), pieces of specific guidance, and other policy documents issued by regulators. According to the announcement of this new report, the GDPR has a particular provision applicable to decisions based solely on automated processing of personal data, including profiling, which produce legal effects concerning an individual or similarly affect that individual: Article 22. This provision enshrines one of the “rights of the data subject,” namely the right not to be subject to decisions of that nature (i.e., ‘qualifying ADM’), which DPAs have interpreted as a prohibition rather than a prerogative that individuals can exercise. However, the Report asserts that the GDPR’s protections for individuals against forms of automated decision-making and profiling go significantly beyond Article 22. Given the potential privacy challenges of ADM, this report may be beneficial for cybersecurity, information governance, and legal discovery professionals seeking to better understand those challenges and the current case law related to ADM.

Future of Privacy Forum Research Report*

Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities

By Sebastião Barros Vale and Gabriela Zanfir-Fortuna

Report Extract – Background and Overview

The European Union’s (EU) General Data Protection Regulation (GDPR) establishes one of its foundational rationales in Recital 4, stating that “the processing of personal data should be designed to serve mankind.” This refers to any processing of personal data, from its collection to its various uses: as simple as keeping a record of one’s purchases at their favorite grocery store, and as complex as using personal data for automated decision-making, such as pre-screening candidates for a job through the use of algorithms, or having personal data result from complex processing, like creating a profile of a grocery store customer on the basis of their purchase history. The same underlying rationale of the GDPR applies if personal data are in any way processed as part of an Artificial Intelligence (AI) or Machine Learning (ML) application — either as input or output of such processing.

While all the provisions of the GDPR apply to such complex processing of personal data — from the obligation of the controller to have a lawful ground for processing in place, to the obligation to ensure that the processing is done fairly and transparently, to more technical obligations like ensuring an adequate level of data security and ensuring that the protection of personal data is baked into the design of a processing operation — one particular provision of the GDPR is specifically applicable to decisions “based solely on automated processing [of personal data — n.], including profiling, which produces legal effects” concerning an individual “or similarly affects” that individual: Article 22.

This provision enshrines one of the “rights of the data subject,” particularly “the right not to be subject to a decision based solely on automated processing” which has a legal or similarly significant effect on the individual. All automated decision-making (ADM) that meets these criteria as defined in Article 22 GDPR is referred to as “qualifying ADM” in this Report.

Though apparently introduced in the GDPR to respond to the current age of algorithms, AI, and ML systems, this provision in fact already existed under the former EU Data Protection Directive adopted in 1995, and it has its roots in a similar provision of the first French data protection law, adopted in the late 1970s. However, it was only scarcely enforced under the previous law. Cases began to pick up after the GDPR became applicable in 2018, as automated decision-making became ubiquitous in daily life, and individuals now appear increasingly interested in having their right under Article 22 applied.

This Report outlines how national courts and Data Protection Authorities (DPAs) in the EU/European Economic Area (EEA) and the UK have interpreted and applied the relevant GDPR provisions on ADM so far, as well as the notable trends and outliers in this respect. To compile the Report, we looked into publicly available judicial and administrative decisions and regulatory guidelines across EU/EEA jurisdictions and the UK, which remained subject to EU law until December 2020 and whose rules on ADM are still an implementation of the GDPR at the time of writing this Report. To complement the facts of the cases discussed, we also looked into press releases, annual reports, and media stories. This research is limited to documents released through April 2022, and it draws from more than 70 cases — 19 court rulings and more than 50 enforcement decisions, individual opinions, or pieces of general guidance issued by DPAs — spanning 18 EEA Member States, the UK, and the European Data Protection Supervisor (EDPS). The main cases and documents used for reference are listed in Annex I. The Report primarily contains case summaries, as well as relevant guidelines, with the cases explored in detail numbered consistently so that all the notes on a particular case can be easily identified throughout the document (e.g., Case 3 is referred to several times, under different sections).

The cases we identified often stem from situations of daily life where ADM is increasingly playing a significant role. For instance, one cluster of cases involves students and educational institutions. These cases range from the use of live Facial Recognition (FR) technologies to manage access to school premises and record attendance, to online proctoring, and further to fully automated grading based both on the individual profile of a student and on the profile of their school district, used as a substitute for high school graduation exams during the COVID-19 pandemic.

Another significant cluster of cases has at its core the situation of gig workers and the way shifts, gigs, income, and penalties are distributed to them through their respective platforms. A significant number of cases challenge automated credit scoring. The way in which governments distribute social benefits, such as unemployment benefits, and manage tax avoidance and potential fraud is also increasingly the subject of cases, whether individual challenges or ex officio investigations. We also encountered cases where the underlying ADM was challenged in situations like the issuing of gun licenses, the scraping of publicly available sources to build an FR product, or the profiling of prospective clients by a bank.

Our analysis will show that the GDPR as a whole is relevant for ADM cases and has been effectively applied to protect the rights of individuals in such cases, even in situations where the ADM at issue does not meet the high threshold established by Article 22 GDPR and the right not to be subject to solely automated decision-making is not applicable. For instance, without even analyzing whether Article 22 applied in those cases, Courts and DPAs have found that the deployment of live FR applications to manage access to school premises and monitor attendance was unlawful under other provisions of the GDPR, because it did not have a lawful ground for processing in place and did not respect the requirements of necessity and proportionality, thus protecting the rights of students in France and Sweden (see Cases 30 and 31).

A comparative reading of relevant cases will also show how complex transparency requirements are considered in practice, effectively translating into a right of individuals to receive a high-level explanation of the parameters that led to an individual automated decision concerning them, or of how profiling was applied to them.

The principles of lawfulness and fairness are applied separately in ADM-related cases, with the principle of fairness gaining momentum in enforcement. For instance, in one of the most recent cases covered in the Report, the Dutch DPA found that the algorithmic system used by the government to automatically detect fraud in social benefits requests breached the principle of fairness, since the processing was considered “discriminatory” for having taken into account the dual nationality of the people requesting childcare benefits.

Another important point that surfaced from our research is that, when enforcers assess the threshold of applicability for Article 22 (“solely” automated, and “legal or similarly significant effect” of ADM on individuals), the criteria used become increasingly sophisticated as the body of case law grows. For example, Courts and DPAs are looking at the entire organizational environment where an ADM is taking place, from the organizational structure to reporting lines and the effective training of staff, in order to decide whether a decision was “solely” automated or had meaningful human involvement. Similarly, when assessing the second criterion for the applicability of Article 22, enforcers are looking at whether the input data for an automated decision includes inferences about the behavior of individuals, and whether the decision affects the conduct and choices of the persons targeted, among other multi-layered criteria.

Finally, we should highlight that in virtually all cases where an ADM process was found to be unlawful, DPAs went beyond issuing administrative fines by also ordering specific measures that varied in scope: orders to halt practices, orders to delete the illegally collected personal data, and orders prohibiting the further collection of personal data.

All of the sections of the Report are accompanied by summaries of cases and brief analyses pointing out commonalities and outliers. The Report first explores the context and key elements of Article 22 and other relevant GDPR provisions that have been applied in ADM cases, all of them reflected in concrete examples (Section 1). Then, it delves into how the two-pronged threshold required by Article 22 GDPR has been interpreted and applied in practice (Section 2). Next, it brings forward how Courts and DPAs have applied Article 22 in sectoral areas, namely employment, live facial recognition, and credit scoring matters (Section 3). Finally, the Conclusion lays out some of the legal interpretation and application trends that surfaced from our research and highlights remaining areas of legal uncertainty that may be clarified in the future by regulators or the CJEU (Section 4).

Read the original announcement.


Complete Report – Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities (PDF)


Read the original report.


*Shared with permission under Creative Commons, Attribution 4.0 International license.

Source: ComplexDiscovery

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.


Have a Request?

If you have a request regarding information or offerings, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.