Thu. Dec 1st, 2022

    Editor’s Note: Given the increasing legal and regulatory requirements in the areas of privacy and personal data protection, the following article, published under the Creative Commons Attribution 4.0 License, highlights how anonymized datasets, even when heavily incomplete, may still pose privacy challenges, as generative models can re-identify, or confirm the re-identification of, individuals in anonymized data with a high degree of success.

    Estimating the Success of Re-identifications in Incomplete Datasets Using Generative Models

    An article by Luc Rocher, Julien M. Hendrickx, and Yves-Alexandre de Montjoye, as published in Nature Communications.

    Abstract

    While rich medical, behavioral, and socio-demographic data are key to modern data-driven research, their collection and use raise legitimate privacy concerns. Anonymizing datasets through de-identification and sampling before sharing them has been the main tool used to address those concerns. We here propose a generative copula-based method that can accurately estimate the likelihood that a specific person will be correctly re-identified, even in a heavily incomplete dataset. On 210 populations, our method obtains AUC scores for predicting individual uniqueness ranging from 0.84 to 0.97, with a low false-discovery rate. Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes. Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.

    Introduction

    In the last decade, the ability to collect and store personal data has exploded. With two-thirds of the world population having access to the Internet, electronic medical records becoming the norm, and the rise of the Internet of Things, this is unlikely to stop anytime soon. Collected at scale from financial or medical services, when filling in online surveys or liking pages, this data has an incredible potential for good. It drives scientific advancements in medicine, social science, and AI and promises to revolutionize the way businesses and governments function.

    However, the large-scale collection and use of detailed individual-level data raise legitimate privacy concerns. The recent backlashes against the sharing of NHS [UK National Health Service] medical data with DeepMind and the collection and subsequent sale of Facebook data to Cambridge Analytica are the latest evidence that people are concerned about the confidentiality, privacy, and ethical use of their data. In a recent survey, >72% of U.S. citizens reported being worried about sharing personal information online. In the wrong hands, sensitive data can be exploited for blackmailing, mass surveillance, social engineering, or identity theft.

    De-identification, the process of anonymizing datasets before sharing them, has been the main paradigm used in research and elsewhere to share data while preserving people’s privacy. Data protection laws worldwide consider anonymous data to no longer be personal data, allowing it to be freely used, shared, and sold. Academic journals, for example, are increasingly requiring authors to make anonymous data available to the research community. While standards for anonymous data vary, modern data protection laws, such as the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), consider that each and every person in a dataset has to be protected for the dataset to be considered anonymous. This new, higher standard for anonymization is further made clear by the introduction in GDPR of pseudonymous data: data that does not contain obvious identifiers but might be re-identifiable and is therefore within the scope of the law.

    Yet numerous supposedly anonymous datasets have recently been released and re-identified. In 2016, journalists re-identified politicians in an anonymized browsing history dataset of 3 million German citizens, uncovering their medical information and their sexual preferences. A few months before, the Australian Department of Health publicly released de-identified medical records for 10% of the population only for researchers to re-identify them 6 weeks later. Before that, studies had shown that de-identified hospital discharge data could be re-identified using basic demographic attributes and that diagnostic codes, year of birth, gender, and ethnicity could uniquely identify patients in genomic studies data. Finally, researchers were able to uniquely identify individuals in anonymized taxi trajectories in NYC, bike-sharing trips in London, subway data in Riga, and mobile phone and credit card datasets.

    Statistical disclosure control researchers and some companies dispute the validity of these re-identifications: because datasets are always incomplete, journalists and researchers can never be sure they have re-identified the right person, even if they found a match. They argue that this provides strong plausible deniability to participants and reduces the risks, making such de-identified datasets anonymous, including under GDPR. De-identified datasets can be intrinsically incomplete, e.g., because the dataset only covers patients of one of the hospital networks in a country or because they have been subsampled as part of the de-identification process. For example, the U.S. Census Bureau releases only 1% of its decennial census, and sampling fractions for international censuses range from 0.07% in India to 10% in South American countries. Companies are adopting similar approaches with, e.g., the Netflix Prize dataset including <10% of their users.

    Imagine a health insurance company that decides to run a contest to predict breast cancer and publishes a de-identified dataset of 1,000 people, 1% of its 100,000 insureds in California, including people’s birth date, gender, ZIP code, and breast cancer diagnosis. John Doe’s employer downloads the dataset and finds one (and only one) record matching Doe’s information: male, living in Berkeley, CA (94720), born on January 2nd, 1968, and diagnosed with breast cancer (self-disclosed by John Doe). This record also contains the details of his recent (failed) stage IV treatments. When contacted, the insurance company argues that matching does not equal re-identification: the record could belong to 1 of the 99,000 other people it insures or, if the employer does not know whether Doe is insured by this company or not, to any of the 39.5M other people living in California.
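    To see why the insurer’s “could be anyone” argument is weaker than it sounds, the short sketch below (our back-of-the-envelope illustration, not a calculation from the paper) estimates how many other people could plausibly share John Doe’s ZIP code, gender, and exact birth date. The ZIP-code population of roughly 40,000 and the uniform spread of birth dates over about 80 years are assumptions made purely for illustration.

```python
# Back-of-the-envelope check (ours, not from the paper) of the insurer's
# "could be anyone" argument: how many people are expected to share John Doe's
# ZIP code, gender, and exact birth date? Assumed figures: ~40,000 residents
# in ZIP 94720, a 50/50 gender split, birth dates uniform over ~80 years.

ZIP_POPULATION = 40_000            # assumed residents of ZIP 94720
P_GENDER = 0.5                     # probability of matching "male"
P_BIRTH_DATE = 1 / (80 * 365.25)   # probability of matching one exact birth date

p_match = P_GENDER * P_BIRTH_DATE  # chance a random ZIP resident matches both

# Expected number of *other* residents sharing all three quasi-identifiers.
expected_other_matches = (ZIP_POPULATION - 1) * p_match

# Probability that no other resident matches, i.e. the record can only be his.
p_unique = (1 - p_match) ** (ZIP_POPULATION - 1)

print(f"Expected other matching residents: {expected_other_matches:.2f}")
print(f"Probability the combination is unique in the ZIP: {p_unique:.2f}")
```

    Under these rough assumptions, fewer than one other resident is expected to match all three attributes, and the combination is unique roughly half the time, so the ambiguity suggested by “39.5M people living in California” largely evaporates once the quasi-identifiers are taken into account. The paper’s model estimates this kind of correctness likelihood rigorously from the data itself.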

    Our paper shows how the likelihood that a specific individual has been correctly re-identified can be estimated with high accuracy even when the anonymized dataset is heavily incomplete. We propose a generative graphical model that can be accurately and efficiently trained on incomplete data. Using socio-demographic, survey, and health datasets, we show that our model exhibits a mean absolute error (MAE) of 0.018 on average in estimating population uniqueness and an MAE of 0.041 when the model is trained on only a 1% population sample. Once trained, our model allows us to predict whether the re-identification of an individual is correct with an average false-discovery rate of <6.7% for a 95% threshold (ξ̂ₓ > 0.95) and an error rate 39% lower than the best achievable population-level estimator. With population uniqueness increasing fast with the number of attributes available, our results show that the likelihood that a re-identification is correct, even in a heavily sampled dataset, can be accurately estimated, and is often high. Our results reject the claims that, first, re-identification is not a practical risk and, second, that sampling or releasing partial datasets provides plausible deniability. Moving forward, they question whether current de-identification practices satisfy the anonymization standards of modern data protection laws such as GDPR and CCPA and emphasize the need to move, from a legal and regulatory perspective, beyond the de-identification release-and-forget model.
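    To illustrate the underlying idea, the sketch below estimates individual uniqueness from a small sample under a deliberately simplified independence assumption: if a random person has probability p_x of sharing all of a record’s attribute values, the chance that none of the other N − 1 people in the population does is (1 − p_x)^(N − 1). This is our own toy simplification with made-up attributes; the paper’s copula-based model additionally captures correlations between attributes and is evaluated on real socio-demographic data.

```python
import numpy as np
import pandas as pd

# Minimal sketch (our simplification, not the authors' copula model): estimate
# individual uniqueness from a 1% sample by (a) estimating how likely a random
# person is to share a record's attribute values, assuming attributes are
# independent, and (b) computing the chance that none of the other N - 1
# people in the population shares them: (1 - p_x) ** (N - 1).

rng = np.random.default_rng(0)
N_POPULATION = 100_000  # size of the full, unreleased population

# Toy 1% sample with a handful of categorical quasi-identifiers (all invented).
sample = pd.DataFrame({
    "zip":        rng.choice([f"z{i}" for i in range(50)], size=1_000),
    "gender":     rng.choice(["m", "f"], size=1_000),
    "birth_year": rng.integers(1930, 2002, size=1_000),
    "children":   rng.integers(0, 5, size=1_000),
    "marital":    rng.choice(["single", "married", "divorced", "widowed"], size=1_000),
    "employment": rng.choice(["employed", "self", "student", "retired", "other"], size=1_000),
})

def uniqueness(record: pd.Series, attributes: list[str]) -> float:
    """Estimated probability that `record` is unique in the full population."""
    p_x = 1.0
    for attr in attributes:
        p_x *= (sample[attr] == record[attr]).mean()  # empirical marginal frequency
    return (1.0 - p_x) ** (N_POPULATION - 1)

target = sample.iloc[0]  # try to "re-identify" the first sampled individual
for k in range(1, len(sample.columns) + 1):
    attrs = list(sample.columns[:k])
    print(f"{k} attribute(s): estimated uniqueness = {uniqueness(target, attrs):.3f}")
```

    Because real attributes are correlated (ZIP code, marital status, and employment are not independent), multiplying marginal frequencies misestimates p_x, which is why the authors fit a copula instead. The sketch nonetheless shows why uniqueness, and with it the likelihood that a match is correct, climbs quickly as attributes accumulate.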

    Read the complete article at Estimating the Success of Re-identifications in Incomplete Datasets Using Generative Models

    Complete Paper (Including Hyperlinked Reference Section)


    Direct Access to Complete Paper (PDF)


    Source: ComplexDiscovery

     

