Data re-identification
The de-identification process involves masking, generalizing or deleting both direct and indirect identifiers.
More and more data are becoming publicly available over the Internet. These data are released after applying some anonymization techniques like removing personally identifiable information (PII) such as names, addresses and social security numbers to ensure the sources' privacy. This assurance of privacy allows the government to legally share limited data sets with third parties without requiring written permission. Such data has proved to be very valuable for researchers, particularly in health care.
GDPR-compliant pseudonymization seeks to reduce the risk of re-identification through the use of separately kept "additional information". The approach is based on an expert evaluation of a dataset that designates some identifiers as "direct" and others as "indirect". Proponents of this approach argue that re-identification can be avoided by limiting access to the "additional information", which is kept separately by the controller. Because access to this separately kept "additional information" is, in theory, required for re-identification, the controller can restrict attribution of data to a specific data subject to lawful purposes only. This approach is controversial, as it fails if there are additional datasets that can be used for re-identification. Such datasets may be unknown to those certifying the GDPR-compliant pseudonymization, or may not exist at the time of the pseudonymization but come into existence later.
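The separation of "additional information" can be illustrated with a minimal sketch (field names and data are hypothetical, not a GDPR-certified implementation): direct identifiers are swapped for random tokens, and the token-to-identity table is held separately by the controller.

```python
import secrets

def pseudonymize(records, direct_identifiers=("name", "ssn")):
    """Replace direct identifiers with random tokens; return the
    pseudonymized records and the separately kept lookup table
    (the GDPR-style "additional information")."""
    lookup = {}   # token -> original direct identifiers; held by the controller
    pseudonymized = []
    for record in records:
        token = secrets.token_hex(8)
        lookup[token] = {k: record[k] for k in direct_identifiers if k in record}
        kept = {k: v for k, v in record.items() if k not in direct_identifiers}
        kept["token"] = token
        pseudonymized.append(kept)
    return pseudonymized, lookup

records = [{"name": "Alice", "ssn": "123-45-6789", "zip": "02139", "diagnosis": "flu"}]
pseud, lookup = pseudonymize(records)
# Indirect identifiers such as the ZIP code survive pseudonymization,
# which is why linkage against external datasets remains possible
# even without access to the lookup table.
```

The sketch also shows the weakness discussed above: the indirect identifiers left in `pseud` can still be joined against external datasets the controller never sees.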
Legal protections of data in the United States
Existing privacy regulations typically protect information that has been modified so that the data is deemed anonymized, or de-identified. For financial information, the Federal Trade Commission permits its circulation if it is de-identified and aggregated.
Educational records
In terms of university records, authorities at both the state and federal level have shown an awareness of privacy issues in education records.
Medical records
State laws have been enacted to ban data mining of medical information, but they were struck down by federal courts in Maine and New Hampshire on First Amendment grounds. In another case, a federal court described concerns about patient privacy as "illusive" and did not recognize the risks of re-identification.[3]
Biospecimen
The Notice of Proposed Rulemaking published by the U.S. Department of Health and Human Services proposed changes to the Common Rule that would affect research use of biospecimens.
Re-identification efforts
There have been a sizable number of successful re-identification attempts in different fields. Even if it is not easy for a layperson to break anonymity, once the steps for doing so are disclosed and learned, no advanced knowledge is needed to access information in a database. Sometimes, technical expertise is not even needed if a population has a unique combination of identifiers.[3]
Health records
In the mid-1990s, a government agency in Massachusetts called the Group Insurance Commission (GIC), which purchased health insurance for employees of the state, decided to release records of hospital visits to any researcher who requested the data, at no cost. The GIC assured that patient privacy was not a concern, since it had removed identifiers such as names, addresses, and Social Security numbers. However, information such as ZIP code, birth date, and sex remained untouched. The GIC's assurance was reinforced by the then governor of Massachusetts, William Weld. Latanya Sweeney, a graduate student at the time, set out to pick the governor's records out of the GIC data. By combining the GIC data with the voter database of the city of Cambridge, which she purchased for 20 dollars, she discovered Governor Weld's record with ease.[9]
In 1997, a researcher successfully de-anonymized medical records using voter databases.[3]
In 2001, Professor Latanya Sweeney again used anonymized hospital visit records and voting records, this time in the state of Washington, and successfully matched individuals 43% of the time.[10]
Existing algorithms can be used to re-identify patients from prescription drug information.[3]
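Sweeney's attack generalizes to any join on shared quasi-identifiers. A minimal sketch with toy data (all names, fields, and values here are fictional illustrations): build an index of the identified dataset keyed on the quasi-identifier combination, then report the "anonymized" records whose combination is unique.

```python
def linkage_attack(anonymized, identified, keys=("zip", "dob", "sex")):
    """Link 'anonymized' records to named records that share the same
    quasi-identifier combination; report only unambiguous (unique) matches."""
    index = {}
    for person in identified:
        index.setdefault(tuple(person[k] for k in keys), []).append(person)
    matches = {}
    for record in anonymized:
        candidates = index.get(tuple(record[k] for k in keys), [])
        if len(candidates) == 1:   # combination is unique in the identified set
            matches[candidates[0]["name"]] = record
    return matches

# Toy data: one "anonymized" hospital record, two named voter-roll entries.
hospital = [{"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "flu"}]
voters = [{"name": "Pat Doe", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
          {"name": "Jane Roe", "zip": "02139", "dob": "1962-01-15", "sex": "F"}]
```

The attack needs nothing beyond an equality join; its success depends only on how often the quasi-identifier combination is unique in the population.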
Consumer habits and practices
Two researchers at the
In 2006, after AOL published its users' search queries, data that had been anonymized prior to the public release, New York Times reporters successfully re-identified individuals by examining groups of searches made by anonymized users.[3] AOL had attempted to suppress identifying information, including usernames and IP addresses, but had replaced these with unique identification numbers to preserve the utility of the data for researchers. After the release, bloggers pored over the data, trying either to identify specific users or to point out entertaining, depressing, or shocking search queries, examples of which include "how to kill your wife", "depression and medical leave", and "car crash photos". Two reporters, Michael Barbaro and Tom Zeller, were able to track down a 62-year-old widow named Thelma Arnold by recognizing clues to her identity in the search history of User 4417749. Arnold acknowledged that she was the author of the searches, confirming that re-identification is possible.[9]
Location data
Location data - series of geographical positions in time that describe a person's whereabouts and movements - is a class of personal data that is particularly hard to keep anonymous. Location reveals recurring visits to frequently attended places of everyday life such as home, workplace, shopping, and healthcare, as well as specific spare-time patterns.[14] Merely removing a person's identity from location data will not remove identifiable patterns such as commuting rhythms, sleeping places, or workplaces. By mapping coordinates onto addresses, location data is easily re-identified[15] or correlated with a person's private life contexts. Streams of location information play an important role in the reconstruction of personal identifiers from smartphone data accessed by apps.[16]
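The recurring-visit patterns described above can be exploited with very little code. A sketch under assumed conventions (the hour thresholds and the `(timestamp, cell)` trace format are illustrative): the most frequent nighttime cell is a strong guess for "home", the most frequent working-hours cell for "work".

```python
from collections import Counter
from datetime import datetime

def infer_anchor_points(trace, night=(21, 7), work=(9, 17)):
    """Guess home/work cells from (ISO timestamp, cell id) pairs by
    counting visits during nighttime vs. working hours."""
    at_home, at_work = Counter(), Counter()
    for timestamp, cell in trace:
        hour = datetime.fromisoformat(timestamp).hour
        if hour >= night[0] or hour < night[1]:
            at_home[cell] += 1          # nighttime visit -> likely home
        elif work[0] <= hour < work[1]:
            at_work[cell] += 1          # working-hours visit -> likely workplace
    home = at_home.most_common(1)[0][0] if at_home else None
    office = at_work.most_common(1)[0][0] if at_work else None
    return home, office
```

Mapping the two returned cells to street addresses, as noted above, is usually enough to re-identify the person behind an "anonymous" trace.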
Court decisions
In 2019, Professor Kerstin Noëlle Vokinger and Dr. Urs Jakob Mühlematter, two researchers at the University of Zurich, analyzed cases of the Federal Supreme Court of Switzerland to assess which pharmaceutical companies and which medical drugs were involved in legal actions against the Federal Office of Public Health (FOPH) regarding pricing decisions of medical drugs. In general, involved private parties (such as pharmaceutical companies) and information that would reveal the private party (for example, drug names) are anonymized in Swiss judgments. The researchers were able to re-identify 84% of the relevant anonymized cases of the Federal Supreme Court of Switzerland by linking information from publicly accessible databases.[17][18] This achievement was covered by the media and sparked a debate about whether and how court cases should be anonymized.[19][20]
Concern and consequences
In 1997, Latanya Sweeney showed, using U.S. Census summary data, that up to 87 percent of the U.S. population could be uniquely identified by the combination of their five-digit ZIP code, gender, and date of birth.
Unauthorized re-identification on the basis of such combinations does not require access to separately kept "additional information" that is under the control of the data controller, as is now required for GDPR-compliant pseudonymization.
Individuals whose data is re-identified are also at risk of having their information, with their identity attached to it, sold to organizations they do not want possessing private information about their finances, health or preferences. The release of this data may cause anxiety, shame or embarrassment. Once an individual's privacy has been breached as a result of re-identification, future breaches become much easier: once a link is made between one piece of data and a person's real identity, any association between the data and an anonymous identity breaks the anonymity of the person.[3]
Re-identification may expose companies and institutions which have pledged to assure anonymity to increased tort liability and cause them to violate their internal policies, public privacy policies, and state and federal laws, such as laws concerning financial confidentiality or medical privacy, by having released information to third parties that can identify users after re-identification.[3]
Remedies
To address the risks of re-identification, several proposals have been suggested:
- Higher standards and a uniform definition of de-identification while retaining data utility: the definition should balance privacy protections, to reduce re-identification risk, against companies' reluctance to delete data [23]
- Heightened privacy protections of anonymized information [3]
- Tighter security for databases that store anonymized information [3]
- A strong ban on malicious re-identification; passage of broader anti-discrimination and privacy legislation that ensures privacy protections and encourages participation in data-sharing projects; and establishment of uniform data-protection standards in academic communities, such as the scientific community, to minimize privacy violations [24]
- Creation of data-release policies: making sure de-identification rhetoric is accurate, drawing up contracts that prohibit re-identification attempts and dissemination of sensitive information, establishing data enclaves, and utilizing data-based strategies to match required protection standards to the level of risk.[25]
- Implementation of Differential Privacy on requested data sets
- Generation of Synthetic Data that exhibits the statistical properties of the raw data, without allowing real individuals to be identified
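Of the remedies listed, differential privacy is the most precisely specified. A minimal sketch of the Laplace mechanism for a counting query (the epsilon value and predicate below are illustrative, not a production-grade implementation): a counting query has sensitivity 1, so adding Laplace noise of scale 1/ε to the true count satisfies ε-differential privacy.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise
    of scale 1/epsilon (a counting query has sensitivity 1)."""
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) by the inverse-CDF transform.
    u = random.random() - 0.5
    u = max(u, -0.5 + 1e-12)            # avoid log(0) at the boundary
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

A smaller ε means more noise and stronger privacy; the released value no longer reveals whether any single individual was in the dataset.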
While a complete ban on re-identification has been urged, enforcement would be difficult. There are, however, ways for lawmakers to combat and punish re-identification efforts, if and when they are exposed: pair a ban with harsher penalties and stronger enforcement by the Federal Trade Commission and the Federal Bureau of Investigation; grant victims of re-identification a right of action against those who re-identify them; and mandate software audit trails for people who utilize and analyze anonymized data. A small-scale re-identification ban may also be imposed on trusted recipients of particular databases, such as government data miners or researchers. This ban would be much easier to enforce and may discourage re-identification.[9]
Examples of de-anonymization
- "Researchers at Université catholique de Louvain, in Belgium, analyzed data on 1.5 million cellphone users in a small European country over a span of 15 months and found that just four points of reference, with fairly low spatial and temporal resolution, was enough to uniquely identify 95 percent of them. In other words, to extract the complete location information for a single person from an "anonymized" data set of more than a million people, all you would need to do is place him or her within a couple of hundred yards of a cellphone transmitter, sometime over the course of an hour, four times in one year. A few Twitter posts would probably provide all the information you needed, if they contained specific information about the person's whereabouts."[26]
- "Here, we report that surnames can be recovered from personal genomes by profiling short tandem repeats on the Y chromosome (Y-STRs) and querying recreational genetic genealogy databases. We show that a combination of a surname with other types of metadata, such as age and state, can be used to triangulate the identity of the target."[27]
See also
References
- ISBN 978-0-387-23473-1.
- S2CID 9384220.
- ^ a b c d e f g h i j k l m Porter, Christine (2008). "Constitutional and Regulatory: De-Identified Data and Third Party Data Mining: The Risk of Re-Identification of Personal Information". Shidler Journal of Law, Commerce & Technology. 5 (1).
- SSRN 1495788.
- doi:10.15779/Z385Z78.
- S2CID 77790820.
- ^ Groden, Samantha; Martin, Summer; Merrill, Rebecca (2016). "Proposed Changes to the Common Rule: A Standoff Between Patient Rights and Scientific Advances?". Journal of Health & Life Sciences Law. 9 (3).
- ^ 24 C.F.R. § .104 2017.
- ^ a b c d Ohm, Paul (2010). "Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization". UCLA Law Review.
- ^ Sweeney L. Only You, Your Doctor and Many Others May Know. Technology Science. 2015092903. 25 September 2015.
- ^ Rouse, Margaret. "de-anonymization (deanonymization)". WhatIs.com. Retrieved 19 January 2014.
- ^ Narayanan, Arvind; Shmatikov, Vitaly. "Robust De-anonymization of Large Sparse Datasets" (PDF). Retrieved 19 January 2014.
- arXiv:cs/0610105.
- ISBN 978-1-4020-6913-0
- PMID 31337762.
- ISBN 978-3-88579-671-8.
- ^ Vokinger, Kerstin Noëlle; Mühlematter, Urs Jakob (2 September 2019). "Identifikation von Gerichtsurteilen durch "Linkage" von Daten(banken)". Jusletter (990).
- ^ Vokinger, Kerstin Noëlle; Mühlematter, Urs Jakob. "Re-Identifikation von Gerichtsurteilen durch "Linkage" von Daten(banken)".
- ^ Chandler, Simon (4 September 2019). "Researchers Use Big Data And AI To Remove Legal Confidentiality". Forbes. Retrieved 10 December 2019.
- ^ "SRF Tagesschau". SRF Swiss Radio and Television. 2 September 2019. Retrieved 10 December 2019.
- ^ "How Unique am I?". Data Privacy Lab, Harvard University. Retrieved 22 July 2021.
- ^ Sweeney, Latanya. "Simple Demographics Often Identify People Uniquely" (PDF). Carnegie Mellon University, Data Privacy Working Paper 3. Retrieved 22 July 2021.
- ^ Lagos, Yianni. 2014. "Symposium: Taking the Personal Out of Data: Making Sense of De-identification." Indiana Law Review. Retrieved 26 March 2017.
- ^ Ahn, Sejin. 2015. "Comment: Whose Genome Is It Anyway?: Re-identification and Privacy Protection in Public and Participatory Genomics." San Diego Law Review. Retrieved 26 March 2017.
- ^ Rubinstein, Ira S, and Hartzog, Woodrow. 2016. "Anonymization and Risk" Washington Law Review. Retrieved 26 March 2017.
- ^ Hardesty, Larry (27 March 2013). "How hard is it to 'de-anonymize' cellphone data?". MIT news. Retrieved 14 January 2015.
- Wikidata Q29619963.