Data Protection
Given the scope and categories of data processed in the Human Brain Project, we have appointed a Data Protection Officer and produced several resources to support compliance.
The General Data Protection Regulation (GDPR) requires the designation of a Data Protection Officer (DPO) in some circumstances. Given the scope and categories of personal data processed in the HBP, the Project has appointed a DPO.
The DPO is a professional in the field of data protection and works with HBP Partners to facilitate compliance with the GDPR. The role of DPO includes consultation on data processing activities and providing advice and recommendations on compliance with applicable laws. In particular, the DPO assists in carrying out data protection impact assessments (DPIA), among other compliance tasks.
In addition to data protection compliance, the DPO has a communication function and consults with data subjects, HBP Partners and leadership, and supervisory authorities. The DPO is also a part of the Data Governance Working Group.
HBP & EU Resources
Utilise the resources we have developed and collected to support data protection in the Human Brain Project.
Applying the GDPR
When does the GDPR apply?
The GDPR applies to “the processing of personal data”. In determining whether activities fall within the material scope of the GDPR, two elements must be evaluated. First, the data must be “processed”. The processing of personal data includes “...any operation or set of operations which is performed on personal data…”. Processing has a very broad definition and is likely to include many HBP operations; for instance, processing includes simply storing data. In other words, data protection law takes a much broader view of “processing” than is generally used by technologists. The second necessary element for the GDPR to be applicable is that the data must be “personal”. The intention of focusing on personal data is to protect the rights of the “data subject”, that is, the “identified or identifiable natural person” to whom the data being processed and collected refer. This protection is limited to living natural persons and thus does not extend to legal or deceased persons, or to anonymised data.
What are personal data?
In determining whether data are personal, EU data protection law takes an expansive view. Pursuant to the GDPR, personal data include “any information relating to an identified or identifiable natural person”. This includes names, identification numbers, location data, online identifiers such as IP addresses, and factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. Therefore, if your WP holds contact information such as names and email addresses, those data are considered personal and are regulated by the GDPR. In addition to human data and medical records, survey data and the results of questionnaires will also be considered personal data. The GDPR also applies additional protection to special categories of personal data. These include information regarding race, ethnic origin, political affiliation, trade union membership, genetics, biometrics used for identification, and health data, amongst others.
Although expansive, the inclusion of ‘any information’ does not mean that all data stored on cloud services are personal data. For example, commercial data or trade secrets, although possibly subject to other restrictions, are not personal data and thus fall beyond the scope of the GDPR. The same is true of animal data. However, records of the researchers submitting such data may themselves be personal data. Therefore, WPs should be careful before concluding that they do not have personal data, even if their primary research area involves animal models, robotics, or another data type.
Anonymous Data
The General Data Protection Regulation (GDPR) applies only to information concerning an identified or identifiable natural person. If data are anonymised, they are no longer considered personal and are thus outside the scope of the GDPR. In other words, if data in a Work Package (WP) are anonymous, the GDPR does not apply and the data can be processed for research purposes without the restrictions of data protection law.
However, given the difficulty in creating truly anonymous data, the bar for anonymisation has been set extremely high in the GDPR. To determine whether a person is identifiable, we must consider “all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.” To make this determination, we must consider all “objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.” In making this determination, researchers must consider the robustness of the anonymisation techniques they apply.
Human Brain Project's Policy
If HBP human data may require re-identification at some point, the data are not anonymised for the purposes of the GDPR, and the GDPR remains applicable. Personal data that have been pseudonymised (e.g. key-coded or encrypted) enjoy certain advantages under the GDPR, for example in the context of data breach reporting. Whenever possible, personal data in the HBP should be pseudonymised. However, the GDPR still applies to pseudonymised data, because the data can be attributed to a natural person through the use of additional information, such as a decryption key.
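As an illustrative sketch only (not an HBP implementation), pseudonymisation can be as simple as replacing direct identifiers with keyed tokens, with the key held separately from the data set. The field names and key handling below are hypothetical:

```python
import hmac
import hashlib

def pseudonymise(record: dict, key: bytes, id_fields=("name", "email")) -> dict:
    """Replace direct identifiers with keyed HMAC-SHA256 tokens.

    The key plays the role of the 'additional information' the GDPR
    describes: anyone holding it can re-link tokens to subjects, so it
    must be stored separately from the pseudonymised data.
    """
    out = dict(record)
    for field in id_fields:
        if field in out:
            token = hmac.new(key, str(out[field]).encode(), hashlib.sha256)
            out[field] = token.hexdigest()[:16]  # truncated token for readability
    return out

secret_key = b"store-me-separately"  # hypothetical key; never stored with the data
record = {"name": "Jane Doe", "email": "jane@example.org", "diagnosis": "X"}
print(pseudonymise(record, secret_key))
```

Because the keyed hash is deterministic, the same subject maps to the same token across data sets, preserving linkability for research, while re-identification requires the separately held key. This is exactly why such data remain personal data under the GDPR.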
Resources
Working Party 29 Opinion on Anonymisation Techniques (in particular Annex I on page 26)
ENISA: Privacy by design in big data (especially pages 28-38)
Anonymisation Techniques
The main anonymisation techniques applied in data protection law are randomisation and generalisation. Regardless of the technique applied (e.g. noise addition, permutation, differential privacy, aggregation, k-anonymity, l-diversity, t-closeness, etc.), three main questions should be considered:
- Is it still possible to single out an individual?
- Is it still possible to link records relating to an individual?
- Can information be inferred concerning an individual?
In its opinion on anonymisation techniques, the Working Party 29—a board of data protection authorities in the EU, also called the Article 29 Working Party—focuses particularly on appropriate engineering and application of the techniques. On that foundation, the Working Party 29 provides that “the optimal solution should be decided on a case-by-case basis, possibly by using a combination of different techniques…”. The table below shows the strengths and weaknesses of some of the most common anonymisation techniques.
Technique | Is Singling out still a risk? | Is Linkability still a risk? | Is Inference still a risk? |
---|---|---|---|
Pseudonymisation | Yes | Yes | Yes |
Noise addition | Yes | May not | May not |
Substitution | Yes | Yes | May not |
Aggregation or K-anonymity | No | Yes | Yes |
L-diversity | No | Yes | May not |
Differential privacy | May not | May not | May not |
Hashing/Tokenisation | Yes | Yes | May not |
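To make the table's ‘singling out’ criterion concrete, the sketch below (illustrative Python, not HBP code; the field names are hypothetical) computes the k of a small table whose quasi-identifiers have already been generalised into bands:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the k of a dataset: the size of the smallest group of rows
    sharing the same quasi-identifier values. Each individual is then
    hidden among at least k records, addressing the singling-out risk.
    """
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

rows = [
    {"age_band": "30-39", "postcode": "10**", "diagnosis": "A"},
    {"age_band": "30-39", "postcode": "10**", "diagnosis": "B"},
    {"age_band": "40-49", "postcode": "10**", "diagnosis": "A"},
    {"age_band": "40-49", "postcode": "10**", "diagnosis": "A"},
]
print(k_anonymity(rows, ["age_band", "postcode"]))  # → 2
```

Note that even with k = 2, every record in the 40-49 group shares the diagnosis “A”, so an attacker who knows someone is in that group learns their diagnosis: this is the inference risk the table marks as remaining for k-anonymity, and the motivation for l-diversity.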
Although there is no prescriptive standard for anonymisation in the EU, the Working Party 29 states that anonymisation requires “irreversibility preventing identification of the data subject”, taking into account all the means “reasonably likely to be used” for identification. In short, effectively using this exception in practice is extremely difficult. Although this ‘zero risk’ approach has been criticised, it is the current position taken by regulators.
Even once a determination has been made that data are anonymous, it must be re-evaluated and updated in light of technological changes such as big data. In other words, anonymisation is not a one-off exercise and requires regular re-evaluation and risk assessment by the data controllers employing the techniques.
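To illustrate the ‘noise addition’ and ‘differential privacy’ rows of the table above, here is a minimal, illustrative sketch (not HBP code) of the Laplace mechanism applied to a counting query, one of the standard randomisation techniques:

```python
import random

def noisy_count(true_count: int, epsilon: float, rng=random) -> float:
    """Return a differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so noise drawn from Laplace(scale=1/epsilon)
    masks any single individual's presence. Smaller epsilon means more
    noise and stronger protection against inference. The difference of
    two Exp(rate=epsilon) draws is Laplace-distributed with scale 1/epsilon.
    """
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return true_count + noise

print(noisy_count(100, epsilon=1.0))  # e.g. 101.3 — varies per run
```

The epsilon value here is a hypothetical parameter choice; in practice, calibrating it requires balancing the utility of the released statistic against the acceptable privacy risk, which is exactly the case-by-case judgement the Working Party 29 describes.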
Data Processors & Controllers
Roles & Responsibilities under the GDPR
Two of the most important roles in EU data protection law are those of the “data processor” and the “data controller”. Understanding these concepts and their interactions is essential to applying the GDPR. The controller/processor relationship largely boils down to an allocation of responsibility. Under the GDPR, data controllers have the primary responsibility of treating the personal data entrusted to them in conformance with the law.
Data Controller
The primary component necessary to meet the controller designation is that the natural or legal person makes a specific determination regarding “the purposes and means” of data processing. In evaluating whether a party determines the purposes and means of processing, the level of influence they have over the processing activities is critical. Specifically, does the actor determine the ‘how’ and the ‘why’ of data processing? If a party makes primary decisions about data, such as use and access requirements and length of storage, among other core control elements, they are most likely acting as a data controller.
In the HBP, the WPs will generally be the “data controllers” for the data collected and processed in their research. For instance, if a WP collects and processes human data, it will be considered the data controller for those data and will be responsible for complying with the GDPR. Even when a project primarily handles “account-related data”, such as names and contact information, the WP will be responsible for meeting data protection requirements (e.g. accountability, documentation, deletion). When WPs introduce data into HBP platforms, such as the MIP or the NIP, they are responsible for making certain the data are compliant with the GDPR.
Joint Data Controller
If the purposes and means of processing are determined by various entities working in concert, they may be considered joint controllers, with responsibility shared between them. Joint controllers have some flexibility in allocating obligations and responsibilities, as long as full compliance is achieved.
Data Processor
To qualify as a processor, two conditions must be met. First, the party must be a separate legal entity from the controller. Second, the processor must process data “only on documented instructions from the controller”. Being deemed a data processor has several advantages. Principal among them is the apportionment of liability. As long as the processor processes personal data under the instruction of the controller, it has reduced liability because most of the responsibility resides with the controller. The classic example of a processor is an infrastructure or storage provider. The controller gives specific instructions on how data are stored and under what conditions. In some cases, the data are encrypted before being sent to the infrastructure provider, thus leaving the infrastructure provider with little influence over how the data are processed.
Prior to the GDPR, the controller/processor relationship was wholly contractual, and processor liability was limited to the terms of the contract. Under the GDPR, processors now have direct statutory liability and are required to implement certain technical and organisational measures, including keeping records of processing activities and reporting data breaches to controllers, among others. DPAs can now impose administrative fines directly on processors, in addition to other penalties for violating the GDPR, albeit in limited areas.
Contractual designations
An additional aspect of the “controller” and “processor” distinction is that it has been a relatively common practice to designate these roles in contract terms (i.e. providing that one party will always be deemed a processor). However, such terms are ineffective because they do not negate the requirements set out in the GDPR. The GDPR places requirements on parties based on their actual roles or conduct in data processing operations, not simply on the labels they give themselves. Therefore, what the parties actually do, rather than how they define their roles contractually, is dispositive when applying the GDPR. Ultimately, the “controller” or “processor” designation, and with it compliance responsibilities, will be based on conduct.
Resources
Working Party 29 Opinion 1/2010 on the concepts of “controller” and “processor” WP 169 (2010)
Data Protection Impact Assessments
A Data Protection Impact Assessment (DPIA) is a tool for building and demonstrating compliance with the GDPR. A DPIA applies a systematic process for assessing the impacts of the processing of personal data and the effect that processing has on the fundamental right to privacy of the data subject (e.g. patient). The DPIA process is not a one-time exercise and should be continuously reviewed and regularly re-assessed.
In some cases, a DPIA is voluntary. However, in many areas of the HBP a DPIA will be mandatory: specifically, where the processing of personal data has the potential to “result in a high risk to the rights and freedoms of natural persons”. In particular, the HBP platforms, and more generally the scope and subject matter of research in many of the WPs, involve special categories of sensitive personal data (e.g. medical data).
The HBP started its DPIA work in SP8 with the Medical Informatics Platform (MIP). The SP8 DPIA applied the methodology created by the Commission Nationale de l'Informatique et des Libertés (CNIL, the French Data Protection Authority). The CNIL methodology is supported with open source software tools for conducting a Privacy Impact Assessment (PIA), which are useful in conducting a DPIA. Furthermore, the CNIL guidance incorporates the requirements of the GDPR and the Working Party 29 opinion on DPIAs. The CNIL methodology can be combined with other DPIA methods including those developed by the ISO (International Organization for Standardization).
Resources
- CNIL Privacy Impact Assessment (PIA) methodology
- CNIL Privacy Impact Assessment (PIA) open source software
- ISO/IEC 29134:2017, ‘Information technology -- Security techniques -- Guidelines for privacy impact assessment’ (2017)
- UK Information Commissioner’s Office (ICO), ‘Conducting privacy impact assessments code of practice’ (2014)
- UK Information Commissioner’s Office, ‘Consultation: GDPR DPIA Guidance’ (22 March 2018)
- Working Party 29, ‘Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679’ (wp248rev.01, 2017)
- Wright et al., ‘Integrating privacy impact assessment in risk management’ (2014) 4 International Data Privacy Law 155-70
Ethics Coordination
We coordinate the management of ethical issues across the Human Brain Project. This includes the interaction with the European Commission and its ethics reviewers for ethics checks, ethics reviews, or ethics audits, as well as coordinating all work on ethics across all Work Packages in the Project. The task also includes the ethics rapporteur programme, and communications activities to promote responsible research and innovation in the Human Brain Project.
Key contact
Anyone can submit a request to address the ethical, regulatory and social issues raised by HBP research.
Register an Ethical Concern

Ethics & Society Training Resources
We have developed training resources that cover a wide range of issues in data governance, responsible research and innovation, and neuroethics.
Our training modules provide tools and methods for foresight, as well as critical and philosophical reflection.
- Introduction to Responsible Research & Innovation in HBP
- Human & Animal Data in EBRAINS
- Gender, Diversity & Inclusion
- Researcher Awareness & Integrity
- Dual Use of Concern & Misuse
- Knowledge Transfer & Commercialization
- Neuroethics, Consciousness & AI Ethics
- Foresight & Public Engagement
- Science Communication

More Ethics & RRI
The Human Brain Project will have an impact on both science and society.
We promote RRI practices within the HBP, and help to shape the direction of its research in ethically sound ways that serve the public interest.