The Information Commissioner’s position paper on the UK government’s proposal for a trusted digital identity system provides insight into the interplay between data protection and digital identity.
- Given the nature and volume of the data involved, any controller substantively involved in facilitating digital identity verification would be required by law to carry out a Data Protection Impact Assessment (DPIA). It may also require an ICO review.
- Concepts such as federated identity management, attribute-based credentials and tokenization are important. Coupled with on-device processing, these techniques can reduce the likelihood and severity of potential risks and harms, such as misuse of personal information, loss of trust and unwarranted intrusion, while also decreasing costs (both implementation and compliance).
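As a concrete illustration of tokenization, a minimal sketch (not any specific scheme's design; the `TokenVault` class is a hypothetical name for illustration): a random, meaningless token stands in for a real identifier, so downstream systems never handle the underlying personal data.

```python
import secrets

class TokenVault:
    """Illustrative token vault: holds the only mapping from opaque tokens
    back to real identifiers, so parties outside the vault see tokens only."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, identifier: str) -> str:
        # Issue a random, meaningless token in place of the real value.
        token = secrets.token_urlsafe(16)
        self._vault[token] = identifier
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault holder can recover the original value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("1985-06-14")           # e.g. a date of birth
assert token != "1985-06-14"                   # other parties see only the token
assert vault.detokenize(token) == "1985-06-14"
```

Because the token carries no meaning on its own, a breach of a downstream system exposes tokens rather than personal data, which speaks directly to the risk-reduction point above.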
- Purpose limitation means that organizations should not use the data a person provides specifically for digital identity verification for other purposes, except where allowed by law or with the individual's permission.
- In our experience, failure to limit the purposes for which organizations collect personal data poses a risk to individuals.
- People have a reasonable expectation that organizations will use their data for the purpose(s) about which they are told at the outset.
- It would significantly undermine the public's trust in the framework if organizations used people's data in ways they would not expect.
- This could be the case both with private sector organizations and within government.
- In addition, processing data collected for one purpose for another incompatible purpose (where an exemption does not apply) is a breach of UK GDPR. The framework's governing body should therefore have a significant role in ensuring that data used in digital identity is, in practice, limited to that purpose.
- Concerns and potential risks may arise for individuals if digital identity and attribute systems (or the service providers consuming digital identity and attributes) rely on automated processing. This could include use of algorithms or artificial intelligence as part of the system.
- Even where automated processing is not covered by Article 22, for example because it is not solely automated or does not produce a legal or similarly significant effect, organizations still need to fully consider and comply with data protection rights and obligations, particularly transparency, accuracy and redress mechanisms.
- The ICO welcomes the trust framework's recognition of potential discriminatory biases in automated decision-making, and its provision for an appropriate governing body to receive annual exclusion reports.
The UK GDPR notes that children generally merit specific protection due to the risks posed by collecting and processing their data. Therefore, any digital identity system needs to give special consideration to how it safely accommodates and protects children.
Fair and Lawful Processing
- It is important that individuals are offered genuine choice over whether any digital identity and attribute scheme processes their data. However, organizations need to be particularly careful when seeking consent or explicit consent as a condition of accessing a digital identity and attribute scheme in the framework. Consent is likely to be invalid if the organization is in a position of power over the individual, for example a public authority or potential employer asking for confirmation of a medical screening.
- All organizations within the trust framework need to consider not just how they can use digital identities and attributes, but whether they should in any given scenario. Assessing whether the processing is fair can depend on how they obtain the personal data. If organizations deceive or mislead anyone when they obtain the personal data, then this is unlikely to be fair.
- In communicating about digital identities and attributes to the public, it is important that this information is user-friendly and easily understood by people who are not technical specialists.
- There should also be an appropriate degree of consistency between different controllers and the privacy information they provide.
- The design of this new system should also create an opportunity to draw on best practice in terms of transparency built into the service experience, not just formal privacy notices. This should also include user experience (UX) testing and design.
- If organizations collect excessive data for identification, there is a risk of "function creep", where the data comes to be seen as valuable for other purposes such as marketing. There are also risks if multiple organizations hold duplicate copies of data that could be hacked or misused.
- Organizations must therefore ensure that personal data is adequate, relevant and limited to what is necessary for the purposes for which it is processed.
- Organizations should process only the data necessary to verify an individual's identity or their attributes.
- Acquiring, using and retaining the minimum amount of data necessary reduces privacy risks, so any scheme within the framework should support this.
- Organizations should only have access to the data that they need to carry out their services, such as receiving the verification of facts rather than the transmission of detailed information.
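The idea of receiving verified facts rather than detailed information can be sketched as follows. This is a hypothetical example, assuming an identity provider that derives a yes/no claim; the function name and claim format are illustrative, not part of the trust framework.

```python
from datetime import date

def is_over_18(dob: date, today: date) -> bool:
    # Derive only the yes/no fact the relying party actually needs.
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return years >= 18

# The identity provider holds the full date of birth...
dob = date(2000, 5, 1)

# ...but transmits only the derived fact, never the date itself.
claim = {"over_18": is_over_18(dob, date(2024, 6, 1))}
assert claim == {"over_18": True}
assert "date_of_birth" not in claim
```

The relying party can act on the claim (e.g. grant access to an age-restricted service) while the underlying attribute stays with the provider, which is precisely the minimization the bullet above describes.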
- Any scheme should be based on strong technical and organizational security arrangements, given the attractiveness of the data in digital identity systems to bad actors and the severe harm a compromise of that data could cause.
- Such measures might include the use of privacy-enhancing technologies to minimize the risk of fraud, impersonation and other misuse or loss of data.
- Organizations should keep security measures under regular review to ensure their effectiveness, including the monitoring of false positive rates.
- The distributed, decentralized model can also support the effectiveness of these measures, alongside joined-up threat assessment and intelligence about risks.