- On 12 October 2021, the Information Commissioner’s Office (the “ICO”) opened its consultation on the beta version of its AI and data protection risk mitigation and management toolkit (the “Consultation”).
- The Consultation runs until 1 December 2021 and the ICO is seeking responses from all industry sectors and from organisations of all types that engage in the “development, deployment and maintenance of AI systems that process personal data”.
- The AI and data protection risk mitigation and management toolkit (the “AI Toolkit”) provides organisations with a framework against which to assess internal AI risk by identifying potential risks for consideration and offering practical, high-level steps on how organisations can mitigate such risks.
In this article we highlight some noteworthy aspects of the AI Toolkit, including the key practical steps that organisations should consider when processing personal data in connection with developing and operating AI systems, and flag key elements that respondents to the Consultation may want to consider.
The initial, alpha version of the AI Toolkit was launched in March 2021. The Toolkit forms part of the ICO’s commitment to enable good data protection practice in AI and incorporates elements of the ICO’s Guidance on AI and Data Protection (the “Guidance”). The Guidance is designed to assist organisations in mitigating the data protection risks posed by the use of AI. It is intended to be consulted before, and used throughout, AI projects to ensure that organisations devote enough time to considering the impact that their data protection obligations will have on the development of the AI system or application concerned.
Following feedback received on the alpha version, the current, beta version of the AI Toolkit was launched on 20 July 2021. To test the Toolkit’s effectiveness and practical application, the ICO is currently applying it to a range of live AI systems that process personal data. Alongside the responses received through the Consultation, the results of this testing will inform the final version of the Toolkit, which is due to be published in December 2021.
Scope of the AI Toolkit
As noted above, the AI Toolkit provides organisations with a framework against which to assess internal AI risk, identifying potential risks for consideration and offering practical, high-level steps on how to mitigate them. Notably, the Toolkit is drafted to reflect the auditing framework employed by the ICO’s internal assurance and investigation teams. Consequently, by applying the guidance in the AI Toolkit to their use of AI applications that process personal data, organisations can satisfy themselves that their use of AI is aligned with the ICO’s expectations in relation to data protection compliance.
The Toolkit has been designed with both technology specialists and those responsible for an organisation’s compliance with data protection laws (such as data protection officers, general counsel, and senior management) in mind, thereby encouraging organisations to build data protection considerations into the development stage of any AI project, rather than to consider these issues as an afterthought.
The ICO has made it clear that “there is no set way to use the toolkit” and that it is flexible enough to be applied at any stage of the development of an AI system. Nonetheless, the ICO explains that the Toolkit addresses four stages of the AI lifecycle:
- Business requirements and design
- Data acquisition and preparation
- Training and testing
- Deployment and monitoring
The Toolkit itself is divided into two distinct sections that invite the user to review a series of risk statements for each stage of the project, and then use the corresponding “practical steps” guidance to put in place effective mitigation strategies to address those risks. A selection of the key risk domains and accompanying practical steps is outlined below.
Practical steps for organisations
- Accountability and governance
Demonstrating an AI system’s compliance with the UK GDPR accountability principle has traditionally been particularly difficult for organisations, largely due to the technical complexity of AI systems. To this end, the Toolkit recommends that organisations carry out suitable risk assessments (such as Data Protection Impact Assessments), conduct sufficient due diligence checks on any AI system providers and agree appropriate responsibilities with any third-party suppliers.
- Lawfulness and purpose limitation
In considering the issue of lawfulness and purpose limitation, the AI Toolkit reinforces the distinction between the development and deployment stages of an AI project and highlights the risks of conducting unlawful processing and contravening the purpose limitation principle when the different purposes involved in each stage of the project are not adequately considered. To mitigate such risks, the Toolkit advises organisations to conduct data flow mapping exercises at the start of any AI project and to continuously monitor and review their documented lawful bases for data processing to ensure that such bases are still relevant to each stage of the project.
- Fairness, preventing and monitoring bias
AI systems must be sufficiently statistically accurate and avoid discrimination in order to be considered ‘fair’. Where insufficiently diverse or discriminatory data is used in training and development, organisations risk producing AI systems that generate inaccurate or discriminatory outputs or decisions. To mitigate such risks, the AI Toolkit recommends that organisations document the minimum success criteria needed to proceed to the next step in the development lifecycle, ensure that datasets do not reflect past discrimination, and take additional measures to increase data quality and improve model performance where disproportionately high error rates are recorded for a protected group.
- Transparency
It is crucial that the processes, services and decisions delivered by AI systems can be clearly and easily communicated to the individuals affected by them. A failure to do so may expose an organisation to the risk of regulatory action. To guard against this, the AI Toolkit recommends that organisations first ensure that their policies, protocols and procedures are easily accessible and understandable to the staff working on an AI project; then consider what information should be provided to data subjects about how their personal data will be used by the AI system; and finally test the effectiveness of such explanations periodically to check that they are sufficiently clear.
- Security
As with the accountability principle discussed above, demonstrating an AI system’s compliance with the security requirements of the GDPR or the UK GDPR can be more challenging than for other, more established technologies. Key risks identified in the AI Toolkit include unauthorised or unlawful processing and accidental loss, destruction or damage caused by AI systems that lack appropriate levels of security. To address these concerns, the Toolkit recommends that organisations deliver appropriate security training to their AI project staff, develop an AI incident response plan, document all movement and storage of personal data from one location to another, and proactively test the system and investigate any anomalies immediately.
- Data minimisation
AI systems generally require large amounts of data to operate effectively. Nonetheless, the AI Toolkit highlights the risks posed by excessive collection and processing of personal data and the potential for such activities to breach Article 5(1)(c) of the UK GDPR, which requires that all personal data be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed. To comply with this data minimisation requirement, the AI Toolkit recommends that organisations consistently assess whether the data they collect to train their AI systems is relevant for the intended purpose, carry out reviews during the project’s testing phase to assess whether all the data is needed or whether the same result can be achieved with a subset of it, and periodically assess whether training data remains adequate and relevant to the prescribed purpose.
- Individual data subject rights
The AI Toolkit emphasises that the rights of data subjects enshrined in the UK GDPR will apply wherever personal data is used, at any stage of an AI system’s development and deployment lifecycle. Failure to recognise when such rights are applicable is a key risk faced by organisations and the AI Toolkit recommends that organisations design and apply a policy or process that defines how information requests (and other data subject right requests) by individuals will be dealt with. Additionally, organisations should index the personal data used in the AI system concerned so that such data is easy to locate in the event that a request is received.
- Meaningful human review
To the extent that organisations rely on human reviews to take certain processing activities outside the scope of the automated decision-making provisions in Article 22(1) of the GDPR and UK GDPR, the AI Toolkit identifies the risks posed by conducting “tokenistic” human reviews. To ensure that adequate human reviews are undertaken in this context, the Toolkit suggests that all human reviewers be adequately trained to interpret and challenge outputs made by the AI system, and that reviewers should always have meaningful influence on any decision made. Specifically, human reviewers should take into account additional factors, such as local context, beyond those considered by or put into the AI system, and should have the authority and competence to overrule any automated recommendation made by the system.
The AI Toolkit and the National AI Strategy
The AI Toolkit and its recommendations should also be considered in light of the UK’s National AI Strategy. On 22 September 2021, the Department for Digital, Culture, Media and Sport (“DCMS”) published the UK’s National AI Strategy (the “Strategy”) in partnership with the Department for Business, Energy and Industrial Strategy. Whilst the Strategy does not contain concrete legislative proposals, it affirms the UK government’s intention to harness the potential of AI and thereby ensure the UK’s position as an international market leader in the development of AI technologies. The Strategy will be discussed further in our upcoming blog post.
The Strategy is segmented into three “pillars”, the third of which is devoted to ensuring that the “UK develops an appropriate national and international governance framework for AI technologies to encourage innovation, investment and protect the public and fundamental values”. Consequently, although it should be emphasised that the AI Toolkit is not legislation but instead functions as best practice guidance, it can still be considered an integral part of the UK’s developing AI governance framework and will likely play an increasingly important role when its final iteration is published in December 2021. Indeed, one of the key actions listed under pillar 3 of the Strategy is to “explore with stakeholders the development of an AI technical standards engagement toolkit to support the AI ecosystem to engage in the global AI standardisation landscape”. The AI Toolkit therefore has the potential to make a major contribution to the AI standardisation landscape.
Whilst the AI Toolkit will likely play an increasingly important role in the development of the UK’s approach to AI regulation in 2022, at this stage it is important for a wide variety of organisations to contribute to the Consultation. As responses will directly inform the drafting of the final version of the AI Toolkit, having a diverse pool of responses to draw from will ensure that the Toolkit’s guidance can be as widely applicable as possible.
Once the final AI Toolkit is published later in December 2021, organisations should carefully review the guidance and use the Toolkit’s framework as a guide to structure all AI projects that process personal data.