Alongside the introduction of the Data Protection and Digital Information Bill to Parliament on 18 July 2022 (see our blog post on this here), the UK Government published a Policy Paper containing a proposed framework for regulating the use of AI in the UK (the ‘Paper’).
The Paper advocates for a light-touch approach towards the regulation of AI in the UK and focusses on regulating the use of AI systems and the context in which they are applied as opposed to the technology itself.
The Paper confirms that the UK Government is not intending to introduce legislation to regulate AI at this stage (though the possibility is not entirely ruled out), the rationale being that it will continue to ‘monitor, evaluate and if necessary update [its] approach’ so that it remains agile enough to respond to the rapid pace of change in the way that AI impacts upon society.
Definition of ‘AI’
The Paper purposely does not set out a universal definition of AI. This is intended to ensure that both current and future applications of AI are captured by the proposed framework, and to give developers and end users a better understanding of the scope of UK regulatory concerns.
Instead, the Paper lays out the core characteristics of AI in order to inform the scope of the framework, leaving regulators to define AI based on domain- or sector-specific considerations. The core characteristics identified are the ‘adaptiveness’ and ‘autonomy’ of the technology, and regulators are to take these into account when setting their own definitions of AI.
A pro-innovation approach to regulation
The UK Government intends for the regulatory framework to be: context-specific; pro-innovation and risk-based (with a focus on applications of AI that result in ‘real, unacceptable levels of risk’ rather than those that pose ‘low or hypothetical risk’); coherent; and proportionate and adaptable. The ultimate aim of this approach is to allow the framework to remain flexible, given that AI is a ‘rapidly evolving technology’.
The UK Government acknowledges that a context-specific approach to AI-related regulation will result in less uniformity when compared to a centralised, single framework with a fixed, central list of risks and mitigations. As a result, it has proposed a set of cross-sectoral principles in an effort to address cross-cutting challenges in a ‘coherent and streamlined’ way.
These principles will need to be interpreted and implemented by regulators (who operate as part of the UK’s existing regulatory structures) in the context of their sector or domain.
The cross-sectoral principles
The Paper contains six early proposals for the cross-sectoral principles, tailored to address the non-context-specific risks regularly associated with the use of AI systems, such as the perceived lack of explainability when high-impact decisions are made:
- Ensure that AI is used safely – Noting that safety considerations extend beyond the healthcare and critical infrastructure sectors, the Paper flags that safety should be a key consideration for all regulators going forward, and indicates that regulators should take a context-based approach to assessing risks in their domain or sector.
- Ensure that AI is technically secure and functions as designed – The Paper focusses on ensuring that consumers have confidence in the proper functioning of AI systems, and requires that, proportionately, ‘the functioning, resilience and security of a system should be tested and proven’.
- Make sure that AI is appropriately transparent and explainable – With the protection of confidential information and intellectual property rights in mind, the Paper considers that the public may benefit from transparency requirements which improve the understanding of AI decision making. The Paper also considers that in some circumstances, regulators may deem that decisions which cannot be explained should be prohibited entirely.
- Embed considerations of fairness into AI – The Paper identifies that in order to ensure proportionate and pro-innovation regulation, regulators will be able to continue to define ‘fairness’ in relation to their sectors or domains.
- Define legal persons’ responsibility for AI governance – Given that AI systems can operate with a high level of autonomy, the Paper states that accountability for the outcomes produced by AI, and any legal liability, must always rest with an identified or identifiable legal person, whether corporate or natural.
- Clarify routes to redress or contestability – The Paper acknowledges that the use of AI systems can introduce risks such as biases in a decision-making process, and these decisions could have a material impact on people’s lives. As a result, the Paper clarifies that the use of these systems should not remove an affected individual’s ability to contest an outcome.
In terms of applicability, the principles will apply to ‘any actor in the AI lifecycle whose activities create risk that the regulators consider should be managed through the context-based operationalisation of each of the principles’. As such, implementation of these principles will be largely delegated to regulators who will be expected to identify if, when and how their regulated entities will need to implement measures to ensure the principles are satisfied. It is not expected that the principles will necessarily translate into mandatory obligations.
Comparison to the EU AI Act proposal
The UK Government’s proposed light-touch approach to regulating AI contrasts with the EU AI Act proposal published by the EU Commission on 21 April 2021, which envisages a more detailed and prescriptive regime. Two of the key differences between the UK and EU approaches are as follows:
- No central list of risk-classed AI systems – The EU’s proposal includes a tier based approach to risk, including lists of AI systems which are considered ‘high-risk’ or prohibited entirely. The UK’s proposal does not include a similar list of AI systems or categorisation, with the rationale being that a fixed list of risks could quickly become outdated, and that a ‘framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts’.
- Establishment of a governing body – The EU’s proposal requires the establishment of a European Artificial Intelligence Board to facilitate the implementation and enforcement of the regulation. In contrast, the UK’s approach is that the implementation of the framework will be done by the existing regulators through the release of guidance to highlight relevant regulatory requirements.
The UK Government states that it will continue to consider how best to implement and refine this approach, and indicates that it plans to publish a White Paper on the matter in late 2022.
The Paper also invites stakeholders to share their views as to how the UK can best regulate AI and sets out a number of questions for consideration. Stakeholders can provide their input until the closing date of 26 September 2022.