Following landmark developments in 2023, the international spotlight remains firmly on AI regulation as we enter 2024. The last few days alone have seen not only the COREPER Ambassadors’ agreement on the EU AI Act, but also the long-awaited Government response to its AI Regulation White Paper released yesterday (“Response”) and the House of Lords Communications and Digital Committee report on “Large language models and generative AI” (“Report”) published at the end of last week.
There are few surprises in the Response and Report, with the UK still forging its own path towards the regulation of AI – particularly compared to the EU’s approach. That said, the Government seems to have taken on board a couple of focus areas from the EU approach (such as highly capable general purpose AI systems).
Key takeaways:
- Building on the White Paper: In contrast to the centralised legislative framework set out in the EU’s AI Act, the Response and Report largely reiterate and build on the original adaptable, pro-innovation, sector-led approach set out in the Government’s March 2023 AI White Paper.
- No rush to regulate…: The Response and Report also reconfirm the UK’s agile “wait and see” approach to regulating AI. Given that the technology is rapidly developing, this allows the UK to adapt quickly to emerging issues without implementing “quick-fix” rules that could become outdated or ineffective.
- …Until understanding of the risks matures: The Government does, however, acknowledge that legislative action will be required once the risks associated with the technology are better understood, and it is focusing on preparing itself for emerging and near-term regulatory risks.
- Role of the regulator: Existing regulators retain a key role in implementing the UK’s agile approach, with the Government empowering them to create targeted measures in line with five common principles and tailored to the risks posed by the different sectors. Regulators have been asked to publish their strategic plans for managing the risks and opportunities around AI by the end of April 2024.
- Importance of consistency and coordination: To avoid an inconsistent approach across regulators, given the sector-led focus (and international regulatory fragmentation), other priorities include strengthening the central coordination mechanisms for UK regulators in AI and developing the expertise of the AI Safety Institute (both nationally and internationally). The Government has also published new cross-sector guidance to support regulators in implementing the principles effectively.
- Binding rules for GPAI: For the first time, the Response also sets out initial thinking for future targeted, binding requirements for the most advanced highly capable general purpose AI systems. This is principally because the wide-ranging potential uses of these systems challenge the current context-led regulatory approach (which relies on risk being determined by how and where the AI system is used).
- Engagement with IP issues: It is not lost on the Government that copyright issues are front and centre of the development, training and use of AI (nor was it lost on the House of Lords in the Report – see our IP blog post here). However, how to deal with the conflicting interests has eluded the IPO’s working committee, and the Response does not provide a solution for now, other than further examination of ways to improve transparency around the use of copyright material. It may well fall to the courts to determine the copyright position in the short term, although this may not be to the liking of those investing in AI development.
- Certainty vs flexibility?: The UK’s approach seems to sit somewhere between those of other leading actors in AI regulation, including Australia, China, the EU and the US (see our “Deeper Dive”). It remains to be seen which of the diverging international approaches adequately strikes the balance needed to enable trustworthy AI to thrive.
For a deeper dive on the Response and Report, as well as our thoughts on the two, please click here.