In light of the multiple investigations announced across Europe in relation to OpenAI and its ChatGPT service, April saw the EDPB launch a dedicated task force to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities”. The speed with which this coordinated task force was set up highlights the importance placed on the interests at stake.
Since the update in our March Data Wrap, the Italian Garante confirmed it would lift its ban on ChatGPT if OpenAI complied with a set of conditions by 30 April 2023. The service was subsequently reinstated in Italy with enhanced transparency and rights for European users and non-users, although it remains to be seen whether these relatively hastily implemented changes will be sufficient to address the other EU GDPR concerns that have been raised. The ChatGPT service is also subject to scrutiny by other EU regulators: the Spanish regulator announced it had initiated an ex-officio investigation into OpenAI for potential breach of regulations, and the French CNIL has opened a control procedure to investigate the service following complaints.
With the spotlight still on AI, on 27 April 2023 Members of the European Parliament reached a provisional political agreement on the EU Artificial Intelligence Act, the first EU AI-specific centralised risk-based legal framework for regulating AI. Following intense debate around “general purpose” AI in particular (i.e. AI systems without a specific purpose), the European Parliament confirmed proposals to impose stricter obligations on “foundation models”, a sub-category of “general purpose AI” that includes ChatGPT. A plenary vote is expected in June 2023.
The UK’s Information Commissioner’s Office (“ICO”) has fined TikTok Information Technologies UK Limited and TikTok Inc (“TikTok”) £12.7 million for breaching the UK GDPR, in particular for failing to protect children’s privacy. TikTok’s infringements related to failing to obtain appropriate parental consent for children under the age of 13 using its services, not sufficiently explaining its purposes of processing, and not processing data in a lawful, fair and transparent manner.
The ICO had previously issued a notice of intent to fine TikTok £27 million for various data protection law breaches between May 2018 and July 2020. A lower fine was applied on the basis that the ICO decided not to pursue a finding related to the unlawful use of special category data; however, the fine is still the third highest the ICO has levied.
For TikTok, this sanction is not the end of the road, with other action being taken against it, such as the filing of two class actions in Portugal, valued at €1.1 billion, in relation to various breaches of the law, including in relation to data privacy. For other parties, this decision highlights the importance of having in place, and enforcing, appropriate policies and processes, and in particular where children are involved, being aware of and complying with the Children’s code, and keeping up to date with the evolving obligations in this space. For further information please refer to our full blog here.
Our March Data Wrap covered the Department for Science, Innovation and Technology’s (“DSIT”) long awaited white paper on the UK’s approach to regulating AI technologies (the “White Paper”); April saw the ICO publish its response to the White Paper. The ICO supports the White Paper’s ambitions to “empower responsible innovation and sustainable economic growth”, which align with the ICO’s own strategic priorities in ICO25.
Of note, the ICO sets out detailed comments on the “fairness” and “contestability and redress” AI principles in the White Paper to assist with consistency of application. In addition, where an AI system has a legal or similarly significant effect on an individual, the White Paper states that regulators are expected to consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties. The ICO highlighted, however, that where an AI system uses personal data and GDPR Article 22 is engaged, it ought to be a requirement, not just a consideration, for AI system operators to be able to provide an appropriate justification, and that this should be clarified. Note that Article 22 prohibits decision-making based solely on automated processing of personal data (i.e. without human involvement) where it has a legal or other similarly significant impact on an individual, except in certain specified circumstances.
The long awaited Digital Markets, Competition and Consumer Bill (“DMCC Bill”) was published and introduced before Parliament in April, having been delayed by parliamentary priorities and timing considerations. Its scope and implications are wide-ranging and follow on from both a previous consultation on changes to the UK competition and consumer protection regimes, and proposals for a new pro-competition regime for digital markets. For further details on the DMCC Bill please refer to our blog post here.
On 27 April 2023, the Advocate General’s (“AG”) opinion was published in relation to controller liability for personal data breaches. The opinion provided that:
- a personal data breach is not sufficient in itself to conclude that technical and organisational measures implemented by the controller were not appropriate. The burden of proof that the measures are appropriate sits with the controller.
- where a breach was committed by a third party, this does not in itself exempt the controller from liability – the controller must show “to a high standard of proof” that it is not in any way responsible for the event giving rise to the damage.
- fear of misuse of personal data in the future may constitute non-material damage giving rise to compensation, provided it is actual and certain emotional damage and not mere trouble or inconvenience.
Whilst the AG’s opinion is non-binding, the European Court of Justice tends to follow it. The full text of the opinion is available (in Italian) here.
As part of its study into the UK cloud infrastructure services market, Ofcom confirmed in April that it is proposing to refer the market to the CMA for further investigation. In particular, Ofcom is concerned about data transfer issues such as:
- The charges that customers pay to transfer their data out of the cloud, which can discourage customers from using services from more than one cloud provider or from switching to alternative providers; and
- Technical restrictions on interoperability imposed by cloud providers, which prevent their services from working effectively with services from other providers. This means customers need to put additional effort into reconfiguring their data and applications to work across different clouds.
Ofcom is consulting on the interim findings of its study until 17 May 2023 and plans to publish a final report setting out its findings and recommendations by 5 October 2023.
To subscribe to our HSF Data Blog please click here.