In a move that marks a major U-turn for the Government, the UK’s proposals for a centralised contact tracing app have been abandoned in favour of a decentralised model. The new model is based on technology developed by Apple and Google and replaces the original app designed by NHSX, which has recently faced criticism over privacy concerns as well as technical issues and delays.

The UK follows Germany and Italy, which have already switched from centralised contact tracing apps to decentralised models. The UK’s health secretary, Matt Hancock, confirmed the news at the Government press conference last night.

To centralise or decentralise?

The UK Government had previously asserted the superiority of a centralised contact tracing model, but what exactly is the difference?

A ‘decentralised’ data model requires individual users to provide an anonymous ID to a centralised server. The user’s phone then downloads information from the centralised database and carries out contact matching and risk analysis on the phone itself before sending alerts to other users if necessary. Information on whether a user has come into contact with an infected person will be shared with that user, but not with the central server.

In contrast, a ‘centralised’ data model would require users to provide not only their own anonymous ID to a centralised database, but also to send any codes collected from other phones. The computer server then carries out contact matching and risk analysis using that information, making the decision as to whether someone is ‘at risk’ and sending alerts accordingly.
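The practical difference can be illustrated with a short Python sketch (purely illustrative; the function names and data structures are our own invention, not taken from either app):

```python
def decentralised_check(local_catalogue: set, infected_tokens: list) -> bool:
    """Decentralised model: the phone downloads the tokens associated with
    infected users and performs the contact matching locally, so the
    central server never learns who this user has met."""
    return any(token in local_catalogue for token in infected_tokens)

def centralised_report(server_db: dict, anonymous_id: str, collected_tokens: list) -> None:
    """Centralised model: the phone uploads its own anonymous ID *and* every
    token it has collected; matching and risk analysis then happen
    server-side, where the whole contact graph is visible."""
    server_db[anonymous_id] = collected_tokens

# Illustrative usage
catalogue = {"tok-a", "tok-b"}                    # tokens seen by this phone
print(decentralised_check(catalogue, ["tok-b"]))  # matching on the device

server_db: dict = {}
centralised_report(server_db, "anon-1234", sorted(catalogue))  # matching on the server
```

In the decentralised flow the server only ever publishes the infected-token list; in the centralised flow it accumulates everyone’s contact catalogues, which is the source of the privacy concerns described above.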

The UK’s previous preference for the centralised model was based on the belief that storing data in a centralised manner would promote a more considered approach to contact tracing based on risk factors, and would enable epidemiologists to use valuable data on the spread of the virus for further research. However, the centralised model was criticised for potentially encroaching on privacy by using more data than necessary, and using the data for purposes other than contact tracing.

What next?

NHSX, the health service’s innovation arm, has confirmed that its current leaders will step back from the project, and that Simon Thompson, current chief product manager at Ocado, will take over management of the new app.

While this move will be welcomed by privacy campaigners and critics of the centralised model, concerns over the limitations of Bluetooth-enabled technology, as well as unease over allowing Apple and Google to control the UK’s response to the pandemic, may further obstruct the eventual rollout of a UK-wide contact tracing app. The additional delays resulting from this change in approach may also result in a lower than ideal take-up rate, with much of the population of the view that the time for contact tracing has passed given the current downward curve of the pandemic.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Hannah Brown
Associate, Digital TMT, Sourcing and Data, London
+44 20 7466 2677
Katie Collins
Trainee Solicitor, London
+44 20 7466 2117


On 17 April 2020, the ICO published an opinion by the Information Commissioner (the “Commissioner”) on Apple and Google’s joint initiative to develop COVID-19 contact tracing technology (the “Opinion”).


  • The Commissioner found the CTF to be aligned with principles of data protection by design and by default.
  • Controllers designing contact tracing apps that use the CTF should ensure alignment with data protection law and regulation, especially if they process personal data (which the CTF does not require).
  • The Commissioner raised concerns regarding individuals assuming that the CTF’s compliance with data protection principles will extend to all aspects of the contact tracing app – which is not necessarily the case.
  • Therefore, it should be made clear to any app users who is responsible for data processing, especially if the app processes data outside of the CTF’s limited scope.
  • Data controllers designing CTF-enabled contact tracing apps must be transparent with potential and actual app users on the type of information they will be processing.
  • Finally, when it comes to a user’s ability to disable Bluetooth, the Commissioner observed that with regard to contact tracing apps in general: “a user should not have to take action to prevent tracking”.

As set out in our previous blogpost, contact tracing is one of the measures being contemplated or implemented by European governments (including in the UK and Germany) in order to be able to put an end to lockdowns while containing the spread of the virus.

The scope of the Opinion was limited to the design of the contact tracing framework which enables the development of COVID-19 contact tracing apps by public health authorities through the use of Bluetooth technology (the “CTF”).

It is also worth noting that this Opinion has been published in the midst of a heated debate on contact tracing technology and fears that it may be used for mass surveillance – in an open letter published on 20 April 2020, around 300 international academics cautioned against creating a tool which will enable large scale data collection on populations.

How does the CTF work?

The CTF is composed of “application programming interfaces” as well as “operating system level technology to assist contact tracing”. As a result of the collaboration between Apple and Google, apps developed by public health authorities using the CTF will be interoperable across Android and iOS devices.

When two devices running contact tracing apps come into proximity, they will exchange cryptographic tokens (which change frequently) via Bluetooth. Each token received is stored in a ‘catalogue’ on the user’s device, effectively creating a record of all other devices the user has come into contact with. Once a user is diagnosed with COVID-19, and after they have given their consent, the app will upload the stored ‘catalogue’ of tokens to a server. Other users’ devices will periodically download a list of broadcast tokens of users who have tested positive for COVID-19. If a match is found between the broadcast tokens and the ‘catalogue’ of tokens stored on a user’s device, the app will notify that user that they have come into contact with a person who has tested positive and will suggest appropriate measures to be taken.
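One consistent reading of that flow can be sketched as follows (our own reconstruction with invented names; the real CTF uses Bluetooth broadcasts and proper cryptographic key derivation rather than the simple random tokens shown here):

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Device:
    catalogue: set = field(default_factory=set)            # tokens heard from nearby devices
    broadcast_history: list = field(default_factory=list)  # tokens this device has sent

    def new_token(self) -> str:
        """Tokens change frequently so a device cannot be tracked over time."""
        token = secrets.token_hex(16)
        self.broadcast_history.append(token)
        return token

    def receive(self, token: str) -> None:
        self.catalogue.add(token)

def exchange(a: Device, b: Device) -> None:
    """Two devices in proximity swap their current tokens over Bluetooth."""
    b.receive(a.new_token())
    a.receive(b.new_token())

def exposure_check(device: Device, positive_tokens: list) -> bool:
    """Matching happens on the device itself, not on the server."""
    return bool(device.catalogue & set(positive_tokens))

# Illustrative usage: alice and bob meet; bob is later diagnosed and,
# after consenting, his tokens are published by the server.
alice, bob = Device(), Device()
exchange(alice, bob)
print(exposure_check(alice, bob.broadcast_history))
```

In this sketch the diagnosed user’s broadcast tokens are what the server publishes for other devices to match against their local catalogues; the device-side matching is the point the Opinion relies on for data minimisation.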

How does the CTF comply with data protection laws?

The Opinion finds that, based on the information released by Google and Apple on 10 April 2020, the CTF is compliant with principles of data protection by design and by default because:

  1. The data collected by the CTF is minimal: The information contained in the tokens exchanged does not include any personal data (such as account information or usernames) or any location data. Furthermore, the ‘matching process’ between tokens of users who have tested positive for COVID-19 and tokens stored on each user’s phone happens on the device and therefore does not involve the app developer or any third party.
  2. The CTF incorporates sufficient security measures: The cryptographic nature of the token which is generated on the device (outside the control of the contact tracing app) means that the information broadcast to other nearby devices cannot be related to an identifiable individual. In addition, the fact that the tokens generated by one device are frequently changed (to avoid ultimate tracing back to individual users) minimises the risk of identifying a user from an interaction between two devices.
  3. The user maintains sufficient control over contact tracing apps which use the CTF: Users will voluntarily download and install the contact tracing app on their phone (although this may change in ‘Phase 2’ of the CTF, as discussed below). Users also have the ability to remove and disable the app. In addition, uploading a user’s collected tokens once they have tested positive requires a separate consent process.
  4. The CTF’s purpose is limited: Although the CTF is built for the limited purpose of notifying users who have come into contact with patients who have tested positive for COVID-19, the Commissioner stresses that any expansion of the use of CTF-enabled apps beyond this limited purpose will require an assessment of compliance with data protection principles.

What clarifications are required?

The Commissioner raises a number of questions on the practical functioning of the CTF, especially in respect of the collection and withdrawal of user consent post-diagnosis. It is unclear how the CTF will facilitate the uploading of stored tokens to the app. Although consent will be required from the user, clarity is needed on: (i) how a CTF-enabled app will manage the consent signal; and (ii) what control will be given to users in this respect. In addition, the Commissioner notes a lack of information on how consent withdrawal will affect the effectiveness of contact tracing solutions and the notifications sent to other users once an individual has been diagnosed.

Issues for developers

The Commissioner will pay close attention to the implementation of the CTF in contact tracing apps. In particular, the CTF does not prevent app developers from collecting other types of data, such as location. Although the reasons for collecting other types of user information may be “legitimate and permissible” in order to pursue the public health objective of these apps (for example, to ensure the system is not flooded with false diagnoses or to assess compliance with isolation), the Commissioner warns that data protection considerations will need to be assessed by the controller – this includes the public health organisations which develop (or commission the development of) contact tracing apps.

Another issue raised by the Commissioner is the potential user assumption that the compliance by the CTF with data protection laws will radiate to all other functionalities which may be built into contact tracing apps. In this regard, the Commissioner reminds app developers that, in addition to assessing data protection compliance in relation to other categories of data processed by the app, they will need to clearly specify to users who is responsible for data processing – in order to comply with transparency and accountability principles.

Finally, the Commissioner stressed that data controllers, such as app developers, must assess the data protection implications of both (i) the data processing undertaken through the app and (ii) the data processing undertaken by way of the CTF, in order to ensure that both layers of processing are fair and lawful.

What has the ICO said about ‘Phase 2’ of the CTF?

‘Phase 2’ of the development of the CTF aims to integrate the CTF into the operating system of each device. The Commissioner notes that users’ control, including their ability to disable contact tracing or to withdraw their consent to it, should be considered when developing the next phase of the CTF.

With regard to users’ ability to disable Bluetooth on their devices, the Commissioner observes, in respect of ‘Phase 2’ of the CTF and contact tracing apps in general, that “a user should not have to take action to prevent tracking”.

How does this Opinion affect the development of the Decentralized Privacy-Preserving Proximity Tracing protocol?

The Opinion can be applied to the Decentralized Privacy-Preserving Proximity Tracing (or DP-3T) protocol in so far as it is similar to the CTF. The Commissioner states that the similarities between the two projects give her comfort that “these approaches to contact tracing app solutions are generally aligned with the principles of data protection by design and by default”.


This Opinion is an important step in the development and roll out of contact tracing apps in the UK. As mentioned above, contact tracing is one of the tools necessary for the UK Government to lift the lockdown measures while minimising the impact of a potential second wave of infections. This has an indirect impact on the private sector as it will affect how and when employees will be able to go back to work.

The fact that the principles on which the CTF is based are compliant with data protection laws is crucial to the successful roll out of contact tracing apps. In order for these apps to be effective, they must be voluntarily downloaded by a large number of mobile users. Given the concerns around letting governments accumulate data on the population under the guise of putting an end to the pandemic, trust is a determining factor in this equation. The Commissioner’s approval of the foundation for these contact tracing apps will certainly play a role in gaining the public’s trust and its willingness to give up some privacy in order to put an end to the current public health crisis.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Hannah Brown
Associate, Digital TMT, Sourcing and Data, London
+44 20 7466 2677
Ghislaine Nobileau
Trainee Solicitor, London
+44 20 7466 7503

COVID-19: ICO publishes details of its regulatory approach during COVID-19 (UK)

The ICO has published details of its regulatory approach during the ongoing COVID-19 emergency; this approach should reassure entities that are adapting to the economic and practical realities of operating in the current climate while balancing their data protection obligations. The UK regulator has continued to be reasonable and pragmatic, as outlined in our previous post on response times to DSARs, and has stated that it is “committed to an empathetic…approach”.

COVID-19: How governments are using personal data to fight COVID-19


The COVID-19 outbreak has resulted in an unprecedented focus on the power of data to assist in resolving national emergencies. From health tracking, to volunteer coordination, to accurately identifying the vulnerable, data is being harnessed in both the public and private sectors to try to help bring COVID-19 under control and mitigate its impact.

Morrisons wins Supreme Court appeal against finding of vicarious liability in data breach class action

Today the Supreme Court handed down its decision in WM Morrison Supermarkets plc v Various Claimants [2020] UKSC 12, bringing to a conclusion a case which had the potential to alter significantly the data protection and cyber security litigation and class action landscape.

The headline news is that Morrisons has been found not to be vicariously liable for the actions of a rogue employee in leaking employee data to a publicly available file-sharing website.

The judgment will likely result in a collective sigh of relief for organisations who have been watching closely to track their potential liability for data breach class actions. However, it is important to note that the Morrisons case and judgment is very fact specific; it does not close the door on data breach class action compensation as a whole. Boardrooms should still be examining the technical and organisational measures they have in place to prevent personal data breaches in order to reduce the risk of regulatory enforcement and class actions.


In 2015 a former Morrisons employee was found guilty of stealing and unlawfully sharing the personal data (including names, addresses, bank account details, salary and national insurance details) of almost 100,000 of Morrisons’ employees with members of the press and with data sharing websites. At the time, the ICO investigated and found no enforcement action was required with respect to Morrisons’ compliance with the Data Protection Act 1998 (“DPA”).

Nevertheless, around 5,000 Morrisons employees brought a claim for damages, notwithstanding that they had not suffered any financial loss. Instead, the employees claimed that Morrisons was vicariously liable for the criminal acts of its employee and for the resulting distress caused to the relevant employees.

For full details of the background to the High Court and Court of Appeal arguments, please see our previous Morrisons data protection briefing. Following the conclusion of the Court of Appeal case, the Supreme Court subsequently granted leave to appeal, which led to a two day hearing in November 2019 and, ultimately, the judgment handed down today.


The Supreme Court overturned the Court of Appeal’s decision, finding that Morrisons was not vicariously liable for the employee’s unlawful actions. Lord Reed gave the judgment of the court, with which Lady Hale, Lord Kerr, Lord Hodge and Lord Lloyd-Jones agreed.

Lord Reed concluded that the courts below had misunderstood the principles of vicarious liability, and in particular had misinterpreted the Supreme Court’s judgment on vicarious liability in Mohamud v WM Morrison Supermarkets plc [2016] UKSC 11. That judgment was not intended to change the law on vicarious liability, but rather to follow existing authority including, importantly, Dubai Aluminium Co Ltd v Salaam [2002] UKHL 48.

In Dubai Aluminium, Lord Nicholls identified the general principle that applies when the court considers the question of vicarious liability arising out of an employment relationship: the wrongful conduct must be so closely connected with acts the employee was authorised to do that, for the purposes of the liability of the employer to third parties, it may fairly and properly be regarded as done by the employee while acting in the ordinary course of his employment.

Applying that test, vicarious liability was not established in this case. The Supreme Court found that the employee’s wrongful conduct was not so closely connected with the acts he was authorised to do that, for the purposes of assessing Morrisons’ liability, it could fairly and properly be regarded as being done in the ordinary course of his employment.

Lord Reed referred to the distinction drawn by Lord Nicholls in Dubai Aluminium between cases “where the employee was engaged, however misguidedly, in furthering his employer’s business, and cases where the employee is engaged solely in pursuing his own interests: on a ‘frolic of his own’, in the language of the time-honoured catch phrase.”

In the present case, he said, it was clear that the employee was not engaged in furthering Morrisons’ business when he committed the wrongdoing in question. On the contrary he was pursuing a personal vendetta against the company, seeking revenge for disciplinary proceedings some months earlier. In those circumstances, the close connection test was not met.

Although it did not affect the outcome, in light of the court’s findings, Lord Reed also considered Morrisons’ contention that the DPA excludes the imposition of vicarious liability in relation to data breaches under that Act and for the misuse of private information or breach of confidence – in effect that the DPA is a statutory scheme which “owns the field” in this respect. The Supreme Court rejected this argument, stating that: “the imposition of a statutory liability upon a data controller is not inconsistent with the imposition of a common law vicarious liability upon his employer, either for the breaches of duties imposed by the DPA, or for breaches of duties arising under the common law or in equity.”

Implications of the decision

Data privacy implications

Although this case was argued under the repealed Data Protection Act 1998, it will likely result in a collective sigh of relief for organisations now subject to the GDPR which expressly allows individuals to claim compensation for non-material damages (including distress) caused by non-compliance with the regulation. This is exactly the decision that many organisations wanted. In a world where companies can already be fined up to EUR 20 million or 4% of annual worldwide turnover for non-compliance with the GDPR, there are real fears concerning the potential for additional significant liability under class action claims for data breaches. Many organisations will be comforted by the steps that the Court has now taken to reduce the likelihood of such claims being successful.

However, it is important to caution against too much optimism. The Morrisons case was unusual in that the compensation claim was brought by individuals despite no regulatory action being taken by the ICO at the time of the data leak itself. The decision is no guarantee that similar claims would fail in circumstances where the regulator agrees that there has been a breach of the security requirements under the GDPR, as has been the case with some of the recent large-scale data breaches which are starting to result in significant fines from the ICO.

Despite the claim not succeeding in this instance, another, perhaps unintended, consequence of the case is that employers will be seriously considering what steps can be taken to mitigate the risk of rogue employees leaking personal data. This may result in increased employee monitoring in the workplace, with all the data privacy implications that entails.

Employment implications

The “close connection” test for rendering an employer vicariously liable for an employee’s actions (as described above) is well established. What has been less clear is how broadly that test should be applied to the facts. For example, it was suggested that the Supreme Court’s ruling in Mohamud v WM Morrison Supermarkets meant that, where an employee’s role involves interacting with customers in some way, an employer might be vicariously liable for the employee’s conduct towards customers even if the employee engages in a wholly different nature of interaction from that envisaged (such as by using force or away from the usual work station) and regardless of motive.

Given there is no ‘reasonable steps’ defence against vicarious liability for torts, employers will welcome today’s ruling which rows back from that liberal interpretation of the test. The Supreme Court has made clear that the mere fact that the job provides the employee with “the opportunity to commit the wrongful act” is not sufficient to impose vicarious liability, nor is the fact that the wrongful act was the culmination of an unbroken temporal or causal chain of events regardless of the employee’s motive. Doing acts of the same kind as those which it is within the employee’s authority to do is not sufficient either.

The test is whether the wrongful act was sufficiently closely connected with what the employee was authorised to do. In Mohamud it was key that the employee was purporting to act on his employer’s business, threatening the customer not to return to the employer’s premises, and not acting to achieve some personal objective. In contrast, in the current case “the employee was not engaged in furthering his employer’s business when he committed the wrongdoing in question. On the contrary, he was pursuing a personal vendetta, seeking vengeance for the disciplinary proceedings some months earlier.” The ruling helpfully re-establishes that employers should not be liable for the acts of employees engaged on “frolics” of their own, pursuing their own objectives.

Cyber and data security implications

While the spectre of no-fault liability presented by Morrisons has fallen away, there is still a significant risk from fault-based claims. The ICO is imposing substantial fines upon organisations for inadequate technical and organisational security measures, and claimants are cutting and pasting adverse findings from the ICO into claim forms. Organisations will have to make sure that the measures they are taking are appropriate, which will involve considering many factors, including the state of the art and the harm that may be caused if the security measures fail.

Class actions implications

Data breach class actions are on the rise in the UK and today’s judgment should be seen as a setback not a roadblock. Funders and claimant firms are looking to build class actions in relation to data breaches even where there is no specific evidence of individual damage. They are seeking damages for the whole class for “distress” or a standardised claim of loss of access to data and even a nominal damages award per claimant could lead to a significant amount over a class of tens or hundreds of thousands. Today’s judgment will not reverse that trend, but it will at least mean that companies who are themselves victims of data breaches by employees will not also face such claims on this basis alone.

The key question for the viability of those claims will be how much a bare data breach is “worth” by way of damages, even if there is no other loss suffered by the victim. We will have to wait a little longer to find that out. The principles applied in misuse of private information cases may be helpful to the courts in considering this issue, given how little case law there is on damages in data protection claims.

Insurance implications

The judgment is good news for corporates and their insurers. The courts below had expected insurance to be the answer to the point that their judgments effectively helped achieve the rogue employee’s aim, namely to harm Morrisons. Insurers may therefore also be breathing a sigh of relief, but only up to a point. Vicarious liability for data breaches by rogue employees is insurable in principle, but these claims are not doomsday for the insurance market. That is because the main risk for corporates, and therefore insurers, is direct liability claims and related losses, which continue apace on an upward trajectory.

The good news for concerned corporates is that they can buy cyber insurance to cover data breach claims, whether for direct or vicarious liabilities, as well as related losses such as the costs of managing the incident, regulatory investigations and loss of profits if systems are impacted. However, risk transfer strategies within corporates vary, and that cover cannot necessarily be banked upon in all cases. The main challenge therefore remains, and is not answered here: how much cover would be needed for a reasonable worst case, and is it available at reasonable cost on a good wording? Given that the measure of damages is still unclear, this issue will continue to be wrestled with.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Tim Leaver
Partner, Employment, Pensions & Incentives, London
+44 20 7466 2305
Julian Copeman
Partner, Disputes, London
+44 20 7466 2168
Greig Anderson
Partner, Disputes, London
+44 20 7466 2229
Andrew Moir
Partner, Global Head of Cyber Security, London
+44 20 7466 2773
Kate Macmillan
Consultant, Disputes, London
+44 20 7466 3737
Lauren Hudson
Associate, Digital TMT & Data, London
+44 20 7466 2483
Anna Henderson
Professional Support Consultant, Employment, Pensions & Incentives, London
+44 20 7466 2819
Maura McIntosh
Professional Support Consultant, Disputes, London
+44 20 7466 2608

COVID-19 People: Data comes to the fore as outbreak continues (UK)

The COVID-19 outbreak is proving an interesting time to be a data protection practitioner. There seems to be a new article each day about the next exciting app which promises to use data to help manage the crisis.

This post focuses on two particular propositions that pose interesting data protection considerations. It also flags the wider issues that developers should bear in mind when trying to respond to this unprecedented crisis.

Contact Tracing

It was reported on 31 March 2020 that the UK government is set to develop some form of contact tracing app in the near future, following successful app-based contact tracing in Singapore and South Korea. Developed by NHSX, the innovation arm of the NHS, the app will use Bluetooth to identify individuals who have been in close proximity to each other, storing a record of that contact and providing a mechanism through which an individual can be notified if they have been in close proximity to someone who has tested positive for COVID-19. Given the anticipated use of Bluetooth, it is possible that NHSX may leverage Singapore’s TraceTogether app, which uses the same technology and the code for which was open-sourced by the Singapore government last week. TraceTogether was widely praised for collecting the bare minimum of data despite the extraordinary circumstances at hand.

The success of any tracing app will depend on a critical mass of users downloading it. Concerns are already being raised about whether private entities might require either employees or customers to use the app to show they have not been in contact with infected individuals. Success will also depend on a comprehensive testing regime which ensures that those who are symptomatic are tested quickly, so that notifications can be sent sufficiently promptly. Similarly, swift testing may help avoid people being unnecessarily required to quarantine after contact with someone whose minor symptoms turn out not to be COVID-19.

It is interesting to note that initial statements from NHSX suggest that contacts will be stored on users’ phones, with notifications sent via the app after a suitable delay to avoid identification of the infected individual. It is not currently intended that the data would be sent regularly to a central authority, which may give comfort to people concerned about their privacy. Additionally, NHSX has indicated that it intends to appoint an ethics board to oversee this project.

COVID Symptom Tracker

ZOE, a health and data science company, in conjunction with Tim Spector, a genetic epidemiology professor at King’s College London, has created an app called ‘COVID Symptom Tracker’ that allows users to self-report potential symptoms of COVID-19, even if they are feeling well. The aim is to use this data to track the progression of the virus in the UK and potentially identify high-risk areas.

At the time of writing the app has been downloaded over 1.5 million times and is listed in Apple’s top 10 free apps in the App Store. The app requires individuals to provide data including age, sex at birth, height, weight, postcode of residence, pre-existing health conditions, and habits such as smoking. Each day, users then report how they are feeling against a list of known symptoms. It appears from the app’s privacy policy that unanonymised personal data may be shared with the NHS or King’s College London, whilst data shared with other entities is given an anonymous identifier.
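On the ‘anonymous identifier’ point, a common technique is to replace a direct identifier with a keyed hash, as in the hypothetical sketch below (the function and key are our own invention, not taken from the app). Strictly, this is pseudonymisation rather than anonymisation: anyone holding the key can re-link the identifier, so the output remains personal data under the GDPR.

```python
import hashlib
import secrets

# Hypothetical sketch: a secret key held by the data controller.
SECRET_KEY = secrets.token_bytes(16)

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a keyed SHA-256 hash.

    This is pseudonymisation, not anonymisation: with SECRET_KEY the
    mapping can be reproduced, so the result is still personal data."""
    return hashlib.sha256(SECRET_KEY + user_id.encode()).hexdigest()

# The same input always maps to the same pseudonym within a key's lifetime,
# which is what lets a recipient link records without seeing the raw user ID.
print(pseudonymise("user@example.com") == pseudonymise("user@example.com"))
```

The stability of the mapping is precisely why such identifiers do not take the data outside the scope of data protection law.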

The app is based on consent, both to the data processing and to potential transfers of personal data to the US. Data is collected for purposes related to COVID-19, including: (i) better understanding the symptoms; (ii) tracking the spread of the virus; (iii) advancing scientific research into links between patients’ health and their response to infection with the virus; and (iv) potentially helping the NHS support sick individuals. Whilst at first glance this seems like a reasonably narrow set of processing purposes, one could envisage a surprisingly broad range of activities falling within these categories, including specifically tracking individuals.

Data protection considerations

When it comes to processing personal data, the post-GDPR mantra is increasingly ‘Just because you can, doesn’t mean you should’. The principles of fairness, transparency, purpose limitation and data minimisation in particular will require serious consideration to ensure that the proposed data usage is justifiable.

Whilst the Secretary of State for Health & Social Care, Matt Hancock, recently tweeted that “the GDPR does not inhibit use of data for coronavirus response”, this is not necessarily aligned with the ICO’s position that the GDPR remains in full force, even if the ICO may take a pragmatic approach where necessary. There are certainly lawful routes to using personal data to fight COVID-19, but this should be done on the basis of clear reasoning and analysis.

With that in mind, the following key considerations may assist when evaluating whether or not to use personal data in the context of COVID-19:

  • be confident that you have an appropriate lawful basis for processing the personal data. Remember that both vital interests and substantial public interest are very high bars to satisfy. Likewise, legitimate interests must always be balanced against any potential impact on individuals’ rights and freedoms;
  • do not use personal data for extraneous purposes. You should aim to keep your processing purposes as narrow as possible for the stated aims, and be conscious that any attempt to use the dataset for non-COVID-19-related reasons might be seen as acting in bad faith. Similarly, the collected data should be limited to what is strictly necessary for the processing purposes. Avoid the temptation to collect additional categories of personal data because they ‘may’ be useful in future;
  • the anticipated volume of data processing, and the categories of personal data involved, suggest that a data protection impact assessment (DPIA) should be undertaken for many of the COVID-19 related apps. These should be completed carefully and not rushed for the sake of getting an app into the live environment;
  • consider who personal data is shared with, and whether sharing a full dataset is strictly necessary. It may be possible to anonymise personal data such that the recipient only receives fully anonymised data, which may help manage data subject concerns about where their personal data might go. Remember however that true anonymisation is difficult and pseudonymisation alone does not take data outside of the scope of the GDPR;
  • given the potentially higher risk processing that is taking place, it is important that data subjects understand how their personal data will be used, and who it may be shared with, particularly where they are giving up unusual freedoms such as in the context of tracking. Data controllers should aim to go above and beyond to ensure their fair processing information is clear and easy to understand, so that individuals have good expectations of how their data will be used;
  • if and when relying on data subject consent for any processing, it is likewise important to ensure that the individuals understand exactly what they are consenting to. Now more than ever it is vital that consent is specific, freely given, informed and explicit when dealing with sensitive health data;
  • personal data collected in the context of COVID-19 is generally required for the specific aim of managing the outbreak of the virus or its effects. This may mean that it is not necessary or appropriate to retain this personal data once the virus has been controlled and life returns to normal, depending on what has been communicated to data subjects; and
  • holding larger volumes of personal data, or special category data, potentially represents a higher security risk and may attract more cyber attacks on the dataset. Ensure that you have appropriate additional security measures in place where necessary.
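The distinction between pseudonymisation and anonymisation drawn above can be illustrated with a short sketch. This is purely illustrative and not part of any app discussed here; the salted-hash approach and all names are assumptions. The point is that a deterministic token keeps records linkable, which is precisely why pseudonymised data stays within the scope of the GDPR:

```python
import hashlib

def pseudonymise(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash.
    The mapping is deterministic, so records stay linkable."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

SALT = "org-secret-salt"  # hypothetical key held by the controller

# Two records about the same person map to the same token, so the
# dataset is pseudonymised, not anonymised: anyone holding the salt,
# or simply able to link tokens across records, can single out an
# individual even without knowing their name.
token_a = pseudonymise("alice@example.com", SALT)
token_b = pseudonymise("alice@example.com", SALT)
assert token_a == token_b  # linkable, hence still personal data
```

True anonymisation would require breaking this linkability altogether, for example by aggregating records, which is considerably harder than swapping names for tokens.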
Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Hannah Brown
Associate, Digital TMT, Sourcing and Data, London
+44 20 7466 2677


Given the COVID-19 crisis, it is likely that data protection is no longer at the forefront of every controller’s mind and that business continuity has taken precedence instead. Acknowledging this shift and the need for companies to divert business as usual resources to their response to the crisis, the ICO has published two articles on its website, which are aimed at both controllers and data subjects more widely. Continue reading

International Data Privacy Day: Our predictions for 2020

What better day than today, International Data Privacy Day, to explore what 2020 is likely to have in store for data and privacy? Almost two years ago the EU General Data Protection Regulation (GDPR) thrust data and privacy issues firmly into the spotlight, where they remain. With attention having shifted from guidance to enforcement, this article sets out some predictions for further developments in the year to come.

  • Data ethics: The discussion is moving from “what can we do” to “what should we do” with data. Organisations are coming under increased pressure, not just from consumers who are now demanding greater transparency around how their data is collected, used and handled, but also other stakeholders such as government, regulators, industry bodies and shareholders. 2020 is likely to be the year in which we will see an increased focus in the boardroom on how to incorporate ‘ethical practices’ into data strategies, to leverage consumer trust and drive long-term profitability.
  • GDPR fines: In 2020 we expect to see the final enforcement notices for the British Airways and Marriott data breaches issued by the UK’s data protection authority, the Information Commissioner’s Office (ICO). These had originally been expected in early January, but an extension was agreed and final enforcement notices are now expected in March 2020 to finalise the penalties imposed on both organisations, both of which were the result of high-profile data breaches and subsequent ICO investigations.
  • GDPR enforcement activity: Is 2020 also the year in which we see other big data breaches, investigations and fines? 2020 will also likely see a shift in enforcement activity – going beyond data breaches to other areas of non-compliance with the GDPR. For example, the Berlin data protection authority imposed a €14.5 million fine on a real estate company for over-retention of personal data. Elsewhere in Europe, 2020 should be the year when we see the results of the Irish Data Protection Commissioner’s investigations into some of the biggest tech companies, including WhatsApp and Twitter.
  • Adtech focus: We also expect the GDPR to start becoming real for the adtech sector in 2020. In June 2019, the ICO released its Adtech Update Report, with a clear message to the real-time bidding industry that they had six months to act; the ICO expressed significant concerns about the lawfulness of the processing of special category data and the lack of explicit consent for that processing. That six-month period is now up, and while – to the dismay of privacy advocates – the ICO has announced that the proposals of the leaders of the industry, the Internet Advertising Bureau (IAB) and Google, will result in real improvements to the handling of personal data, in the same statement, it has stated that “[t]hose who have ignored the window of opportunity to engage and transform must now prepare for the ICO to utilise its wider powers.” So, will 2020 be the year in which we see meaningful enforcement action from the ICO in this area?
  • Adequacy decision for the UK: Yes, a Brexit-related prediction had to feature somewhere on this list. At the time of writing, it looks set that the United Kingdom will leave the European Union on 31 January 2020, with an 11-month transition period in place. The pertinent question now is what will Brexit look like at the end of this transition period, and in particular with respect to how international data transfers will be treated. It may be that 2020 is the year in which the European Commission makes an adequacy decision in favour of the United Kingdom, but concerns remain over the processing of personal data for law enforcement purposes in the UK – and the EU’s data protection supervisor has essentially said that the United Kingdom is at the back of the queue for any such decision. So, will 2020 be the year of a United Kingdom adequacy decision, or will it be the year in which organisations undertake a review of their UK data transfer flow agreements in a scramble to be compliant?
  • Lead supervisory authority no more: From 31 January 2020, the ICO will no longer be a supervisory authority for GDPR purposes and will not participate in the one stop shop mechanism or the consistency and cooperation procedure. The ICO will also lose its power to be the lead supervisory authority for approving binding corporate rules. It is possible that any future deal may change that position, but in the meantime multinational organisations whose activities are caught by the GDPR should ensure that they have an appropriate lead supervisory authority based in an EU Member State.
  • Schrems II and the SCCs: While in the case of Schrems II, the Advocate General (AG) of the Court of Justice of the European Union (CJEU) issued an opinion that upheld the validity of the European Commission standard contractual clauses (SCCs), the AG also raised concerns about the practical use of the SCCs in jurisdictions where national security laws would breach the SCCs, and suggested moving the responsibility for using the SCCs away from the data importer to the individual company exporting data. If the CJEU follows this opinion, which is expected in the first quarter of 2020, it could result in substantial additional burdens before using SCCs. It could also have ramifications for the United Kingdom after Brexit.
  • Fall of the US Privacy Shield: In Schrems II, the AG opinion also expressed concerns over the EU/US Privacy Shield. If the CJEU follows the AG’s opinion then it could influence the case of La Quadrature du Net v Commission – a case concerning the French advocacy group, La Quadrature du Net, which is seeking to invalidate the Privacy Shield on the basis that it fails to uphold fundamental EU rights because of US government mass surveillance practices. Will 2020 be the year we see the Privacy Shield suffer the same fate as its predecessor, the Safe Harbour?
  • Artificial Intelligence regulation: The European Commission’s incoming president, Ursula von der Leyen, has stated that she will put forward legislation to regulate the use of artificial intelligence and only this month a draft Commission white paper was leaked, which floated a number of options on how to achieve this. This ranged from imposing mandatory risk-based requirements on developers, to sector-specific requirements, to voluntary labelling. Although it would not be a reality for a number of years, 2020 looks likely to be the year that we see a firmer picture emerge about the direction that the European Commission wishes to take AI regulation.
  • Data class actions: In November 2019, the Supreme Court heard Morrisons’ appeal of the finding that it was vicariously liable under the Data Protection Act 1998 for a data breach committed by a disgruntled employee, even though Morrisons themselves were data protection compliant. While this case involves the law as it stood before the GDPR, given the increase in the rights of data subjects under the GDPR, should the Supreme Court decision find in favour of the claimants, this could open the door in 2020 to a wave of class actions from employees, customers, and others whose personal data has been compromised in a data breach.
  • Data-focused commercial disputes: And it is not just collective actions from data subjects that may rise – in 2020 we could also see increased data protection-focused litigation and commercial disputes in the business to business sphere, as the spotlight continues to remain on data. For example, disputes over the allocation of liability where a controller has been fined and is seeking to claim this back from a third party processor. Which leads us on to…
  • Third party risks: Focus in 2020 will also be firmly directed at third party risk management and demands on suppliers and vendors to demonstrate compliance. Gartner research reveals that “compliance programs are focused on third-party risk more than ever before, with more than twice the number of compliance leaders considering it a top risk in 2019 than three years ago.” As the nature of third party relationships continues to evolve, and the amount of data that third parties host and process for organisations on the rise, processes and procedures also need to evolve to address this risk.
  • Data is a global issue: In the wake of the GDPR and the California Consumer Privacy Act, we are seeing a global trend of other jurisdictions introducing, or seeking to introduce, more robust data protection laws. For example, 2020 will see both the Brazilian General Data Protection Law (which is largely based on the GDPR) and Thailand’s Personal Data Protection Act come into force. Other data protection legislation initiatives are also going through approval stages – for example, the New Zealand Privacy Bill and India’s first major data protection bill.
  • ePrivacy: But will 2020 be the year that finally sees agreement on the new ePrivacy proposals in Europe? The update to the European legislation which regulates cookies and electronic marketing has been plagued by delays and disagreements. Even if 2020 is the year that ePrivacy is finally agreed in Europe, considerations will then move to the UK’s own approach to ePrivacy in a post-Brexit world.

For more information, or if you have any queries, please contact Miriam Everett.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Chloe Kite
Associate, London
+44 20 7466 2540

Protecting your company’s critical resource: Is your company PDPA ready?

Data has been labelled the world’s most valuable resource in our current digital economy. It is the lifeblood of many companies, especially those in the technology, media and telecommunications sector, where data is often used to predict, analyse and respond to consumers’ behaviours, patterns and preferences for services and products. Capabilities to collect and analyse mass data are therefore seen as a decisive factor distinguishing whether one company is a cut above the rest, using data to accurately determine current and future market trends. But in a regulated society, companies cannot freely process whatever data they choose – a balance must be struck between technological innovation and protection of individuals’ rights attaching to their personal data. Continue reading

Facial Recognition Technology and Data Protection Law: the ICO’s view

The Information Commissioner’s Office in the UK (ICO) has announced an investigation into the use of facial recognition technology following a string of high-profile uses. Prior to the results of this investigation, companies using facial recognition technology should:

  • undertake a balancing test to ensure proportionality in the use of such technology, acknowledging its intrusiveness;
  • ensure that appropriate documentation, including data protection impact assessments and policy documentation are developed and maintained; and
  • monitor use of the technology to eliminate any potential bias in the algorithms.

The use of Live Facial Recognition Technology (LFR) in public places has increased considerably over the last few years by the police, other law enforcement agencies and also by the private sector. This increase is causing growing concern amongst regulators, government and ethics committees relating to the serious risks it poses to privacy given the sensitive nature of the processing involved, the potential volume of people affected and the level of intrusion into privacy it has the capacity to create. Moves are now being made to address the use of this technology and put a legal framework in place in a bid to mitigate the risks it poses.

ICO launches facial recognition Investigation

The Information Commissioner, Elizabeth Denham, published a blog on 9 July 2019 entitled “Live Facial Recognition Technology – data protection law applies”, announcing that the ICO is conducting investigations into the use of LFR in the King’s Cross area of London.

The ICO investigation follows a spate of LFR trials at various sites across the country, including Meadowfield Shopping Centre in Liverpool, Liverpool’s World Museum, Manchester’s Trafford Centre and King’s Cross, where the technology has been used primarily by police forces, but also in conjunction with site owners, to identify individuals at risk or linked to criminal activity.

The ICO was also recently called to advise the judge on data protection law in the case of R (Bridges) v Chief Constable of South Wales Police (SWP).

The ICO’s principal concern is that organisations using facial recognition technology, including the police, should be able to provide demonstrable evidence when deploying it that it is ‘necessary, proportionate and effective considering its invasiveness.’

In addition, she emphasises that police forces must comply with data protection law which currently includes the GDPR and the Data Protection Act 2018, paying particular attention to the compilation of watch lists, the selection of images used and the need to remove inherent bias in the technology to prevent false-positive matches from certain ethnic groups.

ICO Guidance

The ICO has issued guidance for police forces considering the deployment of LFR, consisting of four basic instructions:

  1. Conduct a Data Protection Impact Assessment (DPIA) before any deployment of LFR and submit it to the ICO for consideration, to ensure timely discussion on mitigation of risks.
  2. Create a separate policy document to cover the use of LFR which establishes for what type of circumstances, in what types of places, at what times and in what way the technology will be used.
  3. Monitor algorithms within the software to ensure that no race or sex bias is created.
  4. Read the ICO Guide to Law Enforcement Processing, which deals with Part 3 of the DPA and highlights individual rights (including the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restriction and the right not to be subject to automated decision-making) and the importance of accountability and governance.
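By way of illustration only, the algorithm monitoring in point 3 could start with a simple audit of false-positive match rates per demographic group; the data layout and function below are assumptions for the sketch, not part of the ICO guidance:

```python
from collections import defaultdict

def false_positive_rates(events):
    """events: (group, was_flagged, was_genuine_match) tuples from an
    audit log. Returns the false-positive rate per demographic group,
    a basic signal for the kind of bias the ICO guidance warns about."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, was_flagged, was_match in events:
        if not was_match:            # person was not on the watch list
            negatives[group] += 1
            if was_flagged:          # but the system flagged them anyway
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Hypothetical audit log: a large gap between groups would warrant review.
log = [("A", True, False), ("A", False, False),
       ("B", True, False), ("B", True, False)]
rates = false_positive_rates(log)  # {'A': 0.5, 'B': 1.0}
```

A real deployment would of course need ground-truth labels and far larger samples, but even this simple per-group comparison makes differential error rates visible.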

This Guidance should be considered carefully by any business contemplating the use of this type of technology.

This has been a critical moment for regulators to begin scrutinising LFR and providing guidance, given the inherent risk it poses of abuse of data protection and privacy laws, and the results of the ICO’s investigation are anticipated with great interest. It is likely that greater vigilance will be called for in the future, especially given the expected rise in the use of this technology and as new uses of it come into play.

For example, LFR technology has already been developed that combines facial recognition with people’s mobile phones and may be used to speed up the immigration process. LFR is evidently a potentially very useful tool for the enhancement of public safety, but the accuracy of images and the elimination of bias in algorithms will undoubtedly be critical if this technology is to be adopted in the mainstream and in compliance with applicable privacy legislation.

Miriam Everett
Partner, London
+44 20 7466 2378