COVID-19: ICO OPINES ON APPLE AND GOOGLE’S CONTACT TRACING TECHNOLOGY (UK)

On 17 April 2020, the ICO published an opinion by the Information Commissioner (the “Commissioner”) on Apple and Google’s joint initiative to develop COVID-19 contact tracing technology (the “Opinion”, available here).

Summary

  • The Commissioner found the CTF (Apple and Google’s contact tracing framework, described below) to be aligned with the principles of data protection by design and by default.
  • Controllers designing contact tracing apps that use the CTF should ensure alignment with data protection law and regulation, especially if they process personal data (which the CTF does not require).
  • The Commissioner raised concerns regarding individuals assuming that the CTF’s compliance with data protection principles will extend to all aspects of the contact tracing app – which is not necessarily the case.
  • Therefore, it should be made clear to any app users who is responsible for data processing, especially if the app processes data outside of the CTF’s limited scope.
  • Data controllers designing CTF-enabled contact tracing apps must be transparent with potential and actual app users on the type of information they will be processing.
  • Finally, when it comes to a user’s ability to disable Bluetooth, the Commissioner observed that with regard to contact tracing apps in general: “a user should not have to take action to prevent tracking”.

As set out in our previous blogpost (available here), contact tracing is one of the measures being contemplated or implemented by European governments (including in the UK and Germany) in order to be able to put an end to lockdowns while containing the spread of the virus.

The scope of the Opinion was limited to the design of the contact tracing framework which enables the development of COVID-19 contact tracing apps by public health authorities through the use of Bluetooth technology (the “CTF”).

It is also worth noting that this Opinion was published in the midst of a heated debate on contact tracing technology and fears that it may be used for mass surveillance – in an open letter published on 20 April 2020, around 300 international academics cautioned against creating a tool which would enable large-scale data collection on populations.

How does the CTF work?

The CTF is composed of “application programming interfaces” as well as “operating system level technology to assist contact tracing”. The collaboration between Apple and Google means that apps developed by public health authorities using the CTF will be interoperable across Android and iOS devices.

When two devices with contact tracing apps come into proximity, each device will exchange cryptographic tokens (which change frequently) via Bluetooth technology. Each token received will be stored in a ‘catalogue’ on the user’s device, effectively creating a record of all other devices a user has come into contact with. Once a user is diagnosed with COVID-19, and after they have given their consent, the app will upload the tokens that the user’s own device has broadcast to a server. Other users’ devices will periodically download this list of broadcast tokens of users who have tested positive for COVID-19. If a match is found between the broadcast tokens and the ‘catalogue’ of tokens stored on each user’s device, the app will notify the user that he/she has come into contact with a person who has tested positive and will suggest appropriate measures to be taken.
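
To make this flow concrete, the following is a minimal, purely illustrative sketch in Python. It is not Apple or Google’s actual API: the token format, the ‘catalogue’ structure and the server interface are simplified assumptions made for illustration only.

```python
import secrets

class Device:
    """Illustrative model of a CTF-style device (not the real Apple/Google API)."""

    def __init__(self):
        self.catalogue = set()    # tokens received from nearby devices
        self.broadcast_log = []   # tokens this device has broadcast itself

    def new_token(self):
        # The real framework derives tokens cryptographically and rotates
        # them frequently; a random 16-byte value stands in for that here.
        token = secrets.token_bytes(16)
        self.broadcast_log.append(token)
        return token

    def receive(self, token):
        # Every token heard over Bluetooth is stored in the local catalogue.
        self.catalogue.add(token)

    def check_exposure(self, positive_tokens):
        # Matching happens on the device: the downloaded list of tokens
        # broadcast by diagnosed users is compared against the catalogue.
        return bool(self.catalogue & set(positive_tokens))


# Two devices come into proximity and exchange rotating tokens.
alice, bob = Device(), Device()
bob.receive(alice.new_token())
alice.receive(bob.new_token())

# Alice tests positive and consents: the tokens *she* broadcast go to a server.
server_positive_tokens = list(alice.broadcast_log)

# Other devices periodically download the list and match locally.
print(bob.check_exposure(server_positive_tokens))    # True  -> Bob is notified
print(alice.check_exposure(server_positive_tokens))  # False
```

Note that in this model the server only ever holds the tokens broadcast by users who have tested positive and consented to the upload; the matching itself, as the Opinion emphasises, never leaves the device.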

How does the CTF comply with data protection laws?

The Opinion finds that, based on the information released by Google and Apple on 10 April 2020, the CTF is compliant with principles of data protection by design and by default because:

  1. The data collected by the CTF is minimal: The information contained in the tokens exchanged does not include any personal data (such as account information or usernames) or any location data. Furthermore, the ‘matching process’ between the tokens of users who have tested positive for COVID-19 and the tokens stored on each user’s phone happens on the device and therefore does not involve the app developer or any third party.
  2. The CTF incorporates sufficient security measures: The cryptographic nature of the token, which is generated on the device (outside the control of the contact tracing app), means that the information broadcast to other nearby devices cannot be related to an identifiable individual. In addition, the fact that the tokens generated by one device change frequently (to prevent them being traced back to individual users) minimises the risk of identifying a user from an interaction between two devices – a simple sketch of this rotation appears after this list.
  3. The user maintains sufficient control over contact tracing apps which use the CTF: Users will voluntarily download and install the contact tracing app on their phone (although this may change in ‘Phase 2’ of the CTF, as discussed below). Users also have the ability to remove and disable the app. In addition, the uploading of a user’s tokens by the app developer once he/she has tested positive requires a separate consent process.
  4. The CTF’s purpose is limited: The CTF is built for the limited purpose of notifying users who have come into contact with people who have tested positive for COVID-19, and the Commissioner stresses that any expansion of the use of CTF-enabled apps beyond this purpose will require a fresh assessment of compliance with data protection principles.
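
To illustrate the second point, frequent token rotation can be pictured as deriving each token from a device-held secret and the current time interval, as in the hedged sketch below. The key-derivation scheme, the 15-minute interval and the token size are illustrative assumptions, not the CTF’s actual cryptography.

```python
import hashlib
import hmac
import secrets
import time

# A secret that never leaves the device (illustrative only).
DEVICE_KEY = secrets.token_bytes(32)

TOKEN_LIFETIME_SECONDS = 15 * 60   # assumed rotation interval


def current_token(key, now=None):
    """Derive the broadcast token for the current time interval.

    Without the device key, successive tokens look like unrelated random
    values, so an observer cannot trace broadcasts back to one device.
    """
    now = time.time() if now is None else now
    interval = int(now // TOKEN_LIFETIME_SECONDS)
    mac = hmac.new(key, interval.to_bytes(8, "big"), hashlib.sha256)
    return mac.digest()[:16]   # truncate to a 16-byte broadcast token


# Tokens from different intervals do not match and cannot be correlated
# without the key – the property the Commissioner credits in point 2.
t0 = current_token(DEVICE_KEY, now=0)
t1 = current_token(DEVICE_KEY, now=TOKEN_LIFETIME_SECONDS)
assert t0 != t1
```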

What clarifications are required?

The Commissioner raises a number of questions on the practical functioning of the CTF, especially in respect of the collection and withdrawal of user consent post-diagnosis. It is unclear how the CTF will facilitate the uploading of stored tokens to the app. Although consent will be required from the user, clarity is needed on: (i) how a CTF-enabled app will manage the consent signal; and (ii) what control will be given to users in this respect. In addition, the Commissioner notes a lack of information on how consent withdrawal will affect the effectiveness of contact tracing solutions and the notifications sent to other users once an individual has been diagnosed.

Issues for developers

The Commissioner will pay close attention to the implementation of the CTF in contact tracing apps. In particular, the CTF does not prevent app developers from collecting other types of data, such as location data. Although the reasons for collecting other types of user information may be “legitimate and permissible” in pursuit of the public health objective of these apps (for example, to ensure the system is not flooded with false diagnoses or to assess compliance with isolation measures), the Commissioner warns that the data protection implications will need to be assessed by the controller – which includes the public health organisations that develop (or commission the development of) contact tracing apps.

Another issue raised by the Commissioner is the potential user assumption that the CTF’s compliance with data protection laws will extend to all other functionalities which may be built into contact tracing apps. In this regard, the Commissioner reminds app developers that, in addition to assessing data protection compliance in relation to the other categories of data processed by the app, they will need to specify clearly to users who is responsible for data processing, in order to comply with the transparency and accountability principles.

Finally, the Commissioner stressed that data controllers, such as app developers, must assess the data protection implications of both (i) the data processed through the app and (ii) the processing undertaken by way of the CTF, in order to ensure that both layers of processing are fair and lawful.

What has the ICO said about ‘Phase 2’ of the CTF?

‘Phase 2’ of the development of the CTF aims to integrate the CTF into the operating system of each device. The Commissioner notes that users’ control, including their ability to disable contact tracing or to withdraw their consent to it, should be considered when developing this next phase of the CTF.

With regard to a user’s ability to disable Bluetooth on their device, the Commissioner observes, in respect of ‘Phase 2’ of the CTF and contact tracing apps in general, that “a user should not have to take action to prevent tracking”.

How does this Opinion affect the development of the Decentralized Privacy-Preserving Proximity Tracing protocol?

The Opinion can be applied to the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol in so far as it is similar to the CTF. The Commissioner states that the similarities between the two projects give her comfort that “these approaches to contact tracing app solutions are generally aligned with the principles of data protection by design and by default”.

Insight

This Opinion is an important step in the development and roll-out of contact tracing apps in the UK. As mentioned above, contact tracing is one of the tools the UK Government needs in order to lift the lockdown measures while minimising the impact of a potential second wave of infections. This has an indirect impact on the private sector, as it will affect how and when employees will be able to go back to work.

The fact that the principles on which the CTF is based are compliant with data protection laws is crucial to the successful roll-out of contact tracing apps. For these apps to be effective, they must be voluntarily downloaded by a large number of mobile users. Given the concerns around letting governments accumulate data on the population under the guise of putting an end to the pandemic, trust is a determining factor in this equation. The Commissioner’s approval of the foundation for these contact tracing apps will certainly play a role in gaining the public’s trust and its willingness to give up some privacy in order to put an end to the current public health crisis.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Hannah Brown
Associate, Digital TMT, Sourcing and Data, London
+44 20 7466 2677
Ghislaine Nobileau
Trainee Solicitor, London
+44 20 7466 7503

COVID-19: ICO publishes details of its regulatory approach during COVID-19 (UK)

The ICO has published details of its regulatory approach during the ongoing COVID-19 emergency; this approach should reassure entities that are adapting to the economic and practical realities of operating in the current climate while balancing their data protection obligations. The UK regulator has continued to be reasonable and pragmatic, as outlined in our previous post in relation to response times to DSARs, and has stated that it is “committed to an empathetic…approach”. Overall, the key takeaways from this guidance are that: Continue reading

ICO TELLS PEOPLE TO EXPECT DELAYS TO DSARS DURING COVID-19

Given the COVID-19 crisis, it is likely that data protection is no longer at the forefront of every controller’s mind and that business continuity has taken precedence. Acknowledging this shift, and the need for companies to divert business-as-usual resources to their response to the crisis, the ICO has published two articles on its website, aimed at both controllers and data subjects more widely. Continue reading

The ICO publishes its Age-Appropriate Design Code of Practice for online services

Following a public consultation on its draft code of practice with parents, children, schools, children’s campaign groups, developers, tech and gaming companies and online service providers, which closed on 31 May 2019, the Information Commissioner’s Office (ICO) submitted its Age-Appropriate Design Code of Practice on 12 November 2019. Due to restrictions in the pre-election period, however, it was not permitted to be published until 23 January 2020. Continue reading

International Data Privacy Day: Our predictions for 2020

What better day than today, International Data Privacy Day, to explore what 2020 is likely to have in store for data and privacy? Almost two years ago the EU General Data Protection Regulation (GDPR) thrust data and privacy issues firmly into the spotlight, where they remain. With attention having shifted from guidance to enforcement, this article sets out some predictions for further developments in the year to come.

  • Data ethics: The discussion is moving from “what can we do” to “what should we do” with data. Organisations are coming under increased pressure, not just from consumers who are now demanding greater transparency around how their data is collected, used and handled, but also other stakeholders such as government, regulators, industry bodies and shareholders. 2020 is likely to be the year in which we will see an increased focus in the boardroom on how to incorporate ‘ethical practices’ into data strategies, to leverage consumer trust and drive long-term profitability.
  • GDPR fines: In 2020 we expect to see the final enforcement notices for the British Airways and Marriott data breaches issued by the UK’s data protection authority, the Information Commissioner’s Office (ICO). These had originally been expected in early January, but an extension was agreed and final enforcement notices are now expected in March 2020 to finalise the penalties imposed on the two organisations, each of which resulted from a high-profile data breach and subsequent ICO investigation.
  • GDPR enforcement activity: Is 2020 also the year in which we see other big data breaches, investigations and fines? 2020 will likely see a shift in enforcement activity – going beyond data breaches to other areas of non-compliance with the GDPR. For example, the Berlin data protection authority imposed a €14.5 million fine on a real estate company for the over-retention of personal data. Elsewhere in Europe, 2020 should be the year when we see the results of the Irish Data Protection Commissioner’s investigations into some of the biggest tech companies, including WhatsApp and Twitter.
  • Adtech focus: We also expect the GDPR to start becoming real for the adtech sector in 2020. In June 2019, the ICO released its Adtech Update Report, with a clear message to the real-time bidding industry that it had six months to act; the ICO expressed significant concerns about the lawfulness of the processing of special category data and the lack of explicit consent for that processing. That six-month period is now up, and while – to the dismay of privacy advocates – the ICO has announced that the proposals of the industry’s leaders, the Internet Advertising Bureau (IAB) and Google, will result in real improvements to the handling of personal data, in the same statement it stated that “[t]hose who have ignored the window of opportunity to engage and transform must now prepare for the ICO to utilise its wider powers.” So, will 2020 be the year in which we see meaningful enforcement action from the ICO in this area?
  • Adequacy decision for the UK: Yes, a Brexit-related prediction had to feature somewhere on this list. At the time of writing, it looks set that the United Kingdom will leave the European Union on 31 January 2020, with an 11-month transition period in place. The pertinent question now is what Brexit will look like at the end of this transition period, in particular with respect to how international data transfers will be treated. It may be that 2020 is the year in which the European Commission makes an adequacy decision in favour of the United Kingdom, but concerns remain over the processing of personal data for law enforcement purposes in the UK – and the EU’s data protection supervisor has essentially said that the United Kingdom is at the back of the queue for any such decision. So, will 2020 be the year of a United Kingdom adequacy decision, or will it be the year in which organisations undertake a review of their UK data transfer flow agreements in a scramble to be compliant?
  • Lead supervisory authority no more: From 31 January 2020, the ICO will no longer be a supervisory authority for GDPR purposes and will not participate in the one stop shop mechanism or the consistency and cooperation procedure. The ICO will also lose its power to be the lead supervisory authority for approving binding corporate rules. It is possible that any future deal may change that position, but in the meantime multinational organisations whose activities are caught by the GDPR should ensure that they have an appropriate lead supervisory authority based in an EU Member State.
  • Schrems II and the SCCs: While in Schrems II the Advocate General (AG) of the Court of Justice of the European Union (CJEU) issued an opinion that upheld the validity of the European Commission standard contractual clauses (SCCs), the AG also raised concerns about the practical use of the SCCs in jurisdictions where national security laws would breach the SCCs, and suggested moving the responsibility for the use of the SCCs away from the data importer to the individual company exporting the data. If the CJEU follows this opinion in its judgment, expected in the first quarter of 2020, it could result in substantial additional burdens before SCCs can be used. It could also have ramifications for the United Kingdom after Brexit.
  • Fall of the US Privacy Shield: In Schrems II, the AG opinion also expressed concerns over the EU/US Privacy Shield. If the CJEU follows the AG’s opinion then it could influence the case of La Quadrature du Net v Commission – a case concerning the French advocacy group, La Quadrature du Net, which is seeking to invalidate the Privacy Shield on the basis that it fails to uphold fundamental EU rights because of US government mass surveillance practices. Will 2020 be the year we see the Privacy Shield suffer the same fate as its predecessor, the Safe Harbour?
  • Artificial Intelligence regulation: The European Commission’s incoming president, Ursula von der Leyen, has stated that she will put forward legislation to regulate the use of artificial intelligence, and only this month a draft Commission white paper was leaked which floated a number of options on how to achieve this. These ranged from imposing mandatory risk-based requirements on developers, to sector-specific requirements, to voluntary labelling. Although regulation would not become a reality for a number of years, 2020 looks likely to be the year in which a firmer picture emerges of the direction the European Commission wishes to take on AI regulation.
  • Data class actions: In November 2019, the Supreme Court heard Morrisons’ appeal against the finding that it was vicariously liable under the Data Protection Act 1998 for a data breach committed by a disgruntled employee, even though Morrisons itself was data protection compliant. While this case involves the law as it stood before the GDPR, given the increase in the rights of data subjects under the GDPR, should the Supreme Court find in favour of the claimants, this could open the door in 2020 to a wave of class actions from employees, customers, and others whose personal data has been compromised in a data breach.
  • Data-focused commercial disputes: And it is not just collective actions from data subjects that may rise – in 2020 we could also see increased data protection-focused litigation and commercial disputes in the business to business sphere, as the spotlight continues to remain on data. For example, disputes over the allocation of liability where a controller has been fined and is seeking to claim this back from a third party processor. Which leads us on to…
  • Third party risks: Focus in 2020 will also be firmly directed at third party risk management and demands on suppliers and vendors to demonstrate compliance. Gartner research reveals that “compliance programs are focused on third-party risk more than ever before, with more than twice the number of compliance leaders considering it a top risk in 2019 than three years ago.” As the nature of third party relationships continues to evolve, and with the amount of data that third parties host and process for organisations on the rise, processes and procedures also need to evolve to address this risk.
  • Data is a global issue: In the wake of the GDPR and the California Consumer Privacy Act, we are seeing a global trend of other jurisdictions introducing, or seeking to introduce, more robust data protection laws. For example, 2020 will see both the Brazilian General Data Protection Law (which is largely based on the GDPR) and Thailand’s Personal Data Protection Act come into force. Other data protection legislation initiatives are also going through approval stages – for example, the New Zealand Privacy Bill and India’s first major data protection bill.
  • ePrivacy: But will 2020 be the year that finally sees agreement on the new ePrivacy proposals in Europe? The update to the European legislation which regulates cookies and electronic marketing has been plagued by delays and disagreements. Even if 2020 is the year that ePrivacy is finally agreed in Europe, considerations will then move to the UK’s own approach to ePrivacy in a post-Brexit world.

For more information, or if you have any queries, please contact Miriam Everett.

Miriam Everett
Partner, Head of Data Protection and Privacy, London
+44 20 7466 2378
Chloe Kite
Associate, London
+44 20 7466 2540

ICO OPENS CONSULTATION ON DATA SUBJECT ACCESS RIGHTS

The ICO (the UK privacy regulator) has published draft guidance on the right of individuals under the GDPR to access their data. Key takeaways include:

  • An acknowledgement that subject access requests can be burdensome, with a requirement to ‘make extensive efforts’ to locate and retrieve information, and confirmation that a significant burden does not make a request ‘excessive’;
  • A warning against companies asking for proof of identity as a matter of course when there is no reason to doubt the requestor’s identity; and
  • Confirmation that it is possible to consider the intention or motive behind a subject access request when assessing whether or not it is possible to refuse to comply.

Continue reading

Facial Recognition Technology and Data Protection Law: the ICO’s view

The Information Commissioner’s Office in the UK (ICO) has announced an investigation into the use of facial recognition technology following a string of high-profile uses. Pending the results of this investigation, companies using facial recognition technology should:

  • undertake a balancing test to ensure proportionality in the use of such technology, acknowledging its intrusiveness;
  • ensure that appropriate documentation, including data protection impact assessments and policy documentation are developed and maintained; and
  • monitor use of the technology to eliminate any potential bias in the algorithms.

The use of Live Facial Recognition Technology (LFR) in public places has increased considerably over the last few years, driven by the police, other law enforcement agencies and the private sector. This increase is causing growing concern amongst regulators, government and ethics committees, given the sensitive nature of the processing involved, the potential volume of people affected and the level of intrusion into privacy the technology is capable of creating. Moves are now being made to address the use of this technology and put a legal framework in place in a bid to mitigate the risks it poses.

ICO launches facial recognition investigation

The Information Commissioner, Elizabeth Denham, published a blog on 9 July 2019 entitled “Live facial recognition technology – data protection law applies” (available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/07/blog-live-facial-recognition-technology-data-protection-law-applies/) and announced that the Information Commissioner’s Office (ICO) is conducting investigations into the use of LFR in the King’s Cross area of London.

The ICO investigation follows a spate of LFR trials at various sites across the country, including Meadowfield Shopping Centre in Liverpool, Liverpool’s World Museum, Manchester’s Trafford Centre and King’s Cross, where the technology has been used primarily by police forces, but also in conjunction with site owners, to identify individuals at risk or linked to criminal activity.

The ICO was also recently called to advise the judge on data protection law in the case of R (Bridges) v Chief Constable of South Wales Police (SWP).

The ICO’s principal concern is that organisations utilising facial recognition technology, including the police, should be able to provide demonstrable evidence when deploying this technology that it is ‘necessary, proportionate and effective considering its invasiveness’.

In addition, the Commissioner emphasises that police forces must comply with data protection law, which currently includes the GDPR and the Data Protection Act 2018, paying particular attention to the compilation of watch lists, the selection of images used and the need to remove inherent bias in the technology to prevent false-positive matches for certain ethnic groups.

ICO Guidance

The ICO has issued guidance for police forces considering the deployment of LFR, which consists of four basic instructions:

  1. Conduct a Data Protection Impact Assessment (DPIA) before any deployment of LFR and submit it to the ICO for consideration to ensure timely discussion on the mitigation of risks.
  2. Create a separate policy document to cover the use of LFR which establishes in what types of circumstances, in what types of places, at what times and in what ways the technology will be used.
  3. Monitor the algorithms within the software to ensure that no race or sex bias emerges (a minimal illustration of such monitoring follows this list).
  4. Read the ICO Guide to Law Enforcement Processing (available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-law-enforcement-processing/) which deals with Part 3 of the DPA and highlights individual rights (including the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restriction and the right not to be subject to automated decision-making) and the importance of accountability and governance.
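
On the third instruction, monitoring for bias might in practice mean comparing false-positive match rates across demographic groups. The sketch below is a minimal illustration only; the record format and the notion of a ‘group’ are assumptions for the example, not anything mandated by the ICO.

```python
from collections import defaultdict

def false_positive_rates(match_log):
    """Compute per-group false-positive rates from (group, matched,
    on_watchlist) records logged during LFR deployments."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, matched, on_watchlist in match_log:
        if not on_watchlist:               # person was not on the watch list
            counts[group]["negatives"] += 1
            if matched:                    # ...but the system flagged them
                counts[group]["fp"] += 1
    return {
        group: c["fp"] / c["negatives"]
        for group, c in counts.items()
        if c["negatives"]
    }

# Hypothetical log entries: (demographic group, system matched?, truly on list?)
log = [
    ("group_a", True, False),    # false positive
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
]

# A materially higher rate for one group is the kind of bias the ICO
# expects deployers to detect and address.
print(false_positive_rates(log))   # {'group_a': 0.5, 'group_b': 0.0}
```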

This guidance should also be carefully considered by any business contemplating the use of this type of technology.

This is a critical moment for regulators to begin scrutinising LFR and providing guidance, given the inherent risk it poses of abuse of data protection and privacy laws, and the results of the ICO’s investigation are anticipated with great interest. It is likely that greater vigilance will be called for in the future, especially given the expected rise in the use of this technology and as new uses of the technology come into play.

For example, LFR technology has already been developed that combines facial recognition with people’s mobile phones and may be used to speed up the immigration process. It is evident that LFR is potentially an extremely useful tool for the enhancement of public safety, but the accuracy of images and the elimination of bias in algorithms will undoubtedly be critical to ensuring that this technology can be adopted in the mainstream and in compliance with applicable privacy legislation.

Miriam Everett
Partner, London
+44 20 7466 2378

A Clearer Roadmap to Recovery: the roles of NCSC and ICO clarified at CYBERUK

The National Cyber Security Centre (NCSC) and the Information Commissioner’s Office (ICO) have clarified their roles in relation to breaches of cyber security. The NCSC manages cyber incidents at a national level to prevent harm being caused to both victims and the UK overall. It helps manage the response at a governmental level and seeks to ensure that lessons are learned to help deter future attacks. The ICO is the independent regulator for enforcing and monitoring data protection legislation and the competent authority for Digital Service Providers under the Network and Information Systems (NIS) Directive. The ICO is the first port of call for organisations which have suffered a breach of cyber security. Continue reading

ICO’s proposed largest ever fine of £183 million against BA prompts the question: can you insure penalties imposed for breach of GDPR?

The UK’s data protection authority, the ICO, has announced twice in two days this week that it proposes to levy significant fines on organisations for breaches of the General Data Protection Regulation (GDPR), which took effect in May 2018. First it announced that it intends to fine British Airways some £183 million for a data breach in 2018 that affected 500,000 customers (see our Data Blog here for more details). The following day it announced that it proposed to fine the Marriott hotels group nearly £100 million, again for a data breach that affected customers (see our Data Blog here for more details). Both BA and Marriott may make representations to the ICO before final decisions are taken. These proposed fines dwarf previous fines issued by the ICO, which were capped at £500,000 under the old privacy regime.

Until now the business world has been waiting to see how the ICO would use its powers under the new GDPR regime. Under the regime, the ICO can now impose a broader range of significant civil penalties for data protection breaches than was previously possible. This includes penalties of up to €20 million or 4% of a company’s global annual turnover, as well as potentially ordering companies to stop processing personal data altogether. The ICO is clearly now baring its teeth. Continue reading

Marriott/Starwood Data Breach: ICO intention to issue another big £99 million ‘mega fine’

  • Just one day after its notice of its intent to fine British Airways £183.39 million, the ICO has issued a further notice of intent to fine Marriott International £99.2 million for its own data breach;
  • The systems of the Starwood hotel group were originally compromised in 2014, prior to the acquisition of Starwood by Marriott in 2016 – the breach itself was not discovered until 2018 following completion of the corporate acquisition;
  • The fine shines a spotlight on the importance of data and cyber due diligence in corporate transactions;
  • No details have yet been published by the ICO regarding the specific GDPR infringements involved;
  • Marriott now has the chance to respond to the notice of intent, after which a final decision will be made by the ICO.

Continue reading