AI regulation remains EU priority, as UK Committee Report on AI raises issues but answers still pending

Yesterday, European Commission President von der Leyen presented the 2023 State of the Union address which, as anticipated, included a focus on prioritising the responsible use of artificial intelligence. This is set against the global policy discussions around AI at the G7 and G20 last week, the impending UK Artificial Intelligence Safety Summit, and the publication by the House of Commons Science, Innovation and Technology Select Committee (the Committee) of its report, The Governance of Artificial Intelligence: Interim Report (the Report) (pdf here), on 31 August 2023.

The Report follows a recently conducted inquiry into the impact of AI on several sectors. In particular, it identifies 12 challenges with the use of AI and recommends that legislation be introduced to address the regulation of AI during this parliamentary session (ie before the UK general election due in 2024). The Committee expresses concern that the UK will fall behind if there are delays, given the efforts already made by the EU and US to regulate AI.

Whilst the Report usefully identifies in one place the key challenges with the use of AI, these are not new concepts and the Report does not, at this stage, put forward solutions to address them. It will be interesting to see the extent to which progress is made in grappling with these issues through the various international cooperation efforts. We will provide the key takeaways from the UK Artificial Intelligence Safety Summit in due course.

2023 State of the Union Address: The three pillars of the new global framework for AI

As part of her address, President von der Leyen acknowledged that “Europe has become the global pioneer of citizens’ rights in the digital world”, including through the Digital Services Act and Digital Markets Act “ensuring fairness with clear responsibilities for big tech”.

The President stated “the same should be true for artificial intelligence.” In particular, she referenced a recent warning from leading AI developers, academics and experts that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. In doing so, the President described a “narrowing window of opportunity to guide this technology responsibly” and a belief that “together with partners, [Europe] should lead the way on a new global framework for AI built on three pillars: (i) guardrails; (ii) governance; and (iii) guiding innovation.”

The EU AI Act was mentioned as a “blueprint for the whole world” and the “world’s first comprehensive pro-innovation AI law” in the context of the guardrails pillar and ensuring AI develops in a human-centric, transparent and responsible way. In respect of the “governance” pillar, the President considered laying the foundations for a single AI governance system in Europe (alongside ensuring a “global approach to understanding the impact of AI in our societies”), as well as setting up a body of experts on AI to consider the risks and benefits for humanity, not dissimilar to the invaluable contribution of the IPCC on climate (a global panel that provides the latest science to policymakers), and building on the Hiroshima Process.

In respect of the final pillar guiding innovation in a responsible way, the President announced: (i) a new initiative to open up European high-performance computers to AI start-ups to train their models; (ii) an open dialogue with those developing and deploying AI; and (iii) initiatives to establish voluntary commitments to the principles of the AI Act before it comes into force (akin to the voluntary AI rules around safety, security and trust agreed to by seven major technology companies).

The UK Governance of Artificial Intelligence: Interim Report

(A) The need for regulation now and the establishment of an international forum on AI:

The Report encourages the UK Government to move directly to legislate on AI, rather than to apply the approach set out in its White Paper of March 2023. The approach set out in the White Paper envisaged five common principles to frame regulatory activity, guide future development of AI models and tools, and their use. These principles were not to be put on a statutory footing initially but were to be “interpreted and translated into action by individual sectoral regulators, with assistance from central support functions”. The White Paper goes on, however, to anticipate “introducing a statutory duty on regulators requiring them to have due regard to the principles” when parliamentary time allows.

The Report recognises that although the UK has a long history of technological innovation and regulatory expertise, which “can help it forge a distinctive regulatory path on AI”, the AI White Paper is only an initial effort to engage with AI regulation, and its approach risks the UK falling behind given the pace of development of AI, especially in light of the efforts of other jurisdictions, principally the European Union and United States, to set international standards.

The Report suggests that “a tightly-focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.”

(B) 12 essential challenges of AI identified: Of particular note, the Report identifies the challenges associated with the use of AI in general and twelve essential challenges that AI governance must address if public safety and confidence in AI are to be secured:

  1. The Bias challenge. AI can introduce or perpetuate biases that society finds unacceptable. The Report warns that inherent human biases encoded in the datasets used to inform AI models and tools could replicate bias and discrimination against minority and underrepresented communities in society.
  2. The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants. Particular emphasis is placed on live facial recognition technology, with the warning that systems may not adequately respect individuals’ rights, currently set out in legislation such as the Data Protection Act 2018, in the absence of specific, comprehensive regulation.
  3. The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions, or character. The Report attributes the increasingly convincing dissemination of ‘fake news’ to the combination of data availability and new AI models. Examples given include voice and image recordings purporting to show individuals ‘passing off’ information, which could be particularly damaging if used to influence election campaigns, enable fraudulent transactions in financial services, or damage individuals’ reputations. The Report goes on to warn of the dangers when such material is coupled with algorithmic recommendations on social media platforms targeting relevant groups.
  4. The Access to Data challenge. The most powerful AI needs very large datasets, which are held by few organisations. The Report raises competition concerns caused by the lack of access to sufficient volumes of high-quality training data for AI developers outside of the largest players. It points to proposed legislation to mandate research access to Big Tech data stores “to encourage a more diverse AI development ecosystem”.
  5. The Access to Compute challenge. The development of powerful AI requires significant compute power, access to which is limited to a few organisations. Academic research is deemed to be particularly disadvantaged by this challenge compared to private developers. The Report notes that efforts are already underway to establish an Exascale supercomputer facility and AI-dedicated compute resources, with AI labs giving priority access to models for research and safety purposes.
  6. The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements. The Report calls for regulation to ensure more transparent and more explicable AI models, and suggests that explainability would increase public confidence and trust in AI.
  7. The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms. This is a further example of how the Committee views the need to increase the capacity for development and use of AI amongst more widely distributed players. The Report acknowledges the need to protect against misuse, citing opinions that open-source code would allow malign actors to cause harm, for example through the dissemination of misleading content. The Committee reaches no conclusion as to which approach is preferable.
  8. The Intellectual Property and Copyright challenge. Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced. The Report comments that, whilst the use of AI models and tools has helped create revenue for the entertainment industry in areas such as video games and audience analytics, concerns have been raised about the ‘scraping’ of copyrighted content from online sources without permission. The Report refers to “ongoing legal cases” (unnamed, but likely a reference to Getty v StabilityAI) which are likely to set precedents in this area, but also notes that the UK IPO has begun to develop a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors, guidance which should “… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work”. The Report notes that the Government has said that, if agreement is not reached or the code is not adopted, it may legislate. For further information on the IP-related challenges, please refer to our full blog here.
  9. The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
  10. The Employment challenge. AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption. The Report notes that automation has the potential to impact the economy and society through the displacement of jobs, and highlights the importance of planning ahead through an assessment of the jobs and sectors most likely to be affected. It also notes the Prime Minister’s stated intention to be cognisant of such “large-scale shifts” by providing people with the necessary skills to thrive in the technological age.
  11. The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking. The Report compares the UK’s pro-innovation strategy, the risk-based approach of the EU and the US priority of ensuring responsible innovation with appropriate safeguards to protect people’s rights and safety. These divergent approaches contrast with the shared global implications of the “ubiquitous, general-purpose” AI technology described to the Committee inquiry, and the Report therefore calls for a coordinated international response.
  12. The Existential challenge. Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security. The 2023 AI White Paper deemed such existential risks “high impact but low probability”, but debate remains as to whether such a prospect is realistic. The Report suggests using the international security framework governing nuclear weapons as a template for mitigating AI risks. It calls for the government to address each of the twelve challenges outlined and makes clear the growing imperative to accelerate the development of public policy thinking on AI “to ensure governance and regulatory frameworks are not left irretrievably behind the pace of technological innovation”.

UK AI Safety Summit

The Report welcomes the global AI Safety Summit, due to be hosted in the UK on 1 and 2 November this year, and calls for it to address the challenges identified in the Report and to advance a shared international understanding of the challenges and opportunities of AI. The UK government has since set out the focus of the Summit, centring on the risks created or significantly exacerbated by AI and on how safe AI can be used for public good. The aim is to make frontier AI safe, ensuring nations and citizens globally can realise the benefits of AI.

The Summit will be framed by the following five objectives:

  1. a shared understanding of the risks posed by frontier AI and the need for action
  2. a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
  3. appropriate measures which individual organisations should take to increase frontier AI safety
  4. areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
  5. showcase how ensuring the safe development of AI will enable AI to be used for good globally

UK Frontier AI Taskforce

Ahead of the Summit, the UK Government also launched the Frontier AI Taskforce (previously named the Foundation Model Taskforce), to drive forward cutting-edge research, build UK capabilities, and lead the international effort on AI safety, research, and development.

The Taskforce, chaired by Ian Hogarth, has released its first progress report. This sets out the AI researchers and key UK national security figures that form its expert advisory board. The progress report describes how the Taskforce is building on and supporting the work of leading technical organisations rather than “starting from scratch”. The initial set of partnerships includes ARC Evals, Trail of Bits, The Collective Intelligence Project, and the Center for AI Safety.

Through effective collaboration, the Taskforce can help deliver on Challenges 4 and 5 of the Committee’s Report outlined above. The progress report confirms that leading companies such as Anthropic, DeepMind and OpenAI are giving government AI researchers deep model access, and that, through No10 Data Science (‘10DS’), its engineers and researchers will have the compute infrastructure needed for AI research inside government to excel.

The progress report contains continued praise for the current team members, while urging more technical experts and organisations to apply to join the Taskforce. This reflects the Taskforce’s aim of growing the team “by another order of magnitude” because “moving fast matters”, particularly with the upcoming AI Safety Summit.

Claire Wiseman
Professional Support Lawyer, London
+44 20 7466 2267
Niamh Connell
Paralegal, London
+44 20 7466 3057

Rachel Montagnon
Professional Support Consultant, London
+44 20 7466 2217

EU Commission releases regulatory proposal to increase trust in AI

Key takeaways

The European Commission released its long-awaited proposed regulation of artificial intelligence on 21 April 2021 (see press release here), which sets out a risk-based approach to regulation designed to increase trust in the technology and ensure the safety of people and businesses above all. The regulation has extra-territorial scope, meaning that AI providers located outside of the EU whose technology is used either directly or indirectly in the EU will be affected by the proposal. This wide-ranging applicability and the ambitious nature of the proposal have attracted intense scrutiny, as it is the first regulation of its kind. Although it provides for fines of up to EUR 30 million or 6% of total worldwide annual turnover, the proposal would impose controls only on the most risky forms of AI – potentially leaving unaffected many of the AI applications in use today.

Broad scope of application

AI is broadly defined in the proposal and the assessment of whether a piece of software is covered will be based on key functional characteristics of the software – in particular, its ability to generate outputs in response to a set of given human-defined objectives. AI can also have varying levels of autonomy and can be either free-standing or a component of a product.

To prevent the circumvention of the regulation and to ensure effective protection of natural persons located in the EU, the regulation applies to:

  • any provider of AI systems irrespective of whether they are based inside or outside the EU, if their systems are used directly in the EU or if the output of their system would impact a natural person in the EU; and
  • individuals and public or private entities using these AI systems in the EU (the ‘users’), except where the AI system is used in the course of a personal non-professional activity.

For example, where an EU operator subcontracts the use of an AI system to a provider outside of the EU, and the output of such use would have an impact on people in the EU, then the provider would be obliged to comply with the regulation if using a “high-risk” AI system.

This wide scope of application is not unusual for the Commission, as a similar approach was adopted for the protection of personal data under the GDPR and in the draft EU Digital Services Act and the draft ePrivacy Regulation.

Risk-based approach

The proposal sets out four categories of AI systems based on the risk they present to human safety.

  1. Those systems which unequivocally harm individuals are banned, such as AI applications which manipulate human behaviour through subliminal techniques or circumvent the user’s free will, and systems which allow ‘social scoring’. Operating an AI system in violation of such a prohibition may lead to the maximum penalty of up to EUR 30 million or 6% of the total worldwide annual turnover.
  2. The most extensive set of provisions deals with “high-risk” AI systems and starts applying during their development, before they are made accessible on the EU market. Such regulatory requirements include obligations for ex-ante testing, risk management and human oversight to preserve fundamental rights by minimising the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement and the judiciary. AI systems relating to critical infrastructure (e.g. autonomous vehicles or the supply of utilities) also fall within this risk category. The classification of an AI system as “high-risk” depends not only on the purpose of the system but also on the potentially affected persons, the dependency of these persons on the output and the irreversibility of harms they could suffer. In particular, the regulation requires that the data sets which are used to train the AI algorithm be of high quality to ensure their accuracy and their non-discriminatory nature.
  3. Those AI systems which present limited risks to fundamental rights will be subject to transparency obligations. For instance, when users are interacting with a chatbot, they should be made aware that the chatbot is powered by an AI algorithm.
  4. The majority of AI applications in use today present minimal risks to citizens’ rights or safety (e.g. AI enabled video games and spam filters), which means no restrictions are imposed on their use by the proposal.

Impact

If the proposal is passed (see the What’s next? section below), this would generate a significant compliance burden on companies developing and marketing “high-risk” AI systems, including providing risk assessments to regulatory authorities that demonstrate their safety (effectively giving those authorities the right to determine what is acceptable and what is unacceptable). In light of this, industry stakeholders will welcome the proposed 24-month grace period after the regulation is finalised before the legislation will apply.

The regulation could also have a significant impact outside the EU given European regulations such as the GDPR have influenced regulations abroad. We have seen regulators so far shy away from being the first to act when it comes to AI because of concerns about constraining innovation and investment. Therefore this action by the Commission could be a catalyst for other regulators to act.

The proposal provides for the creation of an ‘EU AI Board’ to set standards and help national regulators with enforcement. This approach differs from that of the GDPR (which created a single regulator) as national competent authorities would be in charge of monitoring and enforcing the provisions.

The fines imposed by the proposed regulation mainly relate to an absence of cooperation or incomplete notification of the competent authorities, but could be significant:

  • developing and placing a blacklisted AI system on the market or putting it into service could trigger a fine of up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year (whichever is higher);
  • failing to fulfil the obligations of cooperation with the national competent authorities, including in their investigations, could trigger fines of up to EUR 20 million or 4% of the total worldwide annual turnover of the preceding financial year (whichever is higher); or
  • supplying incorrect, incomplete or false information to notified authorities could cost up to EUR 10 million or 2% of the total worldwide annual turnover of the preceding financial year (whichever is higher).

What’s next?

It will likely take a number of years for the proposal to be passed into law. It must first be debated and adopted by the European Parliament and the Member States before it becomes directly applicable in all Member States. The current provisions may be changed during this process and further clarification may be brought to concepts such as the obligations imposed on users. In addition, the Commission has retained the ability to add to the list of prohibited or highly regulated AI in order to adapt the regulation to any future developments of the technology.

Market response

Privacy activists have questioned the loopholes in the regulation’s proposed ban on real-time remote biometric identification in public spaces, which excepts law enforcement use of such facial recognition for:

  • the search for potential victims of crime, including missing children;
  • the prevention of certain threats to the life or physical safety of natural persons or of a terrorist attack; or
  • the detection, localisation, identification or prosecution of perpetrators or suspects of criminal offences.

Businesses will be closely monitoring the development of the proposal as it goes through the legislative process and how it impacts current and future activities, especially in areas like advertising. If passed, the proposal would have wide-ranging consequences for businesses using AI systems, as it will impact how AI algorithms are created as well as regulatory monitoring during the life of the technology.

Background

The proposal is part of a set of initiatives to set up Europe for the digital age. Fuelling innovation in AI has been part of the EU’s agenda to create jobs and attract investment. First, in 2018 the Commission published a strategy paper putting AI at the centre of its agenda, followed by guidelines for building trust in human-centric AI published in 2019 – after extensive stakeholder consultation (see our previous blogpost here). It has also encouraged collaboration and coordination between Member States in order to create AI hubs in Europe by releasing a Coordinated Plan on AI in 2018 – which has been updated with the release of the proposal (see the New Coordinated Plan on AI 2021).

The Commission also published a White Paper on AI in 2020 which set the scene for the proposal by setting out the European vision for a future built around AI excellence and trust (see our previous blogpost here). The White Paper was also accompanied by a ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’ which highlighted the gaps in the current safety legislation and led the Commission to release a new Machinery Regulation alongside the proposal.

Aaron White
Partner
+44 20 7466 2188

Claire Wiseman
Professional Support Lawyer
+44 20 7466 2267

Ghislaine Nobileau
Associate
+44 20 7466 7503

The European Parliament adopts two legislative initiatives setting out its stance on the Digital Services Act

On 20 October 2020, the European Parliament set out its stance in relation to the upcoming Digital Services Act (“DSA”), with Members of the European Parliament (“MEPs”) overwhelmingly approving two legislative initiative reports: one relating to improving the functioning of the single market and the other on adapting commercial and civil laws for commercial entities operating online.

The DSA is an ambitious legislative package announced by European Commission President Ursula von der Leyen in her political guidelines in July 2019, and which was subsequently formally adopted by the Commission in its communication ‘Shaping Europe’s digital future’ in February 2020. The DSA is intended to modernise the legal framework for the regulation of digital services, which have until now been largely governed by the e-Commerce Directive adopted in the year 2000. The Commission also intends to introduce an additional legislative package which will include new ex ante regulatory rules for those platforms which are seen as ‘gatekeepers’ to the internet (such as Google and Facebook).

On 2 June 2020, the Commission initiated a public consultation in relation to the DSA and the new ex ante regulatory rules. The deadline for submissions from interested parties was 8 September 2020; publication of the findings is still awaited.

See our previous blog post on the DSA here.

The European Parliament’s recommendations

The European Parliament’s recommendations in relation to the DSA (as set out in the new legislative initiative reports) include:

  • Tougher regulation for targeted advertising

One of the more controversial proposals by the MEPs is centred around targeted advertising. MEPs have backed the inclusion into the DSA of more stringent restrictions on targeted advertising (as compared to contextual advertising, which is based on the content of the website the advertisement appears on and is therefore less dependent on personal data). MEPs have advocated legislation that gives users more control over the advertisements they see online, with an option to opt-out of targeted advertising on content hosting platforms. Going further still, the text adopted by the MEPs invites the Commission to consider a phase-out of targeted advertisements, eventually leading to a general prohibition against targeted advertising within the EU.

Although targeted advertisements have drawn criticism from regional law-makers, they remain an integral part of the business model for many content hosting platforms. During an EU policy debate in September 2020 Nick Clegg, Facebook’s Vice-President for Global Affairs and Communications, claimed that ‘personalised advertising’ benefits SMEs, allowing them to compete on an equal basis with larger and better-resourced businesses to reach customers.

The Commission can be expected to carefully consider the opposing views on targeted advertising as the MEPs’ proposals, if introduced, would drastically impact the business models of Facebook, Google and other major stakeholders in the digital economy.

  • Tougher rules to tackle illegal content

The European Parliament has stressed the importance of the upcoming DSA making a clear distinction between illegal and harmful content, and proposes that the content management measures in the DSA should only be applicable to illegal content aimed at consumers within the EU.

To tackle illegal content the MEPs propose that the DSA should put in place a harmonised and legally enforceable “notice and action” mechanism for online services which would facilitate the notification and reporting of illegal content by users to content hosting platforms. The hosting platform would then be required to verify the notified content and reply in a timely manner to the notice provider and the content uploader with a reasoned decision. It has been stressed that the notice and action mechanism must be “human-centric” to reduce the incidences of false positives regarding content taken down.

The MEPs’ proposals also provide for an independent dispute mechanism for disputes regarding content management, with the dispute bodies to be provided for by Member States. The MEPs have highlighted the importance of quick and efficient extra-judicial recourse for deciding the legality of user-uploaded content in light of the immediate nature of content hosting.

The recommendations acknowledge that this process could be open to abuse and therefore place onerous compliance obligations on content hosting platforms. To mitigate this, MEPs have stated that safeguards should be set up to prevent abusive behaviour; however, the substantive detail of these safeguards is not provided.

  • Specific ex-ante rules for “gatekeepers” of market access

As well as amending the e-Commerce Directive, the MEPs call for the Commission’s legislative package to introduce ex-ante rules on ‘systemic operators’ (a phrase MEPs recommend is clearly defined on the basis of objective indicators) which take up a de facto gatekeeper role within the digital economy. The ex-ante regulation mechanism would aim to prevent rather than merely remedy market failures with the aim of opening markets to new entrants and SMEs.

  • European entity tasked with ensuring compliance

To ensure compliance with the provisions of the DSA, the MEPs recommend that a European entity (either an existing or new European body or the Commission coordinating a network of national authorities) be set up to monitor content hosting platforms. To strengthen the position of the European entity, it would have the ability to impose fines on content hosting platforms for non-compliance with the new rules.

Further obligations would be put on content hosting platforms with significant market power, which under the proposals would be required to produce a biannual report to the European entity setting out the fundamental rights impact and risk management of their content management policies.

Looking ahead

The European Parliament’s legislative initiative reports will now be sent to the Commission to feed into the Digital Services Act, which is due to be published in December.

Although the recommendations put forward by the MEPs are non-binding on the Commission, they are likely to be taken by Commission lawmakers as a strong steer in respect of the content of the DSA, particularly since the DSA will ultimately need to be backed by the EU Parliament (and the Council) in order to be adopted into EU law. If the Commission intends to reject any of the proposals it will also need to communicate the grounds for such rejection to the European Parliament.

Hayley Brady
Partner, Head of Digital and Media, London
+44 20 7466 2079

James Balfour
Senior Associate, London
+44 20 7466 7582

Jeremy Purton
Senior Associate, London
+44 20 7466 2142

Digital Services Act: European Commission commences consultation

On 2 June 2020, the European Commission initiated an open public consultation as part of its evidence-gathering exercise to inform the contents of the upcoming Digital Services Act (DSA) legislative package (expected to be put forward in late 2020). The consultation seeks to gather views, evidence and data from a variety of interested parties including:

  • individuals;
  • businesses;
  • online platforms;
  • academics; and
  • civil society.

The consultation covers issues such as safety online, freedom of expression, fairness and a level playing field in the digital economy. It will run until 8 September 2020.

What is the Digital Services Act and how did we get here?

The DSA is a landmark legislative package first announced by Commission President Ursula von der Leyen in her political guidelines back in July 2019 and is expected to reinforce the single market for digital services, upgrade the EU’s liability and safety rules for digital platforms and provide smaller businesses with the legal clarity and level playing field they need to compete effectively in the digital economy. Margrethe Vestager (the EC’s VP for Digital) has also expressed her hope that the DSA can be used to prevent the tipping of markets, where one company obtains high monopoly profits and market share, creating an anti-competitive environment for other firms.

The DSA comes in the wake of recent scandals regarding data harvesting and selling, Cambridge Analytica, fake news, political advertising and manipulation and a host of other online harms (from hate speech to the broadcast of terrorism). The IMCO (the EU Parliament’s Committee on the Internal Market and Consumer Protection) has also noted the relevance of the DSA in light of COVID-19 and recent abusive practices by traders selling fake or illegal products or imposing unjustified and abusive price increases or other unfair conditions on consumers.

On 24 April 2020 the IMCO published a draft report with recommendations to the Commission on the objectives and contents of the Digital Services Act. In particular, the IMCO recommended that the DSA should:

  • place greater transparency and compliance obligations on information society and internet service providers and their business customers;
  • introduce concrete measures (including a ‘notice-and-action mechanism’) to empower users to notify online intermediaries of the existence of potentially illegal content or behaviour;
  • close the existing legal loophole allowing suppliers based outside of the EU to sell products online to European customers where those products do not comply with Union rules on safety and consumer protection;
  • introduce ex-ante regulation of the ‘online gatekeepers’ of the digital economy (i.e. large platforms such as Google, Amazon and Facebook) so as to open up the market to new entrants; and
  • strengthen and modernise existing provisions on out-of-court settlement and court actions to allow for effective enforcement and consumer redress.

The DSA is expected to impact social media platforms, search engines, video gaming platforms, online marketplaces and other information society services and internet service providers.

See the official European Commission press release here.

Hayley Brady
Partner, Head of Digital and Media, London
+44 20 7466 2079

James Balfour
Associate, London
+44 20 7466 7582

Jeremy Purton
Senior Associate, Digital TMT and Sourcing, London
+44 20 7466 2142

European Parliament’s transport committee opposes Commission’s preference for Wi-Fi as the communication standard for connected and autonomous vehicles

Following months of debate, the European Commission approved its long-anticipated delegated act on the preferred communication technology standard for connected and autonomous vehicles (CAVs) on 13 March 2019 (the “Regulation”, available here). However, the Commission’s decision – favouring Wi-Fi technology based on the existing ITS-G5 standard for short-range communications (V2V) – has already hit a roadblock: it was rejected by the European Parliament’s transport committee on Monday. There will now be intense focus from industry on whether the European Parliament vote next week follows its transport committee’s recommendation to block the Regulation.

In this post, we consider the content of the Regulation, why the Commission’s decision has proved so controversial and what may happen next.

Political agreement reached on controversial EU Digital Copyright Directive: A fair and balanced result?

Following a turbulent course of lengthy negotiations and delays, political agreement was finally reached by the European Commission, European Parliament and the Council of the EU on the revised proposal of the EU Copyright Directive (the “Directive“) earlier this month. The final consolidated text was made available on 20 February 2019.

The Commission first adopted its proposal for the Directive back in September 2016, as part of its Digital Single Market Strategy. The Directive forms part of a broader initiative to “adapt copyright rules” to ensure they are “fit for a digital era”. The modernisation is long overdue, given the changes which have occurred in the use of material on the internet since its inception, including the explosion of social media.

The Directive is intended to develop a fair and sustainable marketplace for creators, the creative industries and the press; to this end, in the Commission’s press release, Vice-President for the Digital Single Market, Andrus Ansip, referred to the Directive as a “fair and balanced result that is fit for a digital Europe”. The European Parliament’s press release also refers to the Directive redressing the balance, ensuring “tech giants” share revenue with “artists and journalists” and also incentivising internet platforms to enter into fair licensing arrangements with rights holders.

The legislation has, however, been the subject of considerable lobbying and public pressure by copyright holders, technology companies and consumer digital rights advocates, which is unsurprising given the vast array of stakeholder interests at play. In particular, it has implications for online platforms and media companies. We set out below further detail around the more contentious provisions, Articles 13 and 11, and discuss the next steps for the legislation.

Audiovisual media services: Back to the 80’s?!

On 19 March 2018, the European Commission published a notice to stakeholders on the consequences of Brexit for audiovisual media services. This makes it clear that, subject to any transitional arrangement, as of the withdrawal date, the EU rules in the field of audiovisual media services will no longer apply to the UK. Therefore, in summary, UK-based broadcasters would be left relying on laws written in the 1980s.

Scientific opinion commissioned by the European Commission makes ten recommendations on cyber security in the Digital Single Market

On 24 March 2017, the European Commission’s Scientific Advice Mechanism published an independent scientific opinion on cyber security in the Digital Single Market to aid EU-level policy makers. The opinion includes ten broad recommendations for simplifying and securing online operations undertaken by people and businesses throughout the EU.

EU-US Privacy Shield first annual review announced following a challenging introduction

On 12 July 2016, the European Commission adopted an “adequacy decision” allowing for the transatlantic transfer of personal data from the EU to the US in accordance with the framework and principles of the EU-US Privacy Shield (the “Privacy Shield“).

Two privacy advocacy groups have, however, since filed actions in the European General Court to annul the adequacy decision. On 28 October 2016 the Irish privacy advocacy group, Digital Rights Ireland, filed an “action for annulment” on the basis that the Privacy Shield does not sufficiently protect the privacy rights of EU citizens. If successful, the action would invalidate the European Commission’s adequacy decision that approved and adopted the Privacy Shield. The group filed the challenge in the General Court based in Luxembourg, the second highest EU Court after the CJEU. A further challenge was also filed in the General Court by a French civil society group at the end of October 2016. It could take the General Court twelve months or more before a decision is handed down.
