Regulating AI: Two steps forward for the UK as pro-innovation approach remains

Following landmark developments in 2023, the international spotlight remains firmly on AI regulation as we enter 2024. The last few days alone have not only seen the COREPER ambassadors’ agreement on the EU AI Act, but also the long-awaited Government response to its AI Regulation White Paper released yesterday (“Response”) and the House of Lords Communications and Digital Committee report on “Large language models and generative AI” (“Report”) published at the end of last week.

There are few surprises in the Response and Report, with the UK still forging its own path towards the regulation of AI – particularly compared to the EU’s approach. That said, the Government seems to have taken on board a couple of focus areas from the EU approach (such as highly capable general purpose AI systems).

Key takeaways:

  • Building on the White Paper: In contrast to the centralised legislative framework set out in the EU’s AI Act, the Response and Report largely reiterate and build on the original adaptable, pro-innovation, sector-led approach set out in the Government’s March 2023 AI White Paper.
  • No rush to regulate…: The Response and Report also re-confirm the UK’s agile “wait and see” approach to regulating AI. Given the technology is rapidly developing, this allows the UK to quickly adapt to emerging issues without implementing “quick-fix” rules that could become outdated or ineffective.
  • …Until understanding of the risk matures: The Government does, however, acknowledge that legislative action will be required once the risks associated with the technology have matured, and focuses on preparing itself for emerging and near-term regulatory risks.
  • Role of the regulator: Existing regulators retain a key role in implementing the UK’s agile approach, with the Government empowering them to create targeted measures in line with five common principles and tailored to the risks posed by the different sectors. Regulators have been asked to publish their strategic plans for managing the risks and opportunities around AI by the end of April 2024.
  • Importance of consistency and coordination: To avoid a patchy approach between regulators, given the sector-led focus (and international regulatory fragmentation), other priorities include strengthening the central coordination mechanisms for UK regulators in AI and developing the expertise of the AI Safety Institute (both nationally and internationally). The Government also published new cross-sector guidance to support regulators to implement the principles effectively.
  • Binding rules for GPAI: For the first time, the Response also sets out initial thinking for future targeted, binding requirements for the most advanced highly capable general purpose AI systems. This is principally because the wide-ranging potential uses of these systems challenge the current context-led regulatory approach (which relies on risk being determined by how and where the AI system is used).
  • Engagement with IP issues: It is not lost on the Government that copyright issues are front and centre of the development, training and use of AI (nor was it lost on the House of Lords in the Report – see our IP blog post here). However, how to deal with the conflicting interests has eluded the IPO’s working committee, and the Response does not provide a solution for now, other than further examination of ways to improve transparency of the use of copyright material. It may well be for the courts to determine the copyright position in the short term, although this may not be to the liking of those investing in AI development.
  • Certainty vs flexibility?: The UK’s approach seems to sit somewhere between those of other leading actors in AI regulation, including Australia, China, the EU and the US (see our “Deeper Dive“). It remains to be seen which of the variety of diverging international approaches adequately strikes the balance to enable trustworthy AI to thrive.

For a deeper dive on the Response and Report, as well as our thoughts on the two, please click here.

Key Contacts

Nick Pantlin
Partner
+44 20 7466 2570

Claire Wiseman
Professional Support Lawyer
+44 20 7466 2267

Sara Lee
Associate
+44 20 7466 2346

Rachel Montagnon
Professional Support Consultant
+44 20 7466 2217

UK Government should deal definitively with copyright issues on LLM/GenAI training data whilst adopting a positive vision for LLMs to ensure UK does not miss “AI goldrush” – recommends House of Lords Committee

Large language models (LLMs) and generative AI (genAI) will produce “epoch defining changes comparable with the invention of the internet“, stated the House of Lords Communications and Digital Committee as it issued its report “Large language models and generative AI” today (2 February 2024). The Committee concluded that the “goldrush” opportunity that AI presents requires the UK Government to adopt a more positive vision for LLMs in order “to reap the social and economic benefits, and enable the UK to compete globally”. Key measures suggested include “more support for AI start-ups, boosting computing infrastructure, improving skills, and exploring options for an ‘in-house’ sovereign UK large language model”, as well as devising a solution to the copyright disputes that the use of data without permission to train AI models is currently generating.

The Committee sets out 10 core recommendations, as it says, “to steer the UK toward a positive outcome”. These include measures to boost opportunities, address risks, support effective regulatory oversight – including to ensure open competition and avoid market dominance by established technology giants – achieve the aims set out in the AI White Paper, introduce new standards, and resolve copyright disputes.

The Committee calls on the Government to support copyright holders, saying the Government “cannot sit on its hands” while LLM developers exploit the works of rightsholders. The Committee Chair is quoted on the key role of copyright issues:

One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs. LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly and it should do so.

The report “rebukes” tech firms for using data without permission or compensation, and says the Government should end the disputes over copyright and AI in this context “definitively” including through legislation if necessary. The report calls for a way for rightsholders to check training data for copyright breaches, investment in new datasets to encourage tech firms to pay for licensed content, and a requirement for tech firms to declare what their web crawlers are being used for.

Chapter 8 of the report deals specifically with copyright issues, in particular concluding:

• In response to this report the Government should publish its view on whether copyright law provides sufficient protections to rightsholders, given recent advances in LLMs. If this identifies major uncertainty the Government should set out options for updating legislation to ensure copyright principles remain future proof and technologically neutral (paragraph 247).
• The voluntary IPO-led process is welcome and valuable. But debate cannot continue indefinitely. If the process remains unresolved by Spring 2024 the Government must set out options and prepare to resolve the dispute definitively, including legislative changes if necessary (paragraph 249).
• The IPO code must ensure creators are fully empowered to exercise their rights, whether on an opt-in or opt-out basis. Developers should make it clear whether their web crawlers are being used to acquire data for generative AI training or for other purposes. This would help rightsholders make informed decisions, and reduce risks of large firms exploiting adjacent market dominance (paragraph 252).
• The Government should encourage good practice by working with licensing agencies and data repository owners to create expanded, high quality data sources at the scales needed for LLM training. The Government should also use its procurement market to encourage good practice (paragraph 256).
• The IPO code should include a mechanism for rightsholders to check training data. This would provide assurance about the level of compliance with copyright law (paragraph 259).

See the House of Lords Communications and Digital Committee’s announcement here.

The IPO working group began meeting on 5 June 2023 to look at identifying, developing and codifying good practice on the use of copyright, performance and database material in relation to AI, including data mining (previous plans for a legislated text and data mining exception to copyright infringement having been withdrawn in March 2023 – see our post here). However, progress towards a voluntary code appears to have been very difficult, with the code previously having been expected to be finalised in autumn 2023. The House of Lords Committee recommendation that the process be taken back by Government if no code is forthcoming in the next few months is well timed, and the publication of this report may give further incentive to reach a conclusion.

Rachel Montagnon
Professional Support Consultant - London
+44 20 7466 2217
Heather Newton
Of Counsel - London
+44 20 7466 2984
Victoria Gettins
Trainee solicitor, IP
+44 20 3692 9648

Latest news on planned AI regulation under the EU AI Act and on AI litigation in the EU, UK, US and China

The development of artificial intelligence (AI) continues to raise new issues and concerns every day. 2024 looks to be the year in which this innovative technology will be confronted with the need for increasingly sophisticated regulation on the one hand and growing litigation on the other.

In particular, one of the hottest topics in relation to AI concerns IP rights and their protection in the AI world. For example, training AI systems with massive amounts of data indiscriminately found on the web entails the risk of infringing third parties’ copyright, whilst the ability of generative AI to create new and original works raises the question of whether, how and to what extent it is possible to protect these works under IP laws.

These topics have recently hit the news and gained more public attention. Here we start the New Year by taking a look at the latest legislative and case law developments and shedding light on how these issues will be addressed and what questions still remain open to debate. In particular, we look at the EU AI Act and the protection of copyright in materials used in machine learning; the UK and US proceedings in Getty v Stability AI on the infringement of rights associated with images used to train Stability’s Stable Diffusion image-generating AI; the New York Times v OpenAI and Microsoft case in the US, which revolves around fair use of copyright materials; and Li v Liu in China, in relation to copyright in AI-generated images.

EU: The AI Act

On 8 December 2023, after lengthy discussions, representatives of the European Parliament, the EU member states and the European Commission finally reached an agreement on the provisional text of the AI Act, which is now in the process of being formally approved by the European Parliament and the Council of the EU. Representatives of the governments of EU member states will discuss the EU AI Act at a meeting of the Council on Friday 2 February. It still seems that France might insist on seeking some changes to the current text.

In general, the AI Act has been welcomed by many as the world’s first comprehensive piece of legislation on AI, which aims to achieve uniform protection at European level and to promote the development and use of AI in a safe, reliable, and transparent way, while ensuring respect for the fundamental rights of EU citizens and businesses and striking a balance between innovation and protection.

In line with its purpose of providing a uniform and comprehensive piece of legislation, the AI Act adopts the OECD’s definition of an AI system (“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”) and applies to all operators (both public and private providers or users) that use an AI system for commercial purposes within or outside the EU, provided that the AI system is placed on the EU market or its use affects people in the EU.

Safety, reliability, and transparency are amongst the main principles that the AI Act seeks to promote and regulate. On the one hand, users must be made aware that they are interacting with an AI technology or that they are facing AI-generated output; on the other hand, companies developing AI will have to comply with various levels of disclosure requirements, which should help to prevent infringements of rights or risks to individuals.

While the provisional agreement on the text of the AI Act shows that at least some key points have been agreed so far, allowing operators to make an initial assessment of what could appear in the final text, many issues regarding IP rights do not seem to have reached a consensus and appear to require further discussion and consideration, particularly in light of potential risk profiles and the practical implementation of the legislation. Were future drafts to include specific provisions on exceptions to copyright infringement, for example, it would be interesting to see how the various jurisdictions would implement those provisions, considering that copyright is not fully harmonised at EU level.

Copyright protection and IP provisions

Although many of the questions relating to IP rights and the use of AI systems have not been addressed by the provisional agreement, the main takeaway from the agreement is that developers of general purpose AI (e.g., ChatGPT) will be required to implement policies to ensure copyright compliance. One of these compliance requirements appears to be the mandatory disclosure of the material used in the training phase of the AI, to ensure that no copyrighted work has been used without proper authorisation. In particular, the provision requires providers of general purpose AI models to draw up and make publicly available a sufficiently detailed summary of the content (including text and data protected by copyright) used for training the model. It remains to be seen what form this summary will actually take.

Such requirements seem to apply “regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of these foundation models [i.e. general purpose AI systems] take place“. The broad geographical scope is intended to avoid the circumvention of the rules and an unfair “competitive advantage in the EU market” for the developers who could benefit from “lower copyright standards” by moving software training outside the EU territory.

Developers of general purpose AI models would then need to obtain specific authorisations from rightsholders if they want to carry out text and data mining (TDM) over works in respect of which the rightsholder has reserved its rights in accordance with EU Directive 2019/790.

These few references to copyright leave many questions unanswered and raise some new lines of inquiry that we hope will find resolution and clarification in the final text. To name a few:

  • Do the TDM exceptions provided in individual jurisdictions under EU Directive 2019/790 on copyright and related rights in the Digital Single Market apply?
  • The AI Act explicitly refers to Article 4(3) of Directive 2019/790, under which rightsholders may reserve the right to prevent TDM on their protected content, i.e. “opt out” from allowing access to their protected materials. What will be the appropriate opt-out method and how will it apply in practice?
  • In relation to the duty to provide documentation with a detailed summary of the use of copyright-protected training data in foundation models, what level of detail will be considered useful?
  • What role will existing competition, intellectual property, privacy and consumer protection laws play?
  • Will it still be possible to carry out data training outside the EU for AI to be used within the EU, and what will be the applicable law in the case of data training?
  • Will the AI Act slow down the development or use of AI in Europe?
  • Will the AI Act be sufficiently future-proof?

It will be interesting to see what guidance or practice on these issues will arise from self-regulation, disputes (that are increasingly coming before the courts), and commercial policies before the AI Act comes into force.

US: New York Times v. OpenAI and Microsoft; Getty Images v Stability AI

In the United States, several lawsuits revolving around the issues of copyright and AI have emerged over recent years, making it one of the most prolific arenas in which to observe the development of the legal response to these new issues. The latest news is that The New York Times (NYT) has sued OpenAI and Microsoft for alleged copyright infringement of its written works, seeking billions of dollars in damages. NYT, which filed its complaint before the Federal District Court in Manhattan on 27 December 2023, claims that millions of articles published by the New York Times were used by the defendants to train automated chatbots, a use which it claims cannot be covered by the US doctrine of fair use – a general exception to copyright protection under US law which allows the use of copyrighted material without permission under certain conditions. We will have to wait and see whether the US courts consider that the fair use doctrine applies to training an AI on written works.

This issue of fair use of text in training an AI has not yet been considered by the UK courts (the Getty v Stability AI case involves images not text, aside from the use of Getty watermarks, which is a trade mark issue) or by the EU courts. In these jurisdictions there are specific copyright exceptions (in the EU varying between member states and not harmonised) instead of the general principle of “fair use”.

Getty is suing Stability AI in both the US and the UK on similar grounds: for infringement of Getty’s rights in its collections of photographs through their use to train Stability AI’s image-generating engine “Stable Diffusion”, and on the basis that the outputs of the AI are also infringing. See the UK section below for the detail on the UK case.

China: Stable Diffusion generated images held to be copyright of artist: Li v Liu

In November 2023, the Beijing Internet Court ruled that images generated by the artificial intelligence-powered software Stable Diffusion are entitled to copyright protection. This is an interesting conclusion in light of the litigation currently running in parallel in the US and UK, where Getty Images is suing Stability AI in relation to the latter’s AI engine Stable Diffusion, which Getty alleges has been trained using Getty’s images without permission, claiming that the resulting images also infringe its copyright in those images (see the UK section below).

In this case (Li v Liu), the plaintiff used Stable Diffusion to create an image by inputting prompts, and posted it on a personal profile on a well-known social media platform. Thereafter, the plaintiff found that the defendant had used the image, from which the plaintiff’s signature had been removed, in a public article without permission, and thus sued the defendant for copyright infringement before the Beijing Internet Court.

The court held that the image in this case could be a work entitled to copyright protection. Specifically, the image derived from an intellectual investment: during the generation of the image by Stable Diffusion, the plaintiff set up the presentation style of the character, selected and arranged the prompts, set relevant parameters, and selected the images that met expectations, which reflects the plaintiff’s intellectual contribution. In addition, the image possesses originality: the plaintiff continuously adjusted and corrected the images by adding prompts and modifying parameters to obtain the final image, and this adjustment and modification reflect the plaintiff’s aesthetic choices and personal judgment. As the act of using an AI tool to generate an image is essentially a human creation made using a tool, reflecting the original intellectual investment of the human, the image should be recognised as a work.

The court further held that the plaintiff is the copyright owner of the image. Since the AI tool itself is not a natural person, a legal person or an unincorporated organisation, it cannot constitute an author recognised by Chinese copyright law. In addition, the producer of the AI tool was not involved in the generation process of the image, and therefore was not the author either. Since the image was directly generated on the basis of the plaintiff’s intellectual investment and reflected the plaintiff’s personalised expression, the plaintiff is the author of the image and thus enjoys the copyright (case reference: [(2023) Jing 0491 Min Chu No 11279], 27 November 2023).

UK: Copyright – Getty v Stability AI; the patentability of AI; and AI as an inventor

Getty Images are bringing a case against Stability AI in the UK (in parallel to the US action mentioned above) in relation to Stability AI’s allegedly infringing use of Getty’s images to train its image-generating AI “Stable Diffusion” and also in relation to the allegedly infringing outputs of that engine.

The case involves allegations of copyright infringement both in the training process and in the outputs themselves, plus allegations of sui generis database right infringement (a right not available in the US), which centres on the extraction and use of content from a database – here, Getty Images’ database of photographs being used to train the AI. In addition, trade mark infringement and passing off are alleged in relation to the outputs of the AI, many of which had the Getty watermark (or parts of it) incorporated into them.

Stability have so far not submitted a Defence, instead applying to have two elements of the claims against them struck out (in a reverse summary judgment application): the claim for infringement via training (Stability argued that the training did not occur in the UK and that, copyright and database right being territorial rights, they therefore did not infringe), and the claim for secondary infringement (which Stability challenged on the basis that the making available of Stable Diffusion in the UK did not fit within the provisions on secondary infringement in the Copyright, Designs and Patents Act 1988, which require an “article” to be imported for there to be infringement, contending that its software supplied online was not an “article” within the meaning of the Act as it was not tangible). On both issues the UK High Court found that these were matters it would need to decide at a full trial and not at an interim stage in a reverse summary judgment/strike out application.

It will be interesting to see what approach Stability AI take in their Defence, and it will be key for AI developers to have the court decide on these issues in due course.

Two further cases of particular interest have been decided recently in the UK in relation to patent rights and AI:

  • The patentability of an AI involving an artificial neural network (ANN) was considered by the UK High Court, which found that the ANN was not a computer program and so did not fall to be excluded from patentability under s.1(2)(c) of the Patents Act 1977; in any case, even if it had been held to be excluded as a computer program, it would otherwise have been patentable for having made a “substantial technical contribution”, following the long line of case law developed around computer implemented inventions (Emotional Perception v Comptroller of Patents) – see our blog post here.
  • Dr Thaler’s case challenging the UKIPO’s refusal to allow his AI, DABUS, to be named as the inventor on two patent applications went all the way to the UK Supreme Court and was rejected at each stage (see our blog post on the UK Supreme Court’s December 2023 decision in DABUS here), with the courts finding that an inventor must be a human under the law as it currently stands.

Although not an EU or UK decision, it is worth mentioning here that Dr Thaler, who was behind the DABUS AI patent inventorship challenges worldwide (see above), also has an AI system he calls the “Creativity Machine” which generates art, he claims, of its own accord. Dr Thaler sought to obtain copyright registration in the US for an artwork entitled “A Recent Entrance to Paradise”, which he claimed was generated by the Creativity Machine. The US Copyright Office rejected his application for copyright registration on the grounds that the work lacked human authorship, which was a pre-requisite for valid copyright to be registered. Thaler had confirmed that the work was autonomously generated and acknowledged that it lacked “traditional human authorship”, but had urged the Copyright Office to “acknowledge [the Creativity Machine] as an author where it otherwise meets authorship criteria, with any copyright ownership vesting in the AI’s owner“. Following that decision, Thaler appealed to the District Court for the District of Columbia. In August 2023 the court rejected Thaler’s appeal and upheld the original decision that the work was not protected by copyright. In doing so, the Court noted there was “centuries of settled understanding” that an “author”, for copyright purposes, must be a human (for more on this case see our IP blog post here).

Conclusions

These cases and legislative developments demonstrate that the issue of AI and copyright continues to be a hot topic that demands serious discussion and some clear direction. With legislators having been slow to act so far, AI users around the world are turning to the courts to get the answers they need. Courts must use the tools at their disposal, the current laws as they stand, to address these ground-breaking issues, with concepts that are often hard to grasp and harder to frame in legal terms.

The risk across the EU, UK, US, China and elsewhere is that these answers will vary from court to court, as already illustrated by the Li v Liu decision of the Chinese court compared to the US decision in Dr Thaler’s case (see above), creating uncertainty in an already fragmented copyright landscape, although at the same time giving some jurisdictions a potential competitive advantage. In the EU certainly, the hope is that the much-awaited EU AI Act can set a clear and strong example for legislators to take action and provide guidance which the rest of the world may find attractive to implement likewise; without an international resolution to these issues there will continue to be tensions.

For more on IP issues and AI in the UK, EU and internationally, see our series “The IP in AI” and our regular AI blog posts here.

Authors and Contacts

Giulia Maienza
Senior Associate - London
+44 20 7466 6445
Rachel Montagnon
Professional Support Consultant - London
+44 20 7466 2217
Andrea Appella
Consultant - Milan
+39 02 3602 1392
Bob Bao
Partner, Kewei
+86 21 2322 2113
Pietro Pouche
Partner - Milan
+39 02 3602 1394
Andrew Moir
Partner - London
+44 20 7466 2773
Heather Newton
Of Counsel - London
+44 20 7466 2984
Andrea Pontecorvi
Associate - Milan
+39 02 3602 1424

UK Supreme Court unanimously dismisses DABUS appeal to allow AI to be named as a patent inventor

On 20 December 2023, the Supreme Court dismissed the appeal of Dr Stephen Thaler, reiterating earlier decisions as to the ineligibility for patent protection of inventions where there is no named human inventor.

In its judgment, the Supreme Court held that an inventor, for the purposes of the Patents Act 1977 (the Act), must be a natural person, and that therefore an autonomous AI system cannot be named as inventor under the current provisions of the Act. Further, the Supreme Court held that, under the Act, ownership of an AI system does not confer a right for the owner to apply for or obtain a patent relating to inventions generated by said AI system.

This decision is the latest in a series of test cases filed around the world by Dr Thaler in respect of inventions generated by his AI system known as ‘DABUS’ (see here), the majority of which have been rejected.

Background

In late October 2018, Dr Thaler filed two patent applications under the Patents Act 1977 for a new form of food container and an emergency lighting beacon. Neither application designated a human inventor, and the request for grant forms accompanying the applications expressly stated that Dr Thaler was not an inventor either. The UKIPO responded to these applications by requesting that statements of inventorship be filed, and that Dr Thaler indicate how he derived the right to be granted the patents, within 16 months. A failure to do so would mean that the patent applications would be taken to be withdrawn, in accordance with rule 10(3) of the Patents Rules 2007.

On 23 July 2019 Dr Thaler filed his statements of inventorship, which outlined his position – the inventions were created by his AI machine DABUS, and he had acquired the right to the grant of the patents as the owner of the machine. A hearing took place on 14 November 2019.

On 4 December 2019, a decision was issued by the UKIPO, finding that DABUS was not a person as envisaged by section 7 or section 13 of the Patents Act 1977 and so was not an inventor, and that therefore DABUS had no rights that could be transferred to allow Dr Thaler to apply for patents in respect of the inventions. Further, Dr Thaler was not entitled to the grant of a patent on the basis that he owned DABUS.

Dr Thaler appealed the refusal to the High Court; the appeal was dismissed by Marcus Smith J on 21 September 2020, upholding the UKIPO’s grounds of refusal. The Court of Appeal dismissed a subsequent appeal by Dr Thaler on 21 September 2021, holding by a majority (Arnold LJ and Elisabeth Laing LJ) that DABUS could not be an inventor under the Patents Act 1977, as an inventor was required to be a person.

For further background, an explanation of the relevant law and a summary of previous decisions please see our blog post here (on the UK High Court decision), and here (on the UK Court of Appeal decision).

Leave was granted to appeal to the UK Supreme Court. The appeal was heard on 2 March 2023 by Lord Hodge, Lord Kitchin, Lord Hamblen, Lord Leggatt and Lord Richards.

Supreme Court Judgment

Lord Kitchin’s judgment (with whom Lord Hodge, Lord Hamblen, Lord Leggatt and Lord Richards all agreed) makes clear that it is not concerned with the more general issue of whether technical advances generated by AI powered autonomous machines should be patentable, or whether the term “inventor” should be expanded to include AI powered machines. The appeal instead concerns “the much more focused question of the correct interpretation and application of the relevant provisions of the [Patents Act 1977]”. The Supreme Court addressed three key issues.

Issue 1: Scope and meaning of “inventor”

In considering sections 7 and 13 of the Patents Act 1977, the Supreme Court held that an inventor within the meaning of the Act must be a natural person, which DABUS was not. An inventor is the deviser of the invention, which through its ordinary meaning would be “a person who devises a new and non-obvious product or process”.

Accordingly, it was held that DABUS “is not a person, let alone a natural person and it did not devise any relevant invention. Accordingly, it is not and never was an “inventor” for the purposes of section 7 or 13 of the 1977 Act”.

Issue 2: Was Dr Thaler the owner of any invention in any technical advance made by DABUS and entitled to apply for and obtain a patent in respect of it?

Lord Kitchin noted that the Act “confers the right to apply for and obtain a patent and it provides a complete code for that purpose”. Under section 7 of the Patents Act 1977, if the applicant is not the inventor (which Dr Thaler has never claimed to be) then the applicant must fall into one of the limbs of either section 7(2)(b) (persons entitled by rule of law or agreement with the inventor) or section 7(2)(c) (successors in title).

The Court held that Dr Thaler does not satisfy “any part of this carefully structured code”. At its starting point, section 7 requires an inventor, and that inventor must be a person. DABUS “was not and is not a person”.

The Supreme Court did not accept Dr Thaler’s submissions on the doctrine of accession, in which he sought to rely on arguments based on a property right in an invention. Dr Thaler argued that DABUS’ inventions were the “fruits” of his DABUS machine and that just as “the farmer owns the cow and the calf”, Dr Thaler submitted that, as owner of DABUS, he was the owner of all rights in DABUS’s developments.

The Supreme Court rejected this (as the lower courts had). Lord Kitchin noted that firstly (and fatally), this argument did not overcome the need for an inventor; and DABUS was not an inventor. Secondly, the doctrine of accession argument was misguided, and the analogy presented by Dr Thaler false. The doctrine was concerned with the ownership of new tangible property created by existing tangible property (the calf and the cow in Dr Thaler’s example). There is no tangible property in an invention, however. Rather, the right in an invention under the Act was the right to apply for a patent where the other requirements of the Act were satisfied. The doctrine of accession did not confer on Dr Thaler property in, or the right to apply for, a patent in these circumstances.

Issue 3: Was the Hearing Officer entitled to hold that the applications would be taken to be withdrawn?

Given that, for the reasons above:

  • Dr Thaler failed to identify any person or persons whom he believed to be the inventor or inventors of the inventions described in the applications; and
  • his ownership of DABUS did not provide a proper basis for accepting his claim to be entitled to the grant of these patents;

the Comptroller was right to find that the applications would be taken to be withdrawn at the expiry of the sixteen-month period, in accordance with rule 10(3) of the Patents Rules 2007.

Implications and comment

The decision of the Supreme Court was widely anticipated, as the Act is relatively clear in respect of the requirement that an inventor named in a patent application be a natural person and cannot be a machine (or AI). It also follows the decisions reached in most other jurisdictions in which Dr Thaler has sought to prosecute these applications.

Importantly for businesses, the decision reinforces the position that, under current law, inventions which are truly created by AI without any human inventor are not patentable in the UK. Of course, it may often be the case that there is human oversight of AI in directing its work, such that a human inventor can be named. However, this may not always be the case (at least in the future) and we may expect to see challenges, where AI has been used and a human has been named as an inventor in a patent application, as to whether or not there can properly be said to be a human inventor (and thus whether or not the invention can be said to be patentable). Organisations using AI in their development processes will need to be careful in crafting policies and processes to ensure that their inventions remain patentable while making use of the benefits AI assistance can bring.

That said, the Supreme Court made clear the limits of its judgment, acknowledging the increasing importance of these questions in light of the significant recent advances in AI technology, and the policy issues that accompany them. It was considering a narrow point of interpretation of the current law, rather than the wider question of what form protections for AI generated works might take in the future, which are, properly, questions for the legislature and not the courts. Lord Kitchin quoted with approval the comments made by Elisabeth Laing LJ in the Court of Appeal in this regard:

“Whether or not thinking machines were capable of devising inventions in 1977, it is clear to me that Parliament did not have them in mind when enacting this scheme. If patents are to be granted in respect of inventions made by machines, the 1977 Act will have to be amended.” (para 103)

A UKIPO consultation on AI and IP law in 2022 concluded that no changes to UK patent law were then needed, including in respect of inventorship requirements. It found that the majority of respondents to the consultation considered that AI technology was not sufficiently advanced to enable invention without human intervention. Whether or not this remains the case, it is likely that questions over the proper way to protect AI-devised inventions and other outputs will only become more pronounced in the coming years. Legislatures around the world are continuing to look at these issues, and at other jurisdictions, as they grapple with how best to cater for the diverse interests and strongly held views in this area.

For more on AI and IP see our series The IP in AI and our blog posts on AI related developments, cases and issues.

Authors

Peter Dalton
Partner, IP & Cyber
+44 20 7466 2181
Adam Evans
Trainee solicitor, IP
+44 20 7466 3409
Victoria Gettins
Trainee solicitor, IP
+44 20 3692 9648
Anand Varu
Legal assistant, IP
+44 20 7466 2736

Artificial neural networks (ANNs) not excluded from patentability and have a “substantial technical contribution” finds E&W High Court

The High Court of England and Wales has issued a judgment on the patentability of AI both in respect of its exclusion from patentable subject matter and in relation to whether the AI in question produced a substantial technical contribution.

In Emotional Perception AI Ltd v Comptroller-General of Patents, Designs, and Trade Marks [2023] EWHC 2948 (Ch) (“Emotional Perception“), Sir Anthony Mann (sitting in retirement) held that a patent application for an artificial neural network (“ANN“) did not invoke the statutory exclusion from patentability which applies to computer programs (as such) under s.1(2)(c) of the Patents Act 1977, as no computer program was claimed (the ANN involved not being such); and that, if that was wrong, the system’s method of selection through the application of “technical criteria which the system has worked out for itself” satisfied the requirement for a technical contribution for the purposes of escaping that subject matter exclusion in any case.

In response, the UK IPO has temporarily suspended its guidance on the examination of AI inventions while it considers the impact of this decision and has issued a practice update specifically relating to the examination of ANNs. The UK IPO has further been granted leave to appeal the decision to the Court of Appeal.

The Decision in Emotional Perception

Background

The case concerned a patent application for a system for recommending media files to end users by passing music tracks through a trained ANN. The ANN was trained in a particular way to identify similar tracks, taking into account both natural language descriptions of a music file and its physical properties (based on human perceptions and descriptions), and used machine learning to correct its own internal workings without human input. The advantage of the patent was said to be its ability to suggest similar music in terms of human perception and emotion, irrespective of the genre of music and the apparently similar tastes of other humans, and to arrive at such suggestions by passing music through a trained ANN which performed the categorisation.

The proceedings were brought as an appeal against a decision of 22 June 2023, in which a UK IPO Hearing Officer refused to grant the proposed patent on the basis that the claimed invention constituted subject-matter excluded from patentability under s.1(2)(c) of the Patents Act 1977 (the “Patents Act“), which excludes from patentability “a program for a computer … only to the extent that a patent or application for a patent relates to that thing as such”.

Decision

The main issue was whether the use of an aspect of artificial intelligence (the ANN) fell within the exclusion in the Patents Act relating to computer programs. The application had been found to be excluded by the Comptroller of Patents whose decision was being appealed in this case.

Sir Anthony Mann first considered whether the ANN was a “program for a computer” within the meaning of s.1(2)(c) of the Patents Act. In doing so, Sir Anthony considered that the ANN could be classified into two types: the “hardware” ANN, which is described as a “physical box with electronics in it”, and the “software” or “emulated” ANN, whereby a “conventional computer runs a piece of software which enables the computer to emulate the hardware ANN as if it were a hardware ANN“. It appeared to be accepted by the parties that a hardware ANN would not fall under the exclusion in s.1(2)(c) of the Patents Act, and Sir Anthony commented that this was justified on the basis that the hardware was not implementing a series of instructions pre-ordained by a human, but was operating according to something that it has learned itself. He found that the same reasoning applied to the “emulated” ANN since it was not implementing code given to it by a human, and was “in substance, operating at a different level (albeit metaphorically) from the underlying software on the computer”. Sir Anthony therefore concluded that while the program used at the training stage of the ANN could be a program for a computer, the emulated ANN did not fall within the exclusion.

Training stage involving a computer program

While holding that an emulated ANN was not a program for a computer, Sir Anthony commented that programming activity was involved in the training phase, and so “the only remaining candidate computer program is therefore the program which achieves, or initiates, the training”. However, he held that the actual training program was only a subsidiary part of the claim and was not what was claimed in the invention. On the basis that the claims went beyond that, the exclusion was not invoked.

Technical contribution

In case he was wrong on this conclusion, Sir Anthony then went on to find that the invention provided a technical contribution which would allow an invention that might otherwise have been excluded to be patentable; in his words, “a technical effect which prevents the exclusion applying“. Following the line of cases on computer implemented inventions and technical contribution, he considered steps 3 and 4 of the Aerotel steps, which assess whether the contribution falls solely within the excluded subject matter and whether the contribution is technical in nature. He held that the system’s method of selection and the output of files that would not otherwise be selected through the application of “technical criteria which the system has worked out for itself” constituted a technical effect outside the computer for these purposes, and when coupled with the purpose and method of selection satisfied the requirement of technical effect in order to escape the subject matter exclusion of s.1(2)(c) of the Patents Act.

Sir Anthony also considered an alternative approach to the extent that the computer program was either the training program or the overall training activity. In this case, he considered that the resulting ANN, and particularly a trained hardware ANN, could be regarded as an external technical effect which prevented the exclusion from applying to any prior computer program. Again, he concluded that there was no difference between hardware and emulated ANNs for these purposes.

Further exclusions

Sir Anthony also explored another possible exclusion of the invention as a “mathematical method” (s.1(2)(a) Patents Act), which had been raised in post-hearing submissions, but this was rejected on a procedural issue; the judge held that it was not in play in the appeal and did not consider it further.

UK IPO’s response

The UK IPO has responded by temporarily suspending its guidance on the examination of AI inventions and issuing specific guidance that “patent examiners should not object to inventions involving ANNs under the “program for a computer” exclusion.” In what may signal a shift in approach to the patentability of AI more generally, it has also indicated that the Manual of Patent Practice and the Office’s guidelines for examining patent applications relating to artificial intelligence (AI) inventions will be updated in due course to reflect the Emotional Perception judgment.

Further, on 15 December 2023, the UK IPO confirmed that it had been granted leave to appeal the decision to the Court of Appeal.

Authors

Peter Dalton
Partner, IP & Cyber
+44 20 7466 2181
Rachel Montagnon
Professional Support Consultant, IP
+44 20 7466 2217
Victoria Gettins
Trainee solicitor, IP
+44 20 3692 9648

The IP in AI: Can patents protect AI-generated inventions?

In this instalment of our series The IP in AI, we consider whether patents can be awarded for inventions made by AI systems, and the challenges faced by patent law in protecting innovation in an AI-enabled world.

Read the full article here

For more on the developing area of intellectual property protection and risks for AI and ML systems, follow our blog series The IP in AI.

Key contacts

Aaron Hayward
Senior Associate, Australia
+61 2 9225 5739
Anna Vandervliet
Senior Associate, Australia
+61 2 9322 4868
Byron Turner
Solicitor, Australia
+61 2 9322 4155
Bryce Robinson
Solicitor, Australia
+61 3 9288 1155
Rachel Montagnon
Professional Support Consultant, UK
+44 20 7466 2217
Heather Newton
Of Counsel, UK
+44 7809 200 246
Maximilian Kucking
Senior Associate, Germany
+49 211 975 59096
Peng Lei
Partner, Kewei, China
+86 10 6535 5151
Alex Wang
Patent Attorney, China
+86 10 6535 5156

Series: The IP in AI

Uses of machine learning and AI are expanding rapidly, and IP rights play a critical role in both regulating the use of AI and protecting the rights of inventors and creators. In this series, we will explore the key challenges governments worldwide are currently grappling with in order to provide the right level of protection to AI and ML systems.

Continue reading

UK Select Committee recommends legislation on AI including to establish and enforce rights of IP owners

The UK Science, Innovation and Technology Select Committee (which recently conducted an inquiry into the impact of AI on several sectors) has published The Governance of Artificial Intelligence: Interim Report (pdf here). The report identifies 12 challenges of AI, including that for intellectual property, and recommends legislation during this parliament (ie before the general election due in 2024). The Select Committee expresses concerns that the UK will fall behind if there are delays, given the moves already made by the EU and US to regulate AI. On IP, it recommends that where AI models and tools make use of other people’s content, policy must establish the rights of the originators of this content, and these rights must be enforced.

The need for regulation now and the establishment of an international forum on AI: The report encourages the UK Government to go direct to legislation on AI regulation rather than apply the approach set out in its white paper of March 2023. The white paper used five principles to frame regulatory activity and guide the future development of AI models and tools, and their use – but these principles were not to be implemented via statute; instead they were to be “interpreted and translated into action by individual sectoral regulators, with assistance from central support functions“.

The report recognises that although the UK has a long history of technological innovation and regulatory expertise, which “can help it forge a distinctive regulatory path on AI“, the AI white paper is only an initial effort to engage with AI regulation and its approach risks the UK falling behind, given the pace of development of AI and especially in light of the efforts of other jurisdictions, principally the European Union and United States, to set international standards.

The report suggests “a tightly-focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.”

An international summit on AI safety, expected to be held in the UK in November or December, will also be key, and the report recommends that invitations be extended to as wide a range of countries as possible to create a forum “for like-minded countries who share liberal, democratic values, to ensure mutual protection against those actors—state and otherwise—who are enemies of these values and would use AI to achieve their ends.”

12 essential challenges of AI identified: The report identifies the challenges of AI in general and twelve essential challenges that AI governance must address if public safety and confidence in AI are to be secured, including IP at challenge 8:

  1. The Bias challenge. AI can introduce or perpetuate biases that society finds unacceptable.
  2. The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
  3. The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
  4. The Access to Data challenge. The most powerful AI needs very large datasets, which are held by few organisations.
  5. The Access to Compute challenge. The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
  6. The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
  7. The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
  8. The Intellectual Property and Copyright Challenge. Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
  9. The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
  10. The Employment challenge. AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.
  11. The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
  12. The Existential challenge. Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security.

In relation to challenge 8 on Intellectual Property and Copyright, the report comments that “Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced”, and that whilst the use of AI models and tools has helped create revenue for the entertainment industry in areas such as video games and audience analytics, concerns have been raised about the ‘scraping’ of copyrighted content from online sources without permission.

The report refers to “ongoing legal cases” (unnamed, but likely a reference to Getty v Stability AI) which are likely to set precedents in this area, but also notes that the UK IPO has begun to develop a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors; this guidance should “… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work”. The report notes that the Government has said that if agreement is not reached or the code not adopted, it may legislate.

The withdrawal of the proposed text and data mining exception, following pressure from the creative industries, is noted, as are comments from other parties that this now “… prevents the UK from capitalising on the diverse, agile and creative benefits that AI can bring to the UK’s economy, its society and its competitive research environment”.

On the Liability challenge (9) the report considers that if AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.

For more on the IP in AI see our series of blog posts here.

Rachel Montagnon
Professional Support Consultant, UK
+44 20 7466 2217
Heather Newton
Of Counsel, UK
+44 7809 200 246

US court refuses copyright registration for AI-generated art

Dr Thaler’s latest attempt to obtain IP protection for “creations” of his AI system has failed, with a US Court rejecting copyright protection for an AI-generated artwork.

Background: Dr Thaler, DABUS and the Creativity Machine

Dr Thaler has become well-known as the owner of the DABUS AI which he has been seeking to have identified as the inventor of a patent at multiple patent offices around the world. So far he has done so without much success, although he is currently awaiting the outcome of an appeal in the UK Supreme Court.

Dr Thaler also owns an AI system he calls the “Creativity Machine” which generates art, he claims, of its own accord. In his latest foray into the world of IP registration, Dr Thaler sought to obtain copyright registration in the US for an artwork entitled “A Recent Entrance to Paradise”, which he claimed was generated by the Creativity Machine.

“A Recent Entrance to Paradise” (source)

Dr Thaler’s application to the US Copyright Office identified the Creativity Machine as the copyright author, and he claimed that ownership should transfer to him as “a work for hire” – a US term meaning that where copyright is generated by an employee within the scope of their employment, ownership automatically transfers from the employee to the employer, similar to the employer ownership provisions under UK and Australian law.

The US Copyright Office rejected his application on the grounds that the work lacked human authorship, which was a pre-requisite for valid copyright to be registered. Thaler had confirmed that the work was autonomously generated and acknowledged that it lacked “traditional human authorship”, but had urged the Copyright Office to “acknowledge [the Creativity Machine] as an author where it otherwise meets authorship criteria, with any copyright ownership vesting in the AI’s owner“. Following that decision, Thaler appealed to the District Court for the District of Columbia.

Court says no: copyright authors must be human

Last week the Court issued its decision, rejecting Thaler’s appeal and upholding the original decision that the work was not protected by copyright. In its reasons, the Court responded that although Thaler:

“…correctly observe[d] that throughout its long history, copyright has proven malleable enough to cover works created with or involving technologies developed long after traditional media of writings memorialized on paper … Copyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.”

In doing so, the Court noted there was “centuries of settled understanding” that an “author”, for copyright purposes, must be a human, referring to previous decisions denying copyright protection for written works asserted to have been created by “celestial beings”,[1] “a spirit named Phylos the Thibetan”,[2] a garden that “ow[ed] [its] form to the forces of nature”,[3] and photographs taken by a crested macaque that got its hands on a photographer’s camera.[4]

Thaler’s claims in relation to ownership were not relevant where there was no copyright in existence, and so did not need the Court’s consideration.

Unanswered questions

Importantly, the Court observed that because Thaler had identified the author of the work as the Creativity Machine, its decision was narrowly directed at “the sole issue of whether a work generated entirely by an artificial system absent human involvement should be eligible for copyright”. Although in the course of the appeal Thaler had attempted to assert for the first time that he “provided instructions and directed his AI to create the Work”, that “the AI is entirely controlled by [him]” and that “the AI only operates at [his] direction”, the Court had to consider the application on the basis on which he had originally sought it, namely that the work had been generated without human involvement.

As a consequence, the decision did not address what the Court described as “challenging questions” regarding:

“how much human input is necessary to qualify the user of an AI system as an “author” of a generated work, the scope of protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more.”

As we have previously observed, those questions are likely to be the key ones to be addressed by the Courts or legislatures in grappling with the application of copyright protection to AI-generated works. Given this decision, those questions remain open.

To read more on AI and copyright see our series The IP in AI and in particular the edition Does Copyright Protect AI-Generated Works?

Authors and Contacts

Rachel Montagnon
Professional Support Consultant, UK
+44 20 7466 2217
Aaron Hayward
Senior Associate, Australia
+61 2 9225 5739
Heather Newton
Of Counsel, UK
+44 7809 200 246
Anna Vandervliet
Senior Associate, Australia
+61 2 9322 4868
Byron Turner
Solicitor, Australia
+61 2 9322 4155

[1] Urantia Found. v. Maaherra, 114 F.3d 955, 958-59 (9th Cir. 1997).

[2] Oliver v. St. Germain Found., 41 F. Supp. 296, 297, 299 (S.D. Cal. 1941).

[3] Kelley v. Chicago Park District, 635 F.3d 290, 304-306 (7th Cir. 2011).

[4] Naruto v. Slater, 888 F.3d 418, 420 (9th Cir. 2018).

The IP in AI: Does copyright protect AI-generated works?

In this instalment of our series The IP in AI, we take a look at the extent to which copyright and other rights currently provide protection for output generated by AI systems, including how concepts of ‘authorship’ and ‘originality’ may need to be adapted to meet the rapid growth of generative AI.

Read the full article here

For more on the developing area of intellectual property protection and risks for AI and ML systems, follow our blog series The IP in AI.

Key contacts

Aaron Hayward
Senior Associate, Australia
+61 2 9225 5739
Anna Vandervliet
Senior Associate, Australia
+61 2 9322 4868
Byron Turner
Solicitor, Australia
+61 2 9322 4155
Rachel Montagnon
Professional Support Consultant, UK
+44 20 7466 2217
Heather Newton
Of Counsel, UK
+44 7809 200 246
Peng Lei
Partner, Kewei, China
+86 10 6535 5151
Alex Wang
Patent Attorney, China
+86 10 6535 5156
Giulia Maienza
Associate, Europe
+44 20 7466 6445
Michael Dardis
Solicitor, Australia
+61 3 9288 1173