The development of artificial intelligence (AI) continues to raise new issues and concerns every day. 2024 looks to be the year in which this innovative technology will be confronted with the need for increasingly sophisticated regulation on the one hand and growing litigation on the other.
In particular, one of the hottest topics in relation to AI concerns IP rights and their protection in the AI world. For example, training AI systems with massive amounts of data indiscriminately found on the web entails the risk of infringing third parties' copyright, while the ability of generative AI to create new and original works raises the question of whether, how and to what extent it is possible to protect these works under IP laws.
These topics have recently hit the news and gained more public attention. Here we start the New Year by taking a look at the latest legislative and case law developments, shedding light on how these issues are being addressed and what questions remain open to debate. In particular, we look at: the EU AI Act and the protection of copyright in materials used in machine learning; the UK and US proceedings in Getty v Stability AI on the infringement of rights in images used to train Stability's Stable Diffusion image-generating AI; the New York Times v OpenAI and Microsoft case in the US, which revolves around fair use of copyright materials; and Li v Liu in China, concerning copyright in AI-generated images.
EU: The AI Act
On 8 December 2023, after lengthy discussions, representatives of the European Parliament, the EU member states and the European Commission finally reached an agreement on the provisional text of the AI Act, which is now in the process of being formally approved by the European Parliament and the Council. Representatives of the governments of EU member states will discuss the EU AI Act at a meeting of the Council on Friday 2 February, and it still seems that France might insist on seeking some changes to the current text.
In general, the AI Act has been welcomed by many as the world's first comprehensive piece of legislation on AI, aiming to achieve uniform protection at European level and to promote the development and use of AI in a safe, reliable and transparent way, while ensuring respect for the fundamental rights of EU citizens and businesses and striking a balance between innovation and protection.
In line with its purpose of providing a uniform and comprehensive piece of legislation, the AI Act adopts the OECD's definition of an AI system ("a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment") and applies to all operators (both public and private providers or users) that use an AI system for commercial purposes within or outside the EU, provided that the AI system is placed on the EU market or its use affects people in the EU.
Safety, reliability and transparency are among the main principles that the AI Act seeks to promote and regulate. On the one hand, users must be made aware that they are interacting with an AI technology or viewing AI-generated output; on the other hand, companies developing AI will have to comply with various levels of disclosure requirements, which should help to prevent infringements of rights or risks to individuals.
While the provisional agreement on the text of the AI Act shows that at least some key points have been agreed so far, allowing operators to make an initial assessment of what could appear in the final text, many issues regarding IP rights do not seem to have reached a consensus and appear to require further discussion and consideration, particularly in light of potential risk profiles and the practical implementation of the legislation. Were future drafts to include specific provisions on exceptions to copyright infringement, for example, it would be interesting to see how the various jurisdictions would implement those provisions, considering that copyright is not fully harmonised at EU level.
Copyright protection and IP provisions
Although many of the questions relating to IP rights and the use of AI systems have not been addressed by the provisional agreement, its main takeaway is that developers of general purpose AI (such as ChatGPT) will be required to implement policies to ensure copyright compliance. One of these compliance requirements appears to be the mandatory disclosure of the material used in the training phase of the AI, to ensure that no copyright-protected work has been used without proper authorisation. In particular, the provision requires providers of general purpose AI models to draw up and make publicly available a sufficiently detailed summary of the content (including text and data protected by copyright) used for training the model. We will have to wait and see what form this summary will actually take.
Such requirements seem to apply "regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of these foundation models [i.e. general purpose AI systems] take place". The broad geographical scope is intended to avoid the circumvention of the rules and an unfair "competitive advantage in the EU market" for the developers who could benefit from "lower copyright standards" by moving software training outside the EU territory.
Providers of general purpose AI models would then need to obtain specific authorisation from rightsholders if they want to carry out text and data mining (TDM) on works over which the rightsholders have reserved their rights under EU Directive 2019/790.
These few references to copyright leave many questions unanswered and raise new lines of inquiry that we hope will be resolved and clarified in the final text. To name a few:
- Do the TDM exceptions provided in individual jurisdictions under EU Directive 2019/790 on copyright and related rights in the Digital Single Market apply?
- The AI Act explicitly refers to Article 4(3) of Directive 2019/790, under which rightsholders may reserve the right to prevent TDM on their protected content, i.e. "opt out" from allowing access to their protected materials. What will be the appropriate opt-out method and how will it apply in practice?
- In relation to the duty to provide documentation with a detailed summary of the use of copyright-protected training data in Foundation Models, what level of detail will be considered useful?
- What role will existing competition, intellectual property, privacy and consumer protection laws play?
- Will it still be possible to carry out data training outside the EU for AI to be used within the EU, and what law will apply to that training?
- Will the AI Act slow down the development or use of AI in Europe?
- Will the AI Act be sufficiently future-proof?
It will be interesting to see what guidance or practice on these issues will arise from self-regulation, disputes (that are increasingly coming before the courts), and commercial policies before the AI Act comes into force.
US: New York Times v. OpenAI and Microsoft; Getty Images v Stability AI
In the United States, several lawsuits revolving around the issues of copyright and AI have emerged over the last few years, making it one of the most prolific forums in which to observe the development of the legal response to these new questions. The latest news is that The New York Times (NYT) has sued OpenAI and Microsoft for alleged copyright infringement of its written works, seeking billions of dollars in damages. NYT, which filed its complaint before the Federal District Court in Manhattan on 27 December 2023, claims that millions of articles published by the New York Times were used by the defendants to train automated chatbots, a use which it claims cannot be covered by the US doctrine of fair use, a general exception to copyright protection under US law allowing the use of copyrighted material without permission under certain conditions. We will have to wait and see how the US courts consider the fair use doctrine may or may not apply to training an AI with written works.
This issue of fair use of text in training an AI has not yet been considered by the UK courts (the Getty v Stability AI case involves images rather than text, aside from the use of Getty watermarks, which is a trade mark issue) or by the EU courts. In these jurisdictions there are specific copyright exceptions (in the EU, varying between member states and not fully harmonised) instead of the general principle of "fair use".
Getty is suing Stability AI in both the US and the UK on similar grounds: for infringement of Getty's rights in its collections of photographs through their use to train Stability AI's image-generating AI engine "Stable Diffusion", and on the basis that the output of the AI is also infringing. See the UK section below for the detail on the UK case.
CHINA: Stable Diffusion generated images held to be copyright of artist: Li v Liu
In November 2023, the Beijing Internet Court ruled that images generated by the artificial intelligence-powered software Stable Diffusion are entitled to copyright protection. This is an interesting conclusion in light of the litigation currently running in parallel in the US and UK, where Getty Images is suing Stability AI in relation to the latter's AI engine Stable Diffusion, which Getty alleges has been trained using Getty's images without permission and whose outputs Getty also alleges infringe its copyright in those images (see the UK section below).
In this case (Li v Liu), the plaintiff used Stable Diffusion to create an image by inputting prompts and posted it on a personal profile on a well-known social media platform. The plaintiff later found that the defendant had used the image, with the plaintiff's signature removed, in a public article without permission, and sued the defendant for copyright infringement before the Beijing Internet Court.
The court held that the image in this case could be a work entitled to copyright protection. Specifically, the image derives from an intellectual investment: during generation of the image by Stable Diffusion, the plaintiff set up the presentation style of the character, selected and arranged the prompts, set relevant parameters, and selected images that met expectations, which reflects the plaintiff's intellectual contribution. In addition, the image possesses originality. The plaintiff continuously adjusted and corrected the images by adding prompts and modifying parameters in order to obtain the final image; this adjustment and modification reflect the plaintiff's aesthetic choices and personal judgment. As the act of using an AI tool to generate an image is essentially human creation by means of a tool, reflecting the original intellectual investment of the human, the image should be recognised as a work.
The court further held that the plaintiff is the copyright owner of the image. Since the AI tool itself is not a natural person, a legal person or an unincorporated organisation, it cannot constitute an author recognised by Chinese copyright law. In addition, the producer of the AI tool was not involved in the generation process of the image, so the producer was not the author either. Since the image was directly generated based on the plaintiff's intellectual investment and reflected the plaintiff's personalised expression, the plaintiff is the author of the image and thus enjoys the copyright (case reference: [(2023) Jing 0491 Min Chu No 11279], 27 November 2023).
UK: Copyright - Getty v Stability AI; the patentability of AI; and AI as an inventor
Getty Images are bringing a case against Stability AI in the UK (in parallel to the US action mentioned above) in relation to Stability AI's allegedly infringing use of Getty's images to train its image-generating AI "Stable Diffusion" and also in relation to the allegedly infringing outputs of that engine.
The case involves allegations of copyright infringement both in the training process and in the outputs themselves, plus allegations of sui generis database right infringement (a right not available in the US), which centres on the extraction and use of content from a database – here, Getty Images' database of photographs used to train the AI. In addition, trade mark infringement and passing off are alleged in relation to the outputs of the AI, many of which had the Getty watermark (or parts of it) incorporated into them.
Stability have so far not submitted a Defence, instead applying to have two elements of the claims against them struck out (in a reverse summary judgment application): the claim for infringement via training (Stability argued that the training did not occur in the UK and that, copyright and database rights being territorial, they therefore did not infringe), and the claim for secondary infringement (which Stability challenged on the basis that making Stable Diffusion available in the UK did not fit within the provisions on secondary infringement in the Copyright, Designs and Patents Act 1988, which require an "article" to be imported for there to be infringement, contending that its software supplied online was not an "article" within the meaning of the Act as it was not tangible). On both issues the UK High Court found that these were matters it would need to decide at a full trial and not at an interim stage in a reverse summary judgment/strike out application.
Stability AI have yet to file a Defence, but it will be interesting to see what approach they take, and it will be key for AI developers to have the court decide on these issues in due course.
Two further cases of particular interest have been decided recently in the UK in relation to patent rights and AI:
- The patentability of an AI involving an artificial neural network (ANN) was considered by the UK High Court, which found that the ANN was not a computer program and so did not fall to be excluded from patentability under s.1(2)(c) of the Patents Act 1977 and, in any case, even if it had been held to be excluded as a computer program, would otherwise have been patentable for having made a "substantial technical contribution", following the long line of case law developed around computer implemented inventions (Emotional Perception v Comptroller of Patents) – see our blog post here.
- Dr Thaler's case challenging the UKIPO's refusal to allow his AI DABUS to be named as the inventor of two patents went all the way to the UK Supreme Court, being rejected at each stage (see our blog post on the UK Supreme Court's December 2023 decision in DABUS here), with the courts finding that an inventor must be a human under the law as it currently stands.
Although not an EU or UK decision, it is worth mentioning here that Dr Thaler, who was behind the DABUS AI patent inventorship challenges worldwide (see above), also has an AI system he calls the "Creativity Machine" which, he claims, generates art of its own accord. Dr Thaler sought to obtain copyright registration in the US for an artwork entitled “A Recent Entrance to Paradise”, which he claimed was generated by the Creativity Machine. The US Copyright Office rejected his application for copyright registration on the grounds that the work lacked human authorship, which was a prerequisite for valid copyright to be registered. Thaler had confirmed that the work was autonomously generated and acknowledged that it lacked “traditional human authorship”, but had urged the Copyright Office to “acknowledge [the Creativity Machine] as an author where it otherwise meets authorship criteria, with any copyright ownership vesting in the AI’s owner“. Following that decision, Thaler appealed to the District Court for the District of Columbia. In August 2023 the court rejected Thaler's appeal and upheld the original decision that the work was not protected by copyright. In doing so, the Court noted there was “centuries of settled understanding” that an “author”, for copyright purposes, must be a human (for more on this case see our IP blog post here).
Conclusions
These cases and legislative developments demonstrate that the issue of AI and copyright continues to be a hot topic that demands serious discussion and some clear direction. With legislators being slow to act so far, AI users around the world are turning to the courts to get the answers they need. Courts must use the tools at their disposal, the current laws as they stand, to address these ground-breaking issues, with concepts that are often hard to grasp and harder to frame in legal terms.
The risk across the EU, UK, US, China and elsewhere is that these answers will vary from court to court, as already illustrated by the Li v Liu decision of the Chinese court compared to the US decision in Dr Thaler's case (see above), creating uncertainty in an already fragmented copyright landscape, although at the same time giving some jurisdictions a potential competitive advantage. In the EU certainly, the hope is that the much-awaited EU AI Act can set a clear and strong example for legislators to take action and provide guidance which the rest of the world may find attractive to implement likewise; without an international resolution to these issues there will continue to be tensions.
For more on IP issues and AI in the UK, EU and internationally, see our series "The IP in AI" and our regular AI blog posts here.
Visit our artificial intelligence hub for the latest legal and industry analysis.
Andrew Moir
Partner, Intellectual Property and Global Head of Cyber & Data Security, London