The IP in AI – What you need to know
With AI sending waves throughout the business world, we explain the salient role of intellectual property in regulating the technology and protecting the rights of inventors.
The patent system rewards the development of new and useful inventions with exclusive rights to exploit these inventions for a certain period of time. In return, it requires the inventor to publicly disclose their invention, so that other people can learn from it and continuously build upon the state of the art. However, this process of iterative innovation is no longer the sole domain of humans. Much as we noted in part 3 of this series that AI systems are now generating “creative” outputs, AI systems such as IBM’s Watson and Google’s DeepMind have already been deployed to solve some of the great technical, scientific, and medical challenges of our time.1
In a world where machines are playing an increasingly important role in innovation, the question has been asked as to how the patent system will protect AI-generated inventions. Is a system designed to foster human ingenuity and innovation sufficiently flexible to accommodate non-human inventors? Should it be?
As we explained in part 3, under copyright law, the existence of a (human) author is directly relevant to the question of whether a work will attract copyright protection at all. Patent law is different. While the identity of the inventor has certain legal implications, it does not affect the patentability of the invention itself. This is because the patentability of an invention is assessed objectively, rather than by reference to the subjective “inventive” or other mental process of the inventor. As the High Court of Australia has observed, a valid patent may be obtained for something “stumbled across by accident” or “remembered from a dream”, so long as it otherwise meets the criteria for patentability.2
So, why does it matter who the inventor is? The answer is ownership. In most jurisdictions, a patent can only be granted to the inventor (ie the person – or people – responsible for the “inventive concept”) or to someone who derives title from the inventor, for example through an assignment of rights under contract, because the invention was developed in the course of employment, or as the legal representative of a deceased inventor. Given that the role of patents is to encourage innovation, ensuring that there is clarity over who will be entitled to a patent stemming from inventive activities is critical.
The question of whether an AI system is capable of being named as an inventor of patentable subject matter is being tested around the world as part of The Artificial Inventor Project. The project comprises a series of “test” patent applications filed by Dr Stephen Thaler in respect of inventions generated by his AI system, ‘DABUS’.
DABUS (or ‘Device for the Autonomous Bootstrapping of Unified Sentience’) is a type of ‘connectionist AI’, which uses multiple neural networks to generate new ideas, the novelty of which is then assessed by a second system of neural networks. By this process, DABUS has autonomously generated two “inventions” in respect of which Dr Thaler has applied for patent protection: the fractal container (a food container) and the neural flame (a search and rescue beacon).
So far, with very few exceptions, the global trend in courts and patent offices has been to reject these applications, on the basis that an AI system cannot be regarded as an inventor for the purposes of patent law. Although the reasoning has varied from country to country, the emerging global consensus is that an inventor of a patented invention must be a human or a person with legal capacity.
We have previously reported on the outcomes in Australia, the United Kingdom and before the European Patent Office.
The position in the United States is similar. The US Patent and Trademark Office also rejected Dr Thaler’s patent applications, stating that the explicit statutory language used by Congress to define the term ‘inventor’ (such as ‘individual’ and ‘himself or herself’) was uniquely directed to human beings.5
Dr Thaler was unsuccessful in challenging this decision in both the District Court for the Eastern District of Virginia and the Court of Appeals for the Federal Circuit,6 and the US Supreme Court has recently declined to hear Dr Thaler’s appeal.7
Dr Thaler’s patent applications have also been unsuccessful in New Zealand,8 Taiwan,9 Israel,10 the Republic of Korea,11 Canada,12 Brazil,13 and India.14
In Germany, one of Dr Thaler's appeals contained three auxiliary requests: (1) to grant the patent without any inventor designation; (2) to include a paragraph in the description clarifying that DABUS created the invention; and (3) to designate the inventor as “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention”. The 11th Senate of the Federal Patent Court granted the third auxiliary request, stating that while the role of AI in the creation of the invention may or may not be mentioned, in any case a natural person must be named as the inventor.15
In a parallel proceeding relating to another DABUS patent, however, the 18th Senate of the Federal Patent Court decided that a patent on AI-generated inventions cannot be granted, unless the applicant omits the reference to the AI in the inventor designation. Both decisions are subject to appeal to the Federal Court of Justice, which is expected to provide clarification.
To date, South Africa16 and Saudi Arabia17 are the only exceptions, although in both of those jurisdictions the patents have not yet undergone substantive examination.
Dr Thaler has also filed applications in jurisdictions including China, Japan, Singapore, and Switzerland, so it remains to be seen whether further exceptions to the global trend will emerge.
Because of the way Dr Thaler argued the cases, none of the Australian, UK or US courts, nor the EPO, was required to answer who, if not DABUS, should have been named as the inventor of the relevant patents. However, the Full Court of the Australian Federal Court suggested a number of possible candidates.
As noted above, due to the auxiliary requests filed in the German proceedings, the 11th Senate of the German Federal Patent Court considered that the appropriate inventor designation was “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention”.
Determining who should be named as the “inventor” of an invention devised using (or by) an AI system is likely to be fact-specific. In many cases this question is likely to be academic, since regardless of which individual(s) are considered to be the “inventors” it will be clear that the same person or organisation will ultimately be entitled to the patent (for example, the potential inventors’ employers or the owners of the AI system). Nonetheless, until an appropriate case is litigated in which that question is required to be considered, some uncertainty will remain as to who should be regarded as the “inventor” of such an invention.
In determining whether a patent should be granted, patent offices and courts are tasked with assessing the novelty, inventiveness, and utility of the claimed invention, as well as ensuring that it is clearly and comprehensively described in the patent’s specification. Central to many of these assessments is a fictitious, yet centrally important figure: the “person skilled in the art” (PSA).
The PSA is the hypothetical person to whom the claimed invention is assumed to be addressed. They have the ordinary level of skill and perception of those working in the relevant field at the time, but they are not particularly inventive or creative. They are also armed with what is called the “common general knowledge”: the knowledge assimilated and accepted by the bulk of people working in the relevant field at the time.
For hundreds of years, the PSA has only ever been human, and the law has only attributed to them knowledge they are likely to already possess or that which is readily at their fingertips. What happens when AI is thrown into the mix?
One of the key criteria of patentability in most jurisdictions is that the claimed invention must involve an “inventive step”. This is assessed by determining whether the invention, when compared with the prior art, would have been obvious to the PSA in light of the common general knowledge.
Given the rapid rate at which AI systems are being applied to new contexts, in the Australian DABUS litigation, both the primary judge18 and the Full Court on appeal raised the question whether the standard of inventiveness needed to be recalibrated if, for example, the PSA were considered also to have access to AI systems.19
The Full Court considered that this issue should be dealt with urgently, but that courts should be cautious about stretching the interpretation of existing legislation to give it a meaning the legislature did not intend. Historically, the law of obviousness has been sufficiently flexible to adapt to the changing nature of innovation. For example, the hypothetical PSA will often be an interdisciplinary team, not just a single person, reflecting the way research and development is actually conducted. The common general knowledge has also evolved to account for reference material that a PSA would routinely consult or have access to, notably including the proliferation of online resources, and, at least in the UK, courts have been willing to find that the outcome of routine processes will be obvious even if it would not have been predictable in advance.20
In principle, there is therefore no reason these concepts could not extend to AI systems, if they would be routinely used in the relevant field. In practice, however, such developments may raise evidentiary challenges. It may be difficult to establish what kind of AI is part of the PSA’s normal toolkit, given the range of functionality, complexity, and sophistication in AI systems. This is further compounded by the unpredictability of AI outputs, given matters such as the “black box” nature of AI systems and their dependence on the datasets on which they are trained. As ever, therefore, answering these questions will critically depend on the quality of evidence that can be adduced, including the opinions of skilled experts in the field.
The fundamental bargain of patent law is that patentees get a monopoly over their invention in exchange for publicly disclosing it. In many jurisdictions, this is reflected in threshold requirements of ‘sufficiency’ or ‘enablement’. To obtain exclusive rights, the patent application must disclose the invention clearly, completely, and in enough detail for it to be carried out by the PSA without them having to undertake undue work or experimentation.
Where an AI system is used to create an invention, a unique practical challenge for sufficiency is that the “black box” nature of many such systems means that humans are unable to understand or access the functions or processes by which an AI system arrived at the final output. If the person preparing the patent specification cannot properly understand how the invention was derived or how it is performed, it may not be possible to make a full disclosure enabling others to carry out the invention.21
The “black box” problem is not unique to patents: it has also been raised with the growing use of AI in other fields, for example ensuring that decision-making processes do not unlawfully discriminate, or that medical or diagnostic models are capable of verification by clinicians.22
A practical solution, in each case, may be the development of “explainable AI”: models capable of explaining or providing insights into how they arrived at their outputs, although some AI experts caution that such transparency may come at the cost of accuracy.23

To date, the vast majority of patent offices and courts around the world have declined to recognise AI as an inventor. It is important to note that, for the most part, they have been addressing the relatively narrow question of whether Dr Thaler’s patent applications met the formal requirements of existing legislative frameworks. However, those decisions have raised the broader question of whether, and how, the patent system should accommodate inventive contributions made by AI systems.
That question is the subject of active consideration by governments. For example, a UK Intellectual Property Office consultation, reporting in 2022, identified several options for reform.
Perhaps surprisingly, the UKIPO reported that the majority of respondents preferred the option of not making any change to UK law, for the time being, since most respondents felt that AI was not yet advanced enough to invent without human intervention. This was the approach ultimately adopted by the UK Government.
Although applications like Dr Thaler’s remain something of a novelty or anomaly, they have exposed a tension between existing patent laws and the practical realities of modern innovation. Inventive AI sits uncomfortably within a system where inventors are presumed to have legal personality and the capacity to enjoy and assign rights. It also raises challenging questions about the appropriate benchmarks for assessing inventiveness and sufficiency of disclosure.
Nonetheless, modern patent law has evolved over centuries to adapt and remain relevant through countless industrial and technological advances. AI systems are, in that sense, no different from what has come before. As ever, courts and legislatures will continue to adapt the core principles of patent law, based on evidence as to the actual practices of inventors, to ensure that the patent system continues to incentivise, not hinder, innovation—in whatever form it may take.
For more on the developing area of intellectual property protection and risks for AI and ML systems, follow the AI in IP series on our IP blog.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025