Enabling AI-Assisted Inventions and the Black-Box Problem

Haug Partners LLP

Earlier this year, the U.S. Patent and Trademark Office issued its Inventorship Guidance for AI-Assisted Inventions.1 The main takeaway is that “[w]hile AI systems . . . cannot be listed as inventors on patent applications,” the “use of an AI system by a natural person” does not preclude that person from being listed as an inventor if they “significantly contributed to the claimed invention.”2 The Office reasoned that AI systems are “like other tools” that an inventor might employ when inventing,3 and noted that U.S. patent law allows patents on inventions created by persons “using specific tools.”4

The Patent Office also noted “that AI gives rise to other questions for the patent system besides inventorship,” including questions pertaining to enablement. The Inventorship Guidance does not address these open questions, but the Office committed “to continue to engage with our stakeholders” and to “issu[e] guidance as appropriate.”5

In the near future, it may be appropriate for the Office to issue guidance on enabling AI-assisted inventions because of the black-box problem inherent in many AI systems.

The Black-Box Problem

Modern generative AI systems, like ChatGPT, have three primary components: one or more algorithms, training data, and a model. An algorithm in the AI context, as in other contexts, is a set of rules or procedures to be followed. The algorithm processes the training data in an effort to identify patterns, and the trained result is the model that people then use to make predictions.6 Training an AI model is iterative and can involve human feedback during the training process, with success depending on the quality (and quantity) of the training data as well as the ability of the human trainers.7
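
To make the relationship among these components concrete, the following is a minimal Python sketch using scikit-learn. The toy data and the choice of logistic regression are hypothetical and purely illustrative; the point is only the division of labor: an algorithm, run over training data, yields a model that is then queried for predictions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Training data: toy feature vectors and labels (hypothetical).
    X_train = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
    y_train = np.array([1, 0, 1, 0])

    # Algorithm: logistic regression's fitting procedure, a set of rules
    # for adjusting model parameters to fit the training data.
    algorithm = LogisticRegression()

    # Model: the trained artifact produced by running the algorithm over
    # the training data; this is what users then query for predictions.
    model = algorithm.fit(X_train, y_train)

    print(model.predict([[0.3, 0.7]]))  # e.g., [1]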

So-called “black box” AI systems are typically those where the internal workings of any of the three primary AI components are hidden. But even when each component of an AI system is fully visible, the system might still be characterized as a black box, because “researchers don’t fully understand how machine-learning algorithms, particularly deep-learning algorithms, operate.”8 Because the black-box problem cannot be eliminated for some AI systems, there is a real possibility that an AI system will not be reproducible by others, especially from the limited description that can be expected in a patent specification.
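
The reproducibility concern is easy to illustrate. In the following sketch (which uses scikit-learn’s small MLPClassifier on toy data as a stand-in for a far more complex system), two runs of the same algorithm on the same training data, differing only in their random initialization, produce models with different learned weights:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Toy XOR-style training data (hypothetical).
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0, 1, 1, 0])

    # Same algorithm, same data; the only difference is the random seed
    # used to initialize the network's weights.
    m1 = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000,
                       random_state=1).fit(X, y)
    m2 = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000,
                       random_state=2).fit(X, y)

    # The learned weights differ, so the two "identical" training runs
    # yield different models.
    print(np.allclose(m1.coefs_[0], m2.coefs_[0]))  # prints False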

That residual black-box problem complicates the Patent Office’s tool analogy. Most tools, even complicated ones, are knowable and reproducible. If, however, an invention is made with the assistance of an AI system that cannot be reproduced, that irreproducibility creates patent-enablement problems. Without being able to recreate a customized tool (in this hypothetical, the AI system) that is needed to make the invention, a person of ordinary skill in the relevant art could not make the invention from the specification. The specification would therefore not be enabling, and the patent would be invalid.

Overcoming the black-box problem will require reducing the problem as much as possible, accepting the problem and otherwise compensating for it, or some combination of the two.

Reducing the Problem – The Glass-Box Approach

One view of AI systems is that the black-box problem arose because of “a widespread belief that the most accurate models for any given data science problem must be inherently uninterpretable and complicated.”9 Some argue that the “belief that accuracy must be sacrificed for interpretability is inaccurate,” and that AI systems can be “constrained” to “provide a better understanding of how predictions are made.”10 Systems that are explainable and understandable to humans are sometimes called “glass box” AI systems.
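
As a concrete (and deliberately simplified) illustration of the glass-box idea, consider the following Python sketch using scikit-learn. The data and feature names are hypothetical; the point is that a deliberately constrained model, here a shallow decision tree, exposes its complete decision logic for human inspection:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical data: two features and a binary label.
    X = np.array([[25, 40000], [35, 60000], [45, 80000],
                  [20, 20000], [50, 90000], [30, 30000]])
    y = np.array([0, 1, 1, 0, 1, 0])

    # Constraining the tree's depth trades some flexibility for
    # interpretability: the whole model fits on a few printed lines.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Every prediction the model can make follows from these visible rules.
    print(export_text(tree, feature_names=["age", "income"]))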

When an inventor is assisted by a fully explainable and understandable glass-box AI system, satisfying the enablement requirement for any resulting invention should be possible. There may be circumstances, however, where developing a fully transparent glass-box AI system is not possible or desirable. One such circumstance may be where the cost of “constrain[ing]” an AI system to make it more understandable is prohibitive. Another may be where the algorithm, the training data, and/or the model are proprietary in some way. In these cases, the glass-box approach would not be a complete solution, and enablement would remain a problem.

Accepting the Problem – The Lundak Approach

The problem of enabling a specification that uses hard-to-reproduce tools to arrive at an invention is not unique to AI systems. The Patent Office has dealt with this before in the biotech context. “When an invention relates to a new biological material, the material may not be reproducible even when detailed procedures and a complete taxonomic description are included in the specification.”11 That reproducibility problem was understood to cause a patent-enablement problem. Thus, the “Patent Office established the requirement that physical samples of such materials be made available to the public, as a condition of the patent grant.”12 Solving the reproducibility problem in that way also solved the patent-enablement problem.

An analogous requirement for AI systems could likewise solve the reproducibility and enablement problems for AI-assisted inventions.13 Of course, “physical samples” of AI systems would not be possible, but portions of an AI system (or portions of its components) could be “deposited” in a location available for interested persons to download. The Patent Office does not presently have such a requirement, but perhaps a patent applicant could suggest such an approach in the right circumstances.
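
What such a “deposit” might look like in practice is an open question, but a minimal sketch is easy to imagine: serialize the trained artifact and publish a cryptographic digest of the file so that anyone who later downloads the deposited copy can verify it is the one referenced in the specification. (The file name and the stand-in “model” below are hypothetical.)

    import hashlib
    import pickle

    # Stand-in for a real trained model (hypothetical values).
    model = {"weights": [0.1, -0.4, 2.3], "bias": 0.7}

    # Serialize the artifact to a file that could be deposited for download.
    with open("deposited_model.pkl", "wb") as f:
        pickle.dump(model, f)

    # Publish a SHA-256 digest so a downloader can verify that the copy
    # they retrieve matches the artifact referenced in the specification.
    with open("deposited_model.pkl", "rb") as f:
        print(hashlib.sha256(f.read()).hexdigest())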

Conclusion

There is no doubt we will see patent applications filed and patents issued on AI-assisted inventions, and no doubt we will see enablement problems with them. For patent practitioners, it makes sense to consider proactively the reproducibility problems an AI system may have before they become enablement problems during prosecution or, worse, during litigation.

Where possible, inventors should take the glass-box approach by constraining AI systems to be more explainable and more understandable. Where that is not possible, inventors (and patent practitioners) should consider taking a modified Lundak approach with the Patent Office, even on an ad hoc basis, while these enablement issues are still percolating through the patent system.

1 Inventorship Guidance for AI-Assisted Inventions, 89 Fed. Reg. 10043 (Feb. 13, 2024); see also https://www.uspto.gov/subscription-center/2024/uspto-issues-inventorship-guidance-and-examples-ai-assisted-inventions.
2 89 Fed. Reg. at 10046.
3 89 Fed. Reg. at 10045.
4 89 Fed. Reg. at 10046.
5 89 Fed. Reg. at 10045.
6 S. Bagchi, Scientific American, Why We Need to See Inside AI’s Black Box (May 6, 2023), https://www.scientificamerican.com/article/why-we-need-to-see-inside-ais-black-box.
7 M. Chen, Oracle.com, What Is AI Model Training & Why Is It Important? (Dec. 6, 2023), https://www.oracle.com/artificial-intelligence/ai-model-training.
8 S. Bagchi, Scientific American, Why We Need to See Inside AI’s Black Box (May 6, 2023), https://www.scientificamerican.com/article/why-we-need-to-see-inside-ais-black-box.
9 C. Rudin & J. Radin, Harvard Data Science Rev., Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition (Nov. 22, 2019), https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8.
10 C. Rudin & J. Radin, Harvard Data Science Rev., Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition (Nov. 22, 2019), https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8.
11 In re Lundak, 773 F.2d 1216, 1220 (Fed. Cir. 1985).
12 In re Lundak, 773 F.2d 1216, 1220–21 (Fed. Cir. 1985).
13 We are not the first persons to draw the analogy between AI systems and biological materials. See D.L. Burk, AI Patents and the Self-Assembling Machine, 105 MINN. L. REV. HEADNOTES 301, 312–14 (2021), https://www.researchgate.net/publication/351591454_AI_Patents_and_the_Self-Assembling_Machine.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Haug Partners LLP | Attorney Advertising
