As artificial intelligence (AI) becomes a fixture across a broad range of technological fields, the technology continues to evolve at a rapid rate. This highly compressed development lifecycle for AI products presents novel challenges when preparing robust and forward-thinking patent applications. In particular, generative AI (e.g., large language models) is becoming increasingly sophisticated and presents a unique challenge because the internal operations of these models are opaque. This lack of transparency complicates both the drafting and examination of patent applications involving this technology because these models may perform inexplicably and unpredictably.
In traditional approaches, it is common for AI to be treated as a predictable art due to its computer-based nature. However, as this technology continues to rapidly evolve, it is possible that AI models will become even more unpredictable. For reasons discussed below, this article proposes three tips for preparing AI applications as if AI were an “unpredictable art”: (1) consider treating training data as part of the invention, (2) probe for alternative embodiments during invention disclosure calls, and (3) draft applications that focus not only on the application of AI models but also on the architecture of the models. Adhering to the stricter written description and enablement requirements imposed on the unpredictable arts can result in stronger AI patents that are better prepared for global enforcement.
Predictable vs. Unpredictable Arts
The predictable arts describe technology that has relatively predictable outcomes when modified, which means a skilled person in the art can generally “anticipate the effect of a change within the subject matter to which the claimed invention pertains.” Manual of Patent Examining Procedure (MPEP) 2164.03. Inventions typically considered to be predictable arts include mechanical devices, software technology, and electrical circuits. In contrast, the unpredictable arts describe complex systems and processes where outcomes can be difficult to predict such that “one skilled in the art cannot readily anticipate the effect of a change within the subject matter.” Id. This unpredictability is typically found in biotechnology, pharmaceuticals, and chemistry. For example, a small change in a chemical compound’s structure could lead to drastically different and unforeseen properties.
Just as the effects of chemical compounds or biological entities are often hard to predict, the results generated by ever-evolving and complex AI systems can also be difficult to foresee. This is because slight changes in training data or algorithm parameters can produce vastly different outputs from an AI system. The behavior of AI systems can also change over time as they learn from new data, making their future performance unpredictable. Additionally, understanding the intricate workings of some AI systems, especially deep learning models, can be challenging as the “black box” nature of AI models can make it difficult to describe the invention in the detailed manner typically required for patents.
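This sensitivity can be seen even in a deliberately simplified sketch (a toy illustration only, not any particular production model): fitting the same one-parameter model to the same data, but with a slightly different learning rate, yields either convergence or divergence.

```python
# Toy gradient-descent example: a single-parameter model fit to one data point.
# A small change in one hyperparameter (the learning rate) flips the outcome
# from near-perfect convergence to divergence.

def train(learning_rate, steps=50):
    """Minimize the squared error (w - 1)^2, i.e., fit w to the target 1.0."""
    w = 0.0
    for _ in range(steps):
        gradient = 2 * (w - 1.0)  # derivative of (w - 1)^2 with respect to w
        w -= learning_rate * gradient
    return w

w_stable = train(0.1)    # each step shrinks the error; w approaches 1
w_unstable = train(1.1)  # each step overshoots and grows the error

print(f"lr=0.1 -> w = {w_stable:.6f}")
print(f"lr=1.1 -> |w - 1| = {abs(w_unstable - 1):.1f}")
```

A tenfold change in a single hyperparameter produces not a tenfold change in the output but a qualitatively different system, which is the kind of non-linearity that makes the effect of a modification hard to anticipate.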
AI and Unpredictability
Various experts in the field have forecast that AI, with reasoning and generation power, will become even less predictable. In the context of agentic AI, Ilya Sutskever, the co-founder and former chief scientist of OpenAI, stated that as AI agents gain a deeper understanding and the ability to reason through problems like humans, their outcomes could become non-obvious.[1] In 2024, Forbes published an article discussing whether we can maintain control over AI’s growing autonomy, especially as emergent behaviors become more prevalent. Not only are AI tools hallucinating, providing incorrect answers, and refusing to follow instructions, but they have also begun to circumvent shutdown commands, rewrite their own algorithms to extend runtime, and even perform malicious behaviors to achieve their own goals.[2]
In light of this unpredictability and emergent behavior, the USPTO requested public comment on “[h]ow can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?”[3] In response, the Computer and Communications Industry Association (CCIA) stated that it is crucial that AI be treated as an unpredictable art so that any AI specification enables the full scope of the claims, because AI systems can produce wildly varying outputs from minor changes in training data, algorithms, and architecture. Thus, the CCIA argues that the claims and written description of AI inventions must be detailed enough to foreclose any implication that novel AI architectures arising after the filing of the application could perform the claimed function.
While the USPTO did not give a definitive comment on whether AI should be considered a predictable art, it reminded examiners to consider the Wands factors: the breadth of the claims, the nature of the invention, the state of the prior art, the level of ordinary skill in the art, the level of predictability in the art, the amount of direction provided by the inventor, the existence of working examples, and the quantity of experimentation needed. Practitioners must weigh the predictability of AI inventions in order to determine the level of disclosure necessary to meet the written description requirement, as the USPTO’s written description standards for AI applications may evolve to become more stringent.
AI Applications in China, Japan, and Europe
This focus on stricter written description and enablement requirements may become more prevalent in the United States as other jurisdictions have already begun discussing the written description and enablement issues regarding the unpredictability of AI inventions.
- China: In response to an appeal of a rejected AI-based patent application, the CNIPA used its decision to provide more explicit guidance on its current standards for sufficient disclosure of AI patents. The CNIPA called for increased disclosure requirements due to the complexity and black-box nature of AI models. It further stated that, because an AI model depends on its training data, a skilled person cannot recreate the described model without a clear description of that data. Finding AI models fairly unpredictable due to their dynamic nature, the CNIPA concluded that stricter description requirements for AI inventions are necessary. As such, the CNIPA set the standard that the specification must clearly define the meaning of the data, provide specifics of the employed model, and describe any training and optimization methods used to reduce the AI model to practice.[4]
- Japan: The JPO released case examples for AI-related technologies that emphasize the description requirement for AI inventions. The JPO views AI as a synthesis of correlations and predictions and recommends that AI applications provide the correlations for the neural network to utilize, such as “smiling means happy” and “frowning means sad,” and include a detailed description of the inputs and outputs of the AI model. The JPO also concluded that an application should show test results and proof of validation of the AI model to verify that the methods described in the application are accurate.[5]
- Europe: In case T 1669/21, the EPO Board of Appeal handed down a decision that gave practitioners insight into disclosure requirements for AI inventions at the EPO. First, the Board determined that describing a general computational model is not sufficient description to claim a machine learning model. Second, the Board determined that the patent lacked crucial specifics for a skilled person to implement the invention because it offered no guidance on selecting specific model architectures. Third, the Board found that the patent lacked concrete examples demonstrating successful prediction, so it was not apparent which specific parameters would enable implementation of the claimed model. Fourth, the Board determined that the patent did not specify the scope and variation needed in the training data, yet providing a limited dataset would have contradicted the patent’s broad claims. Finally, the Board determined that the patent relied on the black-box nature of the computational model without providing sufficient details for implementation, which would require a skilled person to perform undue experimentation in order to carry out the invention.
These developments from around the world show that detailed disclosure of AI inventions is crucial: practitioners cannot merely rely upon a general understanding of AI models. At least in these jurisdictions, a clear description of the AI model and how it is implemented, including its architecture, appears necessary to improve the chances of patentability. Other helpful features include concrete examples of specific variables of the AI model, descriptions of how certain parameters are selected, and working examples of the AI model and specific optimization techniques.
The Future of AI Application Drafting
U.S. practitioners should place increased emphasis on enablement and written description to future-proof their AI patent applications. Recognizing the unpredictable nature of AI will lead to stronger patents that can withstand future challenges based on the legal requirements of enablement and written description.
In practice, at the USPTO, while the knowledge of one skilled in the art is relevant, enough detail should be provided to enable and describe the novel aspects of AI patents rather than simply relying upon discussion of a conventional model. This can include focusing on both the application and the architecture of AI models by describing detailed model architectures and treating training methods, datasets, and outcomes as part of the invention, rather than using shorthand descriptions (e.g., “AI model,” “machine learning model,” or “trained model”). This focus on enablement and written description at the drafting stage will also lead to more focused and insightful conversations with inventors on topics such as whether the described AI model is a novel aspect of the invention or simply a tool, the specificity and sensitivity of the model, multiple examples of input and output data, alternative embodiments, and any experimental data.
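One way to operationalize this during an invention disclosure call is a structured checklist of the details the jurisdictions above have asked for. The sketch below is purely illustrative: every field name and example value is a hypothetical assumption, not a required format or real inventor data.

```python
# Hypothetical checklist of the architecture and training details a drafter
# might capture from an inventor, instead of the shorthand "trained model."
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    architecture: str       # concrete structure, not just "a neural network"
    layer_sizes: list       # specific dimensions of each layer
    training_data: str      # meaning and provenance of the data (CNIPA focus)
    training_method: str    # optimizer, loss, stopping criteria
    validation: str         # test results / proof of validation (JPO focus)
    alternatives: list = field(default_factory=list)  # alternative embodiments

disclosure = ModelDisclosure(
    architecture="3-layer feedforward network with ReLU activations",
    layer_sizes=[128, 64, 2],
    training_data="labeled sensor readings; label meaning: fault / no fault",
    training_method="Adam optimizer, cross-entropy loss, early stopping",
    validation="accuracy reported on a held-out test set",
    alternatives=["recurrent variant for time-series input"],
)
print(disclosure.architecture)
```

A checklist like this makes gaps visible before filing: an empty `alternatives` list, for example, flags a follow-up question for the inventor rather than a silent omission in the specification.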
In conclusion, as with the unpredictable arts, enablement and written description are just as important to consider as patent eligibility when drafting AI applications. As AI evolves, we may see stricter description requirements in patent offices across the world, and anticipating this need for higher levels of disclosure will lead to strong, future-proof patents.
[1] https://www.reuters.com/technology/artificial-intelligence/ai-with-reasoning-power-will-be-less-predictable-ilya-sutskever-says-2024-12-14/
[2] https://www.forbes.com/sites/emilsayegh/2024/12/17/the-rise-of-unpredictable-ai-will-ai-test-human-control-in-2025/
[3] https://www.uspto.gov/sites/default/files/documents/CCIA_RFC-84-FR-44889.pdf
[4] https://chinapatentstrategy.com/ai-is-magical-but-not-magic-be-specific-in-your-ai-patents/
[5] https://ipwatchdog.com/2019/02/28/jpo-examples-on-artificial-intelligence-offer-guidance-for-other-offices/id=106835/