The Other ‘Maybe’ Authors: Copyright Ownership for AI Trainers

BakerHostetler

Discussions held over the past several months regarding authorship of AI-generated works have suffered from at least two things—1) an outsized focus on whether the users of commercially available generative AI (GAI) can own the content returned to them by the AI software, and 2) an outsized focus by the Copyright Office on a need to predict the results of AI and purposely control every detail of the output. This creates a major problem. There are other types of AI besides GAI, and corporations and other stakeholders throughout multiple industries have been using them for years.

Although the Copyright Office’s official guidance on whether you can be the author of AI output is a solid “maybe?,” the Office’s recent registrability decision regarding the graphic novel created by Kris Kashtanova creates nearly insurmountable, unnecessary hurdles to all kinds of AI authorship. The Office focuses far too much on predictability, and imposes a “control” requirement that is not in line with jurisprudence regarding what is meant by “control.” As we’ll see, those stakeholders that have collected datasets and trained models (including convolutional neural networks) through supervised learning techniques ought to own the output of those models—to the extent that output is otherwise copyrightable, of course. Those stakeholders “superintended the arrangements” to create the output, and were the “effective cause” of the output. Consistent with the Copyright Office’s guidance in the Compendium, Third Edition, there is often “creative input or intervention from a human author” at multiple steps in the process.

The Copyright Office’s current position is a classic “hard cases make bad law” situation. The discomfort many have with allowing users of GAI tools to own the output ought not wipe out the tremendous value of those who have been using AI for a decade or more to drive innovation in their industries. There are well-worn doctrines in other authorship contexts that supply a decent approximation of what we’re dealing with in AI authorship. For example, the production of a movie involves many hands, and many unpredictable creative expressions. Yet we have no problem recognizing a production studio, which may not have contributed or exhibited one jot of creative expression, as the owner of the output. Combine that “mastermind/dominant” author doctrine with the run of cases discussing ownership of software outputs (i.e., the “lion’s share” cases), and we see that the notion of what an “author” even is turns out to be highly nuanced.

Paying proper attention to the notion of “superintending the arrangements” combined with mastermind/dominant and lion’s share authorship, we begin to arrive at a doctrine that rewards innovation and creativity in using new tools. The authorship claim is strong for those who have used AI at both the “back end” and the “front end,” i.e., those who have selected the data and trained the models to yield certain classes of output. Whether the same doctrine might yield a different result regarding the users of GAI…we’ll put a pin in that for a different time, but this post may get you thinking.

We have a lot of ground to cover here, so bear with me.

The Surprise Need for Predictability

Imagine this with me: Jackson Pollock sits in his studio, preparing to create the latest output of his action painting technique. He has a rough concept of what the technique will yield – he selects paints, colors, the size of the canvas. But the technique injects considerable randomness into the production of the painting. Pollock pulls together the paints, the canvas and the technique, but he has no idea what the final painting will look like. Pollock said, “When I’m painting, I’m not aware of what I’m doing. It’s only after a get-acquainted period that I see what I’ve been about,” and “It doesn’t make much difference how the paint is put on as long as something has been said.” Imagine that Pollock, to inject further chaos into the painting’s production, places an oscillating fan between him and the canvas, adding an additional layer of randomness. Would anyone say that, because of this randomization in the production, Pollock is not the author of the resulting painting?

Jack Kerouac sets about to write a novel, deploying his automatic writing technique. The technique draws considerable criticism; Truman Capote famously said, “[T]hat’s not writing, that’s typing.” Kerouac pulls together the typewriter, the continuous roll of paper, his life experiences, his perspective and some hip turns of phrase, but he has no idea what the final story will look like. Would anyone say that, because of this lack of control in the production, Kerouac is not the author of the resulting story?

Prediction and control of output have taken an outsized role in recent registrability and policy statements from the U.S. Copyright Office (the Office) when it comes to copyright ownership of artificial intelligence (AI) output. So, do the users of generative AI (GAI) software programs own the output of those programs, i.e., what those programs create? The Office has answered that question with a resounding “maybe.” In its recent “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence,” published March 16, 2023, the Office noted, “The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.”

The Office’s Guidance was “prompted,” pun fully intended, at least in part by its decision on the copyrightability of Kashtanova’s “Zarya of the Dawn,” a graphic novel created using the popular consumer AI product Midjourney. ChatGPT, Midjourney, DALL-E and other consumer-facing programs have dominated headlines, and so many bits have since been spilled in legal blogs postulating how users of GAI might be able to assert copyright ownership in the output. Consumer-facing GAI is now driving much of AI policymaking from the Office and other agencies, and that may not be a good thing. Although the Office’s Guidance document suggests the ownership answer is “maybe,” its Zarya decision places hurdles that may be insurmountable. And whether or not the Office’s policy decisions make sense vis-à-vis users of commercially available GAI, it may not make sense to apply that same policy to the trainers of AI. There are many stakeholders that have spent the past 10 or more years investing in and dedicating resources to training machine learning (ML) models to produce output that they may now not own because of the Office’s concerns with prediction and control.

Not All AI Is GAI

Midjourney and ChatGPT are exciting, and they produce some beautiful, bizarre and compelling output, but we should remember that copyright also protects boring, boring stuff. Technical drawings. Databases. Compilations of financial and health data. Forms and process documents (so long as they convey information). Charts. And source code. AI has been used for a decade or more to create copyrightable material across many industries that, although lacking the dazzle of GAI, is still quite important to the companies that have been training their own ML models.  

For example, Oracle’s chief technology officer, Larry Ellison, highlighted during the company’s third-quarter earnings call that, yes, indeed, there are other kinds of AI besides chatbots and art generators. AI being developed by Oracle is “reducing hospital readmissions at MD Anderson by 30 percent[.]” And as there are other kinds of AI, there are other kinds of ML models besides the large language models being used by the consumer-facing programs. Many of these models operate on considerably smaller datasets, deploying supervised learning techniques with well-defined parameters and yielding results that, while not entirely predictable, generally fall within the patterns the engineers are seeking. These models improve business processes, enterprise analytics and customer outcomes.

For example, the use of AI to reduce hospital readmissions by 30 percent probably went something like this: Someone collected enormous amounts of patient data and created an anonymized dataset; they selected a suite of analytical software to detect patterns in the correlation between certain disease states, the treatment history of those diseases and care outcomes; they trained ML models through supervised learning (including convolutional neural networks) to detect patterns that were otherwise being missed; and the software yielded new care pathways and guidelines to guide hospital healthcare providers.
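To make the human role at each of those steps concrete, here is a minimal, hypothetical sketch of such a supervised learning pipeline in Python using scikit-learn. The file name, column names, model choice and parameter values are all illustrative assumptions, not details of the MD Anderson work:

    # Hypothetical pipeline: training a supervised model to flag readmission risk.
    # The file name, columns and parameters are illustrative, not from any real system.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Step 1: a human-curated, anonymized dataset; choosing which signals to
    # include is itself a series of judgment calls.
    df = pd.read_csv("anonymized_patient_records.csv")
    features = ["age", "num_prior_admissions", "length_of_stay",
                "num_medications", "chronic_condition_count"]
    X, y = df[features], df["readmitted_within_30_days"]

    # Step 2: hold out data so the patterns the model finds can be validated.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Step 3: engineers select the model family and set its parameters, the
    # "well-defined parameters" of supervised learning described above.
    model = GradientBoostingClassifier(
        n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X_train, y_train)

    # Step 4: evaluate, then iterate on features and parameters until the
    # output is useful.
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

Each of those steps (selecting the data, setting the parameters, iterating on the results) is a point of human “creative input or intervention” of the kind the Compendium contemplates.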

Under the Copyright Office guidelines on machine-generated authorship in the Third Edition of the Compendium, first published in 2014, the output of this process would seem to be protectable: “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” U.S. COPYRIGHT OFFICE, COMPENDIUM OF U.S. COPYRIGHT OFFICE PRACTICES § 313.2 (3d ed. 2021). The Compendium therefore implies that if a human provides some measure of creative input (“any creative input or intervention”), that suffices to establish an authorship claim. Such input would be present at multiple steps in using data and AI to reduce hospital readmissions: selecting the data, exercising creative judgment in setting parameters and value sets in the data models, and adjusting and modifying the resulting rubrics and expressive forms. But it seems the Office has shifted away from this standard, imposing a heightened standard in the Kashtanova decision in two ways. First, it required control, not just “input or intervention.” Second, it implied that even this control would be fatally undermined by the introduction of unpredictability, of some unspecified degree or measure. Under this heightened standard, it seems likely that the Copyright Office would consider the output of these investments in care pathways, and the output of other similarly situated stakeholders, to be in the public domain.

Kashtanova and Burrow-Giles

The Copyright Office, in addressing the registrability of “Zarya of the Dawn,” found that the output of Midjourney, and presumably other GAI tools like it, was not protectable by copyright, effectively rendering those images public domain works. In doing so, the Office was hyperfocused on the ability of GAI users to predict and control the output (emphasis added throughout):

  • “Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.”
  • “While additional prompts applied to one of these initial images can influence the subsequent images, the process is not controlled by the user because it is not possible to predict what Midjourney will create ahead of time.”
  • “Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the ‘authors’ for copyright purposes of the images the technology generates.”
  • “The information in the prompt may ‘influence’ generated images, but prompt text does not dictate a specific result.”
  • “The fact that Midjourney’s specific output cannot be predicted by users makes Midjourney different for copyright purposes than other tools used by artists.”
  • “Rather than a tool that [ ] Kashtanova controlled and guided to reach her (sic) desired image, Midjourney generates images in an unpredictable way.”

In crafting the Kashtanova decision, the Office relied heavily on Burrow-Giles Lithographic Co. v. Sarony, a U.S. Supreme Court decision from 1884 that dealt with the first machine-generated work – photographs. At the time, there was raging debate over whether photographs should be subject to any copyright protection at all—“It is insisted in argument, that a photograph[,] being a reproduction on paper of the exact features of some natural object or of some person, is not a writing of which the producer is the author.” The Copyright Office noted that the Court answered that question by holding that “photographs were protected by copyright because they were ‘representatives of original intellectual conceptions of the author,’” defining “author” as “he to whom anything owes its origin; originator; maker; one who completes a work of science or literature.” However, Burrow-Giles had a good bit more to say on authorship, which the Office failed to address in the Kashtanova decision. Burrow-Giles relied in part on an 1883 decision from the U.K., Nottage v. Jackson, where, in upholding copyright for photographs under U.K. law, the court “said, in regard to who was the author, ‘The nearest I can come to, is that it is the person who effectively is as near as he can be, the cause of the picture which is produced, that is, the person who has superintended the arrangement, who has actually formed the picture by putting the persons in position, and arranging the place where the people are to be – the man who is the effective cause of that.’”

“Superintending” Authorship

The notion of “superintending the arrangement” and being the “effective cause” of copyrightable work using a machine are concepts echoed in that section of the U.K.’s copyright statute that extends protection to computer-generated works. “Computer-generated” is defined as “generated by computer in circumstances such that there is no human author of the work” (Section 178, Copyright, Designs and Patents Act (CDPA)). Section 9(3) of the CDPA provides that the author of a computer-generated work is deemed to be the person “by whom the arrangements necessary for the creation of the work are undertaken.” That, of course, leaves vexing questions as to who the person so undertaking the arrangements is, but at least there’s an owner in there somewhere.

But “superintending”-based authorship has firm roots in U.S. law as well. A difficult authorship determination is perennially raised with regard to the production of movies. Like AI-generated work, films are the result of a complex system of multiple contributions. “Filmmaking is a collaborative process typically involving artistic contributions from large numbers of people, including – in addition to producers, directors[] and screenwriters – actors, designers, cinematographers, camera operators[] and a host of skilled technical contributors.” 16 Casa Duse, LLC v. Merkin, 791 F.3d 247 (2d Cir. 2015). In the movie production context, the word “author” begins to lose its meaning. “The word ‘author’ is taken from the traditional activity of one person sitting at a desk with a pen and writing something for publication. It is relatively easy to apply the word ‘author’ to a novel. … But as the number of contributors grows and the work itself becomes less the product of one or two individuals who create it without much help, the word is harder to apply.” Aalmuhammed v. Lee, 202 F.3d 1227 (9th Cir. 2000).

The 9th Circuit solved this difficult problem of finding an author by looking to who “superintended the arrangements” of the movie at issue. “Burrow-Giles defines author as the person to whom the work owes its origin and who superintended the whole work, the ‘mastermind.’ In a movie, this definition, in the absence of a contract to the contrary, would generally limit authorship to someone at the top of the screen credits, sometimes the producer, sometimes the director, possibly the star or the screenwriter – someone who has artistic control.” “Control” here was not used in the sense of being able to predict the precise output, i.e., what each actor would say or do, what the lighting would look like at a given moment, or creative decisions that would be the work of a director, a cinematographer or other skilled technical contributors. Rather, it was in the sense of pulling together all these elements, superintending them and having ultimate decision-making authority for what stayed in and what stayed out of the movie.

So too in 16 Casa Duse, where the 2nd Circuit found that a director of a movie was not its author. Rather, it was the production company that was the author because of the more prosaic authority exercised by the company. “These factors – including decision[-]making authority, billing[] and written agreements with third parties – are also relevant to our dominant-author inquiry.” Even though the director exercised considerable creative control, it was the production company that superintended the whole of the work. “Casa Duse initiated the project; acquired the rights to the screenplay; selected the cast, crew and director; controlled the production schedule; and coordinated (or attempted to coordinate) the film’s publicity and release.”

Thus, whether called mastermind authorship, dominant authorship or superintending-based authorship, authorship can exist in the entity that has ultimate decision-making authority over what goes into and what stays out of a work, even if the content could not be perfectly predicted or planned at the outset. “First, an author ‘superintend[s]’ the work by exercising control. This will likely be a person ‘who has actually formed the picture by putting the persons in position[] and arranging the place where the people are to be[ – ]the man who is the effective cause of that,’ or ‘the inventive or mastermind’ who ‘creates[] or gives effect to the idea.’” Aalmuhammed, at 1234. Whether these theories of authorship should lead to a different result for users of GAI like Kashtanova is debatable. The trainers of AI, however, pull together the data, select the software involved, train the ML models and use the resulting output; that degree of control over the elements that yield copyrightable works ought to render them the authors, even if the precise output is unpredictable. As with Pollock and Kerouac, the unpredictability is a byproduct of the technique.

Randomness Alone Does Not Destroy Authorship

The presence of randomness, of unpredictability, standing alone should not kill off an authorship claim. Though not a popular theory, there are cases that stand for the notion that “accidental authorship” can be enough to support a claim of authorship. “A copyist’s bad eyesight or defective musculature, or a shock caused by a clap of thunder, may yield sufficiently distinguishable variations. Having hit upon such a variation unintentionally, the ‘author’ may adopt it as his and copyright it.” Alfred Bell & Co. v. Catalda Fine Arts, 191 F.2d 99, 105 (2d Cir. 1951). The entities that build complex ML models, especially supervised learning models, do a good deal more than “hit upon” their results accidentally, constantly fine-tuning until the probability machine that is ML yields useful results. But the accidental authorship cases do emphasize that the mere existence of randomness should not be enough to thwart authorship.

Lion’s Share Ownership in Software Output

Another authorship theory further supports these claims of ownership. “The Ninth Circuit recently acknowledged that some authorities ‘suggest that the copyright protection afforded a computer program may extend to the program’s output if the program “does the lion’s share of the work” in creating the output and the user’s role is so “marginal” that the output reflects the program’s contents.’” Rearden LLC v. Walt Disney Co., 293 F. Supp. 3d 963, 968 (N.D. Cal. 2018) (citing Design Data Corp. v. Unigate Enter., Inc., 847 F.3d 1169, 1173 (9th Cir. 2017)). Rearden and Design Data, together with Torah Soft Ltd. v. Drosnin, 136 F. Supp. 2d 276 (S.D.N.Y. 2001), dealt with challenging authorship disputes between the users of computer programs and the authors of the software itself. Although no ruling has definitively held that the author of a software program owns the program’s output (i.e., established how it is that the output would reflect the program’s contents), that potential avenue of authorship would seem well suited to ML. That is, the output is derived from the training the software receives, albeit with some degree of randomness involved as well.

“Control” Means Control of Instrumentalities

The “control” required by the Copyright Office in arriving at the Kashtanova registration decision does not appear to be the same control discussed in the relevant case law. The Office’s requirement of control in Kashtanova appears to mean predictability and direction of expression in the work. Aalmuhammed, 16 Casa Duse and other cases discuss control more in the sense of asserting dominance over various instrumentalities, whether that be people, equipment, location, etc. That is, “superintending the arrangements” of content creation, being the effective cause and having decision-making authority over the final output are the hallmarks of control, not absolute certainty as to what the output will yield. Even if users of GAI programs cannot be deemed the authors of GAI output, the same should not be true of the large class of stakeholders that have selected and obtained training corpora, selected numerous software programs to manage that data and perform analytics, created and refined ML models, and created valuable output that, if created solely by hand, would undoubtedly be subject to copyright protection. Those companies are clearly the masterminds, the dominant authors, the ones that superintended the arrangements and are the effective causes of the output. There are no “maybes” about that.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
