AI copyright litigation continues, and the total number of cases may peak in 2026. In 2025, we saw the earliest rulings on fair-use arguments about AI training in cases involving Meta and Anthropic. In 2026, courts will be asked to decide AI training cases involving OpenAI and Google, among others. A judicial consensus is developing that training a general-purpose AI model is highly transformative, a factor favoring a finding of fair use. But other issues remain the subject of sharp disagreement between courts, and 2026 is unlikely to bring final answers to copyright questions on AI training.
New cases, like Disney’s case against Midjourney, are beginning to shift the emphasis from the initial acquisition of training data, or the training process itself, to claims about the propensity of models to produce allegedly infringing outputs. Those cases require an individualized analysis of particular outputs, making them even less attractive candidates for class-action treatment than the training-related cases. And output cases raise even more complex questions about who is responsible for the allegedly infringing nature of the output: the company that trained the model, the company that designed the product that uses the model, the user who interacted with that service, all of the above, or none of the above? The complexity of this area highlights the importance of clearly allocating responsibility in commercial agreements for generative AI products and services.