AI Works Do Not “Compete” with Works of Authorship
First published 10/8/25 on The Illusion of More
Many arguments advocating the view that AI training does not conflict with copyright rights share a common fallacy, namely that AI outputs represent “competitive” works that copyright law was intended to promote. This error appears in Judge Alsup’s opinion in Bartz et al. v. Anthropic AI, in a report published by AI Progress, and in an amicus brief filed by three law professors in Thomson Reuters v. Ross Intelligence.
The competition fallacy rejects the notion of “market dilution,” which may be a novel, but not unfounded, consideration under factor four of the fair use analysis. Traditionally, the fourth factor inquiry asks whether the particular use of the work(s) in suit harms, or could harm, their market value. The question does not ordinarily weigh harm to, say, all sound recordings by virtue of having scraped all sound recordings to produce a machine that makes different sound recordings. Because the dilution principle would strongly disfavor AI developers, those who oppose it seek to portray AI outputs as the “competitive” works envisioned by copyright law.
As a threshold principle, although authors may be said to be in “perfect competition” or non-competition with one another, copyright’s purpose is not to promote competition but to promote as much diverse expression as authors may be inspired to create. Notwithstanding the use of AI as a tool of human expression, it is an error to refer to AI outputs in general as “works of expression,” “works of authorship,” or any other term of art that seeks to portray purely machine-made outputs as an intended consequence of copyright.
The inapt use of these terms perhaps indicates a hope that courts won’t notice the omission of the human authorship doctrine. But so long as that doctrine is affirmed (and it should be), we should refer to AI outputs only by other terms, whether the pejorative “slop” or the neutral “material,” in order to place those outputs in their proper context under copyright law. As argued here several times, if the material at issue is not protected by copyright because it is not made by a human, then its existence cannot be described as a “work” incentivized by copyright.
Judge Alsup’s Error in Bartz et al. v. Anthropic AI
Although the Bartz case itself is settled and will not be appealed, Judge Alsup’s reference to “competition” will probably resurface in one or more of the many active AI training lawsuits. In his opinion, he wrote…
…Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act.
In addition to buying into the anthropomorphic comparison between machine learning and human education, Judge Alsup set off an explosion of criticism with his hypothetical “explosion of competing works,” including from Judge Chhabria of the same district, ruling in Kadrey et al. v. Meta. Chhabria’s response states…
…when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.
I agree with this critique, though even here I would prefer not to see the word “competing.” Competition is generally creative, whereas market dilution is generally destructive and comes closer to describing GAI’s effect on works of authorship and on copyright law. In fact, Judge Chhabria opines in Kadrey that, “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service.” Signals like this, indicating that the market dilution theory has a legal foundation, are why I believe its critics rely on the competition fallacy.
The Report by AI Progress
The report titled AI Models: Addressing Misconceptions About Training and Copyright, written by Anna Chauvet and Karthik Kumar, PhD, engages in the competition fallacy, albeit in a context I find baffling. I say this because the report first presents an in-depth technical argument as to why AI training does not entail infringing conduct but then devotes equal effort to arguing that model training is fair use.
If this document were a legal response in court, failing to present a fair use defense would likely be malpractice; but as an experts’ report, the fair use discussion casts doubt on the scientific rationale for non-infringement. Where there is truly no basis for a claim of infringement, there is no reason to mention fair use. Yet, in rejecting any consideration of market dilution under factor four, the authors of the report reprise the competition fallacy thus:
If a new work does not use protected expression, it does not matter whether it competes in the same genre and market as prior works. An increase in competitive creative works is precisely the growth of creative expression that the Copyright Act was intended to promote.
Notably, the authors rely on traditional fourth factor jurisprudence in the first sentence but seek to foreclose any consideration of AI’s novelty by mischaracterizing its outputs in the second. The authors err by referring to the mass outputs of a GAI as “creative works” at all, let alone as the type of works the Copyright Act was intended to promote. As stated in an earlier post, I believe the courts should recognize that GAI lacks any technological precedent and, therefore, should not hesitate to plow new ground by treating market dilution as a destructive consequence worthy of serious consideration.
Further, it is concerning when any party implies that the AI outputs do not matter in considering whether the training process is fair use. This is nonsensical and inconsistent with case law. The courts absolutely consider the specific utility of technologies that potentially infringe copyright rights, and it is impossible to weigh the purpose or market effect of an AI product without considering its outputs. After all, the outputs are its purpose.
The Professors’ Brief in Thomson Reuters v. Ross
Law professors Brian L. Frye, Jess Miers, and Mateusz Blaszczyk filed a brief in Thomson Reuters v. Ross, principally to argue that the headnotes copied from Westlaw are not proper subjects of copyright. Here, I will set that question aside; frankly, whether the courts find the headnotes sufficiently original for protection is not particularly relevant to the challenges posed by AI.
In the latter part of the brief, though, the professors reprise the competition fallacy, stating, “The problem with the dilution theory is that producing similar, but noninfringing works is precisely the kind of competition copyright is supposed to promote.” Again, this statement is legally correct but factually misleading. If the professors want to argue, as they do, that the Federal Trade Commission et al. err by advancing a market dilution theory based on unfair competition law, perhaps that debate is worth having. But general statements that AI outputs, as non-works of authorship, inherently fulfill the intent of copyright law are flatly wrong. The brief continues…
The Act seeks to promote the creation of original works of authorship, not to protect authors against competition. Indeed, it is axiomatic that the purpose of copyright is to benefit the public by encouraging marginal authors to produce and distribute additional works of authorship.
Copyright does not protect authors against informal competition with one another, but as stated, that has nothing to do with “competing” with machines that output non-works by non-authors. As for the reference to marginal authors, this is both misstated and misguided. First, the Copyright Act is agnostic as to which authors become popular and which ones remain “marginal.” Second, as is always the case, it is the independent authors who are more likely to be marginalized into oblivion by unregulated, unethical, and unlicensed AI products.
There are several briefs filed in Thomson Reuters by familiar names in anti-copyright circles, and, no doubt, they all repeat some variation on the competition fallacy. But copyright law exists to incentivize human beings to devote time, talent, and energy to the production of creative and informative works. Copyright does not exist to mass-produce material, content, slop, or stuff by any other name that lacks creative expression by humans.
Mistakenly portraying the outputs of GAI as generally “competitive” with works of authorship produces a cascade of doctrinal errors that swirl in eddies of circular logic around the pillar of the fourth fair use factor. The courts should decline to be dragged into that vortex and, as Judge Chhabria at least implied, they should be willing to consider the diluted streams of creativity that can result from wanton use of AI.