The judgment in Getty Images v. Stability AI reveals the limits not so much of existing legal provisions as of the language with which copyright law describes the phenomenon of creativity. The dispute between Getty and the creators of the Stable Diffusion model was not merely about the infringement of rights to photographs or trademarks, but about a more fundamental question: is the process of machine learning an act of copying, or rather an automated and large-scale form of inspiration?
Human Copying vs. Machine Learning
Copyright law has always rested on a binary distinction between copying and creation. A person who directly traces or reproduces an original painting may infringe copyright if they take substantial elements of the work, such as its composition, lighting, arrangement, or color palette. The same person, however, who merely looks at a work and paints their own, inspired yet distinct piece, acts within the bounds of so-called permissible inspiration.
Artificial intelligence does not fit within this framework. Stable Diffusion does not “look” at a single image and memorize its content; instead, it analyzes millions of images, decomposing them into statistical patterns and generalized relationships between pixels and textual descriptions. The result is not an archive of pictures but a model of visual structure, a set of mathematical weights.
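To make this concrete, the following minimal sketch (in Python, assuming the PyTorch library; the tiny network is a toy stand-in, not Stable Diffusion itself) shows what a trained model actually persists: named tensors of numerical weights, with no image data among them.

    import torch.nn as nn

    # A toy stand-in for a generative model. After training, the only thing
    # that persists is this table of numerical parameters ("weights");
    # no pixel from any training image is stored as such.
    model = nn.Sequential(
        nn.Linear(512, 1024),
        nn.ReLU(),
        nn.Linear(1024, 3 * 64 * 64),   # output the size of a 64x64 RGB image
    )

    # Inspect the model's contents: parameter names mapped to float tensors.
    for name, tensor in model.state_dict().items():
        print(name, tuple(tensor.shape), tensor.dtype)

    total = sum(t.numel() for t in model.parameters())
    print(f"stored values: {total:,} floating-point weights, zero images")

The same holds, at vastly greater scale, for the weights file of a real diffusion model: it is a list of numbers shaped by the training images, not a container of them.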
From a legal standpoint, this means there is no “copy,” since no material or digital reproduction of the works exists. Yet intuitively, it is difficult to escape the impression that this process functions as an equivalent of copying, for the AI uses others’ works to learn how to create.
The Problem of the “Invisible Copy”
Here arises a paradox: copying without copies, a notion at the heart of contemporary disputes surrounding AI. In the case of a human artist, a court assesses the similarity of the outcome: does the new image appropriate recognizable elements of the original? But for a machine, there is no “original” to refer back to. The model does not store Getty’s images but rather transformed vectors of their features.
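The phrase "transformed vectors of their features" can likewise be illustrated with a short, hypothetical sketch (again Python with PyTorch; the untrained random encoder merely stands in for a trained one such as CLIP). An image enters as thousands of pixel values and leaves as a short feature vector from which the original cannot be recovered pixel for pixel.

    import torch
    import torch.nn as nn

    # Stand-in encoder: real systems use trained encoders (e.g. CLIP);
    # this untrained projection only illustrates the shape of the mapping.
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

    image = torch.rand(1, 3, 64, 64)       # a 64x64 RGB image: 12,288 values
    with torch.no_grad():
        features = encoder(image)          # a 128-dimensional feature vector

    print(image.numel(), "pixel values in,", features.numel(), "feature values out")
    # The 128 numbers summarize statistical properties of the image;
    # the exact picture cannot be reconstructed from them.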
Consequently, as the court held, there can be no infringing copy. The machine does not remember; it knows what something looks like. From a legal standpoint, knowledge is not a copy. This is a subtle but decisive distinction.
Such reasoning leads us toward a new notion of an “indirect copy,” one that does not exist physically but whose functional outcome closely resembles copying. Copyright law, however, is ill-equipped to grasp this, having been designed for an era of mechanical reproduction rather than statistical modeling.
Creativity from the Perspective of AI
If we accept that AI does not copy but merely learns, then the training process would seem to fall within the bounds of permissible inspiration. Yet the difference in scale is crucial. A human can be inspired by a limited number of works, thereby developing a personal creative style. An AI model, on the other hand, absorbs and structurally applies features from hundreds of millions of works. The question then arises: does inspiration on such a scale become economically equivalent to copying?
In practice, this means transferring the classical notion of inspiration from the individual to the collective level. AI does not imitate a single author; it synthesizes the shared features of thousands, producing something that could be called a statistical style of visual culture. In this sense, generative AI does not so much copy works as learn creativity itself as a social phenomenon.
Law Operates Within Territorial Boundaries
Although the court dismissed Getty’s claims mainly for lack of jurisdiction (the model training took place outside the UK) and because certain allegations were withdrawn, the key issue is not procedural but one of substantive law. The reasoning indicates that a model which contains no copies of works cannot be considered an “article constituting a copy” within the meaning of copyright law.
The court also recognized that images generated by Stable Diffusion form a new category, so-called synthetic images. These are not photographs in the traditional sense, as they do not reproduce reality but simulate it. This concept may, in the future, define a new class of digital works generated autonomously by algorithms.
Conclusion
The Getty v. Stability judgment does not settle the legal status of generative AI. It does, however, demonstrate that existing copyright frameworks cannot fully capture a phenomenon grounded not in reproduction but in learning.
The traditional category of the “copy” becomes insufficient when the subject of protection is not a single reproduction but the complex learning process of an artificial intelligence system.
In the longer term, it may become necessary to define a new category, situated between inspiration and copying, encompassing machine-based processing of others’ works as a distinct form of creative exploitation. Only then will copyright law regain the capacity to respond to technologies that, while not copying in the material sense, draw upon the cultural corpus in structural and systemic ways.

