AI: Artificial Intelligence or Authentic Infringement?

Written by Liberty Smith

Alongside copyright law, technology has historically aided progress in both science and art. [1] Recent advances in artificial intelligence (“AI”), however, have led to the creation, distribution, and widespread use of highly profitable AI models and programs built through extensive copyright infringement, litigation over which is already underway. [2] These programs threaten not only the livelihoods of artists and the fabric of copyright law but also the job security of professionals across every field. [3] Regardless of the legal outcome, Pandora’s AI box has been opened, and copyright lawyers will be tasked with managing its contents.

AI refers to technology that uses “machine learning” techniques to simulate human intelligence. [4] The most basic and most widely used AI systems are considered “reactive”: they complete specific tasks with high reliability and efficiency, but because they cannot learn from their experiences, they improve in function and accuracy only through deliberate reprogramming. [5] Slightly more advanced are “limited memory” AI systems, which are capable of “deep learning” and commonly power self-driving cars and virtual assistants. [6] This subset of machine learning imitates, to an extent, the human brain’s neural networks and allows the system to learn from its memory. [7]
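The difference between a fixed, reactive rule and a system that learns from experience can be sketched in a few lines of code. The example below is a hypothetical toy, not drawn from any of the systems discussed here: a single artificial neuron (the simplest building block of a neural network) learns a decision threshold from labeled examples that a reactive system would need a programmer to hard-code.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "reactive" rule is fixed by the programmer and never improves on its own:
def reactive_rule(x):
    return 1 if x > 0.5 else 0

# A learning system instead adjusts its own parameters from experience.
# Here a single artificial neuron learns the same threshold from examples.
X = rng.uniform(0, 1, 200)
y = (X > 0.5).astype(float)  # ground-truth labels for the training examples

w, b, lr = 0.0, 0.0, 0.5
sigmoid = lambda z: 1 / (1 + np.exp(-z))
for _ in range(2000):
    p = sigmoid(w * X + b)          # the neuron's current predictions
    w -= lr * ((p - y) * X).mean()  # gradient of the logistic loss w.r.t. w
    b -= lr * (p - y).mean()        # gradient of the logistic loss w.r.t. b

# After training, the neuron's decision boundary (-b/w) sits near x = 0.5,
# even though no one ever told it where the threshold was.
boundary = -b / w
```

The neuron recovers the threshold purely from examples; that capacity to improve from data, rather than from reprogramming, is what separates learning systems from reactive ones.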

Unprecedented advancements in machine learning have led to a recent explosion of “generative AI” programs, which use complex deep learning techniques to generate a variety of content in response to text prompts. [8] The most notorious is OpenAI’s “ChatGPT,” released on November 30, 2022, which possesses advanced language-processing abilities. [9] ChatGPT specializes in generating textual responses, documents, and lines of code. [10] OpenAI’s “DALL-E 2,” Stability AI’s “Stable Diffusion Model,” and Midjourney’s program of the same name all generate images and visual art in response to text prompts and can even create accurate images of real people. [11] Microsoft’s “VALL-E,” [12] an advanced audio simulator that can recreate a specific human voice from only three seconds of audio; Google’s “MusicLM,” [13] an AI text-to-music generator; and Synthesia’s AI video generator [14] are just three more examples of the AI revolution’s ability to change the landscape of creativity, and even our reality. [15] If that were not drastic enough, versions of these programs can be combined to create even more sophisticated and convincingly realistic outputs, leading to unreliable media and deepfakes. [16]

Generative AI programs utilize a range of techniques to achieve their optimal output. [17] The Stable Diffusion Model used in Stability AI’s DreamStudio program relies on what is widely regarded as the superior technique: diffusion. [18] The model deconstructs training images into noise and reconstructs them, teaching itself to recognize the latent structure of its training inputs. [19] Because generative AI programs can only generate content based on the content they have been trained on, offering more advanced outputs requires training on massive amounts of data. [20] During diffusion training, semantic labels and text are associated with concepts and features represented in the training images and are embedded as vectors of numbers whenever those features are present. [21] A separate network, the “hypernetwork,” further refines the model’s output, drawing on its memory of prior images to respond to text prompts with increasing accuracy over time. [22] DALL-E 2, Google Brain’s Imagen, and Midjourney all use versions of a diffusion model. [23]
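The deconstruction of images into noise can be sketched numerically. The toy code below is a minimal, hypothetical illustration of the forward diffusion process only; the 1-D “image,” schedule length, and noise values are assumptions chosen for illustration, not Stable Diffusion’s actual configuration. A real diffusion model is then trained to reverse this process, denoising step by step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D signal standing in for pixel data.
x0 = np.linspace(-1.0, 1.0, 16)

# Linear noise schedule: beta_t controls how much noise is mixed in at step t.
T = 100
betas = np.linspace(1e-4, 0.2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative signal-retention factor

def q_sample(x0, t, rng):
    """Forward diffusion: jump straight to step t using the closed form
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

# Early in the schedule the signal dominates; by the end it is nearly pure noise.
x_early = q_sample(x0, 5, rng)
x_late = q_sample(x0, T - 1, rng)
print(np.corrcoef(x0, x_early)[0, 1])  # high: the image's structure survives
print(alpha_bar[-1])                   # near zero: the signal is almost gone
```

The closed-form jump to any step t is what makes this training regime tractable: a model can be shown an arbitrarily noised example without simulating every intermediate step, and it learns the latent structure of its inputs by predicting what was removed.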

Another technique, the “generative adversarial network” or “GAN,” essentially replicates the same categorical discernment process babies use when learning word meanings. (ChatGPT itself rests on a different architecture, a transformer-based large language model.) For example, suppose a baby believes “book” refers to all objects which can be opened and are rectangular. Gradually, however, the baby learns that other objects fitting these criteria, such as cabinets and boxes, are not books, and this process continues until the baby has engaged in sufficient discrimination and can correctly distinguish the characteristics of a book. [24] GANs consist of two separate deep-learning networks designed to generate and refine data using a similar discriminatory process: one network generates candidate outputs while the other learns to tell them apart from real training data. [25]
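The two-network loop can be sketched in miniature. The code below is a deliberately simplified, hypothetical setup: the “generator” merely shifts random noise by a learnable offset, and the “discriminator” is a one-feature logistic classifier, rather than the deep networks real GANs use, but the alternating adversarial updates have the same structure.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Real data: samples clustered around 3.0 (the "books" to be learned).
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

theta = 0.0      # generator parameter: shifts noise by a learnable offset
w, b = 0.1, 0.0  # discriminator parameters: D(x) = sigmoid(w*x + b)

lr, n = 0.05, 64
for step in range(3000):
    real = real_batch(n)
    fake = rng.normal(0.0, 1.0, n) + theta

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (-(1 - d_real) * real + d_fake * fake).mean()
    b -= lr * (-(1 - d_real) + d_fake).mean()

    # Generator update (non-saturating loss): make fakes look real to D.
    d_fake = sigmoid(w * fake + b)
    theta -= lr * (-(1 - d_fake) * w).mean()

# The discriminator's feedback has pushed theta toward the real data's mean.
print(round(theta, 2))
```

Each side improves only by exploiting the other’s mistakes, the same discrimination-and-correction loop described in the infant analogy above; in a real GAN both sides are deep networks, and the pressure eventually makes the generator’s outputs hard to distinguish from genuine data.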

Generative AI programs pose a threat to creators because they allow anyone to generate high-quality works quickly, which will inevitably oversaturate creative marketplaces and erode the value of all creative works. [26] If that were not a grave enough cause for concern, many generative AI programs are products of copyright infringement, existing solely because they were trained on enormous amounts of unlicensed data. [27] Creators depend upon the provisions of copyright law to protect their ability to control their work and make a living from it. If the currently filed lawsuits fail to set a strong precedent in favor of copyright protection, AI developers will continue to profit enormously [28] from programs made possible by infringing the copyrights of millions of creators, programs which, ironically, have the capacity to put those same creators out of business.

On January 13, 2023, three artists filed a lawsuit against Stability AI, Midjourney, and DeviantArt, alleging direct and vicarious copyright infringement and violations of the Digital Millennium Copyright Act (“DMCA”), statutory and common-law rights of publicity, and unfair competition law. [29] The artists claim that Stable Diffusion was trained on billions of copyrighted images taken from the internet without licenses from the copyright owners. [30] Because users can craft precise prompts to generate images and art in the style of a specific artist, the generated outputs siphon commissions from the original artists and create unfair competition. [31] And because versions of the Stable Diffusion Model trained on the artists’ work were used to create and train Midjourney’s program and DeviantArt’s DreamUp, those corporations are implicated in the suit as well. [32]

On February 3, 2023, Getty Images (“Getty”) filed suit against Stability AI, alleging copyright and trademark infringement, trademark dilution, deceptive trade practices, alteration of copyright management information (“CMI”), and violations of unfair competition law. [33] Getty is a global distributor of digital visual content acquired through licensing agreements with thousands of creators, and it protects its creators’ work from infringement through watermarks and metadata containing CMI. [34] Getty also explicitly offers licensing agreements to AI developers seeking material for training AI models. [35] Getty alleges that Stability AI copied twelve million copyrighted images into the Stable Diffusion Model without paying to license them, even though Getty explicitly prohibits unauthorized reproduction of its content for commercial purposes. [36] Further, Getty’s complaint includes an example output from the Stable Diffusion Model showing an image with a distorted version of Getty’s watermark. [37]

Finally, on November 3, 2022, two anonymous plaintiffs filed a class action lawsuit against GitHub, Microsoft, and OpenAI, alleging violations of the DMCA, the Lanham Act, the California Consumer Privacy Act, and unfair competition law, as well as breach of contract, tortious interference with contractual relationships, fraud, and negligence. [38] The plaintiffs own copyrighted code hosted on GitHub, an open-source code-development platform, which distributes Copilot, a program that uses AI to generate code for software developers. [39] Because Microsoft acquired GitHub in 2018, distributes Copilot, and partially owns OpenAI, which maintains the Codex model that powers Copilot, those corporations are implicated as well. [40] The plaintiffs allege that Codex and Copilot were trained on their material, sourced from GitHub repositories in violation of licensing restrictions. [41] They further allege that Copilot frequently generates output containing exact copies of its GitHub-sourced training materials without the attribution required by the open-source licenses, in violation of the DMCA. [42]

The only definitive guidance comes from a U.S. Copyright Office (“USCO”) statement published on March 16, 2023, and even that statement includes a disclaimer that the USCO’s official stance may shift as new legal precedent is set. [43] The statement confirms only that works generated solely by AI are not copyrightable. [44] This is because copyright law equates typing a text prompt with giving an artist instructions for a commission: the generative AI user merely provides parameters, while the technology, or rather the training data, supplies the traditional elements of authorship. [45] Unfortunately, this does little to unravel the primary issue at hand: unlicensed training images.

These programs pose a threat similar to the one Napster posed to the music industry. [46] Illegal music copying and distribution cut into the profits of musicians, singers, and songwriters across the globe, and widespread access forever changed the value of music. [47] Because the Stable Diffusion Model has been publicly used and accessed by so many, legal retribution will hardly be able to turn back the clock. Even if the law imposes safeguards, AI programs and developers will likely prevail through corporate consolidation and through subscription-based services like those Getty already offers to AI developers, in hopes that creators may retain some form of compensation, akin to the meager royalties streams generate on Spotify. [48] It would not be unfathomable if traditional artists were forced to adapt in order to compete and earn income in this new era, where AI-generated art is already winning fine art competitions. [49]

The fear of being replaced by a machine has always been quelled with one notion: despite science fiction’s best efforts, machines are not people. Anyone who has called a large corporation’s customer service line and had the pleasure of engaging with an automated response menu will tell you with certainty that the creativity and ingenuity of the human brain cannot be replicated in zeros and ones. But when machines can do more than carry out hard labor, process data, and direct angry callers, when they can create visual art, music, scripts, and essays, thoroughly answer complex questions, and even pass the Bar Exam, can we really be so sure? [50]


[1] Harvey Brooks, The Relationship Between Science and Technology, 23 Rsch. Pol’y 477 (1994), []; Andrew Souppouris, Technology Has Changed Art and This Is What It Looks Like, Verge (July 3, 2014, 9:09 AM), [].

[2] Kyle Wiggers, The Current Legal Cases Against Generative AI Are Just the Beginning, TechCrunch (Jan. 27, 2023, 10:30 AM), [].

[3] Calum McClelland, The Impact of Artificial Intelligence – Widespread Job Losses, IoT For All (Jan. 30, 2023), [].

[4] Ed Burns, Artificial Intelligence (AI), TechTarget, [] (last updated Mar. 2023).

[5] 4 Types of AI: Getting to Know Artificial Intelligence, Coursera, [] (last updated Jan. 12, 2023).

[6] Id.

[7] What is Deep Learning? Definition, Examples, and Careers, Coursera, [] (last updated May 3, 2022).

[8] Nick Routley, What is Generative AI? An AI Explains, World Econ. F. (Feb. 6, 2023), [].

[9] Sabrina Ortiz, What Is ChatGPT and Why Does It Matter? Here’s What You Need to Know, ZDNET (Apr. 18, 2023), [].

[10] Id.  

[11] Ellen Glover, What Is Generative AI?, Builtin (Mar. 23, 2023), []; Benj Edwards, With Stable Diffusion, You May Never Believe What You See Online Again, Ars Technica (Sept. 6, 2022, 8:30 AM), []; Alex Mitchell, How Frightening New AI Midjourney Creates Realistic Fake Images, N.Y. Post (Apr. 5, 2023, 7:37 PM), [].

[12] Benj Edwards, Microsoft’s New AI Can Simulate Anyone’s Voice with 3 Seconds of Audio, Ars Technica (Jan. 9, 2023, 4:15 PM), [].

[13] Andrea Agostinelli et al., MusicLM: Generating Music From Text, arXiv:2301.11325, [] (last visited Apr. 26, 2023); Kristin Houser, Google’s AI Music Generator Is Like ChatGPT for Audio, Freethink (Apr. 17, 2023), [].

[14] Techletters, Combine Chat GPT-3, DALL-E 2 & Synthesia to Create AI News Videos, MEDIUM (Dec. 21, 2022), [].

[15] Dave Johnson & Alexander Johnson, What are Deepfakes? How Fake AI-Powered Media can Warp our Perception of Reality, Bus. Insider (Apr. 5, 2023, 4:35 PM), [].

[16] What is Generative AI?, NVIDIA, [] (last visited Apr. 26, 2023); Techletters, supra note 14.

[17] Id.; Burns, supra note 4.

[18] Benny Cheung, Stable Diffusion Training for Personal Embedding, Benny’s Mind Hack (Nov. 2, 2022), [].

[19] Vivek Muppalla & Sean Hendryx, Diffusion Models: A Practical Guide, Scale (Oct. 19, 2022), [].

[20] Ortiz, supra note 9.

[21] Cheung, supra note 18.

[22] Id.  

[23] Sameer Farooqui, From GANs to Stable Diffusion: The History, Hype, & Promise of Generative AI, OctoML (Nov. 25, 2022), [].

[24] Amanda L. Woodward & Karen L. Hoyne, Infants’ Learning About Words and Sounds in Relation to Objects, 70 Child Dev. 65, 65-77 (1999), [].

[25] Overview of GAN Structure, Google Developers, [] (last updated July 18, 2022).

[26] Melanie Allen, Will Advances in AI Spell Doom for Creatives?, Wealth of Geeks (Mar. 13, 2023), [].

[27] James Vincent, AI Art Tools Stable Diffusion and Midjourney Targeted with Copyright Lawsuit, Verge (Jan. 16, 2023, 5:28 AM), []; James Vincent, Getty Images Sues AI Art Generator Stable Diffusion in the US for Copyright Infringement, Verge (Feb. 6, 2023, 10:56 AM), [].

[28] Kyle Wiggers, Stability AI, the Startup Behind Stable Diffusion, Raises $101M, TechCrunch (Oct. 17, 2022, 12:01 PM), []; Jeffrey Dastin et al., Exclusive: ChatGPT Owner OpenAI Projects $1 Billion in Revenue by 2024, Reuters (Dec. 15, 2022, 9:09 AM), [].

[29] Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Jan. 13, 2023).

[30] Id.

[31] Id.

[32] Id.  

[33] Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135-UNA (D. Del. filed Feb. 3, 2023).

[34] Id.

[35] Id.  

[36] Id.  

[37] Id.  

[38] Doe v. GitHub, Inc., No. 3:22-cv-06823-KAW (N.D. Cal. filed Nov. 3, 2022).

[39] Id.  

[40] Id.  

[41] Id.

[42] Id.  

[43] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190, 16190-94 (Mar. 16, 2023) (codified at 37 C.F.R. § 202), [].

[44] Id.  

[45] Id.; James Hookway, AI-Generated Comic Book ‘Zarya of the Dawn’ Keeps Copyright but Key Images Excluded, Wall St. J. (Feb. 24, 2023, 1:01 PM), [].

[46] Alexandra Tremayne-Pengelly, Getty CEO Dubs AI-Generated Art ‘The Next Napster’, Observer (Jan. 19, 2023, 1:10 PM), [].

[47] Dan Kopf, Napster Paved the Way for Our Streaming-Reliant Music Industry, Quartz (Oct. 22, 2019), []; David Marin, How Napster Changed the Music Industry Forever, SLIDEBEAN (July 1, 2021), [].

[48] Travis M. Andrews, In the Spotify Era, Many Musicians Struggle to Make a Living, Wash. Post (Feb. 4, 2023, 6:00 AM), [].

[49] Kevin Roose, An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy., N.Y. Times (Sept. 2, 2022), []; Chris Beckman, Is AI Art Having its Napster Moment?, Beckman Law P.C., [] (last visited Apr. 26, 2023).

[50] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score Nearing 90th Percentile, ABA J. (Mar. 16, 2023, 1:59 PM), []; Routley, supra note 8; Ortiz, supra note 9.