Senate Judiciary Subcommittee on Intellectual Property Hearing on Artificial Intelligence and Intellectual Property – Part II: Copyright
Client Alert | 8 min read | 07.26.23
In an unconventional opening to the normally staid proceedings of the United States Senate, the voice of Frank Sinatra introduced the July 12, 2023 Senate Judiciary Subcommittee hearing on artificial intelligence (AI) and intellectual property. More accurately, an AI-generated version of Frank Sinatra’s voice sang about regulating AI to the tune of New York, New York, which Senator Chris Coons (D-DE), Chairman of the Senate Judiciary Subcommittee on Intellectual Property, used to illustrate both the possibilities and the risks of the use of AI in creative industries.
Chairman Coons presided over a hearing entitled “Artificial Intelligence and Intellectual Property—Part II: Copyright,” which aimed to explore the nuances of creating legislation that would protect copyright while incentivizing AI innovation. The hearing comes approximately one month after the Subcommittee’s hearing on AI and Patents, Innovation, and Competition, and is the second in a series of hearings on AI and IP.
Hearing Overview
In his opening statement, Chairman Coons explained that while AI opens new avenues for innovation and creativity, the increased use of generative AI in creative industries creates essential copyright law questions that affect not only the creative community but U.S. competitiveness. Coons focused on two issues: (1) whether using copyrighted content to train AI models is copyright infringement and (2) whether AI-generated content should be given copyright protection.
Ranking Member Thom Tillis (R-NC) noted that the creative community is experiencing immediate and acute challenges and called for the Subcommittee to carefully study what legislative guardrails may be necessary. Tillis stated that with proper regulation and oversight, AI is not a threat to the future and is key to maintaining the U.S.'s status as a leader in innovation and creativity. But he also said that "action is clearly required."
Summary of Witness Testimony
A wide array of experts, including artists, music industry leaders, and tech company executives, spoke to a large audience. The following is a brief summary of the testimonies given:
Jeffrey Harleston, General Counsel and Executive Vice President of Business and Legal Affairs, Universal Music Group (UMG)
In his testimony, Harleston, testifying on behalf of UMG, one of the leaders in music-based entertainment that contracts with artists like Taylor Swift, Alicia Keys, Drake, and The Weeknd, noted that songwriters and artists rely on the fundamentals of copyright. Drawing on his experience with UMG's broad roster of artists, he noted that AI in the service of artists and creativity is useful; however, "irresponsible" AI that appropriates an artist's work, name, image, or voice is exploitation and should require affirmative consent. Harleston encouraged the Committee to consider a Federal Right of Publicity statute; the right of publicity is currently recognized in only a slim majority of states.
Karla Ortiz, Concept Artist, Illustrator, and Fine Artist
Ortiz claimed that her work, which has helped shape blockbuster films such as Guardians of the Galaxy Vol. 3, Loki, The Eternals, Black Panther, Avengers: Infinity War, and Doctor Strange, has been repeatedly taken and repurposed by AI without her knowledge, compensation, or consent. AI companies, she argues, train systems with copyrighted and licensed data without artists' consent, credit, or compensation, which creates an exploitative environment that threatens artists' livelihoods. In addition to consent, credit, and compensation, Ortiz also advocated for transparency in data sharing and affirmative opt-in policies, as opposed to retroactive opt-out policies. She explained that it is difficult to remove content derived from the unauthorized use of copyrighted material, even through avenues such as filters.
Matthew Sag, Professor of Law, Artificial Intelligence, Machine Learning, and Data Science, Emory University School of Law
Matthew Sag began his testimony by stating that “copyright law does not, and should not, recognize computer systems as authors” because AI is not capable of original intellectual conception. However, he claimed generative AI is also not designed to copy original expression and does not copy data in any literal sense, which means training generative AI on copyrighted works is typically fair use. Sag believes that, at this point in time, generative AI does not require a major overhaul of the U.S. copyright system.
Dana Rao, Executive Vice President, General Counsel, and Chief Trust Officer, Adobe Inc.
Rao explained Adobe Inc.'s Firefly program, which uses only commercially safe, licensed content for its generative AI systems in order to avoid copyright infringement challenges. Firefly also employs "Do Not Train" tags, which allow artists to withhold consent for Adobe Inc. to train its systems on their work. Rao also encouraged a system similar to the Federal Right of Publicity statute noted by Harleston, which he calls the Federal Anti-Impersonation Right (FAIR) law.
Ben Brooks, Head of Public Policy, Stability AI
As a representative from a company that uses copyrighted work to train its AI systems, Brooks focused his testimony on Stability AI's intent and aspirations to protect creative content. Brooks claimed that AI is a tool for creators, not a substitute for them, and emphasized the need for a strong and broad data set for training AI systems so that those systems can avoid discrimination and bias in their generative output. Brooks noted that while the company does not compensate authors for any input data subject to copyright, it implements voluntary opt-outs and produces identification for AI-generated content.
Other Hearing Highlights
- Copyright’s Significance in the U.S. Economy: Witnesses and Senators (notably Harleston, Ortiz, and Tillis) cited a figure from the International Intellectual Property Alliance’s 2022 report on copyright industries in the U.S. economy noting that core copyright industries added $1.8 trillion of value to U.S. GDP, accounting for 7.8 percent of the entire U.S. economy.
- Presence of Copyrighted Data Training in the AI World: Karla Ortiz criticized companies that trained their AI systems on copyrighted and licensed human-created content, calling out several larger generative AI models by name. Senator Tillis acknowledged that it is difficult to determine what a language model was trained on and made clear that these concerns are not unique to any one company; rather, they are relevant to a broader set of issues Congress hopes to address.
- The Nuances of Opt-in/Opt-out Policies: Faced with two disparate approaches to treating input data (Stability AI’s use of copyrighted data and Adobe Inc.’s use of content in the public domain), the Committee expressed concern about the ability of copyright owners to control the intellectual property in, and the use of, their works when programs such as Stability AI ingest their content without seeking advance permission. Mr. Brooks responded that Stability AI uses an open system that allows the public to search its input data set. If copyright owners found their property in this data set, they could opt out of having their content used in Stability AI’s model. Such an opt-out process is similar to that proposed by the European Union. Senator Padilla asked whether an AI model can unlearn the knowledge it acquired from the use of improper data; Mr. Brooks did not directly answer the question.
- Right of Publicity Statutes as Critical for Copyright Protection: Senators Coons, Tillis, Klobuchar, and Blackburn brought up the possibility of a Federal Right of Publicity law. Matthew Sag encouraged the Committee to look beyond public figures when considering extensions to the right of publicity, noting that ordinary citizens, in addition to famous celebrities, deserve protection from deepfakes and impersonations of their likeness. When considering the right of publicity, Sag also encouraged the Committee to determine fair use based on the output’s resemblance to the input, rather than the output’s potential for commercial replacement of the input. Rao suggested that a “Federal Anti-Impersonation Right” should apply to everyone, regardless of publicity status.
- Defining the Distinction Between AI-Generation and AI-Assistance: In view of the present landscape regarding authorship, there were not many questions from the Committee about which works containing AI content are eligible for copyright protection. However, part of the discussion focused on providing guidance on the breadth of copyright protection for content that is fully AI-generated versus AI-assisted. The Copyright Office set forth its policy regarding works that include AI-generated content on March 16, 2023 (88 Fed. Reg. 16190), concluding that copyright protection extends only to the human-authored aspects of such works. Panelist Matthew Sag indicated that while it is acceptable for human authors to use AI to aid in the creative process, computer systems that implement such AI should not be authors. Fellow panelist Jeffrey Harleston indicated that the Copyright Office was doing a good job of determining copyrightability when AI-generated content was involved.
- The Role of Social Media: Senator Klobuchar questioned whether there should be an obligation for social media platforms to disclose the use of generative AI, noting specifically the harm to election integrity without tools to identify deepfakes or other forms of generative AI content.
- A “Fairly Useful Way to Steal”: Senator Marsha Blackburn (R-TN) self-identified as a champion for fair use protections for artists. She criticized the use of copyrighted content by generative AI systems, calling it a “fairly useful way to steal.” Blackburn argued that state-level publicity laws may not provide sufficient protection to creators. She pointed to a recent copyright debate sparked by an incident in which a TikTok creator used AI to mimic the vocals of popular music artists Drake and The Weeknd in a new song, “Heart on My Sleeve,” which went viral on YouTube and Spotify before being taken down at the request of UMG, the parent company of the artists’ record label, Republic Records.
- AI as a Creative Tool: While much of the discussion in the hearing centered on how to regulate AI in the copyright context, there was also a robust discussion about how AI offers a number of useful tools that increase artists’ creative potential and make the creation of art more accessible. Harleston told a story about one of his clients who used generative AI to simultaneously release a song he wrote in seven different languages in his own voice. Senator Coons, in his opening statement, pointed to Paul McCartney, who recently made headlines announcing that AI helped create a last Beatles song, 50 years after the band broke up.
- Bipartisan Legislation: Ranking Member Tillis encouraged the Committee to conduct a thorough, bipartisan, and detailed investigation of the issue before moving forward with legislation, though he did note that “action is clearly required.” The Chairman and Ranking Member also emphasized the need to align with other like-minded countries on the rights and protections for individuals, creators, and consumers against deepfakes, copyright infringement, and impersonation of likeness or style.
Conclusion
Crowell & Moring, LLP will continue to monitor congressional and executive branch efforts to regulate AI. Our lawyers and public policy professionals are available to advise any clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns in government contracts, employment law, intellectual property, privacy, healthcare, antitrust, or other areas.