DMLA Submits Response to NSF Request for Information on AI Action Plan
BEFORE THE
NETWORKING AND INFORMATION TECHNOLOGY RESEARCH AND DEVELOPMENT NATIONAL COORDINATION OFFICE,
NATIONAL SCIENCE FOUNDATION
Request for Information on the
Development of an Artificial
Intelligence (AI) Action Plan
March 15, 2025
The Digital Media Licensing Association (DMLA) appreciates the opportunity to submit the following comments in response to the request for information (RFI) published by the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO), National Science Foundation on behalf of the Office of Science and Technology Policy (OSTP) in the Federal Register on February 6, 2025, requesting input from interested parties on priority actions that should be included in the Artificial Intelligence (AI) Action Plan.(1, 2)
The DMLA is a non-profit, non-partisan trade association founded in 1951 that represents the interests of content creators, digital media producers, distributors, and licensors. Our membership spans thousands of industry professionals across the visual content ecosystem, from individual photographers and illustrators to major stock agencies, technology innovators, and AI developers. For more than seven decades, we have worked to establish business standards, develop best practices, and advocate for copyright protection, privacy rights, fair licensing practices, and now ethical AI development. DMLA members license millions of images, videos, illustrations, vectors, audio, and other creative content globally each day, powering everything from news media and educational materials to corporate communications, advertising campaigns, and entertainment products.
DMLA commends the National Science Foundation, the OSTP, and each federal agency involved in the Administration's coordinated effort to ensure that America's AI dominance is sustained and enhanced. We have been actively engaged in discussions regarding AI and copyright with our members, industry partners, and government agencies, including participating in the Copyright Office's and Patent and Trademark Office's (PTO) studies on AI. We firmly believe that the insights provided by these expert intellectual property agencies on the intersection of copyright and AI should meaningfully inform the development of the AI Action Plan.
DMLA supports the responsible, respectful, and ethical development and use of AI technologies that drive innovation while honoring intellectual property rights. An AI ecosystem that properly values visual content and respects the rights of creators and copyright owners will be stronger and more sustainable than one that doesn’t, and this requires acknowledging the critical role of properly licensed works in AI development. Creators should be able to determine if, how, and when their content is used and shared. Many of our members are exploring how generative artificial intelligence (GAI) can enhance their businesses while developing specialized licensing solutions specifically for AI training and development. We are actively establishing standards for ethical AI practices, content authenticity, and proper attribution across the media landscape, ensuring our industry continues to thrive amid technological transformation.
We submit these comments to ensure that the AI Action Plan is developed with a respect for and recognition of longstanding copyright laws and licensing practices that have made the United States the global leader in both creative industries and technological innovation.
Respecting Established Copyright Laws Promotes Human Flourishing, Economic Competitiveness, and National Security
The RFI asks for input in response to President Trump’s Executive Order 14179 “to establish U.S. policy for sustaining and enhancing America's AI dominance in order to promote human flourishing, economic competitiveness, and national security.” (emphasis added). Promoting human flourishing, economic competitiveness, and national security are all objectives that run parallel to the goals of America’s copyright system, enshrined in Article I, Section 8, Clause 8 of the Constitution. Known as the “IP Clause,” it grants Congress the power “to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” IP laws, including copyright laws, enable human authors to create and innovate, and they are key to securing America’s sustained economic competitiveness and global leadership. It is essential that the AI Action Plan be developed with an appreciation for the Constitutional guarantees that protect copyright owners and the human creators without whom GAI systems would not exist.
To that end, long-standing copyright laws and policies must not be cast aside in favor of new laws or policies obligating creators to essentially subsidize GAI technologies. As detailed further below, existing U.S. copyright laws are carefully balanced to provide essential protections along with important flexibilities – a deliberate rubric that must not be altered for AI. From broadcast content, film and TV shows, and journalism to sound recordings, works of visual art, books, and everything in between, the ingestion of copyright-protected works for GAI training is one of the central controversies related to the development of GAI technologies. Whether the unauthorized ingestion of copyright-protected works for training constitutes copyright infringement or whether it qualifies for U.S. copyright law’s fair use exception is an issue that has become the focus of nearly forty ongoing federal lawsuits, and it’s one that will and should continue to be decided on a case-by-case basis.
Federal courts have been applying fair use for over a century, throughout various technological advancements like the photocopy machine, the VCR, the Internet, digital music services, and many other new technologies. Courts are clearly capable of applying fair use to novel questions surrounding disruptive technologies, and they are best positioned to do so with GAI. Thus, there is no need to change copyright law or create a new AI exception in the law. This is not just the view of a broad consensus of the copyright industries; it is also the view of numerous GAI companies and the diverse industry groups that represent them. (3) As a result, there are many areas related to AI where the Administration may feel the need to take action to help facilitate U.S. world dominance in AI, but copyright is not one of those areas.
Promoting Economic Growth and Good Jobs
While AI is predicted to be a significant contributor to the economy, the U.S. creative industries—made possible through copyright law—have been among the most significant contributors to the U.S. economy and to job creation for decades. A recent report on the economic impact of copyright by the International Intellectual Property Alliance notes that, in 2023, the core copyright industries contributed more than $2 trillion to the U.S. gross domestic product (GDP) (accounting for 7.66% of the U.S. economy) and employed 11.6 million workers (or 5.43% of the workforce). (4) In addition to growing at a rate more than three times that of the rest of the economy, the report notes that the core copyright industries:
1) make up an increasingly large percentage of value added to GDP;
2) create more and better paying jobs than other sectors of the U.S. economy;
3) grow faster than the rest of the U.S. economy;
4) contribute substantially to U.S. foreign sales and exports, outpacing many industry sectors; and
5) make significantly large contributions to what the [U.S. Bureau of Economic Analysis] defines as the digital economy, which does not even encompass the full scope of the copyright industries’ digital activities. (5)
Copyright industries are an invaluable asset to the U.S. economy because the exclusive intellectual property rights afforded by copyright incentivize investment in the creation and dissemination of new expressive works and allow those copyright owners to recoup that investment. The U.S. continues to be the world leader in intellectual property—an attribute that contributes significantly to this country’s vast cultural influence and its standing as the world’s leading economy. The AI Action Plan must consider the effect policy actions may have on copyright’s importance to the economy and job creation.
Promoting Free Markets Through Copyright Licensing
Promoting free markets and a robust voluntary licensing ecosystem is essential to ensuring American competitiveness in GAI. Copyright law enables creators and copyright owners to supply GAI companies with flexible and responsive solutions for training through tailored licensing and business models for GAI development. The ability of creators and copyright owners to create works and enforce their rights in those works is crucial because it incentivizes the further creation and proliferation of high-quality creative works which form the basis for GAI development. Without copyrighted works to train GAI models, GAI technologies cannot generate high-quality outputs. The growing number of licensing and partnership deals between GAI companies and rights holders being reached with each passing day demonstrates these points.
The rise of generative AI technologies has created a robust and growing market for licensing copyrighted works for AI training purposes. As representatives of the visual media licensing industry, we have witnessed firsthand the development of numerous free-market licensing agreements between rightsholders and AI developers. Our members have been at the forefront of responsible licensing practices that respect creators' rights while fostering innovation.
Within our industry, specialized licensing platforms have emerged specifically to address AI training needs. Companies like Bria.ai have pioneered responsible AI development by training their models exclusively using properly licensed content acquired from a network of over 30 data partners. Similarly, initiatives like Troveo and Created by Humans are establishing frameworks for fair compensation when creative works are used for AI training. The Fairly Trained certification (fairlytrained.org) is another important development dedicated to promoting fairness in the use of training data for generative AI models. Its certification process helps consumers identify which companies obtain proper licenses and respect creator consent for training data, addressing the common practice of using creators' work without permission or compensation.
What's particularly promising about this emerging licensing ecosystem is how it creates new monetization pathways that benefit creators of all sizes, not just established media companies. Independent photographers, illustrators, writers, and other creators who may struggle to break into traditional publishing channels can now directly license their work for AI training purposes. By preserving the current copyright framework rather than creating exceptions for AI, we allow these promising new markets to develop, democratizing access to revenue streams that would otherwise be unavailable to smaller creators. This creates a more inclusive creative economy where both human creativity and technological innovation are properly valued.
This licensing-based ecosystem directly supports the foundational principles of American copyright law. The Founding Fathers recognized that protecting creators' rights to benefit monetarily from and control their works was essential to fostering innovation, not hindering it. The constitutional balance struck in the IP Clause, protecting works for limited times to promote progress, creates the optimal environment for both creation and innovation. A properly functioning licensing market for AI training data fulfills this vision by ensuring creators are incentivized to continue producing high-quality works while enabling technological advancement. By contrast, allowing unfettered access to creative works without proper licensing would ultimately stifle innovation by discouraging creators from investing time and resources into new content creation.
The recent Thomson Reuters v. Ross Intelligence ruling (February 2025) (6) further validates the importance of proper licensing for AI training data. The court rejected Ross's fair use defense when it used Thomson Reuters' content without permission to train its legal AI system, emphasizing that a viable licensing market exists for such purposes.
Since the rise of GAI technologies a few years ago, the number of free-market licensing agreements between copyright owners and GAI companies has grown significantly. Increasing numbers of copyright owners, particularly news, magazine, and academic publishers and image/media licensors are licensing their copyrighted works to AI companies for commercial uses and have been doing so for years. This shows that the market is working and there does not need to be any change in copyright law or policies that could disrupt that market. Copyright and GAI can continue to progress successfully together without changes to copyright law.
While the GAI-copyright licensing market has grown over time, this growth will be stunted if changes are made to copyright law that create new exceptions for GAI training. Nobody disputes that GAI companies and developers must pay for and invest in computer chips and cloud infrastructure. It is part of the cost of doing business in a free market. So, too, is free-market licensing of copyrighted works. To think otherwise would be detrimental to American economic competitiveness, because strong copyright laws have already been shown to foster AI innovation, as licensed works form the basis of competitive AI products, not to mention the direct benefit copyrighted works themselves provide to the American economy and to the balance of trade with foreign nations.
No policy should be adopted in response to GAI that interferes with the free market and the freedom of copyright owners and GAI companies and developers to enter into licensing agreements. The marketplace should continue to properly value and incentivize creativity, and policies developed through the AI Action Plan should not interfere with copyright owners' right to choose whether to license their works for GAI purposes. Copyrighted works provide immense value to GAI developers, and they can and should pay for that value—as many are already doing today. In other words, copyright law sets the conditions for the market to prevail and for the U.S. to maintain its position as a global leader in both the AI and creative industries. Free markets will encourage the creation of GAI models based on licensable content with valuable and accurate metadata, ultimately making the U.S. models more competitive and valuable.
For these reasons, the Administration should avoid changes to copyright law regarding AI. The existing legal framework is functioning effectively to balance innovation with creator rights through the market-based licensing ecosystem described above.
It should be noted that adoption of these technologies has actually been slowed, as many enterprise users remain hesitant to implement AI solutions due to uncertainties surrounding copyright issues. Establishing clear copyright guidelines that respect existing laws would accelerate responsible AI development and adoption while reducing the resources many companies currently must allocate to content filtering and moderation.
The Need for Copyright Transparency
If the Administration does address copyright and GAI issues, the one area that warrants action is transparency regarding the copyrighted materials used to train publicly available GAI models when those materials have not been licensed for training purposes. Developers of GAI models available to the public that ingest the copyrighted works of third parties without a license should be required to satisfy transparency standards related to the collection, retention, and disclosure of the copyrighted works they use to train AI. Adequate transparency regarding the ingestion of unlicensed copyrighted works is vital to ensuring that copyright owners' rights are respected alongside the advancement of GAI technologies.
Best practices that encourage transparency around GAI ingestion already exist at corporations, research institutions, governments, and other organizations, enabling users of AI systems, and those affected by their outputs, to know the provenance of those outputs. For example, developers of GAI models made available on an open-source basis commonly disclose the public datasets they have used for training, and those datasets can then be interrogated by the owners of copyrighted works to check whether their works have been used. There is no reason these same levels of disclosure should not also apply to all GAI ingestion of unlicensed copyrighted works. Such disclosure steps are non-burdensome and can be carried out without compromising trade secrets. It is vital that GAI developers be required to maintain adequate and proportionate records of copyrighted works they neither own nor license that were used to train the GAI, and to make those records publicly accessible and searchable as appropriate.
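To illustrate, in purely hypothetical terms, how a copyright owner might interrogate a disclosed dataset, the short sketch below assumes a training-data manifest published as a CSV of source URLs. The file name, column name, and example URL are illustrative assumptions, not a description of any particular developer's disclosures.

```python
import csv

def find_my_works(manifest_path, my_urls):
    """Scan a disclosed dataset manifest (CSV with a 'url' column)
    for any URLs belonging to a copyright owner's own catalog."""
    catalog = {u.strip().lower() for u in my_urls}
    matches = []
    with open(manifest_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = (row.get("url") or "").strip().lower()
            if url in catalog:
                matches.append(url)
    return matches

if __name__ == "__main__":
    # Hypothetical inputs: a published manifest and the owner's URL list.
    hits = find_my_works("disclosed_training_manifest.csv",
                         ["https://example.com/photos/sunset-1234.jpg"])
    print(f"{len(hits)} of my works appear in the disclosed dataset")
```

The same kind of check scales from an individual photographer's portfolio to an agency's full catalog once the developer's records are published in a searchable form.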
Adequately and appropriately scoped transparency and record-keeping requirements benefit copyright owners by enabling them to learn whether and how their works have been used to train AI models. They also benefit AI developers, because transparency promotes consumer trust. Consequently, transparency by businesses that offer GAI systems to the public is a crucial component of any AI policy.
To ensure meaningful transparency, we recommend that the Administration implement mandatory public disclosure requirements for commercial GAI developers regarding the sources of their unlicensed training data. These suggested transparency measures are not merely about enforcement; they are essential to building consumer trust and market stability. Enterprise customers increasingly demand clarity about the provenance of AI training data to mitigate their own legal risks. A transparent AI ecosystem allows businesses to make informed decisions about which AI tools to adopt, driving market forces that reward responsible development practices.
Transparency of Copyright Outputs
One area where there is a clear public need for technical developments to be accelerated and aligned is the “explainability and assurance of AI model outputs,” one of the topics identified as being of interest in the RFI. The rate at which new AI-generated content is being produced, be it text, images, or other types of works, vastly outpaces the rate of production of the same types of human-generated works. Society risks a flood of disinformation and AI-generated content that will undermine the public’s trust in institutions and each other. To combat this, AI-generated and manipulated content needs to be identified at the root level, at the time it is produced. In this regard, it is critical that AI model outputs are labelled as such at the point of generation (or modification) so that individuals interacting with that content know whether it is human-created or AI-generated. If AI models do not do this, the burden will fall to the public, and the debate over truth will occur only after the AI-generated content has been published, if at all, allowing it to replicate and exert influence in the meantime.
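As one minimal, hypothetical sketch of what labelling at the point of generation could involve, the example below writes basic provenance fields into a generated PNG as it is saved. The label keys and model name are assumptions for illustration only; plain PNG text chunks are easily stripped, and robust approaches (such as cryptographically signed Content Credentials) go considerably further.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(image, out_path, model_name):
    """Embed basic provenance labels in a generated PNG at the point of creation.
    Durable labelling (e.g., signed manifests) goes further; plain tEXt chunks
    shown here are illustrative only and are easily removed."""
    info = PngInfo()
    info.add_text("ai_generated", "true")          # hypothetical label key
    info.add_text("generator_model", model_name)   # hypothetical label key
    image.save(out_path, pnginfo=info)

def read_labels(path):
    """Surface any embedded provenance labels so a platform or viewer can display them."""
    with Image.open(path) as img:
        return dict(img.text) if hasattr(img, "text") else {}

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "gray")   # stand-in for a model output
    label_generated_image(img, "output.png", "example-model-v1")
    print(read_labels("output.png"))
```

The point of the sketch is only that the information travels with the file from the moment of generation, so that downstream platforms can retain and surface it rather than reconstruct it after publication.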
Voluntary initiatives are being pursued by all the large technology companies in this area, but regulation is needed to achieve alignment. Essentially, responsibility needs to be placed on AI model providers to include sufficiently detailed and durable information within generated files, and on platforms to retain and surface that information. This needs to be done in a consistent manner if widespread adoption is to be achieved. Other governments are assessing options in this area, but none has yet specified how labelling should be done. There is an opportunity for the U.S. government to set global standards in this area if it can act quickly enough.
Key areas where development is needed are:
1) ensuring granularity of labelling (e.g., so viewers can identify whether content has been manipulated or retouched, whether it was human-created or AI-created, and, critically, which AI model has been used to generate or modify the content);
2) establishment of look-up registries so that any detached “output labels” can be re-attached (illustrated in the sketch following this list);
3) establishment of “trust lists” to guard against identity fraud; and
4) metadata identifiers to be supplemented by watermark and fingerprinting measures, so that a truly multi-layered approach is taken.
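As a rough, hypothetical sketch of the look-up registry concept in item 2, and of how a content fingerprint can serve as one layer of the multi-layered approach in item 4, the example below keys a small registry on a SHA-256 hash of a file's bytes so that a detached output label can be re-attached. The registry format and field names are assumptions; a production registry would also need perceptual fingerprints and watermarks that survive re-encoding, which a plain cryptographic hash does not.

```python
import hashlib
import json

def fingerprint(path):
    """Cryptographic hash of the file bytes; one identification layer only,
    since any re-encoding or crop changes the hash."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

class LabelRegistry:
    """Toy look-up registry mapping content fingerprints to provenance labels."""
    def __init__(self, store_path):
        self.store_path = store_path
        try:
            with open(store_path, "r", encoding="utf-8") as f:
                self.records = json.load(f)
        except FileNotFoundError:
            self.records = {}

    def register(self, path, label):
        self.records[fingerprint(path)] = label
        with open(self.store_path, "w", encoding="utf-8") as f:
            json.dump(self.records, f, indent=2)

    def lookup(self, path):
        """Re-attach a detached output label, if the registry knows this content."""
        return self.records.get(fingerprint(path))

if __name__ == "__main__":
    with open("sample_output.bin", "wb") as f:   # stand-in for a generated media file
        f.write(b"example generated content")
    registry = LabelRegistry("registry.json")
    registry.register("sample_output.bin", {"ai_generated": True, "model": "example-model-v1"})
    print(registry.lookup("sample_output.bin"))
```

Even this toy version shows why consistency matters: a registry is only useful if providers compute and publish fingerprints in a common way and platforms agree on where to look them up.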
Protecting and Promoting Copyright Is Crucial to Identifying Trade Barriers and Ensuring American Global Economic Competitiveness and Leadership
The global protection of U.S. intellectual property is an essential part of developing an AI Action Plan that will ensure U.S. economic competitiveness and sustained global leadership, and it’s a principle that the first Trump Administration championed. (7) Unfortunately, the development and deployment of GAI in foreign markets has created barriers to trade that put U.S. copyright owners at a disadvantage. These barriers have most frequently arisen in the form of broad copyright exceptions for GAI in some foreign countries that fundamentally weaken copyright protection and threaten the sustainability and competitiveness of America’s creative sector and its ability to contribute to U.S. economic growth and job creation. The DMLA and its members oppose such broad exceptions.
To overcome these barriers, we urge the Administration to champion the rights of American creators and copyright owners and support the protection of copyright globally through bilateral and multilateral engagement that advances human-centric and responsible GAI, promotes free markets and licensing, and ensures recordkeeping and transparency. Particularly as the global AI race continues, there will continue to be efforts to find unethical and unfair shortcuts in the name of progress, including measures which weaken and undermine copyright. If shortcuts are utilized without regard for intellectual property rights, it will lead to a continued devaluation of creative works, disincentivize the creation of new content, and undermine the creative economy that copyright laws were designed to protect. This directly contradicts the constitutional intent to “promote the progress of science and useful arts” through copyright protection.
America's intellectual property laws, including our robust protections for creators and innovators, are what set us apart from China and other countries that unfairly circumvent or weaken copyright owners' rights. We have already seen challenges to IP protection come up in the context of newer GAI technologies being developed in China. Strong IP and copyright protections are ultimately what gives the U.S. an advantage over those countries, and if we neglect those principles our advantage will be lost. This is why it is crucial now more than ever for the Administration to have an AI Action Plan that respects and promotes intellectual property rights, including copyright. Specifically, we urge opposition to broad copyright exceptions and support active engagement with countries and international organizations to instead promote strong copyright protections.
One such broad exception that is being considered in some countries is an “opt out” system through which copyright owners could exclude their works from future GAI training datasets. We urge the Administration to oppose any opt-out proposals, whether in the U.S. or abroad. U.S. copyright law is unequivocally an “opt-in” regime, and allowing a GAI system to use the work unless the copyright owner objects (i.e., opts out) would require enactment of legislation. As noted above, there is a burgeoning licensing market for AI training, which demonstrates that no exception is necessary. Thus, the copyright industries and many others would vehemently oppose any policy or change in the law that establishes or supports an opt-out regime, like the ones recently adopted by the EU.
Additionally, opt-out schemes fail to consider the practical difficulties of implementation. For example: (1) many copyrighted works have likely already been copied and used for training prior to any new opt-out regime; and (2) despite opting out, copies of the copyrighted works may still be included in GAI datasets through other means, such as when copies are scraped from a licensee of the copyright owner, from a third-party platform, or from a piracy site where a copy has been posted without authorization. The practical effects of opt-out, particularly regarding works already used to train GAI, are also negligible given that removing entire works at scale from a GAI model is challenging.
While some proponents claim that existing technical solutions may assist with opt-out, these tools typically have significant limitations: they are only effective to the extent the opt-out mechanism is recognized and respected, and they are often not designed to address scraping for GAI ingestion. (8) Copyrighted works also often exist in multiple places on the internet, making it nearly impossible for a rights holder to apply the opt-out indicator to every copy of a work. For example, a single song can be streamed on a digital streaming platform, played as the background music of a user-uploaded video on a social media platform or in advertisements, or displayed as notes or lyrics on a website. It is impossible for the rights holder to opt out in a way that tags every downstream use with a properly recognized and respected opt-out signal to prevent GAI scraping and use. The current discussions on this issue in the context of the EU AI Act clearly demonstrate that no workable opt-out mechanism currently exists or is likely to exist in the future.
Moreover, copies of works available on pirate sites are even further removed from the copyright owner’s control. Documented evidence has emerged of GAI companies using pirated copies of creative works to train their AI models and even proliferating pirated copies themselves during the GAI development process. (9) An opt-out regime fails to address or ameliorate any of these problems and certainly does not afford the rights holder any semblance of control. For these same reasons, there is currently a high level of uncertainty over what constitutes an effective opt-out, (10) and as time passes this uncertainty is being exploited by some GAI developers who continue to train on scraped content despite legitimate efforts from copyright owners to opt out. So, in sum, opt-out does not and will not work.
Conclusion
When formulating a new AI Action Plan, DMLA strongly urges the Administration to reject calls for new copyright exceptions for AI training. Instead, the Action Plan should explicitly acknowledge the adequacy of existing copyright laws, actively support the flourishing free market for licensed content, and implement meaningful transparency requirements for commercial GAI developers. We encourage policymakers to take decisive action to adopt clear copyright policies, as legal uncertainty will undoubtedly slow AI adoption and reduce U.S. leadership in this critical area. The Administration should work collaboratively with AI companies and copyright holders to establish guidelines that respect intellectual property rights while fostering innovation.
As representatives of an industry that licenses hundreds of millions of images worldwide, we believe responsible AI innovation depends on respect for existing U.S. intellectual property laws and free-market licensing models. The U.S. economy, to which visual content creators and licensors are vital contributors, will be best served by policies that promote, protect, and enforce copyright globally.
We appreciate the opportunity to submit these comments and are happy to answer any additional questions.
Respectfully Submitted,
Joe G. Naylor
President
Digital Media Licensing Association
c/o Cowan, DeBaets, Abrahams & Sheppard LLP
60 Broad Street, 30th Floor
New York, New York 10004
_________
Footnotes
1 This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.
2 DMLA submits this RFI on behalf of the association and not on behalf of any individual members.
3 See OpenAI, Reply Comments Submitted in Response to U.S. Copyright Office’s Aug. 30, 2023, Notice of Inquiry at 2-3 (Dec. 6, 2023) (“One recurring theme in the initial round of comments is a recognition that there is no need for fundamental changes to copyright law at this time…OpenAI echoes the sentiments highlighted above that legislative changes to copyright would be premature at this time.”); Google, Comments Submitted in Response to U.S. Copyright Office’s Aug. 30, 2023, Notice of Inquiry at 1 (Oct. 30, 2023) (“However, we believe that existing copyright doctrines are sufficiently flexible to handle many of the scenarios that will likely arise with AI, and that courts — informed with the facts of specific cases — are the appropriate first venues for determining how those doctrines should apply.”); Computer & Communications Industry Association (CCIA), Comments Submitted in Response to U.S. Copyright Office’s Aug. 30, 2023, Notice of Inquiry at 1 (Oct. 30, 2023) (“CCIA believes that existing U.S. copyright law is capable of addressing issues related to artificial intelligence and serves to promote creative activity in AI technology.”).
4 Robert Stoner & Jéssica Dutra, Copyright Industries in the U.S. Economy: The 2024 Report, INT’L INTELL. PROP. ALL. (Feb. 2025), https://www.iipa.org/files/uploads/2025/02/IIPA-Copyright-Industries-in-the-U.S.-Economy-Report-2024_ONLINE_FINAL.pdf.
5 Id. at 21.
6 Thomson Reuters Ent. Centre GmbH et al. v. Ross Intelligence, Inc., No. 20-cv-613 (D. Del. Feb. 11, 2025), https://www.ded.uscourts.gov/sites/ded/files/opinions/20-613_5.pdf.
7 For example, in 2020, the Administration issued Artificial Intelligence for the American People, which reaffirmed the President’s commitment to protecting intellectual property in the AI environment, stating: “[t]he United States has long been a champion and defender of the core values of freedom, guarantees of human rights, the rule of law, stability in our institutions, rights to privacy, respect for intellectual property, and opportunities to all to pursue their dreams.” (emphasis added) (available at https://trumpwhitehouse.archives.gov/ai/ai-american-worker/). The first Trump Administration also rejected attempts to weaken copyright protections in the US-Mexico-Canada Agreement (“USMCA”). See generally H.R. 5430 (2024), https://www.congress.gov/crs-product/R44981.
8 The robots.txt protocol is one example. While robots.txt does alert scraping tools not to ingest the associated copyrighted work, it has significant limitations: it is only effective to the extent it is recognized and respected, and it was not designed to target scraping for generative AI ingestion. Robots.txt may also prevent a search engine from indexing the work. A copyright owner may want their work to be scraped for search engine purposes—so it can be found on the internet—but not for AI ingestion. Even if robots.txt is used, it does not attach to the copyrighted work itself but operates at the URL or website level.
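By way of illustration only, the following sketch uses Python's standard-library robots.txt parser with a made-up “ExampleAIBot” crawler name to show the mechanics described above: the directives operate per URL on a single site, distinguish crawlers only by the user-agent names they announce, and have no effect on a scraper that never consults the file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for one website: allow general crawling,
# disallow a made-up AI-training crawler.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

url = "https://example.com/photos/sunset-1234.jpg"
print(rp.can_fetch("ExampleAIBot", url))   # False: the AI crawler is asked not to fetch
print(rp.can_fetch("SearchBot", url))      # True: ordinary indexing is still permitted
# A non-compliant scraper simply never calls can_fetch(), so the signal is advisory only,
# and it covers URLs on this site, not other copies of the same work hosted elsewhere.
```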
9 Kate Knibbs, Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal, WIRED (Jan. 9, 2025, 5:33 PM), https://www.wired.com/story/new-documents-unredacted-meta-copyright-ai-lawsuit/.
10 We can look to the European Union to see that there is confusion over what is considered a proper “machine-readable” format, a question which has been raised by at least one German court. See Landgericht Hamburg [Hamburg Regional Court], Sept. 27, 2024, Kneschke v. LAION, 310 O 227/23 (Ger.). See also Roy Kaufman, AI Rights Reservation: Human Readable is Machine Readable — An Interview with Haralambos (“Babis”) Marmanis (Feb. 17, 2025), https://scholarlykitchen.sspnet.org/2025/02/17/ai-rights-reservation-human-readable-is-machine-readable-an-interview-with-haralambos-babis-marmanis/.