White House AI Action Plan Seeks to Establish “Dominance,” Boost Innovation, and Scrutinize Regulations

What You Need to Know

  • Key takeaway #1

    The AI Action Plan and accompanying Executive Orders—the Trump Administration’s most significant AI efforts to date—emphasize innovation, adoption, and competitiveness.

  • Key takeaway #2

    The Plan calls for an examination of laws and regulations that stifle AI, but does not endorse a moratorium on, or federal preemption of, state AI laws and does not assert that the training of AI on copyrighted data constitutes “fair use.”

  • Key takeaway #3

    The Plan and EOs seek to fast-track the construction of energy and data center infrastructure and call both for greater exporting of the AI technology stack and for enhanced controls on certain AI components.

Client Alert | 16 min read | 07.25.25

On July 23, 2025, the White House released Winning the Race: America’s AI Action Plan (“the Plan”), the Trump Administration’s most significant policy statement on artificial intelligence to date.

The Plan is structured around three pillars—accelerating AI innovation, building U.S. AI infrastructure, and establishing U.S. global leadership in AI diplomacy and security—and it identifies more than 90 federal policy actions the Administration plans to take over the next year. The Plan, which the White House’s Office of Science and Technology Policy (OSTP) wrote after receiving over 10,000 comments from the public, does not have the force of law and acts instead as an exhortation for the government to support AI development through the provision of energy, the adoption of AI by government, and the relaxation of certain regulations.

Also on Wednesday, President Trump signed three executive orders (EOs) on AI: (1) accelerating federal permitting of data center infrastructure (“Infrastructure EO”); (2) promoting the export of the American AI “technology stack” (“Export EO”); and (3) preventing “woke AI” in the federal government (“Ideology EO”).

Taken together, the Plan and EOs express a permissive approach to AI regulation that emphasizes AI innovation, adoption, and competitiveness. How much of the Plan and EOs will translate into tangible action to reduce regulatory burdens on companies and spur “dominance” remains to be seen.

Key highlights of particular concern to clients include:

  • The Plan suggests the White House solicit the public for information on laws or regulations that hinder AI development. Likewise, the Plan recommends the Federal Trade Commission (FTC) review prior investigations or judgments that burden AI innovation. Affected companies should respond to this request and, where applicable, seek review by the FTC.
  • The Plan suggests pressuring states to remove “burdensome AI regulation” through the possible denial of federal funding. The Plan does not define “burdensome AI regulation,” but it explicitly asks the National Institute of Standards and Technology (NIST) to remove from its standards any mention of “diversity, equity, and inclusion,” “misinformation,” and “climate change.” The prospect of the curtailment of federal funds, however vaguely stipulated, creates a challenging and uncertain compliance environment for companies, given that states are currently implementing the most far-reaching AI laws and regulations.
  • While the Plan and the Export EO support the export of the American AI technology stack, the Plan pushes the Department of Commerce to (i) implement stricter export controls on semiconductor technologies critical to AI; (ii) enhance its efforts to identify illegal and unauthorized exports or diversions of U.S.-origin AI compute capacity to foreign adversaries, and enforce against those violations; and (iii) use a series of carrots and sticks with U.S. allies and partners to prompt them to impose similar controls. Impacted companies will want to reassess their compliance programs to address these shifts in enforcement.
  • The Plan suggests NIST and other agencies work with industry to incorporate AI-specific scenarios into cybersecurity incident response plans. Private entities should do the same.
  • The Plan and the Infrastructure EO aim to support the development of energy sources and data centers needed to speed AI development. The Plan recommends streamlining environmental reviews for data centers and related energy infrastructure, and the EO allows qualifying data center projects to receive financial assistance and expedited permitting reviews. It also directs several federal agencies to identify sites for the deployment of AI infrastructure on federal lands.
  • The Plan suggests the Department of Defense (DoD) take several steps to protect AI technologies and adopt them. Defense contractors should stay attuned to evolving DoD requirements.
  • The Plan and the Ideology EO seek to bar the government’s procurement of AI models that the Administration deems “woke”—that is, engaging in speech to which the White House objects. The Office of Management and Budget (OMB) will issue guidelines to implement this direction. Government contractors should track these developments, particularly given the government’s ill-defined prohibitions.
  • The Plan has several policy recommendations to support scientific research and innovation, with a particular emphasis on start-ups and early-stage companies. These include the creation of a national repository of data and encouraging the release of “high-quality datasets,” sandboxes to test new AI tools, efforts to ensure access to costly large-scale computing power through a “healthy financial market,” and other economic incentives. These recommendations will need to navigate various existing state, federal, and international laws and regulations around data privacy, cybersecurity, and AI (such as around bias), which will complicate implementation.
  • The Plan does not weigh in on contentious intellectual property issues and does not assert that the training of AI on copyrighted material constitutes “fair use.” Accordingly, regulatory agencies, courts and perhaps Congress will have to resolve these questions.
  • While differing in its rhetoric and emphasis, the Plan is similar in substantial respects to efforts of the Biden Administration to address existential AI risks, promote secure-by-design cybersecurity, develop additional energy resources, lead in global AI standard setting, and address the dangers of deepfakes, particularly in evidentiary proceedings.
  • The White House remains only one player among a crowded field of policymakers focused on AI. The Plan and accompanying EOs land amid a swirl of state and international AI lawmaking and scores of lawsuits that are impacting how both technology developers and users act in an increasingly complex environment. As enthusiasm for some forms of regulation recedes in the federal executive, companies should expect increased efforts by regulatory advocates in these other venues.

An annotated overview of the Plan and the EOs follows:

Pillar 1: Accelerate AI Innovation

Regulatory Review. The Plan seeks to reduce AI regulation by asserting that federal funding should not go to states with “burdensome” AI regulations. While averring its support for states’ rights, the Plan invites the Federal Communications Commission (FCC) to investigate whether state AI laws interfere with the FCC’s statutory duties. It also recommends OMB withhold discretionary funding from states whose AI regulatory regimes hinder the effectiveness of the funding or award.

But the Plan calls neither for a moratorium on state lawmaking on AI, a measure recently considered in the U.S. Congress, nor for a federal law preempting state laws. The Plan itself is vague on what constitutes “burdensome” and “onerous” regulations. The government will have to establish clearer guidelines to act on such broad pronouncements, and it could well face legal challenges if federal agencies attempt to withhold Congressionally appropriated funds.

In a speech at an AI Summit the day the White House released the Plan, President Trump went further than his own Plan, calling for a single federal “rule and regulation” that would “supplant” state action. Congress has tried unsuccessfully over the last few years to pass a comprehensive AI law, including through a bipartisan task force, and there is no indication that near-term efforts will fare better, likely leaving federal agencies as the mechanism for setting and enforcing AI-related federal rules. States, international bodies, and courts will also enforce their own AI-related rules and remedies.

Of note for companies, the Plan calls for the OSTP to issue a Request for Information (RFI) inviting commenters to identify current federal regulations that they believe hinder AI innovation, which federal agencies would then review and revise. If such an RFI issues, companies should plan to submit comments, particularly on rules that do not explicitly involve AI but that nonetheless restrain innovation.

The Plan also recommends the FTC review and modify prior investigations and dispositions that burden AI innovation. Previously investigated entities may wish to ask the FTC to revisit their cases.

“Woke” AI. In line with the Ideology EO (discussed below), the Plan directs NIST to revise its AI Risk Management Framework to eliminate references to “misinformation,” “Diversity, Equity, and Inclusion,” and “climate change.”

Broadening Access to Computing Power. The Plan calls for the creation of a supportive environment for the development of open-source and open-weight models, which anyone can download and modify. It also calls on various federal agencies to increase the research community’s access to private-sector computing.

Enabling AI Adoption. The Plan calls for a coordinated federal effort to establish a “try-first” culture for AI across American industry. It suggests establishing regulatory sandboxes at various agencies to allow for the testing of AI tools and for the establishment of domain-specific standards for how to measure AI increases in productivity. Likewise, the Plan calls for the government to adopt AI by ensuring all federal employees, to the extent practicable, have access to frontier language models and coordinating government mechanisms for interagency collaboration on adoption and procurement. AI developers should stay attuned to opportunities to partner with the federal government to meet these procurement needs.

Supporting Workers, Manufacturing, and Science. The Plan calls for expanding AI literacy and skills across the economy, establishing an AI Workforce Research Hub, and studying workforce disruption, in moves strikingly similar to those directed to the Department of Labor by the Biden Administration in October 2023. The Plan also calls for investing in AI, robotics, and drone manufacturing, cloud-based infrastructure, and research. It recommends a multi-agency effort to advance AI interpretability—the ability to understand how large language models (LLMs) work—and test AI systems for transparency and vulnerabilities.

Building Datasets. Broadly speaking, the value of an AI system is determined by its software, computing power, and the data on which it trains. The Plan calls on various federal agencies to make recommendations for minimum data quality standards for various AI training modalities, including biological data, and lower barriers of access to federal data, among other steps.

Building AI Evaluations. Evaluating AI performance has become a perennial challenge. The Plan calls on regulators to explore the use of evaluations in their application of existing law to AI systems and publish guidelines for federal agencies to conduct their own evaluations.

Protecting AI. In a move sought by AI frontier labs, the Plan directs the Department of Defense and other agencies to collaborate with American AI developers to help protect them against malicious cyber actors, insider threats, and other risks.

Combating Courtroom Deepfakes. Commentators have long raised concerns that litigants may introduce AI-manipulated evidence into court. The Plan suggests federal agencies adopt standards for considering such evidence similar to a federal rule of evidence currently under consideration by the Advisory Committee on Evidence Rules (FRE 901(c)), which in some circumstances requires proponents of evidence to show that its probative value outweighs its prejudicial effects. Parties before agencies should prepare to answer such challenges to their evidence and raise similar questions against opponents.

Pillar 2: Build American AI Infrastructure

Streamlining Permitting. To provide the energy AI requires, the Plan recommends the establishment of new “Categorical Exclusions” under the National Environmental Policy Act (NEPA) that would exempt such developments from full environmental assessments. It also recommends streamlining and expediting permitting processes under the Clean Water Act.

Utilizing Federal Lands. Additionally, the Administration recommends making available federal lands for the construction of new data centers. The Plan also recommends stabilizing the electric grid and encouraging compliance with nationwide standards for resource adequacy.

Semiconductor Manufacturing. The Plan calls for a revitalized United States-focused microchip industry and explicitly advocates the use of offices established by the Biden-era CHIPS Act, whose repeal President Trump had previously advocated.

Cybersecurity, Secure-by-Design, and Incident Response. The Plan recommends various measures to bolster cybersecurity. These include ensuring all AI used in homeland security is “secure-by-design”—a Biden Administration initiative to build cybersecurity principles into software systems from the start. The Plan also recommends the creation of an AI Information Sharing and Analysis Center (ISAC) by the U.S. Department of Homeland Security to promote the sharing of AI-security threat information with critical-infrastructure entities. (Anthropic recommended such a forum in its comments to OSTP.) Furthermore, the Plan suggests issuing guidance to private sector entities on remediating and responding to AI-specific threats and coordinating the sharing of known AI vulnerabilities among federal agencies and the private sector. And it encourages federal agencies to collaborate with industry to incorporate AI-specific scenarios into cybersecurity incident response plans. We can expect such best practices to proliferate throughout the private sector.

Training a Skilled Workforce. The Plan calls for investment in the workforce that will build, operate, and maintain AI infrastructure. The Plan recommends a national initiative to identify the necessary occupations and invest in industry training.

Department of Defense Recommendations. The Plan recommends the DoD develop cybersecurity safeguards to account for the increased use of AI systems by the U.S. government, while also proactively expanding the use of AI within the Department itself. It recommends DoD monitor the use of AI by the U.S. and foreign governments to ensure the United States stays abreast of foreign competition in AI adoption and innovation. It also recommends protecting AI technologies through enforcement of U.S. export controls. The Plan further asks DoD to drive internal adoption of AI, including by incorporating AI into the military colleges’ curricula and automating Department workstreams.

Pillar 3: Lead in International AI Diplomacy and Security

Exporting AI to U.S. Allies. The Plan and the Export EO (see below) attempt to balance the promotion of the export of U.S. technology with regulations and enforcement to promote U.S. national security and foreign policy. The Plan recommends that the Department of Commerce implement stricter export controls on semiconductor technologies critical to AI, including by identifying potential mechanisms to geolocate chip exports, imposing expanded controls on semiconductor manufacturing subsystems, and using the Foreign Direct Product Rule (FDPR) and secondary tariffs to incentivize international cooperation on U.S. controls. The Export EO pushes executive branch agencies to encourage exports of American AI full-stack technology, as long as those exports meet what the Plan calls “U.S.-approved security requirements and standards,” which are under development.

International Standards Based on American Ideals. The Plan calls for the United States to spearhead diplomacy on international AI standards to prevent Chinese influence over these governance bodies and to ensure that any international regulations or guidelines “promote innovation, reflect American values, and counter authoritarian influence.”

Address Risks. The Plan recommends the government “[e]valuate frontier AI systems for national security risks,” including mass destruction and cyber risks, in partnership with frontier AI developers. The Plan also calls for the evaluation and assessment of potential security vulnerabilities and “malign foreign influence” from adversaries’ use of AI systems in critical infrastructure. Furthermore, the Plan recognizes the dangers AI poses for the synthesis of harmful biological pathogens and suggests a multitiered approach, working with allies, to use new tools, infrastructure, and mechanisms to screen for malicious actors. These initiatives are redolent of the efforts by the Biden Administration to work with technology firms to address existential risks.

Copyright Left Out

The Plan does not address the contested issue of the scope of copyright protections in the training of AI models. This runs contrary to the comments of some significant technology labs, which had suggested the Plan state that training AI systems on copyrighted materials constitutes “fair use.” The silence is also notable given the Administration’s decision to remove the Head of the Copyright Office shortly after the office issued a draft report that questioned whether AI training on copyrighted material qualified as fair use.

During the AI Summit, President Trump suggested that current copyright laws hinder the development of AI and that AI models need to be able to learn from copyrighted materials without compensating copyright holders. The Plan, however, leaves this issue unaddressed, suggesting it will be the courts, and perhaps Congress, that resolve these questions.

Executive Orders

  • “Accelerating Federal Permitting of Data Center Infrastructure”

This Infrastructure EO is intended to facilitate the “rapid and efficient buildout” of data center infrastructure by making certain “Qualifying Projects” eligible for federal financing and expedited permitting reviews. Specifically, it directs the Secretary of Commerce to identify avenues to provide financial support—potentially including loans and loan guarantees, grants, tax incentives, and offtake agreements—for data centers, data center components, and related energy infrastructure that meet certain capital expenditure, load capacity, or other requirements.

In addition, the EO requires the Council on Environmental Quality (“CEQ”) to identify existing categorical exclusions from the National Environmental Policy Act that can be applied to expedite the permitting of Qualifying Projects and directs CEQ to collaborate with relevant agencies to identify new categorical exclusions.

It also attempts to streamline permitting reviews by directing the Environmental Protection Agency to develop or modify regulations under the Clean Air Act; the Clean Water Act; the Comprehensive Environmental Response, Compensation, and Liability Act; and the Toxic Substances Control Act for Qualifying Projects, and by requiring the Secretary of the Army to determine whether to issue a nationwide Section 404 Clean Water Act permit or Section 10 Rivers and Harbors Appropriation Act permit for certain activities by Qualifying Projects. The EO also directs a programmatic consultation approach under the Endangered Species Act, rather than project-by-project reviews.

The EO works to incentivize beneficial reuse of legacy contaminated properties by directing EPA to identify Brownfields and Superfund sites for Qualifying Projects and develop guidance to expedite redevelopment.

Finally, the EO authorizes certain federal agencies to make federal lands available for use by Qualifying Projects.

  • “Promoting the Export of the American AI Technology Stack”

This Export EO establishes a coordinated national effort to support the American AI industry by removing barriers to American companies exporting their AI systems globally. Within 90 days of the Order, the Secretary of Commerce, the Secretary of State, and the Director of OSTP must establish and implement the American AI Exports Program, under which developers of full-stack AI technology packages may submit proposals for inclusion in the AI Exports Program.

Companies that are selected as part of the AI Exports Program will be designated as “priority.” Priority AI systems will have the benefit of technical, financial, and diplomatic resources, Federal government-backed partnerships, and market access to promote export.

While this EO does not say so explicitly, the Plan notes that after the consortia are selected, State and Commerce will facilitate only those deals that meet U.S.-approved security requirements and standards. These proposals are likely to be similar to those approved during President Trump’s recent trip to the Middle East, where, for example, as a condition of these deals, the United Arab Emirates agreed to “further align their national security regulations with the United States, including strong protections to prevent the diversion of U.S.-origin technology.”

  • “Preventing Woke AI in the Federal Government”

This Ideology EO addresses what the Administration refers to as an “existential threat to reliable AI”—that is, AI models that incorporate “ideological biases or social agendas,” including “diversity, equity, and inclusion.” The EO mandates that any AI model procured for use by the federal government be “truth-seeking” and ideologically neutral. OMB will issue guidance requiring government contractors to adhere to these so-called “Unbiased AI Principles” in the LLMs they furnish to the government.

This EO faces several challenges. First, its terms are undefined, and the meaning of truth and ideological neutrality in the AI context is not self-evident. Second, to the extent that the government seeks to abridge speech by private entities, it may face First Amendment challenges. Finally, it is often technically difficult to ascertain why LLMs produce particular outputs for given inputs, and preventing outputs that express disfavored ideological opinions may exceed a model builder’s abilities.

Regardless, government contractors will need to review OMB guidance carefully and adhere to Administration requirements as well as possible to avoid enforcement actions or loss of funding.

***

Crowell & Moring LLP and Crowell Global Advisors will continue to monitor U.S. Government efforts to adopt, promote, and regulate AI. Our lawyers and policy professionals are available to advise clients as well as those taking an active role in AI policy development, across government contracts, international trade, privacy and cybersecurity, technology, healthcare, and life sciences, among other areas.

For further information, please contact our team.
