White House AI Action Plan: Potential Implications for Health Care
Client Alert | 7 min read | 07.29.25
On July 23, 2025, the Trump Administration issued an artificial intelligence (AI) action plan titled “Winning the Race: America’s AI Action Plan” (the Plan) to guide AI innovation in the U.S. The Plan includes 90 policy recommendations that will shape future AI guidance and policies impacting a range of entities and industry sectors, including health care/life sciences and entities involved in clinical research.
As summarized in our recent client alert, the Plan establishes three pillars to guide the development of “American AI”: 1) accelerate AI innovation; 2) build American AI infrastructure; and 3) lead in international AI diplomacy and security. The Plan states that the U.S. must achieve global dominance in AI and contains recommendations on promoting innovation, ensuring economic competitiveness, and advancing national security. The Plan also identifies several health-specific issues, including support for scientific research and innovation, data quality and privacy issues, and AI standards development efforts. In the summary below, we highlight policy recommendations and directives for specific agencies included in the Plan that may impact health care/life science and research entities.
Deregulation and Interaction with State Law
In contrast to the previous administration, the Trump Administration is taking a “deregulatory approach” to guide AI development. To this end, it seeks to remove “bureaucratic red tape” and “onerous” regulations. The Plan states that the federal government should not allow federal funding for AI to be directed toward states with “burdensome AI regulations that waste these funds,” but further states that it should “not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
The Plan recommends that the Office of Science and Technology Policy (OSTP) issue a Request for Information to solicit public feedback on federal regulations that hinder AI innovation and adoption, and then take appropriate action. Building on President Trump’s Executive Order (EO) on “Unleashing Prosperity Through Deregulation,” the Plan directs the Office of Management and Budget (OMB) to work with federal agencies to identify, revise, or repeal regulations and guidance that it deems to unnecessarily hinder AI development or deployment. It recommends that OMB work with federal agencies that have AI-related discretionary funding programs to ensure that they consider a state’s AI regulatory climate when making funding decisions and “limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”
Additionally, the Plan directs the Federal Communications Commission (FCC) to evaluate whether state AI regulations interfere with its ability to implement its obligations and authorities. It also directs the review of all Federal Trade Commission (FTC) investigations, final orders, consent decrees, and injunctions commenced under the previous administration to ensure that they do not unduly burden AI innovation.
Notably, the Plan seeks to discourage (but does not define) “burdensome” regulation of AI by proposing to reduce federal support for states that have AI regulations that contravene the Trump Administration’s position. This recommendation follows an unsuccessful legislative attempt to include a ten-year moratorium on state regulation of AI, which was proposed as part of the House-passed version of the One Big Beautiful Bill Act (H.R. 1).
In recent years, several states have enacted legislation to govern entities’ development and deployment of AI at the state level. For example, Utah and Colorado were among the first states to enact comprehensive AI statutes, which define and govern “high-risk AI,” including systems used in health care and clinical settings. The Colorado law requires deployers to use “reasonable care” to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination” from the use of a high-risk AI system. Given the current Administration’s priorities, if the Plan’s policy recommendations are implemented in guidance, conflicting federal and state directives may complicate entities’ compliance programs. Moreover, some state AI programs include funding for entities to invest in AI projects. While the extent to which federal funding will be tied to existing state AI programs remains unclear, entities in states with stricter AI regulations may encounter reduced eligibility for federal support.
Enable AI Adoption and Build Scientific Datasets
The Plan seeks to foster a culture of AI innovation and to create high-quality, AI-ready datasets. It proposes establishing AI Centers of Excellence (i.e., regulatory sandboxes) around the country where entities can rapidly deploy and test AI tools. These efforts would be enabled by several federal agencies such as the Food and Drug Administration (FDA). The Plan also recommends that the National Institute of Standards and Technology (NIST) launch several sector-specific initiatives, including in health care, to convene a broad range of public, private, and academic stakeholders to develop national standards for AI systems.
The Plan’s recommendations under this section may have certain implications for the FDA’s regulation of AI-enabled medical devices and other AI-related FDA activities. Under the previous administration, the FDA issued draft guidance to provide recommendations on the use of AI intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality. Previous guidance focused on advancing transparency and ensuring that comprehensive, representative datasets are used to train AI. Whether future efforts will build on previous guidance and activity is unknown.
The Plan makes several recommendations related to AI datasets, including directing the National Science and Technology Council (NSTC) Machine Learning and AI Subcommittee to make recommendations on minimum data quality standards for the use of biological, materials science, chemical, physical, and other scientific data modalities in AI model training. It directs OMB to promulgate regulations on the presumption of accessibility and on expanding secure access, as required by the Confidential Information Protection and Statistical Efficiency Act of 2018, to increase access to federal statistical data. The Plan’s data recommendations may have implications for data privacy and security issues, especially as entities navigate compliance with established federal and state regulations.
Remove Ideological Bias and DEI
In line with previous Trump Administration actions, the Plan seeks to advance free speech and ensure that AI procured by the federal government does not reflect “social engineering agendas.” The Plan recommends that NIST revise the AI Risk Management Framework to eliminate references to “misinformation,” Diversity, Equity, and Inclusion (DEI), and climate change. It also recommends updating procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are “objective and free from top-down ideological bias.” On the same day that the White House issued the Plan, President Trump signed an EO titled “Preventing Woke AI in the Federal Government” to prevent AI models that incorporate “ideological biases or social agendas,” including DEI.
Given the lack of any guidance or definitions around the terminology used in this part of the Plan, it is unclear how health care/life science and research entities can comply with these recommendations. These entities should monitor guidance and other clarifying notices from federal agencies on this issue, most notably from the Department of Health and Human Services (HHS) and the FDA.
Invest in AI-Enabled Science
The Plan includes several recommendations designed to enable basic research to support entities’ AI-enabled scientific advancement. Many of the recommendations focus on public-private partnerships and government action to facilitate partnerships between organizations, including the use of Focused-Research Organizations (FROs), which are non-profit entities designed to tackle specific scientific or technological challenges that require coordinated effort and produce public goods. Through a collaboration of federal partners, including the National Science Foundation (NSF), the Plan recommends investing in automated cloud-enabled labs for a range of scientific fields, built by the private sector and federal agencies. It recommends the use of long-term agreements to support FROs or others using AI and other emerging technologies to make fundamental scientific advancements. The Plan also includes policy recommendations related to data, including proposing incentivizing researchers to release higher-quality datasets and requiring federally funded researchers to disclose AI models that use non-proprietary, non-sensitive datasets. These recommendations signal that increased data-sharing among federal agencies may soon take place, creating another potential point of tension between federal recommendations and state laws and regulations around data privacy and cybersecurity.
Invest in Biosecurity
The Plan highlights the importance of biosecurity efforts to prevent malicious actors from taking advantage of advancements in biology. The Plan proposes a multi-tiered approach designed to screen for malicious actors and requires all institutions that receive federal funding to use “nucleic acid synthesis tools and synthesis providers that have robust nucleic acid sequence screening and customer verification procedures.” It also includes recommendations to facilitate data sharing between nucleic acid synthesis providers and to enable national security-related AI evaluations. These recommendations may impact public institutions that work with or provide contracting services around sequencing, in addition to private entities that receive National Institutes of Health (NIH) funding but offer commercial products related to sequencing.
Takeaways
The Trump Administration’s AI Action Plan may shift compliance requirements for a wide variety of health care entities as they continue to develop and deploy AI. In the coming months, entities should expect agency activity to implement the Plan, in addition to federal AI initiatives and opportunities. Beyond monitoring developments coming out of the AI Action Plan, these entities should also begin examining their AI governance plans and identifying state law compliance obligations in order to harmonize compliance efforts. Crowell will continue to monitor federal and state AI developments as they become available. Please reach out if you have any questions.