White House National AI Policy Framework Calls for Preempting State Laws, Protecting Children
What You Need to Know
Key takeaway #1
The White House framework seeks to preempt “cumbersome” state AI laws in favor of a single national standard, but it faces an uncertain future amid political opposition. The framework also recommends that laws of general applicability remain in place, including those protecting children and consumers and preventing fraud.
Key takeaway #2
The framework calls for age-assurance requirements, stronger parental controls, and measures to reduce minors' exposure to harmful content, while deferring to the courts on the impact of intellectual property law on AI training on copyrighted material.
Key takeaway #3
While Congress considers the framework and other legislative proposals, companies will need to continue navigating a patchwork of state and federal regulations. Until a national standard is codified and regulatory alignment with key U.S. trading partners is advanced, businesses developing or deploying AI systems face ongoing and significant compliance burdens, both domestically and internationally.
Client Alert | 11 min read | 03.25.26
In its latest attempt to establish a national AI regulatory standard and quash “cumbersome” state AI laws, the White House on Friday, March 20, 2026, released legislative recommendations for a National Policy Framework on Artificial Intelligence.
The high-level, four-page document sets out seven general priorities for Congress to codify into federal law. The administration calls for the preemption of state laws that interfere with a “minimally burdensome” national standard; advocates for some protections for children, including stronger parental controls; encourages Congress to let courts resolve whether AI training on copyrighted works violates intellectual property (IP) laws; and backs requirements for companies to foot the bill for higher energy costs, while making it easier to build data centers, among other recommendations.
The framework follows an executive order President Trump signed in December 2025 that both directs the preparation of this framework and seeks to use a mix of executive authorities and the threat of litigation to stymie some state AI regulation in favor of a national, codified standard.
It remains an open question whether the recommendations in the framework will become law. GOP House leaders said they would work to implement the White House priorities, and earlier last week, Sen. Marsha Blackburn (R-TN) introduced the TRUMP AMERICA AI Act to establish a national standard. But both Democrats and some Republicans on the Hill have balked over the past year at efforts to preempt state AI laws or institute a state AI moratorium. Also on March 20, Democrats introduced the GUARDRAILS Act to prohibit President Trump’s December 2025 executive order from taking effect, to ensure states can adopt laws “to protect the American public in the face of rapidly evolving AI technologies.”
Given its uncertain future and vague recommendations, the framework is but the beginning of federal legislation to establish a national AI policy standard. While Congress wrestles with the matter, the contested state and federal landscape will continue to impose regulatory burdens on companies developing, integrating, and deploying AI systems.
I. Overview of the Framework
Protecting Children and Empowering Parents
The framework calls on AI platforms to take measures to protect children and support parents. It asks Congress to empower parents with tools — presumably by requiring companies to provide them — to manage children’s privacy and account settings, screen time, and content exposure.
According to the framework, Congress should create “commercially reasonable” age-assurance requirements for AI platforms, although parental attestation would suffice. Congress should also require AI platforms to implement features that reduce the risks of sexual exploitation and self-harm among minors.
While cautioning against open-ended liability, the framework also recommends that Congress affirm that existing child privacy protection laws apply — including limits on data collection and targeted advertising — and that federal law not preempt generally applicable state laws protecting children, including laws addressing the generation of real or AI-generated child sexual abuse material. These provisions are notable because companies are already subject to laws such as the Children’s Online Privacy Protection Act and because the majority of state AI laws seek to ban the spread of AI-generated nonconsensual intimate imagery, which often targets children.
If Congress leaves generally applicable laws in place, it would also likely not disturb the products liability and consumer protection investigations and litigation that the Federal Trade Commission (FTC), state regulators, and private plaintiffs have relied on to target AI developers for creating chatbots and other features that allegedly led to minors’ self-harm and suicide.
The framework also cites approvingly the most prominent federal AI law, the TAKE IT DOWN Act, which prohibits the nonconsensual publication of intimate visual depictions, including deepfakes, and, beginning in May 2026, requires online platforms to remove them upon notice from victims.
Safeguarding and Strengthening American Communities
The Trump administration has pushed for the development of AI data centers, including by furthering a Biden-era policy of identifying federal land on which to build AI infrastructure, despite growing public concern over the cost and impact of such buildouts.
The framework attempts to walk this line, calling for AI infrastructure to be developed in a fashion that “strengthen[s] American communities” through economic growth while protecting against “harmful impacts.” The framework asks Congress to ensure that residential ratepayers do not pay higher electricity costs because of data center construction. The framework cites the White House’s March 2026 Ratepayer Protection Pledge, a voluntary agreement signed by major technology companies not to raise electricity bills for households. At the same time, it urges Congress to streamline federal permitting for AI infrastructure and provide resources (likely grants, loans, or tax breaks) to small businesses for AI deployments.
The framework also presses Congress to augment law enforcement efforts to combat AI impersonation scams that target seniors — echoing an executive order on cyber fraud the administration issued earlier in March — and ensure national security agencies possess sufficient technical capacity and understanding of frontier AI.
In comparison to the administration’s AI Action Plan announced in July 2025, the framework includes no focused references to the national security and military applications and attendant risks of AI. Nor does it contain an assessment of the threats posed by nation-state actors to American AI interests — particularly China. The action plan recognized China’s goal to influence AI development, including through international standards bodies and by circumventing U.S. export controls. It remains to be seen how and through what mechanisms the administration will continue to target nation-state adversaries as it seeks to limit their access to the latest AI hardware and software and the diplomatic levers it will pull with allies to effect these controls.
Respecting Intellectual Property Rights and Supporting Creators
When announcing the AI Action Plan, President Trump decried the use of IP laws to stifle AI training on copyrighted works and the practice of AI companies “hav[ing] to make deals with every content provider.”
The framework similarly expresses the view that training AI models on copyrighted material “does not violate copyright laws.” But it also entreats Congress to defer to the courts to resolve questions on the reach of IP law over AI training — litigation that remains extensive and ongoing — and to continue to monitor the issues, lest novel considerations mandate additional protections for content creators. Notwithstanding the president’s prior statements, the framework suggests Congress could “consider enabling licensing frameworks or collective rights systems” for rights holders to negotiate compensation from AI developers. The framework adds that such legislation should not address whether licensing is required. While a handful of licensing initiatives have started to percolate through the legislative process, no clear consensus has emerged. Congressional recognition of a particular licensing regime — especially one including a safe harbor provision — could influence AI developers to consider this approach to reduce litigation risk.
The framework also urges the establishment of a federal framework to protect individuals from the unauthorized distribution or commercial use of AI-generated replicas, with exceptions for First Amendment-protected uses. The framework makes no mention of the bipartisan NO FAKES Act, currently before Congress, which would likely accomplish this priority by protecting individuals’ likenesses from unauthorized AI-generated recreations.
Preventing Censorship and Protecting Free Speech
The framework exhorts the federal government to defend free speech while preventing AI systems from being used to “censor lawful expression or dissent.” It calls on Congress to prevent the U.S. government from “coercing technology providers” to ban or alter content based “on partisan or ideological agendas” and invites Congress to provide a means for Americans to seek redress for government “efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”
It is unclear how Congress should implement these priorities. The framework also does not explain how these recommendations square with the administration’s policies — expressed in an executive order, Office of Management and Budget guidance, and a proposed U.S. General Services Administration contract clause — to require the federal government to procure AI systems that only generate information consistent with what the administration terms “Unbiased AI Principles” that, among other strictures, eschew “Diversity, Equity, and Inclusion.”
Enabling Innovation and Ensuring American AI Dominance
To promote innovation and accelerate deployment, the framework asks Congress to establish “regulatory sandboxes” for AI applications, echoing part of the AI Action Plan. Such sandboxes usually allow for testing of AI products under a regulator’s supervision and often with the waiver of regulatory requirements. The framework also presses Congress to provide resources to make federal datasets accessible to industry in AI-ready formats and to refrain from creating a new federal enforcement or rulemaking body and, instead, rely upon sectoral regulators to police practices.
Educating Americans and Developing an AI-Ready Workforce
The framework requests that Congress expand efforts to study AI’s impact on workers, bolster capabilities at land-grant institutions to provide technical assistance and develop “AI youth development programs,” and use “non-regulatory methods” (not otherwise defined) to ensure existing education and workforce training programs “affirmatively incorporate AI training.” For years, lawmakers have emphasized studying the impacts of AI and focusing on retraining and reskilling, including in executive orders from 2023 and 2025.
Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws
Reaffirming a longstanding administration position, the framework urges Congress to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard” and “not fifty discordant ones.”
At the same time, the standard should respect “key principles of federalism” by not preempting: “traditional police powers” to enforce laws of general applicability, including laws that “protect children, prevent fraud, and protect consumers;” state zoning laws; and requirements on a state government’s use of AI.
State laws that should be preempted include those that “regulate AI development,” “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI,” or penalize “AI developers for a third party’s unlawful conduct involving their models.”
It is unclear whether the first recommendation will implicate transparency laws recently adopted in California and New York that require developers of frontier AI models to create and maintain safety and security protocols, report significant safety incidents, evaluate their models for significant risks, and perform regular audits. The latter recommendation is reminiscent of “Section 230 immunity,” which broadly protects internet service providers from civil liability for third-party, user-generated content.
Finally, in the absence of a federal standard, companies today comply with dozens of varying state privacy laws. In some instances, companies operating in the U.S. default to adhering to the most stringent state law in pursuit of operational efficiency and/or to advance Americans’ trust in their products and services, resulting in a de facto national standard. Entities may adopt a similar approach to the fragmented federal and state regulatory picture in the AI context if a national standard is not adopted. The framework does not flag the numerous state-level data privacy laws as cumbersome, and it includes no reference to a federal privacy or data governance framework. Thus, it is unlikely to stimulate Congress to create one beyond what the FTC Act and sectoral privacy regulations require.
II. Impact
The framework is far from becoming law. But, as noted above, many of its recommendations find support in other laws, proposed bills, or longstanding bipartisan policies. The framework’s broad calls for federal preemption are also softened by its articulated carveouts, including for laws of general applicability that protect children, prevent fraud, and protect consumers. Most state AI laws address those issues, and many of those laws are the basis for ongoing AI-related litigation.
The framework’s silence on a few areas bears mention. It makes no reference to the risks of AI systems’ bias, nor does it seek to mitigate that harm through quality or testing requirements. It does not discuss civil rights, except for the prioritization of some free speech rights. And it makes no mention of the need to monitor performance of AI models or their deployment after they are created. It does not advocate for a dedicated, expert-led AI enforcement or regulatory oversight body for the nation.
The framework’s relationship with the TRUMP AMERICA AI Act will merit attention in the coming weeks and months. Sen. Blackburn’s bill goes further in some areas than the framework to protect what the bill deems the “4 Cs” (children, creators, conservatives, and communities) from exploitation, abuse, and censorship. It also would place a duty of care on AI developers, sunset Section 230 regulation, and require covered platforms to implement tools to protect minors from online harms, among other provisions.
Companies will need to stay attuned to evolving federal and state regulatory action, particularly in light of the possibility that political control of Congress could change later this year. The Trump administration’s framework might also have some influence internationally, considering its divergence from many international laws that impact AI development and deployment. It offers an AI regulatory model to jurisdictions seeking an alternative to the European Union’s historic approach to technology regulation, embodied in the EU AI Act. While the EU grapples with its own Digital Omnibus reviews, Vietnam has recently implemented an EU-aligned AI Law. Numerous other jurisdictions are actively assessing how to address the concerns identified in the framework. Thus, regulatory complexity for multijurisdictional AI developers and deployers will persist through 2026 and 2027.
Crowell & Moring will continue to monitor governments’ efforts to adopt, promote, and regulate AI. For further information, please contact our team.