White House Seeks Industry Input on Laws and Rules that Hinder AI Development
What You Need to Know
Key takeaway #1
On September 26, the White House issued a request for information from the public on Federal laws, rules, and policies that “unnecessarily hinder” the development or deployment of AI.
Key takeaway #2
Impacted companies should seriously consider submitting comments before the deadline on October 27, 2025.
Key takeaway #3
This request for information is the first significant deregulatory push by the Administration, but lawmaking and rulemaking by states and other efforts by the Federal government to intercede in the AI industry continue, complicating the compliance environment.
Client Alert | 7 min read | 09.29.25
On September 26, the White House invited the public to submit comments on Federal laws, rules, and policies that “unnecessarily hinder” the development or deployment of artificial intelligence (AI) technologies in the United States. This request marks one of the Trump Administration’s most substantial moves yet to reduce the regulatory burden on AI. Respondents may submit comments through a government website until October 27, 2025.
The Office of Science and Technology Policy (OSTP) issued this “Request for Information: Regulatory Reform on Artificial Intelligence” in response to the Administration’s Winning the Race: America’s AI Action Plan, released in July 2025 (Plan). The Plan’s first recommendation was for OSTP to issue a Request for Information (RFI) from “businesses and the public at large about current Federal regulations that hinder AI innovation and adoption, and work with relevant Federal agencies to take appropriate action.” The White House’s move marks a material shift from jurisdictions that are moving to regulate potential AI harms more aggressively, such as the European Union through the EU AI Act.
Companies developing or deploying AI systems that are affected by “Federal statutes, regulations, agency rules, guidance, forms, and administrative processes” should seriously consider submitting comments to OSTP. Doing so helps ensure that any new AI regulatory action accounts for industry’s operational experience and lessons learned that might otherwise be overlooked by policymakers. The RFI highlights regulations governing the healthcare and transportation sectors as particularly ripe for review and reform. Outside counsel can work with clients to craft these submissions.
RFI Poses Detailed Questions
Specifically, the RFI invites responses to one or more of six questions:
- What AI activities or innovations “are currently being inhibited, delayed, or constrained” by Federal laws, regulations, or policies? Respondents should describe the specific barrier that “directly” or “indirectly” hinders AI development or adoption.
- What “specific Federal statutes, regulations, or policies” are barriers to AI in the respondent’s sector? Respondents should cite the Code of Federal Regulations (CFR) or the U.S. Code (U.S.C.), where applicable.
- What “administrative tools,” such as waivers, exemptions, and experimental authorities, are “available, but underutilized” to circumvent existing policy frameworks that are inappropriate for AI applications? Respondents should cite to specific CFR or U.S.C. sections, where applicable.
- What “modifications” should be made to statutory or regulatory regimes that are “structurally incompatible with AI applications”? How can these modifications be made “to enable lawful deployment while preserving regulatory objectives”?
- What kind of clarifications, such as standards, guidance documents, or interpretive rules, would be most effective at explaining “how existing rules cover AI activities”?
- Do barriers “arise from organizational factors”—such as gaps in workforce readiness, institutional capacity, or cultural acceptance—that impact the use of Federal laws or policies, and how might Federal action address these barriers?
RFI Highlights Tension between Old Assumptions and a New AI Era
The RFI’s questions derive from the White House’s belief that most existing regulatory regimes and policy mechanisms were developed prior to the rise of AI technology. Such frameworks “often rest on assumptions about human-operated systems that are not appropriate for AI-enabled or AI-augmented systems.” The RFI highlights categories of assumptions that are poorly suited for these emerging technologies:
- Decision-Making and Explainability – Policies assume that the processes and rationale for decisions can be traced to a human.
- Liability and Accountability – Policies allocate responsibilities and premise remedial frameworks on humans or “clearly identifiable organizational decision points.”
- Human Oversight and Intervention – Policies require humans to oversee, review, intervene, or continually supervise operational processes.
- Data Practices – Policies on data (its collection, retention, provenance, sharing, and permitted use) “do not account for the scale, reuse, or training dynamics” of AI.
- Testing, Validation, and Certification – Approaches to testing, authorization, and review assume “static products or human-delivered services, rather than adaptive or continuously learning systems.”
Outdated Assumptions Obstruct AI Development and Deployment
Policy frameworks that rest on assumptions of human-operated systems or ignore technological progress inhibit the development, deployment, and adoption of AI across sectors, the RFI contends. The RFI identifies five barriers related to these obstructive policy frameworks:
- Regulatory Mismatches – Requirements that assume humans are at the center of decision making, such as “mandatory human supervision or documentation practices[.]”
- Structural Incompatibility – Statutes or regulatory frameworks that prohibit automated data practices or require human decisionmakers. Such frameworks are “structurally unable to accommodate particular AI applications,” and potentially necessitate “legislative change or comprehensive regulatory revision.”
- Lack of Regulatory Clarity – Existing laws that plausibly cover AI activities but include insufficient interpretive guidance or direction for compliance, risk management, and enforcement.
- Direct Hindrance – Existing laws and regulations that impede AI use, such as guidance that prevents Federal workers from using AI on their work computers.
- Organizational Factors – Organizational factors that encumber AI adoption, including a lack of workforce readiness, institutional capacity, or cultural acceptance.
OSTP is particularly interested in feedback identifying “regulations that, while serving important purposes, contain requirements or assumptions incompatible with how AI systems function or could function.” The RFI welcomes comments on any regulation across all sectors that “may create unnecessary barriers to beneficial AI applications, even if the core policy objectives remain valid.”
RFI Represents a Deregulatory Push Amid Flurry of Governmental Intervention in AI
OSTP’s information request is the first concrete action aimed at reducing AI regulations. The Plan also recommends that the Federal Trade Commission (FTC) review prior investigations or judgments that burden AI innovation, and it suggests that the Office of Management and Budget (OMB) induce state governments to remove “burdensome AI regulation” through the possible denial of Federal funding.
But the White House has taken other steps to insert itself in AI development and deployment:
- By October 21, in response to an Executive Order issued in July, the Department of Commerce is to establish and implement an American AI Exports Program to coordinate a national effort to support the export of the American AI tech stack, i.e., the whole ecosystem of tools and technologies that run AI systems.
- By late November, OMB is due to issue guidelines to implement a separate Executive Order to prevent the government from procuring AI models that the Administration deems “woke”—which the EO describes as AI that is not “truth-seeking” and ideologically neutral.
- In recent months, the White House has also sought to impose an export fee on certain semiconductors sold to China and assumed a stake in a U.S. semiconductor company.
This interplay between the Administration’s domestic quasi-deregulatory agenda on AI, its focus on enhancing American AI exports while simultaneously constraining access to certain AI components by foreign adversaries, and its overall objective to “achieve global dominance” in AI is notable. The international implications of AI regulatory reform should therefore also inform OSTP comments made by companies developing or deploying AI tools or systems intended for use abroad. Companies may wish to consider whether the U.S. regulatory regime ought to strive for a degree of interoperability, mutual recognition, or parity with the approach taken in key foreign partner markets as a mechanism to advance broader adoption of the U.S. AI tech stack. Similarly, companies might consider whether advocacy for deregulation in the U.S. requires that the Administration advocate similar approaches abroad, including ongoing efforts to reduce regulatory divergence in the fields of data protection and intellectual property, which both tie directly to AI.
The White House RFI contrasts with efforts to regulate AI more extensively in other jurisdictions. The EU AI Act is one such initiative, introducing detailed rules on the development and use of AI and, in particular, categorizing AI systems by level of risk and potential harm. These rules carry substantial fines calculated as a percentage of global revenue and can apply to U.S. businesses, much as the EU’s GDPR data protection rules do. This divergence from the Trump Administration’s approach is therefore likely to complicate legal compliance for companies operating in both markets.
Finally, the RFI queries the public only for information on Federal regulations. The states remain the locus of lawmaking and rulemaking on AI, and companies should attend to their growing requirements in this space. For more on these trends, please join Crowell & Moring for a webinar on October 16, 2025 on “The Artificial Intelligence Agenda from Capitol Hill to State Capitals: Where We Are and Where We Are (Probably) Going.”
***
Crowell & Moring LLP and Crowell Global Advisors will continue to monitor U.S. Government efforts to adopt, promote, and regulate AI. Our lawyers and policy professionals are available to advise clients on responding to RFIs and engaging in AI policy development, across government contracts, international trade, privacy and cybersecurity, technology, healthcare, and life sciences, among other areas. For further information, please contact our team.