
FDA’s AI in Early Phase Clinical Trials RFI: An Opportunity to Help Set the Rules of the Road

Client Alert | 6 min read | 05.11.26

Consistent with recent FDA initiatives aimed at leveraging AI technologies and improving early-phase clinical trial conduct, FDA has issued a Request for Information (RFI) seeking input on a proposed AI-enabled optimization pilot program for early-phase clinical trials. The issues on which FDA is requesting information fall into two categories: (A) pilot program design and implementation and (B) program evaluation metrics and success criteria.

The RFI signals three high-level themes that life sciences, biotech, digital health, and medical device companies should treat as an early indicator of where FDA guidance may be heading around the use of AI in clinical trials:

  1. “Trustworthy AI” as the organizing principle — The RFI frames evaluation of AI in early-phase trials around trustworthiness concepts aligned with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) (e.g., validity, safety, accountability, transparency, privacy, and fairness).
  2. Earlier and more structured sponsor–FDA engagement — FDA is encouraging sponsors to collaborate with the agency early, so that FDA can understand context of use, risks, and controls while these AI-enabled approaches are being designed and deployed (rather than at a later stage, after problems may have already arisen).
  3. A path toward “co-developing” expectations/standards — The pilot concept contemplates a setting where sponsors/developers, regulators, and other stakeholders can test and refine evaluation approaches, metrics, and governance controls for AI used in trial conduct and early decision-making (including FDA regulatory decisions and sponsor-internal decision points).

For companies, the main near-term opportunity is to shape the practical standards that may ultimately govern AI used in recruitment, dose selection, endpoint measurement, and safety monitoring — along with the documentation and oversight FDA may expect when AI meaningfully influences trial decisions.

Companies that are already deploying (or planning to deploy) AI for recruitment, eligibility, endpoint assessment, or safety signal detection in Phase 1/early Phase 2 should consider submitting comments by May 29, 2026.

What FDA is trying to learn (and why it matters)

The RFI seeks input on how to structure the pilot program and how to measure success when AI is used to improve early-phase clinical trial efficiency and decision-making (including dose selection and safety monitoring). FDA is also asking what evidence should be used to demonstrate that AI systems are reliable, robust across settings and populations, and appropriately governed, using a “trustworthy AI” framing aligned with the NIST AI RMF.

The FDA’s request is significant because the pilot’s “scoreboard” (i.e., the metrics and controls FDA decides are necessary) can become a de facto template for what FDA expects — well beyond the pilot — especially if sponsors begin referencing pilot outcomes in meetings and submissions.

Key themes and implications for sponsors and developers

1. NIST AI RMF framing may raise the bar from “performance” to “governance”

Many organizations already evaluate AI with technical metrics such as accuracy and sensitivity/specificity. FDA’s questions suggest a broader view: how the tool is managed over time, how it behaves across sites and subgroups, how risks are mitigated, and what transparency is needed for regulators, investigators, and participants. The RFI notes that the pilot program will be guided by principles aligned with the NIST AI RMF. Companies should expect the pilot program to develop evaluation criteria not only for AI model performance but also for governance and controls (documentation, monitoring, security, privacy, bias/fairness practices, and accountability).

In particular, FDA asks for input on how to evaluate and/or measure validity/reliability, safety/risk mitigation, transparency/explainability for different stakeholders (including applicability to proprietary systems), privacy/data governance, and approaches to assessing fairness across demographic/clinical subgroups. FDA explicitly states that the trustworthiness of AI systems will be evaluated in alignment with the NIST AI RMF. Data privacy is an area of particular focus, with FDA asking: “How should privacy protections and data governance practices be evaluated?”

A consistent standard can lead to more predictable expectations and better safety/quality controls for AI in clinical trials, but it also creates increased documentation, monitoring, and vendor oversight obligations. This can be especially challenging for small biotech and early-stage companies.

2. Earlier FDA collaboration could become a competitive advantage — and a practical expectation

FDA is signaling that AI in early-phase clinical research is an area where early engagement may materially reduce friction and delays later. The RFI’s emphasis on pilot structure, comparative evaluation, and success metrics suggests FDA wants to help define “how to do this credibly,” not just evaluate outputs after the fact. Early engagement can produce better alignment on acceptable use cases and evidence expectations before a company scales an AI-enabled approach. Companies that do not engage early may face more uncertainty or rework later.

3. The pilot may influence where AI is considered acceptable in trial conduct (and where it is not)

The RFI spans both efficiency outcomes (e.g., faster enrollment) and decision quality outcomes (e.g., better go/no-go decisions, improved safety signal detection). Depending on how FDA weighs these categories, the pilot could push industry toward:

  • AI as a workflow optimization tool (lower-risk, easier adoption, increased integration); and/or
  • AI that meaningfully influences clinical decisions (higher-risk, likely higher evidentiary and governance burden).

How this could change early-phase trials — for better and for worse

For the better: If FDA and stakeholders converge on workable evidence and governance norms, AI could help reduce avoidable delays (screen failures, protocol deviations, inefficient dose finding) and improve early safety oversight, potentially reducing exposure to ineffective or unsafe dosing strategies.

For the worse: If the pilot’s expectations become overly prescriptive (or unclear), it could increase early-phase operational burden, create inconsistent reviewer expectations, and pressure sponsors into premature adoption or “box-checking” governance that is expensive but not meaningfully risk-reducing.

Why medical device and digital health companies should care even though the pilot is framed around drugs and biologics

Even when the sponsor is a drug/biologic company, many AI-enabled trial optimizations rely on technologies often built by:

  • digital health companies (wearables, sensors, eCOA platforms, engagement tools);
  • imaging/diagnostics AI developers;
  • clinical operations AI vendors (site selection, recruitment); and
  • SaMD or SaMD-adjacent tools.

As the FDA defines what “trustworthy AI” looks like in trial contexts, especially around drift, explainability, fairness, privacy, and governance, those expectations can flow down to vendors through procurement, contracting, and qualification processes.

Digital health/device companies should anticipate increased sponsor diligence on:

  • validation evidence and performance claims in trial populations;
  • versioning/change control;
  • audit logs and traceability;
  • cybersecurity posture;
  • subgroup performance and fairness evaluations; and
  • integration burden and user experience in clinical workflows.

Actionable steps for companies now (before the May 29 deadline)

  1. Decide whether to submit comments, particularly if you use (or are considering) AI for recruitment, eligibility, dose escalation, safety signal detection, or digital endpoints in Phase 1/early Phase 2.
  2. Define your “context of use” clearly: what the AI does, who uses it, what decisions it influences, and what happens if it is wrong. This framing aligns with FDA’s broader approach to AI credibility in regulatory contexts. 
  3. Treat vendor/CRO transparency and change control as regulatory issues, not just procurement points. If AI outputs affect trial decisions, sponsors may need enforceable rights around versioning, monitoring, audit support, incident response, and documentation support.
  4. Manage confidentiality in submissions. FDA notes electronic comments are posted unchanged; confidential information requires a different submission approach.

How Crowell can support companies

The intersection of artificial intelligence, clinical trial regulation, and emerging FDA policy is rapidly evolving, and the decisions companies make now — including whether and how to engage with this RFI — can shape their regulatory posture for years to come. Crowell’s interdisciplinary team of attorneys brings together deep experience in regulatory strategy, life sciences and biotechnology, clinical research, digital health and medical devices, data privacy and cybersecurity, and government affairs to help clients navigate each dimension of this opportunity.

Our team at Crowell can help clients translate this RFI into a practical regulatory and operational strategy by:

  1. Drafting and submitting tailored comments that protect confidential information while advancing the company’s preferred policy outcomes;
  2. Assessing existing or planned AI uses in early-phase trials against emerging “trustworthy AI” expectations (including documentation, monitoring, and governance);
  3. Strengthening sponsor oversight and contracting for AI vendors/CROs (audit rights, change control, cybersecurity/privacy, performance monitoring, and IP-protective transparency); and
  4. Planning FDA engagement (pre-IND/Type B discussions and pilot positioning) to reduce downstream risk and rework as FDA’s expectations mature.
