
WISeR Under Scrutiny: AI Claims Review Debate Reaches CMS

What You Need to Know

  • In March 2026, the Electronic Frontier Foundation (EFF) filed a lawsuit against the Centers for Medicare and Medicaid Services (CMS), citing the agency’s alleged failure to answer a Freedom of Information Act (FOIA) request for records the EFF believes will provide necessary insight into the design, safeguards, vendor relationships, and real-world performance of the WISeR Model, an AI-driven prior authorization pilot program for certain Medicare services.

  • While CMS maintains that WISeR requires licensed clinicians to make all non-payment determinations, the EFF alleges that the model’s AI-driven review has produced inappropriately high denial rates — a concern that echoes similar criticisms of AI use in the commercial insurance market.

  • In parallel, a growing body of state legislation has placed more pressure on health plans to demonstrate that AI tools complement rather than replace human judgment in medical necessity reviews.

Client Alert | 6 min read | 05.08.26

The appropriate use of AI tools during the claims review process continues to be a major topic of debate within the health care industry — but in recent weeks, emerging litigation has inspired critics to turn their attention specifically to the technology’s application within federal health programs. On March 25, 2026, the Electronic Frontier Foundation (EFF) filed a lawsuit against the Centers for Medicare and Medicaid Services (CMS), citing the agency’s alleged failure to answer a Freedom of Information Act (FOIA) request for records the EFF believes will provide crucial insight into the design, safeguards, vendor relationships, and real-world performance of the Medicare Wasteful and Inappropriate Service Reduction (WISeR) Model, CMS’s AI-driven prior authorization pilot program for certain Medicare services.

Taken in context with the recent federal push to adopt AI within public health programs and state-level efforts to regulate payors’ use of AI during claims adjudication, the EFF’s lawsuit speaks to a broader interest within the health care industry in affirming the role human clinical reviewers play in utilization review processes that are increasingly facilitated by AI.

WISeR Supports AI Use in Federal Health Programs — And Prompts Transparency Concerns

Announced in June 2025 and active in six states (Arizona, New Jersey, Ohio, Oklahoma, Texas, and Washington) as of January 1, 2026, the WISeR model provides a trial framework for using AI and machine learning in conjunction with human review to “ensure timely and appropriate Medicare payment for select items and services.” Over the course of the model’s six-year pilot period, CMS plans to assess its capacity to reduce clinically unsupported care, improve beneficiary outcomes, control spending, and prevent fraud and abuse within the Traditional Medicare program.

This expansion of prior authorization marks a change for the agency; CMS has historically required advance review for only a limited selection of nonessential services and items. The WISeR model introduces additional review requirements for items and services assessed to be high-risk targets for fraud, waste, and abuse (e.g., skin and tissue substitutes, electrical nerve stimulator implants), and calls on pre-selected technology companies both to “streamline” medical necessity reviews and to expedite coverage determinations.

Crucially, WISeR does not outsource claims review exclusively to AI. In fact, CMS includes a caveat on the model’s public webpage asserting that “[a]ll recommendations for non-payment are determined by appropriately licensed clinicians who will apply standardized, transparent and evidence-based procedures to their review.” This disclaimer aligns with conventional wisdom — and a 2024 Final Rule — holding that AI should not be used as the sole basis for denying medical coverage within Traditional Medicare, Medicare Advantage, and Medicaid.

This change is inspired by requirements typical in the private sector. According to recent industry reports, 99% of Medicare Advantage beneficiaries are enrolled in plans that require prior authorization for certain services, and 84% of surveyed insurers report using AI and/or machine learning to facilitate utilization management, detect fraud, and support prior authorization, among other use cases. In a best-case scenario, the WISeR model would expedite claims review and help filter out inappropriate or wasteful claims while ensuring that program beneficiaries can access coverage for medically necessary care.

The EFF, however, alleges that the rollout of WISeR’s AI system has precipitated a disproportionate increase in care denial rates across participating states. The organization notes in its complaint that the prior authorization approval rates produced by the AI model in Texas fall far short of those seen in Medicare Advantage health plans (approximately 92%), and that a higher proportion of prior authorizations denied by the AI model are approved upon human review — 62% versus 84%, respectively. EFF also alleges that the model introduces “perverse” financial incentives to deny care regardless of medical necessity: technology vendors are entitled to collect up to 20% of program savings associated with denied care claims.

Notably, the EFF’s lawsuit does not directly challenge or seek to halt WISeR. Instead, it seeks visibility into the model’s performance and an assessment of whether it includes meaningful safeguards against algorithmic bias and wrongful denials of care. In late January 2026, the EFF filed a FOIA request for relevant records on an expedited turnaround, citing an urgent need to inform the public of the potential harms introduced by the active model. Because that request allegedly went unanswered, the organization has asked the court to order CMS to fulfill the FOIA request immediately. It remains to be seen whether the EFF’s lawsuit will inspire secondary legal challenges against the WISeR model.

In the meantime, the litigation proceedings indirectly raise questions that have become a familiar refrain for payers in recent years: Are AI tools improperly declining claims for medically necessary care? How are human medical reviewers serving as a “check” on AI tools, if at all?

State Regulators and Courts Scrutinize AI Use in Claims Adjudication 

The EFF is far from the only interested party to call for greater transparency into how AI is being used within health care claims review processes. Over the last few years, state regulators and the courts have increasingly turned their attention to clarifying human medical reviewers’ role within commercial health plans — and confirming that AI does not improperly supplant human decision-making during claims adjudication.

Newly enacted laws in states such as Arizona, Maryland, Nebraska, and Nevada now require explicit human review and prohibit insurers from relying solely on AI for adverse coverage decisions, while older statutes in California (SB 1120) and Illinois reinforce the need for individualized, clinician-led assessments. Arizona’s legislation (HB 2175) focuses on the role of reviewers, requiring clinical staff to “individually review” all prior authorization denials, and further emphasizing that medical directors “shall exercise independent medical judgment and may not rely solely on recommendations from any other source.” Maryland (HB 820) and Nebraska (LB 77) took a slightly different approach, implementing laws that prohibit payors from denying, delaying, or modifying care based solely on determinations made by an AI or automated algorithm. Along similar lines, in March 2026 Indiana passed a bill (HB 1271) prohibiting insurers from using automated processes or systems to downcode claims based on medical necessity without human review of the member’s medical record. At least three other states (Connecticut, SB 00342; Illinois, SB 3114; and Maryland, SB 797) have pending bills that require human review and/or prohibit AI from being the sole decision-maker for coverage or downcoding determinations.

On a broader scale, several states (including Colorado, Texas, California, and New York) have passed what are being referred to as “comprehensive” AI laws, which compel greater transparency and accountability for AI systems. These laws require developers and deployers of AI systems not only to disclose information critical to the design and inner workings of the models that power the AI, as well as the data on which they have been trained, but also to test and assess the systems and their results to mitigate potential bias, discrimination, and other faulty outcomes. The EFF’s efforts and these legislative trends signal regulators’ continued interest in increasing visibility into the “black box” systems fueling AI, automated algorithms, and other such technologies.

In parallel, courts nationwide have begun to examine whether payers’ claims review processes use automated algorithms and AI to support or overrule human review, with several recent class action lawsuits raising critical questions about what degree of involvement constitutes “sufficient” human review under both statutory requirements and contractual obligations.

Implications and Recommendations for Health Plans

Given this regulatory and litigation context, the debate over the extent and appropriateness of AI decision-making within the claims adjudication process has not so much taken a new direction as it has now reached public health programs. Medicare law and state statutes alike emphasize the importance of prioritizing human input during medical necessity reviews — or, more precisely, of not basing coverage determinations exclusively on AI recommendations. With WISeR’s launch, concerned stakeholders such as the EFF are asking for transparency into the AI models, and the underlying training data, used for prior authorization within the Medicare program.

Electronic Frontier Foundation v. U.S. Centers for Medicare and Medicaid Services most likely will not have any direct implications for commercial payers or, indeed, for Medicare health plans operating in the 44 states not engaged in the WISeR pilot. However, the case’s emergence reinforces the likelihood that health plans and third-party administrators will continue to face pressure — from state regulators, member plaintiffs, and potentially federal lawmakers — to demonstrate that their claims review processes take advantage of AI-driven efficiency without supplanting human decision-making.

Our team will continue to track EFF and other relevant lawsuits. We strongly encourage all health care organizations interested in learning more about how these regulatory and litigation developments may affect their business to contact their preferred Crowell & Moring lawyer or any author of this alert.
