
Shifting Liability: AI in Medical Devices

Client Alert | 3 min read | 02.19.20

As artificial intelligence (AI) decision-making begins to equal or surpass that of physicians, the potential for increased reliance on AI could also mean that liability traditionally assigned to physicians through malpractice suits shifts to AI companies. It is critical that companies developing this technology consider potential liability as they position the technology for market. 

The Future of AI in Medical Devices

While some go so far as to speculate that AI will replace certain medical specialties altogether, current technology is largely used as a tool to aid doctors. Computer-aided diagnosis (CAD) software for mammography, for example, which has been widely used for years, functions like a second reviewer by flagging areas of potential concern for radiologist review. And surgical robots assist surgeons with everything from calculating the optimal positioning of orthopedic implants to performing fine motor operations. But suggested uses of new AI technologies go further and include applying AI in ways that override or bypass physician decision-making. For example, using AI to reduce unnecessary follow-up or treatment, or to pre-screen imaging and identify normal images that do not need physician review, can change a patient's course of treatment and shift the risk profile of the device. While more subtle than the wholesale obsolescence of physicians some have forecasted, these applications do involve AI making some decisions in place of physicians. Companies should tread carefully, as these applications could have unanticipated consequences, such as losing a learned intermediary defense or becoming subject to malpractice liability.

Liability Concerns

At this juncture, AI’s legal place in medicine is largely undefined, and certain types of AI technology will present novel legal issues. For example, it is unclear what liability framework will apply to AI software that is untethered to any physical hardware. Courts have not traditionally considered pure software to be a “product” subject to products liability law, but this issue has yet to be addressed in the context of healthcare AI software. In addition, as AI becomes capable of “replacing” physician decision-making, some have suggested that AI systems should be judged under medical malpractice standards rather than products liability law.

Some of the liability framework for AI in healthcare may hinge on the U.S. Food and Drug Administration’s (FDA) regulatory treatment of this technology, which is still evolving. The FDA generally regulates software as a “medical device” if the software is intended to be used for medical purposes (i.e., for the diagnosis, cure, mitigation, prevention, or treatment of a disease or condition). But it remains to be seen whether the FDA’s treatment of software as a medical device will be adopted by courts for products liability purposes, as noted above. The FDA also continues to grapple with how to handle ever more sophisticated technology. For example, the agency is currently evaluating how to regulate “machine learning” AI, which by definition changes over time. The regulatory process ultimately required for these technologies will have important liability implications, such as manufacturers’ ability to assert preemption defenses.

What Should Manufacturers Do?

Given these ambiguities, for now, the safest route may be to make clear, through warnings, instructions, marketing, and otherwise, that AI technologies are not to be the sole arbiter of a diagnosis or treatment plan. All AI decisions should be reviewed by a physician, and physicians should not rely on AI’s judgment over their own (particularly in determining that a finding or condition does not need follow-up or treatment). Limiting AI’s role to serving as a physician’s tool, rather than replacing a physician’s judgment, is in line with current, accepted uses of such technology. This approach benefits not only patients but also companies, as there has not been significant product litigation related to such use.
