How President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence Addresses Health Care
Client Alert | 6 min read | 11.27.23
On October 30, 2023, President Joe Biden signed Executive Order (“EO”) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which establishes a policy framework to manage the risks of artificial intelligence (“AI”), directs agency action to regulate the use of health AI systems and tools, and guides AI innovation across all sectors, including the health and human services sector. The Office of Management and Budget (“OMB”) simultaneously released a draft memorandum that would direct department and agency action by establishing new agency requirements for AI governance, innovation, and risk management and by adopting specific minimum risk management practices for uses of AI. OMB is seeking public comment on the memorandum, including responses to a list of questions on specific issues, by December 5, 2023.
The EO outlines eight guiding principles and priorities to advance and govern the use of AI: (i) ensure safe and secure AI technology; (ii) promote responsible innovation, competition, and collaboration; (iii) support American workers; (iv) advance equity and civil rights; (v) protect American consumers, patients, passengers, and students; (vi) protect privacy and civil liberties; (vii) manage the federal government’s use of AI; and (viii) strengthen U.S. leadership abroad while safeguarding responsible ways to develop and deploy AI technology.
The EO encourages independent federal agencies to leverage their existing authorities and apply current applicable requirements to protect Americans from fraud, discrimination, threats to privacy, and other risks arising from AI in the health and human services, education, transportation, and communication sectors. The EO contains a number of provisions directing the Secretary of the Department of Health and Human Services (“HHS”) to develop policies on the use of AI in health care. These policies will build on existing HHS efforts by the U.S. Food and Drug Administration (“FDA”) and the Office of the National Coordinator for Health Information Technology (“ONC”) related to the use of AI in the health and human services sector, but will provide a more comprehensive approach that extends beyond those agencies’ current jurisdictions. For example, we will likely see guidance on nondiscrimination, the privacy of health data used in AI, and patient safety reporting.
In addition, the Biden Administration stated that it will work with Congress as Congress continues to develop legislation on AI. The Administration also highlighted the importance of protecting individuals’ privacy when deploying AI tools and technologies and called on Congress to pass comprehensive data privacy legislation.
This summary focuses on the EO’s health and human services provisions, but the EO covers policies to govern AI generally and includes provisions with multi-sector impact, including on health care. (Crowell’s client alert provides a summary of all of the EO’s provisions.)
Specifically, the EO directs the Secretary of HHS as follows:
- Establish an HHS AI Task Force and develop, by January 27, 2025, a Strategic Plan on the responsible deployment of AI in the health and human services sector, including in the following areas:
- development, maintenance, and use of predictive and generative AI-enabled technologies in health care delivery;
- long-term safety and real-world performance monitoring of AI-enabled technologies;
- incorporation of equity principles in AI-enabled technologies used in the health and human services sector, including protecting against bias;
- incorporation of safety, privacy, and security standards into the software-development life-cycle for protection of personally identifiable information;
- development and availability of documentation to help users determine appropriate and safe uses of AI in local settings;
- collaboration with state, local, tribal, and territorial health and human services agencies to advance positive use cases and best practices for the use of AI in local settings; and
- identification of uses of AI to promote workplace efficiency and satisfaction.
- Develop a quality strategy by April 27, 2024 to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including in the areas described above. This work would include the development of AI assurance policy and of the infrastructure needed to enable premarket assessment and postmarket oversight of the performance of AI-enabled health care technology algorithmic systems against real-world data.
- Advance nondiscrimination compliance by April 27, 2024 by considering appropriate actions to promote prompt understanding of, and compliance with, federal nondiscrimination laws by health and human services providers that receive federal financial assistance, as well as how those laws relate to AI. These actions include convening health and human services providers and payers and providing technical assistance about their obligations under, and the potential consequences of noncompliance with, federal nondiscrimination and privacy laws as they relate to AI; and issuing guidance, or taking other action as appropriate, in response to any complaints or other reports of noncompliance.
- Establish an AI Safety Program by October 29, 2024 that, in partnership with voluntary federally listed patient safety organizations (“PSOs”), establishes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in health care settings, as well as specifications for a central tracking repository for associated incidents that cause harm to patients, caregivers, or other parties; analyzes captured data and generated evidence to develop informal guidelines aimed at avoiding these harms; and disseminates those informal guidelines to appropriate stakeholders.
- Develop a Strategy for regulating the use of AI in drug development by October 29, 2024 that would, at a minimum: (i) define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development; (ii) identify areas where future rulemaking, guidance, or additional legislative authority may be necessary to implement such a regulatory system; (iii) identify the existing budget, resources, personnel, and potential for new public/private partnerships necessary for such a regulatory system; and (iv) consider risks identified by the actions undertaken to implement section 4 of the EO.
The EO also includes a number of other provisions that apply to the health and human services sector, including a directive for the National Institutes of Health (“NIH”) to prioritize grant-making and cooperative agreement awards that promote innovation and competition.
To implement its policies, the EO creates the White House Artificial Intelligence Council, which will coordinate the activities of agencies across the federal government to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies, including the policies set forth in the EO. Following the release of the EO, OMB released the draft companion memorandum described above, which would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from the government’s uses of AI. Health care entities and technology companies that use or plan to use AI should consider responding, before the December 5, 2023 comment deadline, to the questions outlined in the Federal Register notice regarding the federal government’s role in advancing AI innovation, and should use their comments to inform agencies’ understanding of health care-specific use cases and risk mitigations.
Takeaways
The wide-ranging, long-awaited EO establishes a comprehensive vision for the responsible use and governance of AI. By establishing principles and including health care-specific directives, the Biden Administration has created an overarching framework that will have a significant impact on health care stakeholders’ development and deployment of AI systems and technologies. In addition, the EO offers a number of opportunities for government engagement and grant funding. As outlined above, the various health and human services provisions will be operationalized within the next several months. Stakeholders should expect agency-level developments and should continue to track updates from the White House and the relevant agencies (e.g., FDA, ONC, CMS, OCR).
Crowell will continue to provide analysis as federal agencies work to implement the policies described in the EO. For more information about the EO, please contact the professionals listed below, or your regular Crowell contact.