Navigating the AI Landscape: Recap of Biden Administration Efforts to Mitigate AI Risks Ahead of Upcoming Executive Order
Client Alert | 12 min read | 10.10.23
Artificial intelligence (AI) has been at the forefront of public debate since the release of OpenAI’s ChatGPT in November 2022. Since then, numerous AI applications serving a wide variety of functions have been released to the public, heightening the need for governance as many technical, ethical, and legal questions remain unanswered. As the AI landscape continues to rapidly evolve, the Biden Administration has taken proactive steps to develop a National Artificial Intelligence Strategy that seeks to mitigate the risks associated with this transformative technology. These efforts include the establishment of guidelines and standards, investments in research and development (R&D) initiatives, collaborative partnerships with major technology companies, and even a national competition with nearly $20 million in awards.
Collectively, these efforts represent the government’s steadfast commitment to establish guardrails around a technology that is already beginning to permeate nearly every facet of daily life – motivated in part by the desire to reduce the potential for bias, discrimination, and privacy infringements. The Biden Administration is expected to cap off these efforts in the coming weeks with an Executive Order (EO) on AI that will build on the Administration’s earlier proposal for an “AI Bill of Rights.” In anticipation of the upcoming EO, the following recaps the Administration’s broad efforts to ensure that AI is developed in a responsible manner that protects Americans from harm while simultaneously harnessing the benefits the technology could bring to society. These efforts preview the common themes industry can expect to see in the anticipated EO.
Blueprint for an AI Bill of Rights
In October 2022, the White House Office of Science and Technology Policy (OSTP) developed the Blueprint for an AI Bill of Rights (AI Blueprint) as a guide for the design, use, and deployment of automated systems. The AI Blueprint aims to protect the American public at a time when rapid advancements in AI are occurring in unexpected ways and outside defined parameters. OSTP has identified the following five principles in the AI Blueprint that industry should consider vital components of automated systems:
- Safe and Effective Systems: Automated systems should be developed with consultation from experts to ensure that persons are protected from unsafe or ineffective systems. This should include pre-deployment testing, identification of risks, mitigation efforts, and ongoing monitoring to ensure safety and adherence to context-specific standards.
- Algorithmic Discrimination Protections: Developers should include equity assessments as part of an automated system’s design, use diverse and representative data to support algorithms, protect against proxies for demographic features, and ensure accessibility for people with disabilities.
- Data Privacy: To the greatest extent possible, persons should have agency in decisions regarding the collection, use, access, transfer, and deletion of their data used in automated systems.
- Notice and Explanation: Persons should be informed when an automated system is in use and provided with information regarding its function and outcome, as well as disclosure of the individual or organization responsible for the system.
- Human Alternatives, Consideration, and Fallback: Persons should be able to opt out of automated systems and have access to a human alternative, where appropriate.
The principles outlined by OSTP are not federally mandated at this time and are meant to serve as public recommendations to protect Americans from unlawful bias, discrimination, and other harmful outcomes.
National Artificial Intelligence Research Resource
The Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem plan, released in January 2023, is an implementation plan drafted by the congressionally mandated National Artificial Intelligence Research Resource (NAIRR) Task Force. The plan outlines recommendations to establish a national infrastructure equipped to facilitate AI’s benefits, and identifies four measurable goals: spur innovation, increase diversity of talent, improve capacity, and advance trustworthy AI. To realize these goals, the implementation plan outlines the following actions:
- Identify a single federal agency to serve as the administrative home for NAIRR operations and a Steering Committee with equities in AI research to drive its strategic direction.
- Provide access to a federated mix of computational and data resources, testbeds, software and testing tools, and user support services via an integrated portal.
- Ensure accessibility to a range of users and provide a platform that can be used for educational and community-building activities that lower barriers to participation in the AI research ecosystem and increase the diversity of AI researchers.
- Set the standard for responsible AI research through the design and implementation of NAIRR’s governance processes and system safeguards.
AI Risk Management Framework
On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) published an AI Risk Management Framework (RMF). The AI RMF is intended for voluntary use and is designed to help organizations incorporate trustworthiness into the design, development, use, and evaluation of AI systems. NIST also established the Trustworthy & Responsible AI Resource Center to facilitate implementation of the AI RMF. The Center includes a glossary of AI terms, a calendar of engagements and events, and trainings on how to utilize resources developed by NIST. Consistent with NIST’s broader efforts to align with global standards, the Center also includes information about international alignment with the AI RMF.
In May 2023, the Biden Administration announced that AI developers such as Google, Microsoft, NVIDIA, and OpenAI are participating in a public assessment to evaluate how existing AI models align with the AI Blueprint and the AI RMF. The public assessment has so far been limited to major technology companies; other industries have not yet been invited to participate.
Executive Order on Advancing Racial Equity
On February 16, 2023, President Biden issued Executive Order 14091 on Racial Equity and Support for Underserved Communities Through the Federal Government. The EO includes provisions to ensure that automated systems are consistent with the law and that respective civil rights offices are consulted on decisions regarding their design, development, and use.
In support of this EO, the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, and Federal Trade Commission released a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. The Joint Statement discusses how existing legal authorities apply to the regulation of AI, such as the Fair Housing Act, Americans with Disabilities Act, and the Federal Trade Commission Act. These efforts forecast future enforcement actions applying existing federal laws and regulations to the use of AI in a variety of forms.
National AI Research Institutes
On May 4, 2023, the National Science Foundation (NSF) announced $140 million in funding to launch the following seven new National AI Research Institutes:
- NSF Institute for Trustworthy AI in Law & Society (TRAILS)
- AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)
- AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)
- AI Institute for Artificial and Natural Intelligence (ARNI)
- AI Institute for Societal Decision Making (AI-SDM)
- AI Institute for Inclusive Intelligent Technologies for Education (INVITE)
- AI Institute for Exceptional Education (AI4ExceptionalEd)
These additions will bring the total number of such institutes to 25 across the country, advancing R&D efforts in critical areas. The institutes are expected to catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, and responsible, and that serve the public good.
National AI R&D Strategic Plan
On May 23, 2023, OSTP updated its National AI R&D Strategic Plan for the first time since 2019. The Strategic Plan reaffirms eight strategies focused on:
- Long-term investments in responsible AI research
- Effective human-AI collaboration
- Ethical, legal, and societal implications of AI
- Safety and security of AI systems
- Development of shared public datasets and environments for AI training and testing
- Standards and benchmarks for evaluating AI systems
- Better understanding AI R&D workforce needs
- Expanded public-private partnerships to accelerate AI advances
The 2023 update includes an additional strategy, reflecting a growing appreciation for the benefits of international engagement on AI issues:
9. Establishment of a principled and coordinated approach to international collaboration in AI research
Leading AI Companies Voluntarily Commit to “AI Agreement” to Manage AI Risks
On July 21, 2023, the Biden Administration announced that seven companies leading the development of AI – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – made voluntary but immediate commitments to help move towards the safe, secure, and transparent development of AI technology. The goal of the voluntary commitments, or the “AI Agreement” as it is informally dubbed, is to establish a set of standards that promote the principles of safety, security, and trust deemed fundamental to the future of AI.
Just months later, on September 12, 2023, the Biden Administration announced that it secured voluntary commitments from an additional eight companies – Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, and Stability – to further drive the safe, secure, and trustworthy development of AI. These additional commitments underscore the significance of public-private partnerships in addressing the existing and emerging challenges posed by AI.
The AI Agreement is composed of commitments organized by three key principles:
- Ensuring Products are Safe Before Introducing Them to the Public. The companies committed to conduct internal and external security testing of their AI systems before release, and to share information across the industry and with governments, civil society, and academia on managing AI risks;
- Building Systems that Put Security First. The companies committed to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased “model weights,” which are described in the Administration’s release as “the most essential part of an AI system,” and to facilitate robust third-party discovery and reporting of vulnerabilities in AI systems; and
- Earning the Public’s Trust. The companies committed to develop robust technical mechanisms that notify users when content is AI-generated; to publicly report their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use; to prioritize research on societal risks that AI systems can pose (e.g., bias, discrimination, and privacy infringement); and to develop and deploy advanced AI systems to help address society’s greatest challenges.
Again returning to the theme of international engagement, as part of its announcement, the Biden Administration noted that the U.S. continues to engage with its allies and partners to harmonize AI-related efforts and had consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
Federal Trade Commission Launches Investigation of OpenAI
On July 13, 2023, the Federal Trade Commission (FTC) announced that it had opened an investigation into OpenAI, a leading AI development firm backed by Microsoft, based on claims that it has ignored consumer protection laws and placed personal data at risk. The agency is specifically investigating whether OpenAI engaged in unfair practices that resulted in reputational harm to real individuals by generating false or misleading statements about them, an issue that has been at the heart of concerns over false information – or “hallucinations” – produced by generative AI systems.
Securities and Exchange Commission Announces New Rule with AI Implications
On July 26, 2023, the Securities and Exchange Commission (SEC) announced a new rule that requires publicly traded companies to disclose material cyber incidents via EDGAR (the publicly accessible SEC filing platform). One component of the new rule requires disclosure of cybersecurity incidents in a manner that promotes uniformity and comparability, including a narrative description of the use of AI and machine learning in assessing incidents and mitigating future cyberattacks. The rule became effective on September 5, 2023, with a deadline to comply by December 18, 2023. Smaller reporting companies will be given an additional 180 days for compliance, with a deadline of June 15, 2024.
Executive Order on Federal Research and Development
On July 28, 2023, President Biden issued Executive Order 14104 on Federal Research and Development in Support of Domestic Manufacturing and U.S. Jobs, which requires certain federal agencies to prioritize domestic manufacturing in research funding and development agreements, and includes considerations regarding AI and machine learning.
As part of the EO, agencies are to consider whether any “exceptional circumstances” warrant restrictions on providing non-exclusive licenses or on sales of inventions outside the United States. To make such a determination, agencies must consider whether underlying technologies, including AI and machine learning, are critical to the U.S. economy and national security.
The EO applies to the Departments of Defense, Agriculture, Commerce, Health and Human Services, Transportation, Energy, and Homeland Security, as well as the National Science Foundation and National Aeronautics and Space Administration.
AI Cyber Challenge
On August 9, 2023, the Biden Administration announced the launch of the AI Cyber Challenge (AIxCC), which will award nearly $20 million in prizes for identifying and fixing software vulnerabilities through the use of AI. The AIxCC, led by the Defense Advanced Research Projects Agency (DARPA), will leverage existing and new AI to protect critical U.S. cyber infrastructure, such as the code that the Internet relies on to function effectively and securely. The AIxCC is supported by several top AI companies, including Anthropic, Google, Microsoft, and OpenAI, which are lending their expertise and making their advanced technology available to competitors in the challenge.
Office of Management and Budget
The Office of Management and Budget (OMB) is expected to soon release draft policy guidance on the use of AI systems by the U.S. government. The guidance is intended to help establish policies and procedures for the safe and responsible use of AI by federal agencies, while simultaneously enabling the federal government to leverage AI in its ongoing work. The announcement was initially expected during the summer, but OMB has not released an updated timeframe. The policy guidance will be accompanied by a request for public comment from advocates, civil society, industry, and other stakeholders before it is finalized.
Key Takeaways: Taken together, these efforts serve as a reminder that the Biden Administration will use existing legal and regulatory frameworks to act in ways that it believes will protect the American public in the age of AI. The upcoming EO on AI is expected to add further guidance encouraging organizations to incorporate protections against AI’s potential harms into policy and practice.
As AI technologies continue to rapidly develop, companies are strongly encouraged to have rigorous AI governance and compliance policies and practices in place.