
SAFE Innovation in the Age of Artificial Intelligence

Client Alert | 5 min read | 06.23.23

On June 21, 2023, Senate Majority Leader Charles (“Chuck”) Schumer (D-NY) released a broad policy framework, the SAFE Innovation Framework for Artificial Intelligence (AI) Policy (“the Framework”), to help guide Congress in developing future AI legislation. The Framework emphasizes the importance of maintaining national security, combatting misinformation, protecting intellectual property, and developing transparency requirements for AI system developers. While the Framework is Senator Schumer’s own work product, he has already begun seeking allies in his party and across the aisle. Leader Schumer has convened a bipartisan working group that includes Senators Martin Heinrich (D-NM), Michael Rounds (R-SD), and Todd Young (R-IN) to consider future AI legislation. The group has not specifically endorsed Schumer’s framework, but its members have committed to working with him on the issue. As they look to put an AI bill together, Schumer’s office will be looking to Senate committees as a major source of legislative language.

In addition to releasing the SAFE Innovation Framework, Senator Schumer committed to hosting a series of Congressional briefings, called “AI Insight Forums,” beginning later this year. While the forums are not designed to replace the traditional Congressional committee process, Leader Schumer hopes they will educate Congress on the impact of AI on the workforce, national security, and privacy, among other policy areas, and help expedite the policy debate and the development of legislation. Despite his intention to move quickly, however, Senator Schumer acknowledges the need to bridge a significant knowledge gap between AI developers and policymakers, and he understands that the legislative process will take time, offering significant opportunity for stakeholder feedback and engagement.

Background

The SAFE Innovation Framework is built on two pillars, 1) safety and 2) innovation, which together seek to balance AI’s potential societal benefits and harms. According to the Framework, future legislation should embrace AI’s potential for unthinkable advancements while anticipating threats of job displacement, misuse by bad actors, disinformation, and the amplification of bias. In his framework, Schumer lays out five principles to assert U.S. leadership in developing AI policy: 1) Security, 2) Accountability, 3) Foundations, 4) Explain, and 5) Innovation.

The Framework describes each of these principles as follows:

  1. Security: Safeguard our national security with AI and determine how adversaries use it, and ensure economic security for workers by mitigating and responding to job loss;
  2. Accountability: Support the deployment of responsible systems to address concerns around misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability;
  3. Foundations: Require that AI systems align with our democratic values at their core, protect our elections, promote AI’s societal benefits while avoiding the potential harms, and stop the Chinese Government from writing the rules of the road on AI;
  4. Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content; and
  5. Innovation: Support U.S.-led innovation in AI technologies – including innovation in security, transparency, and accountability – that focuses on unlocking the immense potential of AI and maintaining U.S. leadership in the technology.

Why Does It Matter?

  • Senator Schumer’s announcement is the latest in Congress’s ongoing push towards AI legislation. This Framework comes after a series of Senate Judiciary Committee hearings on AI and Human Rights, AI and Intellectual Property, and the Rules of AI, as well as AI-related hearings in other committees.
  • AI has also been the subject of additional hearings in the House of Representatives, where other lawmakers have introduced AI-related bills. For example, Representatives Ken Buck (R-CO), Anna Eshoo (D-CA), and Ted Lieu (D-CA), along with Senator Brian Schatz (D-HI), introduced a bill this week to create a National Commission on Artificial Intelligence.
  • The release of the SAFE Innovation Framework recognizes the need for a bipartisan and bicameral approach to developing AI policy. Senator Schumer will need support from Republicans if he intends to advance legislation ahead of the 2024 elections.
  • Senator Schumer’s current working group includes Senators Martin Heinrich (D-NM), Michael Rounds (R-SD), and Todd Young (R-IN). Notably, Senator Young and Leader Schumer led their respective parties to advance the bipartisan CHIPS and Science Act in 2022.
  • The AI Insight Forums will provide significant opportunities for private sector engagement with lawmakers drafting future AI policy. The Senate will lean on leading industry and national security experts to develop a deeper understanding of advanced technologies ahead of any legislative drafting.

Big Picture

In addition to recent Congressional interest in regulating the AI sector, the White House has indicated that President Biden and his staff meet regularly to develop an AI policy strategy. The President convened AI experts, activists, and industry leaders earlier in the week in San Francisco to discuss the technology’s potential and his administration’s initiatives, including a blueprint for an AI Bill of Rights as well as an executive order directing officials overseeing top government agencies to root out bias in the AI tools they use to conduct their work. The U.S. Congress and Biden administration officials are working to keep pace with other countries and international bodies that have already developed approaches to regulating the use of AI, including the European Union, the United Kingdom, Singapore, Brazil, and China.

Conclusion

AI tools have the potential to disrupt nearly every sector of the global economy. There’s no doubt Congress has some catching up to do on AI policy. But that also gives Congress a chance to get it right. First movers don’t always strike the right balance between protecting the public and encouraging innovation. Because the U.S. is the largest and most innovative economy in the world, Congress can still have an outsized influence on the rules of the road for AI that everyone else will play by. As Congress and other policymakers get up to speed, there is a unique opportunity for businesses and other thought leaders to offer their insights on how AI should or should not be regulated in the context of different industries, including healthcare, government contracting, financial services, national security, employment, intellectual property, and energy. Crowell & Moring’s interdisciplinary group of lawyers and other professionals—many of whom were formerly in senior positions in federal agencies, the White House, and Congress—continues to monitor U.S. government efforts to regulate AI and stands ready to engage with policymakers on our clients’ behalf.
