
Oversight of AI: How Lawmakers Plan to Implement the Bipartisan Framework for U.S. AI Act

Client Alert | 7 min read | 09.15.23

On Tuesday, September 12, a key subcommittee of the Senate Judiciary Committee held a hearing entitled “Oversight of AI: Legislating on Artificial Intelligence” the day before the Senate’s first AI Insight Forum.

The hearing centered on how Congress can legislate AI with enforceable safeguards and focused on the recently proposed Bipartisan Framework for U.S. AI Act (the “Bipartisan Framework”), released the previous week by Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the Chair and Ranking Member, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

This discussion follows the first and second hearings held in recent months by the Subcommittee as part of its series on AI oversight.

Key Takeaways:

    • Senators Blumenthal and Hawley are advancing a serious bipartisan effort aimed at regulating AI.
    • As part of that effort, the Bipartisan Framework aims to clarify that Section 230 of the Communications Decency Act does not apply to AI.
    • The Bipartisan Framework would create a new Independent Oversight Body to oversee and regulate companies that use AI.
    • The U.S. AI Act would create a private right of action against companies that breach privacy, violate civil rights, or cause other harms. This could invite a spate of lawsuits against AI companies that violate the law, but it would also allow private citizens to assert their rights against those companies.

Summary of the Blumenthal-Hawley Bipartisan Framework for U.S. AI Act

The Bipartisan Framework sets forth five key recommendations:

1. Establish a Licensing Regime Administered by an Independent Oversight Body

The Independent Oversight Body would have authority to conduct audits of companies seeking licenses, cooperate with other enforcement bodies, and monitor and report on the technological developments and economic impacts of AI.

Companies developing sophisticated general AI models, or models used in situations deemed “high-risk” such as facial recognition, would be required to register with the Independent Oversight Body and pursue a license, including completing requirements related to, and submitting information on, their models, risk assessments, and risk management.

2. Ensure Legal Accountability for Harms

The Independent Oversight Body would also ensure AI companies are held accountable for their models and systems. Private rights of action would be available against corporations when their models “breach privacy, violate civil rights, or otherwise cause cognizable harms.” The framework also specifically recommends that Congress clarify that Section 230 of the Communications Decency Act, which shields tech companies from legal consequences for content posted by third parties, does not apply to AI.

3. Defend National Security and International Competition

Congress should use existing U.S. trade controls and legal restrictions, such as export controls, to limit the transfer of AI technology to foreign adversaries and to countries engaged in human rights violations.

4. Promote Transparency

To ensure transparency, Congress should require companies developing and deploying AI systems to disclose “essential information,” such as the training data, limitations, accuracy, and safety of their AI models, to users and to companies that deploy the developers’ systems. In addition, companies should be required to give customers affirmative notice when they are interacting with an AI system and to watermark or otherwise technically disclose AI-generated deep fakes. The new Independent Oversight Body would establish a public database and reporting system so that consumers and researchers have access to information on AI models and systems, including when significant adverse incidents occur or failures in AI cause harm.

5. Protect Consumers and Kids

Congress should require companies deploying AI in “high risk or consequential situations” to employ “safety brakes,” including giving notice and allowing for human review when AI is used to make decisions. Consumers should have control over how their personal data is used in AI systems, and strict limits should be imposed on generative AI involving kids.

The Hearing on Oversight of AI

Senator Blumenthal, in his opening statement as Chair, noted that neither he nor Senator Hawley has “pride of ownership” over the proposed Bipartisan Framework; they seek detailed input with the end goal of creating new legislation.

While Senator Blumenthal said the hearing would not specifically address the possibility of massive AI-related unemployment, he remarked that the issue is important to the Committee.

Senator Hawley briefly commented on the need to avoid the “unmitigated disaster” that is social media, claiming that Congress “outsourced” the issue to large corporations instead of regulating the industry, resulting in harms to children and elections, among other consequences.

Witness Testimony

At the hearing, the following witnesses testified on the Bipartisan Framework:

1. Brad Smith, Vice Chair and President of Microsoft Corporation

In his testimony, Mr. Smith expressed initial support for the Bipartisan Framework, which he called a “strong and positive” step toward effectively regulating AI, building on the voluntary White House AI commitments and the bipartisan AI Insight Forum. Mr. Smith highlighted three goals:

    • Prioritize AI safety and security. In his written testimony, Mr. Smith highlighted the following principles to guide AI legislation:
          • Promote accountability in AI development and deployment;
          • Build on existing efforts, including the White House initiative to secure voluntary commitments from industry, and the NIST AI Risk Management Framework;
          • Require safety brakes for AI that controls or manages critical infrastructure;
          • Adopt a “know your customer, cloud, and content” (KY3C) regulatory framework that would impose obligations on various actors in the AI supply chain; and
          • Ensure the regulatory framework mirrors and coordinates with the technology architecture of AI.
    • Prioritize the protection of citizens and consumers. This includes the protection of privacy, civil rights, and the needs of children. Mr. Smith also voiced his support for the framework’s distinction between developers and deployers of AI.
    • Recall the promise that AI offers. Mr. Smith explained that AI will need safety brakes, “just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that's needed.” At the same time, he noted AI’s promise to improve the healthcare, education, and public services sectors.

2. Woodrow Hartzog, Professor of Law at the Boston University School of Law and Fellow at the Cordell Institute for Policy in Medicine & Law at Washington University in St. Louis

Professor Hartzog pushed Congress to go beyond “half-measures,” which are important but insufficient because they give lawmakers “the illusion that we’ve done enough.” His testimony cites post-deployment controls, audits, assessments, certifications, and other compliance requirements as frequently deployed half-measures. To go beyond measures with limited efficacy, he argued that lawmakers must do three things:

    • Accept that AI systems are not neutral;
    • Focus on substantive interventions that limit abuses of power; and
    • Resist the narrative that AI systems are inevitable.

Professor Hartzog emphasized that procedural protections such as transparency, bias mitigation, ethics, and personal control, which his testimony describes as ineffective in isolation, remain meaningful subjects of legislation. However, he explained that “when lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular AI systems should exist at all.”

3. William Dally, Chief Scientist and Senior Vice President of Research of NVIDIA Corporation

Mr. Dally discussed NVIDIA’s history of developing technologies such as accelerated computing and generative AI, as well as the company’s role in the future of safe and responsible AI. He noted the importance of “frontier AI models,” the next-generation, large-scale models of the future that may “possess unexpected, difficult-to-detect new capabilities.” Because of this risk, he argued that models should not be “unleash[ed]” before they are safe, accurate, and reliable, though he affirmed that “uncontrollable artificial general intelligence is science fiction.” Finally, Mr. Dally recognized that safe AI requires multilateral cooperation, emphasizing that no country or company controls a “chokepoint” to AI.

Conclusion

The Bipartisan Framework is an important step toward bipartisan legislation to regulate AI. It proposes a new licensing regime, an Independent Oversight Body, and a private right of action to enforce new and existing laws, all of which would potentially increase legal risk for companies making use of AI. Senators Blumenthal and Hawley suggest that this approach is necessary given the risks that come with widespread adoption of AI. Along with their colleagues on the Senate Judiciary Committee, they will continue to explore the costs and benefits of this approach in the coming months.

Crowell & Moring, LLP will continue to monitor congressional and executive branch efforts to regulate AI. Our lawyers and public policy professionals are available to advise any clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns in government contracts, employment law, intellectual property, privacy, healthcare, antitrust, or other areas.
