Algorithmic Accountability Act Reflects Growing Interest in Regulation of AI
Client Alert | 2 min read | 04.22.19
Last Wednesday, Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) introduced the Algorithmic Accountability Act in the Senate, federal legislation that would require entities to ensure that their automated decision systems do not expose consumers to unfair bias, inaccuracies, or privacy and security risks. The bill “direct[s] the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”
The bill defines “automated decision system impact assessment” as a “study evaluating an automated decision system” and its “development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security.” At a minimum, entities with automated decision systems deemed “high-risk” must provide the FTC with a detailed description of the system; a cost-benefit analysis in light of the system’s purpose; a risk assessment regarding consumer privacy and the risks of “inaccurate, unfair, biased, or discriminatory decisions impacting consumers”; and the efforts the entity plans to make to minimize those risks.
Covered entities include any company with average annual gross receipts over $50 million, companies possessing personal information on more than 1 million consumers or consumer devices, and any entity that collects consumers’ personal information “to sell or trade the information or provide third-party access to the information.” Any one of these criteria suffices, as the sketch below illustrates.
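For readers mapping these thresholds onto their own organizations, here is a minimal sketch in Python of how the coverage criteria combine. The class, field names, and the disjunctive reading (any one criterion triggers coverage) are our own illustrative assumptions based on the summary above, not statutory text; consult the bill itself for the precise definitions.

```python
from dataclasses import dataclass

# Illustrative sketch only. Field names and the "any one criterion
# suffices" reading are assumptions drawn from this alert's summary,
# not from the statutory text.

@dataclass
class Entity:
    avg_annual_gross_receipts: float   # in dollars
    consumers_or_devices_with_data: int
    sells_or_trades_personal_info: bool

def is_covered(e: Entity) -> bool:
    """Return True if the entity appears to meet any coverage
    criterion described in the bill summary above."""
    return (
        e.avg_annual_gross_receipts > 50_000_000
        or e.consumers_or_devices_with_data > 1_000_000
        or e.sells_or_trades_personal_info
    )

# Example: a data broker with modest receipts would still be covered,
# because selling or trading personal information alone suffices.
broker = Entity(10_000_000, 250_000, True)
assert is_covered(broker)
```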
The bill would encompass a large share of AI tools used across industries, including facial recognition, chatbots, recruiting tools, ad targeting, and credit and mortgage calculations. Proponents say the bill addresses the risks of unfair discrimination and inadvertent bias that can arise from the unchecked use of these AI tools.
The bill would also require the FTC to promulgate regulations, within two years, requiring covered entities to “conduct automated decision system impact assessments” of “high-risk automated decision systems.” The bill deems any violation of these regulations an unfair or deceptive practice under the Federal Trade Commission Act. The bill further allows state attorneys general to bring civil actions in federal court on behalf of state residents if the attorney general has reason to believe that an entity is engaged in a practice that violates the Act.
The bill illustrates the growing interest in new regulatory requirements to prove that automated systems are fair, non-invasive, and non-discriminatory. Whether it will or can apply directly to insurance companies, however, is questionable under the McCarran-Ferguson Act (15 U.S.C. § 1012).
Read the full text of the bill here. A parallel House bill is being sponsored by Representative Yvette Clarke (D-NY).