Algorithmic Accountability Act Reflects Growing Interest in Regulation of AI
Client Alert | 2 min read | 04.22.19
Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) introduced the Algorithmic Accountability Act in the Senate last Wednesday. The federal legislation would require entities to ensure that their automated decision systems do not expose consumers to unfair bias, inaccuracies, or privacy and security risks. The bill “direct[s] the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”
The bill defines “automated decision system impact assessment” as a “study evaluating an automated decision system” and its “development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security.” At a minimum, entities with automated decision systems deemed “high-risk” must provide the FTC with a detailed description of the system; a cost-benefit analysis in light of the system’s purpose; a risk assessment regarding consumer privacy and risks of “inaccurate, unfair, biased, or discriminatory decisions impacting consumers”; and efforts the entity plans to make to minimize these risks.
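For readers mapping these statutory elements onto internal compliance tooling, the minimum contents of an assessment might be sketched as a simple record. This is a minimal illustration only; the structure and field names below are our shorthand for the elements listed above, not terms defined in the bill.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessmentRecord:
    """Illustrative record of the bill's minimum assessment contents.

    Field names are our own shorthand for the statutory elements,
    not terminology drawn from the bill itself.
    """
    system_description: str        # detailed description of the automated decision system
    cost_benefit_analysis: str     # analysis in light of the system's stated purpose
    privacy_risk_assessment: str   # risks to consumer privacy and security
    decision_risk_assessment: str  # risks of inaccurate, unfair, biased, or discriminatory decisions
    mitigation_measures: list[str] = field(default_factory=list)  # planned steps to minimize identified risks
```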
Covered entities include any company with more than $50 million in average annual gross receipts, companies possessing personal information on more than 1 million consumers or consumer devices, and any entity that collects consumers’ personal information “to sell or trade the information or provide third-party access to the information.”
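As a rough illustration, these coverage thresholds could be expressed as a single predicate, as sketched below. The function name, parameter names, and the treatment of both thresholds as strict inequalities are our assumptions for illustration, not statutory language.

```python
def is_covered_entity(
    avg_annual_gross_receipts: float,
    consumers_or_devices_with_data: int,
    sells_trades_or_shares_data: bool,
) -> bool:
    """Hypothetical check against the alert's summary of coverage criteria:
    >$50M average annual gross receipts, personal information on >1M
    consumers or consumer devices, or collecting personal information to
    sell, trade, or provide third-party access to it.
    """
    return (
        avg_annual_gross_receipts > 50_000_000
        or consumers_or_devices_with_data > 1_000_000
        or sells_trades_or_shares_data
    )
```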
The bill would encompass a large share of AI tools used across industries, including facial recognition, chatbots, recruiting tools, ad targeting, and credit and mortgage calculations. Proponents state that the bill attempts to address the risks of unfair discrimination and inadvertent bias that can result from unchecked use of these AI tools.
The bill would also require the FTC to promulgate regulations, within two years of enactment, requiring covered entities to conduct automated decision system impact assessments of “high-risk” automated decision systems. Any violation of these regulations would be deemed an unfair or deceptive practice under the Federal Trade Commission Act. The bill further allows state attorneys general to bring civil actions in federal court on behalf of state residents if the attorney general has reason to believe that an entity has engaged in a practice that violates the Act.
The bill illustrates the growing interest in new regulatory requirements to prove that automated systems are fair, non-invasive, and non-discriminatory. Whether it will or can apply directly to insurance companies is questionable under the McCarran-Ferguson Act (15 U.S.C. § 1012), which generally leaves regulation of the business of insurance to the states.
Read the full text of the bill here. A parallel House bill is sponsored by Representative Yvette Clarke (D-NY).