
Algorithmic Accountability Act Reflects Growing Interest in Regulation of AI

Client Alert | 2 min read | 04.22.19

Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) introduced the Algorithmic Accountability Act in the Senate last Wednesday. The federal legislation would require entities to ensure that their automated algorithmic decision systems do not expose consumers to unfair bias, inaccuracies, or privacy and security risks. The bill “direct[s] the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

The bill defines “automated decision system impact assessment” as a “study evaluating an automated decision system” and its “development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security.” At a minimum, entities with automated decision systems deemed “high-risk” must provide the FTC with a detailed description of the system; a cost-benefit analysis in light of the system’s purpose; a risk assessment addressing consumer privacy and the risks of “inaccurate, unfair, biased, or discriminatory decisions impacting consumers”; and the steps the entity plans to take to minimize those risks.

Covered entities include any company with over $50 million in average annual gross receipts, any company possessing the personal information of more than 1 million consumers or consumer devices, and any entity that collects consumers’ personal information “to sell or trade the information or provide third-party access to the information.”

The bill would encompass a large share of the AI tools used across industries, including facial recognition, chatbots, recruiting tools, ad targeting, and credit and mortgage calculations. Proponents say the bill addresses the risks of unfair discrimination and inadvertent bias that unchecked use of these AI tools can impose.

The bill would also require the FTC to promulgate regulations, within two years, requiring covered entities to “conduct automated decision system impact assessments” of “high-risk automated decision systems.” The bill deems any violation of these regulations an unfair or deceptive practice under the Federal Trade Commission Act. It further allows state attorneys general to bring civil actions on behalf of state residents in federal court if the attorney general has reason to believe that an entity is engaging in a practice that violates the Act.

The bill illustrates the growing interest in regulatory requirements that would obligate companies to prove their automated systems are fair, non-invasive, and non-discriminatory. Whether it will, or can, apply directly to insurance companies is questionable under the McCarran-Ferguson Act (15 U.S.C. § 1012), which generally reserves the regulation of insurance to the states.

Read the full text of the bill here. A parallel House bill is sponsored by Representative Yvette Clarke (D-NY).

