Colorado Enacts First State Law Regulating Employers’ Use of Artificial Intelligence in Employment Decisions
Client Alert | 5 min read | 05.30.24
On May 17, 2024, Colorado Governor Jared Polis signed S.B. 24-205, Consumer Protections for Artificial Intelligence, the first state law in the country to regulate employers’ use of artificial intelligence in employment decisions. The law regulates both companies that develop and companies that deploy “high-risk” artificial intelligence systems (“AI systems”). In particular, it sets forth provisions designed to ensure that developers and deployers use “reasonable care” to protect consumers from any “known or reasonably foreseeable risks of algorithmic discrimination” arising from the use of an AI system. The law then creates a rebuttable presumption, for both developers and deployers, that reasonable care was used if they meet specific requirements and disclose key information about their high-risk AI systems. The law will be enforced by the Colorado Attorney General, and a violation constitutes an unfair trade practice. The law becomes effective on February 1, 2026.
Colorado’s law comes nearly a year after New York City implemented its own law regulating the use of AI in employment decisions. Notably, however, the Colorado law extends beyond employment, reaching decisions and opportunities in other public and consumer-facing services, including education, financial services, housing, health care services, and legal services, among others. Additionally, while the NYC law focuses on transparency, requiring disclosure of the results of a bias audit and notice to employment applicants, the Colorado law not only requires such disclosures, but also places obligations on both developers and deployers to take active measures to mitigate the risks of algorithmic discrimination.
Despite ultimately signing the bill, Governor Polis expressed “reservations” about the law’s impact and urged the General Assembly to use the time before the law’s effective date to “reexamine” the law and “fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies.”
Key aspects of the Colorado law, particularly as it pertains to the use of high-risk AI systems in employment decisions, are described below.
Key Definitions
- A “high-risk artificial intelligence system” is defined as any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.
- A “consequential decision” is defined as a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.
- A “substantial factor” is a factor that (a) assists in making a consequential decision; (b) is capable of altering the outcome of a consequential decision; and (c) is generated by an AI system.
- “Algorithmic discrimination” is defined as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under state or federal law. It does not include the use of an AI system for the sole purpose of self-testing to identify, mitigate, or prevent discrimination, or expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination.
Developer Duties
The developer of a high-risk AI system is required to provide a deployer with documentation disclosing the following (an illustrative sketch of such a disclosure record appears after this list):
- The type of data used to train the AI system;
- The purpose and intended benefits, uses, and outputs of the AI system;
- Measures taken to evaluate the AI system for performance and mitigation of algorithmic discrimination;
- Known or reasonably foreseeable limitations and harmful or inappropriate uses of the AI system, and measures to mitigate known or reasonably foreseeable risks of algorithmic discrimination;
- Guidance for how the AI system should be used and monitored; and
- Information necessary for a deployer to complete an impact assessment.
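For teams operationalizing these developer duties, the documentation package maps naturally onto a structured record. The following is a minimal sketch in Python under our own assumptions: the class and field names (DeveloperDisclosure, training_data_types, and so on) are shorthand we chose for illustration, not terms defined by the statute, and any real disclosure should be prepared with counsel.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperDisclosure:
    """Illustrative record of the documentation a developer of a high-risk
    AI system furnishes to a deployer. Field names are our own shorthand."""
    training_data_types: list[str]       # types of data used to train the AI system
    purpose_and_intended_uses: str       # purpose, intended benefits, uses, and outputs
    evaluation_measures: list[str]       # evaluations for performance and discrimination mitigation
    known_limitations: list[str]         # known or reasonably foreseeable limitations and misuses
    risk_mitigations: list[str]          # measures mitigating foreseeable discrimination risks
    usage_and_monitoring_guidance: str   # how the system should be used and monitored
    impact_assessment_inputs: dict[str, str] = field(default_factory=dict)  # info the deployer needs for its impact assessment

    def missing_elements(self) -> list[str]:
        """List any required element that is still empty, so a package is
        not sent to a deployer incomplete."""
        required = {
            "training_data_types": self.training_data_types,
            "purpose_and_intended_uses": self.purpose_and_intended_uses,
            "evaluation_measures": self.evaluation_measures,
            "known_limitations": self.known_limitations,
            "risk_mitigations": self.risk_mitigations,
            "usage_and_monitoring_guidance": self.usage_and_monitoring_guidance,
        }
        return [name for name, value in required.items() if not value]
```

A record like this does not itself satisfy the statute; it simply gives a compliance team a checklist-shaped artifact to review against the six disclosure categories above.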
Developers are also obligated to publish on their websites a statement summarizing the types of high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination.
Deployer Duties
The Colorado law requires deployers to implement a risk management policy and program to govern the deployment of any high-risk AI system. The policy may cover multiple AI systems, must specify the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination, and must be systematically reviewed and updated. Deployers are further required to (1) complete an impact assessment for any high-risk AI system at least annually and within 90 days after any intentional and substantial modification to the AI system; (2) review their deployment of each high-risk AI system annually to ensure that it is not causing algorithmic discrimination; and (3) provide certain disclosures to consumers relating to the use of the AI system.
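The assessment cadence described above reduces to a simple date computation: a new impact assessment is due at least annually, and also within 90 days of any intentional and substantial modification. A minimal sketch of that scheduling logic, assuming those two triggers are the only ones in play:

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_substantial_modification: Optional[date] = None) -> date:
    """Earliest deadline for the next impact assessment under the two
    triggers described above: annual review, plus review within 90 days
    of an intentional and substantial modification."""
    annual_deadline = last_assessment + timedelta(days=365)
    if last_substantial_modification and last_substantial_modification > last_assessment:
        modification_deadline = last_substantial_modification + timedelta(days=90)
        return min(annual_deadline, modification_deadline)
    return annual_deadline

# Example: a system assessed on March 1 and substantially modified on
# September 15 must be reassessed by December 14, well before the annual date.
print(next_assessment_due(date(2026, 3, 1), date(2026, 9, 15)))  # 2026-12-14
```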
Impact assessments must include:
- A statement disclosing the purpose, intended use cases, deployment context of, and benefits afforded by the AI system;
- An analysis of algorithmic discrimination risks and steps taken to mitigate the risks;
- A description of the categories of data utilized;
- Metrics used to evaluate the performance and limitations of the AI system;
- Transparency measures taken; and
- A description of post-deployment monitoring and user safeguards.
Deployers are required to maintain all impact assessments, and all records concerning each impact assessment, for at least three years following the final deployment of the applicable AI system.
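The deployer’s record-keeping obligation likewise pairs a content checklist with a retention clock. The sketch below is illustrative only; the element names mirror the list above but are our own shorthand, and the three-year retention period is approximated as 3 × 365 days for simplicity:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Our shorthand for the six required elements of an impact assessment.
REQUIRED_ELEMENTS = (
    "purpose_and_intended_use",      # purpose, use cases, deployment context, benefits
    "discrimination_risk_analysis",  # algorithmic discrimination risks and mitigation steps
    "data_categories",               # categories of data utilized
    "performance_metrics",           # metrics for performance and limitations
    "transparency_measures",         # transparency measures taken
    "post_deployment_monitoring",    # monitoring and user safeguards
)

@dataclass
class ImpactAssessment:
    contents: dict[str, str]  # element name -> narrative description
    final_deployment: date    # date of final deployment of the assessed system

    def missing_elements(self) -> list[str]:
        """Required elements that are absent or empty."""
        return [e for e in REQUIRED_ELEMENTS if not self.contents.get(e)]

    def retain_until(self) -> date:
        """Records must be kept at least three years after final deployment."""
        return self.final_deployment + timedelta(days=3 * 365)
```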
The law also provides specific requirements in the event a high-risk AI system makes an adverse consequential decision as to a consumer. In those scenarios, the deployer is obligated to take the steps below (an illustrative sketch of the consumer notice follows the list):
- Disclose the principal reason or reasons for the decision, including (1) the degree to which use of the AI system contributed to the decision; and (2) the type and sources of data processed by the AI system;
- Provide an opportunity to correct any incorrect personal data used by the AI system; and
- Provide an opportunity to appeal the adverse consequential decision through a process allowing for human review, unless providing an appeal is not in the best interest of the consumer (for example, where any delay might pose a risk to the consumer’s life or safety).
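Because the required notice has a fixed shape, it can be assembled mechanically once the decision-specific facts are known. A minimal sketch, assuming a plain-text notice and parameter names of our own choosing:

```python
def adverse_decision_notice(principal_reasons: list[str],
                            ai_contribution: str,
                            data_types_and_sources: list[str],
                            correction_contact: str,
                            appeal_contact: str) -> str:
    """Assemble the consumer-facing disclosure described above: the principal
    reasons (including the degree of AI involvement and the data processed),
    plus paths to correct personal data and to appeal for human review."""
    lines = [
        "Principal reason(s) for this decision: " + "; ".join(principal_reasons),
        "Role of the AI system in the decision: " + ai_contribution,
        "Types and sources of data processed: " + "; ".join(data_types_and_sources),
        f"To correct inaccurate personal data used by the system, contact: {correction_contact}",
        f"To appeal this decision for human review, contact: {appeal_contact}",
    ]
    return "\n".join(lines)
```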
Deployers are also required to disclose on their websites information relating to the high-risk AI systems they deploy and the associated risks. Both developers and deployers are responsible for disclosing the use of an AI system to each consumer, unless it would be obvious to a reasonable person that they are interacting with an AI system.
Notably, the law exempts deployers with fewer than 50 full-time employees from certain obligations, provided that they do not use their own data to train the AI system, the AI system is used for its intended purposes and continues learning based on data derived from sources other than the deployer’s own data, and the deployer makes available to consumers any impact assessment completed by the developer and provided to the deployer.
Attorneys at Crowell are continuing to monitor legal developments regulating the use of artificial intelligence, and are available to answer any questions.