Labor & Employment Alerts

EEOC and DOJ Highlight ADA-Related Pitfalls of Artificial Intelligence in Employment Decisions

May 19, 2022

On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) published guidance to help employers that use artificial intelligence (“AI”) technology remain compliant with the Americans With Disabilities Act (“ADA”). On the same day, the Department of Justice (“DOJ”) posted its own guidance regarding AI-related disability discrimination. Both documents reflect the EEOC’s recent emphasis on how the use of AI in employment decisions may interact with disability rights. The new guidance follows EEOC Chair Charlotte A. Burrows’s October 2021 launch of the agency’s Artificial Intelligence and Algorithmic Fairness Initiative, which examines the use of AI, machine learning, and other emerging technologies in the context of federal civil rights laws.

As employers increasingly move toward remote work (and remote hiring), many have also increased their use of AI and algorithmic decision-making tools in recruitment, including using AI tools to screen resumes and administer pre-employment tests. The EEOC has stated that its guidance is not meant to be new policy but is instead intended to explain existing principles for the enforcement of the ADA and previously issued guidance. Nevertheless, employers should take care to balance the benefits of AI with the potential legal risks highlighted by the EEOC and DOJ. (Relatedly, employers should continue to be aware of the growing number of jurisdictions, including Illinois and New York City, with laws regulating the use of certain types of AI and algorithmic decision-making tools in employment decisions.)

The EEOC’s Guidance

The EEOC’s guidance, “Americans With Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” explores how existing ADA requirements may apply to the use of AI, software applications, and algorithms in employment-related decision-making processes and practices. The guidance further presents some practical pointers to employers in an effort to assist them with ADA compliance when using such tools.

Specifically, the EEOC identifies three primary concerns as examples of how an employer’s use of AI and other technological tools can result in discrimination against disabled individuals within the meaning of the ADA. The EEOC states that:

• Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools. For example, if an employer administers a test through computer software, it risks violating the ADA if it fails to offer extended time or an alternative version of a test, such as one that is compatible with accessible technology (like a screen reader) as a reasonable accommodation to those who need it on account of their disability.

• Without proper safeguards, workers with disabilities may be “screened out” from consideration for a job or promotion even if they can do the job with or without a reasonable accommodation. For example, video interviewing software that analyzes applicants’ speech patterns in order to reach conclusions about their ability to solve problems may assign a lower score to an applicant who has a speech impediment that causes significant differences in speech patterns.

• If the use of AI or algorithms leads to applicants or employees providing information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or constitute a “medical examination” for purposes of the ADA. For example, if a personality test asks questions about optimism, and if someone with Major Depressive Disorder (“MDD”) answers those questions negatively and loses an employment opportunity as a result, the test may “screen out” the applicant because of MDD.

The EEOC also identified a number of “promising practices” that employers should consider to mitigate the risk of ADA violations connected to their use of AI tools. Among such “promising practices,” the EEOC recommends:

• Informing applicants or employees of the steps that any evaluative process will include (e.g., if there is an algorithm being used to assess an employee) and providing an opportunity to request a reasonable accommodation.

• Using algorithmic tools that have been designed to be accessible to individuals with as many different types of disabilities as possible.

• Describing in plain language and accessible format the traits that an algorithm is designed to assess, the method by which the traits are assessed, and the variables or factors that may impact a rating.

• Ensuring that the algorithmic tool only measures abilities or qualifications that are truly necessary for the job, even for individuals who are entitled to on-the-job reasonable accommodations.

• Ensuring that the necessary abilities or qualifications are measured directly rather than by way of characteristics or scores that are correlated with the abilities or qualifications.

• Asking an algorithmic tool vendor to confirm that the tool does not ask job applicants or employees questions likely to elicit information about a disability or seek information about an individual’s physical or mental impairment or health, unless the inquiries are related to a request for reasonable accommodation.

The DOJ’s Guidance

The DOJ’s guidance, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” tracks the EEOC’s publication in explaining how algorithms and AI can inadvertently result in disability discrimination in hiring, particularly with respect to reasonable accommodations and screen-outs.

The DOJ’s guidance offers examples of the ways in which employers are applying these technologies, and outlines how such tools can result in discrimination by failing to reasonably accommodate, or by unfairly screening out, disabled applicants. The guidance also provides recommendations to employers on ADA-compliant practices, including adopting and implementing clear procedures for requesting reasonable accommodations.

Implications for Employers

For many employers, AI tools have presented an ostensible opportunity to reduce bias in hiring by leaning on tools advertised as being “bias-free” to efficiently move candidates through the application process. As the EEOC and DOJ guidance highlights, however, even robust tools present potential pitfalls. Employers should carefully consider the roadmap that the EEOC and DOJ have provided to avoid these pitfalls and mitigate their risk of potential liability for disability discrimination.

For more information, please contact the professional(s) listed below, or your regular Crowell & Moring contact.

Trina Fairley Barlow
Partner – Washington, D.C.
Phone: +1.202.624.2830
Kris D. Meade
Partner – Washington, D.C.
Phone: +1.202.624.2854
Rebecca L. Springer
Partner – Washington, D.C.
Phone: +1.202.624.2569
Jillian Ambrose
Counsel – Washington, D.C.
Phone: +1.202.624.2710