The Equal Employment Opportunity Commission (EEOC) has clarified that Title VII of the Civil Rights Act applies to AI-assisted hiring, promotion, and performance management tools. The 2023 technical assistance document explains that employers can face disparate impact liability if an AI tool screens out protected groups at higher rates than others — even if the employer did not design the tool and did not intend to discriminate.
If you use any AI tool in hiring decisions — resume screening, interview scheduling, assessment scoring, or reference checking — you are potentially liable under federal employment discrimination law if that tool has a disparate impact on a protected class (race, sex, age, national origin, disability). The EEOC does not require intentional discrimination; a neutral-seeming AI that produces discriminatory results is sufficient. You cannot outsource liability to your AI vendor: as the employer, you remain responsible for ensuring the tools you use comply with Title VII.
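One rough screen the EEOC's 2023 guidance discusses is the "four-fifths rule": if one group's selection rate is less than 80% of the highest group's rate, that is a conventional red flag for disparate impact (though not conclusive proof either way). Below is a minimal sketch of that arithmetic; the group names and counts are hypothetical illustration data, not from any real audit.

```python
# Sketch of the "four-fifths rule" screen for disparate impact.
# All group names and applicant counts here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the AI tool advanced."""
    return selected / applicants

def four_fifths_ratios(rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest-rate group.
    A ratio below 0.8 is a conventional red flag for disparate impact."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical output of an AI resume screener
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}
ratios = four_fifths_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b's ratio is 0.30 / 0.60 = 0.50, below the 0.8 threshold
```

A ratio above 0.8 does not guarantee compliance, and one below it does not prove a violation; courts and the EEOC also weigh sample size and statistical significance. But running this check regularly on each tool's actual outcomes is the kind of "meaningful testing" regulators expect.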
Rite Aid deployed facial recognition AI in hundreds of stores to flag suspected shoplifters. The system disproportionately misidentified people of color, women, and younger individuals as threats — causing them to be wrongly accused, followed, and publicly embarrassed in stores. Rite Aid failed to ensure the AI system was accurate and did not take reasonable steps to prevent misidentification harm.
Outcome: The FTC banned Rite Aid from using AI facial recognition in retail settings for five years. The company was required to delete all facial images it had collected, develop a comprehensive AI governance program, and implement meaningful accuracy testing before deploying any AI surveillance tool.