Requires employers and employment agencies in New York City to conduct independent bias audits of AI tools used in hiring decisions, publish audit results, and notify candidates that automated decision tools are being used.
If you use any AI tool to screen, rank, or select job candidates for positions in New York City — including resume screening, video interview analysis, or skill assessment tools — you must commission an independent bias audit annually, publish the results, and notify applicants. This law is actively enforced. Several companies have faced scrutiny for using AI hiring tools without completing the required audits.
Civil penalties of $500–$1,500 per violation; enforced by the NYC Department of Consumer and Worker Protection.
Rite Aid deployed facial recognition AI in hundreds of stores to flag suspected shoplifters. The system disproportionately misidentified people of color, women, and younger individuals as threats — causing them to be wrongly accused, followed, and publicly embarrassed in stores. Rite Aid failed to ensure the AI system was accurate and did not take reasonable steps to prevent misidentification harm.
Outcome: The FTC banned Rite Aid from using AI facial recognition in retail settings for five years. The company was also required to delete all facial images collected, develop a comprehensive AI governance program, and implement meaningful accuracy testing before deploying any AI surveillance tool.