AI tools used in hiring decisions are classified as high-risk under the EU AI Act — the highest compliance tier for non-prohibited AI systems. For employers using AI to screen resumes, score video interviews, or rank candidates, that classification triggers mandatory impact assessments, bias testing, candidate disclosure, and human oversight requirements before deployment.
At a glance:
- The EU AI Act classifies employment AI as high-risk, with mandatory compliance obligations for deployers.
- The EEOC holds employers liable for disparate impact from AI hiring tools regardless of vendor claims.
- The Colorado AI Act's employment-domain deadline is June 30, 2026.
- The three actions that matter: run disparate impact testing on your applicant pool, obtain vendor conformity documentation, and add candidate disclosure language to your job postings.
This guide covers what the EU AI Act, EEOC, and US state laws require for HR AI tools, how to test for disparate impact, and the disclosure language you need in place now.
Why Employment AI Is the Highest-Risk AI Category
Employment decisions have a material legal effect on individuals — they affect livelihood, income, and career trajectory. That is precisely why the EU AI Act placed AI used in employment, worker management, and self-employment access in the high-risk category (Annex III, Section 4).
The EEOC has taken the same position from a different legal angle: under Title VII of the Civil Rights Act and the Americans with Disabilities Act, employers are responsible for the discriminatory impact of their selection tools, including AI. A vendor telling you their tool is "bias-free" does not transfer your liability.
The practical result: any AI tool your team uses to screen, rank, score, or make decisions about job candidates — whether you built it or bought it — triggers compliance obligations under both EU and US law.
What the EU AI Act Requires for HR AI
For employers using AI in employment decisions within the EU, or using AI that affects EU residents, the EU AI Act high-risk requirements include:
Conformity Assessment
Before deploying a high-risk AI system, deployers must verify the system meets the Act's requirements. For third-party tools, this means obtaining the vendor's EU Declaration of Conformity. For internally built tools, it means conducting your own conformity assessment.
What to request from vendors:
- EU Declaration of Conformity for the AI system
- Technical documentation describing how the system works and what it was trained on
- Bias and accuracy testing results (EU AI Act Articles 10 and 15 requirements)
- Instructions for appropriate use and known limitations
If a vendor cannot provide these documents, their tool is not compliant for EU high-risk deployment.
Human Oversight
High-risk AI employment systems must have meaningful human oversight — not rubber-stamping. A recruiter who reviews 500 AI-scored applications in two hours without independent analysis does not satisfy this requirement.
Minimum human oversight requirements:
- At least one human decision-maker per consequential hiring stage
- That person must be able to override the AI recommendation
- Override decisions must be documented
Candidate Disclosure
Candidates subject to a high-risk AI employment system must be told:
- That AI is being used
- The general logic behind the decision
- How to request a human review
Under the EU AI Act, this disclosure must occur before the AI system processes the candidate's data — meaning it belongs in your job posting and application form, not in a post-rejection email.
EU AI Act Compliance Timeline for HR AI
The EU AI Act's high-risk provisions for employment AI become fully enforceable on August 2, 2026. Teams deploying employment AI after that date without the required documentation, oversight, and disclosure mechanisms are in violation.
EEOC Requirements: Disparate Impact Testing
The EEOC's position is that if an AI tool produces a statistically significant disparate impact on a protected class, the employer must justify the tool as job-related and consistent with business necessity — or stop using it.
The 4/5ths Rule (80% Rule)
The standard disparate impact test: calculate selection rates by protected group.
Formula:
Disparate impact ratio = Selection rate (protected group) / Selection rate (highest-selected group)
If ratio < 0.80, presumed disparate impact exists.
Example:
- AI selects 40% of white applicants and 24% of Hispanic applicants
- Ratio = 24/40 = 0.60 (below 0.80)
- Disparate impact exists — employer must justify the tool or replace it
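The 4/5ths calculation above takes only a few lines to automate. A minimal sketch in Python (group names and rates are illustrative, matching the example):

```python
def disparate_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Divide each group's selection rate by the highest group's rate."""
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

def flag_disparate_impact(rates: dict[str, float],
                          threshold: float = 0.80) -> dict[str, bool]:
    """Flag any group whose ratio falls below the 4/5ths (80%) threshold."""
    return {g: r < threshold
            for g, r in disparate_impact_ratios(rates).items()}

# Selection rates from the example: 40% of white applicants,
# 24% of Hispanic applicants selected by the AI screen.
rates = {"white": 0.40, "hispanic": 0.24}
ratios = disparate_impact_ratios(rates)   # hispanic: 0.24 / 0.40 = 0.60
flags = flag_disparate_impact(rates)      # hispanic flagged (0.60 < 0.80)
```

A flagged group is a presumption, not a verdict: it is the trigger for the justification-or-remediation analysis described above, and the inputs should come from your actual applicant-pool data, not aggregate vendor figures.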
How to Run Disparate Impact Testing
- Collect demographic data on your applicant pool — by application stage (applied, screened in, interviewed, offered, hired)
- Calculate selection rates at each stage for each protected class (race, sex, age 40+, disability status)
- Apply the 4/5ths test at each stage — if AI is used for initial screening, test the screening stage
- Document the analysis — date, methodology, data source, findings, action taken
- Repeat annually — a tool that passed testing two years ago may fail as your applicant pool changes
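The stage-by-stage procedure above can be sketched as a single pass over a hiring funnel. All stage names, groups, and counts below are hypothetical, not from any real applicant pool:

```python
# Hypothetical funnel: for each stage and group, (entered, advanced) counts.
funnel = {
    "screened_in": {"white": (200, 80), "black": (150, 45), "hispanic": (100, 24)},
    "interviewed": {"white": (80, 30),  "black": (45, 16),  "hispanic": (24, 9)},
}

def stage_report(funnel: dict, threshold: float = 0.80) -> dict:
    """Per stage: each group's selection rate, its ratio vs. the
    highest-selected group, and a 4/5ths-rule flag."""
    report = {}
    for stage, groups in funnel.items():
        rates = {g: advanced / entered
                 for g, (entered, advanced) in groups.items()}
        benchmark = max(rates.values())
        report[stage] = {
            g: {"rate": round(r, 3),
                "ratio": round(r / benchmark, 3),
                "flag": r / benchmark < threshold}
            for g, r in rates.items()
        }
    return report

for stage, rows in stage_report(funnel).items():
    print(stage, rows)
```

With these illustrative numbers, the screening stage flags two groups while the interview stage passes, which is why testing each stage separately matters: a clean end-to-end hire rate can hide disparate impact at the stage where the AI actually operates.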
What to Do When Disparate Impact Is Found
If testing reveals disparate impact:
- Pause use of the affected stage pending remediation
- Request the vendor's technical explanation for the disparity
- Document findings and your response timeline
- Implement an alternative selection mechanism for the affected stage while remediation is in progress
- Retest after any vendor updates
US State Laws Applying to AI Hiring
Beyond EEOC federal guidance, several states have specific AI hiring requirements:
| Jurisdiction | Law | Key Requirement |
|---|---|---|
| Illinois | AI Video Interview Act (2020) | Disclose AI use in video interviews; obtain consent; explain evaluation factors |
| New York City | Local Law 144 (2023) | Annual bias audit by independent third party for automated employment decision tools; post results publicly |
| Colorado | SB 24-205 (June 30, 2026) | Employment domain is covered; impact assessment, bias monitoring, human review required |
| California | CCPA/CPRA | Employees have right to know about automated decisions; opt-out rights |
| Maryland | House Bill 1202 (2020) | Disclosure required before facial recognition in video interviews |
If your team conducts hiring in New York City, you need an independent third-party bias audit of any automated employment decision tool, with a summary of results posted publicly. This goes beyond EEOC guidance, which contemplates employer self-testing but does not require an independent auditor.
Candidate Disclosure Language (Templates)
Job Posting Disclosure
Add to your job posting in the "How We Hire" or "Application Process" section:
We use AI-assisted tools in our recruitment process, including [resume screening / video interview analysis / skills assessment]. These tools are used to [describe use: e.g., "identify applications that match the stated requirements"]. A human recruiter reviews all AI-generated recommendations before any decision is made. If you have a disability that may be affected by AI assessment tools and would like an accommodation, please contact [[email protected]].
Application Form Disclosure
Notice: This application uses automated tools to process your submission. By submitting this application, you acknowledge that AI-assisted screening may be used as part of the evaluation process. You have the right to request human review of any decision. Contact [[email protected]] to request review.
Adverse Decision Notice (EEOC + Colorado AI Act)
When a candidate is rejected and AI was involved:
Thank you for your application for [role] at [Company]. We have decided to move forward with other candidates at this time.
AI disclosure: An automated system was used in reviewing your application. The principal factor(s) in this decision were: [specific reasons, e.g., "stated experience below minimum requirement of X years"]. You have the right to request human review of this decision within 30 days by contacting [[email protected]].
Vendor Due Diligence: Five Questions to Ask Before Deploying HR AI
Before signing any contract for an AI hiring tool:
- "Can you provide your EU Declaration of Conformity and EU AI Act Article 11 technical documentation?" — Required for EU deployment; absence means non-compliant for EU AI Act purposes.
- "What protected class disparate impact testing have you conducted and what were the results?" — Request full methodology and data, not just a "bias-free" claim.
- "Will your system provide the principal reason for each candidate decision?" — Required for EU AI Act individual notice and EEOC adverse action notices.
- "How do I obtain a human review of a decision for a specific candidate?" — The mechanism must exist at the vendor level before you can offer it to candidates.
- "Have you received any EEOC charges or regulatory inquiries related to bias in your hiring AI?" — A vendor who has faced complaints has a documented risk profile.
Use the AI vendor due diligence checklist for the full 30-question review framework.
Compliance Checklist: HR AI
- Inventory all AI tools used in any hiring, promotion, termination, or task-assignment decision
- Classify each as high-risk under EU AI Act (employment domain = automatic high-risk)
- Obtain EU Declaration of Conformity from all third-party HR AI vendors
- Request vendor bias testing methodology and results
- Run disparate impact testing on your own applicant pool (4/5ths test, by stage)
- Document testing methodology, results, and any actions taken
- Add candidate disclosure language to all job postings and application forms
- Implement a human review mechanism for candidates who request reconsideration
- Verify human reviewers can meaningfully override AI recommendations
- Check NYC Local Law 144 compliance if hiring in New York City
- Set calendar for annual disparate impact re-testing
- Add HR AI compliance to your AI risk register
References
- EU AI Act — Annex III, Section 4 (Employment and Worker Management): High-risk AI classification
- EEOC Technical Assistance Document: "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures"
- NYC Local Law 144: Automated Employment Decision Tools
- Illinois Artificial Intelligence Video Interview Act (820 ILCS 42)
- Related: Colorado AI Act Compliance Deadline 2026 — employment domain is in scope for the June 30 deadline
- Related: AI Vendor Due Diligence Checklist — full 30-question vendor assessment including HR AI-specific questions
- Related: Red Teaming AI Systems — verify HR AI bias controls hold under adversarial conditions before deployment
- Related: AI Governance for Small Teams: Complete Guide — full governance framework covering HR, healthcare, and fintech sectors with master implementation checklist
