A category defined by the EU AI Act for AI systems that pose significant risks to health, safety, or fundamental rights. High-risk systems include: AI used in critical infrastructure, education, employment (hiring, promotion, performance evaluation), essential services (credit, insurance, emergency services), law enforcement, migration and asylum decisions, and administration of justice. High-risk AI systems face the Act's strictest requirements: conformity assessment, technical documentation, risk management systems, data governance measures, logging, transparency, human oversight, accuracy, and cybersecurity. Annex III of the EU AI Act lists the specific categories.
Why this matters for your team
If any AI tool your team uses falls in the EU AI Act's high-risk categories — hiring, credit, education, law enforcement, or critical infrastructure — your compliance obligations are substantially higher. Check Annex III of the Act against your use cases before assuming you're in the minimal-risk bucket.
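The triage step described above can be sketched in code. The category list and matching logic below are hypothetical simplifications for illustration only; they are not a substitute for reading Annex III itself or obtaining legal advice.

```python
# Illustrative sketch: a simplified triage helper for flagging use cases
# that MAY fall under the EU AI Act's high-risk categories (Annex III).
# The category names and tag-matching approach are assumptions made for
# this example, not an official taxonomy.

HIGH_RISK_CATEGORIES = {
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and asylum",
    "administration of justice",
}

def may_be_high_risk(use_case_tags: set[str]) -> bool:
    """Return True if any tag matches a (simplified) high-risk category."""
    return bool(use_case_tags & HIGH_RISK_CATEGORIES)

# An applicant-ranking tool tagged "employment" is flagged for review;
# an internal chatbot with no matching tag is not.
print(may_be_high_risk({"employment", "analytics"}))  # True
print(may_be_high_risk({"internal chatbot"}))         # False
```

A flag from a helper like this should trigger a closer reading of Annex III for that use case, not a compliance conclusion either way.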
For example, an AI system a company uses to rank job applicants falls under the EU AI Act's high-risk category for employment decisions. Before deploying it in the EU, the company must conduct a conformity assessment, maintain technical documentation, and implement human oversight.