The process by which an AI system is evaluated to confirm it meets the requirements of the EU AI Act before being placed on the market. For most high-risk AI systems, providers can self-certify via an internal conformity assessment. For certain categories (biometric systems, safety components of critical infrastructure), third-party assessment by a 'notified body' is required. The assessment must verify that the system meets requirements for risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, and cybersecurity.
Why this matters for your team
For most high-risk categories under the EU AI Act, self-certification is allowed — which means the compliance burden falls on your team, not an external body. Start documenting your risk management and testing processes now so that conformity assessment becomes a documentation exercise rather than a last-minute compliance scramble.