The ongoing collection, analysis, and reporting of real-world performance data after an AI system has been deployed. Required by the EU AI Act (Article 72) for providers of high-risk AI systems, post-market monitoring aims to detect performance degradation, emerging biases, unexpected behaviors, and safety incidents that were not apparent during pre-deployment testing. Providers must maintain a post-market monitoring plan and report serious incidents to national authorities within defined timeframes (under Article 73, no later than 15 days after becoming aware of the incident, with shorter deadlines for the most severe cases). In broader AI governance practice, this activity is known as model monitoring or model drift detection.
Why this matters for your team
Model performance degrades over time as the world changes — this is called model drift. Build basic monitoring in from day one: log outputs, track accuracy proxies, set alert thresholds, and define who investigates anomalies. For EU high-risk AI, monitoring and incident reporting are legal obligations with specific timelines.
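The basics listed above can be sketched in a few dozen lines. The class below is a minimal, illustrative example, not a production monitoring stack: it logs every output, tracks a rolling accuracy proxy, and flags when that proxy falls below an alert threshold. The class name, the 10% relative-drop threshold, and the window size are all assumptions chosen for the sketch.

```python
from collections import deque
from datetime import datetime, timezone

class ModelMonitor:
    """Minimal post-deployment monitor (illustrative sketch):
    logs outputs, tracks a rolling accuracy proxy, and raises an
    alert when it drops below a configured threshold."""

    def __init__(self, baseline_accuracy, alert_drop=0.10, window=100):
        # Alert when rolling accuracy falls more than 10% (relative)
        # below the pre-deployment baseline -- an assumed policy.
        self.threshold = baseline_accuracy * (1 - alert_drop)
        self.outcomes = deque(maxlen=window)  # rolling correct/incorrect flags
        self.log = []                         # audit trail of every prediction

    def record(self, input_id, prediction, was_correct):
        self.log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "input": input_id,
            "prediction": prediction,
            "correct": was_correct,
        })
        self.outcomes.append(was_correct)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def check_alert(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.threshold

# Simulate healthy operation (90% correct), then drift (50% correct).
monitor = ModelMonitor(baseline_accuracy=0.90)
for i in range(50):
    monitor.record(i, "shortlist", was_correct=(i % 10 != 0))
print(monitor.check_alert())  # False: accuracy still near baseline
for i in range(50, 100):
    monitor.record(i, "shortlist", was_correct=(i % 2 == 0))
print(monitor.check_alert())  # True: rolling accuracy fell below threshold
```

In practice the "was it correct" signal often arrives late or not at all, so teams substitute proxies such as human-override rates or downstream outcomes; the alert plumbing stays the same.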
An HR software vendor deploys an AI shortlisting tool and implements post-market monitoring, tracking weekly accuracy metrics and demographic pass-rate parity. Six months post-launch, the monitoring detects a 15% drop in accuracy for a specific job category — triggering a model review before a client notices.