AI QMS post-market monitoring (EU AI Act Art. 72)
A post-market monitoring plan template for high-risk AI systems under EU AI Act Article 72 — what to monitor, how often, what to do with the data.
- EU AI Act Art. 17
- ISO 9001
Article 72 of the EU AI Act (Regulation (EU) 2024/1689) requires providers of high-risk AI systems to establish and maintain a post-market monitoring system proportionate to the AI technologies and the risks of the high-risk AI system. This article gives you a post-market monitoring plan template, the monitoring metrics that map to Article 72, and the link back to the QMS under Article 17.
What Article 72 actually requires
Three components:
- A post-market monitoring system that actively and systematically collects, documents, and analyses relevant data on the performance of high-risk AI systems throughout their lifetime, including post-launch data provided by deployers and other sources.
- A post-market monitoring plan that forms part of the technical documentation referred to in Annex IV. The plan documents the methods, the data sources, and the analytical techniques.
- Continuous evaluation of compliance with the requirements of Chapter III, particularly Article 9 (risk management), Article 10 (data governance), Article 13 (transparency), Article 14 (human oversight), Article 15 (accuracy, robustness, cybersecurity).
The Article 72 obligations sit alongside Article 73 (serious incident reporting) and feed back into Article 9 risk management.
Plan structure
1. Cover
- AI system name, version range, intended purpose.
- Provider name, EU representative if applicable.
- Risk classification (high-risk per Article 6 / Annex III).
- Plan owner, version, approval date.
2. Scope
- AI system surfaces in scope (model, API, application, embedded).
- Deployment contexts.
- Use of generative or general-purpose components.
- Data sources used during operation.
3. Monitoring objectives
For each of the Chapter III requirements, define the monitoring objective. Examples:
- Article 9 (risk management): detect emerging risks not anticipated during pre-market design.
- Article 10 (data governance): detect data drift and shifts in population characteristics.
- Article 13 (transparency): detect deployer or end-user misunderstandings of intended use.
- Article 14 (human oversight): detect breakdowns of oversight controls, including over-reliance.
- Article 15 (accuracy, robustness, cybersecurity): detect performance regressions, adversarial inputs, and security incidents.
4. Metrics, sources and frequency
A practical metric matrix:
| Domain | Metric | Source | Frequency | Trigger |
|---|---|---|---|---|
| Performance | Accuracy on production sample | Logged predictions vs ground truth | Daily | Drop > 5pp from baseline |
| Performance | Calibration error | Logged predictions | Weekly | Drift beyond control limits |
| Robustness | Out-of-distribution rate | Detector | Continuous | Spike > threshold |
| Fairness | Subgroup accuracy gap | Logged predictions, demographics where lawfully held | Monthly | Gap > tolerance |
| Safety | Serious incidents | Incident pipeline | Continuous | Any |
| Security | Adversarial attack indicators | Security stack | Continuous | Severity ≥ medium |
| Human oversight | Override rate | Logged operator overrides | Weekly | Gradual decline (over-reliance) |
| Transparency | Deployer feedback | Channels per Article 50 | Quarterly | Pattern of misunderstanding |
| Data | Input distribution drift | Statistical tests | Daily | Shift beyond control limits |
Adapt to your domain. The metrics matrix is the executable part of the plan.
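One way to make the matrix executable is to encode each row as a metric definition with a trigger predicate. A minimal sketch, assuming hypothetical metric names, a hypothetical baseline, and illustrative thresholds (the Act prescribes none of these values):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    domain: str
    name: str
    frequency: str
    breached: Callable[[float], bool]  # trigger condition from the matrix

BASELINE_ACCURACY = 0.91  # hypothetical pre-market baseline

# Illustrative entries mirroring two rows of the matrix above.
METRICS = [
    Metric("performance", "production_accuracy", "daily",
           lambda v: BASELINE_ACCURACY - v > 0.05),  # drop > 5pp from baseline
    Metric("robustness", "ood_rate", "continuous",
           lambda v: v > 0.10),                      # spike > threshold
]

def evaluate(observations: dict[str, float]) -> list[str]:
    """Return the names of metrics whose trigger fired."""
    return [m.name for m in METRICS
            if m.name in observations and m.breached(observations[m.name])]
```

A breach returned by `evaluate` is what enters the decision and action workflow in section 7.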
5. Data sources
- Provider-side telemetry. Logged inputs, outputs, model state and errors, collected within the bounds of Article 10 data governance and the GDPR.
- Deployer reports. Article 26(5) requires deployers to feed back certain incidents and observations to providers.
- End-user feedback. Channels established under Article 50 transparency where applicable.
- Public sources. Adverse-event registries, regulator bulletins, vulnerability databases.
- Third-party audits. Notified body surveillance, customer audits, conformity assessment surveillance.
6. Analytical techniques
- Statistical process control on key metrics.
- Drift detectors for input distribution and prediction distribution.
- Subgroup analysis for fairness metrics.
- Root-cause analysis on incidents and serious incidents.
- Periodic adversarial robustness testing.
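For the drift-detector item, one common technique is the Population Stability Index between a baseline sample and a production sample of a feature. A stdlib-only sketch; the binning, smoothing, and the PSI > 0.2 rule of thumb are conventions, not requirements of the Act:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index for one input feature.
    Rule of thumb (convention only): PSI > 0.2 signals material drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log term stays finite.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

The same comparison run on the prediction distribution covers output drift; the control limits referenced above would then be set on the PSI series itself.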
7. Decision and action workflow
When a metric breaches a threshold:
- Triage. Classify the issue: performance, safety, fairness, security, transparency, oversight, or data.
- Risk re-evaluation under Article 9. Update the risk register.
- Decision. No action / parameter change / model update / deployer notification / market action.
- Action implementation with traceability to the build pipeline under Article 17 element 3.
- Verification of effectiveness, repeat measurement.
- Update to technical documentation per Annex IV.
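The workflow above can be carried by one record per breach, so that triage, decision, traceability, and verification live in a single auditable object. A sketch with illustrative field names (the `RISK-…` reference format is hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Decision(Enum):
    NO_ACTION = "no_action"
    PARAMETER_CHANGE = "parameter_change"
    MODEL_UPDATE = "model_update"
    DEPLOYER_NOTIFICATION = "deployer_notification"
    MARKET_ACTION = "market_action"

@dataclass
class MonitoringAction:
    """One record per threshold breach, from triage to verification."""
    metric: str
    issue_class: str                  # performance / safety / fairness / ...
    risk_register_ref: str            # link to the Article 9 risk entry
    decision: Decision
    build_ref: str = ""               # traceability into the build pipeline
    verified_effective: bool = False  # set after repeat measurement
    opened_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Serialised, these records double as the decision and action logs required under section 9.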
8. Reporting
- Internal. Monthly performance dashboard reviewed by the QMS owner; quarterly review of trends; annual review aligned with management review (ISO 9001 clause 9.3).
- External, serious incidents. Per Article 73, to the relevant national market surveillance authority within the prescribed window.
- External, deployer. Communication of relevant updates per the deployer agreement.
9. Records
- The post-market monitoring plan itself, with revision history.
- Monitoring data per the metrics matrix, retained for the period required by the harmonised standards or by national law (ten years default for high-risk AI documentation per Article 18).
- Decision and action logs.
- Effectiveness verification evidence.
- Periodic review minutes.
10. Review and update triggers
- Annual review at minimum.
- Substantial modification of the AI system per Article 3(23).
- New foreseeable risk identified.
- Serious incident.
- Regulator guidance updates.
- Harmonised standard updates.
Connection back to Article 17
Article 17 element 9 (“setting up, implementation and maintenance of post-market monitoring”) is the Article-72 hook into the QMS. Element 10 (“procedures related to reporting of serious incidents”) covers the Article-73 leg. Together, they make post-market monitoring routine documented information rather than ad-hoc work.
Common pitfalls
- Logging without analysis. A log lake is not a monitoring system. Only metrics that are reviewed, with thresholds and triggers, count.
- No deployer feedback channel. Deployers under Article 26(5) need a way to send relevant information back; if you have not built that channel, you cannot meet the inbound side of Article 72.
- Confusing serious incidents with all incidents. Article 73 thresholds matter; do not over-report or under-report.
- Over-collecting personal data in the name of monitoring. The GDPR data-minimisation principle still applies. Monitoring works on pseudonymised or aggregated data wherever possible.
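On the last point, a minimal pseudonymisation sketch using keyed hashing (the key handling shown is an assumption; keyed hashing is pseudonymisation under the GDPR, not anonymisation):

```python
import hmac
import hashlib

def pseudonymise(user_id: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash before it enters
    monitoring logs. The key is held outside the monitoring system, so
    re-identification requires access to both stores."""
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()
```

The same identifier always maps to the same token, so subgroup and longitudinal analysis still work on the logs without the raw identifier ever being stored.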