Software QMS — ISO 9001 vs ISO/IEC 90003
What ISO/IEC 90003 adds for software organisations on top of ISO 9001 — interpretive guidance, not a separate certification. Mapping to agile and DevOps.
ISO/IEC 90003 is the “interpretation” of ISO 9001 for software organisations. It is not a separate certification; it is interpretive guidance that explains how ISO 9001 clauses apply when the product is software. This article walks software teams through the practical implications, with concrete agile and DevOps mappings.
Why software needs interpretation
ISO 9001 was written with manufactured, tangible products in mind. Many clauses translate cleanly to software: documented information, nonconforming output, internal audit, management review. Other clauses need interpretation:
- Identification and traceability (8.5.2). What does traceability look like when the “lot” is a build artefact?
- Property of customers (8.5.3). When customers send data, code, or credentials.
- Preservation (8.5.4). Source code, build artefacts, runtime state.
- Calibration (7.1.5). Calibration of test environments and tooling.
- Validation of processes. Required where the output cannot be fully verified by subsequent monitoring or inspection; in some interpretations this applies to deployed software.
Mapping ISO 9001 to common software practice
Clause 4 context, product portfolio framing
External issues for software include the regulatory framework (GDPR, EAA, AI Act if applicable, sector-specific), platform policies (app stores, SaaS marketplace policies), open-source dependency licensing, security threat landscape. Internal issues include team structure, technology debt, deployment frequency.
Clause 6.1 risk and opportunity, at three levels
- Product risk. What goes wrong if the software fails: financial, safety, or reputational impact.
- Process risk. Where the development pipeline can fail: secret leakage, dependency vulnerabilities, unreviewed deployment.
- Operational risk. Production incidents, capacity, data loss.
A single risk register can cover all three levels; a field on each risk records which level it belongs to.
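A single register of this shape can be sketched as follows. The field names, scoring scale, and example risks are illustrative assumptions, not anything mandated by ISO 9001:

```python
from dataclasses import dataclass

# Minimal sketch of a single risk register spanning all three levels.
# Field names and the 1-5 likelihood/impact scale are illustrative.

@dataclass
class Risk:
    risk_id: str
    level: str        # "product", "process", or "operational"
    description: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood x impact prioritisation.
        return self.likelihood * self.impact

register = [
    Risk("R-001", "product", "Payment miscalculation", 2, 5),
    Risk("R-002", "process", "Secret committed to repository", 3, 4),
    Risk("R-003", "operational", "Primary database capacity exhausted", 2, 4),
]

# Prioritise across levels in one view, highest score first.
top = sorted(register, key=lambda r: r.score, reverse=True)
```

The point of the single register is the cross-level sort: a process risk can outrank a product risk, which separate per-level registers tend to hide.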
Clause 7.1.5, monitoring and measurement resources
For software, this includes:
- Test environments, version-controlled, reproducible, ideally ephemeral.
- Test data, managed, anonymised where personal data is involved.
- Tooling, coverage tools, static analysers, security scanners.
- Telemetry, logs, metrics, traces, sampling configuration.
Treat test environments as instrumentation. They have a calibration analogue: golden environments, periodically re-validated.
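The “calibration” of a golden environment can be as simple as a drift check against a pinned manifest. The tool names and versions below are made up for illustration:

```python
# Sketch of a periodic re-validation ("calibration") check for a test
# environment: compare the observed toolchain against a pinned golden
# manifest. Tools and versions here are illustrative assumptions.

GOLDEN = {"python": "3.12.4", "node": "20.14.0", "postgres": "16.3"}

def drift(observed: dict) -> dict:
    """Return {tool: (expected, observed)} for every deviation."""
    return {
        tool: (expected, observed.get(tool))
        for tool, expected in GOLDEN.items()
        if observed.get(tool) != expected
    }
```

An empty result is the calibration pass; a non-empty result is the software analogue of an out-of-calibration instrument and should block test runs until resolved.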
Clause 7.5, documented information for software
Minimum set:
- Coding standards and architecture decision records.
- Definition of done.
- Code review policy.
- Branch and merge policy.
- Release procedure.
- Incident response runbook.
- Disaster recovery plan.
- Data flow diagrams (for GDPR Article 30 RoPA).
Clause 8.3, design and development for software
If you ship software, you do design and development. The exclusion does not apply. Map:
- Inputs. Requirements, user stories, security requirements, performance budgets, accessibility requirements.
- Controls. Design reviews, threat modelling, architecture review, acceptance criteria.
- Outputs. Designs, schemas, API contracts, test plans.
- Verification. Tests, static analysis, peer review.
- Validation. UAT, beta release, dogfooding, customer pilot.
- Changes. Change-management ticket; evidence linked to commit.
Clause 8.5.1, production and service provision
The release pipeline is the production process. Controls include:
- Branch policy preventing unreviewed merges to main.
- CI gates, build, test, security scan, accessibility scan, license scan.
- Deployment authorisation evidence.
- Rollback procedures.
- Feature flags as risk control.
- Canary or staged rollout strategy.
Clause 8.5.2, identification and traceability
Every deployed artefact ties back to a commit, a pipeline run, an authorising change ticket, and a release note. Modern toolchains do this automatically; the QMS interpretation is to require it explicitly.
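The required linkage amounts to one record per artefact with no gaps. A minimal sketch, with assumed field names (most CI/CD tools can emit equivalents):

```python
# Sketch of the traceability record per deployed artefact (clause 8.5.2
# interpretation). Field names are assumptions; the requirement is only
# that each link in the chain is present and queryable.

def trace_record(artefact_digest: str, commit_sha: str, pipeline_run: str,
                 change_ticket: str, release_note: str) -> dict:
    record = {
        "artefact": artefact_digest,     # e.g. container image digest
        "commit": commit_sha,            # source revision
        "pipeline_run": pipeline_run,    # CI run ID or URL
        "change_ticket": change_ticket,  # authorising change record
        "release_note": release_note,    # customer-facing description
    }
    missing = [field for field, value in record.items() if not value]
    if missing:
        # A gap anywhere in the chain breaks traceability.
        raise ValueError(f"traceability gaps: {missing}")
    return record
```

Failing fast on a missing link is the enforcement the clause asks for; a record with blanks is not traceability.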
Clause 8.6, release of products
Release authorisation evidence: who approved, against what acceptance criteria, with what test results.
Clause 8.7, control of nonconforming output
For software, this is the production incident. Major elements:
- Detection.
- Containment.
- Communication (status page, customer notification).
- Resolution.
- Post-incident review.
- Action items into the corrective-action workflow under 10.2.
Clause 9.1, monitoring and measurement
KPIs sit at four levels:
- Software delivery. DORA-style metrics, deployment frequency, lead time, change-failure rate, mean time to restore.
- Quality. Defect density, escape rate, customer-reported defects.
- Reliability. Availability, error rate, latency, error budgets.
- Customer. NPS or equivalent, satisfaction with releases, support ticket categories.
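Two of the delivery-level metrics fall straight out of deployment logs. A minimal sketch, assuming a simple per-deployment record shape (the data below is invented for illustration):

```python
from datetime import date

# Sketch: change-failure rate and deployment frequency computed from
# deployment logs. The record shape and sample data are assumptions.

deployments = [
    {"day": date(2024, 5, 1), "failed": False},
    {"day": date(2024, 5, 2), "failed": True},
    {"day": date(2024, 5, 3), "failed": False},
    {"day": date(2024, 5, 3), "failed": False},
]

def change_failure_rate(deps: list) -> float:
    """Fraction of deployments that caused a failure in production."""
    return sum(d["failed"] for d in deps) / len(deps)

def deployments_per_day(deps: list) -> float:
    """Deployments averaged over the observed calendar span."""
    days = (max(d["day"] for d in deps) - min(d["day"] for d in deps)).days + 1
    return len(deps) / days
```

Lead time and time-to-restore need commit and incident timestamps respectively, but follow the same pattern: derive the KPI from records the pipeline already produces rather than from manual reporting.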
Clause 9.2, internal audit
Auditing a software organisation is process audit, not code audit. Auditors verify the pipeline controls work as documented; they do not review code as part of the QMS audit.
Clause 9.3, management review
Inputs include all four KPI levels above, security incidents, accessibility findings, regulatory deadlines, supplier dependency risks, and the post-incident review backlog.
Clause 10.2, corrective action
Root cause for software incidents typically falls into a few buckets: deployment process gap, monitoring gap, capacity assumption, dependency behaviour, security control. Track recurrences per bucket; three incidents in the same bucket signal a systemic issue.
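The recurrence rule is a one-liner over the incident log. A sketch, with the three-incident threshold taken from the rule of thumb above and bucket names invented for illustration:

```python
from collections import Counter

# Sketch of recurrence tracking for clause 10.2: flag a root-cause
# bucket as systemic once it accumulates three incidents. The threshold
# and bucket labels are assumptions, not standard requirements.

SYSTEMIC_THRESHOLD = 3

def systemic_buckets(incident_buckets: list) -> set:
    """Given one root-cause bucket label per incident, return the
    buckets that have crossed the systemic threshold."""
    counts = Counter(incident_buckets)
    return {bucket for bucket, n in counts.items() if n >= SYSTEMIC_THRESHOLD}
```

Running this over the incident log at each management review turns the bucket taxonomy into a concrete 10.2 trigger instead of a gut feeling.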
Agile and DevOps mappings
| Agile / DevOps | ISO 9001 interpretation |
|---|---|
| User story | Customer requirement (8.2) |
| Definition of done | Acceptance criteria (8.6) |
| Sprint planning | Operational planning (8.1) |
| Code review | Design verification (8.3.4) |
| CI pipeline | Process control (8.5) |
| Pair programming | Design review (8.3.4) |
| Retrospective | Improvement (10.3) |
| Post-incident review | Corrective action (10.2) |
| Backlog grooming | Continual improvement (10.3) |
| Deployment pipeline | Release authorisation (8.6) |
Where teams stumble
- Treating ceremonies as evidence. A retrospective without documented actions is not evidence of improvement.
- No traceability between user story, commit, pipeline run, and release. Modern tooling makes this trivial, but only if you configure it.
- Skipping the management review. “We have weekly stand-ups” is not a substitute for a documented review against required inputs.
- Audit fatigue from surfacing the wrong artefacts. Auditors want evidence the controls worked, not full code reviews.
When to look at sector overlays
- Medical devices, ISO 13485 + IEC 62304 software lifecycle.
- Automotive, IATF 16949 + ISO 26262 functional safety.
- Aerospace, AS9100 + DO-178C software considerations.
- AI systems classified high-risk, EU AI Act Article 17 + relevant harmonised standards once published.