Editorial standards and content authenticity
Last reviewed: 2026-04-27. This page is binding on every editor and contributor.
qms.best operates under an explicit authorship contract.
Every article carries machine-readable frontmatter declaring how it
was produced and where it sits in the editorial lifecycle. Articles
drafted by AI agents are marked editorialStatus: 'scaffold'
until a named human editor performs substantive review or rewrites
the body. We do not silently publish AI output as if it were
human-authored.
Editorial status — what each value means
Every article frontmatter sets one of:
- scaffold — drafted by an AI-agent persona (e.g., FERMI) at build time. The body is a research scaffold; it has been schema-validated and lint-checked but not substantively reviewed by a named human editor. Treat it as reference material with caveats; verify every clause and regulation citation against the primary source before relying on it.
- editor-review — a named human editor is in the process of reviewing the scaffold.
- ai-scaffold-rewritten — the editor rewrote the scaffold body by ≥70% (measured by 5-gram shingle overlap; recorded as rewritePercentage). Path B of the authorship policy.
- published — substantively reviewed, fact-checked, and signed by a named human editor with identity on file.
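The ≥70% rewrite threshold above could be computed along these lines. This is a minimal sketch: the exact tokenisation and the choice to measure novelty over the rewritten body's shingles are assumptions, not details this policy specifies.

```python
def shingles(text, n=5):
    """Return the set of n-word shingles (word n-grams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def rewrite_percentage(scaffold_body, rewritten_body, n=5):
    """Percentage of the rewritten body's shingles absent from the scaffold.

    Assumption: 'rewritten by X%' means X% of the new body's 5-gram
    shingles do not occur anywhere in the original scaffold body.
    """
    old, new = shingles(scaffold_body, n), shingles(rewritten_body, n)
    if not new:
        return 0.0
    return 100.0 * (1 - len(new & old) / len(new))
```

A value of 70.0 or higher would qualify the article for the ai-scaffold-rewritten status.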
The current corpus is at scaffold status pending human
editor recruitment. The status is reported in machine-readable form
in each article's source frontmatter and (in a future revision) on
the rendered page.
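As an illustration, a scaffold article's source frontmatter might look like this. The field names are the ones this policy defines; the title, the byline key, and all values are hypothetical.

```yaml
---
title: "Example article"          # hypothetical
byline: "FERMI"                   # AI-agent persona responsible for this revision
humanAuthored: false
aiAssistance: "drafting"
editorialStatus: "scaffold"       # not yet substantively reviewed by a human editor
editorAttestation: null           # set when a named human editor signs off
---
```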
Authorship and oversight
- Named byline on every article. The byline names the agent or human responsible for the current revision. AI-agent personas carry the kind: "ai-agent-persona" entry in the editors registry and are not authorised to ship content as published.
- Authorship contract is enforced at build time. The independent-content-engine audit gate fails the build if any non-draft file omits the authorship-contract fields (humanAuthored, aiAssistance, editorialStatus, editorAttestation).
- No anonymous content. A missing byline blocks the build.
- Editor competence. Human editors have practising experience in QMS implementation, audit, or compliance — competence evidence is on file. AI-agent personas are not editors in the E-E-A-T sense; they draft scaffolds.
- Disclosure of AI assistance is explicit and granular. The aiAssistance field declares one of: none, research, fact-check, copy-edit, drafting, or ai-draft-human-rewrite.
- We do not silently publish AI output. If the body was AI-drafted, it is either visibly marked scaffold or promoted only after substantive human review or a ≥70% rewrite.
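The build-time gate described above can be sketched as follows. This is an illustration only: the field set and status values come from this policy, but the function shape, the byline key, and the error messages are assumptions about tooling this page does not detail.

```python
REQUIRED_FIELDS = {"humanAuthored", "aiAssistance", "editorialStatus", "editorAttestation"}
ALLOWED_STATUS = {"scaffold", "editor-review", "ai-scaffold-rewritten", "published"}

def audit_article(frontmatter: dict) -> list[str]:
    """Return the list of authorship-contract gate failures for one article."""
    errors = []
    missing = REQUIRED_FIELDS - frontmatter.keys()
    if missing:
        errors.append(f"missing authorship-contract fields: {sorted(missing)}")
    if frontmatter.get("editorialStatus") not in ALLOWED_STATUS:
        errors.append("editorialStatus must be one of the four policy values")
    if not frontmatter.get("byline"):
        errors.append("missing byline: anonymous content blocks the build")
    return errors
```

A CI wrapper would run this over every non-draft file and fail the build on any non-empty result.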
Originality and citations
- Paraphrase, not reproduce. Standards text is paraphrased in our own words and cited by clause / article number. We do not bulk-reproduce ISO, EN, ETSI, IEC, or other standards.
- Cite every regulatory claim. Regulation references include the instrument identifier (e.g., "Regulation (EU) 2024/1689 Article 17"), so the reader can verify against the official source.
- No fabricated citations. Every cited regulation, standard, court case, or guidance document must exist and say what we claim it says. Editors verify each cite against the primary source.
- No plagiarism. Original drafting only. Quotations are properly attributed.
- Link back to issuers. When useful, we link to the issuing standards body's storefront so readers can purchase the official text.
Helpful-content and E-E-A-T alignment
Our editorial process aligns with Google's helpful-content guidance and the E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness):
- Experience. Editors have done QMS implementation work — articles describe practical pitfalls, not abstract theory.
- Expertise. Editor competence is documented; bylines are real names; conflict-of-interest disclosures are surfaced.
- Authoritativeness. Citations to primary sources; no recycled summaries of other sites.
- Trustworthiness. Independence disclaimer, sponsor firewall, correction policy, named contact for complaints.
- Helpful first. Articles are written to help practitioners, not to rank for keywords. We do not stuff, cloak, or auto-generate.
What we will not publish
- AI-generated content at scale, with no human editorial oversight.
- Sponsored content disguised as editorial. See disclosure policy.
- Bulk reproduction of standards text in violation of the issuing body's copyright.
- Plagiarised content — taken from another publication without attribution.
- Fabricated citations or invented case law — every regulatory or legal claim must verify against a primary source.
- Marketing copy framed as "guides".
- Doorway pages or thin keyword-stuffed pages.
Quality audit (every article)
Before any article goes live, the editor confirms each of the following. The CI pipeline runs an automated subset on every build.
- Named editor is set in frontmatter (CI gate).
- Description is present, ≤180 chars (CI gate).
- At least one related standard is tagged (CI gate).
- Word count ≥ 600 (CI gate).
- At least 3 internal links (CI gate).
- No banned sister-brand attributions (CI gate).
- No common low-effort filler phrases (CI advisory).
- Every regulatory or standards citation verified by editor against primary source (manual).
- No bulk reproduction of standards text (manual).
- Originality scan against published corpus (manual; tool: editor's discretion).
- Conflict-of-interest declaration on file (manual).
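The automated subset of the checklist above could be implemented along these lines. The thresholds are the ones stated in the gates; the function shape, frontmatter keys, and the markdown-link heuristic for counting internal links are illustrative assumptions.

```python
import re

def ci_gates(frontmatter: dict, body: str) -> list[str]:
    """Run the automated per-article checks from the quality audit."""
    failures = []
    if not frontmatter.get("editor"):
        failures.append("no named editor in frontmatter")
    desc = frontmatter.get("description", "")
    if not desc or len(desc) > 180:
        failures.append("description missing or over 180 chars")
    if not frontmatter.get("standards"):
        failures.append("no related standard tagged")
    if len(body.split()) < 600:
        failures.append("word count below 600")
    # Count markdown links to site-relative paths, e.g. [text](/path)
    if len(re.findall(r"\]\(/", body)) < 3:
        failures.append("fewer than 3 internal links")
    return failures
```

Manual gates (citation verification, originality scan, conflict-of-interest check) stay with the human editor and are not modelled here.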
Corrections and updates
- Reader-reported corrections are turned around within 5 working days.
- Material corrections are noted with a dated update line in the article footer.
- Cosmetic corrections are made silently, with the updatedDate frontmatter field bumped.
- Substantive scope changes (regulator deadline shift, standard revision) trigger a fresh editor review of every affected article.
Conflict of interest
- Editors' paid relationships with QMS vendors, training providers, and certification bodies are disclosed and kept on file.
- Editors do not write articles that recommend a sponsor in their own portfolio.
- Where an editor's prior employment overlaps with article subject matter, the disclosure appears in the article footer.
Reader feedback and complaints
If you believe an article on qms.best contains a factual error, a fabricated citation, plagiarism, or an undisclosed conflict of interest, contact editorial@qms.best. We acknowledge within 2 working days and resolve within 10. Editorial decisions are documented in the corrections log.
Why this matters
Search engines rank content that is genuinely helpful, demonstrably authored by humans with relevant experience, and traceable to primary sources. They penalise sites that publish unedited AI output, duplicate content, or unverified claims. Our editorial standards exist to be useful — and to keep qms.best on the right side of the helpful-content guidelines.