In December 2023, ISO and IEC published the first international standard for managing AI - ISO/IEC 42001:2023. Eighteen months later, it has gone from curiosity to procurement requirement. AWS certified its AI services in late 2024. Microsoft followed for Azure AI. By the back half of 2025, "show us your AI governance" had become a standard line item in enterprise vendor questionnaires - and a credible answer was either an ISO 42001 cert or a roadmap to one.
The pressure isn't just commercial. The EU AI Act's high-risk-system enforcement deadline lands August 2026. The U.S. NIST AI Risk Management Framework is referenced in federal acquisitions. State-level AI laws are accumulating. The old "we'll figure out AI governance later" answer no longer survives a real diligence call.
What ISO/IEC 42001 Actually Is
ISO/IEC 42001 is a management-system standard for AI. That phrase is doing a lot of work, so let's unpack it.
A management-system standard does not tell you which technical controls to implement, what model architecture to use, or how to red-team your prompts. It tells you how to govern the thing - how leadership commits to it, how risks get identified, how people get trained, how decisions get documented, how problems get found and fixed, and how the whole system gets measured and improved over time. ISO 9001 does this for quality. ISO 27001 does this for information security. ISO 42001 does this for AI.
The standard applies to any organization that develops, provides, or uses AI systems - which in 2026 is most of them. It is technology-neutral: a hospital using an FDA-cleared diagnostic model, a SaaS company shipping an LLM-powered assistant, and a bank deploying a fraud-detection model are all in scope. What the standard cares about is whether the organization has a system for governing those AI activities, not which AI activities it does.
Certification is voluntary. ISO does not certify organizations directly - accredited third-party certification bodies do, and the accreditation is granted by national accreditation bodies (ANAB in the U.S., UKAS in the U.K., DAkkS in Germany). DNV, BSI, A-LIGN, Schellman, and TÜV SÜD are among the larger accredited registrars currently issuing ISO 42001 certs. You can also implement ISO 42001 without certifying - many organizations do this first as a way to prepare for the audit later.
Why It Matters Right Now
Three forces are converging.
1. The EU AI Act's August 2026 deadline
The Act's general provisions and AI literacy requirements took effect February 2025. General-purpose AI (GPAI) provider obligations took effect August 2025. The big one - mandatory conformity assessments and CE marking for high-risk AI systems - lands August 2026. For organizations placing high-risk AI on the EU market, ISO 42001 is positioning to be the harmonized standard that demonstrates conformity with key Act requirements. CSA's analysis estimates that a mature ISO 42001 program covers roughly 78% of the operational scaffolding the Act requires - particularly Articles 9 (risk management), 12 (record keeping), 14 (human oversight), and 17 (quality management).
2. Procurement pressure
Vendor questionnaires are catching up to AI. By Q4 2025, the "AI governance" line item in enterprise security questionnaires had crossed the 50% mark. The credible answers are either ISO 42001, NIST AI RMF, or both - and the certification path closes the answer faster than a long narrative response. Early movers (AWS, Microsoft, Anthropic, several MLOps platforms, large telecoms) are using their certifications as procurement differentiators. The companies that haven't started yet are now starting.
3. Insurance and capital allocation
Cyber and tech E&O underwriters are beginning to ask AI-specific questions at renewal. Investors and boards are asking for documented AI governance as part of operational risk reviews. Neither group has settled on a single framework - but ISO 42001 is the one that maps cleanest to existing audit motion (ISO 27001) and existing legal motion (EU AI Act), which is why it's becoming the lingua franca.
The Standard's Structure: 10 Clauses
Like every modern ISO management-system standard, ISO 42001 follows the Annex SL High-Level Structure. The numbering is fixed, the section names are standardized, and the same scaffolding appears in ISO 27001, ISO 9001, ISO 22301, and ISO 14001. If you have implemented any of those, the table of contents below will be familiar.
Auditors read clauses 4 through 10 in that order, and certification depends on demonstrating that each clause has produced real, dated artefacts: a scope statement, an AI policy signed by leadership, a risk register with treatments, training records, an internal audit report, a management-review meeting log, and a CAPA log with actual entries. The standard is not satisfied by writing policy documents; it is satisfied by operating the policy long enough that an auditor can see the wear marks.
Annex A: All 39 Controls, Grouped
If clauses 4–10 are the management scaffolding, Annex A is the control catalogue. ISO 42001 Annex A contains 39 control objectives organized into nine sections (numbered A.2 through A.9 in the published standard - A.1 is reserved for the formal numbering convention). Like ISO 27001's Annex A, every control is an objective, not a prescription. The organization decides how it implements the control; the auditor decides whether the implementation satisfies the objective.
AWS published a thorough mapping of these controls to its own AI services; ISMS.online and SureCloud publish the unabridged control list. Here is the structure that matters for planning purposes:
Policies Related to AI
An AI policy aligned to organizational objectives, with subordinate policies for the specific AI activities the organization performs.
2 controlsInternal Organization
AI roles and responsibilities, reporting of concerns, and clear ownership for the AI lifecycle. The "who decides what" layer.
3 controlsResources for AI Systems
Documentation of the AI system resources - data, tooling, compute, human expertise - and the processes that govern their allocation.
6 controlsAssessing Impacts of AI Systems
The AI Impact Assessment process: documenting the AI system, its purpose, its impacts on individuals and society, and the conclusions of the assessment. This is the most distinctive Annex A section vs. ISO 27001.
5 controlsAI System Lifecycle
Lifecycle stages, requirements, design and development, verification and validation, deployment, operation and monitoring, technical documentation, and event logging. The largest Annex A section by control count.
11 controlsData for AI Systems
Data resources, data quality, data acquisition, data preparation, and processes for ensuring the data used to develop and operate AI systems is fit for purpose. Where ISO 42001 most clearly intersects EU AI Act Article 10.
5 controlsInformation for Interested Parties
System documentation, information for users, external reporting, communication of incidents, and information about AI capabilities and limitations - i.e., the transparency layer.
4 controlsUse of AI Systems
Processes for responsible use of the AI systems the organization deploys - intended use, monitoring, and the ability to switch off or override the AI when its outputs are wrong.
2 controlsThird-Party & Customer Relationships
Allocation of responsibilities between AI providers, AI users, and customers; managing third-party AI components and suppliers; and customer-facing obligations.
3 controlsThe two sections that surprise organizations migrating from ISO 27001 are A.5 (AI Impact Assessment) and A.6 (AI System Lifecycle). A.5 has no 27001 analogue at all - it requires you to write down the societal and individual impacts of every in-scope AI system, including how those impacts were assessed, mitigated, and accepted. A.6 covers the engineering lifecycle, where AI-specific verification and validation activities (including red-teaming and adversarial testing) live. If your organization has an MLOps pipeline but no documented record of how that pipeline ensures bias evaluation, accuracy testing, or safety review, A.6 is going to be your audit pain point.
The Engine: Plan-Do-Check-Act
Like every Annex SL management-system standard, ISO 42001 is built around the Plan-Do-Check-Act (PDCA) cycle. PDCA is the engine that turns a static policy library into a continually-improving system. The certification audit doesn't just check that your policies exist; it checks that the cycle is turning.
Plan
Clauses 4-7. Context, scope, leadership commitment, policy, roles, risk & impact assessments, objectives, training plan.
Do
Clause 8 + Annex A. Operate the controls, run the AI lifecycle, execute risk treatments, document evidence as you go.
Check
Clause 9. Monitor, measure, internal audit, management review. The phase organizations skip - and the one auditors look for first.
Act
Clause 10. Continual improvement, nonconformity tracking, corrective actions, lessons learned feeding back into Plan.
Most failed certification attempts fail at Check. Organizations write policies (Plan), implement controls (Do), and then forget to run the internal audit, hold the management review, and document the corrective actions. When the certification body's auditor arrives, the artefact trail breaks. Plan to run a full internal audit and management review at least once before your Stage 2 audit, and document them with real findings - not a vacuous "no nonconformities found" memo.
ISO 42001 vs NIST AI RMF vs EU AI Act
These three frameworks are not competitors. They occupy different rungs of the regulatory ladder and answer different questions. Companies serving regulated industries usually need all three: NIST as the flexible risk language, ISO 42001 as the certifiable management system, EU AI Act as the legal bar.
| ISO/IEC 42001 | NIST AI RMF | EU AI Act | |
|---|---|---|---|
| Type | International standard | Voluntary framework | Binding legislation |
| Geographic scope | Global | U.S. (international adoption growing) | EU + EU-market participants |
| Certifiable | Yes (3rd-party) | No | Conformity assessment for high-risk |
| Penalties for non-compliance | None directly | None directly | Up to €35M or 7% global revenue |
| Risk management | Clause 6.1, A.5, A.6 | GOVERN + MAP functions | Article 9 (prescriptive) |
| Bias / fairness controls | Objective level | MEASURE function | Article 10 (mandatory) |
| Transparency | A.8 | MEASURE function | Articles 13, 50 |
| Human oversight | A.9 | MANAGE function | Article 14 (prescriptive) |
| Cybersecurity testing | A.6 (objective) | MANAGE function | Article 15 (prescriptive) |
| Best for | Demonstrating governance to buyers and auditors | Building the risk vocabulary internally | Selling AI in the EU |
FairNow's mapping and RSI Security's crosswalk are the cleanest public references for translating between NIST and ISO 42001. NIST itself has signaled that an explicit crosswalk between the two is forthcoming. Until then, the operational pattern most practitioners follow is: NIST AI RMF for the language (the GOVERN / MAP / MEASURE / MANAGE functions are easier for engineers to absorb than ISO clauses), ISO 42001 for the structure (the management-system scaffolding gives you the artefacts auditors expect), and EU AI Act for the legal floor (additional prescriptive requirements layered on top, especially Articles 9, 10, 14, 15).
A 12-Month Implementation Roadmap
This is the timeline we walk first-time clients through. Organizations already certified to ISO 27001 can compress months 1-4 because the management-system scaffolding (clauses 4-10) is reusable. Greenfield organizations - no prior ISO experience - should plan on the full 12-15 months.
Scope & Gap Assessment
- Inventory AI systems: developed, provided, used. Include shadow AI (employee tools, embedded vendor features).
- Define AIMS scope statement. Smaller scope = easier first audit. You can expand later.
- Run a gap assessment against all 10 clauses and 39 Annex A controls. Output: a heatmap of where you are vs. where you need to be.
- Identify executive sponsor. ISO 42001 explicitly requires leadership commitment - a designated AI governance lead reporting to the C-suite is the minimum bar.
Policy & Risk Framework
- Author the AI Policy (clause 5.2) and subordinate operating policies (Annex A.2).
- Build the AI Risk Register and Treatment Plan (clause 6.1).
- Build the AI Impact Assessment process and template (Annex A.5).
- Author the Statement of Applicability (SoA) - one row per Annex A control with applicability decision and justification.
Implementation & Operation
- Roll out competence and AI literacy training (clause 7.2). Track who completed what.
- Operationalize Annex A.6 lifecycle controls in the engineering pipeline. Hook into existing MLOps where possible.
- Execute initial AI Impact Assessments for in-scope systems. Save the artefacts.
- Begin populating the CAPA log with real findings - not just placeholder "system performing as expected" entries.
Internal Audit & Management Review
- Run a full internal audit covering all clauses and applicable Annex A controls. Use a competent auditor (internal or external) who is not the policy author.
- Hold the management review meeting. Document: audit results, status of objectives, KPIs, nonconformities, corrective actions, opportunities for improvement.
- Close out internal audit findings before Stage 1.
Stage 1 Audit (Documentation Review)
- Engage an accredited certification body. Common choices include DNV, BSI, A-LIGN, Schellman, TÜV SÜD.
- Stage 1 reviews documentation: scope statement, AI policy, SoA, risk register, internal audit report, management review.
- Address any documented findings before Stage 2. Most are minor.
Stage 2 Audit (Implementation Audit) → Cert
- Stage 2 verifies the management system is operating effectively in practice. Auditors interview staff, review evidence, sample artefacts.
- Major nonconformities require remediation before cert is issued. Minor ones can be addressed via corrective action.
- Certification issued. Annual surveillance audits follow. Recertification audit at year 3.
Statement of Applicability: The Most Important Artefact
The Statement of Applicability (SoA) is the document that maps every Annex A control to a yes/no/partially-applicable decision, with justification. It is the artefact your auditor will spend the most time on, the artefact your buyers will most often request, and the artefact that demonstrates you actually thought about each control rather than blanket-asserting "yes, we do all of these."
Here is the shape it takes:
| Control | Title | Status | Justification & Implementation |
|---|---|---|---|
| A.2.2 | AI Policy | Applicable | Implemented via "AI Policy v2.0" approved 2026-02-12. Communicated org-wide via mandatory training. Reviewed annually by AI Governance Committee. |
| A.5.4 | AI System Impact Assessment | Applicable | AIIA template AIIA-T-01 used for all in-scope systems. 12 assessments completed YTD. Stored in /governance/aiia/ with version control. Reviewed at major-change events. |
| A.6.2.7 | AI Verification & Validation | Applicable | Pre-prod test suite covers OWASP LLM Top 10, MITRE ATLAS scenarios, and bias evaluation against documented test set. External pentest performed pre-launch and annually thereafter. |
| A.6.2.8 | AI System Deployment | Applicable | Deployment runbook DRB-AI-04. Includes pre-prod sign-off from product, security, and AI governance lead. Rollback procedure tested quarterly. |
| A.7.4 | Quality of Data for AI Systems | Applicable | Data quality framework DQF-2026 governs ingestion, validation, and drift monitoring. Quarterly bias evaluation report delivered to AIGC. |
| A.10.4 | Customer Use of AI Systems | Not Applicable | Organization does not provide AI systems to external customers in the current AIMS scope. Will be re-evaluated if customer-facing AI is added (tracked in scope review backlog). |
Two patterns we see consistently: (1) every "Not Applicable" needs a real, verifiable justification that an auditor can challenge - "we don't do that" is not enough. (2) the "Applicable" rows need pointers to the actual evidence (policy version, document ID, system name, training record). The SoA is a navigation map for the audit, not a wishlist.
Where Companies Actually Fail
Patterns we see across pre-cert assessments. None of these are fatal on their own, but they're the difference between sliding through Stage 2 and burning a quarter on remediation.
Treating it like ISO 27001 with extra steps
Teams who already have 27001 sometimes assume 42001 is just a bolt-on. The clauses are similar; the controls are not. A.5 (Impact Assessment) and the lifecycle controls in A.6 have no 27001 equivalent.
Scope statement that promises everything
Brand-new AIMS programs sometimes scope all AI systems globally on day one. You can't run an effective management system over an unbounded perimeter. The audit will find gaps everywhere.
No competence training records
Clause 7.2 expects you to demonstrate that the people working with AI are competent. "We hired smart people" isn't documentation. Auditors want training rosters, completion records, and refresher cadence.
An empty CAPA log
"No nonconformities found" across 12 months of operation is implausible. Auditors read an empty CAPA log as "we don't actually run the system."
AIIA as a checkbox
The AI Impact Assessment is the most distinctive 42001 artefact and the easiest to half-ass. A one-page template per system, all answers filled in 20 minutes, will not survive scrutiny.
No security testing under A.6
A.6 expects verification and validation including security testing. "We did a SOC 2 pentest" doesn't cover AI-specific threat surface (prompt injection, tool abuse, model exfiltration).
The Certification Process, End to End
Once your management system is operating - typically after months 7–9 of the roadmap - you engage an accredited certification body and begin the formal audit cycle.
Select Registrar
Stage 1 Audit
Stage 2 Audit
Cert Issued
Surveillance
Choosing a registrar
The registrar must be accredited by a national accreditation body recognized under the IAF Multilateral Recognition Arrangement. As of early 2026, the active 42001 registrar list is still smaller than for 27001 - check that the registrar you select holds 42001-specific accreditation, not just an ISO management-system accreditation in general. ANAB publishes its accredited 42001 CB list publicly.
Stage 1: Documentation Review
Stage 1 is largely a desk audit. The auditor reviews your scope statement, AI policy, Statement of Applicability, AI Impact Assessments, risk register, internal audit report, and management review minutes. The output is a Stage 1 report listing observations, areas of concern, and a determination of audit readiness for Stage 2. Expect 1–3 days of auditor time for a small organization, more for complex ones.
Stage 2: Implementation Audit
Stage 2 is the on-site (or remote) audit where the auditor interviews staff, reviews evidence, samples artefacts, and verifies the management system is operating in practice. This is where an empty CAPA log, a vacuous AIIA, or a missing training record becomes a major nonconformity. Stage 2 typically runs 3–10 audit-days depending on scope. You receive a formal nonconformity report at the end. Major nonconformities must be remediated and re-verified before the cert can be issued; minors can be closed via documented corrective action.
Surveillance and Recertification
The cert is valid for three years. The registrar performs a surveillance audit in years 1 and 2 - shorter than Stage 2, focused on a sample of clauses and controls. In year 3, a full recertification audit covers the entire management system again. Plan for the management system to keep operating, the CAPA log to keep filling, and the training records to keep accumulating - because the surveillance auditor will check.
How Lorikeet Helps
Lorikeet Security is an offensive-and-defensive shop. We do penetration testing - including AI-focused pentests that cover the verification-and-validation requirements in Annex A.6 - and we do the program work that surrounds it. Our vCISO practice walks clients through ISO 42001 readiness, including:
- Gap assessments against all 10 clauses and 39 Annex A controls, output as an executive heatmap and a remediation backlog.
- Policy and SoA authoring tuned to your organization - we don't ship templates and walk away.
- AI Impact Assessment workshops for your in-scope systems.
- AI-focused penetration testing mapping to A.6 verification controls and EU AI Act Article 15.
- Internal audit as an independent third party so your team can focus on remediation, not auditing themselves.
- Pre-Stage 1 readiness review so you walk into the certification body's audit with a known set of findings, not surprises.
If you're starting from zero, we run the program. If you have a vCISO or compliance lead already, we plug into the gap they don't have AI-specific expertise to fill.
Starting Your ISO 42001 Program?
Thirty-minute scoping call with a senior practitioner who has actually run the audit cycle. We'll map your AI inventory to the 39 Annex A controls, identify the four or five workstreams that will move the needle, and tell you honestly whether certification or framework adoption is the right next step. No deck, no sales pitch.
Book an ISO 42001 Scoping Call Learn About vCISO ServicesSources & Further Reading
- ISO/IEC 42001:2023 - AI Management Systems (official ISO page)
- ISO - "ISO 42001 Explained: What It Is and What It Means"
- AWS - ISO 42001 Compliance FAQ
- AWS Security Blog - AI lifecycle risk management with ISO/IEC 42001:2023
- Microsoft Learn - ISO/IEC 42001:2023 Compliance Offering
- BSI - ISO 42001 AI Management System
- DNV - ISO 42001 Certification Services
- ANAB - ISO/IEC 42001 Accredited Certification Bodies
- ISMS.online - ISO 42001 Annex A Controls Explained
- SureCloud - ISO 42001 Annex A Controls Guide
- Bastion - ISO 42001 Annex A Controls: Complete Guide
- A-LIGN - Understanding ISO 42001: The World's First AI Management System Standard
- NIST AI Risk Management Framework
- NIST AI 600-1 - Generative AI Profile
- European Commission - EU AI Act Regulatory Framework
- artificialintelligenceact.eu - Full text and analysis
- Cloud Security Alliance - ISO 42001 & NIST AI RMF for EU AI Act compliance
- FairNow - Mapping NIST AI RMF to ISO 42001
- RSI Security - NIST AI RMF / ISO 42001 Crosswalk
- Vanta - 5 Key Differences Between NIST AI RMF and ISO 42001
- EC-Council - EU AI Act vs NIST AI RMF vs ISO/IEC 42001 Comparison
- OWASP Top 10 for LLM Applications (2025)
- MITRE ATLAS - Adversarial Threat Landscape for AI Systems