ISO/IEC 42001 Deep Dive: The AI Management System Standard, Decoded (2026) | Lorikeet Security Skip to main content
Back to Blog
AI Governance & Compliance

ISO/IEC 42001 Deep Dive: The AI Management System Standard, Decoded (2026)

April 29, 2026 18 min read Lorikeet Security Team
About this post: Lorikeet Security delivers offensive testing and the defensive program work that surrounds it - vCISO, GRC, AI governance readiness, and pre-cert assessments. We've helped clients prepare for ISO 27001, SOC 2, and now ISO 42001. This guide is what we walk new clients through on day one. Related reading: Agentic AI Security Testing, AI/LLM Security Testing, OWASP Top 10 for LLMs 2025, ISO 27001 vs NIST CSF, Building Secure Autonomous AI.

In December 2023, ISO and IEC published the first international standard for managing AI - ISO/IEC 42001:2023. Eighteen months later, it has gone from curiosity to procurement requirement. AWS certified its AI services in late 2024. Microsoft followed for Azure AI. By the back half of 2025, "show us your AI governance" had become a standard line item in enterprise vendor questionnaires - and a credible answer was either an ISO 42001 cert or a roadmap to one.

The pressure isn't just commercial. The EU AI Act's high-risk-system enforcement deadline lands August 2026. The U.S. NIST AI Risk Management Framework is referenced in federal acquisitions. State-level AI laws are accumulating. The old "we'll figure out AI governance later" answer no longer survives a real diligence call.

What this post covers (1) What ISO 42001 actually is and why it differs from existing management-system standards. (2) The full clause structure, walked through one by one. (3) All 39 Annex A controls grouped by category. (4) The Plan-Do-Check-Act cycle the standard runs on. (5) An honest crosswalk to NIST AI RMF and the EU AI Act, including where ISO 42001 falls short. (6) A 12-month implementation roadmap. (7) A Statement of Applicability template. (8) Where companies actually fail their audit. (9) The certification process, end to end. We've kept this vendor-neutral - the path applies whether you hire us, hire someone else, or do it in-house.
39
Annex A Controls
10
Management Clauses
~78%
Overlap w/ EU AI Act
9–15 mo
Time to Cert (Greenfield)

What ISO/IEC 42001 Actually Is

ISO/IEC 42001 is a management-system standard for AI. That phrase is doing a lot of work, so let's unpack it.

A management-system standard does not tell you which technical controls to implement, what model architecture to use, or how to red-team your prompts. It tells you how to govern the thing - how leadership commits to it, how risks get identified, how people get trained, how decisions get documented, how problems get found and fixed, and how the whole system gets measured and improved over time. ISO 9001 does this for quality. ISO 27001 does this for information security. ISO 42001 does this for AI.

The standard applies to any organization that develops, provides, or uses AI systems - which in 2026 is most of them. It is technology-neutral: a hospital using an FDA-cleared diagnostic model, a SaaS company shipping an LLM-powered assistant, and a bank deploying a fraud-detection model are all in scope. What the standard cares about is whether the organization has a system for governing those AI activities, not which AI activities it does.

Certification is voluntary. ISO does not certify organizations directly - accredited third-party certification bodies do, and the accreditation is granted by national accreditation bodies (ANAB in the U.S., UKAS in the U.K., DAkkS in Germany). DNV, BSI, A-LIGN, Schellman, and TÜV SÜD are among the larger accredited registrars currently issuing ISO 42001 certs. You can also implement ISO 42001 without certifying - many organizations do this first as a way to prepare for the audit later.

How ISO 42001 differs from ISO 27001 Both standards share the same Annex SL High-Level Structure - the standardized clause framework ISO uses across management-system standards. Clauses 4 through 10 are nearly identical: context, leadership, planning, support, operation, performance evaluation, improvement. The difference lies in what gets governed. ISO 27001 protects information assets (CIA: confidentiality, integrity, availability). ISO 42001 protects against AI-specific harms - bias, opacity, lack of human oversight, unsafe outputs, automated decisions affecting individuals - and requires you to think about the entire AI lifecycle from design through retirement. The two are explicitly designed to integrate. If you're already 27001-certified, you're roughly halfway to 42001.

Why It Matters Right Now

Three forces are converging.

1. The EU AI Act's August 2026 deadline

The Act's general provisions and AI literacy requirements took effect February 2025. General-purpose AI (GPAI) provider obligations took effect August 2025. The big one - mandatory conformity assessments and CE marking for high-risk AI systems - lands August 2026. For organizations placing high-risk AI on the EU market, ISO 42001 is positioning to be the harmonized standard that demonstrates conformity with key Act requirements. CSA's analysis estimates that a mature ISO 42001 program covers roughly 78% of the operational scaffolding the Act requires - particularly Articles 9 (risk management), 12 (record keeping), 14 (human oversight), and 17 (quality management).

2. Procurement pressure

Vendor questionnaires are catching up to AI. By Q4 2025, the "AI governance" line item in enterprise security questionnaires had crossed the 50% mark. The credible answers are either ISO 42001, NIST AI RMF, or both - and the certification path closes the answer faster than a long narrative response. Early movers (AWS, Microsoft, Anthropic, several MLOps platforms, large telecoms) are using their certifications as procurement differentiators. The companies that haven't started yet are now starting.

3. Insurance and capital allocation

Cyber and tech E&O underwriters are beginning to ask AI-specific questions at renewal. Investors and boards are asking for documented AI governance as part of operational risk reviews. Neither group has settled on a single framework - but ISO 42001 is the one that maps cleanest to existing audit motion (ISO 27001) and existing legal motion (EU AI Act), which is why it's becoming the lingua franca.


The Standard's Structure: 10 Clauses

Like every modern ISO management-system standard, ISO 42001 follows the Annex SL High-Level Structure. The numbering is fixed, the section names are standardized, and the same scaffolding appears in ISO 27001, ISO 9001, ISO 22301, and ISO 14001. If you have implemented any of those, the table of contents below will be familiar.

1–3
Scope · Normative References · Terms & Definitions
Sets boundaries for what the standard covers and provides the vocabulary the rest of the document depends on. Auditors don't audit clauses 1–3; they read them to anchor everything else.
4
Context of the Organization
Define what AI you build, provide, or use; identify internal and external issues affecting it; identify interested parties (regulators, users, employees, society); set the boundaries of your AIMS scope statement. The scope statement is one of the first artefacts an auditor reviews.
5
Leadership
Top-management commitment, an AI policy, and clear roles and responsibilities. The standard expects executive ownership - not just a designated practitioner. The AI policy here is short and aspirational; the operational policies live in clause 7 and Annex A.
6
Planning
Risk and opportunity identification, AI risk assessment process, AI impact assessment, and measurable AI objectives. Clause 6.1.4 introduces the AI impact assessment, which has no direct ISO 27001 analogue - this is where bias, fairness, and societal-harm thinking enters the management system.
7
Support
Resources, competence (people who actually know AI), awareness, communication, and documented information. Clause 7.2 on competence is where AI literacy training requirements live.
8
Operation
Operational planning and control, AI risk treatment, AI impact assessment execution. This clause is where the Annex A controls are operationalized - the doing layer of the management system.
9
Performance Evaluation
Monitoring, measurement, analysis, evaluation; internal audit; management review. The internal audit and management review meetings are non-negotiable artefacts - missing them is the most common reason organizations fail Stage 2.
10
Improvement
Continual improvement, nonconformity, and corrective action. The CAPA (corrective and preventive action) log is the artefact the auditor will most want to see populated with real entries that have been closed out.

Auditors read clauses 4 through 10 in that order, and certification depends on demonstrating that each clause has produced real, dated artefacts: a scope statement, an AI policy signed by leadership, a risk register with treatments, training records, an internal audit report, a management-review meeting log, and a CAPA log with actual entries. The standard is not satisfied by writing policy documents; it is satisfied by operating the policy long enough that an auditor can see the wear marks.

Annex SL trick: "documented information" The phrase "documented information" appears throughout. It means written, dated, version-controlled, and accessible to those who need it. It does not mean a Confluence page someone wrote and then never updated. Auditors check for revision history. The smallest investment with the biggest audit payoff is committing to a real document control system from day one - which is why most successful 42001 implementations piggyback on the same SharePoint or Confluence space already used for ISO 27001 evidence.

Annex A: All 39 Controls, Grouped

If clauses 4–10 are the management scaffolding, Annex A is the control catalogue. ISO 42001 Annex A contains 39 control objectives organized into nine sections (numbered A.2 through A.9 in the published standard - A.1 is reserved for the formal numbering convention). Like ISO 27001's Annex A, every control is an objective, not a prescription. The organization decides how it implements the control; the auditor decides whether the implementation satisfies the objective.

AWS published a thorough mapping of these controls to its own AI services; ISMS.online and SureCloud publish the unabridged control list. Here is the structure that matters for planning purposes:

A.2

Policies Related to AI

An AI policy aligned to organizational objectives, with subordinate policies for the specific AI activities the organization performs.

2 controls
A.3

Internal Organization

AI roles and responsibilities, reporting of concerns, and clear ownership for the AI lifecycle. The "who decides what" layer.

3 controls
A.4

Resources for AI Systems

Documentation of the AI system resources - data, tooling, compute, human expertise - and the processes that govern their allocation.

6 controls
A.5

Assessing Impacts of AI Systems

The AI Impact Assessment process: documenting the AI system, its purpose, its impacts on individuals and society, and the conclusions of the assessment. This is the most distinctive Annex A section vs. ISO 27001.

5 controls
A.6

AI System Lifecycle

Lifecycle stages, requirements, design and development, verification and validation, deployment, operation and monitoring, technical documentation, and event logging. The largest Annex A section by control count.

11 controls
A.7

Data for AI Systems

Data resources, data quality, data acquisition, data preparation, and processes for ensuring the data used to develop and operate AI systems is fit for purpose. Where ISO 42001 most clearly intersects EU AI Act Article 10.

5 controls
A.8

Information for Interested Parties

System documentation, information for users, external reporting, communication of incidents, and information about AI capabilities and limitations - i.e., the transparency layer.

4 controls
A.9

Use of AI Systems

Processes for responsible use of the AI systems the organization deploys - intended use, monitoring, and the ability to switch off or override the AI when its outputs are wrong.

2 controls
A.10

Third-Party & Customer Relationships

Allocation of responsibilities between AI providers, AI users, and customers; managing third-party AI components and suppliers; and customer-facing obligations.

3 controls

The two sections that surprise organizations migrating from ISO 27001 are A.5 (AI Impact Assessment) and A.6 (AI System Lifecycle). A.5 has no 27001 analogue at all - it requires you to write down the societal and individual impacts of every in-scope AI system, including how those impacts were assessed, mitigated, and accepted. A.6 covers the engineering lifecycle, where AI-specific verification and validation activities (including red-teaming and adversarial testing) live. If your organization has an MLOps pipeline but no documented record of how that pipeline ensures bias evaluation, accuracy testing, or safety review, A.6 is going to be your audit pain point.

Annex B, C, and D - the support material Annex A is the control list. Annex B provides implementation guidance for each Annex A control - dozens of pages of practical examples. Annex C is a catalog of AI-related organizational objectives and risk sources. Annex D describes how 42001 integrates with sector-specific standards. The audit cares about Annex A; the implementation team should read Annex B closely. If your consultant or vendor only quotes Annex A, ask them what Annex B says about it.

The Engine: Plan-Do-Check-Act

Like every Annex SL management-system standard, ISO 42001 is built around the Plan-Do-Check-Act (PDCA) cycle. PDCA is the engine that turns a static policy library into a continually-improving system. The certification audit doesn't just check that your policies exist; it checks that the cycle is turning.

P

Plan

Clauses 4-7. Context, scope, leadership commitment, policy, roles, risk & impact assessments, objectives, training plan.

D

Do

Clause 8 + Annex A. Operate the controls, run the AI lifecycle, execute risk treatments, document evidence as you go.

C

Check

Clause 9. Monitor, measure, internal audit, management review. The phase organizations skip - and the one auditors look for first.

A

Act

Clause 10. Continual improvement, nonconformity tracking, corrective actions, lessons learned feeding back into Plan.

Most failed certification attempts fail at Check. Organizations write policies (Plan), implement controls (Do), and then forget to run the internal audit, hold the management review, and document the corrective actions. When the certification body's auditor arrives, the artefact trail breaks. Plan to run a full internal audit and management review at least once before your Stage 2 audit, and document them with real findings - not a vacuous "no nonconformities found" memo.


ISO 42001 vs NIST AI RMF vs EU AI Act

These three frameworks are not competitors. They occupy different rungs of the regulatory ladder and answer different questions. Companies serving regulated industries usually need all three: NIST as the flexible risk language, ISO 42001 as the certifiable management system, EU AI Act as the legal bar.

ISO/IEC 42001 NIST AI RMF EU AI Act
Type International standard Voluntary framework Binding legislation
Geographic scope Global U.S. (international adoption growing) EU + EU-market participants
Certifiable Yes (3rd-party) No Conformity assessment for high-risk
Penalties for non-compliance None directly None directly Up to €35M or 7% global revenue
Risk management Clause 6.1, A.5, A.6 GOVERN + MAP functions Article 9 (prescriptive)
Bias / fairness controls Objective level MEASURE function Article 10 (mandatory)
Transparency A.8 MEASURE function Articles 13, 50
Human oversight A.9 MANAGE function Article 14 (prescriptive)
Cybersecurity testing A.6 (objective) MANAGE function Article 15 (prescriptive)
Best for Demonstrating governance to buyers and auditors Building the risk vocabulary internally Selling AI in the EU

FairNow's mapping and RSI Security's crosswalk are the cleanest public references for translating between NIST and ISO 42001. NIST itself has signaled that an explicit crosswalk between the two is forthcoming. Until then, the operational pattern most practitioners follow is: NIST AI RMF for the language (the GOVERN / MAP / MEASURE / MANAGE functions are easier for engineers to absorb than ISO clauses), ISO 42001 for the structure (the management-system scaffolding gives you the artefacts auditors expect), and EU AI Act for the legal floor (additional prescriptive requirements layered on top, especially Articles 9, 10, 14, 15).

Where ISO 42001 falls short of the EU AI Act Two areas. Data governance: AI Act Article 10 prescribes statistical bias evaluation, dataset balance, and traceability requirements that ISO 42001 only addresses at the objective level. Cybersecurity testing: Article 15 mandates accuracy and adversarial robustness testing for high-risk systems; ISO 42001 says "test it" without saying how. If you're targeting EU markets, plan to layer OWASP LLM Top 10 testing, MITRE ATLAS threat modeling, and AI-focused penetration testing on top of your ISO 42001 program.

A 12-Month Implementation Roadmap

This is the timeline we walk first-time clients through. Organizations already certified to ISO 27001 can compress months 1-4 because the management-system scaffolding (clauses 4-10) is reusable. Greenfield organizations - no prior ISO experience - should plan on the full 12-15 months.

Phase 1
Months 1–2

Scope & Gap Assessment

  • Inventory AI systems: developed, provided, used. Include shadow AI (employee tools, embedded vendor features).
  • Define AIMS scope statement. Smaller scope = easier first audit. You can expand later.
  • Run a gap assessment against all 10 clauses and 39 Annex A controls. Output: a heatmap of where you are vs. where you need to be.
  • Identify executive sponsor. ISO 42001 explicitly requires leadership commitment - a designated AI governance lead reporting to the C-suite is the minimum bar.
Phase 2
Months 2–4

Policy & Risk Framework

  • Author the AI Policy (clause 5.2) and subordinate operating policies (Annex A.2).
  • Build the AI Risk Register and Treatment Plan (clause 6.1).
  • Build the AI Impact Assessment process and template (Annex A.5).
  • Author the Statement of Applicability (SoA) - one row per Annex A control with applicability decision and justification.
Phase 3
Months 4–7

Implementation & Operation

  • Roll out competence and AI literacy training (clause 7.2). Track who completed what.
  • Operationalize Annex A.6 lifecycle controls in the engineering pipeline. Hook into existing MLOps where possible.
  • Execute initial AI Impact Assessments for in-scope systems. Save the artefacts.
  • Begin populating the CAPA log with real findings - not just placeholder "system performing as expected" entries.
Phase 4
Months 7–9

Internal Audit & Management Review

  • Run a full internal audit covering all clauses and applicable Annex A controls. Use a competent auditor (internal or external) who is not the policy author.
  • Hold the management review meeting. Document: audit results, status of objectives, KPIs, nonconformities, corrective actions, opportunities for improvement.
  • Close out internal audit findings before Stage 1.
Phase 5
Months 9–10

Stage 1 Audit (Documentation Review)

  • Engage an accredited certification body. Common choices include DNV, BSI, A-LIGN, Schellman, TÜV SÜD.
  • Stage 1 reviews documentation: scope statement, AI policy, SoA, risk register, internal audit report, management review.
  • Address any documented findings before Stage 2. Most are minor.
Phase 6
Months 10–12

Stage 2 Audit (Implementation Audit) → Cert

  • Stage 2 verifies the management system is operating effectively in practice. Auditors interview staff, review evidence, sample artefacts.
  • Major nonconformities require remediation before cert is issued. Minor ones can be addressed via corrective action.
  • Certification issued. Annual surveillance audits follow. Recertification audit at year 3.

Statement of Applicability: The Most Important Artefact

The Statement of Applicability (SoA) is the document that maps every Annex A control to a yes/no/partially-applicable decision, with justification. It is the artefact your auditor will spend the most time on, the artefact your buyers will most often request, and the artefact that demonstrates you actually thought about each control rather than blanket-asserting "yes, we do all of these."

Here is the shape it takes:

Statement of Applicability — Excerpt
Rev 1.4 · Approved 2026-04-15
Control Title Status Justification & Implementation
A.2.2 AI Policy Applicable Implemented via "AI Policy v2.0" approved 2026-02-12. Communicated org-wide via mandatory training. Reviewed annually by AI Governance Committee.
A.5.4 AI System Impact Assessment Applicable AIIA template AIIA-T-01 used for all in-scope systems. 12 assessments completed YTD. Stored in /governance/aiia/ with version control. Reviewed at major-change events.
A.6.2.7 AI Verification & Validation Applicable Pre-prod test suite covers OWASP LLM Top 10, MITRE ATLAS scenarios, and bias evaluation against documented test set. External pentest performed pre-launch and annually thereafter.
A.6.2.8 AI System Deployment Applicable Deployment runbook DRB-AI-04. Includes pre-prod sign-off from product, security, and AI governance lead. Rollback procedure tested quarterly.
A.7.4 Quality of Data for AI Systems Applicable Data quality framework DQF-2026 governs ingestion, validation, and drift monitoring. Quarterly bias evaluation report delivered to AIGC.
A.10.4 Customer Use of AI Systems Not Applicable Organization does not provide AI systems to external customers in the current AIMS scope. Will be re-evaluated if customer-facing AI is added (tracked in scope review backlog).

Two patterns we see consistently: (1) every "Not Applicable" needs a real, verifiable justification that an auditor can challenge - "we don't do that" is not enough. (2) the "Applicable" rows need pointers to the actual evidence (policy version, document ID, system name, training record). The SoA is a navigation map for the audit, not a wishlist.


Where Companies Actually Fail

Patterns we see across pre-cert assessments. None of these are fatal on their own, but they're the difference between sliding through Stage 2 and burning a quarter on remediation.

Treating it like ISO 27001 with extra steps

Teams who already have 27001 sometimes assume 42001 is just a bolt-on. The clauses are similar; the controls are not. A.5 (Impact Assessment) and the lifecycle controls in A.6 have no 27001 equivalent.

FixRun A.5 and A.6 as a separate workstream from your 27001 maintenance. Different artefacts, different sign-offs.

Scope statement that promises everything

Brand-new AIMS programs sometimes scope all AI systems globally on day one. You can't run an effective management system over an unbounded perimeter. The audit will find gaps everywhere.

FixStart with a tightly-bounded scope - one product line, one customer-facing AI, one geography. Expand at recertification.

No competence training records

Clause 7.2 expects you to demonstrate that the people working with AI are competent. "We hired smart people" isn't documentation. Auditors want training rosters, completion records, and refresher cadence.

FixStand up an AI literacy training program in month 2 - even a 30-min internal module - and track completions in your LMS or HRIS.

An empty CAPA log

"No nonconformities found" across 12 months of operation is implausible. Auditors read an empty CAPA log as "we don't actually run the system."

FixLower the bar for what gets logged. Near-misses, observation findings, and process improvements all count. Aim for 8–15 entries before Stage 1.

AIIA as a checkbox

The AI Impact Assessment is the most distinctive 42001 artefact and the easiest to half-ass. A one-page template per system, all answers filled in 20 minutes, will not survive scrutiny.

FixRun AIIAs as 60–90 minute structured workshops with the product owner, an engineer, and a non-engineer. Document who attended.

No security testing under A.6

A.6 expects verification and validation including security testing. "We did a SOC 2 pentest" doesn't cover AI-specific threat surface (prompt injection, tool abuse, model exfiltration).

FixCommission an AI-focused pentest covering OWASP LLM Top 10 and agentic abuse paths. Annual at minimum.

The Certification Process, End to End

Once your management system is operating - typically after months 7–9 of the roadmap - you engage an accredited certification body and begin the formal audit cycle.

1
Select Registrar
Wk 1–2
2
Stage 1 Audit
Wk 3–4
3
Stage 2 Audit
Wk 8–12
4
Cert Issued
Wk 13–16
5
Surveillance
Yr 1, 2 · Re-cert Yr 3

Choosing a registrar

The registrar must be accredited by a national accreditation body recognized under the IAF Multilateral Recognition Arrangement. As of early 2026, the active 42001 registrar list is still smaller than for 27001 - check that the registrar you select holds 42001-specific accreditation, not just an ISO management-system accreditation in general. ANAB publishes its accredited 42001 CB list publicly.

Stage 1: Documentation Review

Stage 1 is largely a desk audit. The auditor reviews your scope statement, AI policy, Statement of Applicability, AI Impact Assessments, risk register, internal audit report, and management review minutes. The output is a Stage 1 report listing observations, areas of concern, and a determination of audit readiness for Stage 2. Expect 1–3 days of auditor time for a small organization, more for complex ones.

Stage 2: Implementation Audit

Stage 2 is the on-site (or remote) audit where the auditor interviews staff, reviews evidence, samples artefacts, and verifies the management system is operating in practice. This is where an empty CAPA log, a vacuous AIIA, or a missing training record becomes a major nonconformity. Stage 2 typically runs 3–10 audit-days depending on scope. You receive a formal nonconformity report at the end. Major nonconformities must be remediated and re-verified before the cert can be issued; minors can be closed via documented corrective action.

Surveillance and Recertification

The cert is valid for three years. The registrar performs a surveillance audit in years 1 and 2 - shorter than Stage 2, focused on a sample of clauses and controls. In year 3, a full recertification audit covers the entire management system again. Plan for the management system to keep operating, the CAPA log to keep filling, and the training records to keep accumulating - because the surveillance auditor will check.


How Lorikeet Helps

Lorikeet Security is an offensive-and-defensive shop. We do penetration testing - including AI-focused pentests that cover the verification-and-validation requirements in Annex A.6 - and we do the program work that surrounds it. Our vCISO practice walks clients through ISO 42001 readiness, including:

If you're starting from zero, we run the program. If you have a vCISO or compliance lead already, we plug into the gap they don't have AI-specific expertise to fill.

Starting Your ISO 42001 Program?

Thirty-minute scoping call with a senior practitioner who has actually run the audit cycle. We'll map your AI inventory to the 39 Annex A controls, identify the four or five workstreams that will move the needle, and tell you honestly whether certification or framework adoption is the right next step. No deck, no sales pitch.

Book an ISO 42001 Scoping Call Learn About vCISO Services
-- views
Link copied!
Lorikeet Security

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

Lorikeet Security helps modern engineering teams ship safer software. Our work spans web applications, APIs, cloud infrastructure, and AI-generated codebases — and everything we publish here comes from patterns we see in real client engagements.

Lory waving

Hi, I'm Lory! Need help finding the right service? Click to chat!