When ISO 42001 was published in December 2023, most security and compliance teams filed it away as something to revisit later. It was new, it was unfamiliar, and there were more immediate things to deal with. That calculation is changing quickly. Enterprise procurement teams are adding ISO 42001 questions to vendor security questionnaires. Regulators in the EU and UK are referencing it as a benchmark. And organizations that got ahead of SOC 2 before their customers started asking have a useful precedent: the right time to build the capability is before the requirement lands, not after.
This article is not a summary of every clause in the standard. It's a plain-language explanation of what ISO 42001 actually requires, where it differs from the ISO 27001 framework most organizations already know, and what certification readiness looks like in practice.
What ISO 42001 Is
ISO/IEC 42001 is the first international standard specifically addressing artificial intelligence management systems (AIMS). It establishes requirements for an organization to demonstrate responsible development, deployment, and use of AI, covering governance structures, risk management, data practices, human oversight, and incident response. It follows the Annex SL high-level structure used by ISO 27001 and ISO 9001, which means organizations already operating those management systems will find the structure familiar, even if the content is new.
The standard is designed for any organization that develops, provides, or uses AI systems, not just AI vendors. If your organization is using AI tools in regulated processes, deploying AI-assisted decision-making, or building AI features into products, ISO 42001 is relevant.
The Structure: Clauses 4–10 and Annex A
Like ISO 27001, the normative requirements live in Clauses 4 through 10. Annex A provides a set of controls that organizations select based on their AI risk profile. The clause structure maps to a Plan-Do-Check-Act cycle:
- Clause 4 (Context): Understand your organization's role in the AI value chain: are you a developer, a deployer, or both? Identify interested parties and their AI-related requirements.
- Clause 5 (Leadership): Establish executive accountability for the AI management system. This is not a security team function; it requires visible leadership commitment and a named owner.
- Clause 6 (Planning): Define AI objectives and conduct risk assessments for AI systems. This is where you document what your AI systems do, what risks they carry, and how you're addressing them.
- Clause 7 (Support): Resources, competence, awareness, and documentation. Your AI governance function needs trained people, not just policies.
- Clause 8 (Operation): The operational heart of the standard: implementing controls, managing AI-specific risks, and overseeing the AI system lifecycle.
- Clause 9 (Performance Evaluation): Monitoring, measurement, internal audit, and management review. The management system has to actually run, not just exist on paper.
- Clause 10 (Improvement): Nonconformity, corrective action, and continual improvement.
What It Actually Requires, in Plain Language
Rather than walking through every Annex A control, here are the five requirements that most organizations find substantive and operationally meaningful:
1. AI Management System Establishment
You need a documented, actively maintained AI management system, not just an AI policy. This means defined scope, named ownership, executive sign-off, and a governance structure with actual authority. "We have a policy" is not sufficient. The AIMS has to be a functioning system that gets reviewed, updated, and audited.
2. Risk Assessment for AI Systems
ISO 42001 requires documented risk assessments for each AI system in scope. The assessment needs to address AI-specific risks: bias and fairness, transparency, robustness, security vulnerabilities, and the potential for harmful outputs. This is substantively different from a standard information security risk assessment; it requires assessing what the AI system does, not just what data it handles.
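To make the difference concrete, here is a minimal sketch of what a per-system AI risk record might capture. The schema and field names are illustrative, not prescribed by the standard; the point is that each in-scope system gets its own assessment covering AI-specific risk categories, not just data sensitivity.

```python
from dataclasses import dataclass, field

# Illustrative categories drawn from the risks the standard expects you to address.
AI_RISK_CATEGORIES = [
    "bias_and_fairness",
    "transparency",
    "robustness",
    "security",
    "harmful_outputs",
]

@dataclass
class AIRiskAssessment:
    system_name: str
    purpose: str    # what the system does, not just what data it touches
    role: str       # "developer", "deployer", or both
    # category -> {"rating": ..., "treatment": ...}
    risks: dict = field(default_factory=dict)

    def untreated(self) -> list:
        """Categories assessed but still lacking a documented treatment."""
        return [c for c, r in self.risks.items() if not r.get("treatment")]

# Hypothetical example system, not from the standard.
assessment = AIRiskAssessment(
    system_name="claims-triage-model",
    purpose="Prioritizes incoming insurance claims for human review",
    role="deployer",
)
assessment.risks["bias_and_fairness"] = {
    "rating": "high", "treatment": "quarterly disparity testing"
}
assessment.risks["harmful_outputs"] = {"rating": "medium", "treatment": None}

print(assessment.untreated())  # categories still needing a treatment plan
```

An auditor will expect each record to show a treatment plan or an explicit acceptance decision, which is why tracking untreated categories per system is useful.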
3. Data Governance for AI
The standard has explicit requirements around the data used to train, validate, and operate AI systems. This includes data quality, data provenance, and ensuring that data practices are consistent with legal and ethical obligations. For regulated industries, this intersects directly with existing data governance requirements, but the standard requires documenting AI-specific data lineage, not just general data handling practices.
4. Human Oversight Requirements
ISO 42001 requires that AI systems operating in contexts with significant impact (consequential decisions, high-risk use cases) have defined human oversight mechanisms. This isn't just "a human can override the AI." It's a documented process for how humans review, validate, and intervene in AI-generated outputs, with evidence that the process actually operates.
5. Incident Management
AI-specific incident management is required, covering AI misbehavior, unexpected outputs, bias events, and security incidents involving AI systems. Your existing incident response process likely doesn't cover these categories explicitly, and ISO 42001 requires that it does.
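One quick way to find the gap in an existing incident process is to compare category lists. A sketch, using hypothetical taxonomies (your incident response tooling and category names will differ):

```python
# Hypothetical existing incident taxonomy for a typical security program.
EXISTING_INCIDENT_CATEGORIES = {
    "unauthorized_access", "data_breach", "malware", "availability_outage",
}

# AI-specific categories the standard expects the process to cover explicitly.
AI_INCIDENT_CATEGORIES = {
    "model_misbehavior",     # unexpected or degraded outputs
    "bias_event",            # discriminatory outcomes detected in production
    "harmful_output",        # unsafe or policy-violating generations
    "ai_security_incident",  # e.g., prompt injection, model extraction
}

# The set difference is the coverage gap in the existing runbook.
missing = AI_INCIDENT_CATEGORIES - EXISTING_INCIDENT_CATEGORIES
print(sorted(missing))
```

For most organizations the difference is the entire AI set, which is exactly the finding a gap assessment would surface.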
"Certification readiness is not the same as compliance. You can check every box on a gap assessment and still not be ready for a third-party audit. The difference is whether your management system actually operates, or whether it exists only in documents."
How It Differs From ISO 27001
ISO 42001 complements ISO 27001; it doesn't replace it. ISO 27001 addresses information security for your organization broadly. ISO 42001 addresses the specific risks introduced by AI systems, including risks that have no information security analog: model bias, hallucination, adversarial manipulation, and the governance of AI decision-making.
Organizations already certified to ISO 27001 have a structural advantage: the management system discipline, the documentation culture, and the internal audit capability are all transferable. But the content of ISO 42001 is genuinely new, and the AI-specific risk assessment, data governance, and human oversight requirements will need to be built, not adapted.
Alignment with NIST AI RMF
The NIST AI Risk Management Framework and ISO 42001 are complementary. NIST AI RMF organizes AI risk management around four functions (Govern, Map, Measure, and Manage) and provides detailed guidance on implementation. ISO 42001 provides the certifiable management system structure. Organizations in US-regulated industries often use NIST AI RMF as the implementation guidance and ISO 42001 as the certification target. The two frameworks are compatible; building to one gives you a significant head start on the other.
What Certification Readiness Actually Looks Like
There is a meaningful difference between "we comply with ISO 42001" and certification readiness. Compliance means your practices are consistent with the standard's requirements. Certification readiness means you can demonstrate that to a third-party auditor, with documented evidence, operating records, and a management system that runs continuously, not just before the audit.
Practical certification readiness milestones:
- Completed gap assessment against Clauses 4–10 and selected Annex A controls
- Documented AIMS scope, boundaries, and AI system inventory
- Risk assessments completed for all in-scope AI systems, with treatment plans in place
- Internal audit completed and findings closed or tracked
- Management review conducted with documented outputs
- At least one full Plan-Do-Check-Act cycle demonstrable to an auditor
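The milestones above can be tracked as a simple readiness check. A sketch with illustrative status values; the milestone keys paraphrase the list and are not official terminology:

```python
# Hypothetical snapshot of an in-progress readiness effort.
MILESTONES = {
    "gap_assessment_complete": True,
    "aims_scope_and_inventory_documented": True,
    "risk_assessments_with_treatment_plans": True,
    "internal_audit_findings_closed_or_tracked": False,
    "management_review_with_outputs": False,
    "full_pdca_cycle_demonstrable": False,
}

open_items = [name for name, done in MILESTONES.items() if not done]
ready = not open_items  # ready only when every milestone is complete

print(f"Certification-ready: {ready}")
print(f"Open milestones: {open_items}")
```

Note that the last milestone is usually the pacing item: you cannot demonstrate a full Plan-Do-Check-Act cycle faster than the cycle itself runs.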
Most organizations need six to twelve months to reach that state from a standing start, depending on the complexity of their AI footprint and the maturity of their existing management systems.
Your customers are already asking about this. Regulators are building frameworks that will reference it. The organizations that get ahead of it now will be answering "yes" to those questionnaire questions, and closing deals faster because of it, while competitors are still figuring out where to start.