ISO 42001: Why AI Governance Matters Now and How Organisations Are Making It Work

Artificial Intelligence is already embedded in many organisations, often more deeply than leadership realises. From copilots and automated decision tools to recommendation engines and analytics platforms, AI is influencing decisions, shaping client interactions and affecting individuals at scale.

What has changed is not just the technology, but the expectation of accountability.

ISO/IEC 42001, published in 2023, is the world’s first international, certifiable standard for an Artificial Intelligence Management System (AIMS). As discussed in our recent webinar, ISO 42001 is not about how clever an algorithm is. It is about how people use AI, how decisions are governed, and how risks are controlled across the AI lifecycle.

With accredited ISO 42001 certification now available in many territories, including the US and UK, organisations have a clear opportunity to move AI out of informal experimentation and into auditable, trustworthy and defensible business practice.

What ISO 42001 Really Is (and What It Isn’t)

A persistent misconception is that ISO 42001 is a technical standard aimed only at developers or data scientists. The webinar made clear that this assumption is wrong.

ISO 42001 focuses on governance, not code.

The standard provides a structured framework for identifying, governing and controlling AI systems across their lifecycle, making responsible AI use visible, documented and auditable. It addresses how AI is approved, deployed, monitored and reviewed, rather than prescribing specific technologies.

Crucially, ISO 42001 is concerned with how people use AI, how decisions are made using AI outputs, and how accountability is assigned. Certification signals that AI is being managed deliberately, rather than being allowed to evolve unchecked.

The standard also aligns with other ISO management systems, including ISO 27001, ISO 9001 and ISO 27701, and follows the Annex SL structure. For organisations with existing ISO certifications, this creates a familiar, readily integrated management system rather than a standalone compliance burden.

Why Organisations Are Choosing to Implement ISO 42001 Now

During the webinar, a common question emerged: why add another standard? The answer lies in the nature of AI risk.

AI mistakes can scale quickly. Bias propagates across populations. Security failures expose large datasets. Model drift can quietly degrade decision quality over time. The depth, speed and scale of AI failures can have serious consequences for individuals, organisations and society.

Many organisations already have AI in use without realising it. Staff sign up informally to new SaaS products with embedded AI features, and third-party providers introduce AI-generated outputs into business processes. This creates “shadow AI” – systems operating without formal oversight or accountability.

Paying lip service to ethical AI is no longer sufficient. Either AI is governed properly, or it is not governed at all.

ISO 42001 provides a way to demonstrate that governance is real, systematic and embedded into operations, rather than being a set of aspirational statements.

Understanding the “AI Family”

One of the key points from the webinar was that AI is not a single technology. It is a family of technologies, including machine learning (ML), natural language processing (NLP), computer vision, generative AI, robotics and automated decision systems.

Each of these introduces different risk profiles and governance challenges. A generative AI tool used for drafting content presents different risks from those of a machine-learning model used for credit scoring or eligibility decisions.

ISO 42001 requires organisations to identify, classify and govern all AI types in use, not just the most visible or business-critical systems. This classification underpins risk assessment, impact assessment and control selection.

Ignoring less obvious AI systems often leads directly to audit nonconformities.
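To make the classification requirement concrete, the sketch below shows one way an AI inventory might be recorded and checked for obvious governance gaps. This is purely illustrative: ISO 42001 does not prescribe a data model, and every field, category and threshold here is a hypothetical example.

```python
from dataclasses import dataclass

# Hypothetical AI-type categories, loosely following the "AI family"
# described above. ISO 42001 does not mandate these labels.
AI_TYPES = {"machine_learning", "nlp", "computer_vision",
            "generative_ai", "robotics", "automated_decision"}

@dataclass
class AISystem:
    name: str
    ai_type: str              # one of AI_TYPES
    owner: str                # accountable role; empty string = unassigned
    affects_individuals: bool
    impact_assessed: bool = False

def governance_gaps(inventory: list[AISystem]) -> list[str]:
    """Flag systems lacking an accountable owner or a required impact assessment."""
    gaps = []
    for s in inventory:
        if not s.owner:
            gaps.append(f"{s.name}: no accountable owner")
        if s.affects_individuals and not s.impact_assessed:
            gaps.append(f"{s.name}: affects individuals but no impact assessment")
    return gaps

inventory = [
    AISystem("Credit scoring model", "machine_learning",
             "Head of Credit Risk", affects_individuals=True,
             impact_assessed=True),
    # A typical "ghost AI" discovery: embedded, undocumented, unowned.
    AISystem("Embedded copilot", "generative_ai", "",
             affects_individuals=False),
]
print(governance_gaps(inventory))  # ['Embedded copilot: no accountable owner']
```

Even a simple register like this makes the less obvious systems visible, which is where audit nonconformities tend to originate.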

Ghost AI and the Reality of Unseen Risk

A recurring theme in ISO 42001 implementations is the discovery of AI operating silently in the background.

These “ghost AI” systems might include automated filters, recommendation engines, embedded copilots, analytics features or AI-enabled cloud services. They are often undocumented, poorly understood and excluded from governance frameworks.

The webinar highlighted that once organisations begin mapping their AI use, they frequently uncover far more AI than expected. These systems cannot simply be excluded because they are inconvenient to document. AI already in use must be brought into scope.

Failure to identify and govern ghost AI leads to gaps in accountability, unmanaged risks and audit findings that are difficult to remediate late in the certification process.

Roles, Accountability and Human Oversight

ISO 42001 places strong emphasis on clear roles and responsibilities.

The webinar explored the different roles that exist in the AI ecosystem, including AI providers, AI producers and AI subjects – the individuals affected by AI-driven decisions. These roles may be held by different organisations, different teams, or even the same organisation, depending on the context.

A frequent audit finding is unclear accountability for AI decisions. Without defined ownership, organisations struggle to demonstrate control, particularly where AI influences outcomes that affect people.

Human oversight is another critical requirement. For higher-risk AI use cases, organisations must be able to show how human judgement is applied, when interventions occur, and how decisions can be challenged or escalated.

Accessible routes for stakeholders to raise ethical or operational concerns are an important part of this oversight framework.

Impact Assessment Together with Risk Assessment

One of the most important lessons shared during the webinar was sequencing: implementing the standard step by step, as one requirement leads into the next.

Impact assessments should be conducted together with risk assessments. Understanding potential ethical, legal, operational and societal impacts early ensures that risks are properly understood and treated. Skipping straight to risk scoring without considering impact often leads to superficial controls and missed issues.

ISO 42001 encourages organisations to think beyond technical failure and consider how AI decisions affect individuals and groups, particularly where outcomes may be unfair, opaque or difficult to challenge.

For more guidance on conducting impact assessments, ISO/IEC 42005 provides explicit guidelines.
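The sequencing point above can be sketched in miniature: rate risk only after the impact categories have been assessed, so that a low-likelihood but high-impact system is not scored superficially. The scales, categories and scoring rule below are hypothetical illustrations, not anything ISO 42001 prescribes.

```python
# Illustrative only: a toy scoring rule combining an impact assessment
# (per-category severity, 1-5) with a risk assessment (likelihood, 1-5).
IMPACT_CATEGORIES = ("ethical", "legal", "operational", "societal")

def risk_rating(impacts: dict[str, int], likelihood: int) -> int:
    """Rate risk from the worst assessed impact category and the likelihood."""
    worst_impact = max(impacts.get(c, 0) for c in IMPACT_CATEGORIES)
    return worst_impact * likelihood

# A generative drafting tool: modest impact, errors fairly likely.
print(risk_rating({"ethical": 1, "operational": 2}, likelihood=4))  # 8
# A credit-scoring model: high ethical/legal impact, failures less frequent.
print(risk_rating({"ethical": 5, "legal": 4}, likelihood=2))        # 10
```

Note how the credit-scoring model outranks the drafting tool despite its lower likelihood; skipping straight to likelihood-driven scoring would invert that ordering.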

ISO 42001 and the Wider Regulatory Context

The webinar positioned ISO 42001 as a bridge between regulation and operational controls.

In the UK, there is no single AI law (as of December 2025). Instead, AI is overseen by multiple regulators using a “principles-based approach” focused on safety, fairness, transparency, accountability and redress. Internationally, risk-based regulatory models are emerging.

ISO 42001 maps closely to these expectations. It provides a practical framework for demonstrating that AI risks are identified, assessed, monitored and reviewed, and that governance is not dependent on informal knowledge or individual discretion.

Certification provides auditable evidence for clients, partners and regulators that AI is being managed responsibly and consistently.

How ISO 42001 Differs from Other ISO Standards

While ISO 42001 integrates with other management systems, it introduces AI-specific requirements that are not covered elsewhere.

These include lifecycle controls from data sourcing and training through deployment and monitoring, requirements around model validation and explainability, governance of bias and ethical risk, and AI-specific incident management.

Organisations that assume existing ISO certifications already cover AI governance often discover gaps during implementation. Integration reduces duplication, but ISO 42001 still demands new thinking and new evidence.

Common Audit Findings and Lessons Learned

Our experience from ISO 42001 implementation and auditing reveals consistent themes:

Incomplete AI inventories and poorly defined scope are common starting points. Documentation gaps around model validation, bias testing and drift monitoring frequently emerge. Training is often insufficient, particularly for non-technical staff who still interact with AI outputs.

Another recurring issue is poor integration with existing policies and procedures, leading to duplicated or conflicting controls. Joined-up management systems are essential.

Perhaps most importantly, embedding controls into daily practice is far harder than writing policies. Success depends on competence, culture change and ongoing monitoring.

Internal audits and gap analyses against ISO 42001 are essential tools for building momentum, prioritising effort and avoiding surprises during certification audits.

Starting the ISO 42001 Certification Journey

For organisations beginning their ISO 42001 journey, the webinar outlined a clear and practical path:

First, define a detailed scope covering AI systems, processes, sites and data flows. Next, conduct a structured gap analysis against ISO 42001 requirements. Design an AI Management System that fits the organisation and integrates with existing standards.

Targeted training is critical to build competence across leadership, technical teams and operational staff. Internal audits and management reviews drive continual improvement and prepare the organisation for certification.

Mock audits and evidence reviews reduce risk and build confidence ahead of external assessment.

How Assent Risk Management Supports ISO 42001

ISO 42001 is not a paper exercise. It requires informed judgement, structured governance and practical implementation.

Assent Risk Management specialises in outsourced AI governance, ISO 42001 consultancy and internal auditing. Our expert ISO 42001 Consultants help organisations identify and scope all AI in use, including shadow and embedded systems, design AI Management Systems that work in practice, and integrate ISO 42001 with existing management systems.

Our internal audit and readiness services help organisations avoid common pitfalls, prioritise effort and achieve certification efficiently.

With accredited certification now available, organisations that act early will be better positioned to demonstrate trust, accountability and control in an AI-driven world.

Contact us today to discuss your ISO 42001 Journey!

This blog was written using ChatGPT’s generative AI, based on notes and transcriptions of a recent webinar Assent participated in. It has been edited by humans.

ARMAdmin