Many organisations are using AI tools in their daily work, and increasingly building more specific AI agents, backed by complex models, to support their job roles.
I recently had an interesting conversation with our consultant Garry who raised a very good point!
How are AI Agents and the Models they produce backed up?
This is an excellent question, and one relevant across many of the ISO standards we work with, including ISO 27001, ISO 9001 and ISO 42001.
Here are our thoughts.
Why backing up AI is different from traditional IT backups
Most organisations understand that their digital data needs to be protected from accidental or intentional loss or damage.
Information backups are a well-established control: copies of files, databases and configuration files are made on a regular basis, and a set number of generations is retained, allowing an organisation to roll back to an earlier version and recover from a disaster.
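The generational approach described above can be sketched in a few lines of code. This is a minimal illustration, not a production backup tool; the function name and the five-generation default are assumptions for the example.

```python
import shutil
from datetime import datetime
from pathlib import Path

def rotate_backup(source: Path, backup_dir: Path, keep: int = 5) -> Path:
    """Copy `source` into `backup_dir` with a timestamped name,
    retaining only the newest `keep` generations."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    # Zero-padded timestamp means lexicographic sort == chronological sort.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")
    dest = backup_dir / f"{source.stem}-{stamp}{source.suffix}"
    shutil.copy2(source, dest)
    # Delete generations beyond the retention limit, oldest first.
    generations = sorted(backup_dir.glob(f"{source.stem}-*{source.suffix}"))
    for old in generations[:-keep]:
        old.unlink()
    return dest
```

The same rotation logic applies whether the file is a database dump, an agent configuration or a serialised model, which is why the control extends so naturally to AI assets.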
Testing those backups has become even more important in recent years, with crypto-locker attacks deliberately targeting and corrupting backups as well as live data.
However, AI systems introduce additional complexity because Models are often trained on large datasets, fine‑tuned over time and deployed alongside orchestration logic, prompts, configuration files and integrations with third‑party services.
AI agents can also evolve dynamically, particularly where continuous learning or frequent model updates are used.
From a compliance perspective, backing up AI is not simply about copying files. It is about ensuring that the organisation can recreate trusted, validated behaviour following an incident. This means preserving not only the model itself, but the surrounding context and prompts that allow it to operate as intended.
Common questions organisations are asking
One of the most common questions is whether organisations need to back up third‑party AI models at all. The answer depends on your role in relation to the AI system. If you are fully reliant on a SaaS AI provider, the backups may sit with the supplier, but a comprehensive risk assessment should still require you to understand, document and test that dependency. Where models are customised, fine‑tuned or deployed internally, they become information assets within the scope of ISO 27001.
Another frequent question relates to version control. AI outputs can change significantly between model versions, so organisations should retain historical versions of models and agent configurations. This allows them to demonstrate traceability, support investigations, and roll back safely if an update introduces unacceptable risk or bias. However, the frequency of these backups may depend on the extent to which the agent is used, and therefore on how quickly the model evolves.
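The traceability point above can be illustrated with a simple release manifest that records which model version and which exact configuration (by hash) were live at a given moment. This is a sketch under assumed names; the version strings are placeholders, not real model identifiers.

```python
import hashlib
import json
import time
from pathlib import Path

def record_release(manifest: Path, model_version: str, config_path: Path) -> dict:
    """Append a traceability entry: which model version and which exact
    configuration were in use at this point in time."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        # Hashing the config proves later which exact file was deployed.
        "config_sha256": hashlib.sha256(config_path.read_bytes()).hexdigest(),
    }
    history = json.loads(manifest.read_text()) if manifest.exists() else []
    history.append(entry)
    manifest.write_text(json.dumps(history, indent=2))
    return entry
```

A manifest like this is what lets an organisation answer "what model and configuration were in use on that date?" during an investigation or audit.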
One suggestion is that users can ask an AI agent itself to produce a full set of prompts and resources that would allow it to be rebuilt to its current state, and this “prompt to make a backup” can be run regularly. This approach also allows the backup to be tested: the prompts can simply be run in a separate environment, and the output of the restored copy compared against the live version.
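The "compare the restored copy against the live version" step above can be framed as a small test harness. The agent callables and the exact-match judge here are hypothetical stand-ins: in practice each agent would be a call to your real AI service, and a semantic comparison is usually needed rather than exact string equality.

```python
def compare_agents(prompts, live_agent, restored_agent, judge=None):
    """Run the same prompt set through the live agent and the restored
    backup, and report the prompts where their answers diverge.

    `live_agent` and `restored_agent` are any callables taking a prompt
    and returning text; `judge` decides equivalence (default: exact match).
    """
    judge = judge or (lambda a, b: a == b)
    diverged = []
    for prompt in prompts:
        live, restored = live_agent(prompt), restored_agent(prompt)
        if not judge(live, restored):
            diverged.append({"prompt": prompt, "live": live, "restored": restored})
    return diverged
```

An empty result on a representative prompt set gives some assurance that the rebuilt agent behaves like the original; any divergence is a prompt-level record to investigate before relying on the restored copy.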
Impact on risk assessment
AI backups should be explicitly considered within the organisational risk assessment process. Within standards such as ISO 27001 and ISO 42001, organisations are required to identify risks to the confidentiality, integrity and availability of assets. AI models and agents clearly fall within scope.
Loss or corruption of an AI model may result in service outages, incorrect decisions, regulatory breaches or reputational damage. Where AI supports safety‑critical, financial or compliance‑related processes, the impact may be severe and these risks should be assessed formally, with appropriate treatment plans defined and documented.
Business continuity and disaster recovery considerations
Business continuity planning increasingly depends on digital resilience, with AI now forming a key part of that. If an AI agent is unavailable following a cyber incident, cloud outage or supplier failure, organisations need to understand how quickly it can be restored and what interim controls are available.
Backing up AI agents and models supports defined recovery time objectives and recovery point objectives. However, recovery is only effective if backups are tested and documented. A restored model that behaves differently from its predecessor may introduce new risks, particularly where decisions affect customers or employees.
ISO 27001:2022 strengthened expectations around ICT readiness for business continuity, reinforcing the need for organisations to plan, test and maintain recovery arrangements for critical systems, including AI‑enabled services.
Relevance to ISO 27001:2022
Within ISO 27001, AI backups align most directly with controls relating to information backup, operational resilience, supplier management and incident response. AI models and agents should be covered by documented backup policies, with responsibilities clearly defined.
Where cloud‑based AI services are used, organisations should ensure supplier contracts and due‑diligence activities address backup, recovery and data residency. This is particularly important where AI is embedded into core business processes or customer‑facing services.
Organisations implementing or maintaining ISO 27001 should ensure that AI assets are included in the scope of their Information Security Management System, rather than treated as experimental or informal tools.
Relevance to ISO 42001:2023
ISO 42001 introduces a management‑system approach specifically for Artificial Intelligence. It places strong emphasis on lifecycle management, risk assessment and impact assessment. Backing up AI agents and models directly supports these requirements by enabling control over change, traceability and recovery.
From an ISO 42001 perspective, backups help ensure that AI systems remain aligned with their intended purpose and documented behaviour. They also support accountability, allowing organisations to demonstrate what model or configuration was in use at a given point in time.
Assent Risk Management has published extensive guidance on ISO 42001 implementation, including role determination, AI system identification and lifecycle governance, all of which provide a foundation for defining appropriate backup and recovery controls.
Relevance to ISO 9001:2015
Although it may not seem directly related, ISO 9001 is concerned with the organisation’s ability to provide a quality product or service to its clients, and that ability can be severely impacted by disruption to a system that contains an AI element.
In addition, there are requirements to protect documented information from loss, manage changes to the organisation and maintain traceability of design changes.
Building a compliant approach
A compliant approach to backing up AI agents and models does not require reinventing existing management systems. Instead, it involves extending established information security, risk and continuity processes to explicitly include AI.
This typically includes documenting AI assets, defining backup and recovery responsibilities, aligning controls with ISO 27001 Annex A and ISO 42001 requirements, and ensuring that AI‑specific risks are captured within existing governance structures. For organisations using tools such as Microsoft Copilot and bespoke AI agents, this integration can often be achieved within existing Microsoft 365 environments.
Final Thought: Should we be Considering AI Agents under Configuration Management instead?
Garry also raises this excellent point: AI Agents are highly configurable, so should we also be considering them within the ISO 27001:2022 control A.8.9 – Configuration Management?
Are AI Agents an Information Asset or a Configuration Item? Or Both?
Applying the configuration management control may mitigate some risks related to AI Agents and Models, while backing up the associated data treats other risks and enables backwards traceability.
Perhaps both are needed.
How Assent Risk Management can help
Whether you are exploring AI for the first time or already operating AI‑enabled services, Assent Risk Management can support you with pragmatic, standards‑aligned advice. Our consultants help organisations embed AI into ISO 27001 and ISO 42001 management systems, conduct AI risk and impact assessments, and design governance frameworks that deliver real business value rather than box‑ticking.

