In January 2025 the UK government took a decisive step towards addressing the cybersecurity risks associated with AI systems by publishing its Code of Practice for the Cyber Security of AI. Focusing specifically on AI systems (including those that incorporate Generative AI), the Code sets out security requirements for the entire AI lifecycle and is to be used by the European Telecommunications Standards Institute (ETSI) to help create a global standard for the security of AI.
Why a dedicated Code for AI?
AI systems present a number of distinct security challenges that do not arise in relation to traditional software, including:
- Data poisoning: malicious actors can compromise AI systems by introducing corrupted data into training datasets, skewing the learning process, and leading to inaccurate or biased outputs;
- Model obfuscation: the complexity of AI models can make it difficult to understand their inner workings, creating opportunities for vulnerabilities to go undetected;
- Indirect prompt injection: attackers can manipulate an AI system's output by subtly influencing the prompts given to it; and
- Data management: data management for AI systems differs significantly from traditional software systems and necessitates specialised tools and infrastructure to manage the volume, variety, and velocity of the processing of data.
It is the UK government’s view that these unique challenges demand a tailored approach to cybersecurity for AI systems, hence the publication of this bespoke Code. The Department of Science, Innovation and Technology (DSIT) has also published an Implementation Guide to support organisations with adhering to its requirements (and with the future global standard to be created by ETSI).
Key features of the Code
The Code establishes 13 principles outlining best practices across five key phases, as follows:
- Secure design: emphasising the importance of incorporating security considerations from the initial conception of an AI system
- Secure development: providing guidance on secure coding practices, vulnerability testing, and supply chain security
- Secure deployment: outlining procedures for safe and responsible integration of AI systems into existing infrastructures
- Secure maintenance: recommending regular security updates, system monitoring, and incident response planning, and
- Secure end of life: providing guidelines for the secure decommissioning and disposal of AI systems and associated data.
Who is the Code aimed at?
The Code targets a range of stakeholders involved in the AI ecosystem:
“Developers” - any entity responsible for creating and adapting AI models and/or systems, including proprietary and open-source models;
“System operators” - any entity responsible for embedding/deploying AI models and systems within their own or within a customer’s infrastructure;
“Data custodians” - any entity or individual that controls data permissions and the integrity of the data used to train and operate AI models or systems;
“End-users” - both employees within an entity and UK consumers interacting with AI systems to support their work and day to day activities – this stakeholder group was created as the Code creates a duty to help inform and protect those end-users; and
“Affected entities” - all other individuals and technologies e.g. apps and autonomous systems which are only indirectly affected by AI systems or by decisions based on the output of AI systems.
Implementation and future outlook
As stated above, to facilitate the adoption of the Code, an Implementation Guide has been developed offering practical advice and resources. The UK government plans to submit both the Code and the guidance to ETSI to contribute to the development of a global standard for AI security. The government will then update both the Code and the guidance to align with the final form ETSI standard.
Next steps?
Organisations and businesses developing and deploying AI systems in the UK should consider the recommendations established by the Code. Compliance with the Code is likely to include:
Raising awareness
Organisations should provide regular and tailored cybersecurity training to all staff, focusing on AI-specific threats, vulnerabilities, and mitigation strategies, and communicate updates to these on an ongoing basis.
Integrating security into design
From conception to implementation, AI systems should be built with security as a core principle, considering potential attacks, unexpected inputs, and system failures.
Evaluating the threats whilst managing the risks
Continuous threat modelling and risk management are crucial throughout the AI lifecycle. This includes identifying and mitigating AI-specific attacks and communicating risks to relevant parties.
Enabling human responsibility
Human oversight should be integrated into AI systems to ensure responsible and ethical use. This includes designing for explainability, verifying security controls, and educating users on appropriate and prohibited use cases.
Identifying, tracking, and protecting your assets
Maintaining a comprehensive inventory of all AI assets, including their interdependencies and connectivity, is essential. It enables effective tracking, authentication, version control, and protection against unauthorised access.
Securing your infrastructure
Robust access control frameworks are necessary to secure APIs, models, data, training, and processing pipelines. This includes dedicated environments for development and model tuning, vulnerability disclosure policies, and incident management and AI system recovery plans to mitigate risks and ensure system integrity.
Securing your supply chain
Organisations should apply secure software supply chain principles to AI model and system development, ensuring all components are well-documented.
Documenting everything
Thorough documentation of system design and post-deployment maintenance plans, including data sources, models, prompts, limitations, and potential failure modes is crucial for transparency and accountability.
Conducting appropriate testing and evaluation
Rigorous security testing and evaluation are essential throughout the AI lifecycle, involving independent security testers and addressing potential vulnerabilities.
Communicating with end-users and affected entities
Clear and accessible communication with end-users about their data usage, access and storage, and the provision of accessible guidance to support the use, management and configuration of AI systems is paramount. This includes providing security updates, and support during cybersecurity incidents.
Maintaining regular security updates, patches, and mitigation
Provide regular security updates and patches and ensure timely implementation.
Monitoring your system’s behaviour
Continuous monitoring of system logs, model performance, and user actions is crucial for detecting anomalies, security breaches, and unexpected behaviour over time.
Ensuring proper data and model disposal
Secure disposal of AI assets, including training data and models is essential when transferring ownership to prevent potential security issues from transferring to other systems. Where a model or AI system is decommissioned, securely delete applicable data and configuration models with the involvement of the relevant Data Custodians.
Finally it should be noted that not all provisions of the Code are as applicable to AI systems involving open-source models. The DIST therefore encourages developers to consult the Implementation Guide to confirm which requirements are specified for different types of AI systems.
Key takeaways
Stay informed: keep abreast of evolving AI security threats, vulnerabilities, and best practices. The cybersecurity landscape is constantly changing, so continuous learning is essential;
Collaboration and knowledge sharing: foster a culture of collaboration and knowledge sharing within your organisation and with external partners. Sharing information about threats, vulnerabilities, and mitigation strategies can help strengthen the overall AI security ecosystem; and
Remember: the Code of Practice is a living document. The DIST has stated that it will be updated once the ETSI issues its final form global standard. Stay engaged with industry updates and be prepared to adapt your security practices accordingly.
If you would like to more information about any of the matters raised in the above article, please contact Paul O’Hare or Elizabeth Lumb.