The National Cyber Security Centre (NCSC) has released guidelines for secure AI system development. The introduction emphasizes how deeply AI systems are becoming integrated across sectors, and acknowledges the security risks and challenges they present. The guidelines provide a comprehensive framework for the secure design, development, deployment, and operation and maintenance of AI systems, and are intended to help organizations implement robust security practices, with security considered at every stage of the AI system’s lifecycle. The following is a summary of the key points.

Secure Design

The “Secure Design” section of the NCSC’s guidelines emphasizes integrating security into the design process from the very beginning. It stresses understanding and managing the security risks associated with AI systems: identifying potential threats and vulnerabilities early, so that the system’s design inherently mitigates them. The section lays out strategies and best practices for incorporating security principles throughout the AI system’s design phase.

The key points are:

  • Raise staff awareness of threats and risks: Make staff aware of AI security threats and risks, ensure that system owners and leaders understand these risks and their countermeasures, and train data scientists, developers, and users in secure AI practices and secure coding techniques.
  • Model the threats to your system: Apply a comprehensive risk management process to assess the threats to your AI systems, considering potential impacts on the system, users, organizations, and wider society, and accounting for AI-specific threats and attack vectors that will evolve as AI becomes a more valuable target.
  • Design AI systems prioritizing security, functionality, and performance:
    • Assess AI design choices against threats, functionality, user experience, performance, and ethical/legal requirements.
    • Ensure supply chain security for in-house or external components.
    • Conduct due diligence for external model providers and libraries.
    • Implement scanning and isolation for third-party models (see the integrity-check sketch after this list).
    • Apply data controls for external APIs.
    • Integrate secure coding practices in AI development.
    • Limit AI-triggered actions with appropriate restrictions.
    • Consider AI-specific risks in user interaction design, applying default secure settings and least privilege principles.
  • Select AI models considering security and functionality trade-offs:
    • Balance model architecture, configuration, training data, algorithms, and hyperparameters.
    • Regularly reassess decisions based on evolving AI security research and threats.
    • Evaluate model complexity, appropriateness for use case, and adaptability.
    • Prioritize model interpretability for debugging, audit, and compliance.
    • Assess training dataset characteristics like size, quality, and diversity.
    • Consider model hardening, regularisation, and privacy-enhancing techniques.
    • Evaluate the provenance and supply chains of model components.
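
To make the third-party model controls above concrete, here is a minimal sketch of an integrity check before loading an externally sourced model. The file path and digest are hypothetical placeholders; in practice the expected digest would come from the model provider over a trusted channel, such as a signed release manifest.

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical artefact and digest; the real digest should be obtained
    # from the model provider out of band.
    model_path = Path("models/third_party/classifier.safetensors")
    expected_sha256 = "replace-with-the-digest-published-by-the-provider"

    if sha256_of(model_path) != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {model_path}; refusing to load")
    # Only after the check passes should the file be deserialised -- ideally in a
    # sandboxed environment, using a format that cannot execute code on load
    # (e.g. safetensors rather than pickle).

Pairing a check like this with isolated execution (containers or separate service accounts) means that even a model that passes verification cannot reach sensitive data or credentials.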

Secure Development

The “Secure Development” section stresses robust security practices throughout the AI development process: making AI systems resilient against attack, protecting the integrity of data and algorithms, and maintaining confidentiality. The guidelines encourage developers to consider potential security vulnerabilities at each stage of development and to adopt measures that mitigate those risks, safeguarding AI systems against evolving cybersecurity threats and ensuring reliable, secure operation.

The key points are:

  • Secure Your Supply Chain: Ensure security across your AI supply chain by assessing and monitoring it throughout the system’s life cycle. Require suppliers to meet your organization’s security standards, and be prepared to switch to alternate solutions if these standards are not met.
  • Identify, Track, and Protect Assets: Understand the value of AI-related assets such as models, data, and software, and recognize their vulnerability to attacks. Implement measures to protect the confidentiality, integrity, and availability of these assets, including logs. Ensure processes for asset tracking, authentication, version control, and restoration to a secure state post-compromise. Manage data access and the sensitivity of AI-generated content.
  • Document Data, Models, and Prompts: Maintain thorough documentation of the creation, operation, and management of models, datasets, and system prompts, including security-relevant details such as sources of training data, scope, limitations, guardrails, hashes/signatures, retention time, review frequency, and potential failure modes. Use structures like model cards, data cards, and software bills of materials (SBOMs) to support transparency and accountability (a minimal model-card sketch follows this list).
  • Manage Technical Debt: Identify, track, and manage technical debt in AI systems throughout their life cycle. Technical debt involves suboptimal engineering decisions made for short-term gains at the expense of long-term benefits. Recognize the challenges in managing this in AI, often due to rapid development cycles and evolving standards, and include strategies for risk mitigation in life cycle plans.
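
As one way to realize the documentation point above, the sketch below records a model card as structured data, with content hashes tying the card to specific artefact versions. All field names and values are illustrative assumptions, not a schema mandated by the guidelines.

    import hashlib
    import json
    from pathlib import Path

    def file_sha256(path: str) -> str:
        """Content hash that ties the card to specific artefact versions."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # Illustrative model card mirroring the security-relevant details the
    # guidelines call out: data sources, scope, limitations, guardrails,
    # hashes, retention, and review cadence. All paths and values are hypothetical.
    model_card = {
        "model_name": "support-ticket-classifier",
        "version": "1.2.0",
        "training_data_sources": ["internal ticket archive, 2019-2023"],
        "intended_scope": "routing English-language support tickets",
        "known_limitations": ["accuracy degrades on non-English input"],
        "guardrails": ["output restricted to a fixed label set"],
        "artifact_hashes": {
            "weights": file_sha256("artifacts/model.safetensors"),
            "training_set": file_sha256("artifacts/train.parquet"),
        },
        "log_retention_days": 180,
        "review_frequency": "quarterly",
    }

    Path("model_card.json").write_text(json.dumps(model_card, indent=2))

Keeping the card under version control next to the artefacts gives auditors a single point of reference for what was trained, on what data, and with what known failure modes.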

Secure Deployment

This section covers the security of AI systems during deployment, the critical transition from a controlled development environment to a live operational setting. The guidelines emphasize maintaining the security controls and monitoring established during development while adapting to the demands of a dynamic operational environment. Deployment should include rigorous testing, validation of security measures, and a thorough assessment of how the AI system interacts with the other components in its environment, so that deployment introduces no new vulnerabilities and the system remains resilient against potential threats.

The key points are:

  • Secure Your Infrastructure: Apply robust infrastructure security principles across all stages of your AI system’s life cycle. Enforce strong access controls on APIs, models, and data, including their training and processing pipelines, in both research-and-development and deployment environments. Segregate environments holding sensitive code or data to protect against cyber attacks aimed at stealing models or impairing their performance.
  • Protect Your Model Continuously: Guard against attackers who might reconstruct or tamper with your model and its training data, whether through direct access (such as acquiring model weights) or indirect access (through queries). Apply standard cybersecurity practices, control query interfaces to detect and prevent unauthorized access or modification (a rate-limiting sketch follows this list), and share cryptographic hashes/signatures of model files and datasets.
  • Develop Incident Management Procedures: Create comprehensive incident response, escalation, and remediation plans for your AI systems, accounting for various scenarios and evolving research. Maintain offline backups of critical digital resources, train responders in AI-specific incident management, and provide users with high-quality audit logs and security features at no extra cost to aid in their incident response.
  • Release AI Responsibly: Only release AI models, applications, or systems after thorough security evaluations, including benchmarking and red teaming, and testing for safety and fairness. Be transparent with users about any known limitations or potential failure modes of the AI system.
  • Facilitate Correct User Actions: Assess new settings or configurations for their business benefits and security risks, aiming for the most secure integrated option. Default configurations should be secure against common threats. Implement controls against malicious system use. Provide clear user guidance on model/system use, highlighting limitations and failure modes. Clarify user responsibilities in security, and be transparent about data use, access, and storage, including for retraining or review purposes.
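
The “control query interfaces” advice above can be illustrated with a simple per-client token bucket placed in front of the model, which raises the cost of extraction-by-query attacks and rejects oversized inputs. The thresholds, client identifiers, and the run_model stub are all assumptions made for this sketch.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Per-client token bucket: refills at `rate` tokens/second up to `capacity`."""

        def __init__(self, capacity: float = 20.0, rate: float = 1.0):
            self.capacity = capacity
            self.rate = rate
            self.tokens = defaultdict(lambda: capacity)
            self.last_seen = defaultdict(time.monotonic)

        def allow(self, client_id: str) -> bool:
            now = time.monotonic()
            elapsed = now - self.last_seen[client_id]
            self.last_seen[client_id] = now
            self.tokens[client_id] = min(
                self.capacity, self.tokens[client_id] + elapsed * self.rate
            )
            if self.tokens[client_id] >= 1.0:
                self.tokens[client_id] -= 1.0
                return True
            return False

    bucket = TokenBucket()

    def run_model(prompt: str) -> str:
        """Hypothetical stand-in for the real inference call."""
        return f"(model output for: {prompt[:40]})"

    def handle_query(client_id: str, prompt: str) -> str:
        if not bucket.allow(client_id):
            return "429: rate limit exceeded"  # slows model-extraction attempts
        if len(prompt) > 4096:                 # basic input validation
            return "400: prompt too long"
        return run_model(prompt)

In production this state would live in a shared store, and queries would also be logged to feed the monitoring discussed in the next section, but the shape of the control is the same.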

Secure Operation and Maintenance

The last section of the NCSC’s guidelines covers AI system management after deployment: regular updates, vulnerability assessments, and incident response strategies that maintain security and performance. It emphasizes continuous monitoring and adaptation to new threats, ensuring the AI system’s resilience in a dynamic cybersecurity landscape, and highlights rigorous maintenance protocols and staff training as essential to managing and securing AI systems in operation.

The key points are:

  • Monitor Your System’s Behaviour: Continuously measure the outputs and performance of your AI model and system to detect sudden or gradual changes that affect security. This enables identification of potential intrusions, compromises, and natural data drift, ensuring ongoing system integrity (a minimal monitoring sketch follows this list).
  • Monitor Your System’s Input: Adhere to privacy and data protection standards by monitoring and logging inputs to your AI system, such as inference requests or prompts. This is crucial for compliance, audit, investigation, and remediation in cases of compromise or misuse, and it enables detection of out-of-distribution and adversarial inputs, which may target the data preparation process.
  • Implement Secure-by-Design Updates: Incorporate automated updates as a standard feature, using secure and modular procedures for distribution. Ensure update processes, including testing and evaluation, account for potential behavioural changes due to updates in data, models, or prompts. Support users in adapting to model updates, for example, through preview access and versioned APIs.
  • Engage in Information-Sharing Communities: Actively participate in global information-sharing communities across industry, academia, and government, and maintain open communication channels for feedback on your system’s security, both within and outside your organization. This includes welcoming security research and vulnerability reports, issuing bulletins for vulnerabilities with detailed common vulnerability enumerations (e.g., CVE entries), and swiftly mitigating and remediating issues.
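
To illustrate the behaviour and input monitoring points above, the sketch below logs each inference request for audit and flags drift when the rolling mean of model confidence scores moves away from a baseline. The baseline statistics, window size, and alert threshold are illustrative assumptions, not values from the guidelines.

    import json
    import logging
    import statistics
    import time
    from collections import deque

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-monitor")

    # Assumed baseline, e.g. taken from a validation run at deployment time.
    BASELINE_MEAN, BASELINE_STDEV = 0.62, 0.11
    recent_scores = deque(maxlen=500)  # rolling window of confidence scores

    def record_inference(client_id: str, prompt: str, score: float) -> None:
        """Log the request for audit/compliance and check for output drift."""
        log.info(json.dumps({
            "ts": time.time(),
            "client": client_id,
            "prompt_len": len(prompt),  # log metadata rather than raw prompts
                                        # where privacy rules require it
            "score": score,
        }))
        recent_scores.append(score)
        if len(recent_scores) == recent_scores.maxlen:
            sigma = abs(statistics.mean(recent_scores) - BASELINE_MEAN) / BASELINE_STDEV
            if sigma > 3.0:  # arbitrary alert threshold
                log.warning("Mean score drifted %.1f sigma from baseline "
                            "-- possible data drift or manipulation", sigma)

A real deployment would also watch input features for out-of-distribution patterns and route these logs into incident response, but even this minimal loop makes sudden changes in behaviour visible.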

Conclusion

The NCSC’s guidelines for secure AI system development provide a comprehensive framework addressing all stages from design through operation and maintenance. Emphasizing proactive and continuous security practices, they guide organizations in safeguarding AI systems against evolving cyber threats. Key points include rigorous asset tracking and protection, secure infrastructure, responsible AI release, and continuous monitoring of system behaviour and inputs. The guidelines encourage active participation in information-sharing communities and highlight the significance of secure-by-design updates. As AI continues to integrate into more sectors, following these guidelines helps organizations build robust, resilient, and trustworthy AI systems.

We encourage anyone interested to read the full PDF released by the NCSC, available here:
https://www.ncsc.gov.uk/files/Guidelines-for-secure-AI-system-development.pdf