The U.S. and the U.K., along with 16 international partners, have jointly released new guidelines aimed at strengthening the development of secure artificial intelligence (AI) systems. The approach prioritizes taking ownership of security outcomes on behalf of customers, embraces transparency and accountability, and establishes organizational structures in which secure design is a top priority. The goal is to raise the level of cybersecurity in AI development, ensuring security throughout the AI system life cycle: design, development, deployment, and operation.
Key Highlights:
Secure by Design Approach:
The guidelines emphasize a "secure by design" approach, treating cybersecurity as an essential precondition of AI system safety rather than an afterthought. Security considerations apply at every stage: design, development, deployment, and operation.
Threat Modeling:
Organizations are urged to model threats to their AI systems and to safeguard their supply chains and infrastructure. These recommendations address the most significant risk areas across the AI system development life cycle.
Guardrails for Societal Harms:
The guidelines complement ongoing efforts to manage the risks AI poses, including societal harms such as bias, discrimination, and privacy violations. They also highlight the need for robust methods that allow consumers to identify AI-generated material.
Bug Bounty System:
The guidelines call on companies to commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems, for example through bug bounty programs, so that security flaws can be found and fixed swiftly.
Combatting Adversarial Attacks:
The guidelines also aim to combat adversarial attacks targeting AI and machine learning (ML) systems. Such attacks can cause unintended behavior: altering a model's classification, enabling unauthorized actions, or extracting sensitive information. The sketch below illustrates one common attack class.
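To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest evasion attacks against an image classifier. It assumes a PyTorch model with inputs scaled to [0, 1]; the model, inputs, and epsilon value are hypothetical illustrations, not drawn from the guidelines themselves.

```python
# Minimal FGSM sketch (illustrative; model and epsilon are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Nudge each input value by epsilon in the direction that increases
    # the loss; the change is often imperceptible to humans but can flip
    # the model's prediction.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and data):
# x_adv = fgsm_attack(classifier, images, labels)
```

Because such examples are cheap to generate, defenders can fold them back into their own testing and training, which is one reason the guidelines stress threat modeling and security testing across the entire life cycle.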
Conclusion:
The joint guidelines represent a collaborative effort by the U.S., the U.K., and their international partners to raise the cybersecurity posture of AI systems. By emphasizing a "secure by design" approach and addressing each stage of the AI system life cycle, they aim to improve security outcomes for customers and to promote transparency and accountability in AI development. The bug bounty provisions and the measures against adversarial attacks underscore a commitment to proactively identifying and mitigating security vulnerabilities in AI systems.
