The publication “Cybersecurity of Artificial Intelligence in the AI Act – Guiding principles to address the cybersecurity requirement for high-risk AI systems” is a Science for Policy report, released on September 11, 2023, by the Joint Research Centre (JRC), the European Commission’s science and knowledge service. It aims to provide evidence-based scientific support to the European policymaking process.
The report offers guidance and outlines the policy context for developing cybersecurity standards for high-risk AI systems under the European Commission’s proposed AI Act. The aim is to support European policymaking with scientific evidence, ensuring that AI technologies remain trustworthy and respectful of fundamental rights while fostering a secure and beneficial technological environment.
Brief on the JRC Report on AI Act’s Cybersecurity Requirements
The report outlines the cybersecurity requirements for high-risk AI systems as set out in Article 15 of the European Commission’s AI Act proposal. Based on a high-level analysis of the fast-evolving AI domain, it establishes key principles to facilitate compliance with the AI Act.
- Central to the AI Act is a focus on AI systems as cohesive entities, rather than on the isolated AI models that form part of broader systems. Cybersecurity measures should therefore secure AI systems as a whole, not merely their internal components.
- Compliance with the AI Act requires a structured security risk assessment: a continuous, coordinated effort that combines established cybersecurity practices with AI-specific controls, and that takes the AI system’s design into account when identifying and mitigating risks.
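The report does not prescribe a scoring method for such an assessment. Purely as an illustrative sketch, the common qualitative model of risk as likelihood times impact can show what a structured, repeatable assessment might look like in practice; the threat names, scales, and threshold below are assumptions for demonstration, not content of the AI Act or the JRC report.

```python
# Illustrative only: a qualitative likelihood x impact risk model.
# Threat names, 1-5 scales, and the threshold of 12 are assumptions
# for demonstration, not requirements of the AI Act or the report.
from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def risk(self) -> int:
        # A widely used qualitative model: risk = likelihood x impact.
        return self.likelihood * self.impact


def prioritize(threats, threshold=12):
    """Return threats whose risk score meets the mitigation
    threshold, highest risk first."""
    return sorted(
        (t for t in threats if t.risk >= threshold),
        key=lambda t: t.risk,
        reverse=True,
    )


threats = [
    Threat("data poisoning of training pipeline", likelihood=3, impact=5),
    Threat("evasion attack on deployed model", likelihood=4, impact=4),
    Threat("model file tampering at rest", likelihood=2, impact=3),
]

for t in prioritize(threats):
    print(f"{t.name}: risk={t.risk}")
```

The point of such a structure is repeatability: re-scoring the same threat catalogue after each system change makes the assessment the continuous effort the report calls for, rather than a one-off exercise.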
The Pillars of the AI Act
The AI Act takes a comprehensive approach to the governance of AI systems, promoting a continuous, integrated, and dynamic strategy. This strategy builds on existing cybersecurity measures and introduces AI-specific controls to address the vulnerabilities created by rapid innovation in AI technology. The report provides four core guiding principles:
- Whole-System Focus: Encouraging an outlook that considers the complete AI systems, as opposed to individual components.
- Structured Security Risk Assessment: Advocating for an organized approach to security risk assessment to foster efficient risk perception and mitigation.
- Continuous and Integrated Security Approach: Promoting an ongoing, integrated effort in securing AI systems, employing both existing and novel controls specific to AI.
- Holistic Compliance Strategy: Leveraging strategies that encompass more than individual AI models to align with the standards articulated in the AI Act.
Alignment with the AI Act’s Standards
Alignment with the AI Act’s standards requires an integrated approach that combines well-established cybersecurity norms with AI-specific safeguards. This enables the safe and beneficial adoption of AI in society, and depends on international collaboration, rigorous risk assessment, and risk mitigation strategies that keep pace with technological and standardization developments.
The Mandate of the AI Act
The AI Act requires AI technology providers to comply with the evolving standards before placing products on the EU market. It encourages strategies that defend AI systems against threats such as data poisoning and evasion attacks through a risk-based approach that balances the opportunities and risks posed by AI. The Act emphasizes the development of new international and European standards for AI cybersecurity, involving both the adaptation of existing ISO standards and the creation of new measures guided by the state of the art. Standardization organizations such as CEN-CENELEC# oversee this work.
# European Committee for Standardization (CEN) and European Committee for Electrotechnical Standardization (CENELEC).
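Data poisoning, one of the threats named above, means injecting manipulated points into a model’s training data. The report prescribes no specific control; as a minimal illustrative sketch only, a robust statistical filter can catch crude poisoning attempts where injected values lie far outside the data distribution. The function name, the MAD-based robust z-score technique, and the threshold are assumptions for demonstration.

```python
# Illustrative sketch: filtering crude data poisoning with a robust
# z-score based on the median absolute deviation (MAD). The technique
# and threshold are assumptions, not mandated by the AI Act or report.
import statistics


def filter_poisoned(values, threshold=3.5):
    """Drop points whose robust z-score exceeds the threshold.
    Median/MAD statistics resist the very outliers being removed."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        # Degenerate case: no spread to measure, keep data unchanged.
        return list(values)
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return [v for v in values if 0.6745 * abs(v - med) / mad <= threshold]


clean = [10.1, 9.8, 10.3, 9.9, 10.0]
poisoned = clean + [250.0]          # one injected extreme value
print(filter_poisoned(poisoned))    # the injected point is removed
```

Such a filter addresses only the simplest poisoning; carefully crafted, in-distribution poisoned points evade it, which is one reason the report stresses the current limits of securing AI models and the need for system-level controls around them.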
A Summary of Principal Points
To support understanding of, and compliance with, the AI Act’s cybersecurity requirements, the report’s principal points are summarized here:
- AI System Focus: Advocating for securing expansive AI systems over merely focusing on individual AI models.
- Security Risk Assessment: Endorsing a dual-level risk assessment strategy, encompassing regulatory and cybersecurity facets as detailed in the AI Act.
- Integrated Security Approach: Accentuating a multifaceted strategy that incorporates existing security practices with AI-specific controls.
- Recognizing AI Model Security Limitations: Acknowledging the current limits in securing AI technologies, especially novel and complex models, and stressing that these limitations must be understood before deployment in high-risk environments.
The global AI landscape is undergoing rapid transformations, unlocking unparalleled opportunities alongside potential threats. The AI Act aims to secure and uphold fundamental rights during the deployment of high-risk AI systems, emphasizing a system-wide focus on cybersecurity. It underscores the vital role of harmonized standards that integrate established cybersecurity practices with AI-specific controls. As we forge ahead, the continuous evolution of technology and standardization remains paramount, necessitating ongoing strides in cybersecurity techniques to effectively safeguard our future with AI.