Content Assessment: Good Cybersecurity Practices for AI? A Multilayer Framework (ENISA)
Information - 93%
Insight - 94%
Relevance - 92%
Objectivity - 91%
Authority - 92%
Overall Rating - 92% (Excellent)
A short percentage-based assessment of the qualitative benefit of the recent report by the European Union Agency for Cybersecurity (ENISA) on "A Multilayer Framework for Good Cybersecurity Practices for AI."
Editor’s Note: The European Union Agency for Cybersecurity, ENISA, is the EU agency dedicated to achieving a high common level of cybersecurity across Europe. Established in 2004 and strengthened by the EU Cybersecurity Act, ENISA contributes to EU cyber policy, enhances the trustworthiness of ICT products, services, and processes through cybersecurity certification schemes, cooperates with Member States and EU bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge sharing, capacity building, and awareness raising, the agency works with its key stakeholders to strengthen trust in the connected economy, to boost the resilience of the EU’s infrastructure, and, ultimately, to keep Europe’s society and citizens digitally secure. The recent ENISA report, “A Multilayer Framework for Good Cybersecurity Practices for AI,” may serve as a critical resource for cybersecurity, information governance, and eDiscovery professionals. By incorporating the report’s insights and recommendations into their practices, professionals in these fields can benefit from its central recommendation: treating the security of AI systems as an additional effort that complements existing practices for the security of information and communications technology (ICT).
Background Note: This new report from ENISA aims to provide a multilayer framework for good cybersecurity practices for AI systems. The report may be beneficial to cybersecurity professionals as it provides insights into the development of AI-specific cybersecurity practices, risk assessments, and continuous risk management throughout the AI system lifecycle. Information governance professionals can benefit from the framework highlighted in the report by aligning their practices with AI-specific considerations, ensuring the secure and compliant management of AI-related data. Additionally, eDiscovery professionals can leverage the report to inform the adaptation of methodologies to handle AI-generated data, following the recommended practices for defensibility, integrity, and admissibility in legal proceedings.
Industry Report*
A Multilayer Framework for Good Cybersecurity Practices for AI
European Union Agency for Cybersecurity (ENISA)
Executive Summary – Synopsis
In April 2021, the European Commission introduced a proposal for an AI regulation that focuses on high-risk AI systems, emphasizing robustness, accuracy, and cybersecurity. The proposed regulation mandates the use of technical standards during the development of high-risk AI systems to ensure the protection of public interests. While work on AI-related standards is underway, it is unlikely that they will be ready before the regulation takes effect. To date, ENISA has published two studies on cybersecurity for AI (AI Cybersecurity Challenges – Threat Landscape for Artificial Intelligence and Securing Machine Learning Algorithms), but they do not cover the entire AI lifecycle and its associated infrastructure.
Recognizing the need for good cybersecurity practices beyond machine learning, the Commission requested ENISA’s assistance in identifying existing cybersecurity practices and requirements for AI at the EU and national levels. In response, ENISA has developed a scalable framework presented in this report. The framework consists of three layers and provides a step-by-step approach for national competent authorities (NCAs) and AI stakeholders to secure their AI systems and operations by utilizing existing knowledge and best practices.
Through a survey based on the framework and the principles of the proposed AI Act and the Coordinated Plan on AI, the report analyzes the current state of cybersecurity requirements, monitoring, and enforcement practices adopted by NCAs. The survey results indicate a low level of readiness among NCAs, highlighting the need for further measures. The report also identifies additional research efforts required to develop cybersecurity practices specific to AI.
The main recommendation of the report is to view cybersecurity for AI systems as an additional effort complementing existing practices for ICT security within organizations. AI-specific practices should address dynamic socio-technical aspects, including risk assessments of technical and social threats, continuous risk management throughout the AI system lifecycle, and sector-specific considerations for accurate threat mitigation.
The report emphasizes the importance of implementing comprehensive cybersecurity practices for AI and highlights the need for collaborative efforts between NCAs and AI stakeholders to enhance trustworthiness in AI activities.
Read the original announcement.
Multilayer Framework for Good Cybersecurity Practices for AI – ENISA
*Shared with permission under Creative Commons – Attribution 4.0 International (CC BY 4.0) – license.
Assisted by GAI and LLM Technologies
Additional Reading
- International Cyber Law in Practice: Interactive Toolkit
- Defining Cyber Discovery? A Definition and Framework
Source: ComplexDiscovery