
Content Assessment: Good Cybersecurity Practices for AI? A Multilayer Framework (ENISA)

Information - 93%
Insight - 94%
Relevance - 92%
Objectivity - 91%
Authority - 92%

Overall Score: 92% (Excellent)

A short percentage-based assessment of the qualitative benefit of the recent report by the European Union Agency for Cybersecurity (ENISA) on "A Multilayer Framework for Good Cybersecurity Practices for AI."
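For context on how the headline figure relates to the five component scores above, the overall 92% is consistent with a rounded simple average of those scores. The short Python sketch below illustrates that arithmetic; the averaging method is an assumption for illustration, not a scoring methodology published by ComplexDiscovery.

# Illustrative sketch only: assumes the overall score is the rounded mean of
# the five component scores shown above (an assumption, not a stated formula).
scores = {
    "Information": 93,
    "Insight": 94,
    "Relevance": 92,
    "Objectivity": 91,
    "Authority": 92,
}
overall = round(sum(scores.values()) / len(scores))  # (93 + 94 + 92 + 91 + 92) / 5 = 92.4
print(f"Overall: {overall}%")  # prints: Overall: 92%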

Editor’s Note: The European Union Agency for Cybersecurity (ENISA) is the EU agency dedicated to achieving a high common level of cybersecurity across Europe. Established in 2004 and strengthened by the EU Cybersecurity Act, ENISA contributes to EU cyber policy, enhances the trustworthiness of ICT products, services, and processes through cybersecurity certification schemes, cooperates with Member States and EU bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge sharing, capacity building, and awareness raising, the agency works with its key stakeholders to strengthen trust in the connected economy, to boost the resilience of the EU’s infrastructure, and, ultimately, to keep Europe’s society and citizens digitally secure. The recent ENISA report, “A Multilayer Framework for Good Cybersecurity Practices for AI,” may serve as a critical resource for cybersecurity, information governance, and eDiscovery professionals. By incorporating the report’s insights and recommendations into their practices, professionals in these fields can treat the security of AI systems as an additional effort that complements existing practices for securing information and communications technology (ICT).


Background Note: This new report from ENISA provides a multilayer framework of good cybersecurity practices for AI systems. Cybersecurity professionals may benefit from its insights into the development of AI-specific cybersecurity practices, risk assessments, and continuous risk management throughout the AI system lifecycle. Information governance professionals can benefit by aligning their practices with the AI-specific considerations highlighted in the framework, ensuring the secure and compliant management of AI-related data. Additionally, eDiscovery professionals can leverage the report to inform the adaptation of their methodologies for handling AI-generated data, following the recommended practices for defensibility, integrity, and admissibility in legal proceedings.

Industry Report*

A Multilayer Framework for Good Cybersecurity Practices for AI

European Union Agency for Cybersecurity (ENISA)

Executive Summary – Synopsis

In April 2021, the European Commission introduced a proposal for an AI regulation that focuses on high-risk AI systems, emphasizing robustness, accuracy, and cybersecurity. The proposed regulation mandates the use of technical standards during the development of high-risk AI systems to ensure the protection of public interests. While work on AI-related standards is underway, it is unlikely that they will be ready before the regulation takes effect. In the interim, ENISA has published two studies on cybersecurity for AI (AI Cybersecurity Challenges – Threat Landscape for Artificial Intelligence and Securing Machine Learning Algorithms), but these do not cover the entire AI lifecycle and its associated infrastructure.

Recognizing the need for good cybersecurity practices beyond machine learning, the Commission requested ENISA’s assistance in identifying existing cybersecurity practices and requirements for AI at the EU and national levels. In response, ENISA has developed a scalable framework presented in this report. The framework consists of three layers and provides a step-by-step approach for national competent authorities (NCAs) and AI stakeholders to secure their AI systems and operations by utilizing existing knowledge and best practices.

Through a survey based on the framework and the principles of the proposed AI Act and the Coordinated Plan on AI, the report analyzes the current state of cybersecurity requirements, monitoring, and enforcement practices adopted by NCAs. The survey results indicate a low readiness level among NCAs, highlighting the need for further measures. The report also identifies additional research efforts required to develop cybersecurity practices specific to AI.

The main recommendation of the report is to view cybersecurity for AI systems as an additional effort complementing existing practices for ICT security within organizations. AI-specific practices should address dynamic socio-technical aspects, including risk assessments of technical and social threats, continuous risk management throughout the AI system lifecycle, and sector-specific considerations for accurate threat mitigation.

The report emphasizes the importance of implementing comprehensive cybersecurity practices for AI and highlights the need for collaborative efforts between NCAs and AI stakeholders to enhance trustworthiness in AI activities.

Read the original announcement.


Complete Report: Multilayer Framework for Good Cybersecurity Practices for AI – ENISA (PDF)


Read the original paper.

*Shared with permission under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery

 

Have a Request?

If you have a request for information or offerings that you would like to discuss with us, please let us know, and we will make our response to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of new and revised content in posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.