Content Assessment: Minding the Gap? Standardizing Cybersecurity for Artificial Intelligence (ENISA)

Information - 92%
Insight - 91%
Relevance - 93%
Objectivity - 92%
Authority - 94%

Overall Rating: 92% (Excellent)

A short percentage-based assessment of the qualitative benefit of the recent announcement and report from ENISA assessing standards for the cybersecurity of AI and issuing recommendations to support the implementation of upcoming EU policies on Artificial Intelligence.

Editor’s Note: ENISA, the European Union Agency for Cybersecurity, was established in 2004 to promote a high level of cybersecurity across Europe. The EU Cybersecurity Act has strengthened its role, and it works to enhance the trustworthiness of ICT products, services, and processes through cybersecurity certification schemes, contribute to EU cyber policy, cooperate with Member States and EU bodies, and prepare Europe for future cybersecurity challenges. ENISA recently published a report on the state of play of cybersecurity standards for artificial intelligence (AI). The report provides an overview of published, under-development, and planned standards and assesses their coverage to identify potential gaps. It focuses on machine learning (ML) due to its extensive use across AI deployments and the vulnerabilities affecting the cybersecurity of AI implementations. The report also highlights the need to develop technical guidance on how existing standards related to the cybersecurity of software should be applied to AI, and to promote cooperation and coordination across standards organizations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns can be addressed coherently.


Background Note: The “Cybersecurity of AI and Standardization” report examines the current landscape of AI standards and their role in addressing cybersecurity concerns in the European legal framework. With AI systems increasingly integrated into various aspects of daily life, it is crucial to ensure their security and robustness. This report focuses on the cybersecurity aspects of AI within the European Commission’s proposed “AI Act” and highlights the importance of agreeing on a clear definition of an ‘AI system’ for allocating legal responsibilities.

Press Announcement and Report* (April 27, 2023)

Mind the Gap in Standardization of Cybersecurity for Artificial Intelligence

The European Union Agency for Cybersecurity (ENISA) publishes an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on Artificial Intelligence (AI).

This report provides an overview of standards – published, under development, and planned – and an assessment of their coverage for the purpose of identifying potential gaps.

EU Agency for Cybersecurity Executive Director, Juhan Lepassaar, declared: “Advanced chatbot platforms powered by AI systems are currently used by consumers and businesses alike. The questions raised by AI come down to our capacity to assess its impact, to monitor and control it, with a view to making AI cyber secure and robust for its full potential to unfold. Using adequate standards will help ensure the protection of AI systems and of the data those systems need to process in order to operate. I trust this is the approach we need to take if we want to maximize the benefits for all of us to securely enjoy the services of AI systems to the full.”

This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI, proposed by the European Commission last year and dubbed the “AI Act.”

What is Artificial Intelligence?

The draft AI Act provides a definition of an AI system as “software developed with one or more (…) techniques (…) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” In a nutshell, these techniques mainly include machine learning (drawing on methods such as deep learning), logic- and knowledge-based approaches, and statistical approaches.

It is indeed essential for the allocation of legal responsibilities under a future AI framework to agree on what falls into the definition of an ‘AI system’.

However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardization communities.

Although broad in content, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly impact the cybersecurity of AI implementations.

AI cybersecurity standards: what’s the state of play?

As standards help mitigate risks, this study unveils existing general-purpose standards that are readily available for information security and quality management in the context of AI. In order to mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community benefit from the existing standards on AI.

This suggestion is based on an observation concerning the software layer of AI: what is applicable to software could also be applicable to AI. However, the work does not end there. Other aspects still need to be considered, such as:

  • a system-specific analysis to cater for security requirements deriving from the domain of application;
  • standards to cover aspects specific to AI, such as the traceability of data and testing procedures (see the illustrative sketch following this list).
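
Editor’s illustration: to make “traceability of data” concrete, the short Python sketch below fingerprints a training dataset and records basic lineage metadata, so an audit can later tie a deployed model back to the exact data it was trained on. The file name, metadata fields, and helper function are hypothetical illustrations, not procedures drawn from the ENISA report or any cited standard.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_dataset_provenance(path: str, source: str, transform: str) -> dict:
        """Hash a dataset file and capture minimal lineage metadata.

        Illustrative only: neither the ENISA report nor any cited standard
        prescribes this exact procedure.
        """
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                sha256.update(chunk)
        return {
            "dataset": path,
            "sha256": sha256.hexdigest(),  # fingerprint of the exact bytes used
            "source": source,              # where the data came from
            "transform": transform,        # preprocessing applied upstream
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    # Hypothetical usage: log provenance before training so an audit can tie
    # a deployed model back to the precise dataset version it was trained on.
    entry = record_dataset_provenance("train.csv", "vendor-feed-v3", "dedup+normalize")
    print(json.dumps(entry, indent=2))

Hash-based fingerprints are one simple way to meet a traceability requirement: any change to the underlying bytes yields a new, auditable identifier.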

Further observations concern the extent to which the assessment of compliance with security requirements can be based on AI-specific horizontal standards, and the extent to which it can instead rely on vertical, sector-specific standards.

Key recommendations include:

  • Resorting to a standardized AI terminology for cybersecurity;
  • Developing technical guidance on how existing standards related to the cybersecurity of software should be applied to AI;
  • Reflecting the inherent features of ML in AI; in particular, risk mitigation should be considered by associating hardware/software components with AI, along with reliable metrics and testing procedures (see the illustrative sketch following this list);
  • Promoting cooperation and coordination across standards organizations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns (e.g., on trustworthiness characteristics and data quality) can be addressed in a coherent manner.
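
Editor’s illustration: as one concrete reading of “reliable metrics and testing procedures” for ML, the Python sketch below estimates how a toy classifier’s accuracy degrades under small random input perturbations. The function, noise model, and toy classifier are assumptions made for illustration; the ENISA report does not prescribe any particular metric or test.

    import numpy as np

    def perturbation_robustness(predict, X, y, epsilon=0.1, trials=20, seed=0):
        """Estimate classifier accuracy under random bounded input noise.

        An illustrative robustness metric only; the ENISA report calls for
        reliable metrics and testing procedures but does not prescribe this one.
        """
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(trials):
            noise = rng.uniform(-epsilon, epsilon, size=X.shape)
            scores.append(np.mean(predict(X + noise) == y))
        return float(np.mean(scores)), float(np.min(scores))

    # Hypothetical usage: a toy linear classifier on synthetic data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    w = rng.normal(size=4)
    y = (X @ w > 0).astype(int)
    predict = lambda Z: (Z @ w > 0).astype(int)

    mean_acc, worst_acc = perturbation_robustness(predict, X, y)
    print(f"mean accuracy under noise: {mean_acc:.3f}, worst trial: {worst_acc:.3f}")

A production testing procedure would pair such a statistic with worst-case (adversarial) perturbations and domain-specific acceptance thresholds; the sketch only shows the shape of a repeatable, quantifiable test.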

Regulating AI: what is needed?

As with many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, in particular those related to tools and competences, may need to be further developed. The interplay across different legislative initiatives also needs to be reflected in standardization activities – an example of this is the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the “Cyber Resilience Act.”

Building on the report, other desk research, and input received from experts, ENISA is currently examining the need for and the feasibility of an EU cybersecurity certification scheme on AI. ENISA is therefore engaging with a broad range of stakeholders, including industry, European Standardization Organizations (ESOs), and Member States, to collect data on AI cybersecurity requirements, data security in relation to AI, AI risk management, and conformity assessment.

AI and cybersecurity have also been discussed in dedicated panels:

ENISA advocated the importance of standardization in cybersecurity at the RSA Conference in San Francisco, in the ‘Standards on the Horizon: What Matters Most?’ panel alongside the National Institute of Standards and Technology (NIST).

Further information

Read the original announcement.


Complete Report: Cybersecurity of AI and Standardisation (PDF)


Read the original paper.

*Shared with permission under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery

 


ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, DALL-E 2, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages (a practice initiated in late 2022).

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.