Editor’s Note: The European Union’s Artificial Intelligence Act represents a significant milestone in the global effort to regulate AI and ensure its safe, responsible, and ethical use. As the first comprehensive legislation of its kind, the AI Act sets a precedent for other regions to follow and establishes Europe as a leader in AI governance. For professionals in cybersecurity, information governance, and eDiscovery, the AI Act introduces a range of new obligations, challenges, and opportunities. From securing AI systems against threats to ensuring transparency and explainability in AI-driven decision-making, the Act demands a proactive approach to compliance and risk management.
Content Assessment: EU Passes Groundbreaking Artificial Intelligence Act: Implications for Cybersecurity, Information Governance, and eDiscovery
- Information: 93%
- Insight: 94%
- Relevance: 95%
- Objectivity: 92%
- Authority: 93%
Overall: 93% – Excellent
A short percentage-based assessment of the qualitative benefit and positive reception of the recent article "EU Passes Groundbreaking Artificial Intelligence Act: Implications for Cybersecurity, Information Governance, and eDiscovery," covering the passage of the EU AI Act.
Industry News – Artificial Intelligence Beat
EU Passes Groundbreaking Artificial Intelligence Act: Implications for Cybersecurity, Information Governance, and eDiscovery
ComplexDiscovery Staff
The European Parliament has approved the landmark Artificial Intelligence Act, which aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. This legislation, agreed upon in negotiations with member states in December 2023, has significant implications for cybersecurity, information governance, and eDiscovery professionals.
Key provisions of the AI Act include:
- Banning certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics, emotion recognition in the workplace and schools, social scoring, predictive policing based solely on profiling, and AI that manipulates human behavior or exploits people’s vulnerabilities.
- Restricting the use of remote biometric identification (RBI) systems by law enforcement, with "real-time" RBI allowed only under strict safeguards and "post-remote" RBI requiring judicial authorization linked to a criminal offense.
- Establishing obligations for high-risk AI systems, including those used in critical infrastructure, education, employment, essential services, law enforcement, migration and border management, justice, and democratic processes. These systems must assess and reduce risks, maintain use logs, ensure transparency and accuracy, and provide human oversight.
- Imposing transparency requirements on general-purpose AI (GPAI) systems and their models, including compliance with EU copyright law and publishing detailed summaries of training content. More powerful GPAI models that could pose systemic risks will face additional requirements, such as model evaluations, risk assessments and mitigation, and incident reporting.
- Labeling artificial or manipulated images, audio, or video content (“deepfakes”) to ensure transparency.
- Establishing regulatory sandboxes and real-world testing at the national level, accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.
For cybersecurity professionals, the AI Act emphasizes the importance of securing AI systems against potential threats and ensuring that these systems do not introduce new vulnerabilities. Information governance practitioners will need to ensure that AI systems comply with data protection regulations, maintain accurate records, and provide meaningful explanations to affected individuals.
In the eDiscovery context, the AI Act’s provisions on transparency and explainability will be crucial. Organizations using AI for document review and analysis must be prepared to provide clear explanations of how these systems make decisions and ensure that the results are accurate and unbiased. The right of citizens to submit complaints about AI systems and receive explanations about decisions based on high-risk AI that affect their rights will also impact eDiscovery processes.
The AI Act will enter into force 20 days after its publication in the Official Journal, with full applicability expected 24 months after entry into force. Various provisions will become applicable at different times: bans on prohibited practices (six months after entry into force), codes of practice (nine months), general-purpose AI rules including governance (12 months), and obligations for high-risk systems (36 months). Cybersecurity, information governance, and eDiscovery professionals should begin preparing for compliance with this groundbreaking legislation to ensure the safe and responsible use of AI in their organizations.
News Sources
- Artificial Intelligence Act: MEPs adopt landmark law
- Texts adopted – Artificial Intelligence Act – Wednesday, 13 March 2024
Assisted by GAI and LLM Technologies
Additional Reading
- The Cost of Innovation: Generative AI’s Impact on Business and Pricing Strategies in the eDiscovery Sphere
- Prompt Engineering: The New Vanguard of Legal Tech
Source: ComplexDiscovery OÜ