
Content Assessment: Shaping the Future of AI: The European Parliament's Stance on the AI Act

Information – 93%
Insight – 92%
Relevance – 93%
Objectivity – 94%
Authority – 95%

Overall: 93% – Excellent

A short percentage-based assessment of the qualitative benefit of this post, which highlights the European Parliament's negotiating position on the Artificial Intelligence (AI) Act.

Editor’s Note: We are witnessing a significant milestone in the regulation of Artificial Intelligence (AI) as the European Parliament has recently adopted its negotiating position on the AI Act. This Act is set to govern the creation and usage of AI systems within the European Union. Co-rapporteurs Brando Benifei and Dragoș Tudorache have been instrumental in this process, proposing key compromise amendments that include a list of high-risk AI applications, prohibited practices, and definitions of crucial concepts. The formal adoption of the Parliament’s negotiating position took place on 14 June 2023, marking a substantial stride towards the establishment of the world’s first comprehensive law on Artificial Intelligence. This development underscores the growing importance and influence of AI in our society and the need for robust legal frameworks to ensure its safe and ethical use.


Background Note: In a significant move toward the regulation of Artificial Intelligence (AI), the European Parliament has adopted its negotiating position on the AI Act. This pioneering legislation is set to establish the first-ever rules for safe and transparent AI, aligning AI development and usage in Europe with EU rights and values. These rights and values include human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being. The Act holds significant implications for professionals in cybersecurity, information governance, and eDiscovery. The new regulations will directly impact how AI systems are developed, deployed, and managed, necessitating a thorough understanding and compliance with the Act.

  • For cybersecurity professionals, the Act’s emphasis on safety, privacy, and transparency aligns with the need to protect data and systems from potential threats and breaches. It stresses the importance of robust security measures in AI applications, particularly those handling sensitive data.
  • Information governance professionals will need to consider the Act’s provisions in their data management strategies, particularly regarding the use of AI for data categorization and the handling of sensitive characteristics. The Act’s focus on transparency and non-discrimination also reinforces the need for fair and accountable data practices.
  • For eDiscovery professionals, the Act could influence how AI is used in the discovery process. The requirement for AI systems to be transparent and explainable could also impact how AI-derived evidence is viewed in legal proceedings.

The AI Act truly represents a new frontier in the regulation of AI, with far-reaching implications for those in the eDiscovery ecosystem.

Press and At A Glance Report*

MEPs Ready to Negotiate First-Ever Rules for Safe and Transparent AI

The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.

  • Full ban on Artificial Intelligence (AI) for biometric surveillance, emotion recognition, predictive policing
  • Generative AI systems like ChatGPT must disclose that content was AI-generated
  • AI systems used to influence voters in elections considered to be high-risk

On Wednesday (14 June 2023), the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favor, 28 against, and 93 abstentions ahead of talks with EU member states on the final shape of the law. The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

Prohibited AI Practices

The rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behavior or personal characteristics). MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation);
  • predictive policing systems (based on profiling, location, or past criminal behavior);
  • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

High-Risk AI

MEPs ensured the classification of high-risk applications will now include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections and in recommender systems used by social media platforms (with over 45 million users) were added to the high-risk list.

Obligations for General Purpose AI

Providers of foundation models – a new and fast-evolving development in the field of AI – would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law) and register their models in the EU database before their release on the EU market. Generative AI systems based on such models, like ChatGPT, would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.

Supporting Innovation and Protecting Citizens’ Rights

To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licenses. The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.

Finally, MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Quotes

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council”.

Co-rapporteur Dragoș Tudorache (Renew, Romania) said: “The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law”.

Read the original announcement.


At A Glance Report: Parliament’s Negotiating Position on the Artificial Intelligence Act (PDF)

European Parliament’s Negotiating Position on the Artificial Intelligence Act

Read the original paper.

*Shared in accordance with terms of usage (EPRS acknowledgment and advanced notice).


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery

 

Have a Request?

If you have a request for information or about our offerings, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Midjourney, and DALL-E, to assist, augment, and accelerate the development and publication of both new and revised content in its published posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.