EU Data Protection and Policy: Considering Artificial Intelligence

As AI gains strategic importance, it is essential to shape global rules for its development and use. In promoting the development and uptake of AI, the European Commission has opted for a human-centric approach, meaning that AI applications must comply with fundamental rights. In this context, the rules laid down in the GDPR provide a general framework and contain specific obligations and rights that are particularly relevant for the processing of personal data in AI.


Editor’s Note: Artificial Intelligence (AI) is gaining strategic importance for nation-states and organizations as they seek to balance the benefits of technology with the responsibility to protect human rights. This short reference post provides extracts from two recent publications, one from the European Commission and one from the OECD, that highlight considerations for AI through the lens of data protection and policymaking.

Data Protection Legislation as an Integral Part of Policy Development

Extract from the recent EU Commission communication entitled Data Protection Rules as a Trust Enabler in the EU and Beyond – Taking Stock. This Communication to the European Parliament and the Council highlights the impact of data protection legislation, including the General Data Protection Regulation (GDPR), the Data Protection Law Enforcement Directive, and the Data Protection Regulation, on EU member states, organizations, and individuals. One interesting element of this communication is its description of how the protection of personal data is guaranteed and integrated into several EU policies, one of which concerns Artificial Intelligence (AI).

Artificial intelligence (‘AI’)

As AI gains strategic importance, it is essential to shape global rules for its development and use. In promoting the development and uptake of AI, the Commission has opted for a human-centric approach, meaning that AI applications must comply with fundamental rights. In this context, the rules laid down in the [GDPR] Regulation provide a general framework and contain specific obligations and rights that are particularly relevant for the processing of personal data in AI. For instance, the [GDPR] Regulation includes the right not to be subject to solely automated decision-making except in certain situations. It also includes specific transparency requirements on the use of automated decision-making, namely the obligation to inform about the existence of such decisions and to provide meaningful information and explain its significance and the envisaged consequences of the processing for the individual. These core principles of the [GDPR] Regulation have been recognized by the High-Level Expert Group on AI, the Organization for Economic Cooperation and Development, and the G20 as particularly relevant to address the challenges and opportunities arising from AI.

Data Protection Rules as a Trust Enabler in the EU and Beyond – Taking Stock (July 7, 2019)



Read the complete report online at Data Protection Rules as a Trust Enabler in the EU and Beyond – Taking Stock

Recommendation of the Council on Artificial Intelligence

Extract from the OECD’s recent Recommendation of the Council on Artificial Intelligence.

Additionally, on the topic of artificial intelligence, in May 2019 the Council of the Organization for Economic Cooperation and Development (OECD) adopted its Recommendation on Artificial Intelligence (AI). The Recommendation is the first intergovernmental standard on AI, and it aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI and calls on AI actors to promote and implement them:

  • inclusive growth, sustainable development, and well-being;
  • human-centered values and fairness;
  • transparency and explainability;
  • robustness, security, and safety;
  • and accountability.

In addition to and consistent with these values-based principles, the Recommendation also provides five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI, namely:

  • investing in AI research and development;
  • fostering a digital ecosystem for AI;
  • shaping an enabling policy environment for AI;
  • building human capacity and preparing for labor market transformation;
  • and international co-operation for trustworthy AI.

The Recommendation also includes a provision for the development of metrics to measure AI research, development and deployment, and for building an evidence base to assess progress in its implementation.

OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449


Additional Reading

Source: ComplexDiscovery

ComplexDiscovery combines original industry research with curated expert articles to create an informational resource that helps legal, business, and information technology professionals better understand the business and practice of data discovery and legal discovery.
