Content Assessment: Beyond Debiasing? Regulating AI and Its Inequalities

Information - 95%
Insight - 100%
Relevance - 90%
Objectivity - 85%
Authority - 90%

Overall Rating: 92% (Excellent)

A short percentage-based assessment of the qualitative benefit of the EDRi-published paper on technical debiasing in the use of artificial intelligence.

Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.

To submit recommendations for consideration and inclusion in ComplexDiscovery’s cyber, data, and legal discovery-centric service, product, or research announcements, contact us today.


Research Report*

Beyond Debiasing: Regulating AI and Its Inequalities

Citation: Balayn, A. and Gürses, S., 2021. Beyond Debiasing? Regulating AI and Its Inequalities. [online] Brussels: EDRi. Available at: <https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf> [Accessed 23 September 2021].

Executive Summary Extract

AI-driven systems have broad social and economic impacts and demonstrably exacerbate structural discrimination and inequalities. For the most part, regulators have responded by narrowly focusing on the techno-centric solution of debiasing algorithms and datasets. By doing so, regulators risk creating a bigger problem for both AI governance and democracy because this narrow approach squeezes complex socio-technical problems into the domain of design and thus into the hands of technology companies. By largely ignoring the costly production environments that machine learning requires, regulators condone an expansionist model of computational infrastructures (clouds, mobile phones, and sensor networks) driven by Big Tech. Effective solutions require bold regulations that target the root of power imbalances inherent to the pervasive deployment of AI-driven systems.

Report Commentary Extract: Analyzing AI Systems

Even if policymakers develop a better grasp of the technical methods of debiasing data or algorithms, debiasing approaches will not effectively address the discriminatory impact of AI systems. By design, debiasing approaches concentrate power in the hands of service providers, giving them (and not lawmakers) the discretion to decide what counts as discrimination, when it occurs and how to address it.
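For readers less familiar with what these debated technical methods look like in practice, the minimal sketch below illustrates one widely cited dataset-level debiasing technique, reweighing in the spirit of Kamiran and Calders (2012). It is an illustration only, not an excerpt from or implementation of the EDRi report; the function name, variable names, and toy data are hypothetical.

    # Illustrative sketch only: a dataset-level "debiasing" step (reweighing),
    # shown to clarify what "technical debiasing" refers to.
    # Names and toy data are hypothetical, not drawn from the EDRi report.
    from collections import Counter

    def reweigh(groups, labels):
        """Compute per-record weights so that, in the weighted data,
        protected-group membership and outcome labels are independent."""
        n = len(labels)
        group_counts = Counter(groups)               # records per protected group
        label_counts = Counter(labels)               # records per outcome
        joint_counts = Counter(zip(groups, labels))  # records per (group, outcome) pair
        weights = []
        for g, y in zip(groups, labels):
            expected = (group_counts[g] / n) * (label_counts[y] / n)  # share if independent
            observed = joint_counts[(g, y)] / n                       # share as measured
            weights.append(expected / observed)
        return weights

    # Hypothetical toy data: group "a" receives the positive outcome more often
    # than group "b"; reweighing up-weights the under-represented (group, outcome)
    # combinations and down-weights the over-represented ones.
    groups = ["a", "a", "a", "b", "b", "b"]
    labels = [1, 1, 0, 1, 0, 0]
    print(reweigh(groups, labels))  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]

Even a correct implementation of a step like this only rebalances a chosen statistical relationship in the training data; as the commentary above notes, it still leaves service providers to decide which groups, outcomes, and thresholds count as discrimination.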

The report unpacks the problematic assumptions about AI and offers an assessment of the limits of a focus on debiasing. It puts forward alternative viewpoints that go beyond current techno-centric debates on data, algorithms, and automated decision-making systems (ADMs). These frameworks outline different views for analyzing AI systems’ societal impact, yet they are currently missing in policy debates on ‘bias’. These views include:

  • Machine Learning View: Aspects inherent to the fundamental principles of machine learning (such as the repetition of past data patterns, targeted inferences, and an inherent tendency to increase scale) are likely to pose harms that are often not considered in debiasing debates.
  • Production View: The focus on AI as a set ‘product’ obscures the complex processes by which AI systems are integrated into broader environments, processes that create significant harms (such as labour exploitation and environmental extraction) often overlooked by policymakers.
  • Infrastructural View: The production and deployment of machine learning is heavily dependent on existing computational infrastructures in the hands of a few companies. Ownership over these computational resources is likely to lead to greater concentration of the technical, financial and political power of technology companies, exacerbating global concerns around political, economic and social inequalities.
  • Organizational View: AI-based systems offer organizations the possibility to automate and centralize workflows and to optimize institutional management and operations. These transformations are likely to bring about dependencies on third parties and computational infrastructures, with demonstrable consequences for the structure of the public sector and democracy more generally.

Read the complete report commentary from the original source.


Complete Report: Beyond Debiasing? Regulating AI and Its Inequalities (PDF)

Access the original report.

*Shared with permission of EDRi under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. European Digital Rights (EDRi) is an association of civil and human rights organizations from across Europe focused on rights and freedoms in digital environments.


Source: ComplexDiscovery

 

Have a Request?

If you have a request regarding information or offerings that you would like us to address, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, DALL-E 2, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in published posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.