Content Assessment: Beyond Debiasing? Regulating AI and Its Inequalities
- Information: 95%
- Insight: 100%
- Relevance: 90%
- Objectivity: 85%
- Authority: 90%

Overall Rating: 92% (Excellent)
A short percentage-based assessment of the qualitative benefit of the EDRi-published paper on technical debiasing in the use of artificial intelligence.
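For context, the overall rating shown above appears to be the simple arithmetic mean of the five category scores; this is an inference from the published figures rather than a stated methodology:

\[
\frac{95 + 100 + 90 + 85 + 90}{5} = \frac{460}{5} = 92
\]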
Editor’s Note: From time to time, ComplexDiscovery highlights publicly available or privately purchasable announcements, content updates, and research from cyber, data, and legal discovery providers, research organizations, and ComplexDiscovery community members. While ComplexDiscovery regularly highlights this information, it does not assume any responsibility for content assertions.
To submit recommendations for consideration and inclusion in ComplexDiscovery’s cyber, data, and legal discovery-centric service, product, or research announcements, contact us today.
Research Report*
Beyond Debiasing? Regulating AI and Its Inequalities
Citation: Balayn, A. and Gürses, S., 2021. Beyond Debiasing? Regulating AI and Its Inequalities. [online] Brussels: EDRi. Available at: <https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf> [Accessed 23 September 2021].
Executive Summary Extract
AI-driven systems have broad social and economic impacts and demonstrably exacerbate structural discrimination and inequalities. For the most part, regulators have responded by narrowly focusing on the techno-centric solution of debiasing algorithms and datasets. By doing so, regulators risk creating a bigger problem for both AI governance and democracy because this narrow approach squeezes complex socio-technical problems into the domain of design and thus into the hands of technology companies. By largely ignoring the costly production environments that machine learning requires, regulators condone an expansionist model of computational infrastructures (clouds, mobile phones, and sensor networks) driven by Big Tech. Effective solutions require bold regulations that target the root of power imbalances inherent to the pervasive deployment of AI-driven systems.
Report Commentary Extract: Analyzing AI Systems
Even if policymakers develop a better grasp of the technical methods of debiasing data or algorithms, debiasing approaches will not effectively address the discriminatory impact of AI systems. By design, debiasing approaches concentrate power in the hands of service providers, giving them (and not lawmakers) the discretion to decide what counts as discrimination, when it occurs and how to address it.
The report unpacks problematic assumptions about AI and assesses the limits of a focus on debiasing. It puts forward alternative viewpoints that go beyond current techno-centric debates on data, algorithms, and automated decision-making systems (ADMs). These frameworks outline different views for analyzing the societal impact of AI systems that are currently missing from policy debates on ‘bias’. These views include:
- Machine Learning View: Aspects inherent to the fundamental principles of machine learning (such as the repetition of past data patterns, targeted inferences, and an inherent tendency to increase scale) are likely to pose harms that are often not considered in debiasing debates.
- Production View: The focus on AI as a set ‘product’ obscures the complex processes by which AI systems are integrated into broader environments, processes that create significant harms (such as labour exploitation and environmental extraction) which are often overlooked by policymakers.
- Infrastructural View: The production and deployment of machine learning is heavily dependent on existing computational infrastructures in the hands of a few companies. Ownership over these computational resources is likely to lead to greater concentration of the technical, financial and political power of technology companies, exacerbating global concerns around political, economic and social inequalities.
- Organizational View: AI-based systems offer organizations the possibility to automate and centralize workflows and to optimize institutional management and operations. These transformations are likely to bring about dependencies on third parties and computational infrastructures, with demonstrable consequences for the structure of the public sector and democracy more generally.
Read the complete report commentary from the original source.
Complete Report: Beyond Debiasing? Regulating AI and Its Inequalities (PDF)
*Shared with permission of EDRi under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. European Digital Rights (EDRi) is an association of civil and human rights organizations from across Europe focused on rights and freedoms in digital environments.
Additional Reading
- Socially Acceptable? EDPB Guidelines on the Targeting of Social Media Users
- From De-Identification to Re-Identification: Considering Personal Data Protection
Source: ComplexDiscovery