MoL-2023-31: Reclaiming Enlightenment: on the logical foundations of the rule of law in a legitimate algocracy

MoL-2023-31: Iatrou, Evan (Evangelos) (2023) Reclaiming Enlightenment: on the logical foundations of the rule of law in a legitimate algocracy. [Report]

MoL-2023-31.text.pdf - Published Version

Download (6MB)

Abstract

Algocracies, i.e., political orders where political power is exercised inter alia by or via algorithms, are already a reality. From algorithmic models partaking in judicial decisions and facial-recognition AI in surveillance systems to police drones and spyware, more and more political orders are becoming increasingly algocratic, raising concerns about their legitimacy: the so-called threat of algocracy. Those concerns are usually about non-consensual data-driven profiling or about the implications of AI’s opacity, such as the lack of informed participation. There is, though, a more fundamental threat to legitimacy. It is not a threat that undermines the legitimacy of a political order, but one that challenges what legitimacy means in the first place. It is a threat that endangers the Enlightenment’s legitimacy paradigm, in which human reason is the means of establishing legitimate political orders.
Considering the above, the objective of the Thesis is to provide requirements that should comprise the foundations for engineering algocratic AI (henceforth ALGOAI) in order to avoid the Enlightenment’s foretold death. I focus on two types of such requirements: (α) requirements for the logical structure of ALGOAI’s output and architecture that influence the AI’s explanatory power; (β) meta-scientific requirements for the practice of engineering ALGOAI models with such logical requirements, especially for the practice of logicians & formal philosophers. I include transdisciplinary requirements, i.e., requirements that transcend scientific disciplinary practice, such as societal, political, ethical, and legal values. Among those values, the rule of law reigns supreme both in ontological priority and in universality across political orders. Ergo, I focus on ALGOAI that is used as or by judicial authorities, the quintessential paradigm of AI that threatens the Enlightenment’s legitimacy paradigm. Many of the results can be generalised to other types of ALGOAI as well.
Regarding (α), I start in §1 by arguing that (legal) ALGOAI engineering practice is centered around evaluative judgements about specific legitimacy values (e.g., the rule of law, human rights, democracy), in contrast to other disciplinary practices that consist mainly of factual judgements. I further argue what type of justification those evaluative judgements have in the Enlightenment legitimacy paradigm, why this type of justification is now threatened, and what (legal) ALGOAI engineers should do in order to respond to this threat. My proposal centers on the logical structure of those justifications. I ground my proposal on a generalisation of Benacerraf’s dilemma from the philosophy of mathematics to meta-ethics, which I name Benacerraf’s curse. It is essentially a problem of ambiguity of meaning. In §3, I argue why and how conceptual re-engineering, and in particular Carnap’s method of explication, can be used to engineer legal ALGOAI models that satisfy the foregoing requirements. In particular, explication should be used to re-engineer concepts of judicial reasoning used in actual legal practice. In §4, I provide a toy example of how explication can be applied to judicial causal reasoning in order to contribute to the engineering of legal ALGOAI intended to be used by the European Court of Human Rights (ECtHR). I focus on the so-called NESS and but-for causal justifications. My goal is not to provide a full-fledged account of an explicated concept of causal justification but to show how explication can be performed.
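The NESS and but-for tests mentioned above are standard notions from the causation literature, and their logical shape can be illustrated independently of the Thesis. The sketch below is not taken from the Thesis; it is a hypothetical toy formalisation that assumes a scenario is modelled as a set of actual conditions and sufficiency as a boolean predicate over sets of conditions. The function names `but_for` and `ness` and the predicate `sufficient` are illustrative choices, not identifiers from the work.

```python
from itertools import combinations

def but_for(cause, actual_conditions, sufficient):
    # But-for test: the outcome occurs given all actual conditions,
    # but would not occur had the candidate cause been absent.
    return (sufficient(actual_conditions)
            and not sufficient(actual_conditions - {cause}))

def ness(cause, actual_conditions, sufficient):
    # NESS test: the candidate cause is a Necessary Element of some
    # Sufficient Set of conditions that actually occurred.
    conds = list(actual_conditions)
    for r in range(1, len(conds) + 1):
        for subset in combinations(conds, r):
            s = set(subset)
            if cause in s and sufficient(s) and not sufficient(s - {cause}):
                return True
    return False

# Classic overdetermination scenario: two independent fires, f1 and f2,
# each alone sufficient to burn the house down.
house_burns = lambda conditions: 'f1' in conditions or 'f2' in conditions
```

On this toy model, `but_for('f1', {'f1', 'f2'}, house_burns)` is false (removing one fire still leaves a sufficient cause), whereas `ness('f1', {'f1', 'f2'}, house_burns)` is true, since `{'f1'}` is itself a sufficient set in which f1 is necessary. This divergence in overdetermination cases is exactly why the two tests are distinguished in the literature.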
Regarding (β), I contextualise my proposal within the philosophy of interdisciplinarity, the nascent evolutionary stage of the philosophy of science. More precisely, in §2, I argue which disciplines should collaborate, and how, in order to engineer legal ALGOAI models based on the requirements I introduced in §1. I put emphasis on the role that logicians & formal philosophers should have in an ALGOAI engineering team. Contra traditional philosophy of science, philosophy of interdisciplinarity emphasizes the need to engineer ALGOAI based on transdisciplinary meta-scientific requirements. Considering this, I provide such meta-scientific transdisciplinary requirements, such as the role of legal ALGOAI engineers in the new system of checks and balances that characterises algocratic orders. ALGOAI engineers are no longer mere engineers; they are political actors that (co-)exercise political power. Once more, I focus on the normative contribution of logicians & formal philosophers to those transdisciplinary requirements. Finally, I situate those transdisciplinary requirements in the context of the emerging 5th industrial revolution and the new social order predicated on it, the so-called SOCIETY 5.0.

Item Type: Report
Report Nr: MoL-2023-31
Series Name: Master of Logic Thesis (MoL) Series
Year: 2023
Subjects: Logic
Philosophy
Depositing User: Dr Marco Vervoort
Date Deposited: 16 Nov 2023 17:29
Last Modified: 16 Nov 2023 17:29
URI: https://eprints.illc.uva.nl/id/eprint/2283
