Talk
Two Dogmas of Trustworthy AI
Mattia Petrolo, Daniele Chiffi, Viola Schiaffonati, Giacomo Zanotti
Organization: CFCUL
Ciências ULisboa, Building C6
12/07/2023
Abstract:

A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). Our aim here is twofold. First, we survey the philosophical debate on TAI and contend that the prevailing views on trust and AI fail to account for some crucial aspects of the design and use of AI systems. Second, we put forward an original proposal that avoids these shortcomings. The current debate on TAI largely boils down to a choice between two alternatives, neither of which is fully satisfying. Purely epistemic accounts of trust, which take trust to be a matter of rational choice and probability estimation, allow one to make sense of the notion of TAI but fail to distinguish TAI from merely reliable AI – a distinction that is usually deemed essential. Motivational accounts of trust, by contrast, focus on the motivations and moral obligations of the trustee and clearly distinguish trustworthiness from reliability. However, given that AI systems hardly possess motivations and moral obligations, on these accounts the notion of TAI turns out to be a category mistake. In both cases, the notion of TAI ultimately reduces to that of reliable AI. We argue that this outcome is undesirable. AI systems are not ethically neutral: the notion of TAI should allow us to go beyond mere reliability and consider critical ethical dimensions involved in the design and use of AI systems. In our view, the current philosophical debate rests on two dogmas, namely that (i) trust in AI should be modeled on interpersonal trust and (ii) the attribution of trustworthiness to AI systems should be understood literally. By dropping both dogmas, we provide an alternative framework that insists on the importance of a notion of TAI capturing both the epistemic and non-epistemic dimensions of the design and use of AI systems.

