A Logical Approach to Algorithmic Opacity
In: Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE-23
Editor: CEUR Workshop Proceedings, AIxIA Series
Pages: 89–95
URL: https://ceur-ws.org/Vol-3615/short4.pdf
Abstract:
In [1], we introduced a novel definition of the epistemic opacity of AI systems. Building on this, we proposed a framework for reasoning about an agent’s epistemic attitudes toward a possibly opaque algorithm and investigated the conditions necessary for achieving epistemic transparency. This logical framework, however, faced several limitations, stemming mainly from its overly idealized nature and from the absence of a formal representation of the inner structure of AI systems. In the present work, we address these limitations by providing a more in-depth analysis of classifiers using first-order evidence logic. This step significantly enhances the applicability of our definitions of epistemic opacity and transparency to machine learning systems.