International Communication
Can a computer lie? Attributing legal responsibility to AI
Organization: ISCTE-IUL
Videoconference, Portugal
22/10/2020
Abstract:

Finding criteria for attributing legal responsibility to an artificial cognitive agent is a hard problem. Beyond its ethical value, the problem can be related to the computational question of whether a Turing machine is able to lie. Once a machine can lie, one may accept that it has become juridically accountable, not only for lying but for any further action. However, how can one assert that a computer is lying? It is not enough to say that the machine has somehow missed the truth; that would simply indicate a malfunction. To properly conceive the act of lying in a machine, one must know what it means to lie computationally and how this action compares to the human act of lying. To address the problem, I will present an effective procedure for the act of lying involving two Turing machines: a debunking machine and a lying machine. This procedure will be suggested as a mimic of the human act of lying. Still, a difficulty remains: when a person lies, that same person knows that he/she is lying. Computationally speaking, this would mean applying the former procedure to a single device, so that the same Turing machine would be a debunking machine and a lying machine simultaneously. This situation will be shown to fail.
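As a rough illustration (a minimal sketch: the toy world, the function names, and the single-machine collapse are illustrative assumptions of mine, not the formal construction presented in the talk), the two-machine procedure and its one-machine failure can be caricatured in Python:

    # Illustrative sketch only (assumed names and toy world, not the
    # talk's formal construction). A "lie" is modeled here as an
    # assertion certified as false by an external verifier.

    WORLD = {"the sky is blue": True, "2 + 2 = 5": False}  # toy ground truth

    def debunking_machine(statement: str) -> bool:
        """External verifier: decides whether `statement` holds in the toy world."""
        return WORLD[statement]

    def lying_machine(statement: str) -> bool:
        """Asserts the opposite of what the debunking machine certifies."""
        return not debunking_machine(statement)

    def single_machine(statement: str) -> bool:
        """Both roles collapsed into one device: the machine must negate
        its own verdict, a liar-paradox regress that never terminates."""
        return not single_machine(statement)  # infinite recursion

    for s in WORLD:
        print(s, "| debunker:", debunking_machine(s), "| liar asserts:", lying_machine(s))
    # Calling single_machine(...) raises RecursionError: once the external
    # verifier is removed, there is no reference against which the
    # assertion counts as a lie.

On this caricature, the lie exists only relative to the external debunking machine; collapsing both roles into one device removes that reference point, which is the intuition behind the failure result.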
If we accept that knowing that one is lying implies self-awareness, then, since a Turing machine cannot lie, it fails to be self-aware and, as such, can hardly be considered morally accountable for its actions. Conversely, if a computer does become self-aware, it seems mandatory to hold it morally and juridically responsible for its actions. A possible way of acknowledging that it has become self-aware would be precisely the fact that it has indeed lied.

