Abstract
In the contemporary debate on artificial morality, the trolley problem has found a new field of application: the “ethics of crashes” involving self-driving cars. This paper aims to show that the trolley dilemma is out of place in the context of automated traffic, not only with regard to the object of the dilemma (which human being should be sacrificed in crashes with inevitable fatal consequences), but also with regard to the subject who is called upon to decide. In States whose constitutional charters protect fundamental individual rights, the law already places definite constraints on how dilemmas concerning self-driving cars may be resolved. The idea that crashes involving self-driving cars raise extraordinary moral questions, rather than issues of safety, transparency, caution and control, as with any other machine, perhaps derives from the human inclination to regard anthropomorphic objects as agents, or even as moral agents.
The development of autonomous machines may foster the belief that three operations of crowdsourcing and substitution, variously intertwined, are possible and permissible: first, the substitution of law with ethics; second, of moral evaluation with a computational model of aggregated social preferences; and, finally, of human moral agency with the autonomous, unpredictable and opaque functioning of machines.