Abstract
In this chapter, I argue that, in addition to the generally accepted aim of reducing traffic-related injuries and deaths as much as possible, a principle of fairness in the distribution of risk should inform our thinking about how firms that produce autonomous vehicles ought to program them to respond in conflict situations involving human-driven vehicles. This principle, I claim, rules out programming autonomous vehicles to systematically prioritize the interests of their occupants over those of the occupants of other vehicles, including human-driven vehicles. Because there is reason to think that most consumers would prefer to purchase autonomous vehicles that do systematically prioritize their occupants’ interests over those of others, my argument, if correct, imposes a substantial ethical restriction on firms’ efforts to gain market share in the initial stages of the introduction of autonomous vehicles onto the road.