Abstract
As vehicles become more autonomous, the task of designing guidance systems that make morally acceptable decisions becomes increasingly urgent. It is sometimes assumed that a single solution will be acceptable across cultures. In this paper we argue for the importance of intercultural perspectives; in particular, we explore possible insights derived from Buddhist philosophy, drawing on the virtue of compassion (karuṇā). We suggest that autonomous vehicles should first learn in supervised situations until they reach a level of decision-making ability that human experts within a given culture deem morally good. How should they be evaluated? In some debates it is assumed that a one-size-fits-all answer can be found, based on an agreed-upon moral theory. Apart from problems of ethical pluralism within a given culture, there are also problems arising from differing moral preferences and driving styles across cultures. One way to address the problem is to introduce a machine-learning training curriculum that would take autonomous vehicles through a process comparable to that of a novice human driver learning to drive and passing her driving license test.