Abstract
Are digital subjects in virtual reality morally equivalent to human subjects? We divide this problem into two questions bearing, respectively, on cognitive and emotional equivalence. Typically, cognitive equivalence does not hold due to the lack of substantialist indistinguishability, but emotional equivalence applies: digital subjects endowed with a face or language elicit emotional responses on a par with real-world pleasure, desire, horror, or fear. This is sufficient for projecting moral traits onto avatars in the metaverse or onto dialog systems based on large language models. Our main case study is a chatbot trained on the chat history between a Canadian man and his deceased fiancée. To demonstrate emotional equivalence and the mechanism of moral transfer, we compare digital devices with the functioning of oracles in a story by Plutarch and in a narrative that draws on the book of Genesis. Finally, we note that, along with projecting ethical issues, humans also tend to carry real-world solutions to moral conundrums into extended reality. We argue that the lack of cognitive equivalence makes such projections problematic, as they lead to overpolicing and a sanitized metaverse.