Lydia Farina
Nottingham University
  1. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness. Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25.
    This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack the capacity for agential self-awareness, i.e. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific account (...)
  2. The Route to Artificial Phenomenology; ‘Attunement to the World’ and Representationalism of Affective States. Lydia Farina - 2023 - In Catrin Misselhorn, Tom Poljanšek, Tobias Störzinger & Maike Klein (eds.), Emotional Machines: Perspectives from Affective Computing and Emotional Human-Machine Interaction. Springer Fachmedien Wiesbaden. pp. 111-132.
    According to dominant views in affective computing, artificial systems such as robots and algorithms cannot experience emotion because they lack the phenomenological aspect associated with emotional experience. In this paper I suggest that if we wish to design artificial systems able to experience emotional states with phenomenal properties, we should approach artificial phenomenology by borrowing insights from the concept of ‘attunement to the world’ introduced by early phenomenologists. This concept refers to an openness to the world, a (...)
  3. Sven Nyholm, Humans and Robots; Ethics, Agency and Anthropomorphism. Lydia Farina - 2022 - Journal of Moral Philosophy 19 (2):221-224.
    How should human beings and robots interact with one another? Nyholm’s answer to this question is given in the form of a conditional: if a robot looks or behaves like an animal or a human being, then we should treat it with a degree of moral consideration (p. 201). Although this is not a novel claim in the literature on AI ethics, what is new is the reason Nyholm gives to support this claim; we should treat robots that look (...)