In their thought-provoking article, Sedlakova and Trachsel (2023) defend the view that the status—both epistemic and ethical—of Conversational Artificial Intelligence (CAI) used in psychotherapy is complicated. While therapeutic CAI seems to be more than a mere tool implementing particular therapeutic techniques, it falls short of being a “digital therapist.” One of the main arguments supporting the latter claim is that even though “the interaction with CAI happens in the course of conversation… the conversation is profoundly different from a conversation with a human therapist” (Sedlakova and Trachsel 2023, 8). In particular, unlike a human therapist, CAI cannot help its users gain new insight and self-understanding (Sedlakova and Trachsel 2023). We agree that currently available therapeutic CAI cannot be considered a “digital therapist”; however, we think that the issue of acquiring new self-understanding in interaction with therapeutic CAI is more complicated than Sedlakova and Trachsel suggest.
It is often suggested that we are equipped with a set of cognitive tools that help us to filter out unreliable testimony. But are these tools effective? I answer this question in two steps. Firstly, I argue that they are not real-time effective. The process of filtering, which takes place simultaneously with or right after language comprehension, does not prevent a particular hearer on a particular occasion from forming beliefs based on false testimony. Secondly, I argue that they are long-term effective. Some hearers sometimes detect false testimony, which increases speakers’ incentives for honesty and stabilizes the practice of human communication in which deception is risky and costly. In short, filtering prevents us from forming a large number of beliefs based on false testimony, not by turning each of us into a high-functioning polygraph but by turning the social environment of human communication into one in which such polygraphs are not required. Finally, I argue that these considerations support strong anti-reductionism about testimonial entitlement.
The majority of our linguistic exchanges, such as everyday conversations, are divided into turns; one party usually talks at a time, with only relatively rare occurrences of brief overlaps in which there are two simultaneous speakers. Moreover, conversational turn-taking tends to be very fast. We typically start producing our responses before the previous turn has finished, i.e., before we are confronted with the full content of our interlocutor’s utterance. This raises interesting questions about the nature of linguistic understanding. Philosophical theories typically focus on linguistic understanding characterized either as an ability to grasp the contents of utterances in a given language or as outputs of this ability—mental states of one type or another. In this paper, I supplement these theories by developing an account of the process of understanding. I argue that it enables us to capture the dynamic and temporal aspect of understanding and reconcile philosophical investigations with empirical research on language comprehension.
What justifies our beliefs about what other people say? According to epistemic inferentialism, the justification of comprehension-based beliefs depends on the justification of other beliefs, e.g., beliefs about what words the speaker uttered or even what sounds they produced. According to epistemic non-inferentialism, the justification of comprehension-based beliefs does not depend on the justification of other beliefs. This paper offers a new defense of epistemic non-inferentialism. First, I discuss three counterexamples to epistemic non-inferentialism provided recently by Brendan Balcerak Jackson. I argue that only one of Balcerak Jackson’s counterexamples is effective, and that it is effective against only one version of epistemic non-inferentialism, viz. language comprehension dogmatism. Second, I propose an alternative version of epistemic non-inferentialism, viz. comprehension-process reliabilism, which is immune to these counterexamples. I conclude that we should follow Balcerak Jackson in his rejection of language comprehension dogmatism but not all the way to the endorsement of epistemic inferentialism. Comprehension-process reliabilism is superior to both these alternatives.
The nature of linguistic understanding is a much-debated topic. Among the issues that have been discussed, two questions have recently received a lot of attention: (Q1) ‘Are states of understanding direct (i.e. represent solely what is said) or indirect (i.e. represent what is said as being said/asserted)?’ and (Q2) ‘What kind of mental attitude is linguistic understanding (e.g. knowledge, belief, seeming)?’ This paper argues that, contrary to what is commonly assumed, there is no straightforward answer to either of these questions. This is because linguistic understanding cannot be identified with a single mental attitude towards a particular representation. Instead, we should characterize states of linguistic understanding as involving complex representational structures generated by a dual-stream process. The first stream operates on direct representations of what is said, while the second operates on representations of what is said as being said/asserted by a given source. Both these streams feed a situation model, i.e. a complex representation of a state of affairs described by a given piece of discourse.
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, has triggered discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, if psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing with only relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully fledged psychotherapy until so-called “general” or “human-like” AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensuring well-balanced and steady progress on the path to AI-based psychotherapy.
The goal of this paper is twofold. First, we argue that the understanding one has of a proposition or a propositional content of a representational vehicle is a species of what contemporary epistemologists characterize as objectual understanding. Second, we demonstrate that even though this type of understanding differs from linguistic understanding, in many instances of successful communication these two types of understanding jointly contribute to understanding a communicated thought.