Contents (3 found)
  1. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  2. Data over dialogue: Why artificial intelligence is unlikely to humanise medicine. Joshua Hatherley - 2024 - Dissertation, Monash University.
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the (...)
  3. Exploring, expounding & ersatzing: a three-level account of deep learning models in cognitive neuroscience. Vanja Subotić - 2024 - Synthese 203 (3):1-28.
    Deep learning (DL) is a statistical technique for pattern classification through which AI researchers train artificial neural networks containing multiple layers that process massive amounts of data. I present a three-level account of explanation that can be reasonably expected from DL models in cognitive neuroscience and that illustrates the explanatory dynamics within a future-biased research program (Feest, Philosophy of Science 84:1165–1176, 2017; Doerig et al., Nature Reviews Neuroscience 24:431–450, 2023). By relying on the mechanistic framework (Craver, Explaining the (...)