Results for 'Strong AI'

1000+ found
  1. Weak Strong AI: An Elaboration of the English Reply to the Chinese Room. Ronald L. Chrisley - unknown
    Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to (...)
  2. Searle, Strong AI, and Two Ways of Sorting Cucumbers. Karl Pfeifer - 1992 - Journal of Philosophical Research 17:347-350.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  3. "Strong AI": An Adolescent Disorder. M. Gams - 1997 - In Matjaz Gams (ed.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press. pp. 43-1.
  4. Did Searle Attack Strong Strong AI or Weak Strong AI? Aaron Sloman - 1986 - In Artificial Intelligence and its Applications. Chichester.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong (...)
  5. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses. Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir (...)
  6. A Modal Defence of Strong AI. Steffen Borge - 2007 - The Proceedings of the Twenty-First World Congress of Philosophy 6:127-131.
    John Searle has argued that the aim of strong AI to create a thinking computer is misguided. Searle's "Chinese Room Argument" purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself, but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  7. Searle, Strong AI, and Two Ways of Sorting Cucumbers. Karl Pfeifer - 1992 - Journal of Philosophical Research 17:347-350.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  8. Strong AI and the Problem of “Second-Order” Algorithms. Gerd Gigerenzer - 1990 - Behavioral and Brain Sciences 13 (4):663-664.
  9. Die Starke KI-These (The Strong AI-Thesis). Stephan Zelewski - 1991 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 22 (2):337-348.
    The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's ‘proof’ is not conclusive. Though it may be reconstructed (...)
  10. Redcar Rocks: Strong AI and Panpsychism. J. M. Bishop - 2000 - Consciousness and Cognition 9 (2):S35.
  11. Gödel's Theorem and Strong AI: Is Reason Blind? Burton Voorhees - 1999 - In S. Smets, J. P. Van Bendegem & G. C. Cornelis (eds.), Metadebates on Science. VUB Press & Kluwer. pp. 6-43.
  12. A Modal Defence of Strong AI. Steffen Borge - 2007 - In Dermot Moran & Stephen Voss (eds.), The Proceedings of the Twenty-First World Congress of Philosophy. The Philosophical Society of Turkey. pp. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  13. Did Searle Attack Strong Strong or Weak Strong AI? Aaron Sloman - 1986 - In A. G. Cohn & R. J. Thomas (eds.), Artificial Intelligence and its Applications. John Wiley and Sons.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong (...)
  14. Searle on Strong AI. Philip Cam - 1990 - Australasian Journal of Philosophy 68 (1):103-108.
  15. Consciousness as Computation: A Defense of Strong AI Based on Quantum-State Functionalism. R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences (...)
  16. In Defense of Strong AI. Corey Baron - 2020 - Stance 10 (1):38-49.
    This paper argues against John Searle in defense of the potential for computers to understand language by showing that semantic meaning is itself a second-order system of rules that connects symbols and syntax with extralinguistic facts. Searle’s Chinese Room Argument is contested on theoretical and practical grounds by identifying two problems in the thought experiment, and evidence about “machine learning” is used to demonstrate that computers are already capable of learning to form true observation sentences in the same way humans (...)
  17. Searle's Misunderstandings of Functionalism and Strong AI. Georges Rey - 2003 - In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. pp. 201-225.
  18. Searle's Abstract Argument Against Strong AI. Andrew Melnyk - 1996 - Synthese 108 (3):391-419.
    Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest (...)
  19. The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI. [REVIEW] Jerome C. Wakefield - 2003 - Minds and Machines 13 (2):285-319.
    I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new “essentialist” reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about (...)
  20. A Philosophical View on Singularity and Strong AI. Christian Hugo Hoffmann - forthcoming - AI and Society.
    More intellectual modesty, but also conceptual clarity, is urgently needed in AI, perhaps more than in many other disciplines. AI research has been marked by hype and hubris since its early beginnings in the 1950s. For instance, the Nobel laureate Herbert Simon predicted after his participation in the Dartmouth workshop that “machines will be capable, within 20 years, of doing any work that a man can do”. And in some circles expectations are still high to overblown today. This paper addresses (...)
  21. The Chess Room: Further Demythologizing of Strong AI. Roland Puccetti - 1980 - Behavioral and Brain Sciences 3 (3):441-442.
  22. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  23. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  24. In search of the moral status of AI: why sentience is a strong argument. Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  25. Medium AI and Experimental Science. Andre Kukla - 1994 - Philosophical Psychology 7 (4):493-502.
    It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium (...)
  26. Rehabilitating AI: Argument Loci and the Case for Artificial Intelligence. [REVIEW] Barbara Warnick - 2004 - Argumentation 18 (2):149-170.
    This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (...)
  27. In Search of the Moral Status of AI: Why Sentience is a Strong Argument. Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
  28. Representation, Analytic Pragmatism and AI. Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 161-169.
    Our contribution aims at individuating a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
  29. Embodied AI Beyond Embodied Cognition and Enactivism. Riccardo Manzotti - 2019 - Philosophies 4 (3):39.
    Over the last three decades, the rise of embodied cognition articulated in various schools of embodied, embedded, extended and enacted cognition has offered AI a way out of traditional computationalism—an approach loosely referred to as embodied AI. This view has split into various branches ranging from a weak form on the brink of functionalism to a strong form suggesting that body-world interactions constitute cognition. From an ontological perspective, however, constitution is a problematic notion with no obvious empirical or technical (...)
  30. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be (...)
  31. AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia. Junhua Zhu - forthcoming - AI and Society:1-14.
    Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on governance of AI. This article, therefore, provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. This article has three main objectives: to identify the most discussed ethical issues of AI in Chinese academia and those being left out; (...)
  32. AI as a Boss? A National US Survey of Predispositions Governing Comfort with Expanded AI Roles in Society. Kate K. Mays, Yiming Lei, Rebecca Giovanetti & James E. Katz - 2022 - AI and Society 37 (4):1587-1600.
    People’s comfort with and acceptability of artificial intelligence (AI) instantiations is a topic that has received little systematic study. This is surprising given the topic’s relevance to the design, deployment and even regulation of AI systems. To help fill in our knowledge base, we conducted mixed-methods analysis based on a survey of a representative sample of the US population (N = 2254). Results show that there are two distinct social dimensions to comfort with AI: as a peer and as a (...)
  33. Saliva Ontology: An Ontology-Based Framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
  34. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  35. Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”. Blake H. Dournaee - 2010 - Minds and Machines 20 (2):303-309.
    In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers’s hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of (...)
  36. Strong Determinism vs. Computability. Cristian Calude, Douglas Campbell, Karl Svozil & Doru Ştefănescu - 1995 - Vienna Circle Institute Yearbook 3:115-131.
    Penrose [40] has discussed a new point of view concerning the nature of physics that might underlie conscious thought processes. He has argued that it might be the case that some physical laws are not computable, i.e. they cannot be properly simulated by computer; such laws can most probably arise on the “no-man's-land” between classical and quantum physics. Furthermore, conscious thinking is a non-algorithmic activity. He is opposing both strong AI and Searle's [47] contrary viewpoint (...)
  37. Dancing with Pixies: Strong Artificial Intelligence and Panpsychism. John Mark Bishop - 2002 - In John M. Preston & John Mark Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. pp. 360-379.
    The argument presented in this paper is not a direct attack or defence of the Chinese Room Argument (CRA), but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA’s critique of the link between syntax and semantics, this paper will explore the associated link between syntax and physics. The main (...)
  38. An Investigation Into the Effects of Destination Sensory Experiences at Visitors’ Digital Engagement: Empirical Evidence From Sanya, China. Jin Ai, Ling Yan, Yubei Hu & Yue Liu - 2022 - Frontiers in Psychology 13.
    This study investigates the mechanism of how sensory experiences influence visitors’ digital engagement with a destination through establishing a strong bond and identification between a destination and tourist utilizing a two-step process. First, visitors’ sensory experiences in a destination are identified through a content analysis of online review comments posted by visitors. Afterward, the effects of those sensory experiences on visitors’ digital engagement through destination dependence and identification with that destination are examined. Findings suggest that sensory experiences are critical (...)
  39. A Finite Model Property for RMImin. Ai-ni Hsieh & James G. Raftery - 2006 - Mathematical Logic Quarterly 52 (6):602-612.
    It is proved that the variety of relevant disjunction lattices has the finite embeddability property. It follows that Avron's relevance logic RMImin has a strong form of the finite model property, so it has a solvable deducibility problem. This strengthens Avron's result that RMImin is decidable.
  40. The Cart Project: A Personal History, a Plea for Help and a Proposal. Hans Moravec, Stanford AI Lab, May - unknown
    This is a proposal for the re-activation of the essentially stillborn automatic car project for which the cart was originally obtained, and presents a process through which this activation could be accomplished painlessly. The project would be financed from the lab's operating grant, and would interact strongly with, while being independent of, any Mars rover research initiated by Lynn Quam. Since I seem to be the only one, apart from John McCarthy, with an active interest in this aspect of things, (...)
  41. The Thailand National AI Ethics Guideline: An Analysis. Soraj Hongladarom - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):480-491.
    Purpose: The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case to see how a policy document such as a guideline in AI ethics becomes part of the transformations. Looking at how the two are interrelated will help illuminate the political and cultural dynamics of Thailand as well as how governance of ethics itself is conceptualized. Design/methodology/approach: The author looks at the (...)
  42. Games Between Humans and AIs. Stephen J. DeCanio - 2018 - AI and Society 33 (4):557-564.
    Various potential strategic interactions between a “strong” artificial intelligence and humans are analyzed using simple 2 × 2 order games, drawing on the New Periodic Table of those games developed by Robinson and Goforth. Strong risk aversion on the part of the human player leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI-human games have multiple equilibria, and in (...)
  43. Problems with “Friendly AI”. Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AIs in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI, taking Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  44. Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review. Robyn Clay-Williams, Elizabeth Austin & Magali Goirand - 2021 - Science and Engineering Ethics 27 (5):1-53.
    A number of artificial intelligence (AI) ethics frameworks have been published in the last 6 years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based healthcare applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. AI (...)
  45. The Cognitive Phenomenology Argument for Disembodied AI Consciousness. Cody Turner - 2020 - In Steven Gouveia (ed.), The Age of Artificial Intelligence: An Exploration. Wilmington, DE: Vernon Press. pp. 111-132.
    In this chapter I offer two novel arguments for what I call strong primitivism about cognitive phenomenology, the thesis that there exists a phenomenology of cognition that is neither reducible to, nor dependent upon, sensory phenomenology. I then contend that strong primitivism implies that phenomenal consciousness does not require sensory processing. This latter contention has implications for the philosophy of artificial intelligence. For if sensory processing is not a necessary condition for phenomenal consciousness, then it plausibly follows that (...)
  46. Hubert L. Dreyfus’s Critique of Classical AI and its Rationalist Assumptions. Setargew Kenaw - 2008 - Minds and Machines 18 (2):227-238.
    This paper deals with the rationalist assumptions behind research in artificial intelligence (AI), on the basis of Hubert Dreyfus’s critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed “classical AI,” assume that creating a thinking machine like the human brain is not a far-off project, because they believe that human intelligence works on the basis (...)
  47. Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act (...)
  48. Before and Beyond Trust: Reliance in Medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2022 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust (...)
  49. A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. 2018, 689–707). There (...)
  50. Multi Scale Ethics—Why We Need to Consider the Ethics of AI in Healthcare at Different Scales. Melanie Smallman - 2022 - Science and Engineering Ethics 28 (6):1-17.
    Many researchers have documented how AI and data-driven technologies have the potential to have profound effects on our lives—in ways that make these technologies stand out from those that went before. Around the world, we are seeing a significant growth in interest and investment in AI in healthcare. This has been coupled with rising concerns about the ethical implications of these technologies, and an array of ethical guidelines for the use of AI and data in healthcare has arisen. Nevertheless, (...)
1 — 50 / 1000