Results for 'Strong AI'

997 found
  1. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    3 citations
  2. A finite model property for RMImin.Ai-ni Hsieh & James G. Raftery - 2006 - Mathematical Logic Quarterly 52 (6):602-612.
    It is proved that the variety of relevant disjunction lattices has the finite embeddability property. It follows that Avron's relevance logic RMImin has a strong form of the finite model property, so it has a solvable deducibility problem. This strengthens Avron's result that RMImin is decidable.
    3 citations
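    Note (added): the entry above moves from the finite embeddability property (FEP) to a solvable deducibility problem. As a reading aid only (not taken from the paper, and not in the authors' notation), the standard definition and the implication chain invoked can be sketched in LaTeX:

    % Hedged sketch: standard FEP definition and the implication chain the abstract invokes.
    % K, A, B, C are generic symbols, not the authors' notation.
    A class $\mathsf{K}$ of algebras has the \emph{finite embeddability property} when every
    finite partial subalgebra of a member of $\mathsf{K}$ embeds into a finite member of $\mathsf{K}$:
    \[
      \mathbf{B} \subseteq_{\text{finite, partial}} \mathbf{A} \in \mathsf{K}
      \;\Longrightarrow\;
      \exists\, \mathbf{C} \in \mathsf{K}\; \bigl( |\mathbf{C}| < \omega \ \text{and}\ \mathbf{B} \hookrightarrow \mathbf{C} \bigr).
    \]
    For a finitely axiomatized logic algebraized by $\mathsf{K}$ this yields
    \[
      \mathrm{FEP}(\mathsf{K}) \;\Rightarrow\; \text{strong finite model property} \;\Rightarrow\; \text{decidability of } \Gamma \vdash \varphi \ \text{for finite } \Gamma.
    \]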
  3. An Investigation Into the Effects of Destination Sensory Experiences at Visitors’ Digital Engagement: Empirical Evidence From Sanya, China.Jin Ai, Ling Yan, Yubei Hu & Yue Liu - 2022 - Frontiers in Psychology 13.
    This study investigates the mechanism of how sensory experiences influence visitors’ digital engagement with a destination through establishing a strong bond and identification between a destination and tourist utilizing a two-step process. First, visitors’ sensory experiences in a destination are identified through a content analysis of online review comments posted by visitors. Afterward, the effects of those sensory experiences on visitors’ digital engagement through destination dependence and identification with that destination are examined. Findings suggest that sensory experiences are critical (...)
  4. Civics and Moral Education in Singapore: lessons for citizenship education?Joy Ai - 1998 - Journal of Moral Education 27 (4):505-524.
    Civics and Moral Education was implemented as a new moral education programme in Singapore schools in 1992. This paper argues that the underlying theme is that of citizenship training and that new measures are under way to strengthen the capacity of the school system to transmit national values for economic and political socialisation. The motives and motivation for retaining a formal moral education programme have remained strong. A discussion of the structure and content of key modules in Civics and Moral (...)
  5. The Cart Project: A Personal History, a Plea for Help and a Proposal.Hans Moravec Stanford AI Lab May - unknown
    This is a proposal for the re-activation of the essentially stillborn automatic car project for which the cart was originally obtained, and presents a process through which this activation could be accomplished painlessly. The project would be financed from the lab's operating grant, and would interact strongly with, while being independent of, any Mars rover research initiated by Lynn Quam. Since I seem to be the only one, apart from John McCarthy, with an active interest in this aspect of things, (...)
  6. Weak Strong AI: An elaboration of the English Reply to the Chinese Room.Ronald L. Chrisley - unknown
    Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to (...)
  7. Searle, Strong AI, and Two Ways of Sorting Cucumbers.Karl Pfeifer - 1992 - Journal of Philosophical Research 17:347-350.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  8. Searle, strong AI, and two ways of sorting cucumbers.Karl Pfeifer - 1992 - Journal of Philosophical Research 17:347-50.
    This paper defends Searle against the misconstrual of a key claim of “Minds, Brains, and Programs” and goes on to explain why an attempt to turn the tables by using the Chinese Room to argue for intentionality in computers fails.
  9. Strong AI and the problem of “second-order” algorithms.Gerd Gigerenzer - 1990 - Behavioral and Brain Sciences 13 (4):663-664.
    1 citation
  10. "Strong AI": an Adolescent Disorder.M. Gams - 1997 - In Matjaz Gams (ed.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press. pp. 43--1.
  11. Did Searle attack strong strong AI or weak strong AI?Aaron Sloman - 1986 - In Artificial Intelligence and its Applications. Chichester.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong (...)
    1 citation
  12. Die starke KI-These [The strong AI-thesis].Stephan Zelewski - 1991 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 22 (2):337-348.
    The controversy about the strong AI-thesis was recently revived by two interrelated contributions stemming from J. R. Searle on the one hand and from P. M. and P. S. Churchland on the other hand. It is shown that the strong AI-thesis cannot be defended in the formulation used by the three authors. It violates some well-accepted criteria of scientific argumentation, especially the rejection of essentialistic definitions. Moreover, Searle's ‘proof’ is not conclusive. Though it may be reconstructed (...)
  13. The Problem of Distinction Between ‘weak AI’ and ‘strong AI’. 김진석 - 2017 - Journal of the Society of Philosophical Studies 117:111-137.
    When discussing artificial intelligence, people commonly rely on the distinction between ‘weak’ and ‘strong’ AI. The distinction is used not only to differentiate AI systems from one another, but also to differentiate AI from human beings. This can be seen in three types of views on AI: the first distinguishes the creative human mind from AI; the second takes comprehensive human capacities as the criterion of strong intelligence; and the third takes a species superior to humans as the criterion and goal of strong AI. This study examines the appropriateness and the ambiguity of the propositions and claims these views presuppose. However, this study (...)
  14. A Modal Defence of Strong AI.Steffen Borge - 2007 - In Dermot Moran & Stephen Voss (eds.), The Proceedings of the Twenty-First World Congress of Philosophy. The Philosophical Society of Turkey. pp. 127-131.
    John Searle has argued that the aim of strong AI of creating a thinking computer is misguided. Searle’s Chinese Room Argument purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself but rather the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  15. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses.Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir (...)
  16. In Defense of Strong AI.Corey Baron - 2017 - Stance 10:15-25.
    This paper argues against John Searle in defense of the potential for computers to understand language (“Strong AI”) by showing that semantic meaning is itself a second-order system of rules that connects symbols and syntax with extralinguistic facts. Searle’s Chinese Room Argument is contested on theoretical and practical grounds by identifying two problems in the thought experiment, and evidence about “machine learning” is used to demonstrate that computers are already capable of learning to form true observation sentences in the (...)
  17. A Modal Defence of Strong AI.Steffen Borge - 2007 - The Proceedings of the Twenty-First World Congress of Philosophy 6:127-131.
    John Searle has argued that the aim of strong AI to create a thinking computer is misguided. Searle's "Chinese Room Argument" purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself, but rather in the implementation of the program in some material. It does not follow by necessity from the fact that computer programs are defined syntactically that the (...)
  18. Searle on strong AI.Philip Cam - 1990 - Australasian Journal of Philosophy 68 (1):103-8.
  19. In Defense of Strong AI.Corey Baron - 2020 - Stance 10 (1):38-49.
    This paper argues against John Searle in defense of the potential for computers to understand language by showing that semantic meaning is itself a second-order system of rules that connects symbols and syntax with extralinguistic facts. Searle’s Chinese Room Argument is contested on theoretical and practical grounds by identifying two problems in the thought experiment, and evidence about “machine learning” is used to demonstrate that computers are already capable of learning to form true observation sentences in the same way humans (...)
  20. Redcar rocks: Strong AI and panpsychism.J. M. Bishop - 2000 - Consciousness and Cognition 9 (2):S35 - S35.
  21. Gödel's theorem and strong AI: Is reason blind?Burton Voorhees - 1999 - In S. Smets, J. P. Van Bendegem & G. C. Cornelis (eds.), Metadebates on Science. Vub-Press & Kluwer. pp. 6-43.
  22. Searle's abstract argument against strong AI.Andrew Melnyk - 1996 - Synthese 108 (3):391-419.
    Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest (...)
    11 citations
  23. Did Searle attack strong strong or weak strong AI?Aaron Sloman - 1986 - In A. G. Cohn & R. J. Thomas (eds.), Artificial Intelligence and its Applications. John Wiley and Sons.
    John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong (...)
    2 citations
  24. Consciousness as computation: A defense of strong AI based on quantum-state functionalism.R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences (...)
  25. A philosophical view on singularity and strong AI.Christian Hugo Hoffmann - forthcoming - AI and Society:1-18.
    More intellectual modesty, but also more conceptual clarity, is urgently needed in AI, perhaps more than in many other disciplines. AI research has been marked by hype and hubris since its early beginnings in the 1950s. For instance, the Nobel laureate Herbert Simon predicted after his participation in the Dartmouth workshop that “machines will be capable, within 20 years, of doing any work that a man can do”. And in some circles expectations today are still high, even overblown. This paper addresses (...)
  26. Searle's misunderstandings of functionalism and strong AI.Georges Rey - 2003 - In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. pp. 201--225.
  27. The chess room: further demythologizing of strong AI.Roland Puccetti - 1980 - Behavioral and Brain Sciences 3 (3):441-442.
  28. The chinese room argument reconsidered: Essentialism, indeterminacy, and strong AI. [REVIEW]Jerome C. Wakefield - 2003 - Minds and Machines 13 (2):285-319.
    I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new “essentialist” reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about (...)
    3 citations
  29. In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with (...)
    10 citations
  30. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    15 citations
  31. In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
    11 citations
  32. Embodied AI beyond Embodied Cognition and Enactivism.Riccardo Manzotti - 2019 - Philosophies 4 (3):39.
    Over the last three decades, the rise of embodied cognition (EC) articulated in various schools (or versions) of embodied, embedded, extended and enacted cognition (Gallagher’s 4E) has offered AI a way out of traditional computationalism—an approach (or an understanding) loosely referred to as embodied AI. This view has split into various branches ranging from a weak form on the brink of functionalism (loosely represented by Clark’s parity principle) to a strong form (often corresponding to autopoietic-friendly enactivism) suggesting that body–world (...)
    2 citations
  33. AI-Completeness: Using Deep Learning to Eliminate the Human Factor.Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
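    Note (added): the abstract above contrasts P (problems solvable in polynomial time) with NP (problems whose proposed solutions can be verified in polynomial time). The Python sketch below illustrates that contrast with SUBSET-SUM, chosen here only for concreteness; it is not taken from the chapter, and every name in it is hypothetical.

# Illustrative sketch of the P-vs-NP contrast via SUBSET-SUM (not from the chapter).
from itertools import combinations

def verify_certificate(nums, target, subset):
    """Verification (the NP side): checking a proposed certificate is cheap,
    taking time polynomial in the size of the input."""
    pool = list(nums)
    for x in subset:
        if x in pool:
            pool.remove(x)   # each certificate element must actually come from nums
        else:
            return False
    return sum(subset) == target

def find_certificate(nums, target):
    """Search (the open question): no polynomial-time algorithm is known;
    this brute-force loop inspects up to 2**len(nums) subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

if __name__ == "__main__":
    nums, target = [3, 9, 8, 4, 5, 7], 15
    witness = find_certificate(nums, target)                    # exponential-time search
    print(witness, verify_certificate(nums, target, witness))   # fast, polynomial check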
  34. AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up.Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  35. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be (...)
    7 citations
  36. AI as a boss? A national US survey of predispositions governing comfort with expanded AI roles in society.Kate K. Mays, Yiming Lei, Rebecca Giovanetti & James E. Katz - 2022 - AI and Society 37 (4):1587-1600.
    People’s comfort with and acceptability of artificial intelligence (AI) instantiations is a topic that has received little systematic study. This is surprising given the topic’s relevance to the design, deployment and even regulation of AI systems. To help fill in our knowledge base, we conducted mixed-methods analysis based on a survey of a representative sample of the US population (N = 2254). Results show that there are two distinct social dimensions to comfort with AI: as a peer and as a (...)
    1 citation
  37. Zašto AI-umjetnost nije umjetnost – heideggerijanska kritika [Why AI Art Is Not Art – A Heideggerian Critique].Karl Kraatz & Shi-Ting Xie - 2023 - Synthesis Philosophica 38 (2):235-253.
    AI’s new ability to create artworks is seen as a major challenge to today’s understanding of art. There is a strong tension between people who predict that AI will replace artists and critics who claim that AI art will never be art. Furthermore, recent studies have documented a negative bias towards AI art. This paper provides a philosophical explanation for this negative bias, based on our shared understanding of the ontological differences between objects. We argue that our perception of (...)
  38. Medium AI and experimental science.Andre Kukla - 1994 - Philosophical Psychology 7 (4):493-5012.
    It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium (...)
  39. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    5 citations
  40. The use of AI in legal systems: determining independent contractor vs. employee status.Maxime C. Cohen, Samuel Dahan, Warut Khern-Am-Nuai, Hajime Shimao & Jonathan Touboul - forthcoming - Artificial Intelligence and Law:1-30.
    The use of artificial intelligence (AI) to aid legal decision making has become prominent. This paper investigates the use of AI in a critical issue in employment law, the determination of a worker’s status—employee vs. independent contractor—in two common law countries (the U.S. and Canada). This legal question has been a contentious labor issue insofar as independent contractors are not eligible for the same benefits as employees. It has become an important societal issue due to the ubiquity of the gig (...)
    2 citations
  41. Comments on “The Replication of the Hard Problem of Consciousness in AI and Bio-AI”.Blake H. Dournaee - 2010 - Minds and Machines 20 (2):303-309.
    In their joint paper entitled The Replication of the Hard Problem of Consciousness in AI and Bio-AI (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers’s hard problem (we will abbreviate the hard problem of consciousness as H-consciousness). The claim is that if we knew the inner workings of (...)
  42. Strong Determinism vs. Computability.Cristian Calude, Douglas Campbell, Karl Svozil & Doru Ştefănescu - 1995 - Vienna Circle Institute Yearbook 3:115-131.
    Penrose [40] has discussed a new point of view concerning the nature of physics that might underlie conscious thought processes. He has argued that it might be the case that some physical laws are not computable, i.e. they cannot be properly simulated by computer; such laws can most probably arise on the “no-man’s-land” between classical and quantum physics. Furthermore, conscious thinking is a non-algorithmic activity. He is opposing both strong AI, and Searle’s [47] contrary viewpoint (... mathematical “laws”).
    1 citation
  43. AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia.Junhua Zhu - forthcoming - AI and Society:1-14.
    Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on the governance of AI. This article, therefore, provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. The article has three main objectives: to identify the most discussed ethical issues of AI in Chinese academia and those being left out; (...)
  44. Superhuman AI.Gabriele Gramelsberger - 2023 - Philosophisches Jahrbuch 130 (2):81-91.
    The modern program of operationalizing the mind, from Descartes to Kant, in the form of the externalization of human mind functions in logic and calculations, and its continuation in the program of formalization from the middle of the 19th century with Boole, Peirce and Turing, have led to the form of rationality that has become machine rationality: the digital computer as a logical-mathematical machine and algorithms as machine-rational interpretations of human thinking in the form of problem solving and decision making. (...)
  45. Representation, Analytic Pragmatism and AI.Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 161-169.
    Our contribution aims at individuating a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
  46. The Philosophy of AI and Its Critique.James H. Fetzer - 2004 - In Luciano Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information. Oxford, UK: Blackwell. pp. 117–134.
    The prelims comprise: Historical Background; The Turing Test; Physical Machines; Symbol Systems; The Chinese Room; Weak AI; Strong AI; Folk Psychology; Eliminative Materialism; Processing Syntax; Semantic Engines; The Language of Thought; Formal Systems; Mental Propensities; The Frame Problem; Minds and Brains; Semiotic Systems; Critical Differences; The Hermeneutic Critique; Conventions and Communication; Other Minds; Intelligent Machines.
  47. Rehabilitating AI: Argument loci and the case for artificial intelligence. [REVIEW]Barbara Warnick - 2004 - Argumentation 18 (2):149-170.
    This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (...)
    2 citations
  48. The Thailand national AI ethics guideline: an analysis.Soraj Hongladarom - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):480-491.
    Purpose: The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case for seeing how a policy document such as a guideline in AI ethics becomes part of these transformations. Looking at how the two are interrelated helps illuminate the political and cultural dynamics of Thailand, as well as how the governance of ethics itself is conceptualized. Design/methodology/approach: The author looks at the (...)
    2 citations
  49. Dancing with pixies: strong artificial intelligence and panpsychism.John Mark Bishop - 2002 - In John M. Preston & John Mark Bishop (eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. pp. 360-379.
    The argument presented in this paper is not a direct attack or defence of the Chinese Room Argument (CRA), but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics. However, in contrast to the CRA’s critique of the link between syntax and semantics, this paper will explore the associated link between syntax and physics. The main (...)
    3 citations
  50. Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’: AIs in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI, taking Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
    3 citations
1 — 50 / 997