Related categories

113 found (showing 1–50)
  1. Extending Environments To Measure Self-Reflection In Reinforcement Learning.Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - manuscript
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
  2. A Statistical Learning Approach to a Problem of Induction.Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
  3. Can Reinforcement Learning Learn Itself? A Reply to 'Reward is Enough'.Samuel Allen Alexander - forthcoming - CIFMA 2021.
    In their paper 'Reward is enough', Silver et al conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  4. A Falsificationist Account of Artificial Neural Networks.Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past (...)
  5. Understanding, Idealization, and Explainable AI.Will Fleisher - forthcoming - Episteme:1-27.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  6. Autognorics Approach to the Problem of Defining Life and Artificial Intelligence.Joey Lawsin - forthcoming
    Many thinkers, past and present, have tried to solve the underlying mystery of Life. Yet no one has ever categorically expressed its exact concrete essence, scope, or meaning until a new school of thought known as Originemology was conceptualized in 1988 by Joey Lawsin. Life and consciousness cannot be explained properly since their theoretical and philosophical bases are wrong. When the bases are incorrect, the outcomes are incorrect. The words associated with life such as alive, aware, conscious, intelligent, and (...)
  7. Does Artificial Intelligence Use Private Language?Ryan Miller - forthcoming - In Proceedings of the International Ludwig Wittgenstein Symposium 2021.
    Wittgenstein’s Private Language Argument holds that language requires rule-following, rule-following requires the possibility of error, error is precluded in pure introspection, and inner mental life is known only by pure introspection; thus language cannot exist entirely within inner mental life. Fodor defends his Language of Thought program against the Private Language Argument with a dilemma: either privacy is so narrow that internal mental life can be known outside of introspection, or so broad that computer language serves as a counter-example. (...)
  8. Human Induction in Machine Learning: A Survey of the Nexus.Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
  9. A Note on the Learning-Theoretic Characterizations of Randomness and Convergence.Tomasz Steifer - forthcoming - Review of Symbolic Logic:1-15.
    Recently, a connection has been established between two branches of computability theory, namely between algorithmic randomness and algorithmic learning theory. Learning-theoretical characterizations of several notions of randomness were discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving the problem posed by (Zaffora Blando, Rev. Symb. Log. 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random (...)
  10. On Explaining the Success of Induction.Tom F. Sterkenburg - forthcoming - British Journal for the Philosophy of Science.
    Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations is not justified by Schurz's argument.
  11. Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - forthcoming - Philosophy of Science:1-13.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  12. Pseudo-Visibility: A Game Mechanic Involving Willful Ignorance.Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
  13. Two Dimensions of Opacity and the Deep Learning Predicament.Florian J. Boge - 2022 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  14. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine.Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The explainability deficit of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modelling of a phenomenon (...)
  15. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics.Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
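The fuzzy-cognitive-map machinery the entry above relies on can be illustrated in a few lines: concepts hold activation values in (0, 1), a weight matrix encodes signed causal influence, and the map is iterated through a squashing function until it settles. The concept names, weights, and initial activations below are invented for illustration; Hein et al.'s actual model and its genetic-algorithm weight optimization are not reproduced here.

```python
import math

def fcm_step(state, weights):
    """One synchronous update of a fuzzy cognitive map: each concept keeps
    its previous activation, adds the weighted influence of every concept,
    and is squashed back into (0, 1) by a sigmoid."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-(state[j] + sum(state[i] * weights[i][j]
                                                   for i in range(n)))))
            for j in range(n)]

def run_fcm(state, weights, max_steps=100, tol=1e-6):
    """Iterate until the activations stop changing (a fixed point)."""
    for _ in range(max_steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Hypothetical 3-concept map: [beneficence, non-maleficence, start treatment]
weights = [
    [0.0, 0.0,  0.8],   # beneficence pushes toward treatment
    [0.0, 0.0, -0.6],   # non-maleficence pushes against it
    [0.0, 0.0,  0.0],   # the decision concept influences nothing
]
final = run_fcm([0.9, 0.4, 0.5], weights)
print(round(final[2], 3))  # settled activation of the treatment concept
```

With the stronger positive edge, the treatment concept settles above 0.5; flipping the sign or magnitude of the edges shifts the recommendation, which is what tuning the weights (here, by hand; in the paper, by a genetic algorithm) amounts to.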
  16. The Deskilling of Teaching and the Case for Intelligent Tutoring Systems.James Hughes - 2022 - Journal of Ethics and Emerging Technologies 31 (2):1-16.
    This essay describes trends in the organization of work that have laid the groundwork for the adoption of interactive AI-driven instruction tools, and the technological innovations that will make intelligent tutoring systems truly competitive with human teachers. Since the origin of occupational specialization, the collection and transmission of knowledge have been tied to individual careers and job roles, specifically doctors, teachers, clergy, and lawyers, the paradigmatic knowledge professionals. But these roles have also been tied to texts and organizations that can (...)
  17. AI Powered Anti-Cyber Bullying System Using Machine Learning Algorithm of Multinomial Naïve Bayes and Optimized Linear Support Vector Machine.Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 5.
    “Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue.” ~ Anna Maria Chavez. There has been a series of research efforts on cyber bullying which have been unable to provide a reliable solution to cyber bullying. In this research work, we were able to provide a permanent solution to this by developing a model capable of detecting and intercepting bullying incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation (...)
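The first classifier named in the title above, multinomial naive Bayes over message text, can be sketched from first principles: count word frequencies per class, apply Laplace smoothing, and score a new message by summed log-probabilities. The toy messages and labels below are invented; the paper's actual dataset, feature pipeline, and optimized linear SVM component are not reproduced.

```python
import math
from collections import Counter

def train_mnb(docs, labels, alpha=1.0):
    """Fit multinomial naive Bayes with Laplace (add-alpha) smoothing.
    Returns log-priors and per-class log-likelihoods over the vocabulary."""
    classes = set(labels)
    vocab = {w for d in docs for w in d.split()}
    counts = {c: Counter() for c in classes}
    n_class = Counter(labels)
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    log_prior = {c: math.log(n_class[c] / len(docs)) for c in classes}
    log_like = {}
    for c in classes:
        total = sum(counts[c].values()) + alpha * len(vocab)
        log_like[c] = {w: math.log((counts[c][w] + alpha) / total)
                       for w in vocab}
        log_like[c]["__unk__"] = math.log(alpha / total)  # unseen words
    return log_prior, log_like

def predict(msg, log_prior, log_like):
    """Pick the class with the highest posterior log-score."""
    scores = {c: log_prior[c] + sum(
                  log_like[c].get(w, log_like[c]["__unk__"])
                  for w in msg.split())
              for c in log_prior}
    return max(scores, key=scores.get)

docs = ["you are stupid and ugly", "nobody likes you loser",
        "see you at practice tomorrow", "great job on the test"]
labels = ["bully", "bully", "ok", "ok"]
model = train_mnb(docs, labels)
print(predict("you stupid loser", *model))  # → bully
```

A deployed system like the one described would train on a large labeled corpus and could route messages the classifier flags to an interception step; the 92% accuracy figure in the abstract refers to the authors' own evaluation, not to this sketch.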
  18. Implementation of Data Mining on a Secure Cloud Computing Over a Web API Using Supervised Machine Learning Algorithm.Tosin Ige - 2022 - International Journal of Advanced Computer Science and Applications 13 (5):1 - 4.
    Ever since the era of the internet ushered in cloud computing, there has been an increase in the demand for the unlimited data available through cloud computing for data analysis, pattern recognition and technology advancement. With this also come the problems of scalability, efficiency and security threats. This research paper focuses on how data can be dynamically mined in real time for pattern detection in a secure cloud computing environment using a combination of the decision tree algorithm and Random Forest over a restful (...)
  19. Stimuli-Based Control of Negative Emotions in a Digital Learning Environment.Rossitza Kaltenborn, Mincho Hadjiski & Stefan Koynov - 2022 - In V. Sgurev, V. Jotsov & J. Kacprzyk (eds.), Advances in Intelligent Systems Research and Innovation. Cambridge, United Kingdom:
    The proposed system for coping with negative emotions arising during the learning process is considered as an embedded part of the complex intelligent learning system realized in a digital environment. By applying data-driven procedures to current and retrospective data, the main didactic-based stimuli provoking emotion generation are identified. They are examined as dominant negative emotions in the context of learning. Due to the presence of strong internal and output interconnections between teaching and emotional states, an intelligent decoupling multidimensional control scheme (...)
  20. Big Data and Artificial Intelligence Based on Personalized Learning – Conformity with Whitehead’s Organismic Theory.Rossitza Kaltenborn & Mintcho Hadjiski - 2022 - In F. Riffert & V. Petrov (eds.), Education and Learning in a World of Accelerated Knowledge Growth: Current Trends in Process Thought. Cambridge, United Kingdom:
    The study shows the existence of a broad conformity between Whitehead’s organismic cosmology and the contemporary theory of complex systems at a relevant level of abstraction. One of the most promising directions of educational transformation in the age of big data and artificial intelligence – personalized learning – is conceived as a system of systems and reveals its close congruence with a number of basic Whiteheadian concepts. A new functional structure of personalized learning systems is proposed, including all the core (...)
  21. Philosophical Foundations of Intelligence Collection and Analysis: A Defense of Ontological Realism.William Mandrick & Barry Smith - 2022 - Intelligence and National Security 38.
    There is a common misconception across the Intelligence Community (IC) to the effect that information trapped within multiple heterogeneous data silos can be semantically integrated by the sorts of meaning-blind statistical methods employed in much of artificial intelligence (AI) and natural language processing (NLP). This leads to the misconception that incoming data can be analysed coherently by relying exclusively on the use of statistical algorithms and thus without any shared framework for classifying what the data are about. Unfortunately, such approaches (...)
  22. Calculating the Mind-Change Complexity of Learning Algebraic Structures.Luca San Mauro, Nikolay Bazhenov & Vittorio Cipriani - 2022 - In Ulrich Berger, Johanna N. Y. Franklin, Florin Manea & Arno Pauly (eds.), Revolutions and Revelations in Computability. Cham, Switzerland: pp. 1-12.
    This paper studies algorithmic learning theory applied to algebraic structures. In previous papers, we have defined our framework, where a learner, given a family of structures, receives larger and larger pieces of an arbitrary copy of a structure in the family and, at each stage, is required to output a conjecture about the isomorphism type of such a structure. The learning is successful if there is a learner that eventually stabilizes to a correct conjecture. Here, we analyze the number of (...)
  23. Clinical Ethics – To Compute, or Not to Compute?Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (12):W1-W4.
    Can machine intelligence do clinical ethics? And if so, would applying it to actual medical cases be desirable? In a recent target article (Meier et al. 2022), we described the piloting of our advi...
  24. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
  25. Model-Induced Escape.Barry Smith - 2022 - Facing the Future, Facing the Screen: 10th Budapest Visual Learning Conference.
    Nyíri writes a paper demonstrating convincingly that there are strong signals of a conservative strain of thought in the writings of Wittgenstein. This has initially a tiny effect; but then a more significant effect sets in as the authors of Wittgenstein secondary literature, consciously or unconsciously, draw attention to features of Wittgenstein which cast the conservatism thesis in a negative light. So. There is no such thing as email spam. Rather there is a flow of constantly mutating patterns. (...)
  26. On Characterizations of Learnability with Computable Learners.Tom F. Sterkenburg - 2022 - Proceedings of Machine Learning Research 178:3365-3379.
    We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability. We give a simple general (...)
  27. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  28. Correlation Isn’t Good Enough: Causal Explanation and Big Data. [REVIEW]Frank Cabrera - 2021 - Metascience 30 (2):335-338.
    A review of Gary Smith and Jay Cordes: The Phantom Pattern Problem: The Mirage of Big Data. New York: Oxford University Press, 2020.
  29. Making AI Intelligible: Philosophical Foundations.Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  30. Towards Knowledge-Driven Distillation and Explanation of Black-Box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Proceedings of the Workshop on Data meets Applied Ontologies in Explainable {AI} {(DAO-XAI} 2021) part of Bratislava Knowledge September {(BAKS} 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
  31. Fair Machine Learning Under Partial Compliance.Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? (...)
  32. Microethics for Healthcare Data Science: Attention to Capabilities in Sociotechnical Systems.Mark Graves & Emanuele Ratti - 2021 - The Future of Science and Ethics 6:64-73.
    It has been argued that ethical frameworks for data science often fail to foster ethical behavior, and they can be difficult to implement due to their vague and ambiguous nature. In order to overcome these limitations of current ethical frameworks, we propose to integrate the analysis of the connections between technical choices and sociocultural factors into the data science process, and show how these connections have consequences for what data subjects can do, accomplish, and be. Using healthcare as an example, (...)
  33. Exploring Machine Learning Techniques for Coronary Heart Disease Prediction.Hisham Khdair - 2021 - International Journal of Advanced Computer Science and Applications 12 (5):28-36.
    Coronary Heart Disease (CHD) is one of the leading causes of death nowadays. Prediction of the disease at an early stage is crucial for many health care providers to protect their patients and save lives and costly hospitalization resources. The use of machine learning in the prediction of serious disease events using routine medical records has been successful in recent years. In this paper, a comparative analysis of different machine learning techniques that can accurately predict the occurrence of CHD events (...)
  34. Tecno-especies: la humanidad que se hace a sí misma y los desechables.Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  35. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  36. On the Turing Complexity of Learning Finite Families of Algebraic Structures.Luca San Mauro & Nikolay Bazhenov - 2021 - Journal of Logic and Computation 7 (31):1891-1900.
    In previous work, we have combined computable structure theory and algorithmic learning theory to study which families of algebraic structures are learnable in the limit (up to isomorphism). In this paper, we measure the computational power that is needed to learn finite families of structures. In particular, we prove that, if a family of structures is both finite and learnable, then any oracle which computes the Halting set is able to achieve such a learning. On the other hand, we construct (...)
  37. Humanistic Interpretation and Machine Learning.Juho Paakkonen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate the (...)
  38. Healthcare and Anomaly Detection: Using Machine Learning to Predict Anomalies in Heart Rate Data.Edin Šabić, David Keeley, Bailey Henderson & Sara Nannemann - 2021 - AI and Society 36 (1):149-158.
    The application of machine learning algorithms to healthcare data can enhance patient care while also reducing healthcare worker cognitive load. These algorithms can be used to detect anomalous physiological readings, potentially leading to expedited emergency response or new knowledge about the development of a health condition. However, while there has been much research conducted in assessing the performance of anomaly detection algorithms on well-known public datasets, there is less conceptual comparison across unsupervised and supervised performance on physiological data. Moreover, while (...)
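As a minimal illustration of the unsupervised side of the comparison in the entry above, a rolling z-score flags readings that deviate sharply from the recent baseline. The window size, threshold, and simulated readings below are arbitrary choices for the sketch, not those of Šabić et al.

```python
import statistics

def zscore_anomalies(readings, window=10, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Simulated resting heart rate (bpm) with one anomalous spike
hr = [72, 74, 71, 73, 75, 72, 74, 73, 72, 74, 73, 72, 140, 74, 73]
print(zscore_anomalies(hr))  # → [12], the index of the 140 bpm spike
```

This unsupervised baseline needs no labeled anomalies, which is exactly the design trade-off the paper examines against supervised detectors trained on annotated physiological data.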
  39. Predicting Me: The Route to Digital Immortality?Paul Smart - 2021 - In Robert W. Clowes, Klaus Gärtner & Inês Hipólito (eds.), The Mind-Technology Problem: Investigating Minds, Selves and 21st Century Artefacts. Cham, Switzerland: Springer. pp. 185–207.
    An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system that relies on generative models to predict the structure of sensory information. Such a view resonates with a body of work in machine learning that has explored the problem-solving capabilities of hierarchically-organized, multi-layer (i.e., deep) neural networks, many of which acquire and deploy generative models of their training data. The present chapter explores the extent to which the ostensible convergence on a common neurocomputational architecture (...)
  40. The No-Free-Lunch Theorems of Supervised Learning.Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
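The core no-free-lunch observation can be checked exhaustively on a toy domain: averaged over all possible target functions, any two learners have the same off-training-set accuracy. The sketch below (illustrative, not from the paper) enumerates every boolean labelling of a four-point domain.

```python
import itertools

# Toy no-free-lunch check: average off-training-set accuracy over ALL target
# functions f: {0,1,2,3} -> {0,1}, training on {0,1} and testing on {2,3}.
domain = [0, 1, 2, 3]
train, test = [0, 1], [2, 3]

def always_zero(f_train):
    return lambda x: 0

def copy_majority(f_train):
    # Predicts the majority label seen in training (ties broken toward 0).
    maj = int(sum(f_train.values()) * 2 > len(f_train))
    return lambda x: maj

def avg_test_accuracy(learner):
    accs = []
    for labels in itertools.product([0, 1], repeat=len(domain)):
        f = dict(zip(domain, labels))
        h = learner({x: f[x] for x in train})
        accs.append(sum(h(x) == f[x] for x in test) / len(test))
    return sum(accs) / len(accs)

print(avg_test_accuracy(always_zero))    # → 0.5
print(avg_test_accuracy(copy_majority))  # → 0.5
```

Both learners average exactly 0.5, whatever bias they embody; the paper's point is that this uniform average presupposes treating the learner as purely data-driven.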
  41. Ethical Implications of Alzheimer’s Disease Prediction in Asymptomatic Individuals Through Artificial Intelligence. Frank Ursin, Cristian Timmermann & Florian Steger - 2021 - Diagnostics 11 (3):440.
    Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to the early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward (...)
  42. Discounting Desirable Gambles. Gregory Wheeler - 2021 - Proceedings of Machine Learning Research 147:331-341.
    The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and (...)
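For readers unfamiliar with lower previsions, the linear-utility setting the paper starts from can be sketched in a few lines: a lower prevision assigns each gamble its worst-case expectation over a credal set of probability mass functions. The credal set and gamble below are illustrative assumptions, not from the paper.

```python
# A lower prevision as a lower expectation over a credal set
# (a set of probability mass functions on a finite possibility space).
credal_set = [
    {"heads": 0.4, "tails": 0.6},
    {"heads": 0.6, "tails": 0.4},
]

def expectation(p, gamble):
    return sum(p[w] * gamble[w] for w in p)

def lower_prevision(gamble):
    return min(expectation(p, gamble) for p in credal_set)

gamble = {"heads": 1.0, "tails": -1.0}  # win 1 unit on heads, lose 1 on tails
print(lower_prevision(gamble))  # ≈ -0.2: the gamble's guaranteed price
```

Note the linearity: utilities enter only through weighted sums. Relaxing that additive structure is exactly the move the paper investigates.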
  43. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2021 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  44. The Archimedean Trap: Why Traditional Reinforcement Learning Will Probably Not Yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
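The Archimedean property at issue can be made concrete: for positive reals x and y there is always an n with n·x > y, while a lexicographically ordered structure has elements no multiple can overtake. The sketch below (an illustration under assumed encodings, not the paper's formalism) searches for such a witness up to a cap.

```python
# Archimedean check: given positive x and y, find n with n*x > y.
# Succeeds for reals; for lexicographically ordered pairs (a standard
# non-Archimedean order) no witness exists, so the capped search fails.

def archimedean_n(x, y, scale, greater, cap=10**5):
    for n in range(1, cap):
        if greater(scale(n, x), y):
            return n
    return None  # no witness up to cap: evidence of a non-Archimedean order

# Reals: 4 * 3 = 12 > 10.
print(archimedean_n(3, 10, lambda n, x: n * x, lambda a, b: a > b))  # → 4

# Lexicographic pairs: (0, 1) scaled by any n is (0, n), never above (1, 0).
print(archimedean_n((0, 1), (1, 0),
                    lambda n, x: (n * x[0], n * x[1]),
                    lambda a, b: a > b))  # → None
```

Real-valued rewards behave like the first case; the paper's argument is that reward structures of the second kind are invisible to them.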
  45. Genealogy of Algorithms: Datafication as Transvaluation. Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration in (...)
  46. Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  47. On Uniform Definability of Types Over Finite Sets for NIP Formulas. Shlomo Eshel & Itay Kaplan - 2020 - Journal of Mathematical Logic 21 (3).
    Combining two results from machine learning theory we prove that a formula is NIP if and only if it satisfies uniform definability of types over finite sets. This settles a conjecture of La...
  48. Perceptron Connectives in Knowledge Representation. Pietro Galliani, Guendalina Righetti, Daniele Porello, Oliver Kutz & Nicolas Toquard - 2020 - In Knowledge Engineering and Knowledge Management - 22nd International Conference, EKAW 2020, Bolzano, Italy, September 16-20, 2020, Proceedings. Lecture Notes in Computer Science 12387. pp. 183-193.
    We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are (...)
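The idea of a perceptron (threshold) connective can be sketched outside Description Logic syntax: an individual satisfies the combined concept iff the weighted sum of the component concepts it satisfies reaches a threshold. The concept names, weights, and threshold below are invented for illustration, not taken from the paper's Gene Ontology example.

```python
# A perceptron/threshold connective over concept memberships.
def threshold_concept(weights, threshold):
    def holds(memberships):  # memberships: dict mapping concept name -> bool
        score = sum(w for c, w in weights.items() if memberships.get(c, False))
        return score >= threshold
    return holds

# Hypothetical concept: "candidate" iff 2.0 points' worth of evidence concepts hold.
candidate = threshold_concept(
    {"BindsDNA": 1.0, "ExpressedInTissue": 1.0, "HasKnownHomolog": 0.5}, 2.0)

print(candidate({"BindsDNA": True, "ExpressedInTissue": True}))  # → True
print(candidate({"BindsDNA": True, "HasKnownHomolog": True}))    # → False
```

Because each such connective is just a linear threshold test, adding them to a logic need not make reasoning harder, which is the complexity result the paper proves.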
  49. Learning Families of Algebraic Structures From Informant. Luca San Mauro, Nikolay Bazhenov & Ekaterina Fokina - 2020 - Information and Computation 275:104590.
    We combine computable structure theory and algorithmic learning theory to study learning of families of algebraic structures. Our main result is a model-theoretic characterization of the learning type InfEx_≅, consisting of the structures whose isomorphism types can be learned in the limit. We show that a family of structures is InfEx_≅-learnable if and only if the structures can be distinguished in terms of their Σ^inf_2-theories. We apply this characterization to familiar cases and we show the following: there is an infinite (...)
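Learning in the limit from informant (labelled positive and negative data) can be illustrated on a toy family of sets standing in for isomorphism types: the learner conjectures the first hypothesis in a fixed enumeration consistent with everything seen so far, and succeeds if its guess eventually stabilizes on the target. All names below are illustrative, not the paper's.

```python
# Identification in the limit from informant, sketched for a toy family.
family = {
    "evens":   lambda n: n % 2 == 0,
    "squares": lambda n: int(n ** 0.5) ** 2 == n,
    "all":     lambda n: True,
}
enumeration = ["all", "evens", "squares"]

def learn(informant):
    guess = None
    seen = []
    for n, label in informant:  # label: whether n belongs to the target set
        seen.append((n, label))
        for name in enumeration:
            if all(family[name](m) == lab for m, lab in seen):
                guess = name
                break
    return guess  # stabilizes once the data separates the target from the rest

# Informant for the even numbers: (3, False) rules out "all".
print(learn([(0, True), (3, False), (4, True), (2, True)]))  # → evens
```

The paper's characterization says, roughly, which families admit such a stabilizing learner: exactly those whose members are separated by their Σ^inf_2-theories.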
  50. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
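The P/NP asymmetry the chapter opens with is easy to exhibit on Subset Sum: checking a proposed solution takes polynomial time, while the only known general solvers examine exponentially many candidates. A minimal sketch (illustrative instance, not from the chapter):

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check of a claimed solution (a tuple of indices)."""
    return sum(nums[i] for i in certificate) == target

def solve(nums, target):
    """Brute force over all 2^n subsets -- exponential time in len(nums)."""
    for r in range(len(nums) + 1):
        for idxs in combinations(range(len(nums)), r):
            if verify(nums, target, idxs):
                return idxs
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # → (2, 4), since 4 + 5 = 9
print(verify(nums, 9, (2, 4)))  # → True
```

Whether `solve` can in general be brought down to polynomial time is precisely the open P vs. NP question.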