Book Description (Blurb): Cognitive Design for Artificial Minds explains the crucial role that human cognition research plays in the design and realization of artificial intelligence systems, illustrating the steps necessary for the design of artificial models of cognition. It bridges the gap between the theoretical, experimental and technological issues addressed in the context of AI of cognitive inspiration and computational cognitive science.

Beginning with an overview of the historical, methodological and technical issues in the field of Cognitively-Inspired Artificial Intelligence, Lieto illustrates how the cognitive design approach has an important role to play in the development of intelligent AI technologies and plausible computational models of cognition. Introducing a unique perspective that draws upon Cybernetics and early AI principles, Lieto emphasizes the need for an equivalence between cognitive processes and implemented AI procedures, in order to realise biologically and cognitively inspired artificial minds. He also introduces the Minimal Cognitive Grid, a pragmatic method to rank the different degrees of biological and cognitive accuracy of artificial systems, in order to project and predict their explanatory power with respect to the natural systems taken as sources of inspiration.

Providing a comprehensive overview of cognitive design principles in constructing artificial minds, this text will be essential reading for students and researchers of artificial intelligence and cognitive science.
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely: the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of the comparison between the CAs' knowledge representation and processing mechanisms and those executed by humans in their everyday activities. In the final part of the paper further directions of research are explored, trying to address current limitations and future challenges.
In this article we present an advanced version of Dual-PECCS, a cognitively-inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets of the dual process theory of reasoning. From a representational perspective, on the other hand, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a task of conceptual categorization where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial for improving the representational and reasoning conceptual capabilities of standard cognitive artificial systems and, in addition, that it may plausibly be applied to different general computational models of cognition. The current version of the system extends our previous work, in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, which implement different assumptions on the underlying invariant structures governing human cognition. Such integration allowed us to extend our previous evaluation.
We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of the combination of prototypical concepts. The proposed logic relies on the logic of typicality ALC + TR, whose semantics is based on the notion of rational closure, as well as on the distributed semantics of probabilistic Description Logics, and is equipped with a cognitive heuristic used by humans for concept composition. We first extend the logic of typicality ALC + TR by typicality inclusions of the form p :: T(C) ⊑ D, whose intuitive meaning is that "we believe with degree p that typical Cs are Ds". As in the distributed semantics, we define different scenarios containing only some typicality inclusions, each one having a suitable probability. We then exploit such scenarios in order to ascribe typical properties to a concept C obtained as the combination of two prototypical concepts. We also show that reasoning in the proposed Description Logic is EXPTIME-complete, as it is for the underlying standard Description Logic ALC.
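The scenario construction described in this abstract can be illustrated with a toy computation (a sketch of the distributed-semantics idea, not the authors' implementation; the example inclusions for a hypothetical "Pet Fish" combination are invented for illustration): each probabilistic typicality inclusion is independently kept or dropped, yielding 2^n scenarios whose probabilities multiply and sum to 1.

```python
from itertools import product

def scenario_probabilities(inclusions):
    """Enumerate all scenarios over probabilistic typicality inclusions,
    distributed-semantics style: each inclusion p :: T(C) <= D is
    independently included (probability p) or excluded (probability 1 - p).
    Returns a list of (chosen_inclusions, probability) pairs."""
    scenarios = []
    for mask in product([True, False], repeat=len(inclusions)):
        prob = 1.0
        chosen = []
        for keep, (name, p) in zip(mask, inclusions):
            if keep:
                prob *= p
                chosen.append(name)
            else:
                prob *= 1.0 - p
        scenarios.append((tuple(chosen), prob))
    return scenarios

# Hypothetical inclusions for combining "Pet" and "Fish":
incs = [("T(Fish) <= LivesInWater", 0.9), ("T(Pet) <= LivesAtHome", 0.8)]
for chosen, p in scenario_probabilities(incs):
    print(chosen, round(p, 2))
```

The scenario keeping both inclusions gets probability 0.9 × 0.8 = 0.72, the empty one 0.1 × 0.2 = 0.02; reasoning then proceeds over the consistent scenarios, weighted by these probabilities.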
In this paper we propose a computational framework aimed at extending the problem solving capabilities of cognitive artificial agents through the introduction of a novel, goal-directed, dynamic knowledge generation mechanism obtained via a non-monotonic reasoning procedure. In particular, the proposed framework relies on the assumption that certain classes of problems cannot be solved by simply learning or injecting new external knowledge into the declarative memory of a cognitive artificial agent but rather require a mechanism for the automatic and creative re-framing, or re-formulation, of the available knowledge. We show how such a mechanism can be obtained through a framework of dynamic knowledge generation that is able to tackle the problem of commonsense concept combination. In addition, we show how such a framework can be employed in the field of cognitive architectures in order to overcome situations like the impasse in SOAR, by extending the possible options of its subgoaling procedures.
We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of combining prototypical concepts, an open problem in the fields of AI and cognitive modelling. Our logic extends the logic of typicality ALC + TR, based on the notion of rational closure, by inclusions of the form p :: T(C) ⊑ D ("we have probability p that typical Cs are Ds"), coming from the distributed semantics of probabilistic Description Logics. Additionally, it embeds a set of cognitive heuristics for concept combination. We show that the complexity of reasoning in our logic is EXPTIME-complete, as in ALC.
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning are also available [34]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [24] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [23] in defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one. In particular we focus on the advantages offered by Conceptual Spaces (w.r.t. symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatic representation and reasoning in CAs.
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representations, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on the one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in the cognitive sciences by providing unexplored connections between different theories; on the other hand, it is aimed at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be intended as heterogeneous proxytypes. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis, as well as a novel categorization algorithm called DELTA, showing how to integrate the representational and reasoning assumptions of the theory-theory of concepts with those ascribed to the prototype and exemplar-based theories.
The problem of concept representation is relevant for many sub-fields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one of the causes of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as compositionality, on the one hand, and the need to represent prototypical information, on the other. In some ways artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this current state of affairs. Since it is our opinion that a mature methodology with which to approach knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies which take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology may therefore give better results in many applications.
The mental rotation ability is an essential spatial reasoning skill in human cognition and has proven to be a strong predictor of mathematical and STEM skills, as well as of critical and computational thinking. Despite its importance, little is known about when and how mental rotation processes are activated in games explicitly targeting spatial reasoning tasks. In particular, the relationship between spatial abilities and Tetris™ has been analysed several times in the literature. However, these analyses have shown contrasting results concerning the effectiveness of Tetris-based training activities in improving mental rotation skills. In this work, we studied whether, and under what conditions, such ability is used in the Tetris™ game by explicitly modelling mental rotation via an ACT-R based cognitive model controlling a virtual agent. The obtained results show meaningful insights into the activation of mental rotation during game dynamics. The study suggests the necessity of adapting game dynamics in order to force the activation of this process, and can therefore inspire the design of learning activities based on Tetris™, or a re-design of the game itself to improve its educational effectiveness.
We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of cognitive approaches to Artificial Intelligence, and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the coming years.
In his famous 1982 paper, Allen Newell [22, 23] introduced the notion of knowledge level to indicate a level of analysis, and prediction, of the rational behavior of a cognitive artificial agent. This analysis concerns the investigation of the knowledge available to the agent for pursuing its own goals, and is based on the so-called Rationality Principle (an assumption according to which "an agent will use the knowledge it has of its environment to achieve its goals" [22, p. 17]). In Newell's own words: "To treat a system at the knowledge level is to treat it as having some knowledge, some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates" [22, p. 13]. In the last decades, the importance of the knowledge level has been historically and systematically downsized by the research area of cognitive architectures (CAs), whose interests have been mainly focused on the analysis and development of the mechanisms and processes governing human and (artificial) cognition. The knowledge level in CAs, however, represents a crucial level of analysis for the development of such artificial general systems and therefore deserves greater research attention [17]. In the following, we will discuss areas of broad agreement and outline the main problematic aspects that should be faced within a Common Model of Cognition [12]. Such aspects, departing from an analysis at the knowledge level, also clearly impact both lower (e.g. representational) and higher (e.g. social) levels.
In this article we argue that the problem of the relationships between concepts and perception in cognitive science is blurred by the fact that the very notion of concept is rather confused. Since it is not always clear exactly what concepts are, it is not easy to say, for example, whether and in what measure concept possession involves entertaining and manipulating perceptual representations, whether concepts are entirely different from perceptual representations, and so on. As a paradigmatic example of this state of affairs, we will start by taking into consideration the distinction between conceptual and nonconceptual content. The analysis of such a distinction will lead us to the conclusion that concept is a heterogeneous notion. Then we shall take into account the so-called dual process theories of mind; this approach also points to concepts being a heterogeneous phenomenon: different aspects of conceptual competence are likely to be ascribed to different types of systems. We conclude that without a clear specification of what concepts are, the problem of the relationships between concepts and perception is somewhat ill-posed.
Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the issue of representing "non classical" concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unresolved. In this paper we review empirical evidence from cognitive psychology, according to which concept representation is not a unitary phenomenon. On this basis, we sketch some proposals for concept representation, taking into account suggestions from psychological research. In particular, it seems that human beings employ both prototype-based and exemplar-based representations in order to represent non classical concepts. We suggest that a similar, hybrid prototype-exemplar based approach could also prove useful in the field of knowledge representation technology. Finally, we propose conceptual spaces as a suitable framework for developing some aspects of this proposal.
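The hybrid prototype-exemplar idea can be sketched as a toy categoriser (an illustrative assumption about one way to combine the two strategies, not the paper's proposal in detail; the 2-D feature space and the stored "penguin" exemplar are invented): an item is compared both to category prototypes and to individual stored exemplars, and the closest match wins.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def categorise(item, prototypes, exemplars):
    """Hybrid categorisation: score each category by the *smaller* of its
    prototype distance and its nearest-exemplar distance, so an atypical
    member (e.g. a penguin) can still be caught by a close stored exemplar
    even when it lies far from the category prototype."""
    best_cat, best_score = None, float("inf")
    for cat in prototypes:
        proto_d = dist(item, prototypes[cat])
        exem_d = min(dist(item, e) for e in exemplars.get(cat, [prototypes[cat]]))
        score = min(proto_d, exem_d)
        if score < best_score:
            best_cat, best_score = cat, score
    return best_cat

# Hypothetical 2-D feature space: (flies, swims)
prototypes = {"bird": (0.9, 0.1), "fish": (0.0, 1.0)}
exemplars = {"bird": [(0.0, 0.8)]}  # a stored penguin-like exemplar
print(categorise((0.1, 0.7), prototypes, exemplars))
```

Here the item (0.1, 0.7) is far from the bird prototype but close to the stored penguin exemplar, so the exemplar route classifies it as a bird where a prototype-only categoriser would call it a fish.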
In the present paper, we shall discuss the notion of prototype and show its benefits. First, we shall argue that the prototypes of common-sense concepts are necessary for making prompt and reliable categorisations and inferences. However, the features constituting the prototype of a particular concept are neither necessary nor sufficient conditions for determining category membership; in this sense, the prototype might lead to conclusions regarded as wrong from a theoretical perspective. That being said, the prototype remains essential to handling most ordinary situations and helps us to perform important cognitive tasks. To exemplify this point, we shall focus on disease concepts. Our analysis concludes that the prototypical conception of disease is needed to make important inferences from a practical and clinical point of view. Moreover, it can still be compatible with a classical definition of disease, given in terms of necessary and sufficient conditions. In the first section, we shall compare the notion of stereotype, as it has been introduced in philosophy of language by Hilary Putnam, with the notion of prototype, as it has been developed in the cognitive sciences. In the second section, we shall discuss the general role of prototypical information in cognition and stress its centrality. In the third section, we shall apply our previous discussion to the specific case of medical concepts, before briefly summarising our conclusions in section four.
In the last decades Human-Computer Interaction (HCI) has started to focus attention on "persuasive technologies" having the goal of changing users' behavior and attitudes according to a predefined direction. In this talk we show how some of the techniques employed in such technologies trigger well-known cognitive biases by adopting a strategy relying on logical fallacies (i.e. forms of reasoning which are logically invalid but psychologically persuasive). In particular, we will show how mechanisms reducible to logical fallacies are used to design web and mobile interfaces in domains ranging from e-commerce to jihadist propaganda. The final part of the talk will be devoted to pointing out the potential ethical dangers related to the misuse of these techniques in the design of persuasive technologies.
This work describes an explainable system for emotion attribution and recommendation, called DEGARI (Dynamic Emotion Generator And ReclassIfier), relying on a recently introduced probabilistic commonsense reasoning framework.
This article addresses an open problem in the area of cognitive systems and architectures: namely, the problem of handling (in terms of processing and reasoning capabilities) complex knowledge structures that can be at least plausibly comparable, both in terms of size and of typology of the encoded information, to the knowledge that humans process daily for executing everyday activities. Handling a huge amount of knowledge, and selectively retrieving it according to the needs emerging in different situational scenarios, is an important aspect of human intelligence. For this task, in fact, humans adopt a wide range of heuristics (Gigerenzer & Todd) due to their "bounded rationality" (Simon, 1957). In this perspective, one of the requirements that should be considered for the design, realization and evaluation of intelligent cognitively-inspired systems is their ability to heuristically identify and retrieve, from the general knowledge stored in their artificial Long Term Memory (LTM), the knowledge that is synthetically and contextually relevant. This requirement, however, is often neglected. Currently, artificial cognitive systems and architectures are not able, de facto, to deal with complex knowledge structures that are even slightly comparable to the knowledge heuristically managed by humans. In this paper I will argue that this is not only a technological problem but also an epistemological one, and I will briefly sketch a proposal for a possible solution.
Commonsense reasoning is one of the main open problems in the field of Artificial Intelligence (AI) while, on the other hand, it seems to be a very intuitive and default reasoning mode in humans and other animals. In this talk, we discuss the different paradigms that have been developed in AI and Computational Cognitive Science to deal with this problem (ranging from logic-based methods to diagrammatic-based ones). In particular, we discuss, via two different case studies concerning commonsense categorization and knowledge invention tasks, how cognitively inspired heuristics can help (both in terms of efficiency and efficacy) in the realization of intelligent artificial systems able to reason in a human-like fashion, with results comparable to human-level performances.
I will review the main problems concerning commonsense reasoning in machines and present two different applications, namely the Dual PECCS linguistic categorization system and the TCL reasoning framework, that have been developed to address, respectively, the problem of typicality effects and that of commonsense compositionality, in a way that is integrated or compliant with different cognitive architectures, thus extending their knowledge processing capabilities. In doing so I will show how such aspects are better dealt with at different levels of representation, and will discuss how the adoption of a cognitively inspired approach can be useful in the design and implementation of the next generation of AI systems mastering commonsense.
In the last decade Human-Computer Interaction (HCI) has started to focus attention on forms of persuasive interaction where computer technologies have the goal of changing users' behavior and attitudes according to a predefined direction. In this work, we hypothesize a strong connection between logical fallacies (forms of reasoning which are logically invalid but cognitively effective) and some common persuasion strategies adopted within web technologies. With the aim of empirically evaluating our hypothesis, we carried out a pilot study on a sample of 150 e-commerce websites.
I will present two different applications - Dual PECCS and the TCL reasoning framework - addressing some crucial aspects of commonsense reasoning (namely: dealing with typicality effects and with the problem of commonsense compositionality) in a way that is integrated or compliant with different cognitive architectures. In doing so I will show how such aspects are better dealt with at different levels of representation and will discuss the adopted solution to integrate such representational layers.
Commonsense reasoning is a crucial human ability employed in everyday tasks. In this talk I provide a knowledge level analysis of the main representational and reasoning problems affecting cognitive architectures in this respect. In providing this analysis I will show, by considering some of the main cognitive architectures currently available (e.g. SOAR, ACT-R, CLARION), how one of the main problems of such architectures is represented by the fact that their knowledge representation and processing mechanisms are not sufficiently constrained by insights coming from cognitive science (Lieto 2021; Lieto, Lebiere, Oltramari, 2018). As a possible way out of such knowledge processing issues, I present the main assumptions that have led to the development of the Dual PECCS categorization system (Lieto, Radicioni, Rho 2017) and discuss some of the lessons learned and their possible implications for the design of the knowledge modules and knowledge-processing mechanisms of integrated cognitive architectures.
Invited Lecture at the SRM ACM Student Chapter, India, on Cognitive Heuristics for Commonsense Thinking and Reasoning in next-generation Artificial Intelligence. The lecture proposes a historical and technical overview of strategies for commonsense reasoning in AI.
I will present the rationale followed for the conceptualization and subsequent development of the Dual PECCS system, which relies on the cognitively grounded heterogeneous proxytypes representational hypothesis. This hypothesis allows the integration of exemplar and prototype theories of categorization, and has provided useful insights in the context of cognitive modelling for what concerns typicality effects in categorization. As argued in [Chella et al., 2017], [Lieto et al., 2018b] and [Lieto et al., 2018a], a pivotal role in this respect is played by the use of the conceptual spaces framework and by its integration with a symbolic knowledge representation layer.
The paper presents the heterogeneous proxytypes hypothesis as a cognitively-inspired computational framework able to reconcile, in both natural and artificial systems, different theories of typicality about conceptual representation and reasoning that have traditionally been seen as incompatible. In particular, through the Dual PECCS system and its evolution, it shows how prototypes, exemplars and theory-theory-like conceptual representations can be integrated in a cognitive artificial agent (thus extending its categorization capabilities) and, in addition, can provide useful insights in the context of a computationally grounded science of the mind.
A 3rd person Knowledge Level analysis of cognitive architectures

Abstract: I provide a knowledge level analysis of the main representational and reasoning problems affecting cognitive architectures. In providing this analysis I will show, by considering some of the main cognitive architectures currently available (e.g. SOAR, ACT-R, CLARION), how one of the main problems of such architectures is represented by the fact that their knowledge representation and processing mechanisms are not sufficiently constrained by "structural insights" (Lieto 2021) coming from cognitive science for dealing with commonsense knowledge and reasoning (Lebiere, Oltramari, 2018). As a possible way out of such knowledge processing issues, I present the main assumptions that have led to the development of the Dual PECCS categorization system (Lieto, Radicioni, Rho 2017) and discuss some of the lessons learned and their possible implications for the design of the knowledge modules and knowledge-processing mechanisms of integrated cognitive architectures.
As emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects and resist efforts to define them in terms of necessary and sufficient conditions. This also holds in the case of many medical concepts. This is a problem for the design of computer science ontologies, since the knowledge representation formalisms commonly adopted in this field do not allow for the representation of concepts in terms of typical traits. However, the need to represent concepts in terms of typical traits concerns almost every domain of real world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific mental disorders. In this respect, we favor a hybrid approach to the representation of psychiatric concepts, in which ontology-oriented formalisms are combined with a geometric representation of knowledge based on conceptual spaces.
This contribution aims to offer some food for thought, and a brief historical overview, on the role that the cognitive sciences have played, and can still play, in the development of the new generation of intelligent systems. It also illustrates the recent activities that the AISC (Associazione Italiana di Scienze Cognitive, of which the authors are currently Vice-President and President) is carrying out to develop lines of research in the field of cognitively inspired artificial systems.
This work proposes a comparison between different tools that can be used to model domain knowledge in educational settings: concept maps (Novak and Cañas, 2006), a tool traditionally used in schools, and computational ontologies (formal systems for conceptual modelling, currently widely used in artificial intelligence systems for their "automated reasoning" capabilities; see Guarino, 1995). Specifically, this article presents the results of a double field experiment conducted at the Liceo Scientifico "Guido Parodi" in Acqui Terme, in which groups of students compared concept maps and ontologies in solving two "misconception" (i.e. erroneous conceptualization) problems: one induced by handing out notes and teaching materials containing deliberately contradictory information (a case that could correspond to the situation in which a student, for some reason, takes notes incorrectly), and the other tied to a conceptual complexity intrinsic to the topic.
In this paper I will present an analysis of the impact that the notion of "bounded rationality", introduced by Herbert Simon in his book "Administrative Behavior", produced in the field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated Decision Making (ADM), I will show how the introduction of the cognitive dimension into the study of choice of a rational (natural) agent indirectly determined, in the AI field, the development of a line of research aimed at the realisation of artificial systems whose decisions are based on the adoption of powerful shortcut strategies (known as heuristics) relying on "satisficing", i.e. non-optimal, solutions to problem solving. I will show how the "heuristic approach" to problem solving made it possible, in AI, to tackle problems of combinatorial complexity in real-life situations, and still represents an important strategy for the design and implementation of intelligent systems.
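Simon's satisficing strategy discussed in this abstract can be illustrated with a minimal sketch (an invented example, not taken from the paper): instead of exhaustively searching for the optimal option, the agent scans options and accepts the first one whose value meets an aspiration level.

```python
def satisfice(options, value, aspiration):
    """Simon-style satisficing: return the first option whose value meets
    the aspiration level, stopping the search there; fall back to the
    best option seen if none is good enough."""
    best = None
    for opt in options:
        v = value(opt)
        if v >= aspiration:
            return opt          # good enough: stop searching
        if best is None or v > value(best):
            best = opt
    return best

# Hypothetical candidate routes scored by (negated) travel time in minutes
routes = [("A", 40), ("B", 25), ("C", 10), ("D", 5)]
value = lambda r: -r[1]          # shorter travel time = higher value
print(satisfice(routes, value, aspiration=-30))
```

With an aspiration of "at most 30 minutes", the search stops at route B even though routes C and D are strictly better; that early stopping is precisely what makes the heuristic tractable in combinatorially large option spaces.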
The research in Human Computer Interaction (HCI) has nowadays extended its attention to the study of persuasive technologies. Following this line of research, in this paper we focus on websites and mobile applications in the e-commerce domain. In particular, we take them as an evident example of persuasive technologies. Starting from the hypothesis that there is a strong connection between logical fallacies, i.e., forms of reasoning which are logically invalid but psychologically persuasive, and some common persuasion strategies adopted within these technological artifacts, we carried out a survey on a sample of 175 websites and 101 mobile applications. This survey was aimed at empirically evaluating the significance of this connection by detecting the use of persuasion techniques, based on logical fallacies, in existing websites and mobile apps. In addition, with the goal of assessing the effectiveness of different fallacy-based persuasion techniques, we performed an empirical evaluation where participants interacted with a persuasive (fallacy-based) and with a non-persuasive version of an e-commerce website. Our results show that fallacy-based persuasion strategies are extensively used in existing digital artifacts, and that they are actually effective in influencing users' behavior, with strategies based on visual salience manipulation (accent fallacy) being both the most popular and the most effective ones.
In the last decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial cue of our intelligence. However, despite the fact that, from a historical standpoint, narrative (and narrative structures) have been an important topic of investigation in both these areas, a more comprehensive approach coupling them with narratology, digital humanities and literary studies was still lacking. With the aim of filling this gap, in recent years a multidisciplinary effort has been made to create an international meeting open to computer scientists, psychologists, digital humanists, linguists, narratologists, etc. This event has been named CMN (for Computational Models of Narrative) and was launched in 2009 by the MIT scholars Mark A. Finlayson and Patrick H. Winston.
This paper presents a practical case study showing how, despite the nowadays limited collaboration between AI and Cognitive Science (CogSci), cognitive research can still have an important role in the development of novel AI technologies. After a brief historical introduction about the reasons for the divorce between the AI and CogSci research agendas (which occurred in the mid-1980s), we try to provide evidence of a renewed collaboration by presenting a recent case study on a commonsense reasoning system built by using insights from cognitive semantics.
In philosophy of language, a distinction has been proposed by Diego Marconi between two aspects of lexical competence, i.e. referential and inferential competence. The former accounts for the relationship of words to the world, the latter for the relationship of words among themselves. The aim of the paper is to offer a critical discussion of the kinds of formalisms and computational techniques that can be used in Artificial Intelligence to model the two aspects of lexical competence, and of the main difficulties related to the use of these computational techniques. The first conclusion of our discussion is that the distinction between inferential and referential semantics is instantiated in the Artificial Intelligence literature by the distinction between symbolic and connectionist approaches. The second conclusion of our discussion is that the modelling of lexical competence needs hybrid models integrating symbolic and connectionist frameworks. Our hypothesis is that Conceptual Spaces, a framework developed by Gärdenfors more than fifteen years ago, can offer a lingua franca that allows us to unify and generalize many aspects of the representational approaches mentioned above and to integrate “inferential” (=symbolic) and “referential” (=connectionist) computational approaches on common ground.
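The “lingua franca” role invoked here rests on a simple geometric mechanism. A minimal sketch, with entirely made-up concepts and coordinates (the dimension names and prototype points below are our illustrative assumptions, not taken from Gärdenfors or Marconi): each concept is reduced to a prototype point in a space of quality dimensions, and a sub-symbolic observation is mapped to the symbolic label of the nearest prototype, linking a “referential” grounding to an “inferential” vocabulary.

```python
import math

# Hypothetical quality space with two dimensions (hue, size); each concept is
# summarized by a prototype point. Coordinates are invented for illustration.
PROTOTYPES = {
    "banana": (0.17, 0.30),
    "apple":  (0.00, 0.25),
    "melon":  (0.25, 0.60),
}

def categorize(observation):
    """Map a point in the quality space to the concept with the nearest
    prototype -- the Voronoi-style categorization of conceptual spaces."""
    return min(PROTOTYPES, key=lambda c: math.dist(observation, PROTOTYPES[c]))

print(categorize((0.15, 0.28)))  # nearest prototype is "banana"
```

The design choice worth noting is that the symbolic label is not stored with the observation but computed from geometry, which is what lets the same structure serve both connectionist-style inputs and symbolic outputs.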
As emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects and resist efforts to define them in terms of necessary and sufficient conditions. This also holds in the case of many medical concepts. This is a problem for the design of computer science ontologies, since the knowledge representation formalisms commonly adopted in this field (such as, in the first place, the Web Ontology Language - OWL) do not allow for the representation of concepts in terms of typical traits. The need to represent concepts in terms of typical traits concerns almost every domain of real-world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology-oriented formalisms are combined with a geometric representation of knowledge based on conceptual spaces. As a preliminary step to apply our proposal to mental disorder concepts, we have started to develop an OWL ontology of the schizophrenia spectrum which is as close as possible to the DSM-5 descriptions.
Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we will show how the current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al., 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained.
The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue without directly solving the “opacity” problem. In this case, the idea is that of resorting to pre-compiled, plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al., 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful to provide results in the area of applied AI, shedding light on the models of interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
Formal ontologies are nowadays widely considered a standard tool for knowledge representation and reasoning in the Semantic Web. In this context, they are expected to play an important role in helping automated processes to access information. Namely, they are expected to provide a formal structure able to explicate the relationships between different concepts/terms, thus allowing intelligent agents to correctly interpret the semantics of web resources and improving the performance of search technologies. Here we take into account a problem regarding Knowledge Representation in general, and ontology-based representations in particular; namely, the fact that knowledge modeling seems to be constrained between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In particular, most common-sense concepts seem not to be captured by the stringent semantics expressed by such formalisms as, for example, Description Logics (the formalisms on which the ontology languages have been built). The aim of this work is to analyse this problem, suggesting a possible solution suitable for formal ontologies and Semantic Web representations. The questions guiding this research, in fact, have been: is it possible to provide a formal representational framework which, for the same concept, combines both the classical modelling view (accounting for compositional information) and defeasible, prototypical knowledge? Is it possible to propose a modelling architecture able to provide different types of reasoning (e.g. classical deductive reasoning for the compositional component and non-monotonic reasoning for the prototypical one)?
We suggest a possible answer to these questions by proposing a modelling framework able to represent, within the Semantic Web languages, a multilevel representation of conceptual information, integrating both classical and non-classical (typicality-based) information. Within this framework we hypothesise, at least in principle, the coexistence of multiple reasoning processes involving the different levels of representation.
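The coexistence of the two reasoning processes can be sketched in a few lines. This is a toy illustration under our own assumptions (the example concepts, property names, and exception mechanism are invented; the actual framework operates on Semantic Web formalisms, not Python dictionaries): a concept carries a classical core of necessary properties, checked deductively, plus a layer of typical traits that behave nonmonotonically and can be retracted for exceptional instances.

```python
# Two-level toy representation of a single concept:
CLASSICAL = {"bird": {"animal", "has_beak"}}     # necessary, compositional core
TYPICAL   = {"bird": {"flies", "builds_nests"}}  # defeasible, prototypical layer

def holds(concept, prop, exceptions=frozenset()):
    """Deductive on the classical core, nonmonotonic on the typical layer."""
    if prop in CLASSICAL.get(concept, set()):
        return True                      # strict: can never be retracted
    if prop in TYPICAL.get(concept, set()):
        return prop not in exceptions    # default: blocked for exceptional cases
    return False

print(holds("bird", "has_beak"))                     # strict property, always True
print(holds("bird", "flies", exceptions={"flies"}))  # a penguin-like exception: False
```

The point of the split is that adding an exception changes only the defeasible conclusions, while the classical core keeps its standard deductive semantics untouched.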
Inventing novel knowledge to solve problems is a crucial creative mechanism employed by humans to extend their range of action. In this talk, I will show how commonsense reasoning plays a crucial role in this respect. In particular, I will present a cognitively inspired reasoning framework for knowledge invention and creative problem solving exploiting TCL: a non-monotonic extension of a Description Logic (DL) of typicality able to combine prototypical (commonsense) descriptions of concepts in a human-like fashion. The proposed approach has been tested in the task of goal-driven concept invention and has additionally been applied within the context of serendipity-based recommendation systems. I will present the obtained results, the lessons learned, and the road ahead of this research path.
Dynamic conceptual reframing represents a crucial mechanism employed by humans, and partially by other animal species, to generate novel knowledge used to solve complex goals. In this talk, I will present a reasoning framework for knowledge invention and creative problem solving exploiting TCL: a non-monotonic extension of a Description Logic (DL) of typicality able to combine prototypical (commonsense) descriptions of concepts in a human-like fashion [1]. The proposed approach has been tested in the task of goal-driven concept invention [2,3] and has additionally been applied within the context of serendipity-based recommendation systems [4]. I will present the obtained results, the lessons learned, and the road ahead of this research path.
In this paper we present a framework for the dynamic and automatic generation of novel knowledge, obtained through a process of commonsense reasoning based on typicality-based concept combination. We exploit a recently introduced extension of a Description Logic of typicality able to combine prototypical descriptions of concepts in order to generate new prototypical concepts and deal with problems like the PET FISH one (Osherson and Smith, 1981; Lieto & Pozzato, 2019). Intuitively, in the context of our application of this logic, the overall pipeline of our system works as follows: given a goal expressed as a set of properties, if the knowledge base does not contain a concept able to fulfill all these properties, then our system looks for two concepts to recombine in order to extend the original knowledge base and satisfy the goal.
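The two-step pipeline just described can be sketched as follows. This is a hedged toy version under our own assumptions (the knowledge base, property names, and function name are invented for illustration; the real system reasons in the typicality DL, not over Python sets, and resolves conflicting typical properties when recombining): first try to satisfy the goal with a single stored concept, then fall back to searching for a pair whose combined description covers the goal.

```python
from itertools import combinations

# Invented toy knowledge base: concept name -> set of (typical) properties.
KB = {
    "fish": {"swims", "has_scales"},
    "pet":  {"lives_in_house", "is_tame"},
    "dog":  {"barks", "is_tame"},
}

def invent_concept(goal):
    """Return a (name, properties) pair fulfilling every property in `goal`."""
    # Step 1: a single stored concept may already fulfill the goal.
    for name, props in KB.items():
        if goal <= props:
            return name, props
    # Step 2: otherwise recombine two concepts. A real system would also
    # resolve clashes between their typical properties (the PET FISH case).
    for (n1, p1), (n2, p2) in combinations(KB.items(), 2):
        if goal <= (p1 | p2):
            return f"{n1}+{n2}", p1 | p2
    return None  # goal not satisfiable from this knowledge base

print(invent_concept({"swims", "is_tame"}))  # no single concept fits: fish+pet
```

Note that step 2 extends the knowledge available to the agent: the recombined concept is new knowledge invented on demand, rather than retrieved.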
Persuasive technologies can adopt several strategies to change the attitudes and behaviors of their users. In this work we present some empirical results stemming from the hypothesis - first formulated in [3] - that there is a strong connection between some well-known cognitive biases, reducible to fallacious argumentative schemata, and some of the most common persuasion strategies adopted within digital technologies. In particular, we will report how both framing and fallacy-reducible mechanisms are nowadays used to design web and mobile technologies in domains ranging from e-commerce [4] and news recommendations [1] to jihadist propaganda. We will also show how and to what extent such persuasive strategies have an impact on nudging the choices of users in digital environments.
As emerged from philosophical analysis and research in the cognitive sciences, most concepts, including many medical concepts, exhibit “prototypical effects” and cannot be defined in terms of necessary and sufficient conditions. This represents a problem for the design of ontologies in computer science, since the formalisms adopted for knowledge representation (starting with OWL - the Web Ontology Language) are not able to account for concepts in terms of their prototypical traits. In this article we focus on the class of mental disorders, referring to the descriptions given of them in the DSM-5. The idea is to propose a hybrid approach, in which ontology formalisms are combined with a geometric representation of knowledge based on conceptual spaces.
In this contribution we describe a computational creativity system able to automatically generate new concepts using a non-monotonic description logic that integrates three main ingredients: a description logic of typicality, a probabilistic extension based on the distributed semantics known as DISPONTE, and a cognitively inspired heuristic for the combination of multiple concepts. One of the main applications of the system concerns the field of computational creativity and, more specifically, its use as a creativity-support system in the media domain. In particular, the system is able to generate new stories (starting from a narrative representation of pre-existing stories), to generate new characters (e.g. the new “villain” of a TV series or a cartoon) and, in general, can be used to propose new narrative solutions and formats to be explored in the creative industry.
Combining typical knowledge to generate novel concepts is an important creative trait of human cognition. Dealing with such an ability requires, from an AI perspective, the harmonization of two conflicting requirements that are hardly accommodated in symbolic systems: the need for syntactic compositionality (typical of logical systems) and the exhibition of typicality effects (see Frixione and Lieto, 2012). In this work we provide a logical framework able to account for this type of human-like concept combination. We propose a nonmonotonic Description Logic of typicality called TCL (Typicality-based Compositional Logic).
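The PET FISH tension between compositionality and typicality can be illustrated with a deliberately simplified sketch (the property tables and the blunt "head wins" conflict rule below are our own illustrative choices, not the formal TCL machinery, which weighs typical inclusions rather than always preferring the head): when the typical traits of a head and a modifier concept clash on the same attribute, one of them must be blocked rather than conjoined.

```python
# Invented typical-attribute tables for two concepts.
TYPICAL = {
    "fish": {"habitat": "water", "skin": "scales"},
    "pet":  {"habitat": "house", "temperament": "tame"},
}

def combine(head, modifier):
    """Compositional merge with a crude typicality rule: the head concept
    overrides the modifier wherever their typical attributes conflict."""
    merged = dict(TYPICAL[modifier])   # start from the modifier's defaults
    merged.update(TYPICAL[head])       # head traits take precedence on clashes
    return merged

pet_fish = combine("fish", "pet")
print(pet_fish["habitat"])      # "water": the head blocks the pet default "house"
print(pet_fish["temperament"])  # "tame": inherited intact from the modifier
```

Even this toy version shows why plain set union fails: a purely compositional conjunction would make a pet fish both water-dwelling and house-dwelling, which is exactly the clash a logic of typicality has to adjudicate.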