DOLCE, the first top-level ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources. Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
Many aspects of how humans form and combine concepts are notoriously difficult to capture formally. In this paper, we focus on the representation of three such aspects, namely overextension, underextension, and dominance. Inspired in part by the work of Hampton, we consider concepts as given through a prototype view and through the interdependencies between the attributes that define a concept. To approach this formally, we employ a recently introduced family of operators that enrich Description Logic languages. These operators aim to characterise complex concepts by collecting those instances that apply, in a finely controlled way, to 'enough' of the concept's defining attributes. Here, the meaning of 'enough' is technically realised by accumulating the weights of the satisfied attributes and comparing the result with a given threshold that needs to be met.
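As a rough illustration of the threshold mechanism just described, the sketch below classifies an instance under a concept when the accumulated weight of its satisfied attributes reaches a threshold. The attribute names, weights, and threshold are invented for the example and are not taken from the paper.

```python
# Minimal sketch of a weighted threshold check: an instance falls under a concept
# when the summed weight of its satisfied attributes meets the threshold.
# All attribute names, weights, and the threshold are illustrative assumptions.

def falls_under(instance_attrs, weighted_attrs, threshold):
    score = sum(w for attr, w in weighted_attrs.items() if attr in instance_attrs)
    return score >= threshold

bird = {"has_feathers": 2.0, "lays_eggs": 1.5, "flies": 1.0, "has_beak": 1.5}

# An individual that does not fly can still be classified as a bird,
# because the other attributes carry enough weight to reach the threshold.
penguin_like = {"has_feathers", "lays_eggs", "has_beak"}
print(falls_under(penguin_like, bird, threshold=4.0))  # True (2.0 + 1.5 + 1.5 = 5.0)
```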
Embracing an interdisciplinary approach grounded in Gärdenfors' theory of conceptual spaces, we introduce a formal framework to analyse and compare selected theories about technical artefacts present in the literature. Our focus is on design-oriented approaches where both designing and manufacturing activities play a crucial role. Intentional theories, like Kroes' dual nature thesis, are able to solve disparate problems concerning artefacts, but they face both the philosophical challenge of clarifying the ontological nature of intentional properties and the empirical challenge of testing the attribution of such intentional properties to artefacts. To avoid these issues, we propose an approach that, by identifying different modalities to characterise artefact types, does not commit to intentional qualities and is able to empirically ground compliance tests.
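To make the idea of an empirically grounded compliance test concrete, here is a minimal sketch in the spirit of conceptual spaces: an artefact type is modelled as a convex region over quality dimensions (here, simple intervals), and a produced item complies when its measured qualities fall inside that region. The dimensions, ranges, and sample values are assumptions made for the example.

```python
# Toy compliance test: the artefact type is a convex region over quality
# dimensions (axis-aligned intervals here), and an item complies when its
# measured quality point lies inside the region. All values are illustrative.

KETTLE = {               # quality dimension -> admissible interval
    "capacity_litres": (0.5, 2.0),
    "boil_time_min":   (0.0, 6.0),
    "max_temp_c":      (95.0, 110.0),
}

def complies(measured, type_region):
    """True if every measured quality lies within the interval for its dimension."""
    return all(lo <= measured[dim] <= hi for dim, (lo, hi) in type_region.items())

sample = {"capacity_litres": 1.7, "boil_time_min": 4.5, "max_temp_c": 100.0}
print(complies(sample, KETTLE))  # True
```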
We propose a formal framework to examine the relationship between models and observations. To make our analysis precise, models are reduced to first-order theories that represent both terminological knowledge – e.g., the laws that are supposed to regulate the domain under analysis and that allow for explanations, predictions, and simulations – and assertional knowledge – e.g., information about specific entities in the domain of interest. Observations are introduced into the domain of quantification of a distinct first-order theory that describes their nature and their organization and keeps track of the way they are experimentally acquired or intentionally elaborated. A model mainly represents the theoretical knowledge or hypotheses on a domain, while the theory of observations mainly represents the empirical knowledge and the given experimental practices. We propose a precise identity criterion for observations and we explore different links between models and observations by assuming a degree of independence between them. By exploiting some techniques developed in the field of social choice theory and judgment aggregation, we sketch some strategies to resolve inconsistencies between a given set of observations and the assumed theoretical hypotheses. The resolution of these inconsistencies can impact both the observations – e.g., the theoretical knowledge and the analysis of the way observations are collected or produced may highlight some unreliable sources – and the models – e.g., empirical evidence may invalidate some theoretical laws.
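The following toy sketch illustrates, under heavy simplification, the kind of aggregation strategy alluded to above: observation reports from several sources are aggregated by majority on each atomic proposition, and the result is then checked against a theoretical law. Sources, propositions, and the law are all assumptions made for the example and do not reproduce the paper's framework.

```python
# Toy sketch: majority aggregation over observation reports, then a consistency
# check against a theoretical law. Sources, propositions, and the law are assumed.

from collections import Counter

reports = {                      # source -> observed truth values of atomic propositions
    "sensor_A": {"heated": True, "expanded": False},
    "sensor_B": {"heated": True, "expanded": False},
    "sensor_C": {"heated": True, "expanded": True},
}

def majority(prop):
    votes = Counter(r[prop] for r in reports.values())
    return votes.most_common(1)[0][0]

aggregated = {p: majority(p) for p in ("heated", "expanded")}

# Theoretical law (hypothesis): whatever is heated expands.
law_holds = (not aggregated["heated"]) or aggregated["expanded"]
print(aggregated, "consistent with the law:", law_holds)
# False here: either some sources are deemed unreliable, or the law itself is revised.
```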
We argue that a cognitive semantics has to take into account the possibly partial information that a cognitive agent has of the world. After discussing Gärdenfors's view of objects in conceptual spaces, we offer a number of viable treatments of partiality of information and we formalize them by means of alternative predicative logics. Our analysis shows that understanding the nature of simple predicative sentences is crucial for a cognitive semantics.
We introduce a family of operators to combine Description Logic concepts. They aim to characterise complex concepts that apply to instances that satisfy 'enough' of the concept descriptions given. For instance, an individual might not have any tusks, but still be considered an elephant. To formalise the meaning of 'enough', the operators take a list of weighted concepts as arguments, together with a certain threshold to be met. We commence a study of the formal properties of these operators and of some variations. The intended applications concern the representation of cognitive aspects of classification tasks: the interdependencies among the attributes that define a concept, the prototype of a concept, and the typicality of the instances.
Relevant logics provide an alternative to classical implication that is capable of accounting for the relationship between the antecedent and the consequent of a valid implication. Relevant implication is usually explained in terms of the information required to assess a proposition. By doing so, relevant implication introduces a number of cognitively relevant aspects in the definition of logical operators. In this paper, we aim to take a closer look at the cognitive features of relevant implication. For this purpose, we develop a cognitively-oriented interpretation of the semantics of relevant logics. In particular, we provide an interpretation of Routley-Meyer semantics in terms of conceptual spaces and we show that it meets the constraints of the algebraic semantics of relevant logic.
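For readers unfamiliar with Routley-Meyer semantics, the sketch below spells out its standard truth condition for relevant implication over a tiny hand-made frame: A → B holds at a point x iff, for every pair (y, z) related to x by the ternary relation R, the truth of A at y guarantees the truth of B at z. The frame and valuation are arbitrary examples, and the conceptual-space reading developed in the paper is not reproduced here.

```python
# Minimal sketch of the Routley-Meyer clause for relevant implication over a
# hand-made frame. Points, the ternary relation R, and the valuation are assumed.

points = {"a", "b", "c"}
R = {("a", "b", "c"), ("a", "a", "a")}          # ternary accessibility relation
valuation = {"A": {"a", "b"}, "B": {"a", "c"}}  # where atomic propositions hold

def implies_at(x, antecedent, consequent):
    """A -> B holds at x iff for all (x, y, z) in R: A at y implies B at z."""
    return all(
        (y not in valuation[antecedent]) or (z in valuation[consequent])
        for (x_, y, z) in R if x_ == x
    )

print(implies_at("a", "A", "B"))  # True for this frame and valuation
```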
DOLCE, the first top-level (foundational) ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources (e.g. CIDOC CRM, DBpedia and WordNet). Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
How can organisations survive not only the substitution of members, but also other dramatic changes, like changes in the norms regulating their activities, the goals they plan to achieve, or the system of roles that compose them? This paper is a first step towards a well-founded ontological analysis of the persistence of organisations through changes. Our analysis leverages Kit Fine's notions of rigid and variable embodiment and proposes to view the (history of the) decisions made by the members of the organisation as the criterion to re-identify the organisation through change.
We present a preliminary high-level formal theory, grounded on knowledge representation techniques and foundational ontologies, for the uniform and integrated representation of the different kinds of (qualitative and quantitative) knowledge involved in the design process. We discuss the conceptual nature of engineering design by identifying and analyzing the notions involved. These notions are then formally characterized by extending the DOLCE foundational ontology. Our ultimate purpose is twofold: (i) to contribute to foundational issues of design; and (ii) to support the development of advanced modelling systems for the (qualitative and quantitative) representation of design knowledge.
We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. From a symbolic AI perspective, dealing with concept combination requires coping with competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both these needs. More precisely, following a proposal of Hampton [1], it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works by exploiting some paradigmatic examples discussed in the cognitive literature.
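The sketch below gives a rough, simplified rendering of the general strategy just outlined; how weights are merged and how the new threshold is set are assumptions made for illustration and do not reproduce the paper's algorithm.

```python
# Rough sketch of combining two weighted concept descriptions: drop features
# marked impossible, keep necessary ones at full strength, average the rest,
# and set a new threshold. The merging and threshold rules are assumptions.

def combine(head, modifier, necessary=(), impossible=()):
    """head, modifier: dicts mapping feature -> weight. Returns (combined, threshold)."""
    combined = {}
    for feat in set(head) | set(modifier):
        if feat in impossible:
            continue                                   # discard impossible features
        w_h, w_m = head.get(feat, 0.0), modifier.get(feat, 0.0)
        combined[feat] = max(w_h, w_m) if feat in necessary else (w_h + w_m) / 2
    threshold = 0.6 * sum(combined.values())           # assumed rule for the new threshold
    return combined, threshold

pet = {"lives_in_house": 2.0, "is_kept_by_humans": 2.0, "is_furry": 1.0}
fish = {"has_fins": 2.0, "lives_in_water": 2.0, "is_kept_by_humans": 0.5}
print(combine(pet, fish, necessary={"has_fins"}, impossible={"is_furry"}))
```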
We analyze and compare geometrical theories based on mereology (mereogeometries). Most theories in this area lack a full formalization, and this prevents any systematic logical analysis. To overcome this problem, we concentrate on specific interpretations for the primitives and use them to isolate comparable models for each theory. Relying on the chosen interpretations, we introduce the notion of environment structure, that is, a minimal structure that contains a (sub)structure for each theory. In particular, in the case of mereogeometries, the domain of an environment structure is composed of particular subsets of R^n. The comparison of mereogeometrical theories within these environment structures shows dependencies among primitives and provides (relative) definitional equivalences. With one exception, we show that all the theories considered are equivalent in these environment structures.
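As a toy illustration of the kind of interpretation involved, the sketch below reads two typical mereogeometrical primitives over concrete regions: parthood as set inclusion and connection as non-empty overlap. Finite sets of grid points stand in for the subsets of R^n used in actual environment structures, so this is purely illustrative.

```python
# Toy interpretation of mereogeometrical primitives: regions are finite sets of
# grid points (stand-ins for subsets of R^n), parthood is inclusion, connection
# is shared points. Regions below are arbitrary examples.

disc_small = {(x, y) for x in range(2, 4) for y in range(2, 4)}
disc_large = {(x, y) for x in range(0, 6) for y in range(0, 6)}
far_away   = {(x, y) for x in range(10, 12) for y in range(10, 12)}

def part_of(a, b):
    return a <= b            # parthood as inclusion

def connected(a, b):
    return bool(a & b)       # connection as non-empty overlap

print(part_of(disc_small, disc_large))   # True
print(connected(disc_large, far_away))   # False
```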
Product structures are represented in engineering models by depicting and linking components, features and assemblies. Understanding them requires knowledge of both design and manufacturing practices, and yet further contextual reasoning is needed to read them correctly. Since these representations are essential to engineering activities, the lack of a clear and explicit semantics for these models hampers the use of information systems for their assessment and exploitation. We study this problem by identifying different interpretations of structure representations, and then discuss the formal properties that a suitable language needs for representing components, features and combinations of these. We show that the representation of components and features requires a non-standard mereology.
Forests, cars and orchestras are very different ontological entities, and yet very similar in some respects. The relationship they have with the elements they are composed of is often assumed to be reducible to standard ontological relations, like parthood and constitution, but how this could be done is still debated. This paper sheds light on the issue, starting from a linguistic and philosophical analysis aimed at understanding notions like plurality, collective and composite, and proposing a formal approach to characterise them. We conclude the presentation with a discussion and analysis of social groups within this framework.
A concept is traditionally defined via the necessary and sufficient conditions that clearly determine its extension. By contrast, cognitive views of concepts intend to account for empirical data showing that categorisation under a concept presents typicality effects and a certain degree of indeterminacy. We propose a formal language to compactly represent concepts by leveraging weighted logical formulas. In this way, we can model the possible synergies among the qualities that are relevant for categorising an object under a concept. We show that our proposal can account for a number of views of concepts, such as the prototype theory and the exemplar theory. Moreover, we show how the proposed model can overcome some limitations of these cognitive views.
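A minimal sketch of how a weighted representation can mimic both views mentioned above: under a prototype reading, membership is a single weighted description checked against a threshold; under an exemplar reading, it is a weighted overlap with stored exemplars. Attributes, weights, exemplars, and thresholds are invented for the example.

```python
# Sketch: prototype view (one weighted description + threshold) vs. exemplar view
# (weighted overlap with stored exemplars). All data below is illustrative.

def prototype_member(instance, prototype, threshold):
    return sum(w for a, w in prototype.items() if a in instance) >= threshold

def exemplar_member(instance, exemplars, weights, threshold):
    best = max(sum(weights.get(a, 1.0) for a in instance & ex) for ex in exemplars)
    return best >= threshold

bird_prototype = {"flies": 2.0, "has_feathers": 2.0, "lays_eggs": 1.0}
bird_exemplars = [{"flies", "has_feathers", "sings"}, {"swims", "has_feathers", "lays_eggs"}]

penguin = {"swims", "has_feathers", "lays_eggs"}
print(prototype_member(penguin, bird_prototype, threshold=3.0))                        # True
print(exemplar_member(penguin, bird_exemplars, {"has_feathers": 2.0}, threshold=3.0))  # True
```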
This book is written in homage to Nicola Guarino. It is a tribute to his many scientific contributions to the new discipline, applied ontology, that he strove to establish. Nicola Guarino is widely recognized as one of the pioneers in formal and applied ontology. Renowned – and sometimes even criticized – for his deep interest in the subtlest details of theoretical analysis, throughout his career he has held the conviction that all science has to be for the benefit of society at large, hence his motto that ontologies are not just for making information systems interoperable, but also – and more importantly – for making people (the users of the systems) understand each other. He was among the first to realize that, to capture the intended meaning of the terms used by an information system, applied ontology has necessarily to be an interdisciplinary enterprise. Nicola's early career developed in the areas of data systems and expert systems in physics and biomedical engineering. The lack of methodologies in expert systems led him to turn to philosophical logic as a source of inspiration, prompting him to attend the meetings of the analytic philosophy group at the University of Padua, and to discover a whole new world. It was the end of the 1980s, and in that period the term 'ontology' started to be used to indicate a shared vocabulary across a community. Recognizing the potential of this idea, Nicola began studying philosophical work. Knowing that language remains one of the pivotal elements in knowledge acquisition and representation, he paired it with the study of linguistic analysis. The combination of the two fields proved to be fundamental in shaping his research vision, which could be summarized as: ontological analysis is hard, yet unavoidable, to address the pervasive need for explicit, meaningful and transparent information systems. In other words, ontology makes sense.
Nicola Guarino is widely recognized as one of the founders of applied ontology. His deep interest in the subtlest details of theoretical analysis and his vision of ontology as the Rosetta Stone for semantic interoperability guided the development and understanding of this domain. His motivations in research stem from the conviction that all science must be for the benefit of society at large, and his motto has always been that ontologies are not just for making information systems interoperable, but – more importantly – for ensuring that systems' users understand each other. He was among the first to recognize that applied ontology must be an interdisciplinary enterprise if it is to capture the intended meaning of the terms used by an information system.