Economics and culture are in a complex, developing relation to each other. Yet, to introduce “culture” into economic theory requires, first of all, an appropriate understanding of culture itself. The crucial point of this paper is that culture in its development and structure is only understandable if one considers it in connection with the autonomous structural development of the forms with which subjects experience and construct their world. In recognition of the socio-cultural organization of human society, there is no absolute autonomy of individuals in comparison to society and economics, while together with this interdependency the development of rationality exceeds mere instrumentality. Through ontogenesis, every individual is located “within the boundaries of society”. What are the consequences for economic theory? First of all: economics is a cultural science in a double sense. Its object is the changing world of economic phenomena that are bound in a very specific cultural context. However, culture is not only relevant for the phenomena of socio-economic life, but also for the phenomena of economic science, i.e. for the development of economic thought.
In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-faceted notion which is hard to define comprehensively. However, the frequently used definition of information as data endowed with meaning can promote our understanding. It is argued that information in this sense is a necessary condition of cognitivist ethics. This is the basis for analysing computers and information processors regarding their status as possible moral agents. Computers have several characteristics that are desirable for moral agents. However, computers in their current form are unable to capture the meaning of information and therefore fail to reflect morality in anything but a most basic sense of the term. This shortcoming is discussed using the example of the Moral Turing Test. The paper ends with a consideration of which conditions computers would have to fulfil in order to be able to use information in such a way as to render them capable of acting morally and reflecting ethically.
How can we best identify, understand, and deal with ethical and societal issues raised by healthcare robotics? This paper argues that next to ethical analysis, classic technology assessment, and philosophical speculation we need forms of reflection, dialogue, and experiment that come, quite literally, much closer to innovation practices and contexts of use. The authors discuss a number of ways to achieve this. Informed by their experience with “embedded” ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies “internal” and “external” forms of dialogical research and innovation, reflects on the possibilities and limitations of these forms of ethical–technological innovation, and explores a number of ways in which they can be supported by policy at the national and supranational levels.
The discourse concerning computer ethics qualifies as a reference discourse for ethics-related IS research. Theories, topics and approaches of computer ethics are reflected in IS. The paper argues that there is currently a broader development in the area of research governance, which is referred to as 'responsible research and innovation' (RRI). RRI applied to information and communication technology addresses some of the limitations of computer ethics and points toward a broader approach to the governance of science, technology and innovation. Taking this development into account will help IS increase its relevance and make optimal use of its established strengths.
There has been much debate about whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves particular social aims. If this is the main aspect of responsibility, then the question whether computers can be responsible no longer hinges on the difficult problem of agency but on the possibly simpler question whether responsibility ascriptions to computers can fulfil social goals. The suggested solution to the question whether computers can be subjects of responsibility is the introduction of a new concept, called “quasi-responsibility”, which will emphasise the social aim of responsibility ascription and which can be applied to computers.
The two main challenges of the theory of conceptual content presented by Robert Brandom in Making It Explicit are to account for a referential dimension of conceptual content and to account for the objectivity of conceptual norms. Brandom tries to meet both these challenges in chapter 8 of his book. I argue that the accounts presented there can only be understood if seen against the background of Brandom's theory of communication developed in chapter 7. This theory is motivated by the well-known problem that semantic holism threatens the possibility of communication because it has the consequence that words mean different things in different mouths. Brandom offers a solution to this problem in terms of what he calls recurrence commitments. I show that chapter 8 of Making It Explicit should be understood as arguing that a practice that includes acknowledging interpersonal recurrence commitments institutes both conceptual contents with a referential dimension and objective conceptual norms. I close by raising the objection that Brandom's argument can only show that conceptual norms are communally shared and not that they are objective. I propose an emendation of this argument, having recourse to a practice Brandom refers to as rational rectification in his new book Between Saying and Doing.
The term “synthetic biology” is a popular label of an emerging biotechnological field with strong claims to robustness, modularity, and controlled construction, finally enabling the creation of new organisms. Although the research community is heterogeneous, it advocates a common denominator that seems to define this field: the principles of rational engineering. However, it still remains unclear to what extent rational engineering, rather than “tinkering” or the use of random or non-rational processes, actually constitutes the basis for the techniques of synthetic biology. In this article, we present the results of a quantitative bibliometric analysis of the realized extent of rational engineering in synthetic biology. In our analysis, we examine three issues: (1) We evaluate whether work at three levels of synthetic biology (parts, devices, and systems) is consistent with the principles of rational engineering. (2) We estimate the extent of rational engineering in synthetic biology laboratory practice by an evaluation of publications in synthetic biology. (3) We examine the methodological specialization in rational engineering of authors in synthetic biology. Our analysis demonstrates that rational engineering is prevalent in about half of the articles related to synthetic biology. Interestingly, in recent years the relative number of respective publications has decreased. Despite its prominent role among the claims of synthetic biology, rational engineering has not yet entirely replaced biotechnological methods based on “tinkering” and non-rational principles.
Metakides and Nerode introduced the study of recursively enumerable substructures of a recursively presented structure; the main line of this study is to examine the effective content of certain algebraic structures. Metakides and Nerode studied the lattice of r.e. subspaces of a recursively presented vector space, a lattice later studied by Kalantari, Remmel, Retzlaff and Shore. Similar studies have been done by Metakides and Nerode for algebraically closed fields, by Remmel for Boolean algebras, and by Metakides and Remmel for orderings. Kalantari and Retzlaff introduced and studied the lattice of r.e. subsets of a recursively presented topological space. They considered X, a topological space with a countable basis ⊿. This basis is coded into the integers, and with the help of this coding, r.e. subsets of ω give rise to r.e. subsets of X. The notion of “recursiveness” of a topological space is the natural next step, which gives rise to the question: what should be the “degree” of an r.e. open subset of X? It turns out that any r.e. open set partitions ⊿ into four sets whose Turing degrees become central in answering this question. In this paper we show that the degrees of the elements of the partition of ⊿ imposed by an r.e. open set can be “controlled independently”, in a sense made precise in the body of the paper. Kalantari and Retzlaff showed that given any r.e. set A and any r.e. open subset of X, there exists an r.e. open set ℋ which is a subset of the given open set, is dense in it, and in which A is coded. This shows that, modulo a nowhere dense set, an r.e. open set can become as complicated as desired. After giving the general technical and notational machinery in §1, and the particulars of our needs in §2, in §3 we prove that the set ℋ described above can be made to be precisely of the degree of A. We then go on to establish various results on the mentioned partitioning of ⊿.
One of the surprising results is that there are r.e. open sets for which every element of the partitioning of ⊿ is of a different degree. Since the exact wording of the results uses the technical definitions of these partitioning elements, we do not summarize the results here and ask the reader to examine §3 after browsing through §§1 and 2.
Metakides and Nerode introduced the study of the lattice of recursively enumerable substructures of a recursively presented model as a means to understand the recursive content of certain algebraic constructions. For example, the lattice of recursively enumerable subspaces of a recursively presented vector space V∞ has been studied by Kalantari, Metakides and Nerode, Retzlaff, Remmel and Shore. Similar studies have been done by Remmel for Boolean algebras and by Metakides and Nerode for algebraically closed fields. In all of these models, the algebraic closure of a set is nontrivial. (The precise definition of cl(S) is given in §1; however, in vector spaces cl(S) is just the subspace generated by S, in Boolean algebras cl(S) is just the subalgebra generated by S, and in algebraically closed fields cl(S) is just the algebraically closed subfield generated by S.) In this paper, we give a general model-theoretic setting in which we are able to give constructions which generalize many of the constructions of classical recursion theory. One of the main features of the models which we study is that the algebraic closure of a set S is just S itself, i.e., cl(S) = S. Examples of such models include the natural numbers under equality 〈N, =〉, the rational numbers under the usual ordering 〈Q, ≤〉, and a large class of n-dimensional partial orderings.
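The contrast the abstract draws, between structures with a nontrivial algebraic closure (such as vector spaces) and models where cl(S) = S, can be sketched concretely. The following Python sketch is illustrative only; the function names `span_gf2` and `cl_trivial` are my own, and the example uses vectors over GF(2) as a small stand-in for a recursively presented vector space:

```python
# Illustrative sketch (not from the paper): two closure operators.
# In a vector space over GF(2), cl(S) is the subspace generated by S;
# in a model such as <N, => the closure is trivial: cl(S) = S.
from itertools import combinations

def span_gf2(vectors):
    """Subspace of GF(2)^n generated by the given vectors: all XOR
    combinations of subsets (the empty subset yields the zero vector)."""
    vectors = [tuple(v) for v in vectors]
    n = len(vectors[0]) if vectors else 0
    closure = set()
    for r in range(len(vectors) + 1):
        for subset in combinations(vectors, r):
            acc = tuple(0 for _ in range(n))
            for v in subset:
                acc = tuple(a ^ b for a, b in zip(acc, v))
            closure.add(acc)
    return closure

def cl_trivial(S):
    """Closure in a model with no nontrivial algebraic closure."""
    return set(S)

# Nontrivial closure: two generators span four vectors.
print(sorted(span_gf2([(1, 0, 0), (0, 1, 0)])))
# Trivial closure: the set is its own closure.
print(cl_trivial({1, 2, 3}))
```

The point of the sketch is only the structural difference: `span_gf2` properly enlarges its input, while `cl_trivial` returns it unchanged, which is the property the paper's general setting isolates.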
Attempts to explain the origin of macroevolutionary innovations have been only partially successful. Here it is proposed that the patterns of major evolutionary transitions have to be understood first, before it is possible to further analyse the forces behind the process. The hypothesis is that major evolutionary innovations are characterized by an increase in organismal autonomy, in the sense of emancipation from the environment. After a brief overview of the literature on this subject, increasing autonomy is defined as the evolutionary shift in the individual system–environment relationship, such that the direct influences of the environment are gradually reduced and a stabilization of self-referential, intrinsic functions within the system is generated. This is described as relative autonomy because numerous interconnections with the environment and dependencies upon it are retained. Features of increasing autonomy are spatial separations, an increase in homeostatic functions and in body size, internalizations and an increase in physiological and behavioral flexibility. It is described how these features are present in different combinations in the major evolutionary transitions of metazoans and, consequently, how they should be taken into consideration when evolutionary innovations are studied. The hypothesis contributes to a reconsideration of the relationship between organisms and their environment.
This study investigates the ethical use of Big Data (BD) and Artificial Intelligence (AI) technologies using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues that uses qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces identified singular ethical issues into clusters to offer a comparison with the proposed classification in the literature. The results show that despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in AI + BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature.
Information security can be of high moral value. It can equally be used for immoral purposes and have undesirable consequences. In this paper we suggest that critical theory can facilitate a better understanding of possible ethical issues and can provide support when finding ways of addressing them. The paper argues that critical theory has intrinsic links to ethics and that it is possible to identify concepts frequently used in critical theory to pinpoint ethical concerns. Using the example of UK electronic medical records the paper demonstrates that a critical lens can highlight issues that traditional ethical theories tend to overlook. These are often linked to collective issues such as social and organisational structures, which philosophical ethics with its typical focus on the individual does not tend to emphasise. The paper suggests that this insight can help in developing ways of researching and innovating responsibly in the area of information security.
Modern biology is ambivalent about the notion of evolutionary progress. Although most evolutionists imply in their writings that they still understand large-scale macroevolution as a somewhat progressive process, the use of the term “progress” is increasingly criticized and avoided. The paper shows that this ambivalence has a long history and results mainly from three problems: (1) The term “progress” carries historical, theoretical and social implications which are not congruent with modern knowledge of the course of evolution; (2) An incongruence exists between the notion of progress and Darwin’s theory of selection; (3) It is still not possible to give more than a rudimentary definition of the general patterns that were generated during the macroevolution of organisms. The paper consists of two parts: the first is a historical overview of the roots of the term “progress” in evolutionary biology, the second discusses epistemological, ontological and empirical problems. It is stated that the term has so far served as a metaphor for general patterns generated amongst organisms during evolution. It is proposed that a reformulation is needed to eliminate historically imported implications and that it is necessary to develop a concept for an appropriate empirical description of macroevolutionary patterns. This is the third way between, on the one hand, using the term indiscriminately and, on the other hand, ignoring the general patterns that evolution has produced.
Trustful interaction serves the interests of those involved. Thus, one could reason that trust itself may be analyzed as part of rational, goal-oriented action. In contrast, common sense tells us that trust is an emotion and is, therefore, independent of rational deliberation to some extent. I will argue that we are right in trusting our common sense. My argument is conceptual in nature, referring to the common distinction between trust and pure reliance. An emotional attitude may be understood as some general pattern in the way the world or some part of the world is perceived by an individual. Trust may be characterized by such a pattern. I shall focus on two central features of a trusting attitude. First, trust involves a participant attitude (Strawson) toward the person being trusted. Second, a situation of trust is perceived by a trusting person as one in which shared values or norms motivate both his own actions as well as those of the person being trusted. As an emotional attitude, trust is, to some extent, independent of objective information. It determines what a trusting person will believe and how various outcomes are evaluated. Hence, trust is quite different from rational belief, and the problem with trust is not adequately met by minimizing risk through extensive information or some mechanism of sanctioning. Trust is an attitude that enables us to cope with risk in a certain way. If we want to promote trustful interaction, we must form our institutions in ways that allow individuals to experience their interests and values as shared and, thus, to develop a trusting attitude.
A particular problem of traditional Rational Choice Theory is that it cannot explain equilibrium selection in simple coordination games. In this paper we analyze and discuss the solution concept for common coordination problems as incorporated in the theory of Team Reasoning (TR). Special consideration is given to TR’s concept of opportunistic choice and to the resulting restrictions in using private information. We report results from a laboratory experiment in which teams were given a chance to coordinate on a particular pattern of behavior in a sequence of HiLo games. A modification of the stage game offered opportunities to improve on the team goal through changing this accustomed pattern of behavior. Our observations throw considerable doubt on the idea of opportunistic team reasoning as a guide to coordination. Contrary to what TR would predict, individuals tend to stick to accustomed behavioral patterns. Moreover, we find that individual decisions are at least partly determined by private information not accessible to all members of a team. Alternative theories of choice, in particular cognitive hierarchy theory, may be more suitable to explain the observed pattern of behavior.
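The equilibrium-selection problem in HiLo games that the abstract starts from can be made concrete with a minimal sketch. The payoff values below are illustrative assumptions, not the paper's experimental parameters; the point is that both coordinated outcomes are Nash equilibria, so classical best-reply reasoning alone cannot single out the "Hi" outcome that team reasoning recommends:

```python
# Minimal HiLo coordination game (payoffs are illustrative assumptions).
# (Hi, Hi) Pareto-dominates (Lo, Lo), yet both are Nash equilibria.
PAYOFFS = {
    ("Hi", "Hi"): (2, 2),
    ("Lo", "Lo"): (1, 1),
    ("Hi", "Lo"): (0, 0),
    ("Lo", "Hi"): (0, 0),
}

def is_nash_equilibrium(profile):
    """Check that neither player gains by unilaterally deviating."""
    a, b = profile
    for alt in ("Hi", "Lo"):
        if PAYOFFS[(alt, b)][0] > PAYOFFS[(a, b)][0]:
            return False  # player 1 would deviate
        if PAYOFFS[(a, alt)][1] > PAYOFFS[(a, b)][1]:
            return False  # player 2 would deviate
    return True

equilibria = [p for p in PAYOFFS if is_nash_equilibrium(p)]
print(sorted(equilibria))  # both coordinated profiles qualify
```

Since best-reply analysis returns two equilibria, something beyond individual rationality (team reasoning, or an accustomed behavioral pattern, as in the experiment) is needed to explain why players coordinate on one of them.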
The contributors to _Constructing the Pluriverse_ critique the hegemony of the postcolonial Western tradition and its claims to universality by offering a set of “pluriversal” approaches to understanding the coexisting epistemologies and practices of the different worlds and problems we inhabit and encounter. Moving beyond critiques of colonialism, the contributors rethink the relationship between knowledge and power, offering new perspectives on development, democracy, and ideology while providing diverse methodologies for non-Western thought and practice that range from feminist approaches to scientific research to ways of knowing expressed through West African oral traditions. In combination, these wide-ranging approaches and understandings form a new analytical toolbox for those seeking creative solutions for dismantling Westernization throughout the world. Contributors: Zaid Ahmad, Manuela Boatcă, Hans-Jürgen Burchardt, Arturo Escobar, Sandra Harding, Ehsan Kashfi, Venu Mehta, Walter D. Mignolo, Ulrich Oslender, Isiaka Ouattara, Manu Samnotra, Aram Ziai.
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and people’s ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues is to be proactive in addressing such issues at an early stage of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The authors outline how the description of emerging ICTs can be used for an ethical analysis.
In this article we show that it is possible to completely classify the degrees of r.e. bases of r.e. vector spaces in terms of weak truth table degrees. The ideas extend to classify the degrees of complements and splittings. Several ramifications of the classification are discussed, together with an analysis of the structure of the degrees of pairs of r.e. summands of r.e. spaces.
Sociological, economic and evolutionary paradigms of human agency have often seen social agents either as the rational controllers of their fate or as marionettes on the strings of historical, functional or adaptive necessity. They found it therefore difficult to account for the variability, intentionality and creativity of human behaviour and for its frequently redundant or harmful results. This paper argues that human agency is a product of evolution, but that genetic variation and inheritance can only provide a limited explanation of its complex nature. The primary evolutionary problem which human agents face while they are alive is not to adapt to stable environments, but to respond flexibly and creatively to a contingent, uncertain world. Variation and selection therefore take two connected but distinct forms, one external, genetic, and inherited across generations, the other internal and cognitive, and operating during the lifetime of individuals. An examination of this lived part of evolution provides a better understanding of key properties of agency.
An important question one can ask of ethical theories is whether and how they aim to raise claims to universality. This refers to the subject area that they intend to describe or govern and also to the question whether they claim to be binding for all (moral) agents. This paper discusses the question of universality of Luciano Floridi’s information ethics (IE). This is done by introducing the theory and discussing its conceptual foundations and applications. The emphasis will be placed on the ontological grounding of IE. IE’s claims to universality will be contrasted with those raised by discourse ethics. This comparison of two pertinent ethical theories allows for a critical discussion of areas where IE currently has room for elaboration and development.
Lectures by Bernd Dörflinger [et al.] at the joint event of the Österreichische Akademie der Wissenschaften, Vienna, and the Akademie der Wissenschaften und der Literatur, Mainz, held on 8 and 9 March 1991 in Mainz.
The first-order theory of the lattice of recursively enumerable closed subsets of an effective topological space is proved undecidable using the undecidability of the first-order theory of the lattice of recursively enumerable sets. In particular, the first-order theory of the lattice of recursively enumerable closed subsets of Euclidean n-space, for all n, is undecidable. A more direct proof of the undecidability of the lattice of recursively enumerable closed subsets of Euclidean n-space, n ⩾ 2, is provided using the method of reduction and the recursive inseparability of the set of all formulae satisfiable in every model of the theory of SIBs and the set of all formulae refutable in some finite model of the theory of SIBs.
In this introduction we discuss the motivation behind the workshop “Towards a New Epistemology of Mathematics” of which this special issue constitutes the proceedings. We elaborate on historical and empirical aspects of the desired new epistemology, connect it to the public image of mathematics, and give a summary and an introduction to the contributions to this issue.
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and people’s ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues is to be proactive and address such issues at early stages of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The paper goes on to outline some of the preliminary findings of a European research project that has applied this method.