Science is an activity of the human intellect and as such has ethical implications that should be reviewed and taken into account. Although science and ethics have conventionally been considered distinct, it is proposed here that they are essentially similar. The proposal set forth is to create a new ethics rooted in science: scientific ethics. Science has firm axiological foundations and searches for truth and knowledge; hence, science cannot be value neutral. Looking at standard scientific principles, it is possible to construct a scientific ethic which can be applied to all sciences. These intellectual standards include the search for truth, human dignity and respect for life. From these it is then possible to draft the foundation of an ethics based purely on science and applicable beyond the confines of science. A few applications will be presented. Scientific ethics can have vast applications in other fields, even non-scientific ones.
The genetic code appeared on Earth with the first cells. The codes of cultural evolution arrived almost four billion years later. These are the only codes that are recognized by modern biology. In this book, however, Marcello Barbieri explains that there are many more organic codes in nature, and their appearance not only took place throughout the history of life but marked the major steps of that history. A code establishes a correspondence between two independent 'worlds', and the codemaker is a third party between those 'worlds'. Therefore the cell can be thought of as a trinity of genotype, phenotype and ribotype. The ancestral ribotypes were the agents which gave rise to the first cells. The book goes on to explain how organic codes and organic memories can be used to shed new light on the problems encountered in cell signalling, epigenesis, embryonic development, and the evolution of language.
When subjects view stimulation of a rubber hand while feeling congruent stimulation of their own hand, they may come to feel that the rubber hand is part of their own body. This illusion of body ownership is termed the ‘Rubber Hand Illusion’ (RHI). We investigated the sensitivity of the RHI to spatial mismatches between visual and somatic experience. We compared the effects of spatial mismatch between the stimulation of the two hands, and equivalent mismatches between the postures of the two hands. We created the mismatch either by adjusting the stimulation or posture of the subject’s hand, or, in a separate group of subjects, by adjusting the stimulation or posture of the rubber hand. The matching processes underlying body ownership were asymmetrical. The illusion survived small changes in the subject’s hand posture, but disappeared when the same posture transformations were applied to the rubber hand. Mismatch between the stimulation delivered to the subject’s hand and the rubber hand abolished the illusion. The combination of these two situations is of particular interest. When the subject’s hand posture was slightly different from the rubber hand posture, the RHI remained as long as stimulation of the two hands was congruent in a hand-centred spatial reference frame, even though the altered posture of the subject’s hand meant that stimulation was incongruent in external space. Conversely, the RHI was reduced when the stimulation was incongruent in hand-centred space but congruent in external space. We conclude that the visual–tactile correlation that causes the RHI is computed within a hand-centred frame of reference, which is updated with changes in body posture. Current sensory evidence about what is ‘me’ is interpreted with respect to a prior mental body representation.
Brain–computer interfacing (BCI) technologies are used as assistive technologies for patients as well as healthy subjects to control devices solely by brain activity. Yet the risks associated with the misuse of these technologies remain largely unexplored. Recent findings have shown that BCIs are potentially vulnerable to cybercriminality. This opens the prospect of “neurocrime”: extending the range of computer crime to neural devices. This paper explores a type of neurocrime that we call brain-hacking, as it aims at the illicit access to and manipulation of neural information and computation. As neural computation underlies cognition, behavior and our self-determination as persons, a careful analysis of the emerging risks of malicious brain-hacking is paramount, and ethical safeguards against these risks should be considered early in design and regulation. This contribution is aimed at raising awareness of the emerging risk of malicious brain-hacking and takes a first step in developing an ethical and legal reflection on those risks.
In this much-anticipated revision and translation of Scienza e Retorica, Marcello Pera argues that rhetoric is central to the making of scientific knowledge. Pera begins with an attack on what he calls the "Cartesian syndrome"--the fixation on method common to both defenders of traditional philosophy of science and its detractors. He argues that in assuming the primacy of methodological rules, both sides get it wrong. Scientific knowledge is neither the simple mirror of nature nor a cultural construct imposed by contingent interests; thus we must replace the idea of the scientific method with that of scientific rhetoric. Pera proposes a new dialectics of science to overcome the tension between normative and descriptive philosophies of science by focusing on the rhetoric in the proposition, defense, and argumentation of theories. Examining the uses of rhetoric in debates drawn from Galileo’s Dialogues, Darwin’s Origin, and the Big Bang–Steady State controversy in cosmology, Pera shows how the conduct of science involves not just nature and the inquiring mind, but nature, the inquiring mind, and a questioning community which, through the process of attack, defense, and dispute, determines what is science. Rhetoric, then, is an essential element in the constitution of science as the practice of persuasive argumentation through which results gain acceptance.
This book explores how we can measure consciousness. It clarifies what consciousness is, how it can be generated from a physical system, and how it can be measured. It also shows how conscious states can be expressed mathematically and how precise predictions can be made using data from neurophysiological studies.
Biosemiotics is the synthesis of biology and semiotics, and its main purpose is to show that semiosis is a fundamental component of life, i.e., that signs and meaning exist in all living systems. This idea started circulating in the 1960s and was proposed independently from enquiries taking place at both ends of the Scala Naturae. At the molecular end it was expressed by Howard Pattee’s analysis of the genetic code, whereas at the human end it took the form of Thomas Sebeok’s investigation into the biological roots of culture. Other proposals appeared in the years that followed and gave origin to different theoretical frameworks, or different schools, of biosemiotics. They are: (1) the physical biosemiotics of Howard Pattee and its extension in Darwinian biosemiotics by Howard Pattee and by Terrence Deacon, (2) the zoosemiotics proposed by Thomas Sebeok and its extension in sign biosemiotics developed by Thomas Sebeok and by Jesper Hoffmeyer, (3) the code biosemiotics of Marcello Barbieri and (4) the hermeneutic biosemiotics of Anton Markoš. The differences that exist between the schools are a consequence of their different models of semiosis, but that is only the tip of the iceberg. In reality they go much deeper and concern the very nature of the new discipline. Is biosemiotics only a new way of looking at the known facts of biology or does it predict new facts? Does biosemiotics consist of testable hypotheses? Does it add anything to the history of life and to our understanding of evolution? These are the major issues of the young discipline, and the purpose of the present paper is to illustrate them by describing the origin and the historical development of its main schools.
The use of Intelligent Assistive Technology (IAT) in dementia care opens the prospect of reducing the global burden of dementia and enabling novel opportunities to improve the lives of dementia patients. However, with current adoption rates being reportedly low, the potential of IATs might remain under-expressed as long as the reasons for suboptimal adoption remain unaddressed. Among these, ethical and social considerations are critical. This article reviews the spectrum of IATs for dementia and investigates the prevalence of ethical considerations in the design of current IATs. Our screening shows that a significant portion of current IATs is designed in the absence of explicit ethical considerations. These results suggest that the lack of ethical consideration might be a codeterminant of current structural limitations in the translation of IATs from designing labs to bedside. Based on these data, we call for a coordinated effort to proactively incorporate ethical considerations early in the design and development of new products.
The article deals with Chinese ink painting and some aesthetic notions, in particular those of xiang 象 and shanshui 山水 (literally: “mountains-waters”, i.e. landscape...
The past few years have witnessed several media-covered cases involving citizens actively engaging in the pursuit of experimental treatments for their medical conditions—or those of their loved ones—in the absence of established standards of therapy. This phenomenon is particularly observable in patients with rare genetic diseases, as the development of effective therapies for these disorders is hindered by the limited profitability and market value of pharmaceutical research. Sociotechnical trends at the cross-section of medicine and society are facilitating the involvement of patients and creating the digital infrastructure necessary for its sustainment. Such participant-led research (PLR) has the potential to promote the autonomy of research participants as drivers of discovery and to open novel non-canonical avenues of scientific research. At the same time, however, the extra-institutional, self-appointed, and, often, oversight-free nature of PLR raises ethical concerns. This paper explores the complex ethical entanglement of PLR by critically appraising case studies and discussing the conditions for its moral justification. Furthermore, we propose a path forward to ensure the safe and effective implementation of PLR within the current research ecosystem in a manner that maximizes the benefits for both individual participants and society at large, while minimizing the risks.
Systems Biology and the Modern Synthesis are recent versions of two classical biological paradigms that are known as structuralism and functionalism, or internalism and externalism. According to functionalism (or externalism), living matter is a fundamentally passive entity that owes its organization to external forces (functions that shape organs) or to an external organizing agent (natural selection). Structuralism (or internalism) is the view that living matter is an intrinsically active entity that is capable of organizing itself from within, with purely internal processes that are based on mathematical principles and physical laws. At the molecular level, the basic mechanism of the Modern Synthesis is molecular copying, the process that leads in the short run to heredity and in the long run to natural selection. The basic mechanism of Systems Biology, instead, is self-assembly, the process by which many supramolecular structures are formed by the spontaneous aggregation of their components. In addition to molecular copying and self-assembly, however, molecular biology has also uncovered a third great mechanism at the heart of life. The existence of the genetic code and of many other organic codes in Nature tells us that molecular coding is a biological reality, and we therefore need a framework that accounts for it. This framework is Code biology, the study of the codes of life, a new field of research that brings to light an entirely new dimension of the living world and gives us a completely new understanding of the origin and the evolution of life.
Bruce Waller has defended a deductive reconstruction of the kinds of analogical arguments found in ethics, law, and metaphysics. This paper demonstrates the limits of such a reconstruction and argues for an alternative, non-deductive reconstruction. It will be shown that some analogical arguments do not fit Waller's deductive schema, and that such a schema does not allow for an adequate account of the strengths and weaknesses of an analogical argument. The similarities and differences between the account defended herein and Trudy Govier's account are discussed as well.
Thomas Sebeok and Noam Chomsky are the acknowledged founding fathers of two research fields which are known respectively as Biosemiotics and Biolinguistics and which have been developed in parallel during the past 50 years. Both fields claim that language has biological roots and must be studied as a natural phenomenon, thus bringing to an end the old divide between nature and culture. In addition to this common goal, there are many other important similarities between them. Their definitions of language, for example, have much in common, despite the use of different terminologies. They both regard language as a faculty, or a modelling system, that appeared rapidly in the history of life and probably evolved as an exaptation from previous animal systems. Both accept that the fundamental characteristic of language is recursion, the ability to generate an unlimited number of structures from a finite set of elements (the property of ‘discrete infinity’). Both accept that human beings are born with a predisposition to acquire language in a few years and without apparent effort (the innate component of language). In addition to similarities, however, there are also substantial differences between the two fields, and it is an historical fact that Sebeok and Chomsky made no attempt at resolving them. Biosemiotics and Biolinguistics have become two separate disciplines, and yet in the case of language they are studying the same phenomenon, so it should be possible to bring them together. Here it is shown that this is indeed the case. A convergence of the two fields does require a few basic readjustments in each of them, but leads to a unified framework that keeps the best of both disciplines and is in agreement with the experimental evidence. What is particularly important is that such a framework immediately suggests a new approach to the origin of language. More precisely, it suggests that the brain wiring processes that take place in all phases of human ontogenesis (embryonic, foetal, infant and child development) are based on organic codes, and it is the step-by-step appearance of these brain-wiring codes, in a condition that is referred to as cerebra bifida, that holds the key to the origin of language.
The existence of different types of semiosis has been recognized, so far, in two ways. It has been pointed out that different semiotic features exist in different taxa and this has led to the distinction between zoosemiosis, phytosemiosis, mycosemiosis, bacterial semiosis and the like. Another type of diversity is due to the existence of different types of signs and has led to the distinction between iconic, indexical and symbolic semiosis. In all these cases, however, semiosis has been defined by the Peirce model, i.e., by the idea that the basic structure is a triad of ‘sign, object and interpretant’, and that interpretation is an essential component of semiosis. This model is undoubtedly applicable to animals, since it was precisely the discovery that animals are capable of interpretation that allowed Thomas Sebeok to conclude that they are also capable of semiosis. Unfortunately, however, it is not clear how far the Peirce model can be extended beyond the animal kingdom, and we already know that we cannot apply it to the cell. The rules of the genetic code have been virtually the same in all living systems and in all environments ever since the origin of life, which clearly shows that they do not depend on interpretation. Luckily, it has been pointed out that semiosis is not necessarily based on interpretation and can be defined exclusively in terms of coding. According to the ‘code model’, a semiotic system is made of signs, meanings and coding rules, all produced by the same codemaker, and in this form it is immediately applicable to the cell. The code model, furthermore, allows us to recognize the existence of many organic codes in living systems, and to divide them into two main types that here are referred to as manufacturing semiosis and signalling semiosis. The genetic code and the splicing codes, for example, take part in processes that actually manufacture biological objects, whereas signal transduction codes and compartment codes organize existing objects into functioning supramolecular structures. The organic codes of single cells appeared in the first three billion years of the history of life and were involved either in manufacturing semiosis or in signalling semiosis. With the origin of animals, however, a third type of semiosis came into being, a type that can be referred to as interpretive semiosis because it became closely involved with interpretation. We realize in this way that the contribution of semiosis to life was far greater than that predicted by the Peirce model, where semiosis is always a means of interpreting the world. Life is essentially about three things: (1) it is about manufacturing objects, (2) it is about organizing objects into functioning systems, and (3) it is about interpreting the world. The idea that these are all semiotic processes tells us that life depends on semiosis much more deeply and extensively than we thought. We realize in this way that there are three distinct types of semiosis in Nature, and that they gave very different contributions to the origin and the evolution of life.
The extravagances of quantum mechanics never fail to enrich the daily debate around natural philosophy. Entanglement, non-locality, collapse, many worlds, many minds, and subjectivism have challenged generations of thinkers. Its approach can perhaps be placed in the stream of quantum logic, in which the “strangeness” of QM is “measured” through the violation of Bell’s inequalities; from there it attempts an interpretative path that preserves realism yet ends up overturning it, restating the fundamental mechanisms of QM as a logical necessity for a strong realism.
‘Particularism’ and ‘generalism’ refer to families of positions in the philosophy of moral reasoning, with the former playing down the importance of principles, rules or standards, and the latter stressing their importance. Part of the debate has taken an empirical turn, and this turn has implications for AI research and the philosophy of cognitive modeling. In this paper, Jonathan Dancy’s approach to particularism (arguably one of the best known and most radical approaches) is questioned on both logical and empirical grounds. Doubts are raised over whether Dancy’s brand of particularism can adequately explain the graded nature of similarity assessments in analogical arguments. Also, simple recurrent neural network models of moral case classification are presented and discussed. This is done to raise concerns about Dancy’s suggestion that neural networks can help us to understand how we could classify situations in a way that is compatible with his particularism. Throughout, the idea of a surveyable standard—one with restricted length and complexity—plays a key role. Analogical arguments are taken to involve multidimensional similarity assessments, and surveyable contributory standards are taken to be attempts to articulate the dimensions of similarity that may exist between cases. This work will be of relevance both to those who have interests in computationally modeling human moral cognition and to those who are interested in how such models may or may not improve our philosophical understanding of such cognition.
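As a rough illustration of the kind of simple recurrent network model mentioned in this abstract, the sketch below sets up a toy recurrent classifier over invented "moral case" feature sequences; the architecture, feature encoding, and labels are assumptions made for illustration only and are not the models actually reported in the paper.

```python
# Minimal sketch of a simple recurrent (Elman-style) classifier over toy "moral cases".
# All dimensions, features and labels are invented for illustration only.
import torch
import torch.nn as nn

class SimpleRecurrentClassifier(nn.Module):
    def __init__(self, n_features=6, hidden_size=8, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)              # final hidden state summarises the case
        return self.out(h.squeeze(0))   # class logits (e.g. permissible / impermissible)

cases = torch.rand(4, 3, 6)             # 4 toy cases, each a sequence of 3 feature vectors
labels = torch.tensor([0, 1, 1, 0])     # invented classifications

model = SimpleRecurrentClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                    # brief training loop on the toy data
    optimiser.zero_grad()
    loss = loss_fn(model(cases), labels)
    loss.backward()
    optimiser.step()
```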
Today there are two major theoretical frameworks in biology. One is the ‘chemical paradigm’, the idea that life is an extremely complex form of chemistry. The other is the ‘information paradigm’, the view that life is not just ‘chemistry’ but ‘chemistry-plus-information’. This implies the existence of a fundamental difference between information and chemistry, a conclusion that is strongly supported by the fact that information and information-based processes like heredity and natural selection simply do not exist in the world of chemistry. Against this conclusion, the supporters of the chemical paradigm have pointed out that information processes are no different from chemical processes because they are both described by the same physical quantities. They may appear different, but this is only because they take place in extremely complex systems. According to the chemical paradigm, in other words, biological information is but a shortcut term that we use to avoid long descriptions of countless chemical reactions. It is intuitively appealing, but it does not represent a new ontological entity. It is merely a derived construct, a linguistic metaphor. The supporters of the information paradigm insist that information is a real and fundamental entity of Nature, but have not been able to prove this point. The result is that the chemical view has not been abandoned and the two paradigms are both coexisting today. Here it is shown that an alternative does exist and is a third theoretical framework that is referred to as the ‘code paradigm’. The key point is that we need to introduce in biology not only the concept of information but also that of meaning, because any code is based on meaning and a genetic code does exist in every cell. The third paradigm is the view that organic information and organic meaning exist in every living system because they are the inevitable results of the processes of copying and coding that produce genes and proteins. Their true nature has eluded us for a long time because they are nominable entities, i.e., objective and reproducible observables that can be described only by naming their components in their natural order. They have also eluded us because nominable entities exist only in artifacts and biologists have not yet come to terms with the idea that life is artifact making. This is the idea that life arose from matter and yet it is fundamentally different from it because inanimate matter is made of spontaneous structures whereas life is made of manufactured objects. It will be shown, furthermore, that the existence of information and meaning in living systems is documented by the standard procedures of science. We do not have to abandon the scientific method in order to introduce meaning in biology. All we need is a science that becomes fully aware of the existence of organic codes in Nature.
Organoids are three-dimensional biological structures grown in vitro from different kinds of stem cells that self-organise, mimicking real organs with organ-specific cell types. Recently, researchers have managed to produce human organoids which have structural and functional properties very similar to those of different organs, such as the retina, the intestines, the kidneys, the pancreas, the liver and the inner ear. Organoids are considered a great resource for biomedical research, as they allow for a detailed study of the development and pathologies of human cells; they also make it possible to test new molecules on human tissue. Furthermore, organoids have helped research take a step forward in the field of personalised medicine and transplants. However, some ethical issues have arisen concerning the origin of the cells that are used to produce organoids and their properties. In particular, there are new, relevant and so-far overlooked ethical questions concerning cerebral organoids. Scientists have created so-called mini-brains as developed as the brain of a few-months-old fetus, albeit smaller and with many structural and functional differences. However, cerebral organoids exhibit neural connections and electrical activity, raising the question of whether they are, or will one day be, somewhat sentient. In principle, this can be measured with some techniques that are already available, which are used for brain-injured non-communicating patients. If brain organoids were to show a glimpse of sentience, an ethical discussion on their use in clinical research and practice would be necessary.
Work on analogy has been done from a number of disciplinary perspectives throughout the history of Western thought. This work is a multidisciplinary guide to theorizing about analogy. It contains 1,406 references, primarily to journal articles and monographs, and primarily to English-language material. Classical through to contemporary sources are included. The work is classified into eight different sections (with a number of subsections). A brief introduction to each section is provided. Keywords and key expressions of importance to research on analogy are discussed in the introductory material. Electronic resources for conducting research on analogy are listed as well.
The problem of concept representation is relevant for many sub-fields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one of the causes of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In some ways artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this current state of affairs. Since it is our opinion that a mature methodology with which to approach knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies which take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology may therefore give better results in many applications.
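One way to picture the tension described here between definitional and prototypical information is the hypothetical sketch below, in which a concept carries both a set of necessary attributes and a weighted prototype; the attribute names and weights are invented for illustration and do not reproduce the paper's own proposals.

```python
# Illustrative sketch: a concept that pairs a classical (definitional) component
# with a weighted prototype. Attribute names and weights are invented examples.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    necessary: set = field(default_factory=set)    # definitional attributes
    typical: dict = field(default_factory=dict)    # attribute -> typicality weight

    def satisfies_definition(self, attributes: set) -> bool:
        return self.necessary <= attributes

    def typicality(self, attributes: set) -> float:
        total = sum(self.typical.values()) or 1.0
        return sum(w for a, w in self.typical.items() if a in attributes) / total

bird = Concept(
    name="bird",
    necessary={"animal", "has_feathers"},
    typical={"flies": 0.8, "sings": 0.5, "small": 0.4},
)

penguin = {"animal", "has_feathers", "swims"}
print(bird.satisfies_definition(penguin))  # True: meets the definition...
print(bird.typicality(penguin))            # ...but scores low on typicality
```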
The discovery of the genetic code has shown that the origin of life has also been the origin of semiosis, and the discovery of many other organic codes has indicated that organic semiosis has been the sole form of semiosis present on Earth in the first three thousand million years of evolution. With the origin of animals and the evolution of the brain, however, a new type of semiosis came into existence, a semiosis that is based on interpretation and is commonly referred to as interpretive, or Peircean semiosis. This suggests that there are two distinct types of semiosis in Nature, one based on coding and one based on interpretation, and all the experimental evidence that we have does support this conclusion. Both in principle and in practice, therefore, there is no conflict between organic semiosis and Peircean semiosis, and yet they have been the object of a fierce controversy because it has been claimed that semiosis is always based on interpretation, even at the cellular level. Such a claim has recently been reproposed in a number of papers and it has therefore become necessary to reexamine it in the light of the proposed arguments.
The concept of contextual emergence has been introduced as a specific kind of emergence in which some, but not all, of the conditions for a higher-level phenomenon exist at a lower level. Further conditions exist in contingent contexts that provide stability conditions at the lower level, which in turn accord the emergence of novelty at the higher level. The purpose of the present paper is to propose that consciousness is a contextually emergent property of self-sustaining systems. The core assumption is that living organisms constitute self-sustaining embodiments of the contingent contexts that accord their emergence. We propose that the emergence of such systems constitutes the emergence of content-bearing systems, because the lower-level processes of such systems give rise to and sustain the macro-level whole in which they are nested, while the emergent macro-level whole constitutes the context in which the lower-level processes can be for something. Such embodied functionality is necessarily and naturally about the contexts that it has embodied. It is this notion of self-sustaining embodied aboutness that we propose to represent a type of content capable of evolving into consciousness.
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are usually inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that effect, the models of machine learning to be employed should either be interpreted ex post or be interpretable by design ex ante.
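The closing distinction between ex post interpretation and interpretability by design can be illustrated with the sketch below, which contrasts a shallow decision tree (readable by design) with a post hoc probe of an opaque ensemble; the dataset and hyperparameters are arbitrary stand-ins, not anything proposed in the paper.

```python
# Illustrative contrast: a model interpretable by design (a shallow decision tree
# whose rules can be printed) versus an ex post interpretation of an opaque model
# (permutation importance on a random forest). Dataset and settings are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable by design: the whole decision procedure is a readable rule set.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Ex post interpretation: the forest itself is opaque, so we probe it afterwards.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```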
In the study of cognitive processes, limitations on computational resources (computing time and memory space) are usually considered to be beyond the scope of a theory of competence, and to be exclusively relevant to the study of performance. Starting from considerations derived from the theory of computational complexity, in this paper I argue that there are good reasons for claiming that some aspects of resource limitations pertain to the domain of a theory of competence.
David Bohm's interpretation of quantum mechanics yields a quantum potential, Q. In his early work, the effects of Q are understood in causal terms as acting through a real (quantum) field which pushes particles around. In his later work (with Basil Hiley), the causal understanding of Q appears to have been abandoned. The purpose of this paper is to understand how the use of certain metaphors leads Bohm away from a causal treatment of Q, and to evaluate the use of those metaphors.
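For background, the quantum potential discussed here arises from writing the wavefunction in polar form; the expression below is the standard single-particle textbook form of Bohm's Q.

```latex
\psi(\mathbf{x},t) = R(\mathbf{x},t)\,e^{iS(\mathbf{x},t)/\hbar},
\qquad
Q(\mathbf{x},t) = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R}.
```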
It is shown that information and meaning can be defined by operative procedures, and that we need to recognize them as new types of natural entities. They are not quantities (neither fundamental nor derived) because they cannot be measured, and they are not qualities because they are not subjective features. Here it is proposed to call them nominable entities, i.e., entities which can be specified only by naming their components in their natural order.
This paper presents the results of training an artificial neural network (ANN) to classify moral situations. The ANN produces a similarity space in the process of solving its classification problem. The state space is subjected to analysis that suggests that holistic approaches to interpreting its functioning are problematic. The idea of a contributory or pro tanto standard, as discussed in debates between moral particularists and generalists, is used to understand the structure of the similarity space generated by the ANN. A spectrum of possibilities for reasons, from atomistic to holistic, is discussed. Reasons are understood as increasing in nonlocality as they move away from atomism. It is argued that contributory standards could be used to understand forms of nonlocality that need not go all the way to holism. It is also argued that contributory standards may help us to understand the kind of similarity at work in analogical reasoning and argument in ethics. Some objections to using state space approaches to similarity are dealt with, as are objections to using empirical and computational work in philosophy.
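As an indication of what an analysis of such a similarity space might look like in practice, the sketch below computes pairwise similarities and a hierarchical clustering over stand-in hidden-layer activations; the activation matrix is random placeholder data, not output of the network described in the paper.

```python
# Sketch of a "similarity space" analysis: given hidden-layer activations a trained
# network produces for a set of cases, compute pairwise cosine similarities and a
# hierarchical clustering. The activation matrix here is random stand-in data.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
hidden = rng.normal(size=(10, 8))           # 10 moral cases x 8 hidden units (stand-in)

distances = pdist(hidden, metric="cosine")  # pairwise dissimilarity between cases
similarity = 1 - squareform(distances)      # similarity matrix for inspection
print(np.round(similarity, 2))

clusters = linkage(distances, method="average")  # hierarchical clustering of the space
print(clusters[:3])                              # first few merge steps
```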
There are currently three major theories on the origin and evolution of the genetic code: the stereochemical theory, the coevolution theory, and the error-minimization theory. The first two assume that the genetic code originated respectively from chemical affinities and from metabolic relationships between codons and amino acids. The error-minimization theory maintains that in primitive systems the apparatus of protein synthesis was extremely prone to errors, and postulates that the genetic code evolved in order to minimize the deleterious effects of the translation errors. This article describes a fourth theory which starts from the hypothesis that the ancestral genetic code was ambiguous and proposes that its evolution took place with a mechanism that systematically reduced its ambiguity and eventually removed it altogether. This proposal is distinct from the stereochemical and the coevolution theories because they do not contemplate any ambiguity in the genetic code, and it is distinct from the error-minimization theory because ambiguity-reduction is fundamentally different from error-minimization. The concept of ambiguity-reduction has been repeatedly mentioned in the scientific literature, but so far it has remained only an abstract possibility because no model has been proposed for its mechanism. Such a model is described in the present article and may be the first step in a new approach to the study of the evolution of the genetic code.
Suppose one hundred prisoners are in a yard under the supervision of a guard, and at some point, ninety-nine of them collectively kill the guard. If, after the fact, a prisoner is picked at random and tried, the probability of his guilt is 99%. But despite the high probability, the statistical chances, by themselves, seem insufficient to justify a conviction. The question is why. Two arguments are offered. The first, decision-theoretic argument shows that a conviction solely based on the statistics in the prisoner scenario is unacceptable so long as the goal of expected utility maximization is combined with fairness constraints. The second, risk-based argument shows that a conviction solely based on the statistics in the prisoner scenario lets the risk of mistaken conviction surge potentially too high. The same, by contrast, cannot be said of convictions solely based on DNA evidence or eyewitness testimony. A noteworthy feature of the two arguments in the paper is that they are not confined to criminal trials and can in fact be extended to civil trials.
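The bare statistical point, together with a generic expected-utility threshold for conviction, can be written out as below; the utility terms are placeholders under a textbook decision-theoretic framing and are not the paper's own formalism.

```latex
% Probability of guilt for a randomly picked prisoner in the scenario
P(\text{guilty}\mid\text{picked at random}) = \frac{99}{100} = 0.99.

% Generic expected-utility comparison (placeholder utilities U, guilt probability p):
\text{convict iff}\quad
p\,U(\text{convict}\mid\text{guilty}) + (1-p)\,U(\text{convict}\mid\text{innocent})
\;>\;
p\,U(\text{acquit}\mid\text{guilty}) + (1-p)\,U(\text{acquit}\mid\text{innocent}).
```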