Digital images taken by mobile phones are the most frequent class of images created today. Due to their omnipresence and the many ways they are encountered, they require a specific focus in research. However, to date, there is no systematic compilation of the various factors that may determine our evaluations of such images, and thus no explanation of how users select and identify relatively “better” or “worse” photos. Here, we propose a theoretical taxonomy of factors influencing the aesthetic appeal of mobile phone photographs. Beyond addressing relatively basic/universal image characteristics, perhaps more related to fast perceptual processing of an image, we also consider factors involved in the slower re-appraisal or deepened aesthetic appreciation of an image. We span this taxonomy across specific picture genres commonly taken—portraits of other people, selfies, scenes, and food. We also discuss the variety of goals, uses, and contextual aspects of users of mobile phone photography. As a working hypothesis, we propose that two main decisions are often made with mobile phone photographs. Users assess images at first glance—by swiping through a stack of images—focusing on visual aspects that might be decisive in classifying them from “low quality” to “acceptable” to, in rare cases, “an exceptionally beautiful picture.” Users also make more deliberate decisions regarding their “favorite” picture, or the desire to preserve or share a picture with others; these decisions are presumably tied to aspects such as content and framing, but also to culture or personality, which have largely been overlooked in empirical research on the perception of photographs. In sum, the present review provides an overview of current focal areas and gaps in research and offers a working foundation for upcoming research on the perception of mobile phone photographs as well as future developments in the fields of image recording and sharing technology.
The use of evidence in medicine is something we should continuously seek to improve. This book seeks to develop our understanding of evidence of mechanism in evaluating evidence in medicine, public health, and social care, and it offers tools to help implement improved assessment of evidence of mechanism in practice. In this way, the book offers a bridge between theoretical and conceptual insights and worries about evidence of mechanism, on the one hand, and practical means of fitting the results into evidence assessment procedures, on the other.
Believable Evidence argues that evidence consists of true beliefs. This claim opens up an entirely overlooked space on the ontology of evidence map, between purely factualist positions and purely psychologist ones. Veli Mitova provides a compelling three-level defence of this view in the first contemporary monograph entirely devoted to the ontology of evidence. First, once we see the evidence as a good reason, metaethical considerations show that the evidence must be psychological and veridical. Second, true belief in particular allows epistemologists to have everything they want from the concept of evidence. Finally, the view helps us locate the source of the normative authority of evidence. The book challenges a broad range of current views on the ontology of reasons and their normative authority, making it a must-read for scholars and advanced students in metaethics and epistemology.
What sorts of things can be evidence for belief? Five answers have been defended in the recent literature on the ontology of evidence: propositions, facts, psychological states, factive psychological states, all of the above. Each of the first three views privileges a single role that the evidence plays in our doxastic lives, at the cost of occluding other important roles. The fifth view, pluralism, is a natural response to such dubious favouritism. If we want to be monists about evidence and accommodate all roles for the concept, we need to think of evidence as propositional, psychological and factive. Our only present option along these lines is the fourth view, which holds that evidence consists of all and only known propositions. But the view comes with some fairly radical commitments. This paper proposes a more modest view—‘truthy psychologism’. According to this view, evidence is also propositional, psychological and factive; but we don’t need the stronger claim that only knowledge can fill this role; true beliefs are enough. I first argue for truthy psychologism by appeal to some standard metaethical considerations. I then show that the view can accommodate all of the roles epistemologists have envisaged for the concept of evidence. Truthy psychologism thus gives us everything we want from the evidence, without forcing us to go either pluralist or radical.
In this chapter we explore the process of extrapolating causal claims from model organisms to humans in pharmacology. We describe and compare four strategies of extrapolation: enumerative induction, comparative process tracing, phylogenetic reasoning, and robustness reasoning. We argue that evidence of mechanisms plays a crucial role in several strategies for extrapolation and in the underlying logic of extrapolation: the more directly a strategy establishes mechanistic similarities between a model and humans, the more reliable the extrapolation. We present case studies from the research on atherosclerosis and the development of statins that illustrate these strategies and the role of mechanistic evidence in extrapolation.
Many influential philosophers have claimed that truth is valuable, indeed so valuable as to be the ultimate standard of correctness for intellectual activity. Yet most philosophers also think that truth is only instrumentally valuable. These commitments make for a strange pair. One would have thought that an ultimate standard would enjoy more than just instrumental value. This paper develops a new argument for the non-instrumental value of truth: inquiry is non-instrumentally valuable; and truth inherits some of its value from the value of inquiry. This makes truth finally but extrinsically valuable, a thesis that to my knowledge has not been directly defended in the literature. I support this thesis by appeal to the notion of epistemic injustice, and through the surprising claim that some goals get their value from the pursuit that aims at them.
When you believe something for a good reason, your belief is in a position to be justified, rational, responsible, or to count as knowledge. But what is the nature of this thing that can make such a difference? Traditionally, epistemologists thought of epistemic normative notions, such as reasons, in terms of the believer's psychological perspective. Recently, however, many have started thinking of them as factive: good reasons for belief are either facts, veridical experiences, or known propositions. This groundbreaking volume reflects major recent developments in thinking about this 'Factive Turn', and advances the lively debate around it in relation to core epistemological themes including perception, evidence, justification, knowledge, scepticism, rationality, and action. With clear and comprehensive chapters written by leading figures in the field, this book will be essential for students and scholars looking to engage with the state of the art in epistemology.
The topic of epistemic decolonisation is currently the locus of lively debate both in academia and in everyday life. The aim of this piece is to isolate a few main strands in the philosophical literature...
A particular tradition in medicine claims that a variety of evidence is helpful in determining whether an observed correlation is causal. In line with this tradition, it has been claimed that establishing a causal claim in medicine requires both probabilistic and mechanistic evidence. This claim has been put forward by Federica Russo and Jon Williamson. As a result, it is sometimes called the Russo–Williamson thesis. In support of this thesis, Russo and Williamson appeal to the practice of the International Agency for Research on Cancer (IARC). However, this practice presents some problematic cases for the Russo–Williamson thesis. One response to such cases is to argue in favour of reforming these practices. In this paper, we propose an alternative response, according to which such cases are in fact consistent with the Russo–Williamson thesis. This response requires maintaining that there is a role for mechanism-based extrapolation in the practice of the IARC. However, the response works only if this mechanism-based extrapolation is reliable, and some have argued against the reliability of mechanism-based extrapolation. Against this, we provide some reasons for believing that reliable mechanism-based extrapolation is going on in the practice of the IARC. The reasons are provided by appealing to the role of robustness analysis.
What is going on when we explain someone’s belief by appeal to stereotypes associated with her gender, sexuality, race, or class? In this paper I try to motivate two claims. First, such explanations involve an overlooked form of epistemic injustice, which I call ‘explanatory injustice’. Second, the language of reasons helps us shed light on the ways in which such injustice wrongs the victim qua epistemic agent. In particular, explanatory injustice is best understood as occurring in explanations of belief through a so-called reason-why when the correct explanation in fact features a motivating reason. I reach this conclusion by arguing that such explanations are a kind of normative inversion of confabulation. Thinking in these terms helps us see both how certain reason-ascriptions empower while others disempower, and how through them believers are robbed of agency over their beliefs.
This article compares the epistemic roles of theoretical models and model organisms in science, and specifically the role of non-human animal models in biomedicine. Much of the previous literature on this topic shares an assumption that animal models and theoretical models have a broadly similar epistemic role—that of indirect representation of a target through the study of a surrogate system. Recently, Levy and Currie have argued that model organism research and theoretical modelling differ in the justification of model-to-target inferences, such that a unified account based on the widely accepted idea of modelling as indirect representation does not similarly apply to both. I defend a similar conclusion, but argue that the distinction between animal models and theoretical models does not always track a difference in the justification of model-to-target inferences. Case studies of the use of animal models in biomedicine are presented to illustrate this. However, Levy and Currie’s point can be argued for in a different way. I argue for the following distinction. Model organisms function as surrogate sources of evidence, from which results are transferred to their targets by empirical extrapolation. By contrast, theoretical modelling does not involve such an inductive step. Rather, theoretical models are used for drawing conclusions from what is already known or assumed about the target system. Codifying assumptions about the causal structure of the target in external representational media allows one to apply explicit inferential rules to reach conclusions that could not be reached with unaided cognition alone.
Understanding the nature of science (NOS) is widely considered an important educational objective, and views of NOS are closely linked to science teaching and learning. Thus there is a lively discussion about what understanding NOS means and how it is reached. As a result of analyses in educational, philosophical, sociological and historical research, a worldwide consensus about the content of NOS teaching is said to have been reached. This consensus content is listed as general statements about science, which students are supposed to understand during their education. Unfortunately, decades of research have demonstrated that teachers and students alike do not possess an appropriate understanding of NOS, at least as far as it is defined at this general level. One reason for such failure might be that formal statements about NOS and scientific knowledge can only really be understood after having been contextualized in actual cases. Typically, NOS is studied as contextualized in reconstructed historical case stories. When the objective is to educate scientifically and technologically literate citizens, as well as the scientists of the near future, studying NOS in the contexts of contemporary science is encouraged. Such contextualizations call for a revision of the characterization of NOS and of the goals of teaching about NOS. As a consequence, this article gives two examples of studying NOS in the contexts of scientific practices with practicing scientists: an interview study with nanomodellers considering NOS in the context of their actual practices, and a course on the nature of scientific modelling for science teachers employing the same interview method as a study method. Such scrutiny opens up rarely discussed areas of and viewpoints on NOS, as well as aspects that practising scientists consider important.
Kerry et al. criticize our discussion of causal knowledge in evidence-based medicine (EBM) and our assessment of the relevance of their dispositionalist ontology for EBM. Three issues need to be addressed in response: (1) problems concerning transfer of causal knowledge across heterogeneous contexts; (2) how predictions about the effects of individual treatments based on population-level evidence from RCTs are fallible; and (3) the relevance of ontological theories like dispositionalism for EBM.
Inconsistencies between scientific theories have been studied, by and large, from the perspective of paraconsistent logic. This approach considers the formal properties of theories and the structure of inferences one can legitimately draw from theories. However, inconsistencies can also be analysed from the perspective of modelling practices, in particular how modelling practices may lead scientists to form opinions and attitudes that are different, but not necessarily inconsistent. In such cases, it is preferable to talk about disagreement, rather than inconsistency. Disagreement may originate in, or concern, a number of epistemic, socio-political or psychological factors. In this paper, we offer an account of the ‘loci and reasons’ for disagreement at different stages of the scientific process. We then present a controversial episode in the health sciences: the studies on hypercholesterolemia. The causes and effects of high levels of cholesterol in blood have been long and hotly debated, to the point of deserving the name of ‘cholesterol wars’; the debate, to be sure, isn’t settled yet. In this contribution, we focus on some selected loci and reasons for disagreement that occurred between 1920 and 1994 in the studies on hypercholesterolemia. We hope that our analysis of ‘loci and reasons’ for disagreement may shed light on the cholesterol wars, and possibly on other episodes of scientific disagreement.
In this paper, I investigate the issue of the contingency and inevitability of science. First, I point out valuable insights from the existing discussion about the issue. I then formulate a general framework, built on the notion of contrastive explanation and counterfactuals, that can be used to approach questions of contingency of science. I argue, with an example from the existing historiography of science, that this framework could be useful to historians of science. Finally, I argue that this framework shows the existing views on historical contingency and counterfactuals in a new light. The framework also shows the value of existing historiography in philosophical debates.
Synthetic biology research is often described in terms of programming cells through the introduction of synthetic genes. Genetic material is seemingly attributed with a high level of causal responsibility. We discuss genetic causation in synthetic biology and distinguish three gene concepts differing in their assumptions of genetic control. We argue that synthetic biology generally employs a difference-making approach to establishing genetic causes, and that this approach does not commit to a specific notion of genetic program or genetic control. Still, we suggest that a strong program concept of genetic material can be used as a successful heuristic in certain areas of synthetic biology. Its application requires control of causal context, and may stand in need of a modular decomposition of the target system. We relate different modularity concepts to the discussion of genetic causation and point to possible advantages of and important limitations to seeking modularity in synthetic biology systems.
By looking at videogame production through a two-vector model of design – a practice determined by the interplay between economic and technological evolution – we argue that shared screen play, as both collaboration and competition, originally functioned as a desirable pattern in videogame design, but has since become problematic due to industry transformations. This is introduced as an example of what we call design vestigiality: momentary loss of a design pattern’s contextual function due to techno-economic evolution.
John Dewey's writings on social intelligence, collective intelligence and the intelligence of the public have gained renewed attention, especially within democratic theory and democratic education. It has been proposed that pragmatism in general, and Dewey in particular, offer an alternative model for democratic participation. This model shares many of the goals of deliberative democratic theory or critical theory, but is proposed to be capable of dodging some of the problems often associated with them—such as powerlessness in the face of the rise of non-democratic populist movements that exploit the very means and apparatus of democracy. It is in part this allegedly non-democratic and non-intelligent populism...
In this book, I defend the present-centered approach in historiography of science (i.e. the study of the history of science), build an account of causal explanations in historiography of science, and show the fruitfulness of the approach and the account when we attempt to understand science.

The present-centered approach defines historiography of science as a field that studies the developments that led to the present science. I argue that the choice of the targets of studies in historiography of science should be directly connected to our values and preferences in an intersubjective process. The main advantage of this approach is that it gives a clear motivation for historiography of science and avoids or solves stubborn conceptual and practical problems within the field.

The account of causal explanations is built on the notions of counterfactual scenarios and contrastive question-answer pairs. I argue that if and only if we track down patterns of counterfactual dependencies can we understand history. Moreover, I define the notions of historical explanation, explanatory competition, explanatory depth, and explanatory resources.

Finally, I analyze the existing historiography of science with the framework built in the previous chapter, and I show that this framework clarifies many first-order (i.e. concerning the history of science) and meta-level issues (i.e. concerning the nature of science in general) that historians and philosophers tackle. As an illustration of the philosophical power of the framework, I explicate the notion of local explanation and analyze the question of whether the developments of science were necessary or contingent.
A logical approach to Bell's Inequalities of quantum mechanics has been introduced by Abramsky and Hardy [2]. We point out that the logical Bell's Inequalities of [2] are provable in the probability logic of Fagin, Halpern and Megiddo [4]. Since it is now considered empirically established that quantum mechanics violates Bell's Inequalities, we introduce a modified probability logic, which we call quantum team logic, in which Bell's Inequalities are not provable, and prove a Completeness Theorem for this logic. To this end we generalise the team semantics of dependence logic [7] first to probabilistic team semantics, and then to what we call quantum team semantics.
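To make the shape of a logical Bell inequality concrete, here is a minimal sketch in the spirit of Abramsky and Hardy [2] (a paraphrase, not a quotation of their formulation): if a family of Boolean formulas over the measured variables is jointly contradictory, then classical probability bounds the sum of their probabilities.

% Logical Bell inequality, paraphrasing the form in [2]:
% if \varphi_1, \dots, \varphi_N cannot all be true at once, then
% under any single classical probability distribution
\[
  \varphi_1 \wedge \dots \wedge \varphi_N \vdash \bot
  \quad\Longrightarrow\quad
  \sum_{i=1}^{N} \Pr(\varphi_i) \;\le\; N - 1 .
\]

The classical bound holds because at each sample point at least one of the jointly contradictory formulas is false, so the sum of their indicator functions is at most N − 1 pointwise; quantum correlations can exceed this bound, which is why the inequality is provable in the probability logic of [4] but not in the quantum team logic introduced here.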
Interventionism is a theory of causation with a pragmatic goal: to define causal concepts that are useful for reasoning about how things could, in principle, be purposely manipulated. In its original presentation, Woodward’s interventionist definition of causation is relativized to an analyzed variable set. In later work, Woodward changes the definition of the most general interventionist notion of cause, contributing cause, so that it is no longer relativized to a variable set. This derelativization of interventionism has not attracted much attention, presumably because it is seen as an unproblematic way to save the intuition that causal relations are objective features of the world. This paper first argues that this move has problematic consequences. Derelativization entails two concepts of unmediated causal relation that are not coextensional, but which nonetheless do not entail different conclusions about manipulability relations within any given variable set. This is in conflict with the pragmatic orientation at the core of interventionism. The paper then considers various approaches for resolving this tension but finds them all wanting. It is concluded that interventionist causation should not be derelativized in the first place. Various considerations are offered rendering that conclusion acceptable.
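As a hedged illustration of what relativization to a variable set amounts to, here is a toy structural-equation sketch (the variables, equations, and numbers are hypothetical, not drawn from Woodward): relative to the variable set {Z, X, Y}, X counts as a cause of Y because an ideal intervention that overrides X's own equation changes the value of Y.

# Toy structural causal model over the variable set V = {Z, X, Y}.
# Hypothetical example for illustration; not Woodward's own formalism.

def simulate(do_x=None):
    """Evaluate the model, optionally intervening to set X directly."""
    z = 1.0                                  # exogenous variable
    x = 2.0 * z if do_x is None else do_x    # an intervention overrides X's equation
    y = 3.0 * x + z                          # Y depends on both X and Z
    return {"Z": z, "X": x, "Y": y}

# Relative to V, X is a (contributing) cause of Y: intervening on X
# changes Y while Z keeps the value given by its own equation.
baseline = simulate()            # Y = 3*2 + 1 = 7.0
intervened = simulate(do_x=5.0)  # Y = 3*5 + 1 = 16.0
print(baseline["Y"], intervened["Y"])

Whether such a manipulability claim holds is evaluated within the chosen variable set; the paper's point is that dropping this relativization yields causal concepts that come apart without making any difference to manipulability claims inside any one such set.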
In this paper, I explicate desiderata for accounts of explanation in historiography. I argue that a fully developed account of explanation in historiography must explicate many explanation-related notions in order to be satisfactory. In particular, it is not enough that an account defines the basic structure of explanation. In addition, the account of explanation must be able to explicate notions such as minimal explanation, complete explanation, historiographical explanation, explanatory depth, explanatory competition, and explanatory goal. Moreover, the account should also tell how explananda can be chosen in a motivated way. Furthermore, the account should be able to clarify notions that are closely connected with explanation such as historical contingency. Finally, it is important that the account is able to recognize when explanation-related notions and issues are so closely intertwined that we are in danger of not seeing the differences between them. In other words, I argue that a satisfactory account of explanation in historiography must have the power to explicate central explanation-related notions and to clarify discussions where the differences between the notions are obscure. In order to explicate these desiderata, I formulate a counterfactual account of explanation and show how that account is able to explicate explanation-related notions and clarify issues that are connected with historiographical explanations. The success of the counterfactual account suggests that historiographical explanations do not differ fundamentally from explanations in many other fields.
In this article, the authors present several topics related to the nascent development of a merit-based hiring system in North Macedonia. This paper employs a normative approach. We advocate for a merit-based hiring system, similar to the American model. First, we explore the pressure exerted by the European Commission to adopt a merit-based system at all levels of government as a condition for entry into the European Union. Second, we delve into the patronage system in North Macedonia. Third, we provide a short history of patronage in the United States and the difficulty that nation had in curbing its entrenched patronage system. Fourth, we discuss the advantages of a merit-based hiring system, namely the creation of good governance, the improvement of employee morale, the development of more public confidence in government, the reduction of the influence of ethnic politics and the furtherance of the rule of law. Finally, we present an example drawn from the American federal government about the basic procedures of a merit-based hiring process.
In this paper we study a specific subclass of abstract elementary classes. We construct a notion of independence for these AECs and show that under simplicity the notion has all the usual properties of first-order non-forking over complete types. Our approach generalizes the context of $\aleph_0$-stable homogeneous classes and excellent classes. Our set of assumptions follows from disjoint amalgamation, existence of a prime model over $\emptyset$, the Löwenheim–Skolem number being $\omega$, $\aleph_0$-tameness, and a property we call finite character. We also start the study of these classes from the $\aleph_0$-stable case. Stability in $\aleph_0$ and $\aleph_0$-tameness can be replaced by categoricity above the Hanf number. Finite character is the main novelty of this paper. Almost all examples of AECs have this property, and it allows us to use weak types, as we call them, in place of Galois types.
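On one hedged reading (our gloss, not a quotation of the paper's definition), finite character says that Galois types are determined by their restrictions to finite subsets of the parameter set:

% Finite character, on a hedged reading: Galois types over A are
% determined by their restrictions to finite subsets of A.
\[
  \operatorname{tp}_{g}(a/A) = \operatorname{tp}_{g}(b/A)
  \iff
  \operatorname{tp}_{g}(a/B) = \operatorname{tp}_{g}(b/B)
  \ \text{ for every finite } B \subseteq A .
\]

If this reading is right, it is what licenses working with weak types: agreement on all finite restrictions suffices to pin down a type.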
Randomised clinical trials (RCTs) involve procedures such as randomisation, blinding, and placebo use, which are not part of standard medical care. Patients asked to participate in RCTs often experience difficulties in understanding the meaning of these procedures and their justification.
We give a characterization of those stable theories whose $\omega_{1}$-saturated models have a "Shelah-style" structure theorem. We use this characterization to prove that if a theory is countable, stable, and 1-based without dop or didip, then its $\omega_{1}$-saturated models have a structure theorem. Prior to us, this was proved in a paper of Hart, Pillay, and Starchenko. Some other remarks are also included.