This paper aims to contribute to our knowledge of the teaching of philosophy at the Colegio de San Carlos in Buenos Aires toward the end of the eighteenth century. To that end, it examines the proposals of Juan Baltasar Maziel (1727-1788) for that Colegio in the areas of Philosophy, Theology, and Law. This information is complemented by an account of other juridical-political ideas of Maziel, set in the context of his ecclesiastical activity.
In this paper I argue against Mentalism, the claim that all the factors contributing to the epistemic justification of a doxastic attitude towards a proposition by a subject S are mental states of S. My objection is that there is a special kind of fact (what I call a "support fact") that contributes to the justification of any belief and that is not mental. My argument against Mentalism, then, is the following anti-mentalism argument: 1. If Mentalism is true, then support facts are mental. 2. Support facts are not mental. Therefore, 3. Mentalism is not true. In what follows I explain what support facts are, and then defend each of the premises of my argument. I conclude with some remarks on the relevance of my argument to the larger internalism/externalism debate(s) in epistemology.
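The numbered argument in the abstract above is an instance of modus tollens; as a minimal sketch, its validity can be checked in Lean (the proposition names below are placeholders introduced here for illustration, not the paper's terminology):

```lean
-- Placeholder propositions standing in for the paper's claims.
variable (Mentalism SupportFactsAreMental : Prop)

-- Premise 1: if Mentalism is true, then support facts are mental.
-- Premise 2: support facts are not mental.
-- Conclusion: Mentalism is not true (modus tollens).
example (p1 : Mentalism → SupportFactsAreMental)
    (p2 : ¬ SupportFactsAreMental) : ¬ Mentalism :=
  fun h => p2 (p1 h)
```

The philosophical work of the paper, of course, lies in defending premise 2, not in the (trivially valid) inference itself.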
This book is dedicated to clarifying ambiguous concepts from the world of creativity and innovation. One of the initial triggers for its development was the perceived ambiguity of the pairs Design vs. Design Thinking and Innovation vs. Invention. Designers and innovation consultants are frequently questioned by their clients about the relationships between these kinds of concepts. Did the second emerge from the first, or vice versa? Is one part of the other? What are the similarities and what are the differences? This conceptual confusion extends to many other ambiguous concepts in the world of innovation and creativity. What is the difference between Radical and Disruptive Innovation? Is Social Innovation the same as Social Intervention? And, regarding creativity, are Creativity and Creative Thinking the same? In this book the reader will find answers to these kinds of questions. Twenty-eight authors from ten different countries and cultural backgrounds question current definitions and perceptions by comparing different sources and ideas, or simply by giving their personal opinions. Some of the authors have an academic background, others a practical one, as entrepreneurs, innovation practitioners in companies, or innovation/design consultants.
Human adaptive behavior in sensorimotor control aims to increase confidence in feedforward mechanisms when sensory afferents are uncertain. These feedforward mechanisms are thought to rely on predictions from internal models. We investigated whether the brain uses an internal model of physical laws to help estimate body equilibrium when tactile inputs from the foot sole are depressed by carrying extra weight. As direct experimental evidence for such a model is limited, we compared Judoka athletes, who are thought to have built up internal models of external loads, with Non-Athlete participants and Dancers. Using electroencephalography, we first tested the hypothesis that the influence of tactile inputs is amplified by descending cortical efferent signals. We compared the amplitude of the P1N1 somatosensory cortical potential evoked by electrical stimulation of the foot sole in participants standing still with their eyes closed. We found smaller P1N1 amplitudes in the Load compared to the No Load condition in both Non-Athletes and Dancers. This decreased neural response to tactile stimulation was associated with greater postural oscillations. By contrast, in the Judoka group the early neural response to tactile stimulation was not downregulated in the Load condition. This suggests that the brain can selectively increase the functional gain of sensory inputs during challenging equilibrium tasks in which tactile inputs are mechanically depressed by wearing a weighted vest. In Judokas, the activation of regions such as the right posterior inferior parietal cortex as early as the P1N1 is the likely reason the neural responses remained similar in the Load and No Load conditions. An internal model of the added weight stored in the right PPC, a region known to be involved in maintaining a coherent representation of one's body in space, can optimize predictive mechanisms in situations with high balance constraints.
This hypothesis was confirmed by showing that the postural reactions evoked by a translation of the support surface on which participants stood while wearing extra weight were improved in Judokas.
The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Worries about potential bias, accountability and responsibility, patient autonomy, and compromised trust arise with black box algorithms; these worries connect epistemic concerns with normative issues. In this paper, we argue that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By showing that more transparency in algorithms is not always necessary, and that computational processes are in any case methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency yet supports the reliability of algorithms, justifies the belief that the results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is therefore necessary but not sufficient for normatively justifying physicians to act: deliberation about the results of reliable algorithms is still required to determine what the desirable action is. Thus understood, these challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informaticians and data scientists, black box algorithms can contribute to improving medical care.
Several philosophical issues in connection with computer simulations rely on the assumption that the results of simulations are trustworthy. Examples include the debate on the experimental role of computer simulations (483–496, 2009; Morrison in Philos Stud 143:33–57, 2009), the nature of computer data (in: Durán and Arnold (eds.), Computer Simulations and the Changing Face of Scientific Experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in the same volume), and the explanatory power of computer simulations (277–292, 2008; Durán in Int Stud Philos Sci 31:27–45, 2017). The aim of this article is to show that these authors are right in assuming that the results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations at its center. The article ends with a discussion of four sources of computational reliabilism: verification and validation, robustness analysis for computer simulations, a history of successful implementations, and the role of expert knowledge in simulations.
In the centenary year of Turing’s birth, many good things are sure to be written about him, but it is hard to find something new to say. That is the chief merit of this article: it shows how von Neumann’s architecture for the modern computer is a serendipitous consequence of the universal Turing machine, a device conceived to solve a logical problem.
In the seventeenth century, William Molyneux asked John Locke whether a newly sighted person could reliably distinguish a cube from a sphere without the aid of touch. While this might seem an easily testable question, answering it is not so straightforward. In this paper, I present this question and claim that some distinctions regarding the concept of consciousness are important for an empirical solution. First, I describe Molyneux’s question as it was proposed by Molyneux himself, and briefly review its early debates. Second, I go over some empirical attempts to solve the question, including recent experiments from neuroscience. Third, I introduce some distinctions with regard to consciousness, and in the following section I apply them to the Molyneux case. Finally, I briefly consider some consequences of this approach. I conclude by suggesting that researchers pay attention to the different senses in which Molyneux’s question might be posed for empirical purposes.
The volume gathers theoretical contributions on human rights and global justice in the context of international migration. It addresses the need to reconsider human rights and theories of justice in connection with the transformation of the social frames of reference that international migrations foster. The main goal of this collective volume is to analyze and propose principles of justice that address two main challenges connected to international migrations, challenges that are analytically distinct although inextricably linked in normative terms: to better distribute the finite resources of the planet among all its inhabitants, and to ensure the recognition of human rights in current migration policies. Given the very nature of the debate on global justice and the implementation of human rights and migration policies, this interdisciplinary volume aims to transcend the academic sphere and appeal to a broad public through argumentative reflections. Challenging the Borders of Justice in the Age of Migrations represents a fresh and timely contribution. -/- IN A TIME when national interests are structurally overvalued and borders increasingly strengthened, it is a breath of fresh air to read a book in which migration flows are not turned into a threat. We simply cannot understand the world around us through the lens of the ‘migration crisis’, a message the authors of this book have perfectly understood. Aiming at a strong link between theories of global justice and policies of border control, this timely book combines the normative and the empirical to deeply question the way our territorial boundaries are justified. Professor Ronald Tinnevelt, Radboud University Nijmegen, The Netherlands. -/- THIS BOOK IS essential reading for those frustrated by the limitations of the dominant ways of thinking about global justice, especially in relation to migration.
By bringing together discussions of global justice, cosmopolitan political theory and migration, this collection of essays has the potential to transform the way in which we think and debate the critical issues of membership and movement. Together the essays present a critical interdisciplinary approach to international migration, human rights and global justice, challenging disciplinary borders as well as political ones. Professor Phil Cole, University of the West of England, UK.
Much NGO fund-raising and publicity concerns disasters, emergencies and the immediate relief of suffering. Donations and support may follow, but all too often they are prompted by a superficially informed compassion or guilt, with donors having little understanding of the results of their action. For all their impact, such campaigns can amount to demagogic sentimentalism, leading to ‘compassion fatigue’ and a lack of sustained support once media attention moves elsewhere. They thus undermine the unique mission of NGOs themselves. This paper urges a different and more strategic approach to communication by NGOs, one which takes account of their unique status and their mission to promote solidarity. It argues that as well as solving problems of underdevelopment, NGOs need to remain independent and to shape public opinion if they are to flourish. And for this they need stable funding from informed donors giving in a spirit of solidarity to support development carried out explicitly in the name of human solidarity. The paper sets out guidelines for NGOs to communicate in ways likely to gain the support of such donors. And it describes the la Florida project in Colombia as an example of how the beneficiary can, in the spirit of solidarity, be brought to the centre of NGO action and communication.
This article aims to develop a new account of scientific explanation for computer simulations. To this end, two questions are answered: what is the explanatory relation for computer simulations, and what kind of epistemic gain should be expected? For several reasons tied to the benefits and needs of computer simulations, these questions are best answered within the unificationist model of scientific explanation. Unlike previous efforts in the literature, I submit that the explanatory relation holds between the simulation model and the results of the simulation. I also argue that our epistemic gain goes beyond the unificationist account, encompassing a practical dimension as well.
From our everyday commuting to the gold medalist’s world-class performance, skillful actions are characterized by fine-grained, online agentive control. What is the proper explanation of such control? There are two traditional candidates: intellectualism explains skillful agentive control by reference to the agent’s propositional mental states; anti-intellectualism holds that propositional mental states or reflective processes are unnecessary, since skillful action is fully accounted for by automatic coping processes. I examine the evidence for three psychological phenomena recently held to support anti-intellectualism and argue that it supports neither traditional candidate but an intermediate attention-control account, according to which the top-down, intention-directed control of attention is a necessary component of skillful action. Only this account recognizes both the role of automatic control in skilled action and the need for higher-order cognition to thread automatic processes together into a unified, skillful performance. This applies to bodily skillful action in general, from the world-class performance of experts to mundane, habitual action. The attention-control account stresses that, for intentions to play their role as top-down modulators of attention, agents must sustain the intention’s activation; hence the need for reflection throughout performance.
The traditional image of northern Iberian mountain settlements is that they are largely egalitarian, homogeneous survivals of archaic forms of 'agrarian collectivism'. In this book, based on extensive fieldwork and a detailed study of local records, Brian Juan O'Neill offers a different perspective, questioning prevailing views on empirical as well as theoretical and methodological grounds. Through a detailed examination of three major areas of social life in one particular hamlet - land tenure, cooperative labour exchanges, and marriage and inheritance practices - the author demonstrates the predominance of forms of institutionalized economic inequality and social differentiation within the peasantry. Situating the local study within a wider European and Mediterranean ethnographic and geographical framework, O'Neill offers a refreshing and challenging way of combining the research methods of anthropology with those of social and economic history. His book will appeal to anthropologists, historians, sociologists, geographers and demographers interested in the present and past social structure of European village communities, as well as to those concerned with the growing links between anthropology and history.
In this study, we examine whether, how, and when corporate social responsibility (CSR) increases promotive and prohibitive voice, drawing on ethical climate theory and the multi-experience model of ethical climate. Data from 382 employees at two time points are examined. Results show that CSR is positively related to both promotive and prohibitive voice. Other-focused and self-focused climates mediate the relationship between CSR and the two types of voice. Moreover, humble leadership moderates the positive relationship between CSR and other-focused climate, as well as the negative relationship between CSR and self-focused climate. Humble leadership also moderates the indirect effects of CSR on the two kinds of voice through other-focused and self-focused climates. The findings provide important insights into how and when CSR influences employee voice.
Many philosophical accounts of scientific models fail to distinguish between a simulation model and other forms of models. This failure is unfortunate, because there are important methodological and epistemological differences that matter for their philosophical understanding. The core claim presented here is that simulation models are rich and complex units of analysis in their own right, that they depart from known forms of scientific models in significant ways, and that a proper understanding of the kind of model that simulations are is fundamental for their philosophical assessment. I argue that simulation models can be distinguished from other forms of models by the many algorithmic structures, representation relations, and new semantic connections involved in their architecture. In this article, I reconstruct a general architecture for a simulation model, one that faithfully captures the complexities involved in most scientific research with computer simulations. Furthermore, I submit that a new methodology is needed to turn such an architecture into a fully functional, computationally tractable computer simulation. I discuss this methodology, which I call recasting, and argue for its philosophical novelty. If these efforts are heading towards the right interpretation of simulation models, then computer simulations can be shown to shed new light on the philosophy of science. To illustrate the potential of this interpretation of simulation models, I briefly discuss simulation-based explanations as a novel approach to questions about scientific explanation.
We give examples of calculi that extend Gentzen’s sequent calculus LK by unsound quantifier inferences in such a way that derivations lead only to true sequents, and proofs therein are nonelementarily shorter than LK-proofs.
Researchers often claim that self-control is a skill. It is also often stated that self-control exertions are intentional actions. However, no account has yet been proposed of the skillful agency that makes self-control exertion possible, so our understanding of self-control remains incomplete. Here I propose the skill model of self-control, which accounts for skillful agency by tackling the guidance problem: how can agents transform their abstract and coarse-grained intentions into the highly context-sensitive, fine-grained control processes required to select, revise and correct strategies during self-control exertion? The skill model borrows conceptual tools from ‘hierarchical models’ recently developed in the context of motor skills, and asserts that self-control crucially involves the ability to manage the implementation and monitoring of regulatory strategies as the self-control exercise unfolds. Skilled agents are able to do this by means of flexible practical reasoning: a fast, context-sensitive type of deliberation that incorporates non-propositional representations into the formation and revision of the mixed-format intentions that structure self-control exertion. The literatures on implementation intentions and motivation framing offer corroborating evidence for the theory. As a surprising result, the skill of self-control that allows agents to overcome the contrary motivations they experience is self-effacing: instead of continuously honing this skill, expert agents replace it with a different one, which minimizes or prevents contrary motivations from arising in the first place. Thus, the more expert you are at self-control, the less likely you are to use it.
Background: Distorted gambling-related cognitions are tightly related to gambling problems and are one of the main targets of treatment for disordered gambling, but their etiology remains uncertain. Although folk wisdom and some theoretical approaches have linked them to lower domain-general reasoning abilities, evidence for that relationship remains unconvincing. Method: In the present cross-sectional study, the relationship between probabilistic and abstract reasoning, as measured by the Berlin Numeracy Test (BNT) and the Matrices Test, respectively, and the five dimensions of the Gambling-Related Cognitions Scale was tested in a sample of 77 patients with gambling disorder and 58 individuals without gambling problems. Results and interpretation: Neither BNT nor Matrices scores were significantly related to gambling-related cognitions according to frequentist analyses, performed both with and without group in the models. Bayesian correlation analyses largely supported the null hypothesis, i.e., the absence of relationships between the measures of interest. This pattern of results reinforces the idea that distorted cognitions do not originate in a general lack of understanding of probability or in low fluid intelligence, but probably result from motivated reasoning.
We argue that if evidence were knowledge, then there wouldn’t be any Gettier cases, and justification would fail to be closed in egregious ways. But there are Gettier cases, and justification does not fail to be closed in egregious ways. Therefore, evidence isn’t knowledge.
Price discrimination is the practice of charging different customers different prices for the same product. Many people consider price discrimination unfair, but economists argue that in many cases it is more likely than uniform pricing to lead to greater welfare, sometimes for every party in the transaction. This article shows (i) that there are many situations in which it is necessary to engage in differential pricing in order to make the provision of a product possible at all; and (ii) that in many such situations the seller does not obtain an above-average rate of return. It concludes that price discrimination is not inherently unfair. The article also contends that even when conditions (i) and/or (ii) do not obtain, price discrimination is not necessarily unethical. In itself, the fact that some people get an even better deal than others does not entail that the latter are wronged.
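Claim (i), that differential pricing can be what makes provision of a product possible at all, can be illustrated with a small numerical sketch. The fixed cost and willingness-to-pay figures below are invented for illustration and are not drawn from the article:

```python
# Hypothetical market: a fixed cost that no single uniform price can recover,
# but that differential pricing can.
FIXED_COST = 150.0
MARGINAL_COST = 0.0

# (number of buyers, willingness to pay) for two customer groups
groups = [(10, 10.0), (20, 4.0)]

def profit_at_uniform_price(price):
    """Profit when every buyer faces the same price."""
    units = sum(n for n, wtp in groups if wtp >= price)
    return units * (price - MARGINAL_COST) - FIXED_COST

# A rational seller would pick one of the willingness-to-pay levels.
best_uniform = max(profit_at_uniform_price(p) for _, p in groups)

# Discriminating seller: charge each group its own willingness to pay.
discriminating = sum(n * (wtp - MARGINAL_COST) for n, wtp in groups) - FIXED_COST

print(best_uniform)     # -30.0: no uniform price covers the fixed cost
print(discriminating)   # 30.0: with differential pricing the product is provided
```

Here no uniform price is profitable, so under uniform pricing the product would simply not be offered; charging each group a different price covers the fixed cost, and no buyer pays more than their willingness to pay.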
Perhaps the hottest topic in the philosophy of chemistry is the relationship between chemistry and physics. The problem finds one of its main manifestations in the debate about the nature of molecular structure, given by the spatial arrangement of the nuclei in a molecule. The traditional strategy for addressing the problem is to consider chemical cases that challenge the definition of molecular structure in quantum-mechanical terms. Instead of taking that top-down strategy, in this paper we face the problem of the reduction of molecular structure to quantum mechanics from a bottom-up perspective: our aim is to show how the theoretical peculiarities of quantum mechanics stand against the possibility of molecular structure, defined in terms of the spatial relations of nuclei conceived as individual localized objects. We argue that, according to the theory, quantum “particles” are not individuals that can be identified as different from others and reidentified through time; therefore, they do not have the ontological stability necessary to maintain the relations that could lead to a spatially definite system with an identifiable shape. On the other hand, although quantum chemists use the resources supplied by quantum mechanics with successful results, this does not amount to reduction: their “approximations” add assumptions that are not justified in the context of quantum mechanics or are even inconsistent with the very formal structure of the theory.
Intuitively, there is a difference between knowledge and mere belief. Contemporary philosophical work on the nature of this difference has focused on scenarios known as “Gettier cases.” Designed as counterexamples to the classical theory that knowledge is justified true belief, these cases feature agents who arrive at true beliefs in ways that seem reasonable or justified, while nevertheless seeming to lack knowledge. Prior empirical investigation of these cases has raised questions about whether laypeople generally share philosophers’ intuitions about them, or whether lay intuitions vary depending on individual factors (e.g., ethnicity) or on factors related to specific types of Gettier cases (e.g., cases that include apparent evidence). We report an experiment on lay attributions of knowledge and justification for a wide range of Gettier cases and for a related class of controversial cases known as Skeptical Pressure cases, which are also thought by philosophers to elicit intuitive denials of knowledge. Although participants rated true beliefs in Gettier and Skeptical Pressure cases as justified, they were significantly less likely to attribute knowledge in these cases than in matched true-belief cases. This pattern of response was consistent across different variations of Gettier cases and did not vary by ethnicity or gender, although attributions of justification were positively related to measures of empathy. These findings therefore suggest that, across demographic groups, laypeople share epistemic concepts similar to philosophers’, recognizing a difference between knowledge and justified true belief.
When one wants to use citizen input to inform policy, what should the standards of informedness on the part of the citizens be? While there are moral reasons to allow every citizen to participate and have a voice on every issue, regardless of education and involvement, designers of participatory assessments have to make decisions about how to structure deliberations, as well as how much background information and deliberation time to provide to participants. After assessing different frameworks for the relationship between science and society, we use Philip Kitcher's framework of Well-Ordered Science to propose an epistemic standard for how citizen deliberations should be structured. We explore what potential standards follow from this epistemic framework, focusing on significance versus scientific and engineering expertise. We argue that citizens should be tutored on the historical context of why scientific questions became significant and were deemed scientifically and socially valuable, and that if citizens report they are capable of weighing in on an issue, then they should be able to do so. We explore what this standard can mean by looking at actual citizen deliberations from the 2014 NASA ECAST Asteroid Initiative citizen forums. We code different vignettes of citizens debating alternative approaches to Mars exploration according to what level of information seemed sufficient for them to feel comfortable taking a policy position. The analysis provides recommendations on how to design and assess future citizen assessments, grounded in properly conveying the historical value context surrounding a scientific issue and in trusting citizens to seek out sufficient information to deliberate.
In recent decades there has been great controversy about the scientific status of emotion categories. This controversy stems from the idea that emotions are heterogeneous phenomena, which precludes classifying them under a common kind. In this article, I analyze this claim, which I call the Variability Thesis, and argue that as it stands it is problematically underdefined. To show this, I examine a recent formulation of the thesis as offered by Scarantino (2015). On the one hand, I raise some issues regarding the logical structure of the claim. On the other hand, and most importantly, I show that the Variability Thesis requires a consensus about what counts as a relevant pattern of response in different domains, a consensus that is lacking in the current literature. This makes it difficult to assess what counts as evidence for or against the thesis. As a result, arguments based on the Variability Thesis are unwarranted. This raises serious concerns about some current empirical theories of emotions, but it also sheds light on the issue of the scientific status of emotion categories.
This paper presents the manner in which DNA, the molecule of life, was discovered. Contrary to what many people, even biologists, believe, it was Johannes Friedrich Miescher who originally discovered and isolated nuclein, now known as DNA, in 1869, long before Watson and Crick unveiled its structure. We also show, and above all demonstrate, the serendipity of this major discovery. Like many of his contemporaries, Miescher set out to discover how cells worked by studying and analysing their proteins. During this arduous task, he detected an unexpected substance with unpredicted properties. This new substance precipitated when he added acid to the solution and dissolved again when he added alkali. Unexpectedly, and by a mere fluke, Miescher was the first person to obtain a DNA precipitate. The paper then presents the term serendipity and discusses how it has influenced other important scientific discoveries. Finally, we address the question of whether serendipitous discoveries can be nurtured and what role the computer could play in this process.
We would like to thank the authors of the commentaries for their critical appraisal of our feature article, Who is afraid of black box algorithms?1 Their comments, suggestions and concerns are varied, and we are glad that our article contributes to the academic debate about the ethical and epistemic conditions for medical explanatory AI. We would like to bring to attention a few issues that are common worries across reviewers. Most prominent are the merits of computational reliabilism (CR), in particular when promoted as an alternative to transparency, and CR as necessary but not sufficient for delivering trust. We finalise our response by addressing concerns about the place and role of artificial intelligence in medical decision-making and the physician's responsibilities. We understand the concerns and reservations that some of the reviewers express regarding the epistemic merits of CR. We believe that, in part, this is due to a practice too deeply rooted in transparency. But on …
The two main theories of perceptual reasons in contemporary epistemology can be called Phenomenalism and Factualism. According to Phenomenalism, perceptual reasons are facts about experiences conceived of as phenomenal states, i.e., states individuated by phenomenal character, by what it’s like to be in them. According to Factualism, perceptual reasons are instead facts about the external objects perceived. The main problem with Factualism is that it struggles with bad cases: cases where perceived objects are not what they appear or where there is no perceived object at all. The main problem with Phenomenalism is that it struggles with good cases: cases where everything is perfectly normal and the external object is correctly perceived, so that one’s perceptual beliefs are knowledge. In this paper we show that there is a theory of perceptual reasons that avoids the problems for Factualism and Phenomenalism. We call this view Propositionalism. We use ‘proposition’ broadly to mean the entities that are contents of beliefs and other doxastic attitudes. The key to finding a middle ground between Phenomenalism and Factualism, we claim, is to allow our reasons to be false in bad cases. Despite being false, they are about the external world, not our phenomenal states.
We discuss two modal claims about the phenomenal structure of color experiences: (i) violet experiences are necessarily experiences of a color that is for the subject on that occasion phenomenally composed of red and blue (the modal claim about violet) and (ii) no subject can possibly have an experience of a color that is for it then phenomenally composed of red and green (the modal claim about reddish green). The modal claim about reddish green is undermined by empirical results. We discuss whether these empirical results cast doubt on the other modal claim as well. We argue that this is not the case. Our argument is based on the thesis that the best argument for the modal claim about violet is quite different from the best argument for the modal claim about reddish green. To argue for this disanalogy we propose a reconstruction of the best available justification for both claims.
In the international sphere, sovereignty and fundamental rights are often at odds, giving these rights little space for action and, in general, only after crisis has led to tragedy, and tragedy to disgrace. International Law, on the other hand, consistently succumbs to forms of domination and power, and its scope of action is often limited to certain codifications which are frequently suspended by political exception. The sixteenth-century Dominican theologian Francisco de Vitoria established the principles for a Law of the people, based on secular civil power that, while still exercising internal sovereignty, could defend and privilege the “rights of all men” over the exercise of power and private interest. This article focuses on this dispute to show the position of Vitoria, who long preceded the great declarations of human rights of the 18th century, presenting an original doctrine within the framework of an emerging globalization and based on the defense of fundamental rights.
Reliabilism about epistemic justification - the thesis that what makes a belief epistemically justified is that it was produced by a reliable process of belief-formation - faces two problems. First, what has been called "the new evil demon problem", which arises from the idea that the beliefs of victims of an evil demon are as justified as our own beliefs, although they are not - the objector claims - reliably produced. And second, the problem of diagnosing why skepticism is so appealing despite being false. I present a special version of reliabilism, "indexical reliabilism", based on two-dimensional semantics, and show how it can solve both problems.
The objective of this paper is to discuss the relationship between the functional properties and information-processing modes of the human brain and the evolution of scientific thought. Science has emerged as a tool to carry out predictive operations that exceed the accuracy, temporal scale, and intrinsic operational limitations of the human brain. Yet the scientific method unavoidably reflects some fundamental characteristics of the information-acquisition and -analysis modes of the brain, which impose a priori boundary conditions upon how science can develop and how the physical universe can be “understood.” A brief description of physical and biological interactions is given, with emphasis on the defining role played by the concept of information. Current views on the information-processing and information-generating mechanisms of the human brain are briefly reviewed. It is shown how some particular features of superstition, natural philosophy, physical thought, and intuition can be linked to certain characteristic information-processing modes of the brain. A discussion is given of how greatly expanded knowledge of brain functions might affect the future of science and technology.
Loyalty is a much-discussed topic among business ethicists, but this discussion seems to have issued in very few clear conclusions. This article builds on the existing literature on the subject and attempts to ground a definite conclusion on a limited topic: whether, and under what conditions, it makes sense for an employee to offer loyalty to his employer. The main ways in which loyalty to one’s employer can contribute to human flourishing are that it makes the employee more trustworthy and therefore more valuable as an employee; makes it easier to form authentic relationships in other areas of the employee’s life; expands the employee’s field of interests and gives her or him a richer identity; provides greater motivation for the employee’s work; makes it possible to have a greater unity in the employee’s life; improves the performance of the organization for which the employee works; contributes to the protection of valuable social institutions; and, in so far as many employees share an attitude of loyalty towards the organization which employs them, it becomes possible for this organization to become a true community. Last but not least, loyal relationships have an inherent value. The article also reviews the main arguments that have been offered against employee loyalty and concludes that none of them offers a reason why it would be inappropriate in all cases for an employee to be loyal to her or his employer. The force of these arguments depends on the specific attributes of the organization for which the employee works. The main conclusion of the article is that while being a loyal employee involves risk, it has the potential to contribute significantly to the employee’s fulfilment. The main challenge for employees is to identify employers who are worthy of their loyalty.
Mathematical explanations are poorly understood. Although mathematicians seem to regularly suggest that some proofs are explanatory whereas others are not, none of the philosophical accounts of what such claims mean has become widely accepted. In this paper we explore Wilkenfeld’s suggestion that explanations are those sorts of things that generate understanding. By considering a basic model of human cognitive architecture, we suggest that existing accounts of mathematical explanation are all derivable consequences of Wilkenfeld’s ‘functional explanation’ proposal. We therefore argue that the explanatory criteria offered by earlier accounts can all be thought of as features that make it more likely that a mathematical proof will generate understanding. On the functional account, features such as characterising properties, unification, and salience correlate with explanatoriness, but they do not define explanatoriness.
Business organizations aspire to achieve high levels of performance and low levels of absenteeism and turnover in their work environments. Organizational commitment is considered a key factor in achieving this objective; however, it can be conditioned by several factors, among them the psychological contract. The literature has related organizational commitment to the fulfillment of the psychological contract, framing the latter as one of the explanatory variables. This work aims to investigate research trends on the psychological contract and organizational commitment. For this purpose, we used bibliometric techniques and the SciMAT software to analyze 220 journal articles indexed in the Web of Science. The findings confirm the relevance of the theme chosen for this review. The most recurrent themes concerning the relationship between the two concepts include the sense of justice, the consequences of psychological contract violation, normative commitment, HR management, and job insecurity. In the most recent period analyzed, however, publications emerge on topics closer to present-day concerns, such as employability, the impact of these two concepts on new generations, and the retention of talent. Shortcomings are also detected in research on the ideologically charged psychological contract and in the analysis of the organizational context and of cultural and demographic factors in relation to both theoretical constructs. The contribution of this work lies in giving visibility to these scientific results, which can serve business organizations as instruments for decision-making in their labor management and point the scientific community toward research spaces still to be explored.
Let Mₙ♯ denote the minimal active iterable extender model which has n Woodin cardinals and contains all reals, if it exists, in which case we denote by Mₙ the class-sized model obtained by iterating the topmost measure of Mₙ♯ class-many times. We characterize the sets of reals which are Σ₁-definable from ℝ over Mₙ, under the assumption that projective games on reals are determined: (1) for even n, Σ₁^{Mₙ} = ⅁^ℝ Π¹ₙ₊₁; (2) for odd n, Σ₁^{Mₙ} = ⅁^ℝ Σ¹ₙ₊₁. This generalizes a theorem of Martin and Steel for L, that is, the case n = 0. As consequences of the proof, we see that determinacy of all projective games with moves in ℝ is equivalent to the statement that Mₙ♯ exists for all n ∈ ℕ, and that determinacy of all projective games of length ω² with moves in ℕ is equivalent to the statement that Mₙ♯ exists and satisfies AD for all n ∈ ℕ.
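For readability, the two clauses of the characterization above can be typeset in LaTeX, using the amssymb command \Game for the game quantifier ⅁:

```latex
% Sets of reals Sigma_1-definable from R over M_n, assuming
% determinacy of projective games on reals.
\[
  \Sigma_1^{M_n} \;=\; \Game^{\mathbb{R}}\,\Pi^1_{n+1} \quad (n \text{ even}),
  \qquad
  \Sigma_1^{M_n} \;=\; \Game^{\mathbb{R}}\,\Sigma^1_{n+1} \quad (n \text{ odd}).
\]
```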
A chronicled approach to the notion of computer simulations shows that there are two predominant interpretations in the specialized literature. According to the first interpretation, computer simulations are techniques for finding the set of solutions to a mathematical model. I call this first interpretation the problem-solving technique viewpoint. In its second interpretation, computer simulations are considered to describe patterns of behavior of a target system. I call this second interpretation the description of patterns of behavior viewpoint of computer simulations. This (...) article explores these two interpretations of computer simulations from three different angles. First, I collect a series of definitions of computer simulation from the historical record. I track back definitions to the early 1960s and show how each viewpoint shares similar interpretations of computer simulations—ultimately clustering into the two viewpoints aforementioned. This reconstruction also includes the most recent literature. Second, I unpack the philosophical assumptions behind each viewpoint, with a special emphasis on their differences. Third, I discuss the philosophical implications of each viewpoint in the context of the recent discussion on the logic of scientific explanation for computer simulations. (shrink)
Brucellosis is one of the major infectious diseases in China. In this study, we consider an SI model of animal brucellosis with transport. The basic reproduction number ℛ₀ is obtained, and the stability of the equilibria is analyzed. Numerical simulation shows that different initial values have a great influence on the results of the model. In addition, a sensitivity analysis of ℛ₀ with respect to different parameters is carried out. The results reveal that transport has dual effects. Specifically, transport can lead to an increase in the number of infected animals, but it can also reduce the number of infected animals within a certain range. The analysis shows that the number of infected animals can be controlled if animals are transported reasonably.
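The abstract does not give the paper's equations, so as a minimal sketch of the kind of system described, the following forward-Euler simulation adds a hypothetical transport term (exchange rate tau, imported prevalence p) to a classical normalized SI model. The function names simulate_si and r0 and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative SI model with a transport-related exchange of
# animals, integrated with forward Euler. Fractions are normalized so
# that S + I tends to 1. All names and values are hypothetical.

def simulate_si(beta=0.3, mu=0.1, tau=0.05, p=0.2,
                s0=0.99, i0=0.01, dt=0.01, t_max=200.0):
    """Integrate dS/dt = mu - beta*S*I - mu*S + tau*(1-p) - tau*S,
                 dI/dt = beta*S*I - mu*I + tau*p - tau*I
    and return the final (S, I) pair."""
    s, i = s0, i0
    for _ in range(int(t_max / dt)):
        ds = mu - beta * s * i - mu * s + tau * (1 - p) - tau * s
        di = beta * s * i - mu * i + tau * p - tau * i
        s += dt * ds
        i += dt * di
    return s, i

def r0(beta=0.3, mu=0.1, tau=0.05):
    """Illustrative threshold quantity beta/(mu + tau): tau = 0 gives the
    classical SI value beta/mu, while transport adds to the removal rate.
    With imported infections (p > 0) there is strictly no disease-free
    equilibrium, so this is only a sketch."""
    return beta / (mu + tau)

if __name__ == "__main__":
    s_end, i_end = simulate_si()
    print(f"R0 = {r0():.3f}, endemic infected fraction = {i_end:.3f}")
```

With these toy values the threshold quantity exceeds 1 and the simulation settles at an endemic state, illustrating the abstract's point that the transport rate enters both the transmission side (imported infecteds) and the removal side of the balance.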
Evidentialism and Reliabilism are two of the main contemporary theories of epistemic justification. Some authors have thought that the theories are not incompatible with each other, and that a hybrid theory which incorporates elements of both should be taken into account. More recently, other authors have argued that the resulting theory is well-placed to deal with fine-grained doxastic attitudes (credences). In this paper I review the reasons for adopting this kind of hybrid theory, paying attention to the case of credences and the notion of probability involved in their treatment. I argue that the notion of probability in question can only be an epistemic (or evidential) kind of probability. I conclude that the resulting theory will be incompatible with Reliabilism in one important respect: it cannot deliver on the reductivist promise of Reliabilism. I also argue that attention to the justification of basic beliefs reveals limitations in the Evidentialist framework as well. The theory that results from the right combination of Evidentialism and Reliabilism, therefore, is neither Evidentialist nor Reliabilist.
I propose a reading of Berkeley's Essay towards a New Theory of Vision in which Molyneux-type questions are interpreted as thought experiments instead of arguments. First, I present the general argumentative strategy in the NTV, and provide grounds for the traditional reading. Second, I consider some roles of thought experiments, and classify Molyneux-type questions in the NTV as constructive conjectural thought experiments. Third, I argue that (i) there is no distinction between Weak and Strong Heterogeneity theses in the NTV; (ii) that Strong Heterogeneity is the basis of Berkeley's theory; and (iii) that Molyneux-type questions act as illustrations of Strong Heterogeneity.
We conduct an empirical study on the determinants of the psychological costs of tax evasion, also known as tax morale. As a preliminary step, we build a model of tax evasion including non-monetary considerations, and show the relationship between tax compliance and tax morale. In the empirical analysis of tax morale we find, using a binomial logit model, that the justification of tax evasion can be explained by the presence of grievances in absolute terms (those who feel that taxes are too high, those who feel that public funds are wasted, and those who accept underground economic activities) and grievances in relative terms (the suspected level of others’ tax evasion). The sense of duty and the level of solidarity are also relevant factors, but to a lesser extent.
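As a sketch of the kind of binomial logit described, the following self-contained example fits a logistic model to synthetic survey data by gradient ascent on the log-likelihood. The grievance variable names (taxes_too_high, funds_wasted, others_evade) and the coefficients are hypothetical stand-ins, not the paper's survey items or estimates.

```python
# Toy binomial logit: P(justifies evasion) = 1 / (1 + exp(-(w0 + w·x))),
# fitted by gradient ascent. Data are synthetic; all names are hypothetical.
import math
import random

def fit_logit(xs, ys, lr=0.1, epochs=2000):
    """Fit intercept w[0] and coefficients w[1:] by gradient ascent
    on the logistic log-likelihood."""
    k = len(xs[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += y - p
            for j, xj in enumerate(x):
                grad[j + 1] += (y - p) * xj
        w = [wi + lr * g / len(xs) for wi, g in zip(w, grad)]
    return w

random.seed(0)
# Synthetic respondents: binary indicators
# (taxes_too_high, funds_wasted, others_evade).
xs = [[random.randint(0, 1) for _ in range(3)] for _ in range(500)]
# Generating model: each grievance raises the odds of justifying evasion.
ys = [1 if random.random() <
      1.0 / (1.0 + math.exp(-(-2.0 + 1.0 * x[0] + 1.5 * x[1] + 0.8 * x[2])))
      else 0 for x in xs]
w = fit_logit(xs, ys)
print("intercept and grievance coefficients:", [round(wi, 2) for wi in w])
```

The fitted coefficients recover the signs of the generating model: a negative intercept (justifying evasion is a minority response when no grievance is present) and positive coefficients on each grievance indicator.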
This article presents a systematic literature review on quality management and social responsibility (focusing on ethical and social issues). It uses the literature review to identify the parallels between quality management and social responsibility, the extent to which qualitative, quantitative and mixed methods are used, the countries that have contributed most to this area, and how the most common quality management practices facilitate social responsibility. The literature review covers articles about quality management and social responsibility (focusing on ethical and social categories) based on a computer search in three databases, namely ABI Inform, Emerald and Science Direct, and includes articles dealing with the subject that were found in the lists of references of the articles found in the primary search, as well as a search in six top management journals. The results show (1) the journals most likely to publish this type of article; (2) the range of qualitative, quantitative and mixed methods used; (3) the most productive countries in this field; (4) the parallels between quality management and social responsibility; and (5) how the most common quality management practices facilitate ethical behaviour and social aspects.