We revisit the analogy suggested by Madelung between a non-relativistic time-dependent quantum particle and a fluid system which is pseudo-barotropic, irrotational and inviscid. We first discuss the hydrodynamical properties of the Madelung description in general, and extract a pressure-like term from the Bohm potential. We show that the existence of a pressure gradient force in the fluid description does not violate Ehrenfest's theorem, since its expectation value is zero. We also point out that incompressibility of the fluid implies conservation of density along a fluid parcel trajectory, and in 1D this immediately results in the non-spreading property of wave packets, as the sum of the Bohm potential and an exterior potential must be either constant or linear in space. Next we relate to the hydrodynamic description a thermodynamic counterpart, taking the classical behavior of an adiabatic barotropic flow as a reference. We show that while the Bohm potential is not a positive definite quantity, as is expected from an internal energy, its expectation value is proportional to the Fisher information, whose integrand is positive definite. Moreover, this integrand is exactly equal to half of the square of the imaginary part of the momentum, just as the integrand of the kinetic energy is equal to half of the square of the real part of the momentum. This suggests a relation between the Fisher information and the thermodynamic-like internal energy of the Madelung fluid. Furthermore, it provides a physical linkage between the inverse of the Fisher information and the measure of disorder in quantum systems: in spontaneous adiabatic gas expansion the amount of disorder increases while the internal energy decreases.
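As a minimal sketch of the quantities involved (assuming the standard Madelung substitution \(\psi = \sqrt{\rho}\, e^{iS/\hbar}\); the notation below is ours, not necessarily the authors'), the Bohm potential and the complex momentum read

\[ Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}, \qquad -i\hbar\,\frac{\nabla\psi}{\psi} = \underbrace{\nabla S}_{\operatorname{Re} p} \;-\; i\,\underbrace{\frac{\hbar}{2}\,\frac{\nabla\rho}{\rho}}_{-\operatorname{Im} p}, \]

and a single integration by parts (with vanishing boundary terms) gives

\[ \langle Q \rangle = \frac{\hbar^2}{8m}\int \frac{|\nabla\rho|^2}{\rho}\,dx = \frac{\hbar^2}{8m}\, I_F = \Big\langle \frac{(\operatorname{Im} p)^2}{2m} \Big\rangle, \]

where \(I_F\) is the Fisher information of \(\rho\). This is the sense in which the sign-indefinite Bohm potential has a positive-definite expectation, mirroring the kinetic term \(\langle (\operatorname{Re} p)^2 / 2m \rangle\).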
We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of "explanation" tailored specifically to neural networks have ineffectively reinvented the wheel. In response to the first question, we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on "explainability," "understandability" and "interpretability." To remedy this, we distinguish between these notions and answer the second question by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of "interpretability" is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.
Both left libertarians, who support the redistribution of income and wealth through taxation, and right libertarians, who oppose redistributive taxation, share an important view: that, looming catastrophes aside, the state must never redistribute any part of our body or our person without our consent. Cécile Fabre rejects that view. For her, just as the undeservedly poor have a just claim to money from their fellow citizens in order to lead a minimally flourishing life, the undeservedly 'medically poor' have a just claim to help from fellow citizens in order to lead such a life. Such obligatory help may in principle involve even the supply of body parts for transplantation. The state ought to exact such resources from the medically rich whenever doing so would secure the prospect of a minimally flourishing life to the medically poor without denying that prospect to anyone else. Fabre criticizes Ronald Dworkin's belief in 'a prophylactic line that comes close to making the body inviolate, that is, making body parts not parts of social resources at all'. For her, 'Duties to help... do not stop at material resources: they involve the body... in invasive ways'.
Law, Economics, and Morality examines the possibility of combining economic methodology and deontological morality through explicit and direct incorporation of moral constraints into economic models. Economic analysis of law is a powerful analytical methodology. However, as a purely consequentialist approach, which determines the desirability of acts and rules solely by assessing the goodness of their outcomes, standard cost-benefit analysis (CBA) is normatively objectionable. Moderate deontology prioritizes such values as autonomy, basic liberties, truth-telling, and promise-keeping over the promotion of good outcomes. It holds that there are constraints on promoting the good. Such constraints may be overridden only if enough good is at stake. While moderate deontology conforms to prevailing moral intuitions and legal doctrines, it is arguably lacking in methodological rigor and precision. Eyal Zamir and Barak Medina argue that the normative flaws of economic analysis can be rectified without relinquishing its methodological advantages and that moral constraints can be formalized so as to make their analysis more rigorous. They discuss various substantive and methodological choices involved in modeling deontological constraints. Zamir and Medina propose to determine the permissibility of any act or rule infringing a deontological constraint by means of mathematical threshold functions. Law, Economics, and Morality presents the general structure of threshold functions, analyzes their elements and addresses possible objections to this proposal. It then illustrates the implementation of constrained CBA in several legal fields, including contract law, freedom of speech, antidiscrimination law, the fight against terrorism, and legal paternalism.
We examine whether the "evidence of evidence is evidence" principle is true. We distinguish several different versions of the principle and evaluate recent attacks on some of those versions. We argue that, whatever the merits of those attacks, they leave the more important rendition of the principle untouched. That version is, however, also subject to new kinds of counterexamples. We end by suggesting how to formulate a better version of the principle that takes into account those new counterexamples.
Suppose we learn that we have a poor track record in forming beliefs rationally, or that a brilliant colleague thinks that we believe P irrationally. Does such input require us to revise those beliefs whose rationality is in question? When we gain information suggesting that our beliefs are irrational, we are in one of two general cases. In the first case we made no error, and our beliefs are rational. In that case the input to the contrary is misleading. In the second case we indeed believe irrationally, and our original evidence already requires us to fix our mistake. In that case the input to that effect is normatively superfluous. Thus, we know that information suggesting that our beliefs are irrational is either misleading or superfluous. This, I submit, renders the input incapable of justifying belief revision, despite our not knowing which of the two kinds it is.
The Madelung equations map the non-relativistic time-dependent Schrödinger equation into hydrodynamic equations of a virtual fluid. While the von Neumann entropy remains constant, we demonstrate that an increase of the Shannon entropy, associated with this Madelung fluid, is proportional to the expectation value of its velocity divergence. Hence, the Shannon entropy may grow due to an expansion of the Madelung fluid. These effects result from the interference between solutions of the Schrödinger equation. Growth of the Shannon entropy due to expansion is common in diffusive processes. However, in the latter the process is irreversible, while the processes in the Madelung fluid are always reversible. The relations between interference, compressibility and variation of the Shannon entropy are then examined in several simple examples. Furthermore, we demonstrate that for classical diffusive processes, the "force" accelerating diffusion has the form of the positive gradient of the quantum Bohm potential. Expressing the diffusion coefficient in terms of the Planck constant then reveals the lower bound given by the Heisenberg uncertainty principle in terms of the product between the gas mean free path and the Brownian momentum.
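As a quick check of the claimed proportionality (a sketch, assuming the Shannon entropy \(S = -\int \rho \ln\rho\, dx\) of the Madelung density \(\rho\) and the continuity equation \(\partial_t \rho + \nabla\cdot(\rho u) = 0\); notation ours, not necessarily the authors'):

\[ \frac{dS}{dt} = -\int (\ln\rho + 1)\,\partial_t\rho\, dx = \int (\ln\rho + 1)\,\nabla\cdot(\rho u)\, dx = -\int u\cdot\nabla\rho\, dx = \int \rho\,\nabla\cdot u\, dx = \langle \nabla\cdot u \rangle, \]

using integration by parts twice and assuming vanishing boundary terms. The Shannon entropy thus grows exactly when the expectation value of the velocity divergence is positive, i.e., when the Madelung fluid expands on average.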
Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ—that is, to state a difference between two images. Following structure-mapping theory, we propose that this dissociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We describe four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison.
The Self-Intimation thesis has it that whatever justificatory status a proposition has, i.e., whether or not we are justified in believing it, we are justified in believing that it has that status. The Infallibility thesis has it that whatever justificatory status we are justified in believing that a proposition has, the proposition in fact has that status. Jointly, Self-Intimation and Infallibility imply that the justificatory status of a proposition closely aligns with the justification we have about that justificatory status. Self-Intimation has two noteworthy implications. First, assuming that we never have sufficient justification for a proposition and for its negation, we can derive Infallibility from Self-Intimation. Interestingly, there seems to be no equivalently simple way to derive Self-Intimation from Infallibility. This asymmetry provides reason for thinking that bottom-level justification rather than top-level justification drives the explanation for why the levels of justification align. Second, Self-Intimation suggests a counterintuitive treatment of information concerning what justificatory status a proposition has. It follows from Self-Intimation that we always have justification for the truth about whether a proposition is justified for us, and therefore, that higher-order evidence could change what we should believe on this matter only by misleading us. This permits forming beliefs about whether a proposition is justified for us without regard to higher-order evidence, and thus reveals a reason for thinking that top-level justification is evidentially inert.
This paper provides a philosophical analysis of the ongoing controversy surrounding R.A. Fisher's famous fundamental theorem of natural selection. The difference between the traditional and modern interpretations of the theorem is explained. I argue that proponents of the modern interpretation have captured Fisher's intended meaning correctly and shown that the theorem is mathematically correct, pace the traditional consensus. However, whether the theorem has any real biological significance remains an unresolved issue. I argue that the answer depends on whether we accept Fisher's non-standard notion of environmental change, on which the theorem rests; arguments for and against this notion are explored. I suggest that there is a close link between Fisher's fundamental theorem and the modern gene's eye view of evolution.
This article looks at the way people determine the antecedent of a pronoun in sentence pairs, such as: "Albert invited Ron to dinner. He spent hours cleaning the house." The experiment reported here is motivated by the idea that such judgments depend on reasoning about identity. Because the identity of an individual over time depends on the causal-historical path connecting the stages of the individual, the correct antecedent will also depend on causal connections. The experiment varied how likely it is that the event of the first sentence would cause the event of the second for each of the two individuals. Decisions about the antecedent followed causal likelihood. A mathematical model of causal identity accounted for most of the key aspects of the data from the individual sentence pairs.
Should conciliating with disagreeing peers be considered sufficient for reaching rational beliefs? Thomas Kelly argues that when taken this way, Conciliationism lets those who enter into a disagreement with an irrational belief reach a rational belief all too easily. Three kinds of responses defending Conciliationism are found in the literature. One response has it that conciliation is required only of agents who have a rational belief as they enter into a disagreement. This response yields a requirement that no one should follow: if the need to conciliate applies only to already rational agents, then an agent must conciliate only when her peer is the one who is irrational. A second response views conciliation as merely necessary for having a rational belief. This alone does little to address the central question of what it is rational to believe when facing a disagreeing peer. Attempts to develop the response either reduce to the first response, or deem necessary an unnecessary doxastic revision, or imply that rational dilemmas obtain in cases where intuitively there are none. A third response tells us to weigh what our pre-disagreement evidence supports against the evidence from the disagreement itself. This invites epistemic akrasia.
This is a lively discussion between two perceptive philosophical thinkers as comfortable with vulnerable intimacy and abstract ideas as they are savvy with the aesthetics of oppression and the many neurotic loops of fear-based escape routes from the Real. With a deep concern for finding the best ways to build a healthy and sane society, their integrating of East-West, Indigenous and ecological knowledges brings forward a synthesis of ideas to be reckoned with. Dr. Fisher, founder of The Fearology Institute, and Luke Barnesmoore, a doctoral student in the Geography department at The University of British Columbia, caress the contours of fear and fearlessness and the importance of admitting how much fear exists in most all places humans dwell in contemporary urban societies. If we are to avoid the worst catastrophes of the crises we face on the planet in the very near future, Fisher and Barnesmoore are sure that fear is going to be a major player in the outcomes.
Two experiments demonstrated letter-context effects that cannot easily be accounted for by postperceptual theories based on structural redundancy, figural goodness, or memory advantage. In Experiment 1, subjects identified the color of a letter fragment more accurately in letter than in nonletter contexts. In Experiment 2, subjects identified the feature presented in a precued color more accurately in letters than in nonletters. We argue that these effects result from top-down perceptual processing.
What it means for an action to have moral worth, and what is required for this to be the case, is the subject of continued controversy. Some argue that an agent performs a morally worthy action if and only if they do it because the action is morally right. Others argue that a morally worthy action is that which an agent performs because of features that make the action right. These theorists, though they oppose one another, share something important in common. They focus almost exclusively on the moral worth of right actions. But there is a negatively valenced counterpart that attaches to wrong actions, which we will call moral counterworth. In this paper, we explore the moral counterworth of wrong actions in order to shed new light on the nature of moral worth. Contrary to theorists in both camps, we argue that more than one kind of motivation can affect the moral worth of actions.
In certain judgmental situations where a "correct" decision is presumed to exist, optimal decision making requires evaluation of the decision-makers' capabilities and the selection of the appropriate aggregation rule. The major, and so far unresolved, difficulty is the former requirement. This article presents the optimal aggregation rule that simultaneously satisfies these two interdependent necessary requirements. In our setting, some record of the voters' past decisions is available, but the correct decisions are not known. We observe that any arbitrary evaluation of the decision-makers' capabilities as probabilities yields some optimal aggregation rule that, in turn, yields a maximum-likelihood estimation of decisional skills. Thus, a skill-evaluation equilibrium can be defined as an evaluation of decisional skills that yields itself as a maximum-likelihood estimation of decisional skills. We show that such an equilibrium exists and offer a procedure for finding one. The obtained equilibrium is locally optimal and is shown empirically to generally be globally optimal in terms of the correctness of the resulting collective decisions. Interestingly, under minimally competent (almost symmetric) skill distributions that allow unskilled decision makers, the optimal rule considerably outperforms the common simple majority rule (SMR). Furthermore, a sufficient record of past decisions ensures that the collective probability of making a correct decision converges to 1, as opposed to an accuracy of about 0.7 under SMR. Our proposed optimal voting procedure relaxes the fundamental (and sometimes unrealistic) assumptions in Condorcet's celebrated theorem and its extensions, such as sufficiently high decision-making quality, skill homogeneity or the existence of a sufficiently large group of decision makers.
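Read one way, the described procedure is a fixed-point iteration, and a minimal sketch can make the loop concrete. The sketch below assumes binary issues, the log-odds weighted-majority rule (the classic Nitzan-Paroush optimal aggregator) as the skill-induced rule, and each voter's rate of agreement with the collective decision as the maximum-likelihood skill estimate; all names and details are illustrative assumptions, not the authors' exact specification.

import numpy as np

def skill_equilibrium(votes, iters=100, eps=1e-4):
    """Find a skill-evaluation equilibrium from a record of past votes.

    votes: (n_voters, n_issues) array of 0/1 votes; correct answers unknown.
    """
    n_voters, n_issues = votes.shape
    p = np.full(n_voters, 0.6)  # arbitrary initial skill evaluation
    for _ in range(iters):
        w = np.log(p / (1.0 - p))                 # log-odds weights: optimal given p
        scores = w @ (2 * votes - 1)              # weighted tally per issue
        outcome = (scores > 0).astype(int)        # collective decision per issue
        p_new = (votes == outcome).mean(axis=1)   # ML skill = agreement rate
        p_new = np.clip(p_new, eps, 1.0 - eps)    # keep the log-odds finite
        if np.allclose(p_new, p):                 # evaluation reproduces itself
            break
        p = p_new
    return p, outcome

With uniform initial skills the first round reduces to simple majority; subsequent rounds upweight voters who agree with the emerging consensus, which is how such a rule can pull away from SMR when skills are heterogeneous.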
Judges are typically tasked to consider sentencing benefits but not costs. Previous research finds that both laypeople and prosecutors discount the costs of incarceration when forming sentencing attitudes, raising important questions about whether professional judges show the same bias during sentencing. To test this, we used a vignette-based experiment in which Minnesota state judges reviewed a case summary about an aggravated robbery and imposed a hypothetical sentence. Using random assignment, half the participants received additional information about plausible negative consequences of incarceration. As predicted, our results revealed a mitigating effect of cost exposure on prison sentence term lengths. Critically, these findings support the conclusion that policies that increase transparency in sentencing costs could reduce sentence lengths, which has important economic and social ramifications.
How are we to think of Beckett's fiction? Lyrical, inventive, uncompromising, beautifully precise, an immense achievement: is it really an art that proclaims the disintegration of language and of the imagination, as traditional readings conclude? Eyal Amiran's study demonstrates that Beckett's work does not embody the failure of synthetic vision. Beckett's fiction transposes a large intertextual logic from the Western metaphysics it is said to disown, and so takes its place in a literary and philosophical tradition that extends from Plato to Joyce and Yeats. At the same time, it develops as a serial narrative, from the early novels to the late short fictions, to unravel the very story that its metaphysical tradition tells.
In _Differentiating the Pearl from the Fish-Eye_, Eyal Aviv offers an account of Ouyang Jingwu, a revolutionary Buddhist thinker and educator. The book surveys the life and career of Ouyang and his influence on modern Chinese intellectual history.
I first support Alec Fisher's thesis that premises and conclusions in arguments can be unasserted: first, by arguing in its favor that only it preserves our intuition that it is at least possible for two arguments to share the same premises and the same conclusion although not everything that is asserted in the one is also asserted in the other; and second, by answering two objections that might be raised against it. I then draw from Professor Fisher's thesis the consequence that in suppositional arguments the falsity (or unacceptability) of a supposition does not tell unfavorably in the evaluation of the argument, because the falsity (or unacceptability) of a (nonredundant) premise counts against an argument if and only if that premise is asserted. Finally, I observe that, despite the fact that they are neither expressed nor even alluded to, implicit assumptions in arguments are always asserted, unless the conclusion, but none of the explicit premises, is unasserted. Hence, apart from an exceptional case of the kind just mentioned, the falsity (or unacceptability) of implicit assumptions always counts against an argument.
Is it ever rational to suspend judgment about whether a particular doxastic attitude of ours is rational? An agent who suspends about whether her attitude is rational has serious doubts that it is. These doubts place a special burden on the agent, namely, to justify maintaining her chosen attitude over others. A dilemma arises. Providing justification for maintaining the chosen attitude would commit the agent to considering the attitude rational—contrary to her suspension on the matter. Alternatively, in the absence of such justification, the attitude would be arbitrary by the agent's own lights, and therefore irrational from the agent's own perspective. So, suspending about whether an attitude of ours is rational does not cohere with considering it rationally preferable to other attitudes, and leads to a more familiar form of epistemic akrasia otherwise.
To counter the pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), some have proposed accelerating SARS-CoV-2 vaccine development through controlled human infection trials. These trials would involve the deliberate exposure of relatively few young, healthy volunteers to SARS-CoV-2. We defend this proposal against the charge that there is still too much uncertainty surrounding the risks of COVID-19 to responsibly run such a trial.
Richard Feldman has proposed and defended different versions of a principle about evidence. In slogan form, the principle holds that 'evidence of evidence is evidence'. Recently, Branden Fitelson has argued that Feldman's preferred rendition of the principle falls prey to a counterexample related to the non-transitivity of the evidence-for relation. Feldman replies, arguing that Fitelson's case does not really represent a counterexample to the principle. In this note, we argue that Feldman's principle is trivially true.
On the origin of film and the resurrection of the people: D.W. Griffith's Intolerance -- The actor of the crowd: The great dictator -- Howard Hawks' idea of genre -- What is a cinema of Jewish vengeance? Tarantino's Inglourious basterds.
The alignment of bargaining positions is crucial to a successful negotiation. Prior research has shown that similarity in language use is indicative of the conceptual alignment of interlocutors. We use latent semantic analysis to explore how the similarity of language use between negotiating parties develops over the course of a three-party negotiation. Results show that parties that reach an agreement show a gradual increase in language similarity over the course of the negotiation. Furthermore, reaching the most financially efficient outcome is dependent on similarity in language use between the parties that have the most to gain from such an outcome.
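The study's actual pipeline is not specified here, but a minimal sketch of the kind of LSA similarity measure it relies on might look as follows (Python; the TF-IDF weighting, truncated-SVD rank, and per-round pairing of utterances are our illustrative assumptions, not the authors' method):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def round_similarity(turns_a, turns_b, dim=50):
    """LSA similarity between two parties' utterances, round by round.

    turns_a, turns_b: equal-length lists of strings (one utterance per round).
    """
    docs = turns_a + turns_b
    X = TfidfVectorizer().fit_transform(docs)          # term-document matrix
    k = min(dim, X.shape[1] - 1, len(docs) - 1)        # SVD rank must fit the data
    Z = TruncatedSVD(n_components=k).fit_transform(X)  # latent semantic space
    A, B = Z[:len(turns_a)], Z[len(turns_a):]
    return [cosine_similarity(a[None], b[None])[0, 0] for a, b in zip(A, B)]

A rising sequence of these per-round scores is the kind of gradual increase in language similarity the abstract reports for parties that reach agreement.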