In the remainder of this article, we will disarm an important motivation for epistemic contextualism and interest-relative invariantism. We will accomplish this by presenting a stringent test of whether there is a stakes effect on ordinary knowledge ascription. Having shown that, even on a stringent way of testing, stakes fail to impact ordinary knowledge ascription, we will conclude that we should take another look at classical invariantism. Here is how we will proceed. Section 1 lays out some limitations of previous research on stakes. Section 2 presents our study and concludes that there is little evidence for a substantial stakes effect. Section 3 responds to objections. The conclusion clears the way for classical invariantism.
Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions, since in a deterministic universe people are arguably not the ultimate source of their actions, nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility regardless of whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.
This article examines whether people share the Gettier intuition (viz. that someone who has a true justified belief that p may nonetheless fail to know that p) in 24 sites, located in 23 countries (counting Hong Kong as a distinct country) and across 17 languages. We also consider the possible influence of gender and personality on this intuition with a very large sample size. Finally, we examine whether the Gettier intuition varies across people as a function of their disposition to engage in “reflective” thinking.
Philosophical discussions on causal inference in medicine are stuck in dyadic camps, each defending one kind of evidence or method rather than another as the best support for causal hypotheses. Whereas Evidence Based Medicine (EBM) advocates the use of Randomised Controlled Trials (RCTs) and systematic reviews of RCTs as the gold standard, philosophers of science emphasise the importance of mechanisms and their distinctive informational contribution to causal inference and assessment. Some have suggested the adoption of a pluralistic approach to causal inference, and an inductive rather than hypothetico-deductive inferential paradigm. However, these proposals deliver no clear guidelines about how such a plurality of evidence sources should jointly justify hypotheses of causal associations. We here develop such guidelines by first giving a philosophical analysis of the underpinnings of Hill’s viewpoints on causality. We then put forward an evidence-amalgamation framework adopting a Bayesian net approach to model causal inference in pharmacology for the assessment of harms. Our framework accommodates a number of intuitions already expressed in the literature concerning the EBM vs. pluralist debate on causal inference, evidence hierarchies, causal holism, relevance, and reliability.
Does the Ship of Theseus present a genuine puzzle about persistence, due to conflicting intuitions based on “continuity of form” and “continuity of matter” pulling in opposite directions? Philosophers are divided. Some claim that it presents a genuine puzzle but disagree over whether there is a solution. Others claim that there is no puzzle at all, since the case has an obvious solution. To assess these proposals, we conducted a cross-cultural study involving nearly 3,000 people across twenty-two countries, speaking eighteen different languages. Our results speak against the proposal that there is no puzzle at all and against the proposal that there is a puzzle but one that has no solution. Our results suggest that there are two criteria – “continuity of form” and “continuity of matter” – that constitute our concept of persistence, and that these two criteria receive different weightings in settling matters concerning persistence.
Since at least Hume and Kant, philosophers working on the nature of aesthetic judgment have generally agreed that common sense does not treat aesthetic judgments in the same way as typical expressions of subjective preferences—rather, it endows them with intersubjective validity, the property of being right or wrong regardless of disagreement. Moreover, this apparent intersubjective validity has been taken to constitute one of the main explananda for philosophical accounts of aesthetic judgment. But is it really the case that most people spontaneously treat aesthetic judgments as having intersubjective validity? In this paper, we report the results of a cross‐cultural study with over 2,000 respondents spanning 19 countries. Despite significant geographical variations, these results suggest that most people do not treat their own aesthetic judgments as having intersubjective validity. We conclude by discussing the implications of our findings for theories of aesthetic judgment and the purpose of aesthetics in general.
Philosophical discussions have critically analysed the methodological pitfalls and epistemological implications of evidence assessment in medicine; however, they have mainly focused on evidence of treatment efficacy. Most of this work is devoted to statistical methods of causal inference, with special attention to the privileged role assigned to randomized controlled trials in evidence based medicine. Regardless of whether the RCT’s privilege holds for efficacy assessment, it is nevertheless important to make a distinction between causal inference of intended and unintended effects, in that the unknowns at stake are heterogeneous in the two contexts. However, although “lower level” evidence is increasingly acknowledged to be a valid source of information contributing to assessing the risk profile of medications on theoretical or empirical grounds, current practices have difficulty in assigning a precise epistemic status to this kind of evidence because they are more or less implicitly parasitic on the methods developed to test drug efficacy. My thesis is that “lower level” evidence is justified on distinct grounds and under different conditions depending on the epistemology one wishes to endorse, in that each imposes different constraints on the methods we adopt to collect and evaluate evidence; such constraints ought to be understood to be different in the case of evidence for risk versus benefit assessment, for a series of reasons which I illustrate on the basis of the recent debate on the causal association between acetaminophen and asthma.
The problem of collecting, analyzing and evaluating evidence on adverse drug reactions (ADRs) is an example of the more general class of epistemological problems related to scientific inference and prediction, as well as a central problem of health-care practice. Philosophical discussions have critically analysed the methodological pitfalls and epistemological implications of evidence assessment in medicine; however, they have mainly focused on evidence of treatment efficacy. Most of this work is devoted to statistical methods of causal inference, with special attention to the privileged role assigned to randomized controlled trials in Evidence Based Medicine. Regardless of whether the RCT’s privilege holds for efficacy assessment, it is nevertheless important to make a distinction between causal inference of intended and unintended effects, in that the unknowns at stake are heterogeneous in the two contexts. This point has been emphasized by epidemiologists in the last decade. Their main focus is methodological, and regards the fact that bias and confounding do not affect studies on intended and unintended effects in the same way. However, deeper concerns ground the intuition for such a distinction; these are related to the constraints which we impose on evidence and their epistemological justification. My thesis is that such constraints ought to be understood to be different in the case of evidence for risk vs. benefit assessment. I present the recent debate on the causal association between acetaminophen and asthma in order to illustrate the point at issue.
Recent work in social epistemology has shown that, in certain situations, less communication leads to better outcomes for epistemic groups. In this paper, we show that, ceteris paribus, a Bayesian agent may believe less strongly that a single agent is biased than that an entire group of independent agents is biased. We explain this initially surprising result and show that it is in fact a consequence one can arrive at by commonsense reasoning.
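The effect described in this abstract can be sketched with a toy Bayesian computation. The numbers and the modelling choices below (a biased source always reports positive, an unbiased one does so with probability 0.5, and the group’s bias is treated as a single shared hypothesis tested against the group’s unanimous reports) are our own illustrative assumptions, not the paper’s model:

```python
def posterior_bias(prior_bias, p_report_if_unbiased, n_reports=1):
    """P(bias | n unanimous positive reports), where a biased source
    reports positive with certainty and an unbiased one with the
    given probability, independently per report."""
    likelihood_biased = 1.0  # a biased source always says "positive"
    likelihood_unbiased = p_report_if_unbiased ** n_reports
    num = prior_bias * likelihood_biased
    return num / (num + (1 - prior_bias) * likelihood_unbiased)

# One agent gives one positive report vs. a five-member group whose
# members unanimously report positive.
single = posterior_bias(prior_bias=0.1, p_report_if_unbiased=0.5, n_reports=1)
group = posterior_bias(prior_bias=0.1, p_report_if_unbiased=0.5, n_reports=5)

print(round(single, 3))  # → 0.182, belief that the lone agent is biased
print(round(group, 3))   # → 0.78, belief that the group shares a bias
```

Under these assumptions the unanimous agreement of many independent sources is itself strong evidence for a shared bias, so the agent ends up more suspicious of the group than of the single reporter, despite the lower prior plausibility of a collective bias.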
How we can reliably draw inferences from data, evidence and/or experience has been and continues to be a pressing question in everyday life, the sciences, politics and a number of branches of philosophy (traditional epistemology, social epistemology, formal epistemology, logic and philosophy of the sciences). In a world in which we can no longer fully rely on our experiences, interlocutors, measurement instruments, data collection and storage systems and even news outlets to draw reliable inferences, the issue becomes even more pressing. While we were working on this question using a formal epistemology approach (Landes and Osimani 2020; De Pretis et al. 2019; Osimani and Landes 2020; Osimani 2020), we realised that the breadth of current active interest in the notion of reliability was much wider than we initially thought. Given the breadth of approaches and angles present in philosophy (even in this journal: Schubert 2012; Avigad 2021; Claveau and Grenier 2019; Kummerfeld and Danks 2014; Landes 2021; Trpin et al. 2021; Schippers 2014; Schindler 2011; Kelly et al. 2016; Mayo-Wilson 2014; Olsson and Schubert 2007; Pittard 2017), we thought that it would be beneficial to provide a forum for an open exchange of ideas, in which philosophers working in different paradigms could come together. Our call for expressions of interest received a great variety of promised manuscripts, and this variety is reflected in the published papers. They range from fields far from our own interests, such as quantum probabilities (de Ronde et al. 2021) and evolvable software systems (Primiero et al. 2021), to topics closer to our own research in the philosophy of medicine (Lalumera et al. 2020), psychology (Dutilh et al. 2021) and traditional epistemology (Dunn 2021; Tolly 2021), and finally to closely shared interests in formal epistemology (Romero and Sprenger 2021), even within our own department (Merdes et al. 2021).
Our job now is to reliably inform you about all the contributions in the papers in this special issue. Unfortunately, that task is beyond our capabilities. What we can do instead is summarise the contributed papers, to inform your reliable inference that you should read them all in great detail.
The paper considers the legal tools that have been developed in German pharmaceutical regulation as a result of the precautionary attitude inaugurated by the Contergan decision. These tools are the notion of “well-founded suspicion”, which attenuates the requirements for safety intervention by relaxing the requirement of a proved causal connection between danger and source, and the introduction of the reversal of the burden of proof in liability norms. The paper focuses on the first and proposes seeing the precautionary principle as an instance of the requirement that one should maximise expected utility. In order to maximise expected utility, certain probabilities are required, and it is argued that objective Bayesianism offers the most plausible means to determine the optimal decision in cases where evidence supports diverging choices.
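The reading of the precautionary principle as expected-utility maximisation can be made concrete with a toy decision problem. All utilities and the suspicion probability below are hypothetical placeholders of our own, not values from the paper:

```python
# Toy sketch: whether to withdraw a drug given a "well-founded
# suspicion", expressed as a probability p that the drug causes harm.
def expected_utility(action, p_harmful):
    utilities = {
        # (action, is the drug actually harmful?): utility
        ("withdraw", True): -10,   # harm averted, therapeutic benefit lost
        ("withdraw", False): -10,  # benefit lost unnecessarily
        ("keep", True): -100,      # harm occurs
        ("keep", False): 0,        # no harm, benefit retained
    }
    return (p_harmful * utilities[(action, True)]
            + (1 - p_harmful) * utilities[(action, False)])

# With an equivocal probability of 0.5 (as an objective-Bayesian prior
# over the two hypotheses), withdrawal maximises expected utility even
# though causation is not proven.
p = 0.5
best = max(["withdraw", "keep"], key=lambda a: expected_utility(a, p))
print(best)  # → withdraw
```

The point of the sketch is structural: precautionary intervention falls out of ordinary expected-utility reasoning once a probability is assigned to the suspected hazard, rather than requiring a separate, allegedly irrational decision rule.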
Contemporary debates about scientific institutions and practice feature many proposed reforms. Most of these require increased efforts from scientists. But how do scientists’ incentives for effort interact? How can scientific institutions encourage scientists to invest effort in research? We explore these questions using a game-theoretic model of publication markets. We employ a base game between authors and reviewers, before assessing some of its tendencies by means of analysis and simulations. We compare how the effort expenditures of these groups interact in our model under a variety of settings, such as double-blind and open review systems. We make a number of findings, including that open review can increase the effort of authors in a range of circumstances and that these effects can manifest in a policy-relevant period of time. However, we find that open review’s impact on authors’ efforts is sensitive to the strength of several other influences.
Risk communication has been generally categorized as a warning act, which is performed in order to prevent or minimize risk. On the other hand, risk analysis has also underscored the role played by information in reducing uncertainty about risk. Both approaches focus on the safety aspects related to the protection of the right to health. However, there seem to be cases where a risk cannot possibly be avoided or uncertainty reduced; this holds, for instance, for the declaration of side effects associated with pharmaceutical products, or when a decision about drug approval or withdrawal must be made on the available evidence. In these cases, risk communication seems to accomplish tasks other than preventing risk or reducing uncertainty. The present paper analyzes the legal instruments which have been developed in order to control and manage the risks related to drugs – such as the notion of “development risk” or “residual risk” – and relates them to different kinds of uncertainty. These are conceptualized as epistemic, ecological, metric, ethical, and stochastic, depending on their nature. By referring to this taxonomy, different functions of pharmaceutical risk communication are identified and connected with the legal tools of uncertainty management. The purpose is to distinguish the different functions of risk communication and to make explicit their different legal nature and implications.
The paper addresses charges of risk and loss aversion, as well as of irrationality, directed against the precautionary principle (PP), by providing an epistemic analysis of its specific role in the safety law system. In particular, I contend that: 1) risk aversion is not a form of irrational or biased behaviour; 2) both risk and loss aversion regard the form of the utility function, whereas PP rather regards the information on which to base the decision; 3) thus PP has formally nothing to do with risk or loss aversion, but rather with risk awareness; 4) PP removes a fictional construct in the legal system, according to which any hazard should be ignored and denied until it is scientifically proven; 5) the quandary originates in the tension between current methods of evidence evaluation and the logic underlying PP, which calls for a probabilistic epistemology.
Purpose The purpose of this paper is to suggest a definition of genetic information by taking into account the debate surrounding it, particularly the objections raised by Developmental Systems Theory to teleosemantic endorsements of the notion of genetic information, as well as deflationist approaches, which suggest ascribing to the notion of genetic information a heuristic value at most, and reducing it to that of causality. Design/methodology/approach The paper presents the notion of genetic information through its historical evolution and analyses it with the conceptual tools offered by philosophical theories of causation on one side and linguistics on the other. Findings The concept of genetic information is defined as a special kind of cause, which causes something to be one way rather than another by combining elementary units one way rather than another. Tested against the notion of “genetic error”, this definition is shown to provide an exhaustive account of the common denominators associated with the notion of genetic information: causal specificity, combinatorial mechanism, and arbitrariness. Originality/value The definition clarifies how the notion of information is understood when applied to genetic phenomena, and also contributes to the debate on the notion of information broadly meant, which is still affected by a lack of consensus.
Medical diagnosis has traditionally been recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes’ theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding a diagnosis, by providing probabilities of different diagnostic hypotheses given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) Medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called ‘abduction’ (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods of both inductive and deductive nature are used: modus tollens, the hypothetico-deductive method, abduction; 3) Bayes’ theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetico-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise to its correctness; 5) Medical diagnosis, especially differential diagnosis, also uses a kind of “probabilistic modus tollens”, in that signs (symptoms or laboratory data) are taken as strong evidence that a given hypothesis is not true: the focus is not on hypothesis confirmation, but instead on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, the odds are between the hypothesis that is potentially being excluded and a vague “other”.
This procedure has the advantage of providing a clue as to what evidence to look for, and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found. 6) Bayes’ theorem in the hypothesis-confirmation form can more faithfully, although idealistically, represent medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes’ theorem is however indispensable in cases of litigation, in order to assess the doctor’s responsibility for medical error by taking into account the weight of the evidence at his or her disposal.
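The contrast between the hypothesis-confirmation form of Bayes’ theorem and the “probabilistic modus tollens” of point 5 can be illustrated with a two-line computation. All probabilities below are invented for illustration, not drawn from the case history discussed in the paper:

```python
# Hypothetical numbers: candidate diagnosis H and a sign E that H
# would almost always produce.
p_h = 0.30              # prior probability of diagnosis H
p_e_given_h = 0.95      # H almost always produces sign E
p_e_given_not_h = 0.40  # E also occurs, less often, without H

# Confirmation form: P(H | E) via Bayes' theorem.
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
p_h_given_e = p_h * p_e_given_h / p_e

# "Probabilistic modus tollens": P(H | not-E). The absence of a sign
# that H would almost certainly produce nearly refutes H.
p_h_given_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)

print(round(p_h_given_e, 3))      # → 0.504
print(round(p_h_given_not_e, 3))  # → 0.034
```

Observing E only modestly confirms H here, while failing to observe E drives its probability close to zero: the refutation reading of the same theorem does the heavier inferential work, which is the asymmetry the abstract points to.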
Personalized medicine relies on two points: 1) causal knowledge about the possible effects of X in a given statistical population; 2) assignment of the given individual to a suitable reference class. Regarding point 1, standard approaches to causal inference are generally considered to be characterized by a trade-off between how confidently one can establish causality in any given study (internal validity) and extrapolating such knowledge to specific target groups (external validity). Regarding point 2, it is uncertain which reference class leads to the most reliable inferences. Pharmacovigilance, instead, focuses on both elements of the individual prediction at the same time, that is, the establishment of the possible causal link between a given drug and an observed adverse event, and the identification of possible subgroups where such links may arise. We develop an epistemic framework that exploits the joint contribution of different dimensions of evidence and allows one to deal with the reference class problem not only by relying on statistical data about covariances, but also by drawing on causal knowledge. That is, the probability that a given individual will face a given side effect will depend probabilistically on his or her characteristics and on the plausible causal models in which such features become relevant. The evaluation of the causal models is grounded in the available evidence and theory.
Background: Evidence suggesting adverse drug reactions often emerges unsystematically and unpredictably in the form of anecdotal reports, case series and survey data. Safety trials and observational studies also provide crucial information regarding the safety of drugs. Hence, integrating multiple types of pharmacovigilance evidence is key to minimising the risks of harm. Methods: In previous work, we began the development of a Bayesian framework for aggregating multiple types of evidence to assess the probability of a putative causal link between drugs and side effects. This framework arose out of a philosophical analysis of the Bradford Hill Guidelines. In this article, we expand the Bayesian framework and add “evidential modulators,” which bear on the assessment of the reliability of incoming study results. The overall framework for evidence synthesis, “E-Synthesis”, is then applied to a case study. Results: Theoretically and computationally, E-Synthesis exploits the coherence of partly or fully independent evidence converging towards the hypothesis of interest in order to update its posterior probability. With respect to other frameworks for evidence synthesis, our Bayesian model has the unique features of grounding its inferential machinery in a consolidated theory of hypothesis confirmation, and of allowing data from heterogeneous sources and methods to be quantitatively integrated into the same inferential framework. Conclusions: E-Synthesis is highly flexible concerning the allowed input, while at the same time relying on a consistent computational system that is philosophically and statistically grounded. Furthermore, by introducing evidential modulators, and thereby breaking up the different dimensions of evidence, E-Synthesis allows them to be explicitly tracked in updating causal hypotheses.
According to the Variety of Evidence Thesis (VET), items of evidence from independent lines of investigation are more confirmatory, ceteris paribus, than e.g. replications of analogous studies. This thesis is known to fail (Bovens and Hartmann; Claveau). However, the results obtained by the former only concern instruments whose evidence is either fully random or perfectly reliable, while in Claveau unreliability is modelled as deterministic bias. In both cases, the unreliable instrument delivers totally irrelevant information. We present a model which formalises both reliability and unreliability differently. Our instruments are either reliable but affected by random error, or biased but not deterministically so. Bovens and Hartmann’s results are counter-intuitive in that, in their model, a long series of consistent reports from the same instrument does not raise the suspicion of “too-good-to-be-true” evidence. This happens precisely because they contemplate neither the role of systematic bias nor the unavoidable random error of reliable instruments. In our model the Variety of Evidence Thesis fails as well, but the area of failure is considerably smaller than for Bovens and Hartmann and Claveau, and it covers realistic cases. The essential mechanism which triggers VET failure is the ratio of false to true positives for the two kinds of instruments. Our emphasis is on modelling beliefs about sources of knowledge and their role in hypothesis confirmation, in interaction with dimensions of evidence such as variety and consistency.
A current trend in bioethics considers genetic information as family property. This paper uses a logical approach to critically examine Matthew Liao’s proposal on the familial nature of genetic information as grounds for the duty to share it with relatives and for breach of confidentiality by the geneticist. The authors expand on the topic by examining the relationship between the arguments of probability and the familial nature of genetic information, as well as the concept of harm in the context of genetic risk. Lastly, they examine the concept of harm in relation to the type of situations where the potential recipient of the information is not the person directly affected by the risk.
If well-designed, the results of a Randomised Clinical Trial (RCT) can justify a causal claim between treatment and effect in the study population; however, additional information might be needed to carry over this result to another population. RCTs have been criticized precisely on the grounds of failing to provide this sort of information (Evidence, inference and enquiry, Oxford University Press, New York, 2011), as well as of black-boxing important details regarding the mechanisms underpinning the causal law instantiated by the RCT result. On the other hand, so-called In Silico Clinical Trials (ISCTs) face the same criticisms addressed against standard modelling and simulation techniques, and cannot be equated to experiments (Philosophy of molecular medicine: foundational issues in research and practice, Routledge, New York, 2017; Parker in Synthese 169:483–496, 2009; Parke in Philos Sci 81:516–536, 2014; Diez Roux in Am J Epidemiol 181:100–102, 2015, and related discussions in Frigg and Reiss in Synthese 169:593–613, 2009; Winsberg in Synthese 169:575–592, 2009; Beisbart and Norton in Int Stud Philos Sci 26:403–422, 2012). We undertake a formal analysis of both methods in order to identify their distinct contributions to causal inference in the clinical setting. Britton et al.’s study (E2098–E2105, 2013) on the impact of ion current variability on cardiac electrophysiology is used for illustrative purposes. We deduce that, by predicting variability through interpolation, ISCTs aid with problems regarding the extrapolation of RCT results, and therefore with assessing their external validity. Furthermore, ISCTs can be said to encode “thick” causal knowledge, as opposed to the “thin” difference-making information inferred from RCTs. Hence, ISCTs and RCTs cannot replace one another; rather, they are complementary, in that the former provide information about the determinants of variability of causal effects, while the latter can, under certain conditions, establish causality in the first place.
Findings about the desire for health-risk information are heterogeneous and sometimes contradictory. In particular, they seem to be at variance with established psychological theories of information-seeking behavior. The present paper posits the decision about treating illness with medicine as the causal determinant of the expected net value of information, and attempts to explain idiosyncrasies in information-seeking behavior by using the notion of decision sensitivity to incoming information. Furthermore, active information avoidance is explained by modeling the expected emotional distress potentially brought about by “bad news” as a disutility factor in pay-off maximization. In this context, two notions of uncertainty are distinguished: an epistemic uncertainty, related to the prognostic probability assigned to the risk, and an emotional uncertainty, related to the expected damage. Health-risk information can both reduce epistemic and increase emotional uncertainty, giving rise to idiosyncratic processing strategies.
Package leaflets (PLs) belong to the complex communication system related to the minimization and prevention of pharmaceutical risk. Their legal nature is not exhausted by safety regulation, though: as a privileged form of product instruction, they are also subject to liability regulation, with a consequent reallocation of damage responsibility through risk disclosure. This article presents the results of a doctoral dissertation devoted to the legal and communicative analysis of PL information. After illustrating the articulation of pharmaceutical risk through risk prevention norms, the paper continues with a discussion of the PL’s role within the therapeutic decision as a complementary vehicle to the doctor’s information. It emerges that the liability framework in which both information channels are embedded determines a communication model which, far from promoting a shared decision process, radicalizes the two-step communication structure typical of the informed consent model inherited from surgery judicature. The second part investigates PL information as a source of knowledge updating, using the methodological tools provided by Bayesian decision theory. Finally, an empirical study conducted on a sample of 55 drug consumers investigates the impact of PL information on drug risk perception and its perceived value for the therapeutic decision.