Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such MI checklists are usually developed independently by groups working within particular biologically or technologically delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist can be a non-trivial exercise. Checklists are also inevitably partially redundant with respect to one another, and where they overlap is far from straightforward to determine. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult and inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations): a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, to foster collaborative development, and ultimately to promote gradual integration of checklists.
This book investigates how citizens who have differences and disagreements ought to relate to one another in a liberal democracy. Specifically, this book advances a metaphor of citizenship that I call 'role-based constitutional fellowship.' Role-based constitutional fellowship, I argue, is a desirable way for citizens to relate to one another in conditions of modern pluralism, where multiple races, ethnicities, religions, and economic statuses exist and where citizens adhere to and pursue competing political interests, creeds, and objectives. Under role-based constitutional fellowship, citizens share a sense that they are united in a common aim and that they are largely committed to doing what is necessary to pursue that aim - that they are fellows. I describe this sense of fellowship as constitutional and role-based.
In “Generalizing Generalizability in Information Systems Research,” Lee and Baskerville (2003) attempt to clarify generalization and distinguish four types of generalization. Although this is a useful objective, what they call generalization is often not generalization at all in the proper sense of the word. We elucidate generalization by locating their major errors. A main source of these is their failure to understand the depth of Hume’s problem of induction. We give a thorough explication of the problem and then give a solution. Lastly, we propose an alternative taxonomy of generalization: theoretical, within-population, cross-population, contextual, and temporal.
In “Generalizing Generalizability in Information Systems Research,” Lee and Baskerville try to clarify generalization and classify it into four types. Unfortunately, their account is problematic. We propose repairs. Central among these is our balance-of-evidence argument that we should adopt the view that Hume’s problem of induction has a solution, even if we do not know what it is. We build upon this by proposing an alternative classification of induction. There are five types of generalization: theoretical, within-population, cross-population, contextual, and temporal, with theoretical generalization being across the empirical and theoretical levels and the rest within the empirical level. Our classification also includes two kinds of inductive reasoning that do not belong to the domain of generalization. We then discuss the implications of our classification for information systems research.
In the twenty-seven questions translated in this volume, most never before published in English, William of Ockham considers a host of theological and philosophical issues, including the nature of virtue and vice, the relationship between the intellect and the will, the scope of human freedom, the possibility of God's creating a better world, the role of love and hatred in practical reasoning, whether God could command someone to do wrong, and more.
Among the most widely discussed of William of Ockham’s texts on ethics is his Quodlibet III, q. 14. But despite a large literature on this question, there is no consensus on what Ockham’s answer is to the central question raised in it: specifically, what obligations one would have if one were to receive a divine command not to love God. (Surprisingly, there is also little explicit recognition in the literature of this lack of consensus.) Via a close reading of the text, I argue, contrary to much of the literature, that Ockham believes that if one were given this command, one would be obligated to refrain from loving God and would also be able to fulfill this obligation without any moral wrongdoing. Among other results, this study will help clarify Ockham’s much-discussed claim that loving God is “a necessarily virtuous act.”
[Work in progress.] According to standard late medieval Christian thought, humans in heaven are unable to sin, having been “confirmed” in their goodness, and are nevertheless more free than humans are in the present life. The rise of voluntarist conceptions of the will in the late thirteenth century made it increasingly difficult to hold onto both claims. Peter Olivi suggested that the impeccability of the blessed was dependent upon a special activity of God upon their wills and argued that this external constraint upon their wills did not eliminate their freedom. Later voluntarists largely agreed with Olivi that the confirmation of the blessed depends upon God’s activity in some way, but disputed the means by which and the extent to which the wills of those in heaven could be said to retain their freedom. This paper will examine various attempts made either to harmonize these two claims or else to soften the blow of rejecting one of them; among the authors surveyed will be Peter John Olivi, John Duns Scotus, Henry of Harclay, William of Ockham, Walter Chatton, and Marguerite Porete.
This paper states two sets of axioms sufficient for extensive measurement. The first set, like previously published axioms, requires that each of the objects measured must be classifiable as either greater than, or less than, or indifferent to each other object. The second set, however, requires only that any two objects be classifiable as either indifferent or different, and does not need any information about which object is greater. Each set of axioms produces an extensive scale with the usual properties of additivity and uniqueness except for unit. Moreover, the axioms imply Weber's Law: whether two objects are indifferent depends only upon the ratio of their scale values.
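The abstract's closing claims can be sketched in standard measurement-theoretic notation. This is only an illustrative sketch: the scale symbol φ and the Weber constant δ are assumed notation, not the paper's own.

```latex
% Additivity: the scale value of a concatenation a \circ b
% is the sum of the component scale values.
\varphi(a \circ b) = \varphi(a) + \varphi(b)

% Uniqueness up to unit: any two admissible scales differ
% only by a positive multiplicative constant.
\varphi'(a) = \alpha \, \varphi(a), \qquad \alpha > 0

% Weber's Law: indifference (a \sim b) depends only on the ratio
% of scale values, for some fixed constant \delta \ge 1.
a \sim b \iff \frac{1}{\delta} \le \frac{\varphi(a)}{\varphi(b)} \le \delta
```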
It has been recently argued by a number of metaphysicians—Trenton Merricks and Eric Olson among them—that any variety of dualism that claims that human persons have souls as proper parts (rather than simply being identical to souls) will face a too-many-thinkers problem. In this paper, I examine whether this objection applies to the views of Aquinas, who famously claims that human persons are soul-body composites. I go on to argue that a straightforward reading of Aquinas’s texts might lead us to believe that he falls prey to Merricks and Olson’s objection, but that a more heterodox interpretation reveals a way to avoid this problem.
Ockham’s own formulations of his Razor state that one should only include a given entity in one’s ontology when one has either sensory evidence, demonstrative argument, or theological authority in favor of it. But how does Ockham decide which theological claims to treat as data for theory construction? Here I show how over time (perhaps in no small part due to pressure and attention from ecclesiastical censors) Ockham refined and changed the way he formulated his Razor, particularly the “authority clause” that states that authoritative theological pronouncements constitute a reason for postulating entities in one’s ontology. This refinement proceeded across three stages, culminating in the political writings of the final period of his life, in which Ockham offers reasons (not previously mentioned in scholarly discussions of Ockham’s Razor) against granting ecclesial authority any significant role to play in settling ontological questions.
William Ockham held that, in addition to written and spoken language, there exists a mental language, a structured representational system common to all thinking beings. Here I present and evaluate an argument found in several places across Ockham's corpus, wherein he argues that positing a mental language is necessary for the nominalist to meet certain ontological constraints imposed by Aristotle’s account of scientific demonstration.
The purpose of this work is to elaborate an empirically grounded mathematical model of the magnitude of consequences component of “moral intensity” (Jones, Academy of Management Review, 16(2), 366, 1991) that can be used to evaluate different ethical situations. The model is built using the analytical hierarchy process (AHP) (Saaty, The Analytic Hierarchy Process, 1980) and empirical data from the legal profession. One contribution of our work is that it illustrates how AHP can be applied in the field of ethics. Following a review of the literature, we discuss the development of the model. We then illustrate how the model can be used to rank-order three well-known ethical reasoning cases in terms of the magnitude of consequences. The work concludes with implications for theory, practice, and future research. Specifically we discuss how this work extends the previous work by Collins (Journal of Business Ethics, 8(1), 1989) regarding the nature of harm variable. We also discuss the contribution this work makes in the development of ethical scenarios used to test hypotheses in the field of business ethics. Finally, we discuss how the model can be used for after-action review, contribute to organizational learning, train employees in ethical reasoning, and aid in the design and development of decision support systems that support ethical reasoning.
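The mechanics of the AHP step the abstract names can be sketched briefly. The sketch below uses the row geometric-mean method, a common dependency-free stand-in for Saaty's principal-eigenvector calculation; the function name and the pairwise comparison values are hypothetical illustrations, not data from the paper.

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a square pairwise
    comparison matrix via the row geometric-mean method."""
    n = len(matrix)
    # Geometric mean of each row, then normalize to sum to 1.
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical example: three ethics cases compared pairwise on
# magnitude of consequences (Saaty's 1-9 scale; values illustrative).
comparisons = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(comparisons)
# The resulting weights rank-order the cases by priority.
```

For a full AHP application one would also check the consistency ratio of the comparison matrix before trusting the resulting ranking.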
The primary appeal of stakeholder theory in business ethics derives from its promise to help solve two large and often morally difficult problems: (1) how to manage people fairly and efficiently and (2) how to determine the extent of a firm's moral responsibilities beyond its obligations to enhance its profits and economic value. This article investigates a variety of conceptual quandaries that stakeholder theory faces in addressing these two general problems. It argues that these quandaries pose intractable obstacles for stakeholder theory which prevent it from delivering on its large promises. Acknowledging that various versions of stakeholder theory have made a contribution in elucidating the complex nature of firms and business decision making, the article argues that it is time to move on. More precise explications of the nature of modern firms focusing on the application of basic moral principles to different business contexts and situations are likely to prove more accurate and useful.
Although contemporary methods of environmental regulation have registered some significant accomplishments, the current system of environmental law is not working well enough. First the good news: Since the first Earth Day in 1970, smog has decreased in the United States by thirty percent. The number of lakes and rivers safe for fishing and swimming has increased by one-third. Recycling has begun to reduce levels of municipal waste. Ocean dumping has been curtailed. Forests have begun to expand. One success story is the virtual elimination of airborne lead in the United States. Another is the rapid phase-out of ozone-layer depleting chemicals worldwide. Nevertheless, prominent commentators of diverse political persuasions agree that conventional models of environmental law have “failed.” Many environmental problems remain unsolved: species extinction, global desertification and deforestation, possible global climate change, and continuing severe air and water pollution in urban areas and poor countries. What is more, successful environmental protection has come only at enormous economic cost. The Environmental Protection Agency estimates that by the year 2000 the United States will spend approximately two percent of its gross national product on environmental pollution control. Academic economists have pointed out the nonsensical inefficiency of many environmental regulations, but usually to no avail.
This paper reexamines the long-standing problem of the nature and magnitude of the catastrophic Hellenic expedition to Egypt c. 460-454. An uneasy scholarly consensus posits that many fewer than the 200 triremes implied by Thucydides were involved in the momentous defeat, yet the arguments employed by proponents and detractors of this hypothesis have not been decisive. This paper attempts to develop a better understanding of the final stages of the campaign in order to settle the question of losses. Thucydides offers the most reliable narrative of the events in Egypt, but the compressed nature of the pentekontaetia has left us with a brief, lacunary text. Examination of the verbs poliorkein and kataklēiein and the noun poliorkia in appropriate contexts throughout Thucydides' history reveals that the words connote a tight blockade that seeks to deny all supplies to the besieged; the terms do not normally convey less stringent varieties of military harassment. Application of this understanding to the passages in question shows that the 200 triremes initially mentioned by Thucydides could not possibly have been engaged in Egypt when the siege of Prosopitis island began: a force of such size under a tight blockade could never have held out for 18 months. This conclusion is supported by an economic and demographic survey of the fifth-century B.C. Egyptian Delta, which suggests that resources would not have been plentiful in the region. A much smaller Greek force, perhaps 40 to 50 triremes, must have been involved in the final siege.