Research ethicists have recently declared a new ethical imperative: that researchers should communicate the results of research to participants. For some analysts, the obligation is restricted to the communication of the general findings or conclusions of the study. However, other analysts extend the obligation to the disclosure of individual research results, especially where these results are perceived to have clinical relevance. Several scholars have advanced cogent critiques of the putative obligation to disclose individual research results. They question whether ethical goals are served by disclosure or violated by non-disclosure, and whether the communication of research results respects ethically salient differences between research practices and clinical care. Empirical data on these questions are limited. Available evidence suggests, on the one hand, growing support for disclosure, and on the other, the potential for significant harm.
Trust between transaction partners in cyberspace has come to be considered a distinct possibility. In this article the focus is on the conditions for its creation by way of assuming, not inferring, trust. After a survey of its development over the years (in the writings of authors like Luhmann, Baier, Gambetta, and Pettit), this mechanism of trust is explored in a study of personal journal blogs. After a brief presentation of some technicalities of blogging and authors’ motives for writing their diaries, I try to answer the question, ‘Why do the overwhelming majority of web diarists dare to expose the intimate details of their lives to the world at large?’ It is argued that the mechanism of assuming trust is at play: authors simply assume that future visitors to their blog will be sympathetic readers, worthy of their intimacies. This assumption then may create a self-fulfilling cycle of mutual admiration. Thereupon, this phenomenon of blogging about one’s intimacies is linked to Calvert’s theory of ‘mediated voyeurism’ and Mathiesen’s notion of ‘synopticism’. It is to be interpreted as a form of ‘empowering exhibitionism’ that reaffirms subjectivity. Various types of ‘synopticon’ are distinguished, each drawing the line between public and private differently. In the most ‘radical’ synopticon blogging proceeds in total transparency and the concept of privacy is declared obsolete; the societal gaze of surveillance is proudly returned and nullified. Finally it is shown that, in practice, these conceptions of blogging are put to a severe test: authors often have to cope with known people from ‘real life’ complaining, and with ‘trolling’ strangers.
Can trust evolve on the Internet between virtual strangers? Recently, Pettit answered this question in the negative. Focusing on trust in the sense of ‘dynamic, interactive, and trusting’ reliance on other people, he distinguishes between two forms of trust: primary trust rests on the belief that the other is trustworthy, while the more subtle secondary kind of trust is premised on the belief that the other cherishes one’s esteem, and will, therefore, reply to an act of trust in kind (‘trust-responsiveness’). Based on this theory Pettit argues that trust between virtual strangers is impossible: they lack all evidence about one another, which prevents the imputation of trustworthiness and renders the reliance on trust-responsiveness ridiculous. I argue that this argument is flawed, both empirically and theoretically. In several virtual communities amazing acts of trust between pure virtuals have been observed. I propose that these can be explained as follows. On the one hand, social cues, reputation, reliance on third parties, and participation in (quasi-)institutions allow imputing trustworthiness to varying degrees. On the other, precisely trust-responsiveness is also relied upon, as a necessary supplement to primary trust. In virtual markets, esteem as a fair trader is coveted since it contributes to building up one’s reputation. In task groups, a hyperactive style of action may be adopted which amounts to assuming (not: inferring) trust. Trustors expect that their virtual co-workers will reply in kind, since such an approach is to be considered the most appropriate in cyberspace. In non-task groups, finally, members often display intimacies, confident that someone else ‘out there’ will return them. This is facilitated by the one-to-many, asynchronous mode of communication within mailing lists.
English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical layer. Further, surveillance exhibits several troubling features: questionable profiling practices, the use of the controversial measure of reputation, ‘oversurveillance’ where quantity trumps quality, and a prospective loss of the required moral skills whenever bots take over from humans. The most troubling aspect, though, is that Wikipedia has become a Janus-faced institution. One face is the basic platform of MediaWiki software, transparent to all. Its other face is the anti-vandalism system, which, in contrast, is opaque to the average user, in particular as a result of the algorithms and neural networks in use. Finally it is argued that this secrecy impedes a much needed discussion from unfolding; a discussion that should focus on a ‘rebalancing’ of the anti-vandalism system and the development of more ethical information practices towards contributors.
Humanitarian health care practitioners working outside familiar settings, and without familiar supports, encounter ethical challenges both familiar and distinct. The ethical guidance they rely upon ought to reflect this. Using data from empirical studies, we explore the strengths and weaknesses of two ethical models that could serve as resources for understanding ethical challenges in humanitarian health care: clinical ethics and public health ethics. The qualitative interviews demonstrate the degree to which traditional teaching and values of clinical health ethics seem insufficient for addressing all the realities of health care practice during humanitarian missions. They equally suggest that greater good orientations of public health ethics can thwart the best intentions of health care professionals wanting to attend to the interests of individual patients. Even though neither is complete on its own for helping guide health professionals on field missions, taken together these models have much to offer. At the same time, the narratives of the humanitarian health care workers illustrate how some of the crucial differences between public health ethics and clinical ethics generate tensions in humanitarian health practice. We offer an analysis of some of the complexities this creates for humanitarian health care ethics, and consider ways of adjudicating between the two models.
Two property regimes for software development may be distinguished. Within corporations, on the one hand, a Private Regime obtains which excludes all outsiders from access to a firm's software assets. It is shown how the protective instruments of secrecy and both copyright and patent have been strengthened considerably during the last two decades. On the other, a Public Regime among hackers may be distinguished, initiated by individuals, organizations or firms, in which source code is freely exchanged. It is argued that copyright is put to novel use here: claiming their rights, authors write `open source licenses' that allow public usage of the code, while at the same time regulating the inclusion of users. A `regulated commons' is created. The analysis focuses successively on the most important open source licenses to emerge, the problem of possible incompatibility between them (especially as far as the dominant General Public License is concerned), and the fragmentation into several user communities that may result.
In order to fight massive vandalism the English-language Wikipedia has developed a system of surveillance which is carried out by humans and bots, supported by various tools. Central to the selection of edits for inspection is the process of using filters or profiles. Can this profiling be justified? On the basis of a careful reading of Frederick Schauer’s books about rules in general (1991) and profiling in particular (2003) I arrive at several conclusions. The effectiveness, efficiency, and risk-aversion of edit selection all greatly increase as a result. The argument for increasing predictability suggests making all details of profiling manifestly public. Also, a wider distribution of the more sophisticated anti-vandalism tools seems indicated. As to the specific dimensions used in profiling, several critical remarks are developed. When patrollers use ‘assisted editing’ tools, severe ‘overuse’ of several features (anonymity, warned before) is a definite possibility, undermining profile efficacy. The easy remedy suggested is to render all of them invisible on the interfaces as displayed to patrollers. Finally, concerning not only assisted editing tools but tools against vandalism generally, it is argued that the anonymity feature is a sensitive category: anons have been in dispute for a long time (while being more prone to vandalism). Targeting them as a special category violates the social contract upon which Wikipedia is based. The feature is therefore a candidate for mandatory ‘underuse’: it should be banned from all anti-vandalism filters and profiling algorithms, and no longer be visible as a special edit trait.
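To make the abstract's notions of feature-based edit profiling and of mandatory 'underuse' of the anonymity feature concrete, a minimal illustrative sketch follows. The feature names and weights are invented assumptions for exposition only; they do not reproduce Wikipedia's actual anti-vandalism tools or their heuristics.

```python
# Hypothetical sketch of feature-based edit profiling; features and weights are invented.

def score_edit(edit, profile):
    """Suspicion score: weighted sum of the binary edit features present in the profile."""
    return sum(weight for feature, weight in profile.items() if edit.get(feature, False))

# Illustrative profile: each feature raises an edit's priority for patroller inspection.
full_profile = {"anonymous": 2.0, "warned_before": 3.0, "large_deletion": 1.5}

# Mandatory 'underuse' of the anonymity feature: drop it from the profile altogether.
restricted_profile = {f: w for f, w in full_profile.items() if f != "anonymous"}

edit = {"anonymous": True, "warned_before": False, "large_deletion": True}
print(score_edit(edit, full_profile))        # 3.5: anonymity still counts against the editor
print(score_edit(edit, restricted_profile))  # 1.5: the edit is judged on content features alone
```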
Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information about the functionality of the system in use; for fully automated profiling decisions even an explanation has to be given. However, the trade secrets and intellectual property rights involved must be respected as well. These conflicting rights must be balanced against each other; what will be the outcome? Looking back to 1995, when a similar kind of balancing had been decreed in Europe concerning the right of access, Wachter et al. find that according to judicial opinion only generalities of the algorithm had to be disclosed, not specific details. This hardly augurs well for a future right of access, let alone a right to explanation. Thereupon the landscape of IPRs for machine learning is analysed. Spurred by new USPTO guidelines that clarify when inventions are eligible to be patented, the number of patent applications in the US related to ML in general, and to “predictive analytics” in particular, has soared since 2010—and Europe has followed. I conjecture that in such a climate of intensified protection of intellectual property, companies may legitimately claim that the more their application combines several ML assets that, in addition, are useful in multiple sectors, the more value is at stake when confronted with a call for explanation by data subjects. Consequently, the right to explanation may be severely crippled.
A cursory view of the history of ethical thinking shows the presence of a limited variety of 'styles of ethical reasoning', a term used on analogy with Crombie's 'styles of scientific reasoning' for systems of thought that set their own standards and techniques for providing evidence. Each style of reasoning tends to suggest a specific role for ethicists. Styles are appropriate relative to particular contexts of problems and require special institutions to flourish. Herman De Dijn's discussion in Taboes, monsters en loterijen of the problems that hybrids produced in biomedicine and biotechnology pose for the symbolic order is shown to rest on two 'styles of ethical reasoning', viz. 'postulation of principles' and 'hermeneutics'. It is argued that both styles fall short, but for different reasons. 'Postulation of principles' is ill-equipped to answer problems posed by a world of change. Hermeneutics requires a pace of time and institutions not available in current society. Confronted with the problems De Dijn discusses, therefore, two separate problems arise. Ethicists not only have to learn how to account for the hybrids that biomedicine and biotechnology produce, they also have to think about and contribute to institutional innovation. Rather than opening a space for discussing both tasks, De Dijn's styles of reasoning close off this matter.
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions preferably should become more understandable; to that effect, the models of machine learning to be employed should either be interpreted ex post or be interpretable by design ex ante.
In communities of user-generated content, systems for the management of content and/or their contributors are usually accepted without much protest. Not so, however, in the case of Wikipedia, in which the proposal to introduce a system of review for new edits (in order to counter vandalism) led to heated discussions. This debate is analysed, and arguments of both supporters and opponents (of English, German and French tongue) are extracted from Wikipedian archives. In order to better understand this division of the minds, an analogy is drawn with theories of bureaucracy as developed for real-life organizations. From these it transpires that bureaucratic rules may be perceived as springing from either a control logic or an enabling logic. In Wikipedia, then, both perceptions were at work, depending on the underlying views of participants. Wikipedians either rejected the proposed scheme (because it is antithetical to their conception of Wikipedia as a community) or endorsed it (because it is consonant with their conception of Wikipedia as an organization with clearly defined boundaries). Are other open-content communities susceptible to the same kind of ‘essential contestation’?
Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies many of its practitioners assert that “governance by discipline” has given way to “governance by risk”. The individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance with a norm occupies centre stage; suspected deviants are subjected to close attention—as the precursor of possible sanctions. The predictive modelling involved uses personal data from both the focal institution and elsewhere. As a result, the individual re-emerges as the focus of scrutiny. Subsequently, small excursions into Foucauldian texts discuss his discourses on the creation of the “delinquent”, and on the governmental approach to smallpox epidemics. It is shown that his insights only mildly resemble prediction as based on machine learning; several conceptual steps had to be taken for modern machine learning to evolve. Finally, the options available to those subjected to predictive disciplining are discussed: to what extent can they comply, question, or resist? Through a discussion of the concepts of transparency and “gaming the system” I conclude that our predicament is gloomy, in a Kafkaesque fashion.
Does the core activity of the schoolmaster consist in explaining and transmitting his own knowledge? The French philosopher Jacques Rancière shows that an improvised experiment by Joseph Jacotot offers us a different example: the ignorant schoolmaster. In his book The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation (Le maître ignorant: Cinq leçons sur l'émancipation intellectuelle) he argues that the ignorant schoolmaster is just as capable of teaching pupils something as the knowing schoolmaster, or even more so. Rancière in fact takes two educational practices as examples: the traditional practice of the knowing schoolmaster who explains and transmits knowledge, and the experimental practice of the ignorant schoolmaster who gives no explanations but is above all directed at the verification of attention. In this article I want to show, on the basis of a brief analysis of these two practices, how we can derive from Rancière’s book a model for articulating the theoretical principles that are implicitly at work in educational practices. This model rests on the following premise: by taking a successful or, conversely, a problematic educational practice as an example, we can articulate the theoretical principles at work in it. This makes it possible to shift the focus from the practices to the principles. Educational practice thereby becomes intelligible in a new way and can be made theoretically explicit.
The ideas behind open source software are currently applied to the production of encyclopedias. A sample of six English text-based, neutral-point-of-view, online encyclopedias of the kind is identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion; that is, from granting unlimited trust to limited trust. Their conceptions of the proper role for experts are also explored and it is argued that to a great extent they determine editorial policies. Subsequently, internal discussions about quality guarantee at Wikipedia are rendered. All indications are that review and ‘super-review’ of new edits will become policy, to be performed by Wikipedians with a better reputation. Finally, while for encyclopedias the issue of organizational trust largely coincides with epistemological trust, a link is made with theories about the acceptance of testimony. It is argued that both non-reductionist views (the ‘acceptance principle’ and the ‘assurance view’) and reductionist ones (an appeal to background conditions, and a newly defined ‘expertise view’) have been implemented in editorial strategies over the past decade.
Open-source communities that focus on content rely squarely on the contributions of invisible strangers in cyberspace. How do such communities handle the problem of trusting that strangers have good intentions and adequate competence? This question is explored in relation to communities in which such trust is a vital issue: peer production of software (FreeBSD and Mozilla in particular) and encyclopaedia entries (Wikipedia in particular). In the context of open-source software, it is argued that trust was inferred from an underlying ‘hacker ethic’, which already existed. The Wikipedian project, by contrast, had to create an appropriate ethic along the way. In the interim, the assumption simply had to be that potential contributors were trustworthy; they were granted ‘substantial trust’. Subsequently, projects from both communities introduced rules and regulations which partly substituted for the need to perceive contributors as trustworthy. They faced a design choice in the continuum between a high-discretion design (granting a large amount of trust to contributors) and a low-discretion design (leaving only a small amount of trust to contributors). It is found that open-source designs for software and encyclopaedias are likely to converge in the future towards a mid-level of discretion. In such a design the anonymous user is no longer invested with unquestioning trust.
The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this study: Did the signatory companies actually try to implement these principles in practice, and if so, how? What are their views on the role of other societal actors in steering AI towards the stated principles? It is concluded that some three of the largest amongst them have carried out valuable steps towards implementation, in particular by developing and open sourcing new software tools. To them, charges of mere ‘ethics washing’ do not apply. Moreover, some 10 companies from both the USA and Europe have publicly endorsed the position that apart from self-regulation, AI is in urgent need of governmental regulation. They mostly advocate focussing regulation on high-risk applications of AI, a policy which to them represents the sensible middle course between laissez-faire on the one hand and outright bans on technologies on the other. The future shaping of standards, ethical codes, and laws as a result of these regulatory efforts remains, of course, to be determined.
Many virtual communities that rely on user-generated content (such as social news sites, citizen journals, and encyclopedias in particular) offer unrestricted and immediate ‘write access’ to every contributor. It is argued that these communities do not just assume that the trust granted by that policy is well-placed; they have developed extensive mechanisms that underpin the trust involved (‘backgrounding’). These target contributors (stipulating legal terms of use and developing etiquette, both underscored by sanctions) as well as the contents contributed by them (patrolling for illegal and/or vandalist content, variously performed by humans and bots; voting schemes). Backgrounding trust is argued to be important since it facilitates the avoidance of bureaucratic measures that may easily cause unrest among community members and chase them away.
Open-content communities that focus on co-creation without requirements for entry have to face the issue of institutional trust in contributors. This research investigates the various ways in which these communities manage this issue. It is shown that communities of open-source software continue to rely mainly on hierarchy (reserving write-access for higher echelons), which substitutes for (the need for) trust. Encyclopedic communities, though, largely avoid this solution. In the particular case of Wikipedia, which is confronted with persistent vandalism, another arrangement has been pioneered instead. Trust (i.e. full write-access) is ‘backgrounded’ by means of a permanent mobilization of Wikipedians to monitor incoming edits. Computational approaches have been developed for the purpose, yielding both sophisticated monitoring tools that are used by human patrollers, and bots that operate autonomously. Measures of reputation are also under investigation within Wikipedia; their incorporation in monitoring efforts, as an indicator of the trustworthiness of editors, is envisaged. These collective monitoring efforts are interpreted as focusing on avoiding possible damage being inflicted on Wikipedian spaces, thereby making it possible to keep the discretionary powers of editing intact for all users. Further, the essential differences between backgrounding and substituting trust are elaborated. Finally it is argued that the Wikipedian monitoring of new edits, especially by its heavy reliance on computational tools, raises a number of moral questions that need to be answered urgently.
Hacker communities of the 1970s and 1980s developed a quite characteristic work ethos. Its norms are explored and shown to be quite similar to those which Robert Merton suggested govern academic life: communism, universalism, disinterestedness, and organized scepticism. In the 1990s the Internet multiplied the scale of these communities, allowing them to create successful software programs like Linux and Apache. After renaming themselves the `open source software' movement, with an emphasis on software quality, they succeeded in gaining corporate interest. As one of the main results, their `open' practices have entered industrial software production. The resulting clash of cultures, between the more academic CUDOS norms and their corporate counterparts, is discussed and assessed. In all, the article shows that software practices are a fascinating seedbed for the genesis of work ethics of various kinds, depending on their societal context.
On the question of how faith and reason relate to one another there is no communis opinio, nor has there ever been one in the past. On the contrary, the history of philosophy displays a complex variety of views. In this article I have ordered these into a limited number of basic schemes or basic models. I discuss seven such basic models, designating them briefly with the terms identification, conflict, subordination, complementarity, foundation, authenticity, and transformation. My analysis shows how these models, once present on the public forum, continue to assert themselves to this day. Their strengths and weaknesses challenge us to take a position of our own in the debate between religion and reason.
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Can transparency contribute to restoring accountability for such systems? Several objections are examined: the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are inherently opaque. It is concluded that transparency is certainly useful, but only up to a point: extending it to the public at large is normally not to be advised. Moreover, in order to make algorithmic decisions understandable, models of machine learning to be used should either be interpreted ex post or be interpretable by design ex ante.
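As a purely illustrative aside, the contrast drawn here between ex post interpretation and interpretability by design ex ante can be sketched in a few lines of code. All data, feature names, and weights below are hypothetical assumptions, not taken from any of the systems discussed.

```python
# Hypothetical sketch: an interpretable-by-design scorer versus ex post probing of a black box.

# Ex ante: a linear model whose decision logic is readable directly from its weights.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def transparent_score(applicant):
    """Each feature's contribution is visible by inspecting WEIGHTS."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

# Ex post: an opaque decision function, explained only by perturbing its inputs.
def black_box(applicant):
    return 1 if applicant["income"] - 1.5 * applicant["debt"] > 2 else 0

def explain_ex_post(model, applicant, delta=1.0):
    """Crude local explanation: does nudging each feature by delta change the decision?"""
    base = model(applicant)
    return {k: model({**applicant, k: v + delta}) - base for k, v in applicant.items()}

applicant = {"income": 3.0, "debt": 1.0, "years_employed": 4.0}
print(transparent_score(applicant))           # 1.9, readable off the weights
print(explain_ex_post(black_box, applicant))  # only the income nudge flips this decision
```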
During the last two decades, speeded up by the development of the Internet, several types of commons have been opened up for intellectual resources. In this article their variety is explored as to the kind of resources and the type of regulation involved. The open source software movement initiated the phenomenon, by creating a copyright-based commons of source code that can be labelled `dynamic': allowing both use and modification of resources. Additionally, such a commons may be either protected from appropriation (by `copyleft' licensing), or unprotected. Around the year 2000, this approach was generalized by the Creative Commons initiative. In the process they added a `static' commons, in which only use of resources is allowed. This mould was applied to the sciences and the humanities in particular, and various Open Access initiatives unfolded. A final aspect of copyright-based commons is the distinction between active and passive commons: while the latter is only a site for obtaining resources, the former is also a site for production of new resources by communities of volunteers (`peer production'). Finally, several patent commons are discussed, which mainly aim at preventing patents blocking the further development of science. Throughout, attention is drawn to interrelationships between the various commons.
How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as a consequence, they approach the robots involved as being trustworthy (“zones of trust”). Properly speaking, users rely on the overall accountability of the institution. Besides this option we explore some novel ways for trust development: trust becomes normatively laden and thereby the mechanism of exclusive reliance on the normative force of trust (as-if trust) may come into play - the efficacy of which has already been proven for persons meeting face-to-face or over the Internet (virtual trust). For one thing, machines may evolve into moral machines, or machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously (being science fiction, immoral, or both). For another, the new trend in robotics is towards coactivity between human and machine operators in a team (away from making robots as autonomous as possible). Inside the team trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust toward outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to human operators of the team, who operate as linking pin between the outside world and the team. Since the robotic team has now been turned into an anthropomorphic team, users may well develop normative trust towards them; correspondingly, trusting the team in as-if fashion becomes feasible.
In `real' space, third parties have always been useful to facilitate transactions. With cyberspace opening up, it is to be expected that intermediation will also develop in a virtual fashion. The article focuses upon new cyberroles for third parties that seem to announce themselves clearly. First, virtualization of the market place has paved the way for `cybermediaries', who broker between supply and demand of material and informational goods. Secondly, cybercommunication has created new uncertainties concerning informational security and privacy. Also, as in real space, transacting supposes some decency with one's partners. These needs are being addressed by Trusted Third Parties, anonymizers, escrow arrangements, facilitators and external auditing. Virtual reputation tracking mechanisms are being developed as well. Finally, in order to resolve disputes, mediators and arbitrators have started offering their services online. In the closing section these emerging cyberroles are assessed critically. It is argued, in particular, that both cybermediaries and cyberjustice pose serious threats to privacy. Moreover, online dispute resolution, as it is practised now, neglects its duties of public accounting.
Aristotle's De Anima is the first systematic philosophical account of the soul, which serves to explain the functioning of all mortal living things. In his commentary, Ronald Polansky argues that the work is far more structured and systematic than previously supposed. He contends that Aristotle seeks a comprehensive understanding of the soul and its faculties. By closely tracing the unfolding of the many-layered argumentation and the way Aristotle fits his inquiry meticulously within his scheme of the sciences, Polansky answers questions relating to the general definition of soul and the treatment of each of the soul's principal capacities: nutrition, sense perception, phantasia, intellect, and locomotion. The commentary sheds light on every section of the De Anima and the work as a unit. It offers a challenge to earlier and current interpretations of the relevance and meaning of Aristotle's highly influential treatise.
In this article I examine whether the standard approaches to citizenship education in the Low Countries are suited to preparing young people to confront current political realities, let alone to combat injustice. I show why an emphasis on 'democratic principles' or the rule of law is unlikely to change the status quo as long as educators fail to cultivate the attention to truth that is needed to judge between rival normative claims. Tolerance-based interpretations of citizenship in particular will achieve little in the absence of the civic virtues (above all moral judgement and moral courage) that dissent requires. Dissent involves, at a minimum, a willingness to speak truth to power. The most serious problem concerning citizenship education in schools, however, concerns its legitimacy, given that such education is based on a government-imposed curriculum aimed at imposing and conditioning a desired response to its message. It is therefore, by definition, hostile to dissent.
The present paper draws on climate science and the philosophy of science in order to evaluate climate-model-based approaches to assessing climate projections. We analyze the difficulties that arise in such assessment and outline criteria of adequacy for approaches to it. In addition, we offer a critical overview of the approaches used in the IPCC Working Group One Fourth Assessment Report, including the confidence-building, Bayesian and likelihood approaches. Finally, we consider approaches that do not feature in the IPCC reports, including three approaches drawn from the philosophy of science. We find that all available approaches face substantial challenges, with IPCC approaches having as a primary source of difficulty their goal of providing probabilistic assessments.
How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as a consequence, they approach the robots involved as being trustworthy. Properly speaking, users rely on the overall accountability of the institution. Besides this option we explore some novel ways for trust development: trust becomes normatively laden and thereby the mechanism of exclusive reliance on the normative force of trust may come into play - the efficacy of which has already been proven for persons meeting face-to-face or over the Internet. For one thing, machines may evolve into moral machines, or machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously. For another, the new trend in robotics is towards coactivity between human and machine operators in a team. Inside the team trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust toward outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to human operators of the team, who operate as linking pin between the outside world and the team. Since the robotic team has now been turned into an anthropomorphic team, users may well develop normative trust towards them; correspondingly, trusting the team in as-if fashion becomes feasible.
De Interpretatione is among Aristotle's most influential and widely read writings; C. W. A. Whitaker presents the first systematic study of this work, and offers a radical new view of its aims, its structure, and its place in Aristotle's system. He shows that De Interpretatione is not a disjointed essay on ill-connected subjects, as traditionally thought, but a highly organized and systematic treatise on logic, argument, and dialectic.
Aristotle's treatise De Interpretatione is one of his central works; it continues to be the focus of much attention and debate. C. W. A. Whitaker presents the first systematic study of this work, and offers a radical new view of its aims, its structure, and its place in Aristotle's system, basing this view upon a detailed chapter-by-chapter analysis. By treating the work systematically, rather than concentrating on certain selected passages, Whitaker is able to show that, contrary to traditional opinion, it forms an organized and coherent whole. He argues that the De Interpretatione is intended to provide the underpinning for dialectic, the system of argument by question and answer set out in Aristotle's Topics; and he rejects the traditional view that the De Interpretatione concerns the assertion and is oriented towards the formal logic of the Prior Analytics. In doing so, he sheds valuable new light on some of Aristotle's most famous texts.
Plutarch's essay de fortuna Romanorum has attracted divergent judgements. Ziegler dismissed it as ‘eine nicht weiter ernst zu nehmende rhetorische Stilübung’. By Flacelière it was hailed as ‘une ébauche de méditation sur le prodigieux destin de Rome’. It is time to consider the work afresh and to discover whether there is common ground between these two views. Rather than offering a general appreciation, my treatment will take the work chapter by chapter, considering points of interest as they arise. This method will enable us to compare what Plutarch says on particular subjects and themes in de fort. Rom. with what he says or does not say about them elsewhere. We shall thus be able to see clearly that for the most part the ideas he presents in the essay correspond with his thoughts about the rôle of fortune expressed in more serious writing, and that, where there is no correspondence, this is attributable to the rhetorical background. I do not intend to address directly the frequently discussed but insoluble question of whether we have in de fort. Rom. only one of two original works, that is whether there was once a de virtute Romanorum which Plutarch composed or answered. De fort. Rom. itself in fact gives almost as much prominence to ἀρετή as to τύχη, and their competing roles will be carefully evaluated. Nor do I look at the dating of the work.
The goal of this paper is to increase interest in Cicero’s “De Officiis” as both a textbook and resource for developing curricula at the secondary and post-secondary level. The paper begins by tracing the extensive influence that the work has had in ethics, political philosophy, literature, and education before proceeding to an explanation for why its influence has waned since the nineteenth century. Next, the paper contends that “De Officiis” addresses some of the most relevant and pressing questions in ethics. Finally, the paper provides suggestions on how the work can be used in the classroom.
Bringing together a group of outstanding new essays on Aristotle's De Anima, this book covers topics such as the relation between soul and body, sense-perception, imagination, memory, desire, and thought, which present the philosophical substance of Aristotle's views to the modern reader. The contributors write with philosophical subtlety and wide-ranging scholarship, locating their interpretations firmly within the context of Aristotle's thought as a whole.
The arguments of the Stoic Chrysippus recorded in Cicero's De Fato are of great importance to Deleuze's conception of events in The Logic of Sense. The purpose of this paper is to explicate these arguments, to which Deleuze's allusions are extremely terse, and to situate them in the context of Deleuze's broader project in that book. Drawing on contemporary scholarship on the Stoics, I show the extent to which Chrysippus' views on compatibilism, hypothetical inference and astrology support Deleuze's claim that the Stoics developed a theory of compatibilities and incompatibilities of events independent of corporeal states of affairs.
Although it is common for interpreters of Aristotle's De Anima to treat the soul as a specially related set of powers or capacities, I argue against this view on the grounds that the plausible options for reconciling the claim that the soul is a set of powers with Aristotle's repeated claim that the soul is an actuality cannot be successful. Moreover, I argue that there are good reasons to be wary of attributing to Aristotle the view that the soul is a set of powers because this claim conflicts with several of his metaphysical commitments, most importantly his claims about form and substance. I argue that although there are passages in the De Anima in which Aristotle discusses the soul in terms of its powers or capacities, these discussions do not establish that the soul is a set of capacities.