Fast, Cheap, and Unethical? The Interplay of Morality and Methodology in Crowdsourced Survey Research

Review of Philosophy and Psychology 9 (2):363-379 (2018)


Crowdsourcing is an increasingly popular method for researchers in the social and behavioral sciences, including experimental philosophy, to recruit survey respondents. Crowdsourcing platforms, such as Amazon’s Mechanical Turk (MTurk), have been seen as a way to produce high-quality survey data both quickly and cheaply. However, in the last few years, a number of authors have claimed that the low pay rates on MTurk are morally unacceptable. In this paper, I explore some of the methodological implications for online experimental philosophy research if, in fact, typical pay practices on MTurk are morally impermissible. I argue that the most straightforward solution to this apparent moral problem—paying survey respondents more and relying only on “high reputation” respondents—will likely increase the number of subjects who have previous experience with survey materials and thus are “non-naïve” with respect to those materials. I then discuss some likely effects that this increase in experimental non-naïveté will have on some aspects of the “negative” program in experimental philosophy, focusing in particular on recent debates about philosophical expertise.




Similar books and articles

Deep, Cheap, and Improvable. Peter Danielson, Rana Ahmad, Zosia Bornik, Hadi Dowlatabadi & Edwin Levy - 2007 - Journal of Philosophical Research 32 (9999):315-326.
Survey-Driven Romanticism. Simon Cullen - 2010 - Review of Philosophy and Psychology 1 (2):275-296.

