Abstract
Workers can be fired from jobs, citizens sent to jail, and adolescents made more likely to experience depression, all because of algorithms. Algorithms have considerable impacts on our lives. To increase user satisfaction and trust, the most common proposal from academics and developers is to increase the transparency of algorithmic design. While there is a large body of literature on algorithmic transparency, the impact of unethical data collection practices is less well understood. Currently, there is limited research on the factors that affect users’ trust in data collection practices and algorithmic transparency. In this research, we explore the relative impact of both factors on important outcome measures such as users’ trust, comfort level, and moral acceptability. We conducted two pilot studies to learn what real users consider to be ethical and unethical data collection practices, as well as high and low algorithmic transparency. We then used these findings in a 2 × 2 design to examine how transparency and the acceptability of data collection practices affect users’ acceptance, comfort, and trust in algorithms. Our results suggest that the singular emphasis on algorithmic transparency may be misplaced. Given the difference in their impact on acceptance, trust, and user satisfaction, a more effective strategy would be to also understand and abide by users’ views of ethical data collection practices.