Results for 'trust in automation'

999 found
  1. Trust in automation: Designing for appropriate reliance.J. D. Lee & K. A. See - 2004 - Human Factors 46.
    22 citations
  2. From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction.Kim Drnec, Amar R. Marathe, Jamie R. Lukos & Jason S. Metcalfe - 2016 - Frontiers in Human Neuroscience 10.
  3. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems.K. E. Schaefer, J. Y. Chen, J. L. Szalma & P. A. Hancock - 2016 - Human Factors 58.
    12 citations
  4. A Panoramic View of Trust in the Time of Digital Automated Decision Making – Failings of Trust in the Post Office and the Tax Authorities.Esther Oluffa Pedersen - forthcoming - SATS.
    The ongoing Post Office scandal in the UK and the 2021 Child Daycare Benefit Scandal in the Netherlands are exemplary cases of how digital automation has changed, and in fact severely harmed, trust relations ranging from trust in oneself to trust in social roles, institutions, and technology, as well as general trust. By looking closer at how digital automation in these cases generated ruptures in the lives of ordinary citizens and (...)
  5. Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents.Ewart J. de Visser, Paul J. Beatty, Justin R. Estepp, Spencer Kohn, Abdulaziz Abubshait, John R. Fedota & Craig G. McDonald - 2018 - Frontiers in Human Neuroscience 12.
  6. Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design.Thomas B. Sheridan - 2019 - Frontiers in Psychology 10.
    2 citations
  7. Engineering trust in complex automated systems.J. B. Lyons, K. S. Koltai, N. T. Ho, W. B. Johnson, D. E. Smith & R. J. Shively - 2016 - Ergon. Des 24.
  8. The effect of culture on trust in automation: reliability and workload.S.-Y. Chien, M. Lewis, K. Sycara, J.-S. Liu & A. Kumru - 2018 - ACM Trans. Interact. Intell. Syst. (TIIS) 8.
  9. Evaluating and Modeling Human-Machine Teaming and Trust in Automation while on the Road.Nathan Tenhundfeld, Ewart De Visser, Chad Tossell & Victor Finomore - 2018 - Frontiers in Human Neuroscience 12.
  10. Trust in engineering.Philip J. Nickel - 2021 - In Diane Michelfelder & Neelke Doorn (eds.), Routledge Handbook of Philosophy of Engineering. Taylor & Francis Ltd. pp. 494-505.
    Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between (...)
    4 citations
  11. Expertise, Automation and Trust in X-Ray Screening of Cabin Baggage.Alain Chavaillaz, Adrian Schwaninger, Stefan Michel & Juergen Sauer - 2019 - Frontiers in Psychology 10.
  12. In AI we trust? Perceptions about automated decision-making by artificial intelligence.Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
    42 citations
  13. Developing Automated Deceptions and the Impact on Trust.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2015 - Philosophy and Technology 28 (1):91-105.
    As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception (...)
    8 citations
  14. Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types.Uwe Klein, Jana Depping, Laura Wohlfahrt & Pantaleon Fassbender - forthcoming - AI and Society:1-12.
    Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) (...)
    1 citation
  15. Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms.Dominic DiFranzo, Natalya N. Bazarova, Aparajita Bhandari & Marie Ozanne - 2022 - Big Data and Society 9 (2).
    This study examines how visibility of a content moderator and ambiguity of moderated content influence perception of the moderation system in a social media environment. In the course of a two-day pre-registered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified, or attributed to other users or an automated system (AI). The results show that when comments were moderated by an AI (...)
    1 citation
  16. More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety.Linda Miller, Johannes Kraus, Franziska Babel & Martin Baumann - 2021 - Frontiers in Psychology 12.
    With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the (...)
    2 citations
  17. Looking for Age Differences in Self-Driving Vehicles: Examining the Effects of Automation Reliability, Driving Risk, and Physical Impairment on Trust.Ericka Rovira, Anne Collins McLaughlin, Richard Pak & Luke High - 2019 - Frontiers in Psychology 10.
  18. Effects of Trust, Self-Confidence, and Feedback on the Use of Decision Automation.Rebecca Wiczorek & Joachim Meyer - 2019 - Frontiers in Psychology 10.
  19. Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms.Lily Morse, Mike Horia M. Teodorescu, Yazeed Awwad & Gerald C. Kane - 2021 - Journal of Business Ethics 181 (4):1083-1095.
    Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness evaluations: distributive fairness and procedural fairness. (...)
    4 citations
  20. Multi-device trust transfer: Can trust be transferred among multiple devices?Kohei Okuoka, Kouichi Enami, Mitsuhiko Kimoto & Michita Imai - 2022 - Frontiers in Psychology 13.
    Recent advances in automation technology have increased the opportunity for collaboration between humans and multiple autonomous systems such as robots and self-driving cars. In research on autonomous system collaboration, the trust users have in autonomous systems is an important topic. Previous research suggests that the trust built by observing a task can be transferred to other tasks. However, such research did not focus on trust in multiple different devices but in one device or several of the (...)
  21. Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is (...)
    14 citations
  22. Trust, risk perception, and intention to use autonomous vehicles: an interdisciplinary bibliometric review.Mohammad Naiseh, Jediah Clark, Tugra Akarsu, Yaniv Hanoch, Mario Brito, Mike Wald, Thomas Webster & Paurav Shukla - forthcoming - AI and Society:1-21.
    Autonomous vehicles (AV) offer promising benefits to society in terms of safety, environmental impact and increased mobility. However, acute challenges persist with any novel technology, including the perceived risks and trust underlying public acceptance. While research examining the current state of AV public perceptions and future challenges related to both societal and individual barriers to trust and risk perceptions is emerging, it is highly fragmented across disciplines. To address this research gap, by using the Web of Science database, (...)
  23. Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust.Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithmic decision-making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents (...)
  24. Trust, understanding, and machine translation: the task of translation and the responsibility of the translator.Melvin Chen - forthcoming - AI and Society:1-13.
    Could translation be fully automated? We must first acknowledge the complexity, ambiguity, and diversity of natural languages. These aspects of natural languages, when combined with a particular dilemma known as the computational dilemma, appear to imply that the machine translator faces certain obstacles that a human translator has already managed to overcome. At the same time, science has not yet solved the problem of how human brains process natural languages and how human beings come to acquire natural language understanding. We (...)
    1 citation
  25. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns.Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...)
    14 citations
  26. Modeling AI Trust for 2050: perspectives from media and info-communication experts.Katalin Feher, Lilla Vicsek & Mark Deuze - forthcoming - AI and Society:1-14.
    The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys, questioning their definitions and projections about AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate (...)
  27. Toward a Holistic Communication Approach to an Automated Vehicle's Communication With Pedestrians: Combining Vehicle Kinematics With External Human-Machine Interfaces for Differently Sized Automated Vehicles.Merle Lau, Meike Jipp & Michael Oehl - 2022 - Frontiers in Psychology 13.
    Future automated vehicles of different sizes will share the same space with other road users, e.g., pedestrians. For a safe interaction, successful communication needs to be ensured, in particular, with vulnerable road users, such as pedestrians. Two possible communication means exist for AVs: vehicle kinematics for implicit communication and external human-machine interfaces for explicit communication. However, the exact interplay is not sufficiently studied yet for pedestrians' interactions with AVs. Additionally, very few other studies focused on the interplay of vehicle (...)
  28. Nonverbal Behaviors “Speak” Relational Messages of Dominance, Trust, and Composure.Judee K. Burgoon, Xinran Wang, Xunyu Chen, Steven J. Pentland & Norah E. Dunbar - 2021 - Frontiers in Psychology 12.
    Nonverbal signals color the meanings of interpersonal relationships. Humans rely on facial, head, postural, and vocal signals to express relational messages along continua. Three of relevance are dominance-submission, composure-nervousness and trust-distrust. Machine learning and new automated analysis tools are making possible a deeper understanding of the dynamics of relational communication. These are explored in the context of group interactions during a game entailing deception. The “messiness” of studying communication under naturalistic conditions creates many measurement and design obstacles that are (...)
    1 citation
  29. Emerging technologies and anticipatory images: Uncertain ways of knowing with automated and connected mobilities.Sarah Pink, Vaike Fors & Thomas Lindgren - 2018 - Philosophy of Photography 9 (2):195-216.
    In this article we outline two different ways of ‘seeing’ autonomous driving (AD) cars. The first corresponds with the technological innovation narrative, published in online industry, policy, business and other news contexts, that pitches AD cars as the solution to societal problems, and urges users to trust and accept them so that such benefits can be accrued. The second is a narrative of everyday improvisation, which was visualized through our video ethnography and participant mapping exercises. Our research, undertaken in (...)
  30. The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making.Gabi Schaap, Tibor Bosse & Paul Hendriks Vettehen - forthcoming - AI and Society:1-14.
    While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims at reconciling conflicting findings on ‘algorithmic aversion’ in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across three high-powered (Ntotal (...)
  31. ‘Can I trust my patient?’ Machine Learning support for predicting patient behaviour.Florian Funer & Sabine Salloch - 2023 - Journal of Medical Ethics 49 (8):543-544.
    Giorgia Pozzi’s feature article1 on the risks of testimonial injustice when using automated prediction drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: physicians’ challenges to predict patient behaviour, so that treatment decisions can be made based on this information, despite any fallibility. Currently, as one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are being developed and deployed. To make her point, Pozzi discusses ML-CDSSs that are (...)
  32. Adopting AI: how familiarity breeds both trust and contempt.Michael C. Horowitz, Lauren Kahn, Julia Macdonald & Jacquelyn Schneider - forthcoming - AI and Society:1-15.
    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice, it is human behavior, not technology in a vacuum, that dictates how technology seeps into—and changes—societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse (...)
    1 citation
  33. Bosses without a heart: socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace.Peter Mantello, Manh-Tung Ho, Minh-Hoang Nguyen & Quan-Hoang Vuong - 2023 - AI and Society 38 (1):97-119.
    Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring, monitoring and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location, software that scrutinizes browser activity and keyboard strokes, and heat/motion sensors that monitor workstation presence. Today, however, a new generation of biometric devices has emerged that can sense, read, monitor and evaluate the affective state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology stems (...)
    5 citations
  34. Big Data and Democracy.Kevin Macnish & Jai Galliott (eds.) - 2020 - Edinburgh University Press.
    What's wrong with targeted advertising in political campaigns? Are echo chambers a matter of genuine concern? How does data collection impact on trust in society? As decision-making becomes increasingly automated, how can decision-makers be held to account? This collection considers potential solutions to these challenges.
    2 citations
  35. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems.Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on (...)
    2 citations
  36. In Defense of Sociotechnical Pragmatism.David Watson & Jakob Mökander - 2023 - In Francesca Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group. Springer Nature Switzerland. pp. 131-164.
    The current discourse on fairness, accountability, and transparency in machine learning is driven by two competing narratives: sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better algorithms; and sociotechnical skepticism, which opposes many instances of automation on principle. Both perspectives, we argue, are reductive and unhelpful. In this chapter, we review a large, diverse body of literature in an attempt to move beyond this restrictive duality, toward a pragmatic synthesis (...)
  37. PDMP causes more than just testimonial injustice.Tina Nguyen - 2023 - Journal of Medical Ethics 49 (8):549-550.
    In the article ‘Testimonial injustice in medical machine learning’, Pozzi argues that the prescription drug monitoring programme (PDMP) leads to testimonial injustice as physicians are more inclined to trust the PDMP’s risk scores over the patient’s own account of their medication history.1 Pozzi further develops this argument by discussing how credibility shifts from patients to machine learning (ML) systems that are supposedly neutral. As a result, a sense of distrust is now formed between patients and physicians. While there are (...)
    1 citation
  38. “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts.Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
    6 citations
  39. Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
    17 citations
  40. Big tech and societal sustainability: an ethical framework.Bernard Arogyaswamy - 2020 - AI and Society 35 (4):829-840.
    Sustainability is typically viewed as consisting of three forces, economic, social, and ecological, in tension with one another. In this paper, we address the dangers posed to societal sustainability. The concern being addressed is the very survival of societies where the rights of individuals, personal and collective freedoms, an independent judiciary and media, and democracy, despite its messiness, are highly valued. We argue that, as a result of various technological innovations, a range of dysfunctional impacts are threatening social and political (...)
    3 citations
  41. When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis.Kathryn Muyskens, Yonghui Ma, Jerry Menikoff, James Hallinan & Julian Savulescu - forthcoming - Asian Bioethics Review:1-17.
    Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are indeed manifold and revolutionary, and within the realm of medical imaging and radiology (which will be the focus of this paper), significant increases in accuracy and speed, as well as significant savings in cost, stand to be gained through the adoption of this technology. Because of its novelty, a norm of keeping humans “in the loop” wherever AI mechanisms are deployed (...)
  42. From eye to machine: Shifting authority in color measurement.Sean F. Johnston - 2002 - In Barbara Saunders & Van Jaap Brakel (eds.), Theories, Technologies, Instrumentalities of Color: Anthropological and Historiographic Perspectives. Upa. pp. 289-306.
    Given a subject so imbued with contention and conflicting theoretical stances, it is remarkable that automated instruments ever came to replace the human eye as sensitive arbiters of color specification. Yet, dramatic shifts in assumptions and practice did occur in the first half of the twentieth century. How and why was confidence transferred from careful observers to mechanized devices when the property being measured – color – had become so closely identified with human physiology and psychology? A fertile perspective on (...)
    1 citation
  43. Satellites, war, climate change, and the environment: are we at risk for environmental deskilling?Samantha Jo Fried - 2023 - AI and Society 38 (6):2305-2313.
    Currently, we find ourselves in a paradigm in which we believe that accepting climate change data will lead to a kind of automatic action toward the preservation of our environment. I have argued elsewhere (Fried 2020) that this lack of civic action on climate data is significant when placed in the historical, military context of the technologies that collect this data––Earth remote sensing technologies. However, I have not yet discussed the phenomenological or moral implications of this context, which are deeply (...)
    1 citation
  44. Satellites, war, climate change, and the environment: are we at risk for environmental deskilling?Samantha Jo Fried - 2020 - AI and Society:1-9.
    Currently, we find ourselves in a paradigm in which we believe that accepting climate change data will lead to a kind of automatic action toward the preservation of our environment. I have argued elsewhere (Fried 2020) that this lack of civic action on climate data is significant when placed in the historical, military context of the technologies that collect this data––Earth remote sensing technologies. However, I have not yet discussed the phenomenological or moral implications of this context, which are deeply (...)
    1 citation
  45. An Analysis of Student Privacy Rights in the Use of Plagiarism Detection Systems.Bo Brinkman - 2013 - Science and Engineering Ethics 19 (3):1255-1266.
    Plagiarism detection services are a powerful tool to help encourage academic integrity. Adoption of these services has proven to be controversial due to ethical concerns about students’ rights. Central to these concerns is the fact that most such systems make permanent archives of student work to be re-used in plagiarism detection. This computerization and automation of plagiarism detection is changing the relationships of trust and responsibility between students, educators, educational institutions, and private corporations. Educators must respect student privacy (...)
    1 citation
  46. Trust in numbers: the pursuit of objectivity in science and public life.Theodore M. Porter - 1995 - Princeton, N.J.: Princeton University Press.
    What accounts for the prestige of quantitative methods? The usual answer is that quantification is desirable in social investigation as a result of its successes in science. Trust in Numbers questions whether such success in the study of stars, molecules, or cells should be an attractive model for research on human societies, and examines why the natural sciences are highly quantitative in the first place. Theodore Porter argues that a better understanding of the attractions of quantification in business, government, (...)
    90 citations
  47. Public Trust in Business and Its Determinants.Bidhan Parmar, Kirsten Martin & Michael Pirson - 2019 - Business and Society 58 (1):132-166.
    Public trust in business, defined as the degree to which the public—meaning society at large—trusts business in general, is largely understudied. This article suggests four domains of existing trust research from which scholars of public trust in business can draw. The authors then propose four main hypotheses, which aim to predict the determinants of public trust, and test these hypotheses using a factorial vignette methodology. These results will provide scholars with more direction as this article is, (...)
    9 citations
  48. Trust in Medical Artificial Intelligence: A Discretionary Account.Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is (...)
    7 citations
  49. Constructing the ‘automatic’ Greenwich time system: George Biddell Airy and the telegraphic distribution of time, c.1852–1880.Yuto Ishibashi - 2020 - British Journal for the History of Science 53 (1):25-46.
    In the context of the telegraphic distribution of Greenwich time, while the early experiments, the roles of successive Astronomers Royal in its expansion, and its impacts on the standardization of time in Victorian Britain have all been evaluated, the attempts of George Biddell Airy and his collaborators in constructing the Royal Observatory's time signals as the authoritative source of standard time have been underexplored within the existing historical literature. This paper focuses on the wide-ranging activities of Airy, his assistant astronomers, (...)
  50. Trust in nurse–patient relationships.Leyla Dinç & Chris Gastmans - 2013 - Nursing Ethics 20 (5):501-516.
    The aim of this study was to report the results of a literature review of empirical studies on trust within the nurse–patient relationship. A search of electronic databases yielded 34 articles published between 1980 and 2011. Twenty-two studies used a qualitative design, and 12 studies used quantitative research methods. The context of most quantitative studies was nurse caring behaviours, whereas most qualitative studies focused on trust in the nurse–patient relationship. Most of the quantitative studies used a descriptive design, (...)
    15 citations
1–50 of 999