Results for 'chatbots, chatGPT, ethics of AI, AI, emojis, manipulation, deception'

990 found
  1. Chatbots shouldn’t use emojis. Carissa Véliz - 2023 - Nature 615:375.
    Limits need to be set on AI’s ability to simulate human feelings. Ensuring that chatbots don’t use emotive language, including emojis, would be a good start. Emojis are particularly manipulative. Humans instinctively respond to shapes that look like faces — even cartoonish or schematic ones — and emojis can induce these reactions.
    (3 citations)
  2. AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors. Delaram Rezaeikhonakdar - 2023 - Journal of Law, Medicine and Ethics 51 (4):988-995.
    Developers and vendors of large language models (“LLMs”) — such as ChatGPT, Google Bard, and Microsoft’s Bing at the forefront — can be subject to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) when they process protected health information (“PHI”) on behalf of HIPAA covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.
  3. Ethics of generative AI and manipulation: a design-oriented research agenda. Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
    (1 citation)
  4. Why ChatGPT Means Communication Ethics Problems for Bioethics. Andrew J. Barnhart, Jo Ellen M. Barnhart & Kris Dierickx - 2023 - American Journal of Bioethics 23 (10):80-82.
    In his article, “What should ChatGPT mean for bioethics?” I. Glenn Cohen explores the bioethical implications of OpenAI’s chatbot ChatGPT and the use of similar Large Language Models (LLMs) (Cohen...
    (1 citation)
  5. Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME. The development of AI technologies that can produce text that will pass plagiarism detectors and are capable of appearing to be written by a human author presents new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced (...)
    (7 citations)
  6. Plagiarism and Wrong Content as Potential Challenges of Using Chatbots Like ChatGPT in Medical Research. Sam Sedaghat - forthcoming - Journal of Academic Ethics:1-4.
    Chatbots such as ChatGPT have the potential to change researchers’ lives in many ways. Despite all the advantages of chatbots, many challenges to using chatbots in medical research remain. Wrong and incorrect content presented by chatbots is a major possible disadvantage. The authors’ credibility could be tarnished if wrong content is presented in medical research. Additionally, ChatGPT, as the currently most popular generative AI, does not routinely present references for its answers. Double-checking references and resources used by chatbots might be (...)
  7. Escape climate apathy by harnessing the power of generative AI. Quan-Hoang Vuong & Manh-Tung Ho - 2024 - AI and Society 39:1-2.
    “Throw away anything that sounds too complicated. Only keep what is simple to grasp...If the information appears fuzzy and causes the brain to implode after two sentences, toss it away and stop listening. Doing so will make the news as orderly and simple to understand as the truth.” - In “GHG emissions,” The Kingfisher Story Collection, (Vuong 2022a).
    (4 citations)
  8. Africa, ChatGPT, and Generative AI Systems: Ethical Benefits, Concerns, and the Need for Governance. Kutoma Wakunuma & Damian Eke - 2024 - Philosophies 9 (3):80.
    This paper examines the impact and implications of ChatGPT and other generative AI technologies within the African context while looking at the ethical benefits and concerns that are particularly pertinent to the continent. Through a robust analysis of ChatGPT and other generative AI systems using established approaches for analysing the ethics of emerging technologies, this paper provides unique ethical benefits and concerns for these systems in the African context. This analysis combined approaches such as anticipatory technology ethics (ATE), (...)
  9. Ethical Problems of the Use of Deepfakes in the Arts and Culture. Rafael Cejudo - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 129-148.
    Deepfakes are highly realistic, albeit fake, audiovisual contents created with AI. This technology allows the use of deceptive audiovisual material that can impersonate someone’s identity to erode their reputation or manipulate the audience. Deepfakes are also one of the applications of AI that can be used in cultural industries and even to produce works of art. On the one hand, it is important to clarify whether deepfakes in arts and culture are free from the ethical dangers mentioned above. On the (...)
  10. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
    (2 citations)
  11. The dialectic of desire: AI chatbots and the desire not to know. Jack Black - 2023 - Psychoanalysis, Culture and Society 28 (4):607–618.
    Exploring the relationship between humans and AI chatbots, as well as the ethical concerns surrounding their use, this paper argues that our relations with chatbots are not solely based on their function as a source of knowledge, but, rather, on the desire for the subject not to know. It is argued that, outside of the very fears and anxieties that underscore our adoption of AI, the desire not to know reveals the potential to embrace the very loss AI avers. Consequently, (...)
  12. Friend or foe? Exploring the implications of large language models on the science system. Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science (...)
    (1 citation)
  13. Liability to Deception and Manipulation: The Ethics of Undercover Policing. Christopher Nathan - 2016 - Journal of Applied Philosophy 34 (3):370-388.
    Does undercover police work inevitably wrong its targets? Or are undercover activities justified by a general security benefit? In this article I argue that people can make themselves liable to deception and manipulation. The debate on undercover policing will proceed more fruitfully if the tactic can be conceptualised along those lines, rather than as essentially ‘dirty hands’ activity, in which people are wronged in pursuit of a necessary good, or in instrumentalist terms, according to which the harms of undercover (...)
    (3 citations)
  14. ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - forthcoming - AI and Society:1-11.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two-phased approach of deconstruction (...)
    (7 citations)
  15. Ethical and legal challenges of AI in marketing: an exploration of solutions. Dinesh Kumar & Nidhi Suthar - forthcoming - Journal of Information, Communication and Ethics in Society.
    Purpose: Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions. Design/methodology/approach: The paper synthesises information from academic (...)
  16. The Hazards of Putting Ethics on Autopilot. Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  17. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    (32 citations)
  18. ChatGPT: Temptations of Progress. Rushabh H. Doshi, Simar S. Bajaj & Harlan M. Krumholz - 2023 - American Journal of Bioethics 23 (4):6-8.
    ChatGPT is an artificial intelligence (AI) chatbot that processes and generates natural language text, offering human-like responses to a wide range of questions and prompts. Five days after its re...
    (1 citation)
  19. Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. John Symons & Syed AbuMusab - 2024 - Digital Society 3:1-28.
    Ethically significant consequences of artificially intelligent artifacts will stem from their effects on existing social relations. Artifacts will serve in a variety of socially important roles—as personal companions, in the service of elderly and infirm people, in commercial, educational, and other socially sensitive contexts. The inevitable disruptions that these technologies will cause to social norms, institutions, and communities warrant careful consideration. As we begin to assess these effects, reflection on degrees and kinds of social agency will be required to make (...)
  20. Plagiarism, Academic Ethics, and the Utilization of Generative AI in Academic Writing. Julian Koplin - 2023 - International Journal of Applied Philosophy 37 (2):17-40.
    In the wake of ChatGPT’s release, academics and journal editors have begun making important decisions about whether and how to integrate generative artificial intelligence (AI) into academic publishing. Some argue that AI outputs in scholarly works constitute plagiarism, and so should be disallowed by academic journals. Others suggest that it is acceptable to integrate AI output into academic papers, provided that its contributions are transparently disclosed. By drawing on Taylor’s work on academic norms, this paper argues against both views. Unlike (...)
    (1 citation)
  21. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Rowena Rodrigues & Anaïs Rességuier - 2020 - Big Data and Society 7 (2).
    Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. (...)
    (29 citations)
  22. Are All Deceptions Manipulative or All Manipulations Deceptive? Shlomo Cohen - 2023 - Journal of Ethics and Social Philosophy 25 (2).
    Moral reflection and deliberation on both deception and manipulation is hindered by lack of agreement on the precise meanings of these concepts. Specifically, there is disagreement on how to understand their relation vis-à-vis each other. Curiously, according to one prominent view, all deceptions are instances of manipulations, while according to another, all manipulations are instances of deceptions. This paper makes that implicit disagreement explicit, and argues that both views are untenable. It concludes that deception and manipulation partially overlap, (...)
    (1 citation)
  23. All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    (1 citation)
  24. Generative AI, Specific Moral Values: A Closer Look at ChatGPT’s New Ethical Implications for Medical AI. Gavin Victor, Jean-Christophe Bélisle-Pipon & Vardit Ravitsky - 2023 - American Journal of Bioethics 23 (10):65-68.
    Cohen’s (2023) mapping exercise of possible bioethical issues emerging from the use of ChatGPT in medicine provides an informative, useful, and thought-provoking trigger for discussions of AI ethic...
    (2 citations)
  25. The Whiteness of AI. Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    (27 citations)
  26. Death of a reviewer or death of peer review integrity? The challenges of using AI tools in peer reviewing and the need to go beyond publishing policies. Vasiliki Mollaki - 2024 - Research Ethics 20 (2):239-250.
    Peer review facilitates quality control and integrity of scientific research. Although publishing policies have adapted to include the use of Artificial Intelligence (AI) tools, such as Chat Generative Pre-trained Transformer (ChatGPT), in the preparation of manuscripts by authors, there is a lack of guidelines or policies on whether peer reviewers can use such tools. The present article highlights the lack of policies on the use of AI tools in the peer review process (PRP) and argues that we need to go (...)
    (1 citation)
  27. The Ethics of Military Influence Operations. Michael Skerker - 2023 - Conatus 8 (2):589-612.
    This article articulates a framework for normatively assessing influence operations undertaken by national security institutions. Section I categorizes the vast field of possible types of influence operations according to the communication’s content, its attribution, the rights of the target audience, the communication’s purpose, and its secondary effects. Section II populates these categories with historical examples and section III evaluates these cases with a moral framework. I argue that deceptive or manipulative communications directed at non-liable audiences are presumptively immoral and illegitimate (...)
    (1 citation)
  28. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  29. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  30. How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Eva Weber-Guskar - 2021 - Ethics and Information Technology 23 (4):601-610.
    Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI-products are increasingly endowed with emotional characteristics. That is, they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions. The introduction of such systems in our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions (...)
    (5 citations)
  31. Facing Immersive “Post-Truth” in AIVR? Nadisha-Marie Aliman & Leon Kester - 2020 - Philosophies 5 (4):45.
    In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation were brought to the fore, leading to the construction of descriptive labels such as “post-truth” to refer to the supposedly new emerging era. Thereby, the (mis-)use of technologies such as AI and VR has been argued to potentially fuel this new loss of “ground-truth”, for instance, via the ethically relevant deepfakes phenomena and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, unethical and (...)
  32. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
    (1 citation)
  33. Hybrid Ethics for Generative AI: Some Philosophical Inquiries on GANs. Antonio Carnevale, Claudia Falchi Delgado & Piercosma Bisconti - 2023 - Humana Mente 16 (44).
    Until now, the mass spread of fake news and its negative consequences has mainly involved textual content, contributing to a loss of citizens' trust in institutions. Recently, a new type of machine learning framework has arisen, Generative Adversarial Networks (GANs) – a class of deep neural network models capable of creating multimedia content (photos, videos, audio) that simulates real content with extreme precision. While there are several areas of worthwhile application of GANs – e.g., in the field of audio-visual production, human-computer (...)
  34. AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - forthcoming - Educational Philosophy and Theory.
    Michael A. Peters, Beijing Normal University. ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (generativ...
    (2 citations)
  35. What is a subliminal technique? An ethical perspective on AI-driven influence. Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE Ethics-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target (...)
  36. Ethical exploration of chatGPT in the modern K-14 economics classroom. Brad Scott & Sandy van der Poel - 2024 - International Journal of Ethics Education 9 (1):65-77.
    This paper addresses the challenge of ethically integrating ChatGPT, a sophisticated AI language model, into K-14 economics education. Amidst the growing presence of AI in classrooms, it proposes the “Evaluate, Reflect, Assurance” model, a novel decision-making framework grounded in normative and virtue ethics, to guide educators. This approach is detailed through a theoretical decision tree, offering educators a heuristic tool to weigh the educational advantages and ethical dimensions of using ChatGPT. An educator can use the decision tree to reach (...)
  37. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Mohammad Hosseini, David B. Resnik & Kristi Holmes - 2023 - Research Ethics 19 (4):449-465.
    In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage (...)
    (7 citations)
  38. The Ethics of ‘Deathbots’. Nora Freya Lindemann - 2022 - Science and Engineering Ethics 28 (6):1-15.
    Recent developments in AI programming allow for new applications: individualized chatbots which mimic the speaking and writing behaviour of one specific living or dead person. ‘Deathbots’, chatbots of the dead, have already been implemented and are currently under development by the first start-up companies. Thus, it is an urgent issue to consider the ethical implications of deathbots. While previous ethical theories of deathbots have always been based on considerations of the dignity of the deceased, I propose to shift the focus (...)
    (4 citations)
  39. The Ethics of Automating Therapy. Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - IEET White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  40. Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - forthcoming - Episteme:1-17.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
    (6 citations)
  41. Feminist Re-Engineering of Religion-Based AI Chatbots. Hazel T. Biana - 2024 - Philosophies 9 (1):20.
    Religion-based AI chatbots serve religious practitioners by bringing them godly wisdom through technology. These bots reply to spiritual and worldly questions by drawing insights or citing verses from the Quran, the Bible, the Bhagavad Gita, the Torah, or other holy books. They answer religious and theological queries by claiming to offer historical contexts and providing guidance and counseling to their users. A criticism of these bots is that they may give inaccurate answers and proliferate bias by propagating homogenized versions of (...)
  42. The Impact of Ethics Instruction and Internship on Students’ Ethical Perceptions About Social Media, Artificial Intelligence, and ChatGPT. I-Huei Cheng & Seow Ting Lee - 2024 - Journal of Media Ethics 39 (2):114-129.
    Communication programs seek to cultivate students who become professionals not only with expertise in their chosen field, but also ethical awareness. The current study investigates how exposure to ethics instruction and internship experiences may influence communication students’ ethical perceptions, including ideological orientations on idealism and relativism, as well as awareness of contemporary ethical issues related to social media and artificial intelligence (AI). The effects were also assessed on students’ support for general uses of AI for communication practices and adoption (...)
  43. The Ethics of Marketing to Vulnerable Populations.David Palmer & Trevor Hedberg - 2013 - Journal of Business Ethics 116 (2):403-413.
    An orthodox view in marketing ethics is that it is morally impermissible to market goods to specially vulnerable populations in ways that take advantage of their vulnerabilities. In his signature article “Marketing and the Vulnerable,” Brenkert (Bus Ethics Q Ruffin Ser 1:7–20, 1998) provided the first substantive defense of this position, one which has become a well-established view in marketing ethics. In what follows, we throw new light on marketing to the vulnerable by critically evaluating key components (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   7 citations  
  44.  19
    The Ethics of Democratic Deceit.Derek Edyvane - 2014 - Journal of Applied Philosophy 32 (3):310-325.
    Deception presents a distinctive ethical problem for democratic politicians. This is because there seem in certain situations to be compelling democratic reasons for politicians both to deceive and not to deceive the public. Some philosophers have sought to negotiate this tension by appeal to moral principle, but such efforts may misrepresent the felt ambivalence surrounding dilemmas of public office. A different approach appeals to the moral character of politicians, and to the variety of forms of manipulative communication at their (...)
    Direct download  
     
    Export citation  
     
    Bookmark   3 citations  
  45. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
    Direct download  
     
    Export citation  
     
    Bookmark   7 citations  
  46.  59
    Generative AI and human–robot interaction: implications and future agenda for business, society and ethics.Bojan Obrenovic, Xiao Gu, Guoyu Wang, Danijela Godinic & Ilimdorjon Jakhongirov - forthcoming - AI and Society:1-14.
    The revolution of artificial intelligence (AI), particularly generative AI, and its implications for human–robot interaction (HRI) opened up the debate on crucial regulatory, business, societal, and ethical considerations. This paper explores essential issues from the anthropomorphic perspective, examining the complex interplay between humans and AI models in societal and corporate contexts. We provided a comprehensive review of existing literature on HRI, with a special emphasis on the impact of generative models such as ChatGPT. The scientometric study posits that due to (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
  47.  41
    Deceiving versus manipulating: An evidence‐based definition of deception.Don Fallis - 2024 - Analytic Philosophy 65 (2):223-240.
    What distinguishes deception from manipulation? Cohen (Australasian Journal of Philosophy, 96, 483, 2018) proposes a new answer and explores its ethical implications. Appealing to new cases of “non‐deceptive manipulation” that involve intentionally causing a false belief, he offers a new definition of deception in terms of communication that rules out these counterexamples to the traditional definition. And, he leverages this definition in support of the claim that deception “carries heavier moral weight” than manipulation. In this paper, (...)
    No categories
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   2 citations  
  48. ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil is in the Details.Lukas J. Meier - 2023 - American Journal of Bioethics 23 (10):63-65.
    In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
  49. Norms of Truthfulness and Non-Deception in Kantian Ethics.Donald Wilson - 2015 - In Pablo Muchnik & Oliver Thorndike (eds.), Rethinking Kant Volume 4. Cambridge Scholars Press. pp. 111-134.
    Questions about the morality of lying tend to be decided in a distinctive way early in discussions of Kant’s view on the basis of readings of the false promising example in his Groundwork of the Metaphysics of Morals. The standard deception-as-interference model that emerges typically yields a very general and strong presumption against deception associated with a narrow and rigorous model subject to a range of problems. In this paper, I suggest an alternative account based on Kant’s discussion (...)
    Direct download  
     
    Export citation  
     
    Bookmark   2 citations  
  50. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark  
1 — 50 / 990