Abstract
Public consumption of artificial intelligence (AI) technologies has rarely been investigated from the perspective of data surveillance and security. We show that the technology acceptance model, when modified to incorporate security and surveillance fears about AI, offers insight into how individuals begin to use, accept, or evaluate AI and its automated decisions. We conducted two studies and found positive roles for perceived ease of use (PEOU) and perceived usefulness (PU). AI security concern, however, negatively affected PEOU and PU, resulting in lower acceptance of AI across (1) use, (2) preference, and (3) participation. AI surveillance concern also negatively affected the credibility of AI and its recommendations. We integrated extant literature on socio-demographic differences, offering insight into how AI acceptance rests on one's rational weighing of (1) technological risks (security/surveillance) and (2) benefits (PEOU/PU), as well as other socio-demographic contextual factors.