Abstract
The development and application of artificial intelligence (AI) technology has raised widespread public concern about privacy violations. In response, privacy-preserving computation technologies (PPCTs) have been developed, with the expectation that these new privacy protection technologies can solve current privacy problems. By not directly using the raw data provided by users, PPCTs claim to protect privacy better than their predecessors. However, they still have technical limitations, and considerable research has treated PPCTs merely as privacy-protecting tools, focusing on possible technical improvements. In this article, we argue that, owing to these technical limitations, PPCTs cannot effectively protect privacy in the narrow sense. Moreover, even if these shortcomings were remedied, PPCTs would still fall into a paradox: their aim is to protect privacy, yet they can reveal users' private information. This paradox of privacy protection not only aggravates the social impact of AI privacy issues but may also render current privacy protection meaningless, resulting in a situation in which privacy protection is useless.