Practical quantum computing devices, and their applications to AI in particular, are presently largely speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators have emphasized continuity: Sevilla and Moreno (2019) argue that quantum computing does not substantially affect approaches to value alignment for AI, although they allow that further questions arise concerning the governance and verification of quantum AI applications. In this brief paper, we turn our attention to the problem of identifying as-yet-unknown discontinuities that quantum AI applications might introduce.

Wise development, introduction, and use of any new technology depends on successfully anticipating that technology's new modes of failure. This requires rigorous efforts to break systems in protected sandboxes, conducted at every stage of design, development, and deployment. Such testing must be informed by technical expertise, but it cannot be left solely to experts in the technology, given the history of failures to predict how non-experts will use or adapt to new technologies. This interplay between experts and non-experts may be particularly acute for quantum AI because quantum mechanics is notoriously difficult to understand. (As Richard Feynman put it, "I think I can safely say that nobody understands quantum mechanics.") We will discuss the extent to which the difficulty of understanding the physics underlying quantum computing challenges attempts to anticipate new failure modes that might be introduced in AI applications intended for unsupervised operation in the public sphere.