Abstract
With new technologies come new ethical challenges. Often, we can apply previously established principles, even though it may take some time to fully understand the details of the new technology - or the questions that arise from it. The International Commission on Radiological Protection, for example, was founded in 1928 and has based its advice on balancing the radiation exposure associated with X-rays and CT scans against the diagnostic benefits of the new investigations. It has regularly updated its advice as evidence has accumulated and technologies have changed,1 and has been able to extrapolate from well-established ethical principles. Other new technologies lend themselves less well to off-the-peg ethical solutions. Several articles in this edition address the ethical challenges associated with the use of artificial intelligence (AI) in medicine. Although multiple ethical codes and guidelines have been written on the use and development of AI, Hagendorff noted that many of them reiterated a ‘deontologically oriented, action-restricting ethic based on universal abidance of principles and rules’.2 Applying pre-existing ethical frameworks to artificial intelligence is problematic for several reasons. In particular, AI has two characteristics which are very different from the current clinical practice on which traditional medical ethics are based: 1. The so-called ‘black box’ of deep learning, whereby a deep neural network is trained to iteratively adapt to make …