Abstract
Artificial Intelligence (AI) is increasingly used in disaster risk management (DRM) to predict the effects of upcoming disasters, plan mitigation strategies, and determine who needs how much aid after a disaster strikes. The media is filled with reports of unintended ethical consequences of AI algorithms, such as image recognition systems failing to recognize persons of color or racially biased algorithmic predictions of whether offenders will recidivate. Such unintended ethical consequences must arise in DRM as well, yet there is surprisingly little research on exactly what those consequences are and what we can do to mitigate them. The aim of this perspective is to call on researchers working on fairness, accountability, and transparency to collaborate with DRM and local experts, so that disaster mitigation and relief are accountable, consider local values, and are not unintentionally biased.