“Moral Permissibility of Action Plans” (2019-07-17):
Research in classical planning has so far been mainly concerned with generating a satisficing or an optimal plan. However, if such systems are used to make decisions that are relevant to humans, one should also consider the ethical consequences that generated plans can have. We address this challenge by analyzing to what extent existing approaches from machine ethics can be generalized to automated planning systems. Traditionally, ethical principles are formulated in an action-based manner, allowing one to judge the execution of a single action. We show how such judgments can be generalized to plans. Further, we study the computational complexity of making ethical judgments about plans.
…We exemplified and explained our formalizations using classical moral dilemmas such as the trolley problem, and identified how and for which reasons different principles may arrive at different (or the same) conclusions. Furthermore, we studied the computational complexity of verifying whether a given plan is permissible with respect to each of the five investigated principles. We saw that, with respect to our formalization, verification is PSPACE-complete for utilitarianism; co-NP-complete for do-no-harm, do-no-instrumental-harm, and the doctrine of double effect; and polynomial-time solvable for deontology. It turned out that verifying the do-no-harm principles involves combinatorial reasoning over possible sets of actions that lead to harm or that may be instrumental towards achieving a goal condition, which makes verifying those ethical principles surprisingly hard.
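The polynomial-time result for deontology is intuitive: an action-based deontological judgment lifts to plans by checking each action individually, so verification is linear in the plan length. A minimal sketch, assuming a plan is a sequence of action names and a hypothetical predicate `is_forbidden` marks actions that are impermissible in themselves (the names and encoding here are illustrative, not the paper's formalization):

```python
def plan_permissible_deontology(plan, is_forbidden):
    """A plan is deontologically permissible iff no action in it is
    forbidden in itself -- a linear-time (hence polynomial) check."""
    return not any(is_forbidden(action) for action in plan)

# Trolley-style illustration: diverting the trolley with a lever is
# judged differently from pushing a person onto the tracks.
forbidden = {"push_person"}
print(plan_permissible_deontology(["pull_lever"], forbidden.__contains__))   # True
print(plan_permissible_deontology(["push_person"], forbidden.__contains__))  # False
```

By contrast, the co-NP-complete principles cannot be decided action by action, since they quantify over sets of actions (those causing harm, or those instrumental to the goal), which is what drives the combinatorial blow-up noted above.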