When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles themselves. Respondents also would not approve of regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.
Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passengers to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge.
We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs.
The study participants disapproved of enforcing utilitarian regulations for AVs and would be less willing to buy such an AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.
See the Expert Opinion by Jack Stewart: "People Want Self-Driving Cars That Save Lives. Especially Theirs."