A framework is provided for understanding the trajectory toward increasingly sophisticated artificial moral agents, emphasizing two dimensions: autonomy and sensitivity to morally relevant facts. Systems low on both dimensions have "operational morality": their moral significance is entirely in the hands of designers and users. Systems intermediate on either dimension have "functional morality": the machines themselves can assess and respond to moral challenges. Full moral agents, high on both dimensions, may be unattainable with present technology. This framework is compared to Moor's categories, which range from implicit ethical agents, whose actions merely have ethical impact, to explicit ethical agents, which are themselves explicit ethical reasoners. Different ethical issues are raised by AI's various objectives, from the augmentation of human decision making (basic decision support systems to cyborgs) to fully autonomous systems. Finally, the feasibility of a modified Turing Test for evaluating artificial moral agents—a Moral Turing Test—is discussed.