A goal of machine morality is not just to raise many questions but to provide a resource for the further development of artificial moral agents. Chapter 9 surveys software currently under development for moral decision making by (ro)bots. These experiments draw on a variety of strategies, including case‐based reasoning or casuistry, deontic logic, connectionism (particularism), and the prima facie duties of W. D. Ross (also related to the principles of biomedical ethics). In addition to approaches that focus on the reasoning of a single agent, researchers are working with multi‐agent environments and with multibots. This discussion serves as a comprehensive summary of research to date directed at making (ro)bots explicit moral reasoners, ranging from ethical advisors in health care to strategies for ensuring that (ro)bot soldiers will not violate international conventions.