Implementing any top‐down ethical theory in an artificial moral agent will pose both computational and practical challenges. One central concern is framing the background information necessary for rule‐ and duty‐based conceptions of ethics and for utilitarianism. Asimov's three laws come readily to mind when considering rules for (ro)bots, but even these apparently straightforward principles are unlikely to be practical for programming moral machines. Checking whether a machine's actions conform to high‐level rules such as the Golden Rule, the deontology of Kant's categorical imperative, or the general demands of consequentialism, e.g. utilitarianism, is not likely to be computationally tractable.