Building Moral Robots
New technologies introduce new moral questions, but the answers to these new questions generally can be found in traditional approaches to moral philosophy. Questions about morality and artificial intelligence fall into this category. Popular culture has long noted that if intelligent machines capable of independent action are developed, it will be imperative that they recognize moral limits. Most discussion of moral principles for such independent, thinking machines has focused on the possible content of moral principles that might somehow be “built in” to the machines. But I argue that the general status of such principles is as important as their content. For any independent being capable of free action, moral principles cannot simply be presented as insurmountable constraints imposed by some external force (by a programmer, for instance). Such externally imposed principles would not count as genuine moral principles. Any being, artificial or natural, who encountered such external constraints would simply regard them as practical obstacles, not as moral demands. Part of the nature of moral demands is that they must appeal to reasons that every agent can recognize and adopt as her own. So if machines are ever to act on genuinely moral principles, these principles must appeal to and be endorsed by the machines’ own power of reason. The idea here is borrowed from Immanuel Kant, so a label may as well be borrowed also: for machines ever to act on genuinely moral principles, they must possess moral autonomy.
Richard Dean (Lebanon)
American University of Beirut
I received my Ph.D. in philosophy from the University of North Carolina at Chapel Hill. I then taught philosophy at Rutgers University for three years before taking my current position as an assistant professor at the American University of Beirut.
Person as Subject
(30 min. Conference Paper, English)