Conference paper
Discussions About Lying With An Ethical Reasoning Robot
The conversational ethical reasoning robot Immanuel is presented. Immanuel is capable of defending multiple ethical views on morally delicate situations. A study was conducted to evaluate the acceptance of Immanuel. The participants had a conversation with the robot on whether lying is permissible in a given situation.
The robot first signaled uncertainty about whether lying is right or wrong in the situation, then disagreed with the participant’s view, and finally asked for justification. The results indicate that participants with a higher tendency toward utilitarian judgments are initially more certain about their view than participants with a higher tendency toward deontological judgments.
These differences vanish at the end of the dialogue. Lying is defended and argued against by both utilitarian and deontologically oriented participants. The diversity of the reported arguments gives an idea of the variety of human moral judgment. Implications for the design and application of morally competent robots are discussed.
| Language | English |
|---|---|
| Publisher | IEEE |
| Year | 2017 |
| Pages | 1445-1450 |
| Proceedings | 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) |
| ISBN | 1538635178, 1538635186, 1538635194, 9781538635179, 9781538635186, 9781538635193 |
| ISSN | 1944-9445, 1944-9437 |
| Type | Conference paper |
| DOI | 10.1109/ROMAN.2017.8172494 |
| ORCIDs | Bentzen, Martin Mose |
Keywords: Cognition, Concrete, Ethics, Immanuel, conversational ethical reasoning robot, Mouth, Psychology, Robots, Senior citizens, deontological judgments, ethical aspects, human factors, human moral judgment, intelligent robots, moral reasons, multiple ethical views, perceived morality, social robots, utilitarian judgments