The Development of Overtrust: An Empirical Simulation and Psychological Analysis in the Context of Human–Robot Interaction

Ullrich, Daniel and Butz, Andreas and Diefenbach, Sarah (2021) The Development of Overtrust: An Empirical Simulation and Psychological Analysis in the Context of Human–Robot Interaction. Frontiers in Robotics and AI, 8. ISSN 2296-9144

Text: frobt-08-554578.pdf - Published Version

Abstract

With impressive developments in human–robot interaction, it may seem that technology can do anything. Especially in the domain of social robots, whose anthropomorphic shape suggests they are much more than programmed machines, people may overtrust the robot's actual capabilities and reliability. This presents a serious problem, especially when personal well-being is at stake. Hence, insights into the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study (N = 110) explored the development of overtrust using the example of a pet feeding robot. A 2 × 2 experimental design with repeated measurements contrasted the effects of one's own experience, skill demonstration, and reputation through experience reports of others. The experiment was realized in a video environment in which participants had to imagine going on a four-week safari trip and leaving their beloved cat at home in the care of a pet feeding robot. Every day, the participants had to make a choice: go on a day safari with no option to call home (risk and reward) or make a boring car trip to another village to check whether the feeding had been successful and activate an emergency call if not (safe and no reward). In parallel to cases of overtrust in other domains (e.g., autopilot systems), the feeding robot performed flawlessly most of the time; in the fourth week, however, it failed on three consecutive days, resulting in the cat's death if participants had decided to go on the day safari on those days. As expected, with repeated positive experience of the robot's reliability in feeding the cat, trust levels increased rapidly and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or had only a temporary effect. We integrate these findings into a conceptual model of (over)trust over time and connect them to related psychological concepts such as the positivity bias, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human–human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.

Item Type: Article
Subjects: Article Paper Librarian > Mathematical Science
Date Deposited: 28 Jun 2023 05:35
Last Modified: 17 Oct 2023 05:42
URI: http://editor.journal7sub.com/id/eprint/1375
