Avoiding human overtrust in machines is a vital issue for establishing a socially acceptable advanced driver assistance system (ADAS). However, research has not yet clarified how to design an ADAS that prevents driver overtrust in the system. A theoretical framework is needed for understanding how human trust becomes excessive. This paper proposes a trust model by which overtrust can be clearly defined, and shows that at least three types of overtrust can be distinguished on the basis of the model. As an example, this paper discusses human overtrust in an adaptive cruise control (ACC) system. In an experiment on a medium-fidelity driving simulator, we observed two of the three types of overtrust. In the first, some drivers relied on the ACC system beyond the limit of its deceleration capability. In the second, a driver relied on the ACC system in the expectation that it could decelerate for a stopped vehicle. Data analysis suggests how these kinds of overtrust emerged. Furthermore, possible ways to prevent human overtrust in an ADAS are discussed. © 2011 Springer-Verlag London Limited.
Itoh, M. (2012). Toward overtrust-free advanced driver assistance systems. Cognition, Technology and Work, 14(1), 51–60. https://doi.org/10.1007/s10111-011-0195-2