Authors
Fatemeh Alizadeh, Oleksandra Vereschak, Dominik Pins, Gunnar Stevens, Gilles Bailly, Baptiste Caramiaux
Publication date
2022/6/27
Conference
20th European Conference on Computer-Supported Cooperative Work (ECSCW 2022)
Volume
6
Issue
2
Description
AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. An appropriate level of trust is therefore required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust, addressing three open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey an appropriately calibrated level of trust in interactions with AI? and (3) How can we develop a standardized framework to address new challenges?