Bovo, R;
Giunchi, D;
Sidenmark, L;
Newn, J;
Gellersen, H;
Costanza, E;
Heinis, T;
(2023)
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis.
In:
CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
(pp. 162).
ACM: New York, NY, USA.
Abstract
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and eye-tracking-based cursor visualizations, but these methods have limitations. Verbal communication is often used as a complementary strategy to overcome these limitations. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than head direction alone. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.