UCL Discovery

Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity

Berger, CC; Gonzalez-Franco, M; Tajadura-Jiménez, A; Florencio, D; Zhang, Z; (2018) Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Front. Neurosci., 12, Article 21. 10.3389/fnins.2018.00021. Green open access

File: berger-2018.pdf - Published Version (775kB). Access restricted to UCL open access staff.

Abstract

Auditory spatial localization in humans relies on a combination of interaural time differences, interaural level differences, and spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibration but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR; the latter obviate the need for individualized calibration but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone, or to asynchronous audiovisual stimuli, did not improve auditory source localization. These findings have important implications for human perception as well as for the development of VR systems, as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
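To make the rendering step concrete, the sketch below (not the authors' implementation) shows generic-HRTF binaural rendering in Python. It assumes a head-related impulse response (HRIR) pair, the time-domain form of an HRTF, for a single source direction is already available as NumPy arrays; convolving a mono signal with that pair imparts the interaural time difference, interaural level difference, and spectral cues described above. The HRIR values used here are toys, for illustration only.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve the mono source with the left- and right-ear impulse
        # responses. The HRIR pair encodes the interaural time difference,
        # interaural level difference, and spectral (pinna) cues for one
        # source direction.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        # Stack into a (2, n) stereo buffer for headphone playback.
        return np.stack([left, right])

    # Hypothetical usage: a 1-second 440 Hz tone at 44.1 kHz.
    fs = 44100
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t)
    # Toy HRIRs: the right ear receives a delayed, attenuated copy,
    # crudely mimicking the ITD/ILD of a source on the listener's left.
    hrir_left = np.zeros(64)
    hrir_left[0] = 1.0
    hrir_right = np.zeros(64)
    hrir_right[30] = 0.5  # ~0.68 ms interaural delay at 44.1 kHz
    stereo = render_binaural(tone, hrir_left, hrir_right)

In practice, a measured generic HRIR set (for example, one recorded on a dummy head) would replace the toy values, and rendering a moving source would interpolate between measured directions.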

Type: Article
Title: Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity
Open access status: An open access version is available from UCL Discovery
DOI: 10.3389/fnins.2018.00021
Publisher version: https://doi.org/10.3389/fnins.2018.00021
Language: English
Additional information: © 2018 Berger, Gonzalez-Franco, Tajadura-Jiménez, Florencio and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Keywords: Science & Technology, Life Sciences & Biomedicine, Neurosciences, Neurosciences & Neurology, virtual reality, HRTF (head related transfer function), spatial audio, auditory perception, auditory training, cross-modal perception, cross-modal plasticity, Auditory Spatial Perception, Sound Localization, Ventriloquism, Motion, Recalibration, Environments, Frequency, Vection, Bias, Cues
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > UCL Interaction Centre
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10047272
Downloads since deposit: 77
