Huckvale, M;
Howard, IS;
Fagel, S;
(2009)
KLAIR: a Virtual Infant for Spoken Language Acquisition Research.
In:
Proceedings of Interspeech 2009.
(pp. 696-699).
International Speech Communication Association (ISCA)
Abstract
Recent research into the acquisition of spoken language has stressed the importance of learning through embodied linguistic interaction with caregivers rather than through passive observation. However, the necessity of interaction makes experimental work on the simulation of infant speech acquisition difficult because of the technical complexity of building real-time embodied systems. In this paper we present KLAIR: a software toolkit for building simulations of spoken language acquisition through interactions with a virtual infant. The main part of KLAIR is a sensori-motor server that supplies a client machine learning application with a virtual infant on screen that can see, hear and speak. By encapsulating the real-time complexities of audio and video processing within a server that will run on a modern PC, we hope that KLAIR will encourage and facilitate more experimental research into spoken language acquisition through interaction.
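The abstract describes a client-server split: a sensori-motor server handles real-time audio and video for the virtual infant, while a separate machine-learning client perceives and acts through it. A minimal sketch of that perceive-act loop is shown below; the names (`KlairServer`, `get_sensation`, `send_articulation`) and the message shapes are illustrative assumptions, not the toolkit's actual API.

```python
import random

class KlairServer:
    """Stand-in for the sensori-motor server: returns stub audio/video
    sensations and accepts articulatory commands for the virtual infant.
    (The real server would stream live microphone/camera data and drive
    an on-screen talking head.)"""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.commands = []  # articulatory commands received from the client

    def get_sensation(self):
        # A real server would deliver synchronized audio and video frames.
        return {"audio": [self.rng.random() for _ in range(4)],
                "video": [self.rng.random() for _ in range(4)]}

    def send_articulation(self, params):
        # A real server would pass these to the vocal-tract synthesizer.
        self.commands.append(params)

def run_client(server, steps=5):
    """Minimal learning-client loop: perceive, then act (babble)."""
    for _ in range(steps):
        sensation = server.get_sensation()
        # A trivial placeholder "policy": map mean audio energy to one
        # hypothetical articulatory parameter.
        energy = sum(sensation["audio"]) / len(sensation["audio"])
        server.send_articulation({"jaw": energy})
    return len(server.commands)

if __name__ == "__main__":
    server = KlairServer()
    print(run_client(server, steps=5))
```

The point of the design, as the paper argues, is that the client never touches real-time audio/video plumbing: it only exchanges sensation and articulation messages with the server, so acquisition experiments can focus on the learning algorithm.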
| Type: | Proceedings paper |
|---|---|
| Title: | KLAIR: a Virtual Infant for Spoken Language Acquisition Research |
| Event: | 10th INTERSPEECH 2009 Conference |
| Location: | Brighton, England |
| Dates: | 06 September 2009 - 10 September 2009 |
| ISBN-13: | 978-1-61567-692-7 |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.21437/Interspeech.2009-240 |
| Publisher version: | https://www.isca-archive.org/interspeech_2009/huck... |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions. |
| Keywords: | Science & Technology, Technology, Computer Science, Artificial Intelligence, Engineering, Electrical & Electronic, Computer Science, Engineering, speech acquisition, machine learning, autonomous agent, situated learning, toolkit, MODEL, SPEECH |
| UCL classification: | UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > Speech, Hearing and Phonetic Sciences |
| URI: | https://discovery-pp.ucl.ac.uk/id/eprint/97717 |