Modeling Multimodal Multitasking in a Smart House

Authors: Pilar Manchón, Carmen del Solar, Gabriel Amores, Guillermo Pérez

Polibits, vol. 39, pp. 65-72, 2009.

Abstract: This paper belongs to an ongoing series of papers presented at different conferences illustrating the results obtained from the analysis of the MIMUS corpus. This corpus is the result of a number of Wizard-of-Oz (WoZ) experiments conducted at the University of Seville as part of the TALK Project. The main objective of the MIMUS corpus was to gather information about different users and their performance, preferences, and usage of a multimodal multilingual natural dialogue system in the Smart Home scenario. The focus group is composed of wheelchair-bound users. In previous papers, the corpus and all relevant information related to it have been analyzed in depth. In this paper, we focus on multimodal multitasking during the experiments, that is, on modeling how users may perform more than one task in parallel. These results may help us gauge the importance of discriminating between complementary and independent simultaneous events in multimodal systems. This becomes more relevant when we take into account the likelihood of the co-occurrence of such events, and the fact that humans tend to multitask once they are sufficiently comfortable with the tools they are handling.

Keywords: Multimodal corpus; HCI; multimodal experiments; multimodal entries; multimodal multitasking
