Designing Multimodal In-Car Conversational Agents for Parents: A Simulator Study
Published in Adjunct Proceedings of the 17th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2025
Abstract
This paper investigates how in-car conversational agents can support parents driving with young children—a group especially prone to distraction and cognitive overload. In a high-stress driving simulator study, 22 participants experienced a navigation assistant delivered through four modalities: visual, verbal, non-verbal auditory, and haptic. Feedback was collected using the Driving Activity Load Index (DALI, a driving-oriented adaptation of the NASA-TLX) and interviews. Verbal navigation instructions were rated most effective, reducing both stress and distraction, while haptic feedback was seen as a useful secondary channel for urgent cues. Non-verbal auditory signals were largely ineffective in noisy cabin environments. Participants favored multimodal combinations that balanced clarity with low cognitive demand. Based on these findings, we offer design recommendations for adaptive, family-aware in-car systems, emphasizing verbal communication, context sensitivity, and user customization.
Key Contributions
- In-cabin agents
- Multimodal display interactions
- Parent-child interaction
