Presentation

Effects of Self-Learning and Exploration for XR-based Interactions
Description
This research explores the learning trends over time of multimodal gaze-based interactions in tasks involving the movement of augmented objects within extended reality (XR) environments. The study employs three interaction techniques: two multimodal gaze-based approaches, compared against a unimodal hand-based interaction. The underlying hypothesis is that gaze-based interactions outperform other modalities, promising improved performance, lower learnability rates, and enhanced efficiency. These assertions motivate an investigation of the dynamics of self-learning and exploration within XR environments, addressing questions about the temporal evolution of learnability, post-learning efficiency, and users' subjective preferences among these interaction modalities. The results show that gaze-based interactions enhance performance, exhibit a lower learnability rate, and demonstrate higher efficiency. The ultimate goal is to contribute to the design and refinement of more effective, user-friendly, and adaptive XR user interfaces.
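The abstract does not specify how the learnability rate is computed; a common approach in studies of learning over time is to fit the power law of practice to per-trial task completion times and compare the fitted rate across modalities. The sketch below illustrates that approach; the `power_law` model, the sample data, and the parameter names are illustrative assumptions, not details taken from this study.

```python
# A minimal sketch of quantifying a learning rate from per-trial
# completion times, assuming the classic power law of practice:
#   T(n) = a * n^(-b)
# where n is the trial index, a is the initial completion time, and
# b is the fitted learning rate. All data here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    """Predicted completion time after n trials of practice."""
    return a * np.power(n, -b)

# Hypothetical per-trial completion times (seconds) for one participant
# performing an object-movement task with one interaction modality.
trials = np.arange(1, 11)
times = np.array([12.1, 9.8, 8.9, 8.1, 7.6, 7.3, 7.0, 6.9, 6.7, 6.6])

(a, b), _ = curve_fit(power_law, trials, times, p0=(12.0, 0.3))
print(f"initial time a = {a:.2f}s, learning rate b = {b:.3f}")
# Comparing the fitted b across the gaze-based and hand-based
# conditions gives one view of their relative learnability.
```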
Event Type
Lecture
Time
Thursday, September 12th, 9:45am - 10:15am MST
Location
FLW Salon A
Tracks
Extended Reality