The Usability of Speech and/or Gestures in Multi-Modal Interface Systems
Conference paper (published in proceedings)
Date
2017

Abstract
Multi-Modal Interface Systems (MMIS) have proliferated over the last few decades, since they provide a direct interface for both Human Computer Interaction (HCI) and face-to-face communication. Our aim is to provide users without any prior 3D modelling experience with a multi-modal interface for creating 3D objects. The system also offers help throughout the drawing process and recognizes simple words and gestures to accomplish a range of modelling tasks, from simple to complex. We have developed a multi-modal interface that allows users to design objects in 3D using AutoCAD commands as well as speech and gestures. We used a microphone to collect speech input and a Leap Motion sensor to collect gesture input in real time. Two sets of experiments were conducted to investigate the usability of the system and to compare its performance with the Leap Motion versus a keyboard and mouse. Our results indicate that performing a task using speech is perceived as exhausting when there is no shared vocabulary between the user and the machine, and that the usability of traditional input devices supersedes that of speech and gestures. Only a small proportion of participants (less than 7% in our experiments) were able to carry out the tasks with appropriate precision.
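The abstract describes mapping recognized words and gestures to AutoCAD-style modelling commands. The following is a minimal, self-contained sketch of such a fusion layer; the paper does not publish its implementation, and every identifier here (SPEECH_TO_COMMAND, GESTURE_TO_PARAM, dispatch) is hypothetical and shown only to illustrate the idea of a shared command vocabulary.

```python
# Hypothetical sketch (not the authors' code): fuse a recognized spoken word and a
# recognized gesture label into an AutoCAD-style modelling command.

from dataclasses import dataclass
from typing import Optional

# Assumed shared vocabulary: a few simple words mapped to modelling commands.
SPEECH_TO_COMMAND = {
    "box": "BOX",
    "sphere": "SPHERE",
    "extrude": "EXTRUDE",
    "rotate": "ROTATE",
}

# Assumed gesture labels (as a Leap Motion pipeline might emit) mapped to the
# parameter they would control for the pending command.
GESTURE_TO_PARAM = {
    "pinch": "size",
    "swipe_right": "angle",
    "circle": "radius",
}

@dataclass
class ModelingCommand:
    name: str
    params: dict

def dispatch(speech_token: str, gesture_label: Optional[str],
             gesture_value: float = 0.0) -> Optional[ModelingCommand]:
    """Combine one recognized word and one recognized gesture into a command.

    Returns None when the word falls outside the shared vocabulary, which is
    the failure mode the abstract reports as exhausting for users.
    """
    command = SPEECH_TO_COMMAND.get(speech_token.lower())
    if command is None:
        return None
    params = {}
    if gesture_label in GESTURE_TO_PARAM:
        params[GESTURE_TO_PARAM[gesture_label]] = gesture_value
    return ModelingCommand(name=command, params=params)

if __name__ == "__main__":
    # Example: the user says "sphere" while performing a circle gesture.
    print(dispatch("sphere", "circle", gesture_value=25.0))
    # -> ModelingCommand(name='SPHERE', params={'radius': 25.0})
```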