Show simple item record

dc.contributor.author: ALIBAY, Farzana
hal.structure.identifier: 300713 Macquarie University
dc.contributor.author: KAVAKLI, Manolya
hal.structure.identifier: 300713 Macquarie University
dc.contributor.author: BAIG, Muhammad Zeeshan
hal.structure.identifier: 300713 Macquarie University
dc.contributor.author: CHARDONNET, Jean-Rémy
hal.structure.identifier: 495876 Laboratoire d'Electronique, d'Informatique et d'Image [EA 7508] [Le2i]
dc.date.accessioned: 2017
dc.date.available: 2017
dc.date.issued: 2017
dc.date.submitted: 2017
dc.identifier.isbn: 978-1-4503-2138-9
dc.identifier.uri: http://hdl.handle.net/10985/11682
dc.description.abstract: Multi-Modal Interface Systems (MMIS) have proliferated in the last few decades, since they provide a direct interface for both Human-Computer Interaction (HCI) and face-to-face communication. Our aim is to provide users without any prior 3D modelling experience with a multi-modal interface for creating 3D objects. The system also incorporates help throughout the drawing process and identifies simple words and gestures to accomplish a range of modelling tasks, from simple to complex. We have developed a multi-modal interface that allows users to design objects in 3D using AutoCAD commands as well as speech and gesture. We used a microphone to collect speech input and a Leap Motion sensor to collect gesture input in real time. Two sets of experiments were conducted to investigate the usability of the system and to evaluate its performance using the Leap Motion versus a keyboard and mouse. Our results indicate that performing a task using speech is perceived as exhausting when there is no shared vocabulary between man and machine, and that the usability of traditional input devices surpasses that of speech and gestures. Only a small proportion of participants (less than 7% in our experiments) were able to carry out the tasks with appropriate precision.
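The abstract describes an interaction loop that pairs spoken commands with real-time gesture input from a Leap Motion sensor. The record contains no code; the Python sketch below is only one illustration of how such a late-fusion loop could look. The functions next_speech_keyword and current_palm_position, and the two-command vocabulary, are hypothetical stand-ins for the microphone and Leap Motion APIs, not the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, Tuple

    Vec3 = Tuple[float, float, float]  # palm position in sensor coordinates

    @dataclass
    class ModelingAction:
        name: str
        run: Callable[[Vec3], None]

    def make_box(at: Vec3) -> None:
        # Stand-in for issuing an AutoCAD BOX command at a 3D point.
        print(f"BOX at {at}")

    def make_sphere(at: Vec3) -> None:
        # Stand-in for issuing an AutoCAD SPHERE command at a 3D point.
        print(f"SPHERE at {at}")

    # Shared vocabulary: spoken keyword -> modeling action (assumed set).
    COMMANDS: Dict[str, ModelingAction] = {
        "box": ModelingAction("box", make_box),
        "sphere": ModelingAction("sphere", make_sphere),
    }

    def fuse(next_speech_keyword: Callable[[], str],
             current_palm_position: Callable[[], Vec3]) -> None:
        """Wait for the next spoken keyword, then anchor the matching
        command at the palm position sampled at that moment."""
        word = next_speech_keyword().lower()
        action = COMMANDS.get(word)
        if action is None:
            # Mirrors the abstract's finding: speech fails without a
            # shared vocabulary between user and system.
            print(f"no shared vocabulary for '{word}'")
            return
        action.run(current_palm_position())

For example, fuse(lambda: "box", lambda: (0.0, 120.0, -30.0)) would place a box at the sampled palm position. Sampling the palm only when a keyword resolves is one simple fusion choice; continuous gesture tracking would be an alternative.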
dc.language.iso: en
dc.rights: Post-print
dc.subject: Gesture
dc.subject: Speech
dc.subject: Semantics
dc.subject: Emotion recognition
dc.subject: Kinect
dc.subject: 3D object
dc.subject: Leap Motion
dc.title: The Usability of Speech and/or Gestures in Multi-Modal Interface Systems
dc.typdoc: Conference paper with proceedings
dc.localisation: Institut de Chalon sur Saône
dc.subject.hal: Computer science: Human-computer interaction
dc.subject.hal: Computer science: Image synthesis and virtual reality
ensam.audience: International
ensam.conference.title: International Conference on Computer and Automation Engineering (ICCAE 2017)
ensam.conference.date: 2017-02-18
ensam.country: Australia
ensam.title.proceeding: 2017 9th International Conference on Computer and Automation Engineering
ensam.page: 1-5
ensam.city: Sydney
ensam.peerReviewing: Yes
ensam.invitedCommunication: No
ensam.proceeding: Yes
hal.identifier: hal-01502555
hal.version: 1
hal.submission.permitted: updateFiles
hal.status: accept

