Visual modeling of dynamic gestures using 3D appearance and motion features

Guangqi Ye, Jason J. Corso, Gregory D. Hager

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

We present a novel 3D gesture recognition scheme that combines the 3D appearance of the hand and the motion dynamics of the gesture to classify manipulative and controlling gestures. Our method does not directly track the hand. Instead, we take an object-centered approach that efficiently computes 3D appearance using a region-based coarse stereo matching algorithm. Motion cues are captured by differentiating the appearance feature with respect to time. An unsupervised learning scheme captures the cluster structure of these features, and the image sequence of a gesture is then converted into a series of symbols indicating the cluster identity of each image pair. Two schemes, forward HMMs and neural networks, are used to model the dynamics of the gestures. We implemented a real-time system and performed gesture recognition experiments to analyze the performance of different combinations of the appearance and motion features. The system achieves a recognition accuracy of over 96% when both the appearance and motion cues are used.
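The pipeline described in the abstract (coarse stereo appearance features, temporal differencing for motion cues, unsupervised clustering into a symbol alphabet, and sequence scoring with forward HMMs) can be sketched as follows. This is a minimal illustration under assumed parameters, not the authors' implementation: the block grid, disparity search range, SAD block matching, plain k-means, and the discrete-HMM forward scorer are all illustrative stand-ins for the paper's region-based coarse stereo matching, clustering, and HMM components.

```python
# Minimal sketch of the gesture pipeline: coarse stereo appearance,
# temporal-difference motion cues, vector quantization to symbols,
# and HMM log-likelihood scoring. Parameters are illustrative only.
import numpy as np

def appearance_feature(left, right, grid=(4, 4), max_disp=8):
    """Coarse 3D appearance: one SAD-matched disparity per grid cell.

    `left`/`right` are grayscale images (H, W) from a stereo pair;
    grid size and disparity range are assumed, not from the paper.
    """
    h, w = left.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = left[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(float)
            best, best_cost = 0, np.inf
            for d in range(max_disp):
                x0 = j * bw - d
                if x0 < 0:
                    break
                cand = right[i*bh:(i+1)*bh, x0:x0+bw].astype(float)
                cost = np.abs(block - cand).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            feats.append(best)
    return np.array(feats, dtype=float)

def gesture_features(lefts, rights):
    """Stack appearance vectors and append temporal differences as motion cues."""
    app = np.stack([appearance_feature(l, r) for l, r in zip(lefts, rights)])
    mot = np.diff(app, axis=0, prepend=app[:1])   # finite difference in time
    return np.hstack([app, mot])                  # shape (T, 2 * n_blocks)

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; a stand-in for the unsupervised clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def quantize(X, centers):
    """Map each feature vector to the id of its nearest cluster center."""
    return np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)

def log_forward(symbols, log_pi, log_A, log_B):
    """Log-likelihood of a discrete symbol sequence under an HMM (forward algorithm)."""
    alpha = log_pi + log_B[:, symbols[0]]
    for s in symbols[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, s]
    return np.logaddexp.reduce(alpha)
```

In this sketch, one left-to-right ("forward") HMM would be trained per gesture class on that class's symbol sequences (e.g., with Baum-Welch), and a new sequence classified by whichever class model assigns it the highest forward log-likelihood.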

Original language: English (US)
Title of host publication: Real-Time Vision for Human-Computer Interaction
Publisher: Springer US
Pages: 103-120
Number of pages: 18
ISBN (Print): 0387276971, 9780387276977
DOIs
State: Published - 2005

ASJC Scopus subject areas

  • General Computer Science
