Gesture recognition using 3D appearance and motion features

Guangqi Ye, Jason J. Corso, Gregory D. Hager

Research output: Contribution to journal › Conference article › peer-review

25 Scopus citations

Abstract

We present a novel 3D gesture recognition scheme that combines the 3D appearance of the hand and the motion dynamics of the gesture to classify manipulative and controlling gestures. Our method does not directly track the hand. Instead, we take an object-centered approach that efficiently computes the 3D appearance using a region-based coarse stereo matching algorithm in a volume around the hand. The motion cue is captured by differentiating the appearance feature over time. An unsupervised learning scheme is carried out to capture the cluster structure of these feature volumes. The image sequence of a gesture is then converted to a series of symbols that indicate the cluster identity of each image pair. Two schemes (forward HMMs and neural networks) are used to model the dynamics of the gestures. We implemented a real-time system and performed numerous gesture recognition experiments to analyze the performance with different combinations of the appearance and motion features. The system achieves a recognition accuracy of over 96% using both the proposed appearance and motion cues.
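The abstract outlines a concrete pipeline: a coarse, region-based stereo appearance feature computed in a volume around the hand, a motion cue obtained by differencing that feature over time, unsupervised clustering to turn each image pair into a symbol, and a forward (left-to-right) HMM that scores the resulting symbol sequence. The sketch below illustrates that pipeline; it is not the authors' implementation. All function names, grid sizes, disparity ranges, cluster counts, and the use of NumPy and scikit-learn are assumptions made for illustration only.

# Illustrative sketch (not the paper's code) of the pipeline described in the
# abstract: coarse stereo appearance, motion by temporal differencing,
# k-means symbolization, and forward-algorithm scoring of a discrete HMM.
import numpy as np
from sklearn.cluster import KMeans


def coarse_appearance_feature(left, right, grid=(8, 8), max_disp=16):
    """Region-based coarse stereo: for each cell of a grid over the hand
    volume, pick the disparity with the smallest sum of absolute
    differences. Returns a flattened vector of per-cell disparities.
    Grid size and disparity range are placeholder values."""
    h, w = left.shape
    gh, gw = grid
    ch, cw = h // gh, w // gw
    feats = np.zeros(gh * gw)
    for i in range(gh):
        for j in range(gw):
            patch = left[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                x0 = j * cw - d
                if x0 < 0:
                    break
                cand = right[i * ch:(i + 1) * ch, x0:x0 + cw]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_d, best_cost = d, cost
            feats[i * gw + j] = best_d
    return feats


def sequence_features(left_frames, right_frames):
    """Per-frame appearance feature plus a motion cue obtained by
    temporally differencing the appearance features."""
    app = np.stack([coarse_appearance_feature(l, r)
                    for l, r in zip(left_frames, right_frames)])
    motion = np.diff(app, axis=0, prepend=app[:1])
    return np.hstack([app, motion])


def forward_log_likelihood(symbols, log_A, log_B, log_pi):
    """Forward algorithm for a discrete HMM in the log domain.
    log_A: state transition log-probabilities (left-to-right structure
    is encoded by -inf entries); log_B: emission log-probabilities;
    log_pi: initial state log-probabilities."""
    alpha = log_pi + log_B[:, symbols[0]]
    for s in symbols[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, s]
    return np.logaddexp.reduce(alpha)


# Example usage (all shapes, counts, and trained model parameters are
# placeholders; HMM parameters would come from training, e.g. Baum-Welch):
# codebook = KMeans(n_clusters=16, n_init=10).fit(all_training_features)
# symbols = codebook.predict(sequence_features(lefts, rights))
# best_gesture = max(models, key=lambda m: forward_log_likelihood(
#     symbols, m["log_A"], m["log_B"], m["log_pi"]))

A neural-network classifier over the same symbol sequences, as mentioned in the abstract, could be substituted for the HMM scoring step.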

Original language: English (US)
Article number: 1384958
Journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2004-January
Issue number: January
DOIs
State: Published - 2004
Event: 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2004 - Washington, United States
Duration: Jun 27, 2004 – Jul 2, 2004

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
