
MSc Project, 2006-2007

** This project has been taken; it is no longer available. **

Learning-Based Real-Time Gesture Recognition for Controlling Virtual Characters

Supervisor

Bill Triggs (Bill.Triggs@inrialpes.fr)

Summary

Keywords: motion capture, human-computer interaction, activity analysis, pattern recognition

Computer vision techniques are becoming increasingly popular for human-computer interfaces and gaming, particularly since the advent of Sony's EyeToy and its competitors. However, current interface devices cue mainly on simple change detection in prespecified image zones, which requires carefully arranged lighting and static backgrounds. A few can also track overall head movements, or use coloured wristbands to track hand movements. It would be more satisfying to use full articulated body movements as the input signal, without special clothing and without being distracted by background movements.

To do this we will capitalize on the LEAR team's recent research on reconstructing human pose and motion from monocular image sequences, which showed that a machine learning approach based on a set of training sequences allows articulated body pose to be recovered rapidly and comparatively robustly.

The project will: produce a real-time reimplementation of the visual feature set and the learning-based body pose regressor developed in this research; develop a subject localization algorithm to provide robust initialization and tracking of the subject's overall 3D position; and learn a set of body gestures that allow a virtual character to be controlled. If time permits, a head tracker and a learning-based estimator of facial expressions may also be developed and integrated. The overall goal is to control a virtual avatar in real time. The focus will be on the computer vision aspects, not on graphics.
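To make the learning-based step concrete, here is a minimal sketch of a descriptor-to-pose regression pipeline followed by template-based gesture matching. The random training data, the descriptor and pose dimensions, the gesture templates, and the use of ridge regression in place of the sparse kernel regressor from the LEAR research are all illustrative assumptions, not the project's actual implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: each row of X is an image descriptor
# (e.g. a histogram of silhouette shape features) and each row of Y
# is the corresponding body pose (a vector of joint angles).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))   # 500 frames, 100-D descriptors
Y = rng.standard_normal((500, 54))    # 54-D pose vectors (e.g. 18 joints x 3 angles)

# Learn a regressor from descriptors to poses. Ridge regression here
# stands in for the sparse kernel regressor used in the LEAR research;
# the train-then-regress structure of the pipeline is the same.
regressor = Ridge(alpha=1.0).fit(X, Y)

def estimate_pose(descriptor):
    """Map one image descriptor to an estimated pose vector."""
    return regressor.predict(descriptor.reshape(1, -1))[0]

# A learned gesture can then be recognized by nearest-neighbour
# matching of the estimated pose against template poses, one per
# control gesture for the virtual character (illustrative only).
templates = {"wave": rng.standard_normal(54), "point": rng.standard_normal(54)}

def classify_gesture(pose):
    return min(templates, key=lambda g: np.linalg.norm(pose - templates[g]))

# At run time, each video frame is converted to a descriptor,
# regressed to a pose, and matched to a gesture command.
pose = estimate_pose(rng.standard_normal(100))
print(classify_gesture(pose))
```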

References: The method will build on research done with Ankur Agarwal during his PhD thesis in LEAR. See the associated journal paper (A. Agarwal and B. Triggs, "Recovering 3D Human Pose from Monocular Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006) and Agarwal's PhD thesis.