Tuesday, September 20, 2011

Paper Reading #10: Sensing Foot Gestures from the Pocket



Reference Information
Sensing Foot Gestures from the Pocket
Jeremy Scott, David Dearman, Koji Yatani, Khai N. Truong
Presented at UIST '10, October 3-6, 2010, New York, New York, USA
Author Bios
  • Jeremy Scott was a student researcher in the Department of Computer Science at the University of Toronto at the time of this work.
  • David Dearman is currently a PhD student at the University of Toronto in the Department of Computer Science.  His research bridges HCI, Ubiquitous Computing, and Mobile Computing.  
  • Koji Yatani is a PhD candidate at the University of Toronto, advised by Professor Khai N. Truong.  He is interested in HCI and ubiquitous computing, with an emphasis on hardware and sensing technology.
  • Khai N. Truong is an assistant professor in computer science at the University of Toronto.  He holds a Bachelor of Science degree in Computer Engineering from the School of Electrical and Computer Engineering at the Georgia Institute of Technology. 
Summary
Hypothesis
Foot gestures, sensed by a mobile phone carried in the user's pocket, can serve as an effective eyes-free alternative to traditional hand-based input.

Methods
The authors conducted an initial study of the efficacy of foot-based gestures involving lifting and rotating the foot. Participants selected targets by rotating the foot from a start position along three axes of rotation: the ankle, the heel, and the toe. Ankle rotations were further subdivided into dorsiflexion (raising the toes) and plantar flexion (raising the heel). Rotations were captured with a motion-capture system that tracked a rigid model of the foot to ensure uniform measurements. No visual feedback was provided when making a selection, though users were trained beforehand. A second study logged accelerometer data from phones worn on the body for later analysis.
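To make the accelerometer logging concrete, here is a minimal Python sketch of turning one gesture's accelerometer trace into a feature vector for later classification. The per-axis summary statistics are illustrative assumptions on my part, not the authors' actual pipeline.

    import numpy as np

    def extract_features(trace):
        # trace: (n_samples, 3) array of x/y/z accelerometer readings for one gesture
        magnitude = np.linalg.norm(trace, axis=1)  # overall motion intensity
        feats = []
        for signal in (*trace.T, magnitude):       # the three axes plus the magnitude
            feats += [signal.mean(), signal.std(), signal.min(), signal.max()]
        return np.asarray(feats)

    # Example: a fake two-second trace sampled at 100 Hz.
    trace = np.random.default_rng(1).normal(size=(200, 3))
    print(extract_features(trace).shape)  # (16,)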

A third study operated similarly to the first experiment, but used the authors' recognition system instead, with three iPhones placed on the user. Again, a practice session preceded the test. Gesture recognition was evaluated in two ways: leave-one-participant-out cross-validation, which trains on data from all but one participant and tests on the held-out participant, and within-participant stratified cross-validation, which trains and tests on data from the same participant.
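Here is a minimal Python sketch of the two evaluation schemes, using scikit-learn. The balanced placeholder data, the GaussianNB classifier, and the fold counts are assumptions chosen for illustration; the paper's actual features, classifier, and fold structure differ.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold
    from sklearn.naive_bayes import GaussianNB

    # Placeholder data: 10 participants x 20 samples each, 4 gesture classes, 8 features.
    X = np.random.default_rng(0).normal(size=(200, 8))
    y = np.tile(np.arange(4), 50)                 # gesture label per sample
    participants = np.repeat(np.arange(10), 20)   # which participant produced each sample

    clf = GaussianNB()

    # Leave-one-participant-out: train on everyone else, test on the held-out participant.
    lopo = [clf.fit(X[tr], y[tr]).score(X[te], y[te])
            for tr, te in LeaveOneGroupOut().split(X, y, groups=participants)]

    # Within-participant: train and test on a single participant's own data.
    within = []
    for p in np.unique(participants):
        Xp, yp = X[participants == p], y[participants == p]
        for tr, te in StratifiedKFold(n_splits=2).split(Xp, yp):
            within.append(clf.fit(Xp[tr], yp[tr]).score(Xp[te], yp[te]))

    print(f"leave-one-participant-out accuracy: {np.mean(lopo):.2f}")
    print(f"within-participant accuracy: {np.mean(within):.2f}")

The two schemes answer different questions: leave-one-participant-out estimates how well the recognizer generalizes to a new user out of the box, while within-participant estimates the accuracy achievable after the system is trained on that user's own data.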

Results
The initial study found raising the heel (plantar flexion) to be the most accurate and preferred gesture for vertical angles. Plantar flexion also showed a consistent error rate across all angles, whereas the other gestures' error grew as the angle increased. Among the rotation gestures, heel and toe rotation were comparable in error and range, but heel rotation was strongly preferred by participants. The second study tested gesture recognition with a phone located in a front pocket, a back pocket, and a hip mount, which yielded successful recognition 50.8%, 34.9%, and 86.4% of the time, respectively. Accuracy rose further when the algorithm only had to determine which gesture type was being performed (heel rotation or plantar flexion).

Contents
This paper studies foot-based interactions, such as lifting and rotating the foot, for classifying gestures. Based on the results of those interactions, the authors built a system that recognizes foot gestures using a mobile phone placed in the user's pocket or holster.

Discussion
I think this application was a cool way to utilize existing technology. However, I did not see it as practical or as something I would use on a regular basis.
