
Modeling the Model Athlete


Presentation Transcript


  1. Modeling the Model Athlete
  Simon Fothergill, Fourth Year Ph.D. student, Digital Technology Group, Computer Laboratory, University of Cambridge
  Automatic Coaching of Rowing Technique
  DTG Monday Meeting, 10th November 2008
  Based on paper: Modelling the Model Athlete: Automatic Coaching of Rowing Technique; Simon Fothergill, Rob Harle, Sean Holden; S+SSPR 2008; Orlando, Florida, USA, December 2008

  2. Sports coaching (supplementary)
  • Feedback is vital
  • Rowing technique is complex, precise and easy to capture
  • Good coaches aren’t enough
  • Sensor signals need interpreting
  • Biomechanical rules are complex and require specific sensors, if they exist at all

  3. Pattern Recognition
  • Statistical: arbitrary features that summarise the data in some way, e.g. RGB values, number of X
  • Structural: consider the constituent parts and how they are related, e.g. “contains”, “above”, “more red”
  • Combination
  • Distance
  • Shape moments / smoothness
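The statistical/structural distinction can be made concrete with a toy sketch (not from the slides; the trajectory array and feature names are purely illustrative): statistical features summarise the whole signal, while structural ones encode relations between its parts.

```python
import numpy as np

# Toy illustration of the statistical vs. structural distinction.
# "trajectory" is a hypothetical (N, 2) array of marker positions.
trajectory = np.array([[0.0, 0.1], [0.2, 0.3], [0.4, 0.2], [0.6, 0.5]])

# Statistical view: arbitrary summary features of the whole signal.
statistical_features = {
    "mean_height": trajectory[:, 1].mean(),
    "path_length": np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1)),
}

# Structural view: relations between constituent parts, e.g. is the
# second half of the path "above" the first half?
first_half, second_half = np.array_split(trajectory[:, 1], 2)
structural_relations = {"second_half_above_first": bool(second_half.mean() > first_half.mean())}

print(statistical_features, structural_relations)
```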

  4. System overview
  Motion capture system with lightweight markers, preprocessing of motion data, feature extraction, classification: a stroke-quality classifier labels each stroke in a population of strokes as good or bad for an individual aspect of technique.

  5. Motion capture
  • Bat system
  • Inertial sensors
  • Optical motion capture (VICON)
  • Nintendo Wii controllers

  6. Preprocessing
  • Compensate for occlusions
  • Transform to the “erg co-ordinate system” defined by the seat
  • Segment the performance into strokes using the extremities of the handle trajectory
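A minimal sketch of the segmentation step, assuming the handle position is already in the erg co-ordinate system and that strokes are cut at the catch. The function name, the choice of local minima as cut points and the minimum-separation heuristic are assumptions, not the paper's implementation.

```python
import numpy as np

def segment_strokes(handle_x, min_separation=20):
    """Cut a performance into strokes at extremities of the handle's
    fore-aft trajectory (a sketch of the segmentation idea).

    handle_x : 1-D array of handle positions along the stroke axis,
               already expressed in the erg co-ordinate system.
    Returns a list of (start, end) sample-index pairs, one per stroke.
    """
    interior = np.arange(1, len(handle_x) - 1)
    # Local minima of handle_x, taken here to mark the catch.
    is_min = (handle_x[interior] < handle_x[interior - 1]) & \
             (handle_x[interior] <= handle_x[interior + 1])
    catches = interior[is_min]

    # Discard extrema that are implausibly close together (sensor noise).
    kept = [catches[0]] if len(catches) else []
    for c in catches[1:]:
        if c - kept[-1] >= min_separation:
            kept.append(c)

    return list(zip(kept[:-1], kept[1:]))
```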

  7. Feature extraction
  • Something of an art: various algorithms were modified until “a good one” was found for a set of strokes in which each stroke is obviously different in over-all quality.

  8. Abstract features
  • Length
  • Height
  • Distance
  • Shape moments (λ11, λ12, λ21, λ02, λ20)
  • Speed moments (μ11, μ12, μ21, μ02, μ20)
  (features of the stroke trajectory ψ(s))
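One plausible reading of the shape and speed moments is as central moments of the handle path and of its velocity; the sketch below follows that reading (the paper's exact definition and normalisation of λpq and μpq may differ).

```python
import numpy as np

ORDERS = ((1, 1), (1, 2), (2, 1), (0, 2), (2, 0))  # the (p, q) pairs on the slide

def central_moments(x, y, orders=ORDERS):
    """Central moments of a 2-D stroke path: one plausible reading of the
    lambda (shape) features; the paper's normalisation may differ."""
    xc, yc = x - x.mean(), y - y.mean()
    return {f"m{p}{q}": float(np.mean((xc ** p) * (yc ** q))) for p, q in orders}

def speed_moments(x, y, dt):
    """The same construction applied to the velocity signal, as a guess at
    what the slide's mu (speed) moments denote."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    return central_moments(vx, vy)
```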

  9. Physical performance features
  • Wobble (lateral variance)
  • Speed smoothness: mean-subtract, low-pass filter (3 Hz), d²S/dt², sum (∑)
  • Shape smoothness: low-pass filter (6 Hz), |d²S/dt²|, count of samples above a threshold (0.4 m s⁻²)
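A sketch of how these three features could be computed. The second-order Butterworth filter and the exact signals fed in are assumptions; the 3 Hz / 6 Hz cutoffs and the 0.4 m s⁻² threshold are taken from the slide.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def physical_features(lateral, speed, shape_speed, fs):
    """Sketch of the wobble, speed-smoothness and shape-smoothness features.

    lateral     : lateral marker position over one stroke (for wobble)
    speed       : speed signal for the speed-smoothness feature
    shape_speed : speed signal for the shape-smoothness count
    fs          : sampling rate in Hz
    """
    def lowpass(sig, cutoff_hz):
        b, a = butter(2, cutoff_hz / (0.5 * fs), btype="low")
        return filtfilt(b, a, sig)

    dt = 1.0 / fs

    # Wobble: variance of the lateral co-ordinate.
    wobble = float(np.var(lateral))

    # Speed smoothness: mean-subtract, 3 Hz low-pass, d2S/dt2, sum.
    accel = np.gradient(np.gradient(lowpass(speed - speed.mean(), 3.0), dt), dt)
    speed_smoothness = float(np.sum(accel))

    # Shape smoothness: 6 Hz low-pass, |d2S/dt2|, count above 0.4 m s^-2.
    accel_mag = np.abs(np.gradient(np.gradient(lowpass(shape_speed, 6.0), dt), dt))
    shape_smoothness = int(np.sum(accel_mag > 0.4))

    return wobble, speed_smoothness, shape_smoothness
```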

  10. Domain features
  • Ratio (drive time : recovery time)
  • Drive and recovery angles
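A small sketch of the drive : recovery ratio, assuming the drive is the phase in which the handle moves away from the catch (the sign convention and function name are assumptions).

```python
import numpy as np

def drive_recovery_ratio(handle_x, dt):
    """Drive time : recovery time for a single stroke (illustrative sketch).

    handle_x : handle position along the stroke axis for one stroke,
               running from catch to catch.
    """
    vx = np.gradient(handle_x, dt)
    drive_time = np.sum(vx > 0) * dt      # handle moving towards the finish
    recovery_time = np.sum(vx <= 0) * dt  # handle returning to the catch
    return drive_time / recovery_time
```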

  11. System overview (recap)
  Motion capture system with lightweight markers, preprocessing of motion data, feature extraction, classification: a stroke-quality classifier labels each stroke in a population of strokes as good or bad for an individual aspect of technique.

  12. Machine learning
  • Normalisation and negation
  • Each feature’s values are normalised to roughly between 0 and 1
  • Highly negatively correlated features are negated
  • Good strokes are scored as 1
  • Bad strokes are scored as 0
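The normalisation, negation and labelling steps are simple enough to sketch directly; the -0.5 correlation cut-off is an assumption, since the slide only says "highly negatively correlated".

```python
import numpy as np

def prepare_features(F, scores, corr_threshold=-0.5):
    """Normalisation and negation as described above.

    F      : (n_strokes, n_features) raw feature matrix
    scores : array of labels, 1 for good strokes and 0 for bad strokes
    """
    # Scale each feature to roughly [0, 1].
    F = (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0) + 1e-12)

    # Negate features that correlate strongly negatively with the score,
    # so that larger values consistently indicate better strokes.
    for j in range(F.shape[1]):
        if np.corrcoef(F[:, j], scores)[0, 1] < corr_threshold:
            F[:, j] = 1.0 - F[:, j]
    return F
```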

  13. Machine learning: classification
  • Model: a composite representation of the motion is a linear combination of the features (feature 0 … feature N, weights 0 … N) plus a bias weight
  • Method 1: Moore-Penrose pseudo-inverse. Solve F w = s, i.e. w = F⁺ s, where F⁺ is the Moore-Penrose pseudo-inverse of the feature matrix F and s is the vector of stroke scores
  • Method 2: gradient descent. Error function: sum of the squares of the differences; weights initialised to 0; 750 iterations; learning rate 0.001
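Both methods fit the same linear model over the bias-augmented feature matrix; a sketch with the settings quoted above (the function names are illustrative, not the paper's code).

```python
import numpy as np

def _with_bias(F):
    """Append a column of ones so the last weight acts as the bias."""
    return np.hstack([F, np.ones((F.shape[0], 1))])

def fit_pseudo_inverse(F, s):
    """Method 1: solve F w = s via the Moore-Penrose pseudo-inverse."""
    return np.linalg.pinv(_with_bias(F)) @ s

def fit_gradient_descent(F, s, iterations=750, lr=0.001):
    """Method 2: gradient descent on the sum of squared differences, with
    the settings quoted on the slide (weights start at 0)."""
    Fb = _with_bias(F)
    w = np.zeros(Fb.shape[1])
    for _ in range(iterations):
        residual = Fb @ w - s
        w -= lr * 2.0 * (Fb.T @ residual)  # gradient of the sum of squared errors
    return w

def classify(F, w, threshold=0.5):
    """Threshold the linear score into good (1) / bad (0). The 0.5 default
    is a placeholder; slide 14 chooses the threshold to minimise
    misclassification."""
    return (_with_bias(F) @ w > threshold).astype(int)
```

With enough iterations the gradient-descent weights approach the same least-squares solution the pseudo-inverse gives in closed form; the two methods differ mainly in cost and numerical behaviour.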

  14. Machine learning
  • Validation of models
    • Training repeated using populations formed by leaving out different sets of strokes; the unseen strokes are then classified
    • Each stroke left out exactly once
    • Multiple performers (each performer left out)
  • Sensitivity analysis
    • Threshold computed to minimise misclassification
    • Features
    • Iterations
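A sketch of the leave-one-out procedure, reusing the fitting functions from the previous sketch; the grid search for the misclassification-minimising threshold is an assumption about how that threshold was computed.

```python
import numpy as np

def leave_one_out_error(F, s, fit, thresholds=np.linspace(0.0, 1.0, 101)):
    """Each stroke is left out exactly once, the model is refit on the rest
    and the held-out stroke is scored.

    F   : (n_strokes, n_features) feature matrix
    s   : array of 1/0 quality labels
    fit : a fitting function such as fit_pseudo_inverse or
          fit_gradient_descent from the previous sketch.
    Returns (lowest misclassification rate, corresponding threshold).
    """
    n = len(s)
    raw_scores = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w = fit(F[mask], s[mask])
        raw_scores[i] = np.append(F[i], 1.0) @ w  # bias term appended

    errors = [float(np.mean((raw_scores > t).astype(int) != s)) for t in thresholds]
    best = int(np.argmin(errors))
    return errors[best], float(thresholds[best])
```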

  15. Empirical Validation
  • Population
    • Six novice, male rowers in their mid-twenties, between 60 kg and 90 kg
    • Very little or no rowing experience
    • Not initially fatigued, comfortable rate, uncontrived manner
  • Scoring
    • Single expert (coach)
    • Scores whole performances (95% representative)
    • Bad = expert considers a significant flaw in technique present
    • Good = expert considers a noticeable improvement
  • Experimental method
    • Basic explanation
    • Give a performance (~30 strokes); repeat to fatigue
    • Identify a fault and teach a correction
    • Give a performance (~30 strokes) whilst the coach helps to maintain the improved technique (for accumulating aspects)

  16. Empirical Validation
  • For an individual and a specific aspect
    • Training on just that single aspect
    • Recognition of that single aspect under realistic combinations of different qualities for different aspects

  17. Empirical Validation
  • Across individuals

  18. Discussion and Conclusions
  • Useful features: λ02, λ20, μ02 and μ20 were used in at least 90% of the final feature sets for both algorithms
  • Comparison of techniques
    • For single athletes, gradient descent is not as fast
    • For multiple athletes, gradient descent is more reliable
  • Encouragingly low misclassification: suggests the variation between different athletes is greater than an individual athlete’s intra-variation

  19. Further Work
  • Characterisation of the process
    • Population
    • Domain
    • Algorithms
  • Reversing the models to allow prediction of optimal individual aspects of technique that can be merged into an optimal technique for an individual

  20. References
  • Modelling the Model Athlete: Automatic Coaching of Rowing Technique; Simon Fothergill, Rob Harle, Sean Holden; S+SSPR 2008; Orlando, Florida, USA, December 2008
  • Ilg, Mezger & Giese. Estimation of Skill Levels in Sports Based on Hierarchical Spatio-Temporal Correspondences. DAGM 2003, LNCS 2781, pp. 523-531, 2003.
  • Murphy, Vignes, Yuh, Okamura. Automatic Motion Recognition and Skill Evaluation for Dynamic Tasks. EuroHaptics 2003, 2003.
  • Gordon. Automated Video Assessment of Human Performance. In J. Greer (ed.), Proceedings of AI-ED 95, pp. 541-546, 1995.
  • Rosen, Solazzo, Hannaford & Sinanan. Objective Laparoscopic Skills Assessments of Surgical Residents Using Hidden Markov Models Based on Haptic Information and Tool/Tissue Interactions. The Ninth Conference on Medicine Meets Virtual Reality, 2001.
  • Joint IAPR International Workshops on Structural and Syntactic Pattern Recognition and Statistical Techniques in Pattern Recognition (S+SSPR 2008), Orlando, Florida, USA, December 4-6, 2008 (http://ml.eecs.ucf.edu/ssspr/index.php)
  • 19th International Conference on Pattern Recognition, ICPR 2008 (http://www.icpr2008.org/)
  • Computer Laboratory, University of Cambridge (www.cl.cam.ac.uk)

  21. Acknowledgements
  • Professor Andy Hopper
  • Dr Sean Holden
  • Dr Rob Harle
  • Dr Joseph Newman
  • Brian Jones
  • Dr Mbou Eyole-Monono
  • The Digital Technology Group, Computer Laboratory
  • The Rainbow Group, Computer Laboratory

  22. Thank you! Questions?
