Combining Sampling and Autoregression for Motion Synthesis David Oziem, Neill Campbell, Colin Dalton, David Gibson and Barry Thomas
Project Group • Motion Ripper Project • Methods of motion capture. • Re-using captured motion signatures. • Synthesising new or extended motion sequences. • Tools to aid animation. • Collaboration between University of Bristol CS, Matrix Media & Granada. http://www.cs.bris.ac.uk/Research/Vision/MotionRipper/
Aim • A common presumption in synthesis is that the original data is stationary, i.e. • its statistical properties hold true throughout the signal. • Given a sample drawn at a random point in time, it should not be possible to tell exactly where in the sequence it came from.
Example Original candle flame • The candle flame includes multiple different ‘states’ of motion. • We would like to be able to ignore these states. • Reissell’00 used prior knowledge (i.e. labelled frames).
Aim • Synthesise non-linear, non-Gaussian, non-stationary motion signals in N-D. Primary mode from Principal components analysis (PCA)
Video Textures • Video textures or temporal textures are textures with motion (Szummer’96). • Schodl’00 reordered frames from the original to produce loops or continuous sequences. • Doesn’t produce new footage. • Campbell’01, Fitzgibbon’01, Reissell’01 used an autoregressive process (ARP) to synthesise frames. Examples of Video Textures
Autoregressive Process • Statistical model: y(t) = −a1·y(t−1) − a2·y(t−2) − … − an·y(t−n) + w·ε, where y(t) is the current value at time t, (a1,…,an) is the parameter vector, ε is noise and w its scale. • Calculating the model involves working out the parameter vector (a1,…,an) and w.
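The ARP on this slide can be sketched in a few lines of NumPy. This is a minimal illustration under the slide's sign convention, not the authors' implementation; `fit_arp` and `synthesise` are hypothetical names, and the least-squares fit is just one standard way to estimate the parameters:

```python
import numpy as np

def fit_arp(y, n):
    """Fit y(t) = -a1*y(t-1) - ... - an*y(t-n) + w*eps by least squares.
    Returns the parameter vector a and noise scale w."""
    # Regression matrix: column k holds the signal delayed by k+1 steps.
    X = np.column_stack([y[n - k - 1 : len(y) - k - 1] for k in range(n)])
    target = y[n:]
    # Negated design matrix matches the slide's sign convention.
    a, *_ = np.linalg.lstsq(-X, target, rcond=None)
    w = (target + X @ a).std()  # residual standard deviation
    return a, w

def synthesise(a, w, seed, length, rng=np.random.default_rng(0)):
    """Generate new samples by driving the fitted model with Gaussian noise."""
    n = len(a)
    out = list(seed[-n:])
    for _ in range(length):
        out.append(-sum(a[k] * out[-k - 1] for k in range(n))
                   + w * rng.standard_normal())
    return np.array(out[n:])
```

Fitting the model to a signal and feeding the tail of that signal back in as the seed gives a continuation that shares the original's second-order statistics.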
Copying and ARP • Combine the benefits of copying with ARP • New motion signatures. • Requires no prior knowledge. • Handles non-Gaussian distributions and non-stationary sequences.
Important to reduce the complexity of the search process. Copying and ARP Original input PCA Reduced input
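The PCA reduction step above can be sketched with an SVD, assuming each frame is flattened into one row of a T×D matrix; the function names are illustrative, not taken from the paper:

```python
import numpy as np

def pca_reduce(frames, n_modes):
    """Project T x D frames onto their first n_modes principal components.
    Returns (coeffs T x n_modes, mean, basis)."""
    mean = frames.mean(axis=0)
    centred = frames - mean
    # Rows of Vt are the principal directions, ordered by variance.
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:n_modes]
    coeffs = centred @ basis.T
    return coeffs, mean, basis

def pca_restore(coeffs, mean, basis):
    """Map low-dimensional trajectories back to frame space."""
    return coeffs @ basis + mean
```

The synthesis then runs entirely on the low-dimensional `coeffs` trajectories, and `pca_restore` turns generated trajectories back into frames.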
Temporal segments of between 15 and 30 frames. Each segment must itself be reduced before ARPs can be trained on it. Copying and ARP Original input PCA Segmented input Reduced segments PCA ARP Reduced input
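The segmentation step might look like the following sketch, which cuts the reduced signal into consecutive windows of 15 to 30 frames; the random cut lengths are an assumption, as the slides do not say how segment boundaries are chosen:

```python
import numpy as np

def segment(signal, min_len=15, max_len=30, rng=np.random.default_rng(0)):
    """Cut a signal (1-D, or T x d after PCA) into consecutive temporal
    segments of 15-30 frames. The final segment may be shorter."""
    cuts, t = [], 0
    while t < len(signal):
        step = int(rng.integers(min_len, max_len + 1))
        cuts.append(signal[t : t + step])
        t += step
    return cuts
```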
Many of the learned models are unstable. 10-20% are used. Copying and ARP Original input PCA Segmented input Reduced segments PCA ARP Reduced input Synthesised segments
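A standard way to test whether a learned ARP is stable is to check that all roots of its characteristic polynomial lie strictly inside the unit circle; the slides do not say which test the authors used, so this criterion is an assumption:

```python
import numpy as np

def is_stable(a):
    """Stability test for an ARP with the sign convention
    y(t) = -a1*y(t-1) - ... - an*y(t-n) + w*eps: all roots of
    z^n + a1*z^(n-1) + ... + an must lie inside the unit circle,
    otherwise synthesised segments blow up."""
    roots = np.roots(np.concatenate(([1.0], np.asarray(a, dtype=float))))
    return bool(np.all(np.abs(roots) < 1.0))
```

Models failing this test would be discarded, which is consistent with the slide's observation that only 10-20% of the learned models are used.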
Copying and ARP Original input PCA Segmented input Reduced segments PCA ARP Reduced input Segment selection Synthesised segments Outputted Sequence
Example • First mode plotted against time t: possible segments are compared against the end of the generated sequence over a short compared section.
Example • The segment to be copied is randomly selected from the closest 3.
Example • Segments are blended together using a small overlap and averaging the overlapping pixels.
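The selection-and-blending slides above can be sketched as follows. The function names, the comparison length and the overlap width are illustrative assumptions; only "pick randomly among the closest 3" and "average the overlapping frames" come from the slides:

```python
import numpy as np

def pick_segment(history, candidates, compare_len=4, k=3,
                 rng=np.random.default_rng(0)):
    """Score each candidate segment by the distance between its start and
    the end of the generated sequence, then pick at random among the k
    closest (k = 3 on the slide)."""
    tail = history[-compare_len:]
    dists = [np.linalg.norm(c[:compare_len] - tail) for c in candidates]
    closest = np.argsort(dists)[:k]
    return candidates[rng.choice(closest)]

def blend(history, segment, overlap=4):
    """Join the copied segment onto the sequence by averaging the
    overlapping frames, as described on the slide."""
    mixed = 0.5 * (history[-overlap:] + segment[:overlap])
    return np.concatenate([history[:-overlap], mixed, segment[overlap:]])
```

Repeating pick-then-blend extends the synthesised sequence segment by segment.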
Results Candle flame
Results Barman
Results • Copying & ARP model: original vs. generated.
Conclusions • Motion synthesis of complex sequences is possible with this approach. • New, unseen footage/motion can be synthesised. • The model occasionally gets stuck in loops. • Future work: use a hierarchical model to guide the sampling process.
Thanks • Questions.