AR motion generation (on hold)

<aside> 🤗

Various motion generation tasks (e.g., text-to-motion, music-to-motion, speech-to-motion) utilize discrete motion priors. Follow-up works try to improve these discrete priors using RVQ, PQVAE, Hierarchical VQVAE, compositional VQVAE, etc. As human motion is inherently continuous, we propose to refine the outputs of these discrete methods via rectified flow models (see the sketch after this note). This refinement model can be applied to any motion generation task to improve motion naturalness (in terms of FID scores) and is orthogonal to any novel discrete motion prior.

</aside>
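A minimal sketch of the proposed refinement stage, assuming PyTorch, motion tensors of shape (batch, frames, features), and a hypothetical velocity network. The class `VelocityNet` and the functions `rectified_flow_loss` / `refine` are illustrative assumptions, not the actual implementation:

```python
# Sketch only: rectified-flow refinement of a discrete prior's output.
# x0 = motion decoded from the discrete prior, x1 = continuous GT motion.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Hypothetical velocity field v_theta(x_t, t); any sequence model works."""
    def __init__(self, dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        # Broadcast the scalar time over frames and append it as a feature.
        t_feat = t.view(-1, 1, 1).expand(-1, x_t.size(1), 1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def rectified_flow_loss(model, x0_discrete, x1_gt):
    """Regress the straight-line velocity from discrete output to GT motion."""
    t = torch.rand(x0_discrete.size(0), device=x0_discrete.device)
    t_ = t.view(-1, 1, 1)
    x_t = (1 - t_) * x0_discrete + t_ * x1_gt     # linear interpolation path
    target_velocity = x1_gt - x0_discrete          # constant along the path
    return ((model(x_t, t) - target_velocity) ** 2).mean()

@torch.no_grad()
def refine(model, x0_discrete, steps=10):
    """Euler-integrate the learned ODE, starting from the discrete output."""
    x = x0_discrete
    for i in range(steps):
        t = torch.full((x.size(0),), i / steps, device=x.device)
        x = x + model(x, t) / steps
    return x
```

One reason this fits the project framing: because rectified flow regresses a straight-line velocity, few-step Euler integration at inference stays accurate, so the refinement is cheap enough to bolt onto any discrete prior's decoder output.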

Note: it would be very helpful to find and include one metric that exposes the limitations of the output motion from discrete motion priors (e.g., jitter or jerk), and additionally to make the claim that continuous refinement lowers it.
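As one candidate, a hedged sketch of a jerk-based smoothness metric, assuming NumPy and joint positions of shape (frames, joints, 3) at a known frame rate. The function name and normalization here are assumptions; the definition reported in the paper should follow whichever prior work is cited:

```python
# Sketch only: mean jerk as a smoothness metric for generated motion.
import numpy as np

def mean_jerk(positions: np.ndarray, fps: float = 20.0) -> float:
    """Mean L2 norm of the third finite difference (jerk) over all joints.

    Discrete-prior outputs are expected to score worse here than continuous
    GT motion; closing that gap is what the refinement stage targets.
    """
    dt = 1.0 / fps
    jerk = np.diff(positions, n=3, axis=0) / dt**3   # (frames-3, joints, 3)
    return float(np.linalg.norm(jerk, axis=-1).mean())
```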

Our contributions are as follows:

Goal: CVPR 2025 (11/15)

Regular meeting: every Wednesday, 10 AM

📄 Pages

Issues

ToDos (10/24~)

Experiment

Model design

Theory Ideas

Paper

Meeting Notes

Timeline