Bidirectional Autoregressive Diffusion Model for Dance Generation

Canyu Zhang1, Youbao Tang2, Ning Zhang2, Ruei-Sung Lin2, Mei Han2, Jing Xiao2, Song Wang1✉
1University of South Carolina 2PAII Inc.

[Overview figure]

Abstract


Dance serves as a powerful medium for expressing human emotions, but generating lifelike dance remains a considerable challenge. Recently, diffusion models have showcased remarkable generative abilities across various domains, and they hold promise for human motion generation thanks to their flexible many-to-many nature. Nonetheless, current diffusion-based motion generation models typically create entire motion sequences directly and unidirectionally, without attending to local motion details or enforcing bidirectional consistency. When choreographing high-quality dance movements, people need to take into account not only the musical context but also the nearby, music-aligned dance motions. To authentically capture this behavior, we propose a Bidirectional Autoregressive Diffusion Model (BADM) for music-to-dance generation, in which a bidirectional encoder enforces that the generated dance is harmonious in both the forward and backward directions. To make the generated motion smoother, a local information decoder is built for local motion enhancement. The proposed framework generates new motions conditioned on the input music and nearby motions, predicting individual motion slices iteratively and consolidating all predictions. To further improve synchronicity between the generated dance and the beat, beat information is incorporated as an additional input, yielding better music-aligned dance movements. Experimental results demonstrate that the proposed model achieves state-of-the-art performance compared to existing unidirectional approaches on the prominent benchmark for music-to-dance generation.
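To make the described pipeline concrete, below is a minimal PyTorch sketch of the ideas in the abstract: a bidirectional encoder over motion slices, a local decoder for smoothing nearby frames, and slice-by-slice autoregressive denoising conditioned on music and beat features. All module choices, feature dimensions, and the slicing scheme here are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a bidirectional, autoregressive slice denoiser.
# Dimensions (151-d motion, 35-d music features, 30-frame slices) are
# assumptions for illustration only.
import torch
import torch.nn as nn


class BidirectionalSliceDenoiser(nn.Module):
    def __init__(self, motion_dim=151, music_dim=35, hidden=256, slice_len=30):
        super().__init__()
        self.slice_len = slice_len
        self.in_proj = nn.Linear(motion_dim, hidden)
        # Conditioning: music features, a scalar beat indicator, and the
        # diffusion timestep embedding are summed into one context vector.
        self.music_proj = nn.Linear(music_dim, hidden)
        self.beat_proj = nn.Linear(1, hidden)
        self.t_embed = nn.Embedding(1000, hidden)
        # Bidirectional encoder: every frame attends to both earlier and later
        # frames, so the denoised motion stays coherent in both directions.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        # Local information decoder: a small temporal convolution refines
        # neighboring frames for smoother local motion.
        self.local_decoder = nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1)
        self.out_proj = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_slice, music, beat, t, prev_slice):
        # noisy_slice, prev_slice: (B, slice_len, motion_dim)
        # music: (B, slice_len, music_dim); beat: (B, slice_len, 1); t: (B,)
        cond = self.music_proj(music) + self.beat_proj(beat) + self.t_embed(t)[:, None, :]
        # The previously generated slice is prepended so the new slice stays
        # consistent with nearby, already music-aligned motion.
        x = self.in_proj(torch.cat([prev_slice, noisy_slice], dim=1))
        x = x + torch.cat([torch.zeros_like(cond), cond], dim=1)
        h, _ = self.encoder(x)
        h = self.local_decoder(h.transpose(1, 2)).transpose(1, 2)
        # Keep only the frames belonging to the current slice.
        return self.out_proj(h[:, -self.slice_len:, :])


if __name__ == "__main__":
    model = BidirectionalSliceDenoiser()
    B, L = 2, 30
    x0_pred = model(
        noisy_slice=torch.randn(B, L, 151),
        music=torch.randn(B, L, 35),
        beat=torch.rand(B, L, 1),
        t=torch.randint(0, 1000, (B,)),
        prev_slice=torch.randn(B, L, 151),
    )
    print(x0_pred.shape)  # torch.Size([2, 30, 151])
```

At inference time, such a denoiser would be applied slice by slice: each generated slice is fed back as `prev_slice` for the next one, and all denoised slices are concatenated into the full dance sequence.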

Demo1

Demo2

Demo3

Demo4

BibTeX

@article{zhang2024BADM,
  title={Bidirectional Autoregressive Diffusion Model for Dance Generation},
  author={Canyu Zhang and Youbao Tang and Ning Zhang and Ruei-Sung Lin and Mei Han and Jing Xiao and Song Wang},
  journal={arXiv preprint arXiv:2402.04356},
  year={2024}
}