Dancing is a human instinct; most of the time it is inspired by music, although music is not the only trigger for creating a dance. As AI has entered all aspects of human life, generating dance with AI techniques can yield impressive results. Music to Movement GAN (MM-GAN) is a generative adversarial network that generates dance movements from input music. In this method, the model first learns how to move by decomposing dance into basic movements. It then learns how to dance by organizing these basic movements into dance sequences. Finally, it stitches the dance sequences together, aligned with the music beats, to produce a long-term dance.
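The final stage, composing learned movement units into a long, beat-aligned sequence, can be illustrated with a minimal sketch. This is purely illustrative: the function name, the movement labels, and the cycling strategy are invented here and are not part of the Dancing2Music or MM-GAN code.

```python
def compose_dance(movement_units, beat_times):
    """Stitch short movement units into one long sequence,
    starting a new unit on each music beat.

    movement_units: list of units, each a list of pose frames
    beat_times: beat onsets in seconds
    """
    sequence = []
    for i, beat in enumerate(beat_times):
        # Cycle through the learned units (a real model would
        # sample a unit conditioned on the music instead).
        unit = movement_units[i % len(movement_units)]
        # Tag every pose frame with its beat so playback stays on tempo.
        sequence.extend((beat, pose) for pose in unit)
    return sequence

units = [["plie", "releve"], ["arabesque"]]
beats = [0.0, 0.5, 1.0]
dance = compose_dance(units, beats)
# Each beat triggers one unit; the result pairs beats with pose frames.
```

In the real system the units are latent movement codes produced by the generator, not named poses, but the scheduling idea is the same: beats drive when each unit begins.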
In this project, our objective was to generate a classical dance sequence based on the “Dying Swan” music piece, using the Dancing2Music dance generator. This work was conducted in the AI Robo-lab at the University of Luxembourg. In collaboration with Betania Antico (dancer and choreographer), we also applied the OpenPose pose detection model to a music video of Natalia Petrovna Osipova (principal ballerina with The Royal Ballet in London) performing the “Dying Swan”, in order to analyze and compare the AI-generated and human dance movements from a choreographic point of view.
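Once OpenPose has extracted per-frame 2-D keypoints from both the human video and the generated dance, one simple quantitative comparison is the average distance between matched joints. The sketch below assumes that idea only; the function name and the toy data are invented for illustration and are not part of our analysis pipeline.

```python
import math

def mean_joint_distance(poses_a, poses_b):
    """Average Euclidean distance between matched 2-D keypoints of two
    equally long pose sequences (OpenPose-style (x, y) joint lists)."""
    total, count = 0.0, 0
    for frame_a, frame_b in zip(poses_a, poses_b):
        for (xa, ya), (xb, yb) in zip(frame_a, frame_b):
            total += math.hypot(xa - xb, ya - yb)
            count += 1
    return total / count

# Toy example: one frame, two joints; the second joint differs by 1 unit.
human = [[(0.0, 0.0), (1.0, 0.0)]]
generated = [[(0.0, 0.0), (1.0, 1.0)]]
score = mean_joint_distance(human, generated)  # (0 + 1) / 2 = 0.5
```

A choreographic comparison of course goes beyond such a metric, but a joint-level distance gives a first objective signal of how closely the generated movement tracks the human performance.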
The student workshop
In the workshop we will talk about the project scenario and the way it evolved, the implementation choices we made, and the challenges we faced. We will also briefly review the project from a technical point of view, explaining how the generative adversarial network is trained to generate dance sequences from input music. The workshop will be held in English / German.
To register, please email email@example.com