Voices of VR

#1266: Converting Dance into Multi-Channel Generative AI Performance at 30FPS with “Kinetic Diffusion”

08.27.2023 - By Kent Bye

Brandon Powers is a creative director and choreographer creating experiences across physical and virtual space at the intersection of performance and technology. He was showing a dance performance titled Kinetic Diffusion at ONX Studios during Tribeca Immersive. Created in collaboration with Aaron Santiago, it featured three screens filled with time-delayed generative AI footage rendered in near real-time at 30 frames per second, which required eleven NVIDIA RTX 4090 GPUs in the cloud to achieve.
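
For a sense of why that GPU count lines up with that frame rate, here is a back-of-the-envelope sketch in Python (my own arithmetic and an assumed fan-out architecture, not details confirmed in the interview): if camera frames are dispatched round-robin across a pool of GPU workers, aggregate throughput is the number of workers divided by the per-frame latency.

    # Assumed fan-out architecture (not confirmed in the interview): frames
    # are dispatched round-robin across the GPU farm, so aggregate throughput
    # is num_gpus / seconds_per_frame, even though each individual frame
    # still takes a fraction of a second to diffuse.
    NUM_GPUS = 11
    TARGET_FPS = 30

    # Latency budget each GPU has per frame before the output falls behind.
    max_seconds_per_frame = NUM_GPUS / TARGET_FPS  # ~0.37 s

    def sustained_fps(num_gpus: int, seconds_per_frame: float) -> float:
        """Frame rate a pool of workers can sustain at a given latency."""
        return num_gpus / seconds_per_frame

    print(sustained_fps(NUM_GPUS, 0.37))  # ~29.7 FPS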

Powers was recording his dance with a mirrorless camera and applying a depth-map AI model to capture his embodied movements, which were then fed in real time into Stable Diffusion alongside a set of precisely timed prompts. The AI-generated images ended up having a 2-8 second delay, which gave the effect of Powers dancing in a duet with himself, modulated through a series of style-transfer prompts. Overall, it was a hypnotically impressive display of generative AI at the intersection of XR and dance. I had a chance to catch up with Powers after his performance to get more context for how it came about, and the long evolution from his previous explorations at the intersection of AI and dance with Frankenstein AI, which premiered at Sundance 2019 (see our previous conversation about it in episode #728). You can see a brief explainer video of Kinetic Diffusion from Powers' TikTok channel embedded within this post.
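
As a rough illustration of that kind of pipeline, here is a minimal sketch under my own assumptions (not Powers and Santiago's actual code): a monocular depth estimator feeding a depth-conditioned Stable Diffusion model via Hugging Face's diffusers library, with the checkpoint names chosen purely for illustration.

    import torch
    from transformers import pipeline as hf_pipeline
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Monocular depth estimation stands in for the "depth map AI model"
    # described above; this specific checkpoint is an illustrative choice.
    depth_estimator = hf_pipeline("depth-estimation", model="Intel/dpt-large")

    # Depth-conditioned Stable Diffusion: the ControlNet steers generation
    # to follow the dancer's silhouette while the prompt sets the style.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    def stylize_frame(camera_frame, prompt):
        """Turn one camera frame into a style-transferred image that
        tracks the dancer's pose via its estimated depth map."""
        depth_map = depth_estimator(camera_frame)["depth"]  # PIL image
        return pipe(
            prompt,
            image=depth_map.convert("RGB"),
            num_inference_steps=12,  # fewer steps to keep latency down
        ).images[0]

Under a setup like this, the 2-8 second delay that Powers dances against would presumably fall out of per-frame diffusion time plus capture and network buffering.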

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality
