Vid by @ai_maak -- A little practice run for upcoming music video projects for my own tracks. The source video quality wasn't great, so the ControlNet models within Stable Diffusion had only crap to work with - at least in the distance. Fun fact: there is not ONE good-quality version of the video on the whole wide web. Please prove me wrong.

What I learned while making this: when using ControlNet in Stable Diffusion, you can actually use two or more models at the same time. I used the canny and depth models for the clips towards the end, which gave a better result than the ones at the beginning. The whole thing is still not where I would like AI animation to be, but give it a few months, and these kinds of music video reinterpretations will be even more mindblowing.

Reinterpretation of the Jamiroquai hit "Virtual Insanity"

Lyrics:
What we're living in? Lemme tell ya
Yeah, it's a wonder man can eat at all
When thing
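For anyone wanting to try the stacking trick themselves: here is a minimal sketch of running canny and depth ControlNets together, using the Hugging Face diffusers library (not necessarily the setup used for this video - many people do the same thing in the A1111 WebUI). The prompt and the frame path are just placeholders.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load two ControlNets; diffusers lets you pass them as a list.
canny_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
depth_net = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[canny_net, depth_net],
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical inputs: precomputed canny-edge and depth maps of one
# extracted video frame (e.g. via cv2.Canny and a depth estimator).
canny_map = Image.open("frame_0001_canny.png")
depth_map = Image.open("frame_0001_depth.png")

result = pipe(
    "funk singer dancing through a shifting room, film still",
    image=[canny_map, depth_map],      # one conditioning image per ControlNet
    controlnet_conditioning_scale=[0.8, 0.6],  # per-model strength
).images[0]
result.save("frame_0001_out.png")
```

The per-model conditioning scales are what make the combination useful: edges keep the silhouettes sharp while depth keeps the room geometry stable, and you can dial each one up or down independently.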