Transcript of Create CONTROLLABLE AI CHARACTERS for your MOVIES [FREE WORKFLOW + ComfyUI TUTORIAL]

Video Transcript:

You can now add controllable AI creatures or characters to your footage using this free AI workflow. All you need is a video clip and a single reference image of the subject you want to add. If you want, you can use tools like Blender or After Effects to precisely control your character's movements in the shot, but we're also going to show you other free alternatives developed by the amazing open-source community. To demonstrate how all of this works and what kind of results you can expect, we created a full short film using this technique, which we'll show at the end of this video. We'll also walk you through some other AI techniques that can seriously level up your VFX game, so make sure to stick around.

There is an absolute explosion of open-source AI video right now. I first got inspired for this project when I saw this test by Nathan Shipley. He was using the ATI (Any Trajectory Instruction) model, which is based on the Wan 2.1 video model and lets you precisely control your AI videos by adding these paths. What inspired me was his approach of animating these paths in After Effects and exporting them using a script he generated with Claude. I was really impressed by these results and immediately wanted to try it out myself. So I created this test shot where I rigged a character using the After Effects puppet tool and exported some of the joint movements as trajectory paths using a script I generated with Claude. This was just a test to see if the After Effects puppet tool could serve as a sort of preview for the final movement generated with Wan. And while my animation wasn't great, it definitely works as a proof of concept, and it helped to remove some of the puppet-warp weirdness.

Next, I wanted to see if I could apply the same concept to the 3D world. So I went into Blender, imported a Mixamo animation, and generated a script that lets me select a few bones from the rig and export their 2D coordinates in a format that the ATI video model can understand. I was honestly surprised how well this worked. Now, you might say, why not use the OpenPose format for this? There are already a bunch of video models that understand this format, and there is even a plug-in that lets you animate these OpenPose bones directly in Blender. So why use these trajectory paths? Because with trajectory points, you can animate anything you want: not just humans, but also 2D animations, fantasy creatures, camera movements, or any subject that the OpenPose format wouldn't work for.

For our short film, we wanted to try out exactly this: animating a fantasy creature. This is also one of the hardest things to do in VFX, animating and integrating a creature seamlessly into a moving shot. So let's see just how good AI already is at this. First we need to create our trajectory, and for this you have multiple options. I initially tried After Effects, where I imported this footage and then animated this null object here on the screen. To export it, just download my After Effects export script, select the layer, go to File > Scripts > Run Script File, and click open. And this is the format that we need. You can copy that straight into your workflow or save it out as a text file. This is just an example for one object, but you can also select multiple objects and the script will export all the data in one single file. A free alternative is Blender, where you just need to import your footage and animate bones or locators as your trajectory path.
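For readers who want to see what such an export script boils down to, here is a minimal Blender Python sketch of the idea: sample the selected pose bones on every frame, project their heads through the scene camera, and write per-bone 2D pixel coordinates to a file. The JSON output format below is only an assumption for illustration; the actual scripts and the add-on described next may write a different format, so match whatever your trajectory node expects.

```python
# Minimal sketch of the Blender-side export idea (not the actual add-on):
# sample the selected pose bones on every frame, project them through the
# scene camera, and write per-bone 2D pixel coordinates to a JSON file.
import json
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
arm = bpy.context.object  # the armature, in Pose Mode with some bones selected
bones = [pb for pb in arm.pose.bones if pb.bone.select]

res_x, res_y = scene.render.resolution_x, scene.render.resolution_y
tracks = {pb.name: [] for pb in bones}

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    for pb in bones:
        world_pos = arm.matrix_world @ pb.head            # bone head in world space
        co = world_to_camera_view(scene, cam, world_pos)  # normalized camera-view coords
        x = co.x * res_x
        y = (1.0 - co.y) * res_y                          # flip Y so the origin is top-left
        tracks[pb.name].append([round(x, 2), round(y, 2)])

with open(bpy.path.abspath("//trajectories.json"), "w") as f:
    json.dump(tracks, f, indent=2)
```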
To export these bones as 2D coordinates, we used Claude to create a Blender add-on that you can easily install by downloading the free script file, going into Blender > Preferences > Add-ons > Install from Disk, and selecting that file. This will give you this new window where you can specify whether your exported point should be generated from the head, the tail, or the center of the selected bones. This works just as well with locators. If you want to export the same set of multiple bones from a more complex rig, you can create a selection set to easily select all the bones before exporting. Then you can set a path location where all your data will be saved out to as a text file.

Another super easy to use and free alternative is the website by WhatDreamsCost, who used Gemini to create this site where you can just add and edit these splines. You can increase the tension for smoother movement, change the path easing, and also scale the splines over time, which is really handy if you want an object to get closer to the camera. Once you're done, you can just click export video, and this will create the driving video for you. So these are all your options. It really does not matter too much which one you choose; pick the one that you are most comfortable with.

Now that we have our motion path data, we run into a problem: the ATI video model pretty much only understands these paths, but we want to add a new creature from a reference image into existing footage. So instead, I use the Wan 2.1 VACE video model, which also understands trajectory paths but has a bunch more features that we can use. The first thing that I tried was giving VACE the path exported from After Effects, the OpenPose ControlNet of my body movements, the unedited start frame, and this reference image of the creature. When the video was generated, I was surprised to see that it just instantly worked, and the movement of the creature looked pretty cool. I was also really impressed by how well it integrated; look at these shadows and the interaction between the creature and me. But as the video kept going, my face began to change dramatically. That makes sense, because I only gave it one frame of me to work with.

To fix this, I wanted to try out video inpainting. If you give VACE a gray area, it will fill in that area based on your reference image, prompt, and surrounding video context. In the past, we used this technique for creating clean plates for VFX shots, but here I wanted to fill it in with my crystal creature. So I created an automatic setup that moves this gray area over the video based on the path that we created earlier. I added the reference image, and again I was surprised to see that it just instantly worked. Overall, the animation felt a bit less natural and smooth; still pretty cool, though. I liked how well it integrated and that I didn't turn into another person. Also, you can roughly control the size of the creature you want to add by changing the size of that gray area.

I also tried combining all of these techniques, so I gave VACE the inpainting area plus the ControlNet and point data, but this seemed to be too much. The creature still moved along the path, but it was just an unanimated image sliding over the video. So I recommend using either the start frame plus ControlNet technique for some natural character movement, or the inpainting technique for the best consistency.
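The moving gray area is easier to picture with a small offline sketch: using OpenCV, paint a gray rectangle onto every frame of the plate, centered on that frame's trajectory point. The actual workflow builds this mask with ComfyUI nodes; the trajectory file format, box size, and gray value below are assumptions for illustration.

```python
# Rough offline illustration of the moving inpainting area: paint a gray box
# onto every frame of the plate, centered on that frame's trajectory point.
import cv2
import numpy as np

path = np.loadtxt("trajectory.txt", delimiter=",")   # assumed: one "x,y" line per frame
cap = cv2.VideoCapture("plate.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("plate_masked.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

box_w, box_h = 320, 360   # roughly the size you want the creature to appear (assumption)
gray = 128                # mid-gray fill for the area the model should replace (assumption)

i = 0
while True:
    ok, frame = cap.read()
    if not ok or i >= len(path):
        break
    x, y = path[i]
    x0, y0 = max(int(x - box_w / 2), 0), max(int(y - box_h / 2), 0)
    x1, y1 = min(int(x + box_w / 2), w), min(int(y + box_h / 2), h)
    frame[y0:y1, x0:x1] = gray        # the region to be filled with the reference creature
    out.write(frame)
    i += 1

cap.release()
out.release()
```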
We created one unified workflow inside of ComfyUI, a free node-based interface for AI models, so you can quickly test out both approaches for your shots. To help you install ComfyUI, we created this free guide on our website. However, please note that this is an advanced workflow, so you should try some simpler ones before you jump into this one. To make this next part easier to follow along, we've also put together a free written guide on our website.

To use this workflow, just download the JSON file and drag and drop it into the ComfyUI interface. Now, you might need to install a few custom nodes. To install them, just go to ComfyUI Manager, install missing custom nodes, select all of them, and restart. For some reason, the Wan video wrapper nodes did not install correctly, so we just tried that again, clicking install and selecting the latest version, and this time it worked.

You can see the full workflow is here now, but we still need some models. We're working from left to right, so let's zoom in on this left side here. There you find all the model loaders for this workflow, and next to them you can find these notes explaining where exactly to download the models and where you need to put them in your ComfyUI folder structure. Once you have downloaded all these models and put them in the right place, press R in ComfyUI or restart, and make sure to select them all again in these loader nodes. Also note that you can do a lot of speed optimizations for this workflow, and I will put some installation tutorials for these improvements in the description. The problem is that they are not super easy to set up, and this would be too much for this tutorial.

So finally, you can use this workflow. As I said, we're working from left to right, so let's zoom in on the left here and double check that you have all the correct models. Then we move over to the right. Here you can select your video resolution, and I recommend choosing option two. Below that, you can set how many frames you want to process; for the simple workflow, 81 is the maximum, as this is the limit of the Wan video model. You can select if you want to skip any frames, so you could skip some of the earlier frames. I don't want that. Then to the right here is one of the most important nodes of the workflow, because it configures how you want to use this workflow: set this to one and it will use the inpainting technique, set this to two and it will use the ControlNet and the start frame. Let's start with two, the first frame and the ControlNet. Finally, in this group, we need to choose the video that we want to upload. So let me choose this one right here.

Now let's move further down. Here is a muted group; to activate this one, you can press Ctrl+M. It will extract the first frame of your video, and then you have this option here to create the path of your creature directly in ComfyUI. However, this is pretty limited, and I would recommend using another tool for this. We just wanted to give you that option and let you know that you can use it too. So let me just mute that again. You can also use this Fast Muter right here. Then let's take care of these other groups right here. I will start with the reference image here. I tested whether using multiple angles of the same character would improve the result, and this actually kind of worked. But to prove that this is really not necessary, I will just use this image instead. When I now click run, the workflow will automatically remove the background for this creature and replace it with a gray background.
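If you want to prepare such a gray-background reference yourself outside ComfyUI, a minimal sketch could use the rembg library to cut out the subject and composite it onto neutral gray. This is just one possible tool for the job, not necessarily what the workflow's node uses, and the file names are placeholders.

```python
# Minimal sketch: cut out the reference subject and place it on a neutral gray
# background, similar to what the workflow does automatically with its own node.
# rembg is an assumption here; file names are placeholders.
from PIL import Image
from rembg import remove

ref = Image.open("creature_reference.png").convert("RGBA")
cutout = remove(ref)                                   # RGBA image with transparent background

gray_bg = Image.new("RGBA", cutout.size, (128, 128, 128, 255))
gray_bg.alpha_composite(cutout)                        # paste the subject over the gray backdrop
gray_bg.convert("RGB").save("creature_reference_gray.png")
```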
Next, let's move down here to the point system. Set this to one and it will use the path that you create right here, or set this to two and you need to paste your coordinates right here. Here's all my data: I click copy, go to the workflow, select everything, and replace it. When I now click run, it recreates our path based on the data we gave it. In the group below, we can decide which ControlNet we want to use with this switch right here. I selected one, and this will use the OpenPose pose extracted from the video as well as the motion path. You could also set this to two, which will then only use the motion path, or select three, which will then only use the OpenPose ControlNet.

In the next group below this one, you can choose the start frame. Set this switch to one and it will just extract the first frame from your video. Set this to two and you can import your own frame right here, for example if you want to inpaint your character in another program. Or select three, and this will add a gray area where your motion path begins; this way, if your creature is already in the shot, you give Wan the freedom to put it there. But right now, I want the creature to enter the frame, so I choose one. Finally, back up at this top group here is the inpainting setup. We will revisit that in a second when we actually use the inpainting configuration of this workflow.

Next, we can move to the right here, where we need to put in a prompt for what is going to happen in our video. I'm using this one. Just describe what you want to see happening in the shot in natural language; it can also help to describe where the creature is at the beginning of the video. So this is pretty much it. Let's just click run.

After a few minutes, the video is done. I don't know what's happening with this one leg right here; that looks a bit weird. So if you have a video like this, where it's pretty close but not quite there, you can just go back to the sampler on the left and try out another seed. What you can also do is set the steps to something like two. When we now generate a video, it will produce a sort of preview where we can already make out the final movements pretty well, though of course it looks pretty bad. Then we can quickly try out different seeds, choose the one where we like the movement, bump the steps back up to the final eight, and render the final shot. And look how amazing this result looks: the shadows, the interaction with my arm. It also added a tiny bit of camera move, and it changed my face a little bit. Of course, what you could do is go to a compositing tool like Nuke or After Effects, cut out the cat, put it back on top, and blur the edges a little bit. But you can also try the inpainting configuration of the workflow, which I'm going to show you now.

For this, all you need to do is go back to the beginning here and change this node right here to one. So instead of using the ControlNet, the workflow will now use this inpainting mask created right here. You can come down here and just expand it a little bit more. What you could also do is create a static manual mask. For this, you can go to the preview bridge right here, right click, go to the mask editor, and mask out the area where you want your creature to appear. Save that, and then set this switch right here to two. But since I don't want that, and I want the actual moving mask instead, I'll leave it at one.
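If you would rather build such a static mask outside ComfyUI's mask editor, a tiny sketch like the one below would do it. The resolution, rectangle position, and the convention that white marks the area to inpaint are assumptions; match them to your clip and workflow.

```python
# One-off static mask as an alternative to painting it in the mask editor.
# Assumptions: the mask matches the clip's resolution and white marks the
# region where the creature should appear.
import numpy as np
from PIL import Image

W, H = 1280, 720                      # clip resolution (placeholder)
mask = np.zeros((H, W), dtype=np.uint8)
mask[200:560, 760:1080] = 255         # rough rectangle for the creature
Image.fromarray(mask).save("static_creature_mask.png")
```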
Now I can just activate the video group again and click run. This is also looking really good. The movement is a bit more rigid and doesn't look quite as natural, but I think I can play around with the size of the gray mask here, try out some more seeds, and then I will probably find a result that I like.

Now, this is a good time to mention that this video is sponsored by our Patreon supporters. If you want to help us create these deep dives and get access to our community Discord, consider supporting us. You can also get access to the advanced version of this workflow along with the workflow files for this shot, and let me quickly show you what you can do with it. As I already mentioned, a pretty annoying limitation with Wan 2.1 is that you can only generate 81 frames, and we had several shots in the short film that were much longer than this. So to fix this, we created this advanced version of the workflow that breaks down the Wan generation into chunks and automatically stitches them together (a rough sketch of this chunking idea follows below). In the beginning here, you can set the length of these chunks, but I will just keep them at 81. The generation and stitching process mostly happens automatically, but if you want, you can set individual prompts for each chunk. This way, your character's behavior can change over the duration of the shot.

You can also use this workflow to add things other than creatures to your shots. Here, for example, I needed to add a large crystal to my hand. So I just masked out the area using the static mask inpainting tool from the workflow and then used this image of the crystal as reference. And here is the final shot. But one of the most impressive techniques that we developed for this short film was a way to quickly change out the background and composite the character into a completely new world while preserving the exact camera movement from the original shot. We are already working on our next short film using this technique, and we'll make a dedicated video about it because it's just so amazing. But let's not get ahead of ourselves, and let me present to you our newest short film, The Crystal Cat.

[Music] [Music] I hope you enjoyed this video and can appreciate all the time and effort we put into this. If you create anything with our workflows, feel free to tag me in your work; I always love to see what you come up with. And as always, thank you so much to our lovely Patreon supporters who make these deep dives possible. See you next time.
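For reference, here is the chunking idea from the advanced workflow reduced to a few lines: split the shot into pieces of at most 81 frames and pair each piece with its own prompt. How the workflow actually carries motion and appearance context from one chunk to the next is not shown here; the frame count and prompts are placeholders.

```python
# Sketch of the chunked-generation idea: break a long shot into 81-frame pieces,
# give each piece its own prompt, and stitch the rendered chunks back together.
TOTAL_FRAMES = 250        # placeholder shot length
CHUNK_LEN = 81            # Wan 2.1's per-generation frame limit

chunks = [(start, min(start + CHUNK_LEN, TOTAL_FRAMES))
          for start in range(0, TOTAL_FRAMES, CHUNK_LEN)]

prompts = [               # hypothetical per-chunk prompts
    "the crystal creature walks into frame and looks around",
    "the crystal creature jumps onto the table",
    "the crystal creature curls up and falls asleep",
    "the crystal creature sleeps peacefully",
][: len(chunks)]

for (start, end), prompt in zip(chunks, prompts):
    print(f"generate frames {start}-{end - 1}: {prompt}")
```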

Channel: Mickmumpitz
