Transcript of How To Use Wan 2.2 Animate in ComfyUI | AI Character Replacement Tutorial
Video Transcript:
Today I'm going to show you how to use the new Wan 2.2 Animate inside ComfyUI to swap characters in your video while keeping the exact same movement, and even restyle the whole thing to create any animation you want.

To get started, you need to install ComfyUI on your computer. Go to comfy.org, click on Download, and choose an option based on your system. Open the downloaded file and follow the installation steps, which are very straightforward. Once installed, ComfyUI will launch automatically. To use Wan 2.2 Animate, click here, then click on Browse Templates. In the search box, type "wan" and then open the Wan 2.2 Animate workflow. When you do that, you will get this missing models message on the screen, so go ahead and download all the files listed there. You can close the window for now, and the models will continue downloading in the background. The workflow also requires some external custom nodes, so if you see this window, make sure you go to Manager, Install Missing Custom Nodes, and download all the nodes listed there one by one. You will need to restart ComfyUI, but before doing that, click here to check on the models' download progress. If you notice any issues with the downloads, you can still get the models manually, and I will leave links to those models in the description box along with where to save them. Once you've downloaded all the models, go ahead and click Restart. ComfyUI will take a few seconds to relaunch. After that, you can start using the workflow.

At first, the workflow may look overwhelming, but don't worry, I will guide you through the essential settings and show you how to get this working step by step. Let's start by clicking here to upload a video of whatever character or person you want to replace. I recommend uploading a video that's no larger than 1080p resolution. Both vertical and landscape formats work fine. Next to that, on the left side, you can upload the picture of the character that you want to swap with the original. I chose an image of a pirate that I generated with Midjourney. This pirate has many hanging accessories, long hair, and a flowing dress, because I wanted to test how well the model handles these elements and their physics. Here you can choose the dimensions of your output video. Keep in mind that Wan is a fairly heavy model, so higher resolutions require more time and resources. Also make sure that both width and height are divisible by 16; otherwise, you will get a bunch of errors. I want to match the original aspect ratio of my video, which is 9:16, so I'm setting the dimensions to 576x1024. This is high enough for decent quality, but low enough for my machine to handle.

Moving on to the prompt box, you can simply describe what's happening in the video in a short sentence. For example, in my final video I want a female pirate dancing, so that's all I need to mention. There is a point editor node, which is very important, and I will come back to it shortly. But first, let's look at the grow mask node. By default it's set to 10, which means the mask will only be 10 pixels larger than the selected subject. This works fine for basic characters, but if your new character is larger, you might need to increase this value to expand the mask and allow more space for the new character. I'm setting mine to 35 to leave a margin for my pirate's flowing dress and accessories.
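Since the output width and height both need to be multiples of 16, here is a minimal Python sketch (my own helper for illustration, not part of the workflow) that scales a source resolution down to a target size and snaps both sides to the nearest multiple of 16:

```python
# Illustrative helper, not part of the ComfyUI workflow:
# pick output dimensions that are multiples of 16 while roughly
# preserving the source aspect ratio.

def snap16(value: float) -> int:
    """Round a dimension to the nearest multiple of 16 (minimum 16)."""
    return max(16, round(value / 16) * 16)

def output_dims(src_w: int, src_h: int, target_long_side: int = 1024) -> tuple[int, int]:
    """Scale the source so its longer side is about target_long_side,
    then snap both dimensions to multiples of 16."""
    scale = target_long_side / max(src_w, src_h)
    return snap16(src_w * scale), snap16(src_h * scale)

# A 1080x1920 vertical (9:16) clip targeted at a 1024-pixel long side:
print(output_dims(1080, 1920))  # -> (576, 1024), the settings used in this tutorial
```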
You can see here that Wan Animate offers two different modes: mix and move. Mix mode simply replaces the character in your video with a new one. Move mode takes the movements from your video and applies them to the reference picture, which means the background from the reference image will be retained in your output video. By default, the workflow is set to use mix mode. If you want to switch to move mode, simply disconnect the background video and character mask inputs from the Wan Animate to Video node. I will keep the default connections for now because I want to use mix mode.

By default, the length is set to 77 frames, which means Wan Animate will process only the first 77 frames of your video, roughly 3 to 4 seconds of footage, regardless of its original duration. You can increase this to 81 frames to get about 5 seconds. I recommend using either 77 or 81 frames. There's also a completely different way to create even longer videos, which I will explain shortly. By default, the workflow outputs videos at 16 frames per second, but you can change this setting if needed, or simply connect the get original FPS node to the FPS value to automatically match the frame rate of your original video. Now, below this, we have another group labeled video extend example. If you'd like to create a longer video, select all the nodes in this group and press Ctrl+B to un-bypass them. Simply change the value to 81 and you will double the duration, and here you will also find instructions on how to extend your video even further.

One last thing you need to do before running the workflow is to select the KSampler node and press Ctrl+B to bypass it; I will explain why in just a minute. Then go ahead and click Run and let the workflow process your video. Once the queue has been cleared, you will see that the first frame of your video has been loaded into the point editor node, with several green and red markers added to the image. These markers help the AI find and select the subject in your video. To assist with that, make sure you place green markers on your subject and red markers outside of it. You can add additional green markers by pressing Shift and left-clicking with your mouse; add as many as you want, the more the better. To add red markers, press Shift and right-click anywhere outside of the subject, and to remove any marker, simply right-click on it. Once you're done with that, go back to the KSampler node, select it, and press Ctrl+B to re-enable it. I purposely bypassed it earlier because this node takes the longest to process of all the nodes in this workflow; this way, you don't have to wait for the video to generate before using the point editor. Now we can go ahead and run the workflow fully.

After processing, you will be able to preview your final video with the swapped character in the save video node. I did notice some artifacts, and the quality isn't that great in this example. However, I managed to get much better results using the same settings with different inputs. The better results came when the character's face was clearer and sharper in both the video and the reference image, so keep in mind that input quality makes a big difference. As you can see from how the hair and accessories move, Wan 2.2 Animate handles physics remarkably well. It also works fine with characters that aren't fully visible in the reference image. To be honest, I didn't have much success with lip-syncing and facial expressions using this model. I will dig deeper into this issue and share my findings in the pinned comment as soon as I figure it out.
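To make the frame-count numbers concrete, here is a small arithmetic sketch (again, just an illustration outside the workflow) relating frame count, frame rate, and clip duration:

```python
# Illustrative arithmetic, not part of the ComfyUI workflow:
# how frame count, fps, and duration relate in this setup.

def frames_to_seconds(num_frames: int, fps: float) -> float:
    """Duration of a clip with num_frames frames played back at fps."""
    return num_frames / fps

def frames_needed(seconds: float, fps: float) -> int:
    """Frames required to cover a given duration at a given fps."""
    return round(seconds * fps)

# At the workflow's default output rate of 16 fps:
print(frames_to_seconds(77, 16))   # ~4.8 s of output
print(frames_to_seconds(81, 16))   # ~5.1 s of output

# The first 77 frames of a typical 24 fps source clip cover:
print(frames_to_seconds(77, 24))   # ~3.2 s of the original video
```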
To locate your video files, go to your ComfyUI folder, open the output folder, and you will find all your generated videos there. Regardless of its current limitations, this type of model opens up a whole new world for creativity in AI videos. You can not only replace characters, but also restyle videos and create entire animations from scratch. If you have any questions or requests, make sure you drop them in the comments. Don't forget to like and subscribe. If you want to try other creative Comfy workflows, check out this video. Stay creative and I'll see you in the next video. Peace.
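If you prefer to find your renders from a script, here is a tiny sketch (assuming a default ComfyUI folder layout; adjust the path, and the extension if your save node writes a different format) that lists the most recently generated videos in the output folder:

```python
# Illustrative sketch: list the newest videos in ComfyUI's output folder.
from pathlib import Path

comfy_output = Path("ComfyUI/output")  # hypothetical location; point this at your install
videos = sorted(comfy_output.glob("*.mp4"), key=lambda p: p.stat().st_mtime, reverse=True)
for clip in videos[:5]:
    print(clip.name)
```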
Channel: MDMZ