Transcript of Wan 2.2 Animate: FREE AI Character Swap & Lip-sync | ComfyUI Tutorial

Video Transcript:

You can now use AI to seamlessly replace any character in your video. Not only that, but you can also keep the original speech and sync the lip movement, use your own video to drive the animation of another character, or even restyle the entire video into any look you want. In this tutorial, I'm going to show you how to do all of that for free using Wan 2.2 Animate inside ComfyUI.

I've already made a video about Wan 2.2 Animate where I also highlighted some quality issues and limitations that many of you noticed as well. Today, we will use a different workflow to optimize the output. I ran several tests and found that Wan 2.2 Animate is very good with physics. It's also good at maintaining character consistency, even when subjects spin around. I've successfully replaced characters who were holding or interacting with objects in the scene, which shows how powerful the subject detection in this workflow really is.

To get started, you need to install ComfyUI on your computer. You can download it from comfy.org. Simply choose an option based on your system, open the downloaded file, and follow the installation steps. The process is very easy and straightforward. It's important to note that ComfyUI works best on Windows with an Nvidia GPU, and running Wan 2.2 Animate requires at least 24 GB of GPU VRAM. The more VRAM you have, the faster your generations will process. However, if you don't have the required hardware, don't worry. You can simply run ComfyUI online and get access to higher-end GPUs with various VRAM options. I will leave links to my favorite websites where you can run ComfyUI in the description box.

Once ComfyUI is installed, it will launch automatically. Next, you need to download the necessary Wan 2.2 Animate models. I've included a detailed guide below with all the links and the exact folders where you should save these models. Also in the description box, you can download the Wan 2.2 Animate workflow. Simply drag and drop the file into ComfyUI to load it. Once you do that, you will get a window telling you that the workflow requires nodes that are not installed on your computer. To fix this, go to Manager and click on Install Missing Custom Nodes. Install all the nodes in this list one by one. After that, you will need to restart ComfyUI, and then you can start using the Wan 2.2 Animate workflow.

It might look a bit overwhelming at first, but don't worry, I'll guide you through the process and settings step by step. By the way, I want to give a shout-out to Kijai, one of the most respected individuals in the ComfyUI community. He created this workflow, but I've modified it slightly to make it easier to get started, and ended up with this version right here.

Okay, let's get started. Click here to upload a video of the character or person that you want to replace. I recommend using a video no larger than 1080p resolution. Both vertical and landscape formats work well. Here you can choose how many frames of your video you want to run through the workflow. Not being able to generate long videos is usually a common limitation of open-source video models, but with this workflow I managed to run 300 and even 400 frames with no issues, though it will probably depend on the video. If you have more VRAM than me or use ComfyUI in the cloud, you can probably process even longer videos. I'm going to input 125 frames, which is around 5 seconds long.

Here you can choose the dimensions of your output video. Keep in mind that higher resolution requires more time and resources.
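If it helps to see the arithmetic behind the frame-count and resolution settings, here is a minimal Python sketch. The rounding to multiples of 16 is my own assumption, based on common video-model constraints, not something stated in the video.

```python
# Hypothetical helpers illustrating the frame-count and resolution math.
# The multiple-of-16 snapping is an assumption (a common video-model
# constraint), not a rule quoted from the tutorial.

def frames_for_duration(seconds: float, fps: float) -> int:
    """How many frames to request for a clip of a given length."""
    return round(seconds * fps)

def fit_dimensions(src_w: int, src_h: int, target_long_side: int = 1024) -> tuple[int, int]:
    """Scale the source resolution down to a target long side, keeping the
    aspect ratio and snapping each side to a multiple of 16."""
    scale = target_long_side / max(src_w, src_h)
    snap = lambda v: max(16, round(v * scale / 16) * 16)
    return snap(src_w), snap(src_h)

print(frames_for_duration(5, 25))   # 125 frames ~= 5 seconds of a 25 fps source
print(fit_dimensions(1080, 1920))   # (576, 1024) for a 1080x1920 (9:16) source
```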
I want to match the original aspect ratio of my video, which is 9:16. So, I'm setting the dimensions to 576x1024. This is high enough for decent quality, but low enough for my machine to handle. And I will show you later how you can quickly increase this resolution after generating.

Just below, you can click here to upload a picture of the character you want to replace the original with. You actually have two options here. You can either upload a completely new character that's totally different from your original. Just make sure you choose an image where the character is clearly visible and not too far from the camera. It also works fine if the new character's body is not fully visible in the frame. Your second option is to upload a stylized version of the same character. This way, you can transform your video into any visual style, such as anime, claymation, 3D animation, and more. To do this, first export one frame from your original video, ideally one where your character is most visible. Then you need to turn that single image into any style you want, and I'm going to do that using artlist.io. Go to AI Image and Video, switch to Image to Image, and upload that frame. In the prompt box, just describe the style. For example, you can say "turn this into anime style". Then pick a model. You've got options like Seedream, Nano Banana, or Flux. I will go with Seedream for this one. Hit generate, and in just a few seconds, you will get your stylized image.

Now, Artlist is the sponsor of today's video, and it's way more than just a place for stylizing images with AI. You can generate videos directly on the site using some of the best AI video models out there, including Google's Veo 3 and even Sora, too. It's a one-stop shop where you can also create voiceovers and dialogues in multiple languages, change existing voices, and of course access Artlist's legendary music library, sound effects, cinematic stock footage, and even motion graphics templates. Basically, everything you need to bring your ideas to life. So, if you want the best of both creative assets and AI creation tools in one place, Artlist is a no-brainer. Use the link in the description box to get two extra months for free when subscribing to artlist.io.

Now, back to ComfyUI. Down below, you can find the WanVideo Torch Compile Settings node. It's currently bypassed, as you can see. If you unbypass it and change the attention mode to SageAttention, it will supposedly speed up the generation. However, you will need to have both Torch compile and SageAttention working on your computer. This hasn't worked very well for me, but I will leave an installation guide in the description in case you want to try it out.

Next, move over here and find the Grow Mask With Blur node, specifically the expand value. This setting is crucial. It's set to 10 by default, which means the mask will only be 10 pixels larger than the detected subject. This works fine for characters that are the same size as your original one, but if your new character is larger, you might need to increase this value to allow more space for the new character. I'm setting mine to 25.
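To build some intuition for what the expand value actually does, here is a toy Python sketch of a grow-and-blur mask operation. This is only a conceptual stand-in using scipy, not the Grow Mask With Blur node's real implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def grow_mask_with_blur(mask: np.ndarray, expand: int = 10, blur_sigma: float = 2.0) -> np.ndarray:
    """Dilate a binary subject mask by roughly `expand` pixels, then soften the edge.
    A conceptual illustration only, not the actual node code."""
    grown = binary_dilation(mask > 0.5, iterations=expand)
    return gaussian_filter(grown.astype(np.float32), sigma=blur_sigma)

# A 64x64 mask with a small square "subject" in the middle
mask = np.zeros((64, 64), dtype=np.float32)
mask[24:40, 24:40] = 1.0

default_mask = grow_mask_with_blur(mask, expand=10)  # default setting
larger_mask = grow_mask_with_blur(mask, expand=25)   # room for a bigger character
print(default_mask.sum(), larger_mask.sum())  # the expand=25 mask covers much more area
```

The takeaway: a larger expand value gives the model a bigger region to repaint, which is exactly why it matters when the replacement character is bulkier than the original.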
Now, let's move to the character replacement group of nodes. By default, the generated video will use the same background from your original video and simply replace the character with a new one. However, if you instead want to take the movement from your video and use it to drive the animation of the character in your reference picture, which means the background from the reference image will be retained in your output video, simply disconnect the background image and mask inputs from the WanVideo Animate Embeds node. I'll keep the default connections for now, because I want to use this as a character replacement workflow.

Let's move over to the prompt box here. All you need to do is input a simple description of what's happening in the video in a short sentence. For example, I wrote "female clown talking". That's it. By default, Wan 2.2 Animate generates videos at 16 frames per second, but you can match the original frame rate of your video in both Video Combine nodes. Once you're done with the settings, click here to run the workflow and start generating. By the way, if you want to try other creative workflows, make sure you subscribe to the channel so you don't miss out on future videos. I also invite you to check out my Patreon, where you can unlock exclusive workflows, project files, and advanced tutorials.

After processing, you will be able to preview your final video with the swapped character in this node. And look at that, the replacement looks very clean. While there are some imperfections and weird glitches around the edges, the replacement looks almost seamless. And remember, this technology is only going to get better. The lip-syncing is impressive, the body movement is pretty much identical to the original video, and the color and light matching looks really good. Keep in mind that results will vary depending on your choice of video and character, so this is where experimenting with inputs and settings comes in. I highly encourage you to try out different combinations, because experimenting is truly the best way to learn.

If you feel the generated video could look better and the quality isn't up to your standards, you can use upscaling software to enhance it. Personally, my go-to option is Topaz Video AI. To find your generated video, go to your ComfyUI folder, open the output folder, and you will find all your generated videos there. All you need to do is drag and drop your video here and click Start Editing. I usually enable enhancement and upscale the video by two or four times the original size. Each AI model inside Topaz Video AI offers different results. For AI-generated videos, I typically use the Rhea model. You can also enable frame interpolation. This is useful if your video was generated at a low frame rate, like 16 fps for example. Topaz Video AI will fill in the missing frames to create smoother motion. Once you're happy with the settings, just click Quick Export and you're good to go. On my machine, the upscaling usually takes about a minute. And as you can see, the details on the upscaled video are way sharper, and the overall image just looks cleaner, but it doesn't feel oversharpened or weird, which is why I love using Topaz Video AI.
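As a side note, if you'd rather grab your latest renders from a script instead of browsing to the output folder by hand, a minimal sketch like this works. The install path is hypothetical, so point it at your own ComfyUI folder.

```python
from pathlib import Path

# Hypothetical default install location; adjust to your own setup
COMFY_OUTPUT = Path(r"C:\ComfyUI\output")

# rglob also catches videos saved into subfolders; sort newest first
videos = sorted(COMFY_OUTPUT.rglob("*.mp4"), key=lambda p: p.stat().st_mtime, reverse=True)
for video in videos[:5]:
    print(video.name)
```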
If you have any questions or requests, please drop them in the comments below. Stay creative, and I'll see you in the next video. Peace.

Channel: MDMZ
