Transcript of ComfyUI Tutorial: How To Do Object Swap (Face, Clothes) Using Wan2.1 Fun #comfyui #comfyuitutorial
Video Transcript:
Hello everyone, welcome back to the channel. Today I will show you a new custom workflow based on the Wan2.1 Fun model. This workflow allows you to do video face swapping, clothes swapping, and also logo insertion on a video. So if you are looking to get results like this one, make sure to follow my tutorial. And without further ado, let's get started.

Okay, before we start, make sure to head over to my Patreon in order to download the workflow for this video. Once it is downloaded, just drag and drop the workflow here.

Once you open up this workflow, you will find that it is composed of two main groups. The first group is the ACE portrait and logo group, which allows you to do the object swapping in general, whether it is a face, clothes, or a logo. If you are looking for more details about this ACE technology, you can watch the tutorial I made about it a while ago. What it basically does is take an input image and a reference image. If you focus on this example, you can see that on the input image I selected the shirt of this young boy in order to generate an image with a new shirt, which I will use later on to generate a video. As you can see here, we managed to change the clothes of this young man.

If you look closer, you can see that here we have the Flux Fill model: we have the UNet loader for Flux Fill, the VAE, and the dual CLIP loader. And here we have the core of this model section, which is the Turbo Flux LoRA that lets you generate the image faster. We also have three different LoRAs, each dedicated to one application: if you want face swapping, select the portrait LoRA by enabling this button; if you want logo insertion or clothes swapping, select the subject LoRA. This model is then plugged into the Apply First Block Cache node, which helps us create the image more quickly, and this model here is directly connected to the KSampler.

We also have this image processing group, but I will not bother you too much with its details. Keep in mind that I automated the process here, starting with the image input and the masking, but also the necessary prompt for the image generation. So all you have to do is drag in the input image and the reference image and type your prompt in order to get these results.

Bear in mind, though, that this image was extracted from a video that I wanted to edit. To do that, all you have to focus on is this Fast Groups Bypasser: bypass the Fun ControlNet group, which is the second main group of this workflow, and enable both the image-extract-from-video group and the video input reference group. Once that is done, all you have to do is load your video here, where you have different parameters, starting with frame_load_cap. I used it to limit the duration of the video to my needs; this frame_load_cap value corresponds to the length of the generated video. You can also see the height and width of my video here; I made sure to limit those values too, since I have a low-VRAM GPU. Once it is done, the first frame is extracted from the video, as you can see here. That first frame can then be reused by simply copying and pasting it here, and you can start your editing, whether it is a face swap or a clothes swap.
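As a quick aside that is not in the video: if you prefer to drive the workflow from a script rather than the UI, the same video-input limits can be patched into an API-format JSON export before queuing it on a running ComfyUI server. This is only a minimal sketch; the file name, the node id "12", and the exact input names (frame_load_cap, custom_width, custom_height, which follow the common Video Helper Suite loader) are assumptions you should check against your own export.

    import json
    import urllib.request

    # Minimal sketch, assuming the workflow was exported with "Save (API Format)".
    # The node id "12" and the input names below are assumptions; open your own
    # JSON export to find the real ids and field names.
    with open("wan_fun_swap_workflow_api.json") as f:
        workflow = json.load(f)

    video_loader = workflow["12"]["inputs"]  # hypothetical video-loader node id
    video_loader["frame_load_cap"] = 49      # caps frames read, and so the output length
    video_loader["custom_width"] = 480       # keep resolution modest on a low-VRAM GPU
    video_loader["custom_height"] = 480

    # Queue the edited graph on a locally running ComfyUI server.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)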
Once you have picked your first image, make sure to enable everything here; this way the workflow uses that first image, and a new image is generated based on your input. After that, the video generation process starts automatically too. All you have to do here is make sure that this second group is enabled.

This main group is itself composed of many other groups, starting with the model and VAE loaders. I will draw your attention to the fact that I am using Skip Layer Guidance here alongside WanVideo TeaCache, and I noticed that most of you don't have the TeaCache nodes. That is a complicated problem for me, since it depends on the author of these nodes: with every update, some nodes can break or disappear. So if you are facing missing nodes with my custom workflow, either try to install the missing custom nodes or find an alternative for them.

As I said earlier, the video is created automatically. To create this type of video, the workflow takes into consideration the ControlNet data that is generated here. In this group we combine both ControlNet signals: the depth map created with this node and the pose estimation created with this node. Both image sequences are combined here, and the resulting control video can be previewed here. Based on that, we use this Wan Fun Control-to-Video node, which has several inputs, starting with the start image and the control video. The control video is the one created by the ControlNet preprocessors, while the start image is the image we created with the ACE group. So the model starts generating the video from that image, helped along by this prompt generator here, which I also use to create the final result.

The main advantage of this method is that it uses less VRAM, which makes it very suitable for users with low-VRAM graphics cards: the VACE model needs a lot of VRAM, whereas with this workflow you can obtain comparable results without that cost. I have been testing this workflow for a week now and I am getting pretty good results; the consistency is there. Sometimes I do get a bad result. All you have to do to fix that is focus on the KSampler here: change the CFG scale to somewhere between 4 and 6, and you can also play with the steps. I managed to obtain good results with a step value between 20 and 25. So just try again (a scripted version of this retry is sketched below), and as you can see, the result is quite convincing.

If you are used to my workflows, you will notice that I did not include any upscaler nodes. The main reason is that the quality of the input video is already good and we are only regenerating part of that video, which helps create good quality content on low VRAM. So I strongly recommend this workflow for video editing, since it lets you make changes without any heavy VRAM consumption. Of course, you can find all the necessary requirements for the models here, but also in my previous tutorial that focuses on the ACE++ workflow.
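Here is the retry sketch mentioned above, in the same hedged API style as before: it patches the KSampler's cfg, steps, and seed in the exported JSON and requeues it. Again, the file name and the node id "3" are hypothetical placeholders, not values from the actual workflow.

    import json
    import random
    import urllib.request

    with open("wan_fun_swap_workflow_api.json") as f:  # hypothetical export name
        workflow = json.load(f)

    sampler = workflow["3"]["inputs"]  # hypothetical KSampler node id
    sampler["cfg"] = 5.0               # stay between 4 and 6, as in the video
    sampler["steps"] = 22              # 20-25 steps gave good results in the video
    sampler["seed"] = random.randint(0, 2**32 - 1)  # fresh seed for another try

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)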
Okay, that's it for today's tutorial. If you liked this video, please hit the like button, subscribe to my channel, leave a comment down below, and don't forget to watch my other video tutorials. You can also become a member of my Patreon channel to get early access to my content, starting with the workflows and other news related to AI generation and ComfyUI. Thank you.
Channel: CG Pixel