Transcript of Wan Alpha | 16-bit AI Video Generation in ComfyUI (Insane Results!)

Video Transcript:

Hey everyone, the ArtOfficial Trainer here. Welcome in, welcome back for another video. Today we're going to be talking 16-bit video with Wan Alpha, a super cool, novel new way to generate video. It generates with the alpha already in the video. This is helpful for things like compositing, or if you generally need 16-bit videos for your VFX or film work; getting that extra alpha layer is sometimes really helpful for doing different things. I'm also interested in trying the VAE just to add an alpha layer, not a transparency alpha layer but just an alpha channel, to see if this can generate pure 16-bit output. But this video is just going to be about generating 16-bit videos with transparent backgrounds, as you can see from the beginning, where I had videos playing over top of my screen here.

All right. These nodes are not in the ComfyUI Manager yet, so we'll need to head to the GitHub repo for Wan Alpha. You can find it in the description below. The nodes are all the same as the standard ComfyUI nodes except for this one saver node, so we need to open that up and click the download. It downloads this RGBA save tools file. Go to your ComfyUI folder, then go to custom_nodes, and just copy that file right into your custom_nodes folder. You can see RGBA save tools right there. Then restart your ComfyUI environment.

Okay, while it's restarting, you can download the workflow from the description or my Patreon. While you're at it, subscribe to my Patreon, and make sure you like and subscribe and hit the bell on YouTube. It helps the channel, and you get notified whenever I release any new news about AI video.

All right, so here is the workflow. It's just text-to-video right now. I'm curious to try this on image-to-video, but so far I've only done text-to-video. Then we also have to make sure that we have the models installed.
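The manual install step above (copy the downloaded saver-node file into `custom_nodes` and restart) can be sketched in a few lines of Python. The filename `rgba_save_tools.py` and the folder paths here are assumptions based on the walkthrough; adjust them to match your actual download and ComfyUI install location.

```python
# Sketch: drop a single-file custom node into ComfyUI's custom_nodes folder.
# The filename and ComfyUI root path are assumptions -- adjust to your setup.
import shutil
from pathlib import Path

def install_custom_node(downloaded_file: str, comfyui_root: str) -> Path:
    """Copy a single-file custom node into <comfyui_root>/custom_nodes."""
    src = Path(downloaded_file)
    dest_dir = Path(comfyui_root) / "custom_nodes"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create folder if missing
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves timestamps; overwrites if present
    return dest
```

After the copy, restart ComfyUI so it scans `custom_nodes` and registers the new saver node.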
So go to your ComfyUI models folder; in the description below I have all the models linked for you. You can just download each of them and put them in the locations that I'm showing here. I actually put the link to that Python file right in the description as well, so you can navigate to it from there.

For the diffusion models, it's just the regular Wan 2.1 text-to-video diffusion model; you can see I'm using the 14B FP16 version. Then the LoRAs: this is the LoRA that adds the kind of transparent color as a background that the VAE uses to decode what the alpha is, and then the LightX2V LoRA, which is pretty standard at this point if you're doing anything Wan 2.1. The text encoder is just the standard Wan text encoder, the UMT5. And then the VAEs are right here: there's an alpha-channel one and an RGB one.

All right. Once you have all your models in there, you should be good to run. A quick tip: if you want to make sure that you've selected the models, you can press R while you're hovering over or have focused a node, and it'll refresh the models that are listed inside it.

Then we can run a video. So let's say "a realistic man and woman having a conversation." You can see our alpha VAE decode is down here, and our RGB one is here. And I do have the alpha outputting here, so you can check out what the alpha looks like as well. Oh, and I made a mistake here: make sure you say "the background is transparent," or else it might add some color in the background, and then it'll mess up the decode.

All right. A really, really strange generation with this man getting a ponytail, but you can see that it does a great job at having that transparent background. And this outputs a PNG sequence: it gives you this checkerboard preview, but it also outputs a PNG sequence. So here's the PNG sequence, and you can see the transparency is there.
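The model layout described above can be sketched as a small checklist script. Every filename below is an illustrative placeholder, not the real download name; the subfolder names follow ComfyUI's usual `models/` convention. Use the exact files linked in the video description.

```python
# Sketch: verify the expected Wan Alpha model layout under ComfyUI/models.
# All filenames are assumed placeholders -- substitute the actual downloads.
from pathlib import Path

EXPECTED_LAYOUT = {
    "diffusion_models": ["wan2.1_t2v_14B_fp16.safetensors"],  # assumed name
    "loras": [
        "wan_alpha_lora.safetensors",    # assumed name (transparent-bg LoRA)
        "lightx2v_lora.safetensors",     # assumed name
    ],
    "text_encoders": ["umt5_xxl_fp16.safetensors"],  # assumed name
    "vae": [
        "wan_alpha_vae.safetensors",     # assumed name (alpha decode)
        "wan_rgb_vae.safetensors",       # assumed name (RGB decode)
    ],
}

def missing_models(models_root: str) -> list:
    """Return relative paths of any expected model files not found on disk."""
    root = Path(models_root)
    return [
        f"{sub}/{name}"
        for sub, names in EXPECTED_LAYOUT.items()
        for name in names
        if not (root / sub / name).is_file()
    ]
```

Running this before you queue a generation saves a failed run from a model dropdown that silently points at a missing file; remember to press R on a node to refresh its model list after adding files.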
So, in order to see the actual transparent video, we'll need to go into an editing software, something like DaVinci Resolve. When you import, you need to find your video at whatever path the PNG sequence was saved to, and that'll give you the transparent video. You can see that if I overlay these two on top of each other, you get a composited video, not just one with a black background that's covering up the other, right? So it's super useful for things like compositing.

This is definitely like a first beta launch, and I'm sure that something better is going to come out here shortly, especially since a lot of the people that I work with are looking for 16-bit and not just 8-bit generation. I think Luma also just released something where you can input an 8-bit video and it'll generate the alpha for you that you can put on top of it. So, definitely something to keep an eye on; the space is getting a lot better. A really exciting new technique for doing the VAE decoding here, and I'm excited to see what comes next.

But that is it for this video. I hope you enjoyed it and that you find it useful. Follow my socials. I appreciate you watching this video, and I'll talk to you in the next one.
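The overlay the editor performs when you stack the transparent clip on a background is the standard Porter-Duff "over" operator. A minimal per-pixel sketch, using 8-bit channels for brevity (a true 16-bit pipeline would do the identical math over a 0..65535 range):

```python
# Minimal sketch of Porter-Duff "over" compositing: one foreground RGBA
# pixel blended onto an opaque background pixel. 8-bit channels for
# brevity; a 16-bit pipeline uses the same formula with a 65535 divisor.
def over(fg_rgba, bg_rgb):
    """Composite a foreground RGBA pixel over an opaque RGB background pixel."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0  # normalize alpha to 0..1
    return tuple(
        round(fc * alpha + bc * (1.0 - alpha))  # linear blend per channel
        for fc, bc in zip((r, g, b), bg_rgb)
    )
```

This is exactly why the black-background workaround fails: without a real alpha channel, every background pixel blends at full opacity and covers the layer below, whereas an alpha of 0 lets the background show through untouched.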

Channel: ArtOfficial Labs
