Transcript of These FREE 3D AI Workflows are INCREDIBLE! (ComfyUI Tutorial + Blender)

Video Transcript:

You can now generate stunning 2K, 4K, or even 8K textures for your existing 3D models, or generate entirely new ones based on images or text, right on your own computer, for free. In this video, I'm going to show you how to set everything up and how we used it to create a full VFX shot with an army of zombies in no time. We're also sharing a bunch of other techniques, like how we used the plug-in to model and texture new environments with AI directly in Blender. Plus, we'll show you how we used the new open-source VACE video model to turn myself into a zombie. So make sure to watch to the end.

The main workflow we built is powered by the free and open-source Hunyuan 3D model. You might remember we used it in one of our previous videos for our short film Paper Jam, where we generated some of the characters and assets with it. We rigged up these characters and put everything together inside of Blender, where we posed each shot, and then we finalized the look of each shot using the Flux image model. That was necessary because while the quality of the 3D geometry, except for the topology, is pretty decent, the quality of the textures is pretty bad. But I always felt like the model could do better with just a bit of extra help. So over the past few weeks, I built these workflows that dramatically improve the texture quality using the Flux or SDXL image models. And you can now also import pre-existing 3D models with UVs into this workflow and have the model texture them.

We wanted to test it by turning these three zombie characters into an epic army. For this, we shot a quick test scene of me running around in a parking lot, tracked it in Adobe After Effects, and brought the 3D camera track into Blender, where I lined it up with a simple 3D scan of the parking lot that I captured with my iPhone. With that basic layout in place, we generated the crowd in Houdini using different Mixamo animations, and then we brought it all back into Blender. The shot was starting to look very promising, but now that parking lot felt a bit boring.

That's when NVIDIA reached out to us to show us their new AI Blueprints, and it turned out to be exactly what we needed for this shot. So thanks, NVIDIA, for sponsoring the next part of this video. NVIDIA AI Blueprints are basically pre-built workflows that show you how to use AI tools in real-world projects. They run on something called NIMs, which are small, optimized services that let you run AI models really efficiently on newer NVIDIA GPUs. There are already a bunch of these blueprints available, but the one that I wanted to try out was their 3D Guided Generative AI Blueprint. This one lets you use the amazing Flux image model right inside of Blender.

Thanks to their step-by-step guide, the installation was pretty easy. Once it was running in Blender, I just loaded up their example scene, typed a prompt, hit run, and it generated an image based on the depth information of the scene. It's basically like having Flux as a new render engine inside of Blender. That gave me an idea: I imported our 3D-tracked shot, built a quick layout using simple shapes like cubes, and started prototyping different looks for the environment just by tweaking the prompt and the seed. And one really cool thing that you can do with these renderings is project them back onto the scene geometry: just go into Edit Mode, UV > Project From View, and load the image in as a texture in your shader.
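As a side note, the texture-assignment half of that tip can also be scripted. Here is a minimal sketch using Blender's Python API; the image path and material name are placeholders, and the Project From View step itself is still done manually in the viewport as described above.

```python
# Minimal sketch: after manually running Edit Mode > UV > Project From View,
# wire a generated rendering into the active object's shader as a texture.
import bpy

obj = bpy.context.active_object
mat = bpy.data.materials.new(name="ProjectedEnvironment")  # placeholder name
mat.use_nodes = True
obj.data.materials.append(mat)

nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes["Principled BSDF"]

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/flux_rendering.png")  # placeholder path

# Base Color is driven by the projected rendering; the UVs come from
# the Project From View step done in the viewport.
links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
```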
If you move the camera too much, though, you can see that the projected background kind of falls apart. So what I did is I generated the foreground and background separately and combined the two shaders in Blender using a mask. I added some AI-generated assets, created a lighting setup, and boom: that's our final scene, prototyped in maybe half an hour to an hour. If you want to try out NVIDIA Blueprints and NIMs yourself, check out the link in the description. Thanks again to NVIDIA for sponsoring this part of the video.

And now, let's get back to texturing our zombies. All my workflows are created in ComfyUI, a free node-based interface for AI models. Installing it is very easy, and we created a guide for that on our website. Once you've set it up, start ComfyUI and drag and drop the workflow file into the ComfyUI interface.

There are a few missing custom nodes. To install them, go to Manager > Install Missing Custom Nodes, select all of them, and click install. Then restart ComfyUI and refresh your browser window. We still have some missing custom nodes, and these are the Hunyuan 3D nodes. So we try again: go to Install Missing Custom Nodes and install the nightly version. Restart ComfyUI. You can see the nodes are finally here, but we still have an error message. That's because we still need to do some extra steps. You can go to the GitHub page of the custom node by just clicking on its name, then scroll down to the installation section. To install these additional requirements, go to your ComfyUI folder and type cmd into the address bar to open a command window. Copy the first command, paste it into the cmd window, and hit enter.

And now you see a very weird bug: for some strange reason, when you install Hunyuan 3D, sometimes it will not name the custom node folder correctly. Look at this: go to ComfyUI/custom_nodes, where you have all the custom nodes, and you can see the Hunyuan 3D wrapper folder is missing a "u". All you need to do to fix it is add that "u" (a scripted version of this fix is sketched after this walkthrough). Paste in the command again, hit enter, and now it works. Since we're using the newest portable version of ComfyUI, we need to copy the last command into the cmd window and hit enter. Once it's done, restart ComfyUI and you should not have any error messages anymore.

But we still need to download some models for everything to work. You can find how and where to download these models in the little notes next to the model folders. So just do that: put them in the correct folders, refresh or restart ComfyUI, and select these models in the loader nodes.

With this workflow, we're going to work from left to right. So let's zoom in on the left here. Make sure that you have selected all these models, and now you can import the mesh that you want to work with. If your object already has UVs, leave this node deactivated; otherwise, it will create new automatic UVs, which can be handy if you don't have any. But for this one, we have pretty okay UVs, so we want to keep them. In the next group, the different views of our character will be extracted, and we don't have to change anything here. But this texture size right here is going to be our final output size of the texture, so make sure to adjust that if you want. Now we need to select the main view that we want to generate our texture from. The idea is that we generate a texture from one main view, and this will then automatically be interpolated around the character.
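Here is the promised sketch of the folder-rename fix, in case you'd rather script it than fix it in Explorer. It assumes the standard portable folder layout, and the mis-spelled name is hypothetical; check your own custom_nodes folder for the actual one.

```python
# Hedged sketch of the folder-rename fix described above. Both folder names
# below are placeholders: the real typo on your machine may differ.
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")       # adjust to your install
bad = custom_nodes / "ComfyUI-Hnyuan3DWrapper"    # hypothetical typo (missing "u")
good = custom_nodes / "ComfyUI-Hunyuan3DWrapper"  # correct wrapper name

if bad.exists() and not good.exists():
    bad.rename(good)
    print(f"Renamed {bad.name} -> {good.name}")
```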
Back to choosing the main view: for a humanoid character like this, it of course makes sense to do it from the front, so that the face looks as good as possible. But you can select other views here as well. For example, if you have a dinosaur character like this, it would make sense to choose a side view. You can click on this preview to find out which image is which: this is number 1, 2, 3, 4, 5, and so on, and then just select that here.

In the next group, the main character texture will be generated based on the prompt that you give it here on the left side. On the top, you can input a style prompt for the general style: animation, photography, 3D rendering, anime, whatever you want. And down here, we input a prompt for the type of character that we want. You also have a negative prompt where you can put in what you don't want to see. Now we can just click run, and we have our zombie wearing a hoodie. If you don't like your result, adjust the prompt, or you could also try out different seeds. So let's try another seed and generate again. Maybe we can also give it a skull. Let's change the t-shirt: ripped jeans and t-shirt. Okay, that's looking pretty spooky. Perfect. Let's continue.

In this next group, you don't have to do anything at all: the background will just be removed and changed to white. And this next group will try to remove the lighting from the image. Often this does not work really well and it loses a lot of detail, so I added this option right here to blend it with the original image. Again, play around with the blend, and you can also play around with the seed here. Maybe you're lucky and get a better version of your de-lit image. But for now, this has to be good enough.

Let's continue to the next group. Here, the model will generate the views from the other sides based on the image that we give it. So let's click run. When they are really broken, like, for example, I don't know what's happening with his butt here, you can try changing the seed and generating a new set of images. That's better: he is finally wearing pants. Still very tight pants, though, but let's continue with that.

With this set of images, we can move on to the next group, where all these images will be upscaled to the resolution that you want them to be. Let's start at the top left. Here, you have the option to add additional detail to your prompt. Down here, this ControlNet makes sure that the character proportions will not change too much when you upscale these images. The next part of this workflow is the Ultimate SD Upscaler, which will upscale the images by four. These different views are around 512 pixels, so this will upscale them to 2K resolution. And if you want, you can activate this second upscaler here using Ctrl+B. This one is set to two, so it will double the resolution again, bringing it up to 4K, but you can go even higher if you want to and if you have the time. Let's leave that deactivated: 2K is enough for me. And let's run this. It will break these images up into smaller sections and upscale them one by one. You can change how much detail it is allowed to add by changing this denoise value, though I would recommend keeping it around this value or even a bit lower.

Once that is done, it will be sent over to the face detailer, which is a similar concept: it will pick out the faces and upscale them again, adding more detail. When it's done, you can use this image compare node to see what changed.
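To make the resolution math concrete, here is a tiny sketch of how those chained upscale factors land on the final view resolution. The 512-pixel base and the 4x and 2x factors are the values mentioned above; everything else is just illustration.

```python
# Chained upscale factors from the walkthrough: each view starts around
# 512 px, the Ultimate SD Upscaler multiplies by 4, and the optional
# second upscaler multiplies by 2 on top of that.
def final_resolution(base_px: int, *factors: int) -> int:
    for f in factors:
        base_px *= f
    return base_px

print(final_resolution(512, 4))     # 2048 -> roughly "2K"
print(final_resolution(512, 4, 2))  # 4096 -> roughly "4K"
```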
And yeah, this face detailer pass was doing a lot of heavy lifting. I mean, look at this. But since I put the denoise value so high, I allowed it to make a lot of changes, so now he's wearing flip-flops.

In the next group, you don't have to change anything. Here, the textures will be created. You can see it will save them to the original UVs, and it will also give you a normal map. The textures and the 3D model will then be saved out to your ComfyUI folder, and you also get this preview right here. If you want to import these textures into Blender, for example, and build your shader, you can find them in the ComfyUI output folder.

But let's first add some animations. So I upload him to Mixamo and use the auto-rigger to give him a rig. Perfect. Download as FBX, open Blender, and import him. Go to the Shading tab and add a new shader, and now I'm bringing in my two textures. Connect this one to color and the next one to a normal map. Since this is a normal map, make sure to change the color space to Non-Color and connect it (a scripted version of this shader setup is sketched a bit further down). Let's add some quick lighting. [Music]

And if you like these workflows and want to support our work, consider supporting us on Patreon. You can get access to the advanced versions of these workflows, where we try to add a bunch more features that might be useful to you. For example, you have the option to prompt with a reference image. Let's load in a car, for example, and create a prompt. And yeah, this is looking pretty cool. But let's say you already have an image of a car whose style you would like to use. I'll just activate this IP Adapter setup right here, load in the image of the car that I want to use, and click run again. Now the car will follow the style of the input image. You can use this reference for a lot of other stuff, too: as a style reference, or as a reference for clothes, for example.

Now let's say you generated an image like that and you like it, but you want to change a certain part of the image. For example, the face looks a bit weird here, and it's also not matching the geometry of the face perfectly. What you can do then is go down to the new inpainting group, right-click on this image, go to Mask Editor, and mask out the part of the image that you don't like or that you want to change. Give it a new prompt and generate only that part of the image again. You can see this is not absolutely perfect, and I would try some different seeds here, but it's matching the geometry much better already. And remember that it will also be upscaled later. The advanced workflows will also generate a height map right here that you can use for displacement, for example, but I sometimes also use it to create something like a specular or roughness map. Thanks for considering supporting us, and feel free to let me know what features you would like to see added on our Discord community.

But now, let's continue with the next free workflow. Let's say you don't already have a 3D model and you want to generate one. For this, just import my 3D generation and texturing workflow. Before you can use this one, you need an additional model that you can download right here: download it from Kijai's Hugging Face page and put it inside of your models/diffusion_models folder. Here, instead of generating a texture based on an existing 3D model, we turn the process around and start by generating an image, and then build the 3D model out of that image. So in this first group, you can just enter a prompt for what you want to see.
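Here is the promised sketch of the Blender shader setup, again using Blender's Python API. The file names are placeholders for the color and normal textures ComfyUI saved to its output folder; the key detail from the video is the Non-Color color space on the normal map.

```python
# Minimal sketch of the shader setup described above: color texture to
# Base Color, normal map through a Normal Map node with Non-Color space.
import bpy

obj = bpy.context.active_object
mat = bpy.data.materials.new(name="ZombieTexture")  # placeholder name
mat.use_nodes = True
obj.data.materials.append(mat)

nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes["Principled BSDF"]

# Color texture -> Base Color
color = nodes.new("ShaderNodeTexImage")
color.image = bpy.data.images.load("/path/to/zombie_color.png")  # placeholder
links.new(color.outputs["Color"], bsdf.inputs["Base Color"])

# Normal texture -> Normal Map node -> Normal input.
# As in the video: normal maps must use the Non-Color color space.
normal_tex = nodes.new("ShaderNodeTexImage")
normal_tex.image = bpy.data.images.load("/path/to/zombie_normal.png")  # placeholder
normal_tex.image.colorspace_settings.name = "Non-Color"

normal_map = nodes.new("ShaderNodeNormalMap")
links.new(normal_tex.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])
```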
For the prompt, let's stay with the zombie, for example: "full body view, zombie, from the front." Let's click generate. Yeah, that's a pretty cool zombie right here. Worked pretty well.

If you want to generate your character in a certain pose, like a T-pose, you can activate this input image right here using Ctrl+B, along with this Apply ControlNet node. Now you can just go to the 3D Open Pose Editor, for example. Make sure it's set to a square image. Then you can just click this button right here and download the open pose image. And now the character will generate in exactly this pose.

If you want, you can also generate a character based on an input image. For this, you can just activate this node here, maybe this one right here. You still need to change the prompts, and then you need to connect this node instead of this node up here. The workflow will then create a 3D model based on that image. That's looking pretty good. If you don't like it, you can change the seed to generate another variation of the model. So, it worked pretty well for my Pixar version of myself.

But let's get back to our zombies. It was finally time to put everything together. I had generated at least five different textures for each of the three zombie models. I then brought the textures into Blender, creating the different shaders and assigning them to the crowd simulation. I rendered everything out and composited it in After Effects. But something was missing: for the ending of the shot, I needed to turn into a zombie.

For this part, we used the new Wan 2.1 VACE video model in ComfyUI. We used a preview version of this model before for our short film, where I turned into this beetle creature, but now the full model is out. We selected one frame from the middle of the sequence, where I'm already a zombie, and turned myself into the zombie using the new and absolutely amazing Flux Kontext, which we'll definitely cover in a future video. We then generated two videos with VACE starting from that center frame: we reversed the video for the first part, then generated the second part, stitching them together in DaVinci Resolve (a scripted sketch of this reverse-and-stitch idea follows at the end of the transcript). I added that to the comp in After Effects, which unfortunately kept crashing, so I was forced to call it a day, adding some color grading to hide the imperfections.

And here they are. Here are the final shots. [Music] And that's it for this one. I hope you found these workflows useful and inspiring. If you create something with them, please share and tag me in your work; I always love to see what you come up with. Huge thanks also to our amazing Patreon supporters, who make these videos and deep dives possible. Thank you so much for supporting, thank you so much for watching as well, and see you next time.
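One last technical note: the reverse-and-stitch trick mentioned above can also be scripted. This is a minimal sketch using the moviepy library (1.x API) rather than DaVinci Resolve; the file names are placeholders, and both generated clips are assumed to start on the same center frame.

```python
# Hedged sketch of the reverse-and-stitch idea: part 1 was generated forward
# from the center frame, so reverse it to play *toward* that frame, then
# append part 2, which plays onward from the same frame.
from moviepy.editor import VideoFileClip, concatenate_videoclips, vfx

first_half = VideoFileClip("vace_part1.mp4").fx(vfx.time_mirror)  # reversed
second_half = VideoFileClip("vace_part2.mp4")

final = concatenate_videoclips([first_half, second_half])
final.write_videofile("zombie_transformation.mp4")
```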

Channel: Mickmumpitz
