Transcript of I Tried EVERY Camera Movement Prompt in WAN 2.2 (FULL TUTORIAL)

Video Transcript:

This is the cinematic shot-type and camera-movement quality you can achieve with your consistent character in WAN 2.2. After extensive testing, I found the key to getting all the shot types and camera movements right, plus keywords to adjust the intensity of both the action and the camera motion. So, in this video, I'll show you pro tips on how to do this while also keeping your scenery consistent. And the best part is it works with both online AI tools and open-source software like ComfyUI. Later in this video, I'll walk you through how to use WAN 2.2 on its official site, as well as the workflows I use in ComfyUI, which include a few extra options you don't want to miss.

But first, I'm starting with Design AI, where I create my input images and use them to generate videos with the WAN 2.2 model. I prefer the image-to-video feature, as it gives you the most control when working with AI video. That said, I'll also touch on the potential of the text-to-video model. On the Design homepage, I'm using the instant storyboard tool. And if you're using open source, you can get similar results with the Flux Kontext workflow, which I'll explain briefly at the end of the video. That said, I do find the instant storyboard tool delivers slightly better results.

Here, I'll add this image, which I created using the following prompt. By clicking here, I can upload the image into the tool. Then I'll enter the prompt "high angle shot of the woman." And to use this image as a reference, you need to add the @ symbol, which allows you to select your base image. The last line of the prompt is especially important: it ensures that the new image matches the lighting and color of the base image. This way you can create cinematic shots that all feel like part of the same story. Then I select the 16:9 aspect ratio and hit generate. I'm really impressed by the result. It's the exact same character with the same clothes, scenery, lighting, and colors.

Now let's turn this image into a video by clicking on the AI video tab and selecting the WAN 2.2 model. Then I'll upload our image by dragging it in here and add my prompt: "Camera pushes in for a close-up." And since the WAN 2.2 model excels at realistic dynamic motion, it's the perfect match for our well-prepared image.

Before we move on, here's a quick breakdown of what's coming next. I'll show three more shot types and camera movements with this woman, followed by three more with the assassin woman. Then I'll show how to change her cinematic style to a warmer look, followed by three more shots with the updated version. And after that, I'll show you how to add multiple characters, create an action scene, and use the intensity keywords to adjust how much action appears in the shot.

For the next shot, I added "low angle shot of the woman doing a model pose with one arm resting on her head." To give it a more cinematic feel, I added "the sun is shining from behind." Let's create two more shots and see how well they fit together. I made a full body shot of the woman with her feet in the snow and a close-up of her lying down on the snow, resting her chin on one hand. I'm really impressed with the results; everything clearly fits into the same story. And with the WAN 2.2 model, we can add amazing camera movements and turn them into high-quality shots. But before we do that, I want to quickly show you how I created the thumbnail shot. For this, I used "extreme high angle shot with dynamic movement with her arms towards the camera." Now, let's check out the video prompts and the results.
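Before we get to the video results, a quick side note on keeping these image prompts consistent. If you generate a lot of storyboard shots, it can help to build the prompts from a small template instead of retyping them. Here is a minimal Python sketch of that idea; the "@woman" reference syntax and the example shot descriptions come from the prompts above, while the helper names and the exact wording of the lighting-match line are my own illustration, not part of Design AI or WAN 2.2:

```python
# Illustrative prompt-builder for consistent storyboard shots.
# The lighting-match line is a paraphrase of the "last line" used in the
# video; swap in whatever wording works for you.
LIGHTING_MATCH = "Match the lighting and color of the base image."

def storyboard_prompt(shot_type: str, extra: str = "") -> str:
    """Compose an image prompt: shot type + @reference + details + lighting line."""
    subject = f"{shot_type} of @woman"
    if extra:
        subject = f"{subject}, {extra}"
    return f"{subject}. {LIGHTING_MATCH}"

def video_prompt(camera_move: str, action: str = "") -> str:
    """Compose an image-to-video prompt: camera movement plus optional action."""
    return " ".join(part for part in [camera_move, action] if part)

# Example shots from this part of the tutorial:
print(storyboard_prompt("High angle shot"))
print(storyboard_prompt(
    "Low angle shot",
    "doing a model pose with one arm resting on her head, "
    "the sun is shining from behind",
))
print(video_prompt("Camera pushes in for a close-up."))
```

Nothing here is required by the tools; it just keeps the shot type, the @ reference, and the lighting-match line in one place so every new shot stays part of the same story.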
For the full body shot, I used "camera pulls back as the woman walks forward," and I really love the dynamics of this shot. For the base shot, which is a medium shot, I used "camera zooms into the eyes as she tucks her hair behind her ears." This one really shows the potential of the WAN 2.2 model: the movement looks so realistic, and you can see how well it follows the prompt. For the low angle shot, I used "arc shot around the woman," and although it's a subtle effect, I'm very pleased with the result. For the thumbnail shot, I used "camera zooms away while panning to the right in a circular motion." It might not be exactly what I was aiming for, but the result turned out amazing. I love the shot, and the mountains in the background look incredibly consistent, crisp, and clear. And for the close-up shot, I used "the woman puts her hands down on the ground and turns away from the camera while smiling." I didn't enter "camera" here, as I just wanted to see how well it performs based on the image alone.

For the next set of shot angles and camera movements, I'll use this image. I'm using an extreme close-up shot and added "with focus on her eyes," which will help guide the AI model to go closer to the subject. For the camera movement, I used "camera zooms into the eyes," showing her brown eyes in a very detailed way. For this shot, I used an extreme high angle shot and added "looking up at the camera with her arms towards the camera" plus "dynamic movement," which gives this shot a great feel of liveliness.

Here you can see that this time I used two images. On the left, we see the assassin girl on a transparent background, and next to it, an image of the background. While testing this tool extensively, I found that in some situations it works very well to use only the base image. But in some cases, especially if you want to zoom out and show more of the scenery, it works better to upload two images. Then for the camera move, I used "camera zooms away from the woman." I am very excited about this shot. I really love the dynamics, and with these kinds of shots, you can really start telling a story.

Here, I wanted an over-the-shoulder shot, but the model wasn't reacting to that angle. So, I added a reference image in the third box and used the prompt "shot from the back as she looks at the camera over the shoulder, like the girl in image 3." For the video, I used this prompt; it's the exact same one I used in one of my earlier Kling tutorials. I am super excited about how well the WAN 2.2 model listens to the prompt. I love the shot and would have loved to make it longer, but for now the limit is 5-second videos, though I assume that will change soon.

For this image, I used an extreme wide shot angle, and the background looks really cool. The character might not be 100% consistent, but that's almost always the case when creating wide-angle shots where the subject is very small: there simply aren't enough pixels to recreate the exact face from the base image. For the camera movement, I used "camera tilts up into a drone shot above the woman," and I'm blown away by the result. If you want that zoom-out aerial effect, it's really important to include "into a drone shot" in the prompt. If you only use "camera tilts up," you get a result like this, which is also cool, but I personally love the added zoom-out effect. Just be careful with the drone shot keyword, because in this shot, where I used "gradually transitioning into a high angle drone shot above her," a real drone suddenly flew into the frame.
In this shot, I wanted to create action and dynamic movement, so I made a full body shot of the assassin girl running. From my Ultimate Prompt Toolkit, I used "dynamic motion blur" and "sand swirling" to add energy to the image. These kinds of keywords are essential if you want action in your shot; they give WAN 2.2 the chance to show its real potential. Then for the video, I used this prompt, and although it plays in slow motion, I really love the shot. Even though the prompt includes a lot of action keywords, I've noticed that with WAN 2.2, some videos come out with fast movement while others are in slow motion. On Twitter, Brent Lynch shared a post suggesting keywords like "rocketing," "intense," and "fast movement" to trigger more action-style shots. I did get a video with faster motion using those, but for this shot it didn't work in every video I created. It did work great for the fighting action scene, though, and I'll show you in a second how I use them to control the action in that shot.

Comparing these videos side by side, I am really impressed by the consistency: the colors, background, outfit, and character all align perfectly. This makes creating believable AI films so much easier. The instant storyboard tool can do much more. For example, you can create a product promo or insert yourself into Veo 3, and in this video, I'll show you exactly how to do that.

Now, I'll quickly show you how to change the cinematic look of your character and base image. On Design AI, the output images are shown on the right side. Here, I select the chat editor and add the prompt "make this image more cinematic with a film look." That gave me this result, but the facial consistency is missing and the effect feels a bit too dramatic for my taste. So, in the instant storyboard tool, I added both images and then added the prompt "give image 1 the same lighting and colors as image 2." This gave me an image with the same consistent character but in a completely different cinematic environment, and I really love that flexibility.

For the next image, I wanted to create an off-center shot, and the AI model responded really well to that keyword. To add more liveliness, I included "brooding facial expression, looking into the camera, putting her hands in front of the camera like she's grabbing it." For the camera prompt, I used "the girl is standing in front of the camera, agitated, fast zoom to the girl's face." This shows how well WAN 2.2 handles facial expressions and emotions.

Here I created an extreme low angle shot and added "her legs are visible," which helps emphasize the low angle. Another technique you can use, if you don't want the legs to be visible, is to add something like "vast sky in the background" to create a steep extreme low angle shot. These prompting techniques are key to getting the camera angle right. In my Ultimate Prompt Toolkit, we've written over 30 pages on getting the right shot angle. It also includes over a thousand visual keywords, all in one place, and with the discount code DM45 it's currently available for just $15.

For the video prompt, I used "arc shot of the assassin." I also tried keywords like "orbiting around," but the WAN 2.2 model didn't seem to respond well to that. Another keyword the instant storyboard tool responds to very well is "dialogue over-the-shoulder shot," which is ideal for creating cinematic conversation scenes. For the video, I used the prompt "dialogue between the woman and the man, static camera, cinematic." I hope WAN 2.2 adds a feature soon that allows you to include a voice, like in Veo 3.
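One more aside before the remaining shots: those action-intensity keywords (fast, intense, high speed, rocketing, dynamic motion blur) are easy to keep as reusable tiers, so you can dial a shot up or down without rewriting the whole prompt. Here is a minimal Python sketch; the keyword groupings follow the three levels tested in the wrestling scene coming up, while the tier names and the helper itself are just my own labels for illustration:

```python
# Reusable action-intensity tiers, grouped the way they are tested in this
# video. The tier names are illustrative labels, not model keywords.
INTENSITY_TIERS = {
    "normal": "fast intense movement",
    "strong": "fast intense movement, high speed",
    "wild":   "fast intense movement, high speed, rocketing, dynamic motion blur",
}

def with_intensity(base_prompt: str, tier: str) -> str:
    """Append an intensity tier to a video prompt."""
    return f"{base_prompt}, {INTENSITY_TIERS[tier]}"

print(with_intensity(
    "The two characters are wrestling, fighting facial expressions, flying dirt",
    "wild",
))
```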
The side profile shot is also recognized by the AI model and offers great potential for the video. For this, I used the prompt "the woman walks to the right, camera tracking her head." WAN 2.2 responded very well to the tracking keyword, and I'm satisfied with the result. Side by side, it's clear these shots belong together; they all share the same warm cinematic vibe that really ties the scene together.

Let's move on to the action shot. Here, I added two characters, both on transparent backgrounds, and placed the background in a box on the right. I described them as wrestling and added "fighting facial expressions," "dynamic motion blur," and "flying dirt" to the prompt to create that intense action feel and give the scene real impact. Now, let's add the intensity keywords to control the action. For normal movement, I used the keywords "fast intense movement." For stronger movement, I added "fast intense movement, high speed." And for wild movement, I used "fast intense movement, high speed, rocketing, dynamic motion blur." I love the flexibility to control the action. To be honest, I prefer the first shot: it has a slight slow-motion feel but still feels very real to me. The other shots are great too, but maybe not 100% realistic. I've also checked other videos, and while they show great speed and action, they don't always feel fully realistic either. That said, I'm super impressed with the model, and I really enjoy working with it.

Okay, now let's dive into the official WAN 2.2 site and then into the ComfyUI workflows. On the official WAN website, you get a few credits to create one or two images or videos per month. If you go to the generate tab, you can see that you can choose either to create a video or an image. We're going to go for video, and then I can choose between the image-to-video option and the text-to-video option. Now you can drag your image in here, add your prompt, and click generate. The text-to-video model also has a lot of potential, and if you click on this button, which is the user guide, I recommend scrolling through the article to get a better idea of what this model is capable of. It shows how the text-to-video model in WAN 2.2 responds to different light sources, lighting types, times of day, shot sizes, compositions, lenses, and color tones. You can use this information wherever you're working with the model, like on Design AI or in ComfyUI.

I use the Comfy Studio toolkit, which costs $10 a month and gives access to over 200 organized workflows. The best part is it handles all the model, LoRA, and VAE installs for you. I've made a dedicated video about it on my channel if you want to learn more, and I'll share my experience with the Flux Kontext workflow shortly. Another reason I use Comfy Studio is that it comes with multiple workflows per model. For example, here's the WAN 2.2 workflow for image-to-video, and if I zoom out, you'll see that there are many more WAN 2.2 workflows available. It includes, for example, the first frame/last frame workflow, which lets you create smooth transitions between two images like you see here, and there's also a workflow to upscale your videos. You can also use the standard WAN 2.2 workflows by clicking here, then selecting the video tab and choosing the WAN workflow. However, if you don't have the Comfy Studio toolkit installed, you need to manually download all the models and place them in the correct folders.
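If you'd rather script your generations than click through the UI, ComfyUI also exposes a small local HTTP API, so a WAN 2.2 image-to-video workflow can be queued from Python. This is only a rough sketch, not the Comfy Studio toolkit's method: it assumes a default local ComfyUI install on port 8188, a workflow you exported yourself with "Save (API Format)," and it uses a placeholder file name and node ID that you would replace with the ones from your own export:

```python
# Rough sketch: queue a WAN 2.2 image-to-video workflow through ComfyUI's
# local HTTP API. Assumes ComfyUI is running on the default port and that
# "wan22_i2v_api.json" was exported with "Save (API Format)".
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"

with open("wan22_i2v_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Node IDs differ per export; "6" is a placeholder for the positive-prompt
# text node in *your* file -- open the JSON and check which ID it actually is.
workflow["6"]["inputs"]["text"] = (
    "Camera tilts up into a drone shot above the woman."
)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    COMFYUI_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # returns a prompt_id you can use to track the job
```

This is handy for batching the camera-movement and intensity variations from earlier without re-running the workflow by hand each time.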
You'll also find the Flux Kontext workflow here, but I'll use the one included in the Comfy Studio toolkit instead, since it comes with six different workflows, giving me everything I need to create anything the Flux Kontext model can do. What I've noticed with this workflow is that it works better with Flux Kontext if you have an image of the character on a transparent background plus a separate image of the scenery. You could also drag in an image that already has a background on it, but I didn't get great results with that. It did work in some cases; I simply had better results with the transparent image.

I hope you found this video helpful. If you want to see how to put yourself into Veo 3, check out this video, or watch this one to learn more about Comfy Studio. And I'll add the video model comparison here as soon as it's ready.

Channel: Digital Magic
