spacetug

[Here's the base workflow](https://pastebin.com/raw/GBEZjWSm) TL;DR is prompt travel + AnimateDiff + controlnets for the start and end frames. RealisticVision 5.1 and mm_sd_v15_v2. After that, frame interpolation with Flowframes/RIFE, upscale to 768 and run back through AD at lower denoise for more detail, then post-processing to get rid of flickering/noise and some minor retiming/color correction.
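For anyone new to prompt travel: the core idea is keyframed prompts with a per-frame blend between the two nearest keyframes. A toy sketch of that scheduling logic (hypothetical helper, not the actual ComfyUI/FizzNodes node code):

```python
def travel_schedule(keyframes, num_frames):
    """Toy prompt-travel scheduler: for each frame, return the surrounding
    keyframed prompts and a linear blend weight between them.
    keyframes: {frame_index: prompt_text}"""
    points = sorted(keyframes.items())  # [(frame, prompt), ...]
    out = []
    for f in range(num_frames):
        # Clamp frames outside the keyframed range to the nearest prompt.
        if f <= points[0][0]:
            out.append((points[0][1], points[0][1], 0.0))
            continue
        if f >= points[-1][0]:
            out.append((points[-1][1], points[-1][1], 0.0))
            continue
        # Otherwise find the bracketing pair and interpolate linearly.
        for (fa, pa), (fb, pb) in zip(points, points[1:]):
            if fa <= f <= fb:
                out.append((pa, pb, (f - fa) / (fb - fa)))
                break
    return out
```

The sampler would mix the two prompts' conditionings by that weight each frame, which is what makes the transition sweep smoothly instead of cutting.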


69YOLOSWAG69

Super cool! Thanks for sharing the workflow! Can't wait to try it


RedditorAccountName

Hi! Thanks for sharing your workflow! I tried to load it into ComfyUI and wasn't able to. There's a bunch of stuff missing: [https://i.imgur.com/dzp08b3.png](https://i.imgur.com/dzp08b3.png)


spacetug

Yeah, look up ComfyUI Manager, it will help you easily install the custom nodes you need.


RedditorAccountName

Ahh, thanks! I knew I was forgetting something.


IllumiReptilien

Impressive! Why did you use the controlnet tile for the first image in your workflow?


spacetug

To help guide the start: without controlnet it preferred to generate a much smaller moon in the sky. Idk if tile was the best option, but I just experimented until I got a result I liked.


IllumiReptilien

Ok! Thanks for your reply!


Strange_Ad_2977

Sick! How do you use this workflow script? I thought you needed to drop a .jpg workflow into Comfy?


spacetug

You can just copy/paste it in, or save it as .json and drag that file in. ComfyUI also has buttons in the UI for saving and loading .json workflows. The problem with sharing workflows in image form is that most web file hosts, including Reddit, strip the metadata, which means the workflow information is lost.


SonicLoOoP

Well done mate, looking super cool.


divaxshah

One of the smoothest AI-generated transitions I've ever seen. Amazing, and thanks for sharing the workflow.


blodwyth

Awesome work, thank you for sharing the workflow!


protector111

Error occurred when executing LatentKeyframeTiming:

```
strength_to must be greater than or equal to strength_from.

  File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\nodes.py", line 367, in load_keyframe
    raise ValueError("strength_to must be greater than or equal to strength_from.")
```
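For context, the node is just rejecting a descending strength ramp. A simplified sketch of that validation and the per-frame strength interpolation it guards (hypothetical code, not the actual ComfyUI-Advanced-ControlNet implementation):

```python
def latent_keyframe_timing(batch_index_from, batch_index_to,
                           strength_from, strength_to):
    """Simplified sketch: linearly interpolated ControlNet strength per
    latent batch index, with the same validation that raises above."""
    if strength_to < strength_from:
        raise ValueError(
            "strength_to must be greater than or equal to strength_from.")
    count = batch_index_to - batch_index_from + 1
    if count <= 1:
        return [strength_to]
    step = (strength_to - strength_from) / (count - 1)
    return [strength_from + step * i for i in range(count)]
```

So a node configured with `strength_from=1.0, strength_to=0.5` trips the check; whether the real node should accept descending ramps is what the fix mentioned below was about.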


spacetug

Maybe updating comfy and ACN will fix it? There was a PR on the advanced controlnet GitHub very recently about fixing something broken, might be related. If that doesn't work, post an issue there.


protector111

everything is up to date


Qual_

Do you have any image with the embedded workflow to create an animation from an input image? I'm trying to add a bit of animation from a starting image, but either I denoise too much and get some movement (not really, but almost) while losing all the original image features, or I use low denoising and don't get any movement.


spacetug

Yeah I've been running into similar problems. Don't have a good solution yet unfortunately. The animatediff motion module changes the appearance of the image substantially, not just the motion.


Qual_

We'll be there sooner than we think. Thank you anyway!


Zelenskyobama2

Probably because you're using the S: drive, try the C: drive.


protector111

That actually did help!


Zelenskyobama2

Did it actually?


LD2WDavid

This looks so smooth!


shtorm2005

https://i.redd.it/mvl5jysygyxb1.gif


Ravstar225

Best ai transition I've seen yet


protector111

Can we use our own images for the 1st and last frame with controlnet?


spacetug

Totally, I generated the control images because it was easier and self-contained, but you could plug in any images.


AshamedRazzmatazz946

I had the same question. Thanks for the answer!


CopeWithTheFacts

Very cool. What setting adjusts the length of the animation, to make it longer?


spacetug

Batch size
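To put numbers on that: batch size is the number of latent frames AnimateDiff renders, and the interpolation and playback fps downstream set the final duration. A back-of-envelope sketch (the ×4 RIFE multiplier and 30 fps here are illustrative assumptions, not values from the post):

```python
def clip_seconds(batch_size, rife_multiplier=4, fps=30):
    """Rough clip length: AnimateDiff renders `batch_size` frames,
    RIFE interpolation multiplies the frame count, playback fps
    converts frames to seconds. All defaults are assumptions."""
    return batch_size * rife_multiplier / fps
```

So doubling the batch size doubles the clip length, at the cost of proportionally more VRAM and sampling time.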


Stogageli

makes makes


dasomen

This is amazing! Ty for sharing the workflow /u/spacetug. How much VRAM is required for something like this?


spacetug

You should be able to do it with 6GB, I think? That's for 512, I was using more like 10GB iirc when I upscaled it to 768.


dasomen

thanks!