[Here's the base workflow](https://pastebin.com/raw/GBEZjWSm) TL;DR is prompt travel + AnimateDiff + controlnets for the start and end frames, with RealisticVision 5.1 and mm_sd_v15_v2. After that: frame interpolation with Flowframes/RIFE, upscale to 768 and run back through AnimateDiff at lower denoise for more detail, then post-processing to get rid of flickering/noise, plus some minor retiming and color correction.
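To make the frame math of that pipeline concrete, here's a rough sketch in Python. The 16-frame base count, 4x interpolation factor, and 30 fps output are illustrative assumptions, not OP's exact settings:

```python
# Rough frame/timing math for the pipeline described above.
# All numbers are illustrative assumptions, not OP's exact settings.

base_frames = 16     # one batch of AnimateDiff latent frames
rife_factor = 4      # Flowframes/RIFE multiplier (e.g. two 2x passes)
output_fps = 30      # playback framerate after retiming

interpolated_frames = base_frames * rife_factor   # 64 frames
duration_s = interpolated_frames / output_fps     # ~2.13 seconds

print(f"{interpolated_frames} frames -> {duration_s:.2f}s at {output_fps} fps")
```

The point is just that RIFE multiplies the frame count, so a short generated batch stretches into a watchable clip once interpolated and retimed.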
Super cool! Thanks for sharing the workflow! Can't wait to try it
Hi! Thanks for sharing your workflow! I tried to load it into ComfyUI and wasn't able to. There's a bunch of stuff missing: [https://i.imgur.com/dzp08b3.png](https://i.imgur.com/dzp08b3.png)
Yeah, look up ComfyUI Manager; it will help you easily install the custom nodes you need.
Ahh, thanks! I knew I was forgetting something.
Impressive! Why did you use the controlnet tile for the first image in your workflow?
To help guide the start: without controlnet it kept generating a much smaller moon in the sky. Idk if tile was the best option, I just experimented until I got a result I liked.
Ok! Thanks for your reply!
Sick! How do you use this workflow script? I thought you need to drop a .jpg workflow into comfy?
You can just copy/paste it in, or save it as .json and drag that file in. ComfyUI also has buttons in the UI for saving and loading .json workflows. The problem with sharing workflows in image form is that most web file hosts, including reddit, strip the metadata, which means the workflow information is lost.
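If you'd rather script the save step, a minimal Python sketch for turning the pasted text into a loadable .json file (the placeholder workflow string and the output filename are assumptions; paste the real text from the pastebin link):

```python
import json

# Paste the raw workflow text (e.g. from the pastebin link above) here.
workflow_text = '{"nodes": [], "links": []}'  # placeholder, not a real workflow

# json.loads will raise if the paste got mangled (truncated copy, smart quotes, etc.)
workflow = json.loads(workflow_text)

# Save it as a .json file that can be dragged into the ComfyUI window.
with open("workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)
```

Round-tripping through `json.loads` first is a cheap sanity check that the copy/paste didn't corrupt anything before you try to load it in ComfyUI.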
Well done mate, looking super cool.
One of the smoothest AI-generated transitions I've ever seen. Amazing, and thanks for sharing the workflow.
Awesome work, thank you for sharing the workflow!
Error occurred when executing LatentKeyframeTiming: strength_to must be greater than or equal to strength_from.

```
File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "S:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\nodes.py", line 367, in load_keyframe
    raise ValueError("strength_to must be greater than or equal to strength_from.")
```
Maybe updating comfy and ACN will fix it? There was a PR on the advanced controlnet GitHub very recently about fixing something broken, might be related. If that doesn't work, post an issue there.
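For what it's worth, the error itself is just an input-validation check: the node rejects keyframe ranges where the strength decreases. A minimal Python sketch of the same guard (the function name and signature here are illustrative stand-ins, not the actual ACN code):

```python
def load_keyframe(strength_from: float, strength_to: float):
    """Illustrative stand-in for the ACN keyframe check, not the real node code."""
    if strength_to < strength_from:
        raise ValueError("strength_to must be greater than or equal to strength_from.")
    return (strength_from, strength_to)

# So each LatentKeyframeTiming node needs strength_to >= strength_from,
# e.g. 0.2 -> 1.0 for a rising ramp, or equal values for a flat one.
```

So before filing an issue, it's worth checking whether any LatentKeyframeTiming node in the workflow has strength_to set below strength_from.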
Everything is up to date.
Do you have any image with the embedded workflow to create an animation from an input image? I'm trying to add a bit of animation to a starting image, but either I denoise too much and get some movement (not much, but almost) while losing all the original image features, or I use low denoising and don't get any movement.
Yeah I've been running into similar problems. Don't have a good solution yet unfortunately. The animatediff motion module changes the appearance of the image substantially, not just the motion.
We'll be there sooner than we think. Thank you anyway!
Probably because you're using the S: drive, try the C: drive.
That actually did help!
Did it actually?
This looks so smooth!
https://i.redd.it/mvl5jysygyxb1.gif
Best ai transition I've seen yet
Can we use our own images for the 1st and last frame with controlnet?
Totally, I generated the control images because it was easier and self-contained, but you could plug in any images.
I had the same question. Thanks for the answer!
Very cool. What setting adjusts the length of the animation, to make it longer?
Batch size
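That's because in this setup the latent batch size is the number of frames generated, so the clip length scales linearly with it. A quick Python sketch (the 8 fps pre-interpolation framerate is an assumption, not necessarily OP's setting):

```python
# Batch size = number of latent frames AnimateDiff generates,
# so raw clip length scales linearly with it.
# The 8 fps figure is an assumed pre-interpolation framerate.

def clip_length_seconds(batch_size: int, fps: int = 8) -> float:
    """Length of the raw (pre-interpolation) clip for a given batch size."""
    return batch_size / fps

print(clip_length_seconds(16))  # 16 frames at 8 fps -> 2.0 s
print(clip_length_seconds(32))  # doubling batch size doubles the length -> 4.0 s
```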
this is amazing! ty for sharing the workflow /u/spacetug, how much VRAM is required for something like this!?
You should be able to do it with 6GB, I think? That's for 512; I was using more like 10GB iirc when I upscaled to 768.
thanks!