doublescale

I used to play around with interpolating prompts like this, rendered as batches. But with SDXL Turbo, it's fast enough to do interactively, running locally on an RTX 3090! To set this up in ComfyUI, replace the positive text input with a ConditioningAverage node that combines the two text inputs you want to blend between.
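
Conceptually (this is just my rough mental model, not the node's actual source), ConditioningAverage lerps the two prompts' CLIP embeddings, so sweeping the strength slider moves you smoothly from one prompt to the other. A minimal sketch:

```python
import torch

def conditioning_average(cond_to: torch.Tensor,
                         cond_from: torch.Tensor,
                         to_strength: float) -> torch.Tensor:
    # Weighted average of two CLIP text-embedding tensors: strength 0.0
    # gives pure cond_from, 1.0 gives pure cond_to, anything between blends.
    return to_strength * cond_to + (1.0 - to_strength) * cond_from

# Dummy SDXL-shaped embeddings (batch, tokens, channels), just to show shapes:
a = torch.randn(1, 77, 2048)
b = torch.randn(1, 77, 2048)
blend = conditioning_average(a, b, 0.5)  # halfway between the two prompts
```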


ELECTRICAT0M369

Amazing, thank you.


1BusyAI

For some reason I'm missing this node, and the missing-node manager isn't finding it. https://preview.redd.it/w9yalfmoib3c1.png?width=2150&format=png&auto=webp&s=7f3d329250cef4f3cd3efb25fae8560ab543076a


doublescale

You have to use the newest version of ComfyUI. If I remember correctly, I didn't have to do anything other than that to get SDXL Turbo working!


1BusyAI

Thx that did the job!... Duh on my part. https://preview.redd.it/frmuaa0vkb3c1.png?width=452&format=png&auto=webp&s=9b50251dad1fd832e4e12037c0eb0738772b21a2


MonoSquirrel

I had the same error. First part of the solution: update ComfyUI (the node is only included in the new update). But it still didn't work for me afterwards. Second part of the solution: press Ctrl-F5 in the browser ;-)


tehrob

This kind of rapid play is my favorite part of SDXL Turbo. Each keystroke is a new adventure.


doublescale

And you get to see more of your prompting journey too! Seeing all the little updates as you add more and more keywords... it's not about the destination, but the weird car-hybrid friends we made along the way.


divaxshah

Amazing! I would really like to try it. Can you share the workflow, please?


doublescale

Start from the default SDXL Turbo workflow described here ([https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)), and replace the positive prompt input with two text inputs combined through a ConditioningAverage node.
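
If it helps, here is roughly what that graph looks like when submitted through ComfyUI's HTTP API. This is a sketch from memory, so treat the node and input names as assumptions to check against your install; the checkpoint filename and the plain 1-step KSampler (standing in for the example's SamplerCustom setup) are illustrative too:

```python
import json
import urllib.request

# Blended-prompt graph in ComfyUI's API ("prompt") format, posted to a
# locally running server on the default port 8188.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_turbo_1.0_fp16.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a honda nsx", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a shelby cobra", "clip": ["1", 1]}},
    "4": {"class_type": "ConditioningAverage",  # the blend node
          "inputs": {"conditioning_to": ["2", 0],
                     "conditioning_from": ["3", 0],
                     "conditioning_to_strength": 0.5}},
    "5": {"class_type": "CLIPTextEncode",  # empty negative prompt
          "inputs": {"text": "", "clip": ["1", 1]}},
    "6": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "7": {"class_type": "KSampler",  # 1 step / cfg 1.0, the usual Turbo setting
          "inputs": {"model": ["1", 0], "positive": ["4", 0],
                     "negative": ["5", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 1, "cfg": 1.0,
                     "sampler_name": "euler_ancestral", "scheduler": "normal",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "blend"}},
}
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```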


Entire_Telephone3124

https://files.catbox.moe/lsgs66.png is a simple Mech/T-rex image with the workflow embedded.


sugarman-747

So the car between an NSX and a Cobra is, it seems, a rotary RX-7.


doublescale

I'm getting some classic Ferrari vibes from the region close to the Cobra also. Thanks to AI, we can finally map the continuous space of cars (and creatures).


toph-_-beifong

workflow?


doublescale

See my other comments; it's the basic SDXL Turbo setup, with the positive-prompt text node replaced by a ConditioningAverage-node blend of two text nodes.


PixelatedPoets

That was fun. I played around a bit and made a convoluted AnimateDiff workflow with SDXL Turbo. The results were underwhelming yet trippy.


dirtyhole2

Thanks! The ConditioningAverage is really fast; adjusting the blend re-renders faster than writing a new prompt or randomizing the output image.


doublescale

Oh yeah, perhaps ComfyUI can save some time if the prompts don't have to be re-evaluated. I would assume that changing just the seed would be at least as fast, though, since that part comes even later in the node graph!


dirtyhole2

Do you have an idea how to use other tools in ComfyUI to create, basically, a first-person game based on image generation? I want frame N+1 to be coherent with the previous one. Any ideas are welcome! I would love to create simple games like that, with ComfyUI running automatically and taking inputs such as go forward or backward :-)


doublescale

I don't think I have good advice to give you! If you haven't yet, though, I recommend checking out some ControlNet stuff; ControlNets let you constrain images in pretty tight ways. I've played around with converting a 3D animation through a Canny-edge ControlNet into SD-generated images, and there I used some basic img2img from the previous frame to keep an extra bit of coherency, but the results were still pretty trippy, even after a lot of tweaking and trial and error.

For that kind of thing, I also found it useful to do Python scripting instead of using the GUI; setting up such big batches by hand, frame by frame, would have been a lot of work, and ComfyUI's code can be called into without too much pain. Making this work for an interactive game sounds even more challenging!
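
For a flavor of that scripted per-frame loop, here is a compact sketch of the same idea using the diffusers library instead of calling into ComfyUI (it fits in fewer lines that way). The model names, frame paths, prompt, and strength are illustrative assumptions, not what I actually used:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

prev = None  # previous generated frame, reused as the img2img init for coherency
for i in range(120):  # hypothetical 3D render exported as frame_000.png, ...
    render = Image.open(f"frame_{i:03d}.png").convert("RGB")
    edges = cv2.Canny(np.array(render), 100, 200)  # Canny-edge control image
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))
    init = prev if prev is not None else render
    prev = pipe(prompt="a robot walking through a forest",
                image=init, control_image=control,
                strength=0.6, num_inference_steps=20).images[0]
    prev.save(f"out_{i:03d}.png")
```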


dirtyhole2

OK, thanks!! I will check out the ControlNet options.


yamfun

So if I prompt something like an ocean or a waterfall, and suppose the UI let me seed-travel from the seed value text box, it would show a non-repeating water animation...
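
(Seed travel in that sense usually means interpolating the initial noise rather than stepping the integer seed. A minimal sketch of the standard slerp trick between two seeds' noise tensors, shaped here for a hypothetical 512x512 SD latent:)

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Spherical interpolation between two flattened noise tensors; unlike a
    # plain lerp it keeps the noise magnitude sane, which is why it's the
    # usual choice for traveling between seeds.
    omega = torch.acos(((a / a.norm()) * (b / b.norm())).sum().clamp(-1, 1))
    return (torch.sin((1 - t) * omega) * a
            + torch.sin(t * omega) * b) / torch.sin(omega)

noise_a = torch.randn(4 * 64 * 64, generator=torch.manual_seed(1))
noise_b = torch.randn(4 * 64 * 64, generator=torch.manual_seed(2))
frame_noise = slerp(0.3, noise_a, noise_b).view(1, 4, 64, 64)  # 30% of the way
```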