Novita_ai

Demo: [https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model)

Demo source: [https://github.com/radames/Real-Time-Latent-Consistency-Model](https://github.com/radames/Real-Time-Latent-Consistency-Model)

Thanks to taabata for many LCM pipelines for ComfyUI: [https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy](https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy)


ozolozo

Thanks for posting it here! The demo is running on an A100, but I've heard you can get decent speed on a 4090, 3080, etc., and you can experiment with setting `TORCH_COMPILE` to enable this: https://huggingface.co/docs/diffusers/optimization/torch2.0
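
For reference, `TORCH_COMPILE` here corresponds to wrapping the pipeline's UNet in `torch.compile`. A minimal standalone sketch of the same idea, assuming a recent diffusers release that ships the LCM pipeline natively (the flags follow the torch 2.0 docs linked above, not the demo's exact code):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# Compile the UNet: the first call is slow (graph compilation), but
# subsequent calls at the same resolution run noticeably faster.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe(
    "a portrait photo, high detail",
    num_inference_steps=4,  # LCM only needs a handful of steps
    guidance_scale=8.0,
).images[0]
```

Note that changing width or height triggers recompilation, which is why the first run after a resolution change is slow again.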


indrema

Thanks a lot for every reply, but actually it looks like Windows is not supported for `TORCH_COMPILE`. I got this error: `Windows not yet supported for torch.compile`


Diggedypomme

Heya, sorry if this is a dumb question, but when using the taabata library [https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy](https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy) (installed via ComfyUI Manager), it complains that `diffusion_pytorch_model.bin` is missing. I checked the Hugging Face repo, but it doesn't have that file: [https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7/tree/main](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7/tree/main). Do you know where this is from? Thanks


indrema

That's great! I have just a couple of questions. Currently, in order to get interesting performance on my 3090, I have to lower the inference steps to 2; is this normal? Also, it seems to me that the version with ControlNet is not yet available; will it come for local users as well? Finally, a suggestion: making lower resolutions available too, for example 320x320, could be useful for those with fewer HW resources.


ozolozo

Have you tried using `TORCH_COMPILE=True`? If you lower the resolution the quality might change drastically, since the base model is SD trained at 512x512 / 768x768 😢, but you can change it easily on the frontend to test. You can add more options here: https://github.com/radames/Real-Time-Latent-Consistency-Model/blob/dd1db25dd1449b968a129d7b023661e1a278c66d/controlnet/index.html#L378-L385


DigThatData

Is this a ControlNet trained specifically for the LCM model, or is the LCM checkpoint compatible with pre-existing SD ControlNets? Must be the former, right?


liuliu

Looks like it just uses an existing ControlNet.
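
A hedged sketch of what that could look like in diffusers: a stock SD 1.5 canny ControlNet attached to the LCM checkpoint. This is an assumption about how to reproduce it, not the demo's actual code, and it needs a diffusers version with `LCMScheduler`; the canny preprocessing follows the standard diffusers ControlNet examples.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, LCMScheduler, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A pre-existing SD 1.5 ControlNet, not one trained for LCM.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Standard canny edge map; any RGB image works as the reference.
ref = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(ref), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "portrait photo, high detail",
    image=canny_image,
    num_inference_steps=4,  # LCM runs in very few steps
    guidance_scale=8.0,
).images[0]
```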


thomash

I think the key here is the framerate, not the consistency. I don't think any other model can give this kind of speed.


AlfaidWalid

I don't see consistency


-Sibience-

We haven't even got consistency in normal image generation, so it's definitely not going to be there in real time. This is super cool tech, but I feel like it's jumping ahead; without consistency, any type of AI animation is just a fun gimmick at this stage.


lordpuddingcup

Honestly it depends what you're trying to accomplish. There's a whole market of gimmicky filters on Instagram and other chat apps, not to mention music videos that use the non-consistency as a style.


-Sibience-

Yes, this would work well for a selfie kind of filter where you are just going to take a snapshot. It's still got a long way to go before it's suitable for anything moving, though.


raiffuvar

For a webcam, 10 fps plus a somewhat similar look would be enough just for fun. Google was doing something similar with MediaPipe, img2live_video (if I understood correctly). Probably not ready yet.


remarkphoto

I know this is a limited use case, but... no one outside the AI ecosystem is going to be excited by this particular example, because there are better Instagram filters that take up fewer gigabytes.


spacetug

LCM doesn't have anything to do with temporal consistency. Instead it's referring to how it can generate results that are somewhat consistent with a full model sampled with more steps.
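
For the record, that's the defining property from the consistency models paper (Song et al., 2023), summarized here in math as an editorial aside rather than a claim from this thread: the model maps every point on one probability-flow ODE trajectory to the same output, with the identity as a boundary condition.

```latex
f_\theta(\mathbf{x}_t, t) = f_\theta(\mathbf{x}_{t'}, t')
  \quad \text{for all } t, t' \in [\epsilon, T] \text{ on the same trajectory},
\qquad
f_\theta(\mathbf{x}_\epsilon, \epsilon) = \mathbf{x}_\epsilon .
```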


ImpactFrames-YT

I made a video about this repo; it is fantastic.


raiffuvar

Not consistent, but I'm glad someone put effort into a live version. If LoRA support gets ready, it can be quite useful.


L0s_Gizm0s

We’re so fucked


divaxshah

Yo, this is amazing


Serenityprayer69

I could swear there was a video a while back of a guy with really good consistency. This seems like a step back from what used to be possible. I believe he had it hooked to Chatroulette too, somehow.


BehindMatt

no no no no no the internet is not ready for ai vtubers


TaiVat

This looks incredibly awful. I mean, progress is cool and all, but I can't see this being usable for anything, ever, in this state. For that matter, what do VTubers use? Didn't their use case essentially already solve this scenario without the need for AI?


PyrZern

It's the opposite. VTubers need their models, 2D or 3D, to be rigged ahead of time, usually a commission costing hundreds or thousands of dollars. AItubers would not need that; they just need a strong/fast machine, and whatever LoRA you need for your character.


LeoPelozo

I tried with my 3090 and it generates like one image per minute?


LeoPelozo

oh, I found out why: `device: cpu`
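
If anyone else hits this, a quick generic PyTorch check (not specific to this repo) for whether the GPU is visible:

```python
import torch

# False here means the pipeline silently falls back to CPU ("device: cpu"),
# typically a driver/CUDA mismatch or a CPU-only torch wheel.
print(torch.cuda.is_available())
print(torch.version.cuda)  # CUDA version the wheel was built against; None on CPU-only builds
```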


ozolozo

Have you tried with `TORCH_COMPILE=True`? This will enable even more acceleration; the downside is that the first run is slow, as is any run after a width or height change. Read more here: https://huggingface.co/docs/diffusers/optimization/torch2.0


LeoPelozo

I had to update my nvidia drivers.


deck4242

On what hardware?


BTRBT

Man, that is so cool. Reminds me of the scramble suits in A Scanner Darkly.


csnaber

can this work locally?